#ubuntu-server 2006-08-31
* Starting logfile irclogs/ubuntu-server.log
<tkup> any reason why proftpd isn't in the package repository?
<iceman> hello everyone!
<iceman> I've got a problem trying to configure a cups printer via the web interface on an ubuntu server
<iceman> can anyone help me please?
<iceman> oups my bad, hadn't read the topic yet
#ubuntu-server 2006-09-01
<drew32> hey all, anyone got time to help me with a problem?
<sten> hi.  Is IMAP4 + TLS, with plaintext passwords disabled (default, I think) and a 256bit SSL key (default, I think) authentication more, or less vulnerable than a kerberos5 authenticated one.  Essentially, I'm asking if it's worth the hassle to set up kerberos just for Dovecot.  I have very few users, and all have local accounts.  My server is also running SSH, so I'd like to secure IMAP logins.
<JulienH> Hi all
<JulienH> i'm using dapper, and I'm surprised by the fact that apache2 package wasn't compiled with the LFS support
<JulienH> I got this error in my log : "(75)Value too large for defined data type" when I try to access a file >2GB
<Madeye-2> Hello
<rene_> when running quotatool: "Error while detecting kernel quota version: No such process". How do I fix this?
<lionelp> rene_: /etc/init.d/quota start
#ubuntu-server 2006-09-02
<dmg_> Hello, everybody
<dmg_> I was looking at the small business server page and the related threads, but I didn't find any precise specification or information about where that group is really going. If anyone involved in that is here, can we have a talk?
<TTT_Travis> Hi, I am running ubuntu server and for some reason it didn't detect my network card during the install, how do I add a new one through the command line?
<dmg_> ouch, which nic type do you use ?
<TTT_Travis> Linksys LNE100TX
<dmg_> humm should work fine, is that your second card and the first one is of the same type?
<TTT_Travis> it worked under ubuntu on a different box
<TTT_Travis> and this one before
<TTT_Travis> dmg_ nope
<TTT_Travis> just 1 card
<TTT_Travis> I run ifconfig and it only shows loopback
<TTT_Travis> do you know how to add a card manually
<dmg_> humm usually you just have to load the module and that works fine
<TTT_Travis> ok I am going to experiment
<dmg_> try modprobe tulip.o
<dmg_> (or maybe lsmod | grep tulip to see if it's already loaded)
<TTT_Travis> tis loaded
<TTT_Travis> but no network card
<TTT_Travis> it shows up in /etc/network/interfaces
<dmg_> oh
<dmg_> ifconfig -a 
<dmg_> might just be down :p
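Once the right module is loaded, the interface can be made persistent in the /etc/network/interfaces file mentioned above. A minimal sketch, assuming the card comes up as eth0 and the network uses DHCP:

```
# /etc/network/interfaces — loopback plus one DHCP-configured NIC
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
```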
<TTT_Travis> didn't help
<TTT_Travis> I got an idea though
<TTT_Travis> yay I think it worked
<TTT_Travis> I took out my TV tuner card
<TTT_Travis> guess it was causing a conflict
<dmg_> strange
<dmg_> might help to just switch port between them
<TTT_Travis> still doesn't work
<TTT_Travis> I was wrong
<dmg_> are you sure the card is ok ?
<dmg_> (and the pci slot also ?)
<TTT_Travis> probably not
<TTT_Travis> could be the problem
<TTT_Travis> but the card worked on windows
<TTT_Travis> just before I reformatted
<TTT_Travis> so I don't think it's a system problem
<TTT_Travis> I have had problems before
<TTT_Travis> but it just magically started working
<TTT_Travis> but the lights on the nic aren't lighting like they're supposed to
<dmg_> do you mean that the light are off ?
<TTT_Travis> ok so normally when you plug the cable in the main light is on and the other ones blink, so I unplug the cable and it still the same exact way, so I try booting with it unplugged and the light still lights up
<TTT_Travis> and it's not supposed to
<TTT_Travis> it worked fine on windows
<TTT_Travis> maybe I should just get another nic
<TTT_Travis> or swap it with a different computer
<TTT_Travis> I'll try that
<dmg_> yep that's definitely weird
<dmg_> omg, the ubuntu wiki organisational scheme is definitely mayhem
<TTT_Travis> hmm still nothing
<TTT_Travis> I switched the card
<TTT_Travis> with one out of my other ubuntu server box
<dmg_> still the same nic model ?
<dmg_> (try using another pci slot then, the slot might be damaged)
<TTT_Travis> different nic
<TTT_Travis> but I think ubuntu might be not detecting it
<TTT_Travis> the new card
<TTT_Travis> because if I try ifconfig eth0 it says device not found
<dmg_> humm
<TTT_Travis> don't know what driver it uses
<dmg_> what is the new nic ?
<TTT_Travis> that common realtek chipset
<TTT_Travis> super generic
<TTT_Travis> it's in a different pci slot than the other one too
<dmg_> 8139too
<dmg_> the driver should be that one
<TTT_Travis> ok
<TTT_Travis> ok modprobe 8139too then what?
<dmg_> then ifconfig
<dmg_> (ifconfig -a)
<TTT_Travis> still nothing
<dmg_> huh... then I'm out of answers
<TTT_Travis> hmm
<duderock> hi there!
<duderock> I'm having a problem with ubuntu server on an AMD X2 socket AM2 - Gigabyte MOBO... Here's a log from boot to lockup, if anyone has any ideas..... http://paste.ubuntu-nl.org/22300
#ubuntu-server 2006-09-03
<stian> Hey, postfix on my server seems to be collecting email for some reason. I suspect it wants to use them later on in a drought or something, but I really need them sent right away
#ubuntu-server 2007-08-27
<ph1zzle> DustWolf: fact of the matter is, that's not a standard thing to happen in ubuntu, nor does it have to do with the route cache. there is software that is autobanning them; it could be something in a cron script that the other "root"
<ph1zzle> guy you mentioned added
<ph1zzle> or it can be one of dozens of apps that do exactly what fail2ban does
<ph1zzle> do a DustWolf =>
<ph1zzle> ps -A -o command | sort -u
<ph1zzle> that command will show you all processes running
<ph1zzle> sudo crontab -l
<ph1zzle> that command will list the root crontab
<ph1zzle> and if that doesn't work, maybe it's not every ten minutes but hourly, so check /etc/cron.hourly
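ph1zzle's checks can be run as one short sequence. This is a sketch consolidating the commands above (the /etc/cron.d path is my addition; whether the culprit is actually fail2ban is only a guess in the discussion):

```shell
# look for an auto-banning job in the three places suggested above
ps -A -o command | sort -u                           # unique list of every running command
sudo crontab -l 2>/dev/null || true                  # root's personal crontab, if one exists
ls /etc/cron.hourly /etc/cron.d 2>/dev/null || true  # system-wide periodic jobs
```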
<rinman> I have just installed SSL for apache as by the ubuntu tutorial, but now when I try wgetting the page i get "ERROR: certificate common name `s.p10.hostingprod.com' doesn't match requested host name `my.host.com'". What didn't I do or did wrong?
<rinman> I created my own certificate btw
<ScottK> If you didn't use your hostname as the common name when you created it, that's what happens.
<rinman> mkay, will try that, thx!
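ScottK's point can be checked when regenerating the certificate: the Common Name must be the hostname clients will request. A sketch, with my.host.com standing in for the real hostname and the key size and validity period chosen arbitrarily:

```shell
# generate a self-signed cert whose CN matches the requested hostname
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=my.host.com" \
    -keyout server.key -out server.crt
# confirm the common name before installing it for apache
openssl x509 -in server.crt -noout -subject
```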
<halcyonCorsair> hi, i'm trying to install ubuntu 6.06 server onto an HP DL140 G3 with an LSI SAS RAID controller, but the 2.6.15 kernel doesn't seem to support SAS, and the partitioner can't find any drives, is there any way I can somehow install 6.06 but use a later kernel or something?
<ScottK> Not and have it be supported.
<ScottK> Ubuntu is getting close to releasing a 6.06.2 and newer hardware support is the primary reason.
<ScottK> I'd ask here during the week if your hardware will be supported.  The people that would know are rarely here during the weekend.
<halcyonCorsair> geh, it's monday here... :(
<halcyonCorsair> is there a solid timeline on 6.06.2?
<halcyonCorsair> either way, i need this reasonably quickly, so support or no, is it possible?
<Kamping_Kaiser> hm. .2 took its time
<ScottK> halcyonCorsair: Why 6.06?  If you are going to do something that doesn't get LTS supported, why not try Feisty?
<ScottK> Most of the relevant people work in Europe, so standard working hours there.
<halcyonCorsair> ScottK: 6.06, because its currently our preferred platform for all our servers, and we want it to be as standardised as possible
<ScottK> Ah.
<ScottK> If I were in your position, I'd try Feisty and if it works, ask if your hardware will be supported in 6.06.2.
<rinman> scottk: if i wanted to use the same crt for a bunch of subdomains, would it be possible to set it to just domain.com rather than svn.subdomain.domain.com and still get it to work?
<ajmitch> the kernel team did have something planned for driver backports, I'm not sure of its status though
<ScottK> rinman: I think, but am not sure, that you need to roll one cert per domain.  I've never had to do it, so that may be wrong.
<halcyonCorsair> i did see a mention of a 2.6.15-50 kernel...is there any way I can install using that? (i don't know what the process would be to install to a kernel other than the one on the cd)
<ajmitch> you can have an ssl certificate for *.domain.com
<ScottK> rinman: What ajmitch said.
<rinman> oki, so should i write .domain.com, *.domain.com or just domain.com as the common name?
* ScottK shuts up and lets ajmitch answer since he knows what he's talking about.
<rinman> =)
<ScottK> halcyonCorsair: I don't either.
<ajmitch> I believe domain.com vs *.domain.com are different
<halcyonCorsair> hmm
* ScottK agrees with ajmitch since that seems to be working so far.
<ScottK> halcyonCorsair: The extremely painful way to do it would be to install a later working system (e.g. Edgy or Feisty), install the older kernel and then downgrade everything.  Not sure if it's actually possible.
<rinman> ajmitch: sorry, but was that an answer? =)
<ajmitch> rinman: in a sense
<ScottK> I think it was.
<ajmitch> latest 2.6.15 kernel I see is 2.6.15-28.57
<ScottK> rinman: Based on ajmitch's sense of an answer, I'd try *.domain.com and see how it works.
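For what it's worth, the distinction ajmitch is pointing at can be sketched with openssl (key size and validity are arbitrary): a CN of *.domain.com conventionally matches exactly one extra label, so it would cover svn.domain.com but not bare domain.com or a.b.domain.com.

```shell
# self-signed wildcard certificate; *.domain.com is a placeholder
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=*.domain.com" -keyout wild.key -out wild.crt
openssl x509 -in wild.crt -noout -subject
```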
<ajmitch> or not, that's security
<ajmitch> 2.6.15-50.61 is in -proposed
<halcyonCorsair> its a shame i couldn't use a usbstick to load a new kernel while running
<ajmitch> kexec would be useful there
<ajmitch> 2.6.15-50.61 doesn't appear to have anything helpful in its changelog
<halcyonCorsair> i might have to go as high as 2.6.20
<ScottK> That's the Feisty kernel, so at least it's available in a release and not just in the developmental version.
<halcyonCorsair> hmm
<halcyonCorsair> ajmitch: i think its the fusion mpt sas driver
<halcyonCorsair> ok, so i'm gonna try installing feisty for the moment
<halcyonCorsair> is there any particular advantage to a separate /boot partition when using xfs for the main partition?
<ajmitch> https://bugs.launchpad.net/ubuntu/+source/linux-source-2.6.15/+bug/37452
<ubotu> Launchpad bug 37452 in linux-source-2.6.15 "fusion mpt sas driver does not find a RAID1 disk during installation(Sun Galaxy X4200 and X4100, Dell SASR5/i)" [High,In progress] 
<ajmitch> probably not, as grub should read xfs without any issues
<halcyonCorsair> ajmitch: yeah i found that before, just working my way through the links in the comments
<ajmitch> some tedious workarounds in there
<halcyonCorsair> is there any advantage to a separate boot partition with the main partition being xfs on lvm?
<ajmitch> yes, grub can't handle lvm
<halcyonCorsair> good to know
<rinman> yay, it worked! thx a lot!
<halcyonCorsair> ok....so like...how different from 6.06 is 7.04?
<ScottK> halcyonCorsair: This is for server, right?
<halcyonCorsair> ScottK: yeah
<ScottK> The biggest thing you will immediately notice is that mounts are done by UUID instead of device name.
<ScottK> Upstart is used instead of the traditional Sys V init system, but it has a compatibility layer, so you may not even notice that.
<halcyonCorsair> ScottK: will be switching from mawk to gawk, using bind9 for dns, squid, dhcp3-server, and our own custom software
<halcyonCorsair> hmm, upstart.... >:( i hope so....i don't want to rewrite our init scripts again....
<ScottK> Oh, one other big thing is the default shell switches from Bash to Dash, so no mor Bashisms.
<ScottK> You shouldn't have to.
<ScottK> mor/more
<Kamping_Kaiser> mounts by uuid is icky :(
<halcyonCorsair> ScottK: i think i need the crazy-mad bash stuff
<ajmitch> your initscripts are full of bashisms?
<ScottK> /bin/sh is a symlink, so you can change it.
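A small illustration of what breaks: bash's substring replacement is not POSIX, so scripts using it fail once /bin/sh points at dash (the variable name here is arbitrary):

```shell
# a bashism: ${var/pattern/replacement} substring replacement
bash -c 'v=hello; echo "${v/l/L}"'   # prints heLlo under bash
dash -c 'v=hello; echo "${v/l/L}"' || echo "rejected by dash"
```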
<ScottK> Kamping_Kaiser: Why?  It's the only thing that actually works reliably with multiple SATA/RAID drives.
<halcyonCorsair> ajmitch: well....sec
<Kamping_Kaiser> ScottK, because it's unreadable (by me). /dev/sda I can read; a [15 character string] I can't (or write)
<ScottK> Agreed, but unless you want random failures on reboot, for systems with multiple drives, it's the only way to fly.
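A sketch of what a UUID-based mount looks like in /etc/fstab (the UUID value below is invented for illustration; `sudo blkid` prints the real ones):

```
# /etc/fstab — root filesystem named by UUID instead of /dev/sdX
UUID=0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9  /  ext3  defaults,errors=remount-ro  0  1
```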
<Kamping_Kaiser> yea. quite probably :(
<ajmitch> Kamping_Kaiser: so use LVM on top & don't worry about it
<Kamping_Kaiser> ajmitch, potentially the first thing i've heard that makes me like the sound of lvm, but i don't like the 'one drive dies, you lose all' bit that i've heard of
<ajmitch> so use LVM on RAID :)
<halcyonCorsair> what are the chances of this working in dash?: http://rafb.net/p/R9j5P643.html
<ajmitch> or a separate volume group per drive, if you're that way inclined
<ajmitch> my eyes...
<halcyonCorsair> thats what happens when you get help from #bash....i know what it does, and mostly how it does it, but.....its a monster nonetheless
<Kamping_Kaiser> i usually do pretty well out of #bash
<halcyonCorsair> heh :)
<halcyonCorsair> greycat forever!
<Kamping_Kaiser> hehe
<kraut> moin
<Kamping_Kaiser> hi
<coNP> guten morgen, kraut
<kraut> morning coNP
<Kamping_Kaiser> how many levels deep can a sub domain go?
<soren> Kamping_Kaiser: Many. Why?
<Kamping_Kaiser> soren, i was thinking of writing a bind config file, but found a script my mate wrote to automate the job (to a degree). took a look, and it only allows 3 levels from root sub.domain.ext.cc
<Kamping_Kaiser> so i was wondering if you could only do 3, or if that was a limit of the script
<soren> Kamping_Kaiser: Oh, you can do plenty more than three.
<soren> Kamping_Kaiser: Hang on, I'll look it up.
<Kamping_Kaiser> oh, cool. thanks.
<maswan> you can have max 256 chars in the whole name, and any number of dots
<soren> Kamping_Kaiser: the maximum number of characters in an fqdn is 255. Each element in it can be max 63 characters.
<maswan> well, 255, close enough. :)
<soren> maswan: 256 with the final ., yes.
<Kamping_Kaiser> hehe. thanks both.
<Kamping_Kaiser> so up to 128 parts *grin*
<maswan> just look at the reverses for ipv6 addresses...
<maswan> (not _quite_ the same, but the same mechanics pretty much)
<maswan> 7.3.1.0.0.0.0.0.0.0.0.0.0.0.0.0.8.1.0.2.e.0.0.0.0.b.6.0.1.0.0.2.ip6.arpa domain name pointer tutankhamon.acc.umu.se.
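The limits soren and maswan quote (63 characters per label, 255 for the full name) can be checked with a small shell function. This is my sketch, not anything from bind:

```shell
# check each label is <= 63 chars and the whole name <= 255
check_name() {
    name=$1
    [ "${#name}" -le 255 ] || { echo "name too long"; return 1; }
    old_ifs=$IFS; IFS=.
    for label in $name; do
        [ "${#label}" -le 63 ] || { IFS=$old_ifs; echo "label too long: $label"; return 1; }
    done
    IFS=$old_ifs
    echo ok
}
check_name sub.domain.ext.cc   # prints ok
```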
<spiekey> hi
<spiekey> i have installed ubuntu LTS on a DQ965GF intel board.
<spiekey> After a reboot i only get: GRUB _
<spiekey> it seems to work with SuSE ;)
<ScottK> spiekey: It should work on the current Ubuntu release (Feisty) too.
<spiekey> same problem with Feisty
<ScottK> Hmmm
<ScottK> Do you have any IDE devices?
<spiekey> during the installation?
<ScottK> Yes.
<ScottK> Different 965 variants have one of two different IDE controllers (non-Intel) on them.
<ScottK> One has been problematic on kernel support and the other hasn't.
<ScottK> All of the 965 install problems I've seen relate to that.
<ScottK> On pure SATA systems that don't use the IDE controller, it generally installs fine.
<spiekey> yes. i had. sda and sdb
<ScottK> How about the CD/DvD?
<spiekey> one moment, i will reinstall again.
<nandemonai> Sounds like the jmicron issue I came across, not that it will probably help but I had a similar issue on Asus P5B, updated the Bios and it's ok now.
<nandemonai> Actually while I'm here, has anyone got Ubuntu to install on an old Compaq Proliant server? So far all I've got to boot is Centos/RH. I'm thinking it's either a RAM or Raid issue. The installer just kaputs trying to load the kernel.
<spiekey> ScottK: DVD is sr0. Could that be correct?
<spiekey> the others are sda and sdb for sure
<ScottK> spiekey: I'm not sure at all.  I don't recall having seen that before.  I don't actually have a 965 (my newest is a 945).  My CD on that box is /dev/hda        /media/cdrom0   udf,iso9660 user,noauto     0       0
<spiekey> ScottK: if we send you a 965, will you fix it for us? :D
<soren> Did someone say free hardware?
<spiekey> ScottK did! :D
<ScottK> spiekey: soren would be a better bet than me.
<soren> ...to be on the receiving end of a free hardware transaction? Sure. Bring it :)
<spiekey> hehe, its not mine anyway :P
<ScottK> soren: Would you have a moment to review http://revu.tauware.de/details.py?upid=105.  It's a pretty simple Python C extension for the Sendmail Milter protocol.  I'm pretty sure it's good to upload, but need a MOTU ack since it's new.
<soren> ScottK: I believe cdbs automatically installs README without you having to list it in docs.
<ScottK> OK.
<ScottK> I'm trying that now.
<soren> At a glance, it looks ok. Gimme a few minutes.
<ScottK> Sure
<soren> Wow. It took a whopping 8 seconds to build :)
<coNP> Thanks soren, even in advance :)
<ScottK> Yeah.  There's not much to it.
<ScottK> soren: It should pick up libmilter1 as a dependency through shlibs:depends.
<ScottK> soren: If you can start Python and "import milter" without error it's working.
<soren> ScottK: Sure, got it.
<soren> Just one more thing I want to check. Hang on.
<ScottK> OK.  You were correct about README.  I have a revised upload prepared.
<soren> ScottK: The build-dep on python-all-dev should be "(>= 2.3.5-11)", I believe.
<soren> Also, shouldn't there be a XB-Python-Version: ${python:Versions} ?
<ScottK> soren: debian/pyversions takes care of both of those I believe.
<soren> ScottK: No, not exactly.
<ScottK> No?
<soren> ScottK: The former is needed for pyversions (the script) to do the right thing (or something like that).
<soren> ScottK: The latter is needed, and in the absence of debian/pyversions, it will be used in its place.
<soren> ScottK: Did that make sense?
<ScottK> Yes.
<soren> "Note: if there's no debian/pyversions, dh_pysupport will try to use the XS-Python-Version field to find out the list of supported versions."
<soren> http://wiki.debian.org/DebianPython/NewPolicy
<ScottK> I'll add the python-all-dev versioned dependency.
<ScottK> Right.
<ScottK> So python-all-dev (>= 2.3.5-11) is what I need to add.
<soren> Er... Oh, I'm messing things up here.
<soren> XB-Python-Versions *is* needed, but the quote above is unrelated to that, actually.
<ScottK> So what's the point of debian/pyversions then?
<ScottK> I had a delivery arrive just now.  I'll be back in a couple of minutes.
<soren> debian/pyversions specifies which versions of python the package is compatible with.
<soren> python-{support,central} extracts this info and creates a substitution variable that can be put in debian/control, namely ${python:Versions}. That's how I understand it.
<soren> ..so add the XB-Python-Versions, and I'm happy.
<ScottK> OK.  Will do.
<ScottK> soren: I got called away by a bit of a family crisis.  I'm back and I'll get it straightened out and re-uploaded.
<soren> ScottK: Cool.
* soren takes off for today
<ScottK> soren: Any chance you can hang around long enough for me to dput this so you can advocate it....
<ScottK> soren: Tested and uploaded.
<soren> ScottK: Yeah, this feels like one of those days, where "taking off" means leaving my basement office and going back to work in the living room with my laptop. :)
<ScottK> Ah.  OK.
<ScottK> I understand how that is.
<ScottK> soren: http://revu.tauware.de/details.py?upid=110
<soren> ScottK: You forgot the XB-Python-Versions (for the binary packages). It's a bit silly as changing the XS-Python-version to XSB-Python-Version really ought to do it, but the wiki page says to add it.
<ScottK> Ah. Frick.  OK.
<soren> Ah, no.
<soren> It's not the same.
<soren> You set XS-Python-Version manually, but XB-Python-Versions should be set to ${python:Versions}
* soren wanders out of the office..
<soren> ...and into the livingroom.
<soren> ScottK: Just ping me whenever.
<ScottK> soren: How about http://revu.tauware.de/details.py?upid=112
<soren> ScottK: Commented.
<soren> ScottK: by the way: http://www.debian.org/doc/packaging-manuals/python-policy/ch-module_packages.html#s-specifying_versions
<ScottK> soren: Thanks.  Urgh.  This is what I get for trying to package and pack for a business trip at the same time.  Thanks for your patience.
<soren> ScottK: Oh, don't mention it.
<ScottK> soren: How about http://revu.tauware.de/details.py?upid=114
<soren> Advocated
<ScottK> Thanks.
<ScottK> soren: Uploaded.  Thanks again.  See you in a few days...
<soren> Sure thing. Have a nice trip.
<keescook> soren: would you happen to have a chance to do the vim merge?  I'd like to get current debian into gutsy -- it has a minor security fix in it.
<soren> keescook: Will tomorrow be ok or is it urgent in any way?
<keescook> soren: not urgent at all, just wanted to ping you about it
<soren> keescook: Sure, no problem.
<keescook> cool, thanks.
<mwh_> Hi, just installed ubuntu-server
<mwh_> during the install, I couldn't setup the network ... the network works manually .. but how can I configure it now permanently in an easy way?
<mwh_> is there a console ncurses tool I can use to enter the ssid and key for my wifi net?
<infinity> man 5 interfaces
<mwh_> Or do I have to update /etc/network/interfaces manually?
<ubuntu-jeff> It sounds like you don't have DHCP setup on your networking equipment
<mwh_> it works well with dhcp
<mwh_> no problem
<mwh_> I just do an iwconfig ... then dhclient .. and it works
<mwh_> I would like to have that set in stone though ;)
<ubuntu-jeff> what release are you using?
<mwh_> the latest stable
<ubuntu-jeff> ok
<infinity> Do I want to know why you have a server that's connected with wireless?  Probably not, right? :)
<mwh_> ha ha
<infinity> Anyhow, you can configure your ESSID in /etc/network/interfaces.
<infinity> The manpage is very friendly about the whole thing.
<mwh_> yes .. I know
<mwh_> though I was still wondering why there isn't a handy tool for setting it up
<infinity> If your wireless NIC had been found at install, d-i would have let you configure it then too, but you may have skipped that bit if you had a wired network too.
<mwh_> while there is one during the installation process
<ubuntu-jeff> Yeah, setting up a server with a Wireless connection is (a) a security hazard and (b) performance will always suck.
<infinity> No need for a tool, really, the config file is easy enough to understand...
<mwh_> btw, how does ubuntu behave when the wifi connection goes away for a time and comes back .. will it automatically pick up where it left?
<infinity> We have tools for GUI users, but (puts on his elitist jerk hat), a sysadmin that can't figure out interface(5) won't get much further with his system, IMO.
<infinity> It should keep the association, in general.
<mwh_> infinity, sure its easy .. though it would be super easy just to run a curses program
<mwh_> okay
<infinity> Yes, but it would be super-pointless for me to WRITE a curses application, since the config file is easy.  See? :)
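For the record, the stanza infinity is describing might look like this (wlan0, the ESSID, and the WEP key are all placeholders; interfaces(5) documents the options):

```
# /etc/network/interfaces — wireless interface with ESSID and WEP key
auto wlan0
iface wlan0 inet dhcp
    wireless-essid myessid
    wireless-key  s:mysecretkey
```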
<infinity> If it's an itch someone cares to scratch, they may.
<ubuntu-jeff> Does anybody know if ifolder will be available in the Universe repositories for 7.10 Gutsy? I know of some people who claim to have gotten it working, but only on 6.06.
<mwh_> infinity, well .. its an old discussion :)
<ubuntu-jeff> Do any of you have a solid understanding of Samba3?
<mathiaz> ubuntu-jeff: you'd better ask your questions
<mathiaz> ubuntu-jeff: someone may have enough understanding of samba3 to help you
<ubuntu-jeff> ok
<ubuntu-jeff> well...
<ubuntu-jeff> If I setup a SAMBA share with Read (or Read Write Permissions) to /mnt/smb/data
<ubuntu-jeff> Which is its own partition (single hd, raid, not relevant for the question)
<ubuntu-jeff> Then I set the noexecute option for /mnt/smb/data
<ubuntu-jeff> in fstab
<ubuntu-jeff> will that prevent the Windows XP/2k clients
<ubuntu-jeff> from executing code (.exe's) stored on the server, available on the share?
<mralphabet> ubuntu-jeff: no
<ubuntu-jeff> Or will that only prevent Unix users, telnet / ssh'ed into the server from executing code in /mnt/smb/data?
<mralphabet> ubuntu-jeff: windows copies the file to the windows client and executes it out of /userdata/temp/
<ubuntu-jeff> damn
<ubuntu-jeff> so
<ubuntu-jeff> Even though my nice shiny Ubuntu server is immune to viruses itself, it could become the Typhoid Mary of the network, if enough users make use of an infected file?
<ubuntu-jeff> Even with noexecute set in fstab?
<mralphabet> if they do not have write access to the share, the clients can't write back to infect files on the server
<ubuntu-jeff> ok
<ubuntu-jeff> thanks
<mralphabet> this isn't a "ubuntu fails" type question (and I know you weren't going to say that) but this is a standard file serving issue
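To summarise the exchange: noexec only constrains execution on the server itself, while stopping clients from writing infected files back requires a read-only share. A sketch of both pieces (the device, paths, and share name are assumptions):

```
# /etc/fstab — noexec blocks local execution only; SMB clients copy files
# to themselves and run them there, so it does not protect Windows clients
/dev/sdb1  /mnt/smb/data  ext3  defaults,noexec  0  2

# /etc/samba/smb.conf — read-only share, so clients cannot write infections back
[data]
   path = /mnt/smb/data
   read only = yes
```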
* mralphabet sighs
<infinity> ubuntulog: noexec can never stop anyone from executing code anyway.
<infinity> Err.
<infinity> Sorry, ubuntulog.. You got tab completion when the other dude left. :P
* infinity pats ubuntulog gently.
<Alpha232> Can I tune the arp cache size? i'm getting "Neighbour table overflow" errors. or is there an easy way to flush, as ip neigh flush dev eth0 doesn't work?
#ubuntu-server 2007-08-28
<kshah> command line cmd for the active processes? i need to kill mysql, it just crapped itself
<Kamping_Kaiser> whats your question?
<kshah> I need to kill mysql and restart
<kshah> it just went nuts
<kshah> lspci I thought listed processes
<kshah> when I try it, I get input / output error
<ajmitch> no, 'ps aux' would give you a list of active processes
<ajmitch> but you don't need to kill mysql that way
<ajmitch> sudo invoke-rc.d mysql restart
<kshah> unable to execute
<kshah> (i just tried it, thats what i got)
<Kamping_Kaiser> ajmitch, is that an upstartism or a way of restarting i havent seen before?
<ajmitch> Kamping_Kaiser: no, it's a debianism way of running the initscripts
<Kamping_Kaiser> ajmitch, neat. never seen it before
<kshah> nor had I
<ajmitch> kshah: what did you try & run?
<kshah> sudo invoke-rc.d mysql restartt
<kshah> won't work
<ajmitch> I hope that 'restartt' was a typo
<kshah> it was :)
<kshah> I'm using drupal (a LAMP based CMS) and I just started getting mysql errors, someone told me to repair the table
<kshah> but I can't even restart mysql, or shut it down, or apparently anything
<ajmitch> mysql from ubuntu packages?
<kshah> yeah, I used the Ubuntu Server 7 guide to install everything
<kshah> so it came pre installed i guess
<ajmitch> ok, and what did you say you got when you tried to run that?
<ajmitch> and what do  you mean by mysql going nuts, also?
<kshah> the error was:
<kshah> unable to execute /usr/sbin/invoke-rc.d: Input/output error
<ajmitch> sigh, I'm being called away, I'll let Kamping_Kaiser help :)
<ajmitch> back later
<kshah> k thx
<Kamping_Kaiser> gah. i'm only here for 10 more min :S *reads up*
<Kamping_Kaiser> ajmitch, catch you later
<Kamping_Kaiser> pity i almost never use mysql.
<Kamping_Kaiser> kshah, can you stop it?
<kshah> don't think so
<kshah> kill -9 mysql ?
<Kamping_Kaiser> /etc/init.d/mysql-server stop
<Kamping_Kaiser> ouch, my RSI is kicking in :S
<kshah> lol
<kshah> uh oh
<kshah>  you may have been right
<kshah> well, scorpking might have been
<kshah> Read-only file system
<kshah> how can I check diskspace?
<Kamping_Kaiser> df -h
<Kamping_Kaiser> and df
<Kamping_Kaiser> (-h is human readable, but the rounding may confuse things)
<kshah> no, less than 1% of everything is used
<Kamping_Kaiser> what about `mount`
<kshah> I just logged in, it said new mail, I wrote mail, it says 'read only'
<Kamping_Kaiser> check if anythings marked ro
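Kamping_Kaiser's check can be scripted. Shown here against captured `mount` output so the snippet is self-contained; in practice you would pipe the real `mount` into the awk:

```shell
# flag filesystems whose mount options start with "ro" (read-only)
sample='/dev/sda1 on / type ext3 (ro,errors=remount-ro)
proc on /proc type proc (rw,noexec)'
printf '%s\n' "$sample" | awk '$NF ~ /^\(ro[,)]/ {print $1 " is read-only at " $3}'
# prints: /dev/sda1 is read-only at /
```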
<kshah> oh the TEMP folder maybe
<kshah> could that be it?
<Kamping_Kaiser> temp is ro??
<kshah> a memory leak, causing the temp folder to be filled, unemptied
<kshah> causing other bad shit to go down?
<Kamping_Kaiser> if temp is full, thats a bad look
<kshah> temp is ro?? I don't know what you mean
<kshah> I'm new to linux
<Kamping_Kaiser> ro=read only
<Kamping_Kaiser> kshah, can you put the output of `df -h && mount` in pastebin for me to look at?
<Kamping_Kaiser> !paste
<ubotu> pastebin is a service to post large texts so you don't flood the channel. The Ubuntu pastebin is at http://paste.ubuntu-nl.org (make sure you give us the URL for your paste - see also the #ubuntu channel topic)
<kshah> yes, temp is RO in that case, the error returned from the 'mail' command was /tmp/mail.Rx(some other chars here): Read only file system
<Kamping_Kaiser> temp being read only is a bad thing
<kshah> I can't run mount command
<kshah> Input / output error
<kshah> df -h works
<Kamping_Kaiser> give me the df -h output then
<mathiaz> kshah: do you see any hardware problems ?
<kshah> I don't believe there are; this is a server running off of a laptop though
<kshah> so...
<ajmitch> I/O errors like that generally aren't good
<ajmitch> usually dmesg will have more info in that case
<kshah> going to pastebin in a sec
<kshah> http://pastebin.com/m72ef7083
<kshah> Kamping_Kaiser: ^^
<Kamping_Kaiser> kshah, i'd look in dmesg like ajmitch suggests - `dmesg | tail -n 30`
* Kamping_Kaiser tried to tab complete dmesg *slap*
<ajmitch> if dmesg will run :)
<kshah> I can't run that command
<Kamping_Kaiser> ajmitch, quite *heh*
<kshah> I've already tried at scorpking's behest
<ajmitch> sounds like you have some far more serious issues there
<Kamping_Kaiser> what about `tail -f /var/log/syslog`
<kshah> that works
<kshah> ill pastebin
<Kamping_Kaiser> when i started getting errors like that i put the drive in the fridge and started data recovery stuff.. :|
<Kamping_Kaiser> that was a laptop hdd too
<kshah> http://pastebin.com/m6853111d
<Kamping_Kaiser> could you run with -n 30? `tail -n 30 -f /var/log/syslog`
<Kamping_Kaiser> er, no -f
<mathiaz> kshah: yeah.. it seems that your hd is dead
<kshah> shut it down ?
<Kamping_Kaiser> kshah, what season is it where you are?
<kshah> summer, i'm in in NJ
<Kamping_Kaiser> summer, you're going to have trouble finding somewhere cold then *heh*
<kshah> http://pastebin.com/m245a3225
<kshah> it is in an air conditioned office, I guess its just too old and too much of an overheating laptop to handle it all
<Kamping_Kaiser> drive doesnt look good
<kshah> how can I shut it down?
<Kamping_Kaiser> halt
<kshah> sudo halt?
<Kamping_Kaiser> or the power button
<Kamping_Kaiser> yea
<kshah> lol input output error
<Kamping_Kaiser> try method B
<kshah> I'm not there
<ajmitch> looks like it can't run anything not in the cache
<kshah> it's about 20 miles away
<ajmitch> oops
<kshah> yeah...
<Kamping_Kaiser> ah
<kshah> init 6?
<Kamping_Kaiser> you can try it
<kshah> init: not being executed as init
<kshah> I don't know what that means
<Kamping_Kaiser> it means you need to get on the phone :)
<kshah> oh NOs lol
<kshah> oh jeez... this is going to suck
<kshah> kill -9 kills a process right?
<Kamping_Kaiser> yes. (-9 is SIGKILL; plain kill sends SIGTERM, iirc)
<ajmitch> echo O > /proc/sysrq-trigger
<ajmitch> a truly evil way of shutting down
<kshah> if it works..
<Kamping_Kaiser> its still evil.
<kshah> why is that?
<Kamping_Kaiser> ajmitch, is that 'shut down' or 'die now'
<kshah> permission denied (ran as sudo)
* Kamping_Kaiser wonders if `kill -15 `pidof init`` counts as 'shut down' (as i think about it, no it doesnt)
<jdstrand> kshah: sudo sh -c 'echo O > /proc/sysrq-trigger'
* Kamping_Kaiser debates this as a reason to look into ramdiscs again
<kshah> command didn't do anythin jdstrand
<kshah> well, it may have..
<kshah> but..
<kshah> i'm still connected ssh
<Kamping_Kaiser> i'm heading out
<ajmitch> there's always the option of getting in a car & going there
<Kamping_Kaiser> catch you all later on
<kshah> that is what is going to happen it looks like
<kshah> thank you all for the help
<ajmitch> always a nice time for a scenic drive
<jdstrand> kshah: I was addressing only the 'permission denied' error.  try 'tail -f /var/log/syslog' and see if stuff is shutting down.  I looked at your errors in pastebin-- my sympathies...
<ajmitch> jdstrand: triggering a shutdown that way should just do a poweroff, no gentle shutdown
<briansp> I'm trying to install postfix and somehow I seem to have deleted /etc/init.d/postfix
<briansp> apt-get/dpkg complain that its missing, so I touch it, remove, and reinstall, but its still just the empty file I created
<briansp> how do I get a completely clean reinstall from scratch?
<leonel> apt-get --purge --force-all postfix
<leonel> oops ..
<leonel> briansp:  sudo apt-get --purge --force-yes remove postfix
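A note on the conffile behaviour briansp is hitting: dpkg deliberately does not replace a conffile the admin has deleted, so a plain remove-and-reinstall leaves /etc/init.d/postfix missing. A purge-based recovery sketch (purging discards the postfix configuration, so back it up first):

```shell
sudo apt-get remove --purge postfix   # purge makes dpkg forget the conffile state
sudo apt-get install postfix          # a fresh install then recreates /etc/init.d/postfix
```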
<mignish> Could someone direct me to a howto or some documentation on creating a user in proftpd
<mignish> i'm having trouble with write access.
<mignish> Does anyone know anything about proftpd user privileges?
<kraut> moinm
<mwh_> Hello, I have a simple question on installing ubuntu-server
<mwh_> during the setup the installer halts
<mwh_> writing: Initializing gfx code...
<mwh_> then static memory: 0x40020 - .... etc
<mwh_> and some mallocs
<mwh_> 0, 1, 2, 3
<mwh_> then it just halts
<mwh_> any idea what might be the case?
<soren> mwh_: What's the very last thing you see?
<soren> mwh_: And which version?
<mwh_> latest stable
<mwh_> the very last thing is malloc 3: 0x0 - 0x0
<soren> mwh_: What's the hardware? Does it work with a regular (non-server) Feisty CD?
<soren> Ah, no, forget about trying it with another CD. It's the same kernel. I forgot.
<mwh_> fails on both desktop and server
<soren> What's the hardware?
<mwh_> let me see
<mwh_> its not a regular pc ...  that might be the problem
<mwh_> its a tiny box
<mwh_> anyways working to get some hardware data
<soren> EPIA motherboard, perhaps?
<soren> Or even tinier?
<mwh_> its not that tiny ... 30x25 centimeters I think
<soren> It doesn't sound like something I've encountered before. Could you try with Gutsy Tribe-5?
<mwh_> Its called Mini Pc its a K6-350MHz, socket super 7, with a cyberpro2010 gfx card
<mwh_> anyways .. we are thinking about buying a small box designed for linux
<mwh_> soren: anyways, thank you
<soren> mwh_: Did you try with gutsy?
<mwh_> no
<jordi> ahoy
<jordi> I'm looking into customising gutsy server to do automated installs, or at least no-question installs, and I wonder how different is the ubuntu server installer from the etch installer
<jordi> I'm guessing seed vs tasksel changes will be in place
<jordi> are there any docs out there?
<sommer> jordi: here's an article about kickstart...it looks a little dated though:  https://help.ubuntu.com/community/KickstartCompatibility
<sommer> jordi: also clonezilla may be what you're looking for:  http://clonezilla.sourceforge.net/
<sommer> jordi: There was a presentation on clonezilla at Ubuntu Live...I think this is the link:  http://video.google.com/videoplay?docid=1037666355089325851
<jordi> jesus, SF is *slow*
<jordi> sommer: will poke at these two directions, thanks!
<jordi> my initial idea was to make the necessary changes to the installer
<jordi> I don't know what clonezilla does yet, but the idea is that debian-installer is flexible enough that you can do network, cd, usb, etc installs
<jordi> which is good, specially network
<jordi> aha
<jordi> clonezilla webpage downloads me a php file. yay
<nealmcb> less than 2 hours from now: https://wiki.ubuntu.com/ServerTeam/Meeting
<halcyonCorsair> i'm trying to run bind9 on a system with about 759 vlans over two interfaces, but from 497th vlan onwards, bind says it can't listen because there aren't enough free resources
<halcyonCorsair> can anyone help?
<BenC> quick question, what is the default firewall GUI for gutsy?
<mathiaz> BenC: I don't think there is one.
<BenC> is there a suggested GUI?
<nealmcb> BenC: with a "no ports by default" security policy, firewalls are of less value in general
<nealmcb> ...no ports open by default....
<BenC> I'm not asking for a lecture on firewalls, I'm trying to answer a customers question :)
<nealmcb> what sort of firewall needs do you have?
<BenC> like a local firewall, like windows offers
<kshah> my temp folder has become read only, something bad happened and I may have a hardware failure, how can I clear the cache by the command line?
<BenC> kshah: sudo sync
<nealmcb> BenC: you may get a better answer on that in #ubuntu
<BenC> kshah: but that may fail if you already have a failure in hw
<kshah> BenC: it did indeed fail
<BenC> nealmcb: I'm not trying to scroll through #ubuntu, I'm hoping the server team can answer a question for the kernel team :)
<mathiaz> BenC: there are a couple of GUI that exists, but I don't remember them right now.
<nealmcb> I've heard of firestarter, kraal (mentioned in an old note from a blueprint I have), etc
<nealmcb> but I can't recommend anything personally
<mathiaz> nealmcb: are these in main ?
<nealmcb> no idea.  I don't use them :-)
<nealmcb> BenC: moving up the stack, I see :-)
<BenC> nealmcb: yes, but basically it's reality already matching title though :)
<nealmcb> i've used shorewall years ago and liked it, but it may be more complicated than they want
<nealmcb> wasn't gui as I recall
<mathiaz> nealmcb: correct. shorewall has a set of text files
<BenC> yeah, this customer is looking for a desktop solution, and even though I agree it's a "warm fuzzy" sort of thing, it is a selling point for getting users to switch from os-competitor to ubuntu
<mathiaz> nealmcb: to describe your network. It's then translated into iptables rules.
<mathiaz> BenC: you could point him to firestarter
<mathiaz> BenC: it's in universe
<BenC> mathiaz: ok, thanks
<mathiaz> BenC: and it seems to be maintained upstream
<mathiaz> BenC: However I've never used it - so I don't know how simple it is
<BenC> mathiaz: customer mentioned that program, so I'll let them know it's in universe
<soren> jordi: If you're familiar with preseeding d-i, just go with that. We use d-i, too.
<sommer> soren: is there docs somewhere on preseeding?
<soren> sommer: Yes, hang on.
<sommer> ubuntu specific anyway
<soren> https://help.ubuntu.com/7.04/installation-guide/i386/appendix-preseed.html
<soren> Knock yourself out :)
<sommer> soren: genius...thanks
<soren> sommer: np
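A minimal preseed fragment of the kind the linked appendix documents might look like the following; the values are illustrative, it is nowhere near a complete file, and it would typically be served via a `preseed/url=` boot parameter:

```text
# Illustrative d-i preseed fragment (not a complete file)
d-i debian-installer/locale string en_US
d-i netcfg/get_hostname string unassigned-hostname
d-i mirror/http/hostname string archive.ubuntu.com
d-i mirror/http/directory string /ubuntu
d-i passwd/username string ubuntu
d-i passwd/user-fullname string Ubuntu User
tasksel tasksel/first multiselect standard
d-i pkgsel/include string openssh-server
```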
<nealmcb> any thoughts on ifolder?  I looked it up after the question earlier on.  packaging status: https://bugs.launchpad.net/ubuntu/+bug/87122
<ubotu> Launchpad bug 87122 in ubuntu "[needs-packaging]  iFolder for Edgy and Feisty" [Wishlist,Confirmed] 
<nealmcb> 20 minutes, in #ubuntu-meeting:  https://wiki.ubuntu.com/ServerTeam/Meeting
<kshah> server (aka laptop) bit the bullet yesterday, hdd fail, recommendations on a sub $500 machine? where to?
<lynx_hamp> Hello all
<lynx_hamp> I have a problem trying to install a S5000VSA board ... can't detect the SLI Raid
<lynx_hamp> any ideia ???
<mralphabet> What cd are you trying to install off of?
<lynx_hamp> I try with Ubuntu 7.04 Desktop, 7.04 Server and 6.06 server
<lynx_hamp> S5000 (ESB2) SATA RAID and LSI1064e/1068 SAS RAID
<mralphabet> too new?
<lynx_hamp> maybe ... but SuSe and RedHat already have support
<Yahooadam> ive got a problem with my server (again)
<Yahooadam> getting grub error 24 (again i think)
<Yahooadam> How would i go about fixing it, if i do a sudo mount hda1 /mnt/temp
<Yahooadam> i get a wrong FS type error
<Yahooadam> and dmesg logs - "[  843.992000]  EXT3-fs: hda1: couldn't mount because of unsupported optional features (1000100)."
<sommer> Yahooadam: not sure if this applies to you but:  http://ubuntuforums.org/archive/index.php/t-330229.html
<sommer> looks like the grub 24 error is bad news for your hard drive.
<sommer> more from the grub manual: http://www.gnu.org/software/grub/manual/grub.html#Stage1-errors
<Yahooadam> dang :(
<Yahooadam> u mean i have to re-install again :'(
<sommer> Yahooadam: possibly.  You might try booting from a CD and seeing if you can access the partition.
<Yahooadam> im on a booted CD :p
<Yahooadam> and when i try and mount, thats the error
<Yahooadam> no way to repair ?
<Yahooadam> or atleast recover my hda1 ...
<sommer> Yahooadam: sorry I don't have much experience trying to recover hard drives...if you have a backup you can restore from backup.
<Yahooadam> fraid not :(
<Yahooadam> dont even know how u would backup ...
<Yahooadam> it was on my todo list :p
<sommer> heh...ya that's happened to me too.
<Fibres> Hi all
<Fibres> Hi can anyone tell me where the kernel sources are on a standard install of ubuntu server 6.06?
<sommer> Yahooadam: you might try another LiveCD...are you using Gutsy?
<Yahooadam> zomg theres a new revision ? :o
<Yahooadam> im on feisty
<Yahooadam> fiesty - or however u speel it :p
<Yahooadam> never could get it right
<sommer> Gutsy isn't out, but there's test CD's
<dho_ragus> Fibres: /usr/local/src
<sommer> s/test/Tribe/
<Yahooadam> well new case, new HDD caddys, and a new HDD on order :(
<sommer> Yahooadam: here's a link if you want to try it:  http://cdimage.ubuntu.com/releases/gutsy/tribe-5/
<Fibres> Does not have anything in it.
<Fibres> Do I need to install them via apt-get. If so what do i need to install. Name of package would be good.
<mathiaz> Fibres: linux-source
<soren> Fibres: linux-source-2.6.15
<Fibres> Ah thanks hopefully I might get vmware installed soon
<soren> Fibres: And then it's in /usr/src
<Yahooadam> 700mb :(
<Yahooadam> dang
<Fibres> Which would I be better installing linux-source or linux-source-2.6.15?
<soren> Fibres: Either.
<soren> Fibres: :)
<soren> Fibres: Installing linux-source will also install linux-source-2.6.15
<Fibres> Ah thanks
<Yahooadam> adam@ubuntu:/dev$ fdisk -l /dev/hda
<Yahooadam> Cannot open /dev/hda
<Fibres> ah ok it has now given me a .tar.bz2 file. How do I unzip that. Im used to tar.gz
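Fibres' question goes unanswered in the log: a `.tar.bz2` extracts exactly like a `.tar.gz`, with `-j` in place of `-z` (modern GNU tar also autodetects with plain `-xf`). A self-contained demo using a throwaway archive in place of linux-source-2.6.15.tar.bz2:

```shell
#!/bin/sh
# Sketch: creating and extracting a bzip2-compressed tarball.
command -v bzip2 >/dev/null 2>&1 || { echo "bzip2 not installed"; exit 0; }
mkdir -p demo && echo hi > demo/file
tar -cjf demo.tar.bz2 demo        # -j selects bzip2 (vs -z for gzip)
rm -r demo
tar -xjf demo.tar.bz2             # extract it back
cat demo/file                     # prints "hi"
```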
<sommer> Yahooadam: try fdisk -l /dev/sda
<Yahooadam> forgot to sudo it :p
<Yahooadam> Disk /dev/hda: 41.1 GB, 41110142976 bytes
<Yahooadam> 255 heads, 63 sectors/track, 4998 cylinders
<Yahooadam> Units = cylinders of 16065 * 512 = 8225280 bytes
<Yahooadam> if i get a new HDD
<Yahooadam> how easy is it to transfer the OS drive over to it
<Yahooadam> and have it bootable and all that jazz
<soren> It's not that hard.
<soren> ...if you know how. :)
<Yahooadam> any good guides around ? ;)
<soren> Dunno, actually.
<Yahooadam> :(
<soren> I've never done it, really. I usually have my data stored in /home (and /var/lib/mysql) and all my config is in /etc, so I just grab the list of installed packages and install them on a fresh system.
<Yahooadam> you can grab a list of installed packages ?
<Yahooadam> dang i need to learn linux :p
<soren> But moving it to a new disk isn't really that hard. Install the new disk, partition it, mount the filesystems where they should be and restore from the old system.
<soren> Yahooadam: dpkg --get-selections
<soren> Yahooadam: That's the machine-readable version.
<soren> Yahooadam: ...which you can feed to dpkg --set-selections.
<soren> Yahooadam: 'dpkg -l' is more human readable.
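Soren's package-list trick, spelled out as a sketch. Capturing selections needs no root; restoring them does. The filename is illustrative:

```shell
#!/bin/sh
# Sketch: clone an installed-package list onto a fresh system.
command -v dpkg >/dev/null 2>&1 || { echo "not a dpkg system"; exit 0; }

# On the old machine: record every package's install/deinstall state.
dpkg --get-selections > selections.txt
head -n 3 selections.txt

# On the fresh install (as root):
#   dpkg --set-selections < selections.txt
#   apt-get dselect-upgrade        # acts on the restored selections
```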
<Yahooadam> wow
<Yahooadam> i just fixed my problem :o
<Yahooadam> i think
<Yahooadam> w00t :D :D
<Yahooadam> mke2fs -S /dev/hda1
<Yahooadam> fixed
<Yahooadam> ok how can i backup HDA1 b4 it breaks :p
<Yahooadam> (or preferably, HDA)
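A cautionary sketch of the recovery path Yahooadam stumbled onto: `mke2fs -S` rewrites only the superblock and group descriptors and is a last resort; trying a backup superblock with `e2fsck` first is safer. The device name is illustrative, and the `RUN_FOR_REAL` guard is a safety latch so the block is inert by default:

```shell
#!/bin/sh
# Sketch: ext2/ext3 superblock recovery. Destructive -- inert by default.
[ "${RUN_FOR_REAL:-0}" = 1 ] || { echo "dry run only"; exit 0; }
e2fsck -b 32768 /dev/hda1       # try a backup superblock (8193 on 1k-block fs)
# only if every backup superblock fails:
mke2fs -S /dev/hda1             # rewrite superblock + group descriptors only
e2fsck -f /dev/hda1             # always follow -S with a full fsck
```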
<halcyonCorsair> hi, i'm trying to run bind9 to supply dns to either 759 (over 2 interfaces; 253, 506) or 1012 vlans (over 3 interfaces; 253, 253, 506), but am running into resource issues, can anyone help? syslog is displaying this: http://rafb.net/p/M9xUdJ51.html
<halcyonCorsair> otherwise, what alternatives are there to bind9?
#ubuntu-server 2007-08-29
<jordi> soren: nod. I'll poke at that.
<halcyonCorsair> hi, can anyone tell me how limits are dealt with in ubuntu/debian? (ie. limits.conf?)
<pschulz01> halcyonCorsair: I don't have that file? Is that something that exists in another distribution?
<ajmitch> pschulz01: you probably do have it, at /etc/security/limits.conf
<pschulz01> ajmitch: Ta.
<halcyonCorsair> ajmitch: all i really want to do is increase the maximum number of open files, whats the best wat to do that?
<ajmitch> you may be hitting a kernel limit as well
<ajmitch> though unlikely
<pschulz01> halcyonCorsair: You are 'probably' looking for something like what is set in /proc/sys/fs/file-max
<ajmitch> pschulz01: that value is very high
<ajmitch> having that many files open would be unusual
<pschulz01> halcyonCorsair: Although.. the result of 'cat /proc/sys/fs/file-max' is 202982
<pschulz01> ajmitch: yep.
<ajmitch> ulimit -n would show 1024 usually
<halcyonCorsair> ajmitch: pschulz01: i think the limit i'm looking at can be found via ulimit -n
<halcyonCorsair> yeah...
<ajmitch> using pam_limits?
<halcyonCorsair> ajmitch: i'm pretty sure i need to increase that....as i mentioned, i'm trying to run bind9 over 1012 vlans...
<ajmitch> sounds painful
<pschulz01> halcyonCorsair: Looks like /etc/security/limits.conf is what you need.. this is what pam_limits uses.
<halcyonCorsair> ajmitch: well....1012 vlans, each segregated, each needs dns *shrugs*
<ajmitch> and pam_limits would need to be in /etc/pam.d/common-session, I presume you have that there
<halcyonCorsair> ajmitch: i only have pam_unix.so and pam_foreground.so in common-session
<ajmitch> ok, add it & try
<ajmitch> though bind9 may itself have a limit on how many files it can open
<ajmitch> and I have to run out in 5 minutes, sorry
<halcyonCorsair> ajmitch: can i just add the line "pam_limits.so" to the end of common-session?
<ajmitch> 'session required pam_limits.so'
<ajmitch> order is important, so it may need to go near the top
<ajmitch> however I don't know how this will get used with a daemon like bind9
<halcyonCorsair> ajmitch: http://rafb.net/p/M9xUdJ51.html
<halcyonCorsair> that was what i was getting
<pschulz01> halcyonCorsair: My setup shows 'pam_limits.so' entries in pam.d in cron,ssh,login,su...
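Pulling the threads above together: the per-process limit comes from ulimit (set via pam_limits for PAM sessions), while /proc/sys/fs/file-max is the system-wide cap. Note that pam_limits only applies to PAM sessions (login, ssh, su, cron), so a daemon launched from an init script never passes through it, which is one reason raising limits.conf alone may not reach bind9:

```shell
#!/bin/sh
# Sketch: inspect the open-file limits discussed above.
ulimit -Sn                      # soft per-process fd limit, often 1024
ulimit -Hn                      # hard ceiling the soft limit may be raised to
if [ -r /proc/sys/fs/file-max ]; then
    cat /proc/sys/fs/file-max   # system-wide cap on open files
fi

# /etc/security/limits.conf entries (domain, type, item, value):
#   bind  soft  nofile  4096
#   bind  hard  nofile  4096
# and in /etc/pam.d/common-session, near the top:
#   session required pam_limits.so
```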
<ajmitch> damn, have to leave
<ajmitch> back later
<halcyonCorsair> alright, i'll soldier on :) see what i can come up with
<pschulz01> halcyonCorsair: Are you convinced that it's 'number of files' problem?
<Nafallo> lalala
<pschulz01> Nafallo: Howdy babe.
<Nafallo> hehe. hi :-)
<halcyonCorsair> pschulz01: not convinced, but its my best lead at the moment
<pschulz01> halcyonCorsair: Other than googling for the error.. If you can start up bind with 'strace' you should be able to find the exact call that fails.
<Nafallo> oh!
<Nafallo> I forgot to move a cable :-)
<Nafallo> good thing I got access til 7 in the morning :-)
<Kamping_Kaiser> hi all
<pschulz01> Kamping_Kaiser: Morning.
<Kamping_Kaiser> pschulz01, morning. hows it going?
<pschulz01> Nafallo has already sang the 'good-morning' babe song.
* Nafallo yawns
<pschulz01> Kamping_Kaiser: Working on some requirements documentation :-)
<Kamping_Kaiser> o_0
<Kamping_Kaiser> pschulz01, fun fun.
<pschulz01> Kamping_Kaiser: You'll see it shortly :-)
<Kamping_Kaiser> about to start typing notes again  (and multitasking with irc, so bad look)
<Kamping_Kaiser> pschulz01, that sounds rather hopeful :)
<pschulz01> Kamping_Kaiser: Where there's life there's hope.
<pschulz01> Kamping_Kaiser: What do you know about XSL/XSLT ?
<Kamping_Kaiser> pschulz01, how very philosphical... just wish i could agree atm - i'm about to commit myself to another day of typing... RSI anyone?
<Kamping_Kaiser> pschulz01, i've heard of them before
<Kamping_Kaiser> they sound fairly familiar
<pschulz01> Kamping_Kaiser: I'm moving all my 'user' details into an XML format. The plan is that that will help with the migration.
<Kamping_Kaiser> ah... then i've probably heard them in relation to docbook
<pschulz01> Kamping_Kaiser: I'll leave you to it.
<Kamping_Kaiser> pschulz01, i'm looking at irc now and then, if i dont reply, i'm typing elseware :)
<halcyonCorsair> pschulz01: does this mean anything to you? http://rafb.net/p/GBvKm991.html
<pschulz01> halcyonCorsair: You live in NZ
<Kamping_Kaiser> lol
<halcyonCorsair> halcyonCorsair: yes?
<halcyonCorsair> pschulz01: the strace....anything there?
<Kamping_Kaiser> rofl
<halcyonCorsair> oops
<pschulz01> halcyonCorsair: Can you run it again with the 'include children' option? (-f I think)
<pschulz01> halcyonCorsair: What I want to see is the print call that produces the error that you are seeing... then we can work back from there.
<halcyonCorsair> pschulz01: eek, its 1.3M
<pschulz01> halcyonCorsair: tail -n 200
<pschulz01> halcyonCorsair: With maybe a grep.
<halcyonCorsair> i won't get to see it print the error in there i don't think, will I/
<ajmitch> grep helps
<Kamping_Kaiser>  grep "string" -A15 -B15
<halcyonCorsair> i think this is the relevant section for the first vlan that bind9 fails for:
<halcyonCorsair> http://rafb.net/p/A9bKUz60.html
<halcyonCorsair> ajmitch, pschulz01: see anything interesting there?
<ajmitch> you mean apart from the 'too many open files' bit?
<halcyonCorsair> ajmitch: which would bring me back to trying to change the max open files limit, yes?
<ajmitch> quite
<halcyonCorsair> peachy
<Kamping_Kaiser> heh
<halcyonCorsair> ajmitch: i'm running bind chrooted, so in my limits.conf file, i need: "bind hard nofile 4096" perhaps?
<ajmitch> you can try it
<halcyonCorsair> i love the confidence :)
<ajmitch> not many people ask about having >1000 vlans in here :)
<halcyonCorsair> well sure....they're just afraid to push the boundaries!  :P
* Kamping_Kaiser tries to imagine under what circumstances he can justify 1000 vlans to connect his laptop+desktop
<Nafallo> ehrm
<Nafallo> what would you need that for?
<halcyonCorsair> Nafallo: a hotel with about 500 rooms, most of those needing two vlans with dhcp
<halcyonCorsair> hmm, although potentially the second vlan in each room doesn't need dhcp.....
<halcyonCorsair> but thats irrelevant
<halcyonCorsair> ajmitch: i get a lot more of this now: socket: too many open file descriptors
<halcyonCorsair> but i don't see a noticeable difference
<ajmitch> I wonder what would happen if you had multiple bind instances, just as a test
<halcyonCorsair> how would you propose i do that?
<ajmitch> you could do multiple chroots & start each individually
<halcyonCorsair> an interface / subnet per instance?
<ajmitch> that would end up with a lot of processes
<ajmitch> try just splitting it in 2 for now
<halcyonCorsair> hmm
<ajmitch> I presume you're generating a list of interfaces to listen on?
<halcyonCorsair> ajmitch: i'm not specifying the interfaces, no
<halcyonCorsair> ajmitch: what i'm specifying is the subnets
<ajmitch> either way
<ajmitch> so you can split based on that
<halcyonCorsair> ajmitch: http://rafb.net/p/smj01748.html
<ajmitch> ok, that looks like dhcp to me
<ajmitch> I'm meaning how you specify the bind configuration
<halcyonCorsair> oh whoops
<halcyonCorsair> hmm, now that you mention it....
<halcyonCorsair> come to think of it, i've been thinking bind and looking at dhcp....
<halcyonCorsair> i don't do anything to specify interfaces or anything for bind....
<halcyonCorsair> ajmitch: any idea on where i should begin?
<ajmitch> probably something to look at then
* Kamping_Kaiser passes ajmitch a cluebat ;)
<ajmitch> how are you starting it chrooted?
<halcyonCorsair> ajmitch: yes
<halcyonCorsair> also, where can i check whether the packager compiled it multithreaded?
<ajmitch> I asked how
<ajmitch> apt-get source bind9, see debian/rules
<halcyonCorsair> ajmitch: as per: http://www.howtoforge.com/perfect_setup_ubuntu704_p4
<ajmitch> ok, interesting
<ajmitch> so you could probably hack up the initscripts & create a new chroot, etc
<ajmitch> if you wanted to try it out that way
<ajmitch> I'd prefer to solve the open file descriptors issue
<halcyonCorsair> me too
<halcyonCorsair> but what i'm trying is coming up empty
<halcyonCorsair> using /etc/security/limits.conf to up the hard and soft nofiles limit
<halcyonCorsair> upping /proc/sys/fs/file-max
<halcyonCorsair> and still no change
<ajmitch> most likely a hard limit in bind9 itself, at compile time
<ajmitch> since I've seen references to things like mangling headers to get a single process to use > 1024 open file descriptors
<ajmitch> eg FD_SETSIZE in /usr/include/linux/posix_types.h
<halcyonCorsair> are you saying i'll need to change that and recompile?
<ajmitch> I'm saying that I've seen that recommended, yes
<ajmitch> I don't know if it'll work or if it's a good idea
* halcyonCorsair wonder if anyone could lend him a brick wall to bash his head against....
<ajmitch> sure
<halcyonCorsair> ajmitch: can i be sure its getting FD_SETSIZE from there?
<ajmitch> nope
<ajmitch> bits/typesizes.h:#define        __FD_SETSIZE            1024
<ajmitch> it's also in 1 or 2 other places
<ajmitch> it'd be a really ugly hack if you have to do it this way
<KurtKraut> How can I know how many kbytes a process is occupying in RAM ?
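KurtKraut's question gets no answer in the log; one common approach is `ps`, whose RSS column reports resident memory in KiB. The process name below is illustrative:

```shell
#!/bin/sh
# Sketch: resident set size (RSS, in KiB) of a process.
command -v ps >/dev/null 2>&1 || { echo "no ps available"; exit 0; }
ps -o rss= -p $$            # this shell's own resident memory, in KiB
# by name instead of pid (name illustrative):
# ps -o rss= -C named
```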
<halcyonCorsair> there is a lib/bind/include/fd_setsize.h in the bind source, but it says:
<halcyonCorsair> * If you need a bigger FD_SETSIZE, this is NOT the place to set it.
<halcyonCorsair>  * This file is a fallback for BIND ports which don't specify their own.
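To see where the libc pins the constant ajmitch mentions, a quick search of the installed headers works; the exact path varies by architecture, hence the recursive grep, and nothing prints if the libc headers aren't installed:

```shell
#!/bin/sh
# Sketch: locate glibc's __FD_SETSIZE definition (typically 1024).
grep -rhs "define[[:space:]]*__FD_SETSIZE" /usr/include 2>/dev/null \
    || echo "libc headers not installed"
```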
<halcyonCorsair> ajmitch: unless you have another suggestion, short of trying another dns server, its all i got
<ajmitch> you've tried it, I presume?
<halcyonCorsair> another dns server?
<halcyonCorsair> or the FD_SETSIZE?
<ajmitch> no, changing the header & recompiling bind9
<ajmitch> it should only take a few minutes to verify
<halcyonCorsair> oh, i'm about to do that now
<ajmitch> doing it in a chroot may be wise
<ajmitch> you know the general procedures for building debian packages, right?
<halcyonCorsair> i've changed typesizes.h & posix_types.h
<halcyonCorsair> i haven't done it in about 2 or 3 years, since i last used debian, so not really....a howto of some kind or something would be handy here
<ajmitch> generally just apt-get install build-essential, apt-get build-dep bind9, apt-get source bind9, then go into the dir & debuild
<ajmitch> (you probably want devscripts as well)
<ajmitch> changing the version number in debian/changelog could be a good idea to avoid the package getting clobbered on upgrade
<ajmitch> but then you're on your own for further updates
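ajmitch's rebuild recipe as one sketch, including the version bump so apt won't clobber the custom package on the next upgrade. The `RUN_FOR_REAL` guard is a safety latch keeping the block inert; it assumes root, deb-src lines in sources.list, and network access:

```shell
#!/bin/sh
# Sketch: rebuild bind9 from source with a local version suffix. Inert by default.
[ "${RUN_FOR_REAL:-0}" = 1 ] || { echo "dry run only"; exit 0; }
apt-get install build-essential devscripts
apt-get build-dep bind9
apt-get source bind9
cd bind9-*/
dch --local +custom "rebuild with larger FD_SETSIZE"
debuild -us -uc                 # build, unsigned source/changes files
dpkg -i ../*.deb                # install every rebuilt binary package
```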
<halcyonCorsair> right
<halcyonCorsair> building now
<halcyonCorsair> whats the easiest way to install all my new packages?
<Kamping_Kaiser> dpkg -i?
<halcyonCorsair> ah dpkg -i *.deb did the trick
<halcyonCorsair> i wasn't sure if it would figure out the deps properly
<halcyonCorsair> ajmitch: success
<halcyonCorsair> how depressing
<Kamping_Kaiser> hehe. have to rebuild bind?
<ajmitch> halcyonCorsair: I was afraid of that
<ajmitch> Kamping_Kaiser: worse - rebuilding bind after modifying headers in /usr/include
<halcyonCorsair> ajmitch: do you have the bind9 source at the moment?
<ajmitch> sure
<Kamping_Kaiser> ajmitch, ew
<halcyonCorsair> ajmitch: look at lib/isc/unix/resource.c and search for FD_SETSIZE
<halcyonCorsair> ajmitch: there is a big comment starting with "The BIND 8 documentation reports:"
<ajmitch> hm
<halcyonCorsair> ajmitch: postfix and squid both seem to hack around this, but i guess bind hasn't had enough problems for people to care
<ajmitch> bind tends not to need to open thousands of file descriptors at the same time
<halcyonCorsair> *shrugs*
<ajmitch> most people tend not to have quite that many interfaces, for some reason
<halcyonCorsair> ajmitch: it used to be a problem with an older design, particularly on windows, and they've obviously taken note of it
<ajmitch> or they'd run bind on 1 ip address & rely on something like iptables or routing to redirect packets :)
<halcyonCorsair> potentially that might be smarter :)
<ajmitch> it may save you headaches in the long run
<lamont> why does someone want BIND to have that many open files?
<ajmitch> 11:52 < halcyonCorsair> ajmitch: well....1012 vlans, each segregated, each needs dns *shrugs*
<ajmitch> moderate levels of insanity
<halcyonCorsair> i didn't design the system....just trying to get it to work
<lamont> and they can talk to BIND on an off-subnet IP, no?
<halcyonCorsair> lamont: they're not meant to be able to no
<lamont> multiple BIND instances comes to mind, or just have bind listen on A.B.C.D and have everyone talk to that IP
<ajmitch> both of those suggestions have come up today
<lamont> put the DNS server in its own DMZ, which forces accessibility from each vlan
<halcyonCorsair> sometimes i really don't step back enough to see that fixing the current way the system works is stupid, and i should update the method
<lamont> beyond that, you're talking making changes to bind9 that probably don't want to be in the stock distro for ubuntu (or debian, or anywhere, for that matter) --> support costs
<halcyonCorsair> lamont: yeah
<lamont> ergo, I'd be inclined to make it listen on 1 IP, and have 1012 views if I need to keep everything that separate...
<halcyonCorsair> i'll probably just allow each vlan back to the main IP of that interface for DNS
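lamont's "one IP plus views" idea might look roughly like this named.conf fragment; addresses, names, and zone files are all illustrative, and in practice the acl/view pairs would be generated by a script:

```text
// Illustrative named.conf sketch: one listening address, per-vlan views.
options {
    listen-on { 10.0.0.1; };
};

acl vlan101 { 10.1.1.0/24; };
view "vlan101" {
    match-clients { vlan101; };
    zone "room101.hotel.example" {
        type master;
        file "/etc/bind/db.room101";
    };
};
// ...one acl/view pair per vlan.
```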
<dezmaeth> hi, im having a problem, i installed a new network card, and now none of my network cards are working, i did /etc/init.d/networking restart , but nothing , none gets an ip
<nandemonai> Hey guys, general nfs question.. If I have machine A mounting from machine B and machine B is rebooted or shutdown.. Will it umount on machine A safely?
<ICU> no
<soren> nandemonai: No. Any attempt to access the share will hang until the machine is back up.
<nandemonai> soren, Hmm ok.. by hang you mean time out or will it cause real issues?
<soren> nandemonai: Depends on how it was mounted. It might hang indefinetely.
<soren> nandemonai: I've heard stories about nfs server going down due to hardware issues, being sent to the manufacturer for repairs, coming back, getting turned back on, and then all of the clients just continued where they left off.
<nandemonai> soren, Mounted via a term, sudo mount yadda yadda. It's just I have a share from my desktop to the server and wondered about if I need to shutdown or whatnot.
<nandemonai> soren,  Heh that's pretty impressive.
<soren> nandemonai: If you're just rebooting the server, everything should be fine.
<nandemonai> soren, I guess if I need to shutdown for a period of time I should just umount server-side.
<nandemonai> soren, Server will be up fulltime, it's the fact it has a mount coming from my desktop I want to share that's the issue. I don't want to have to keep the desktop up fulltime also.
<soren> nandemonai: You can just leave it as is. If you try to read from it, it just won't return until the share is available.
<nandemonai> soren, Okie doke. It's not like it's a corporate environment or anything, just didn't want the server to die if one of my roomies tries to access the share and it's not up my end.
<nandemonai> soren, Thanks for the info ;)
<soren> nandemonai: Any time
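The hang behaviour soren describes is the default "hard" NFS mount, which retries forever; if a hung client is unacceptable, a soft mount trades the hang for I/O errors after a few retries. An illustrative /etc/fstab line (host, export, and mount point are assumptions):

```text
# /etc/fstab -- illustrative. 'hard' (the default) blocks until the server
# returns; 'soft' gives up after 'retrans' tries and returns an I/O error.
desktop:/home/me/www  /mnt/www  nfs  soft,timeo=100,retrans=3  0  0
```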
<kurosaki> soren, Umm, hate to be a nag, but I went ahead and rebooted the desktop and now even though it's back up I'm getting: Stale NFS file handle trying to access the share on the server.
<mralphabet> kurosaki=nandemonai?
<kurosaki> oh yeah.. sry
<soren> kurosaki: Could you pastebin the output of /proc/mounts, please?
<ivoks> hello
<soren> Hi, ivoks!
<nandemonai> I went ahead and umounted them and remounted so my proc wont be helpful right?
<soren> nandemonai: Not really :)
<nandemonai> ro,async,subtree_check
<nandemonai> They look ok?
<soren> nandemonai: Yeah, that looks sane.
<nandemonai> soren, That's what I thought. Oh well no biggie. I just have to remember to umount if I'm going to shutdown for a bit.
<soren> nandemonai: http://sysunconfig.net/unixtips/stale_nfs.txt
<soren> nandemonai: Do any of those apply, you think?
<soren> nandemonai: What was shared?
<soren> nandemonai: It really, really should work. You should not have to reboot
<nandemonai> soren, Well it's shared to a www dir via nfs
<soren> nandemonai: I'm more interested in where it's shared *from*.
<nandemonai> soren, That all works primo just didn't seem to like the server(ie my desktop) rebooting.
<nandemonai> soren, My desktop. Which I want to be able to shutdown on occasion. To local house 'server' ubuntu as well which is the client mounting from my machine.
<soren> nandemonai: Sure, but which part of the filesystem?
<nandemonai> soren, Just a dir with some data from my home dir.
<soren> nandemonai: I'm trying to determine if it's plausible that it might appear to disappear for nfs.
<soren> nandemonai: Sounds odd. If you can provide simple steps to reproduce it, I'd appreciate it.
* soren -> phone call
<nandemonai> soren, Ok sure. I'll reboot my machine again and see if the share stales out again.
<nandemonai> Well that would be right..
<nandemonai> It's fine now.
<nandemonai> Although, I didn't try and access it from another machine while it was down, which I did the previous time. I'll try that.
<nandemonai> Okies that did it. Seems that rebooting the nfs-server (my desktop) and then trying to access the share on the nfs-client (home server) hangs when my machine is down which makes sense ;) Ok then bring the nfs-server back up and the shares are showing as mounted via df with - - - - and an ls of the mount point reads Stale NFS file handle again.
<nandemonai> Err on the nfs client that is after my machine is back up.
<nandemonai> Both running ubuntu 7.04 and up to date. The home server is 64bit if that makes a difference... Any other info you'd like soren?
<nandemonai> soren, http://paste.ubuntu-nl.org/35548/ is my /proc/mounts after this has occurred. Last two lines are the two stale mounts.
<soren> nandemonai: nfs-kernel-server?
<nandemonai> soren, Yes indeedy.
<soren> nandemonai: Could you please file a bug about it?
<nandemonai> soren, Sure, will do.
<soren> nandemonai: Thanks very much.
<nandemonai> soren, So expected behaviour is that it should resume nicely once the server is back up right?
<soren> nandemonai: IMO, yes.
<nandemonai> soren, Okies.
<raky> i've just finished a long trial with centos 5.  too many things missing for what i'm doing.  is ubuntu server 7.04 good-to-go, or should i revert to 6.10 or 6.06 server?
<mralphabet> raky: it depends on what your requirements are
<mralphabet> raky: do you need certain versions of packages?
<raky> i just want to do a pxe-install and use Xen or VMware
<mralphabet> vmware works out of 6.06 and 6.10, I am using it on 7.04 atm (though at the time I installed, it was unsupported)
<mralphabet> for xen I would suggest 7.04
<mralphabet> I am sure vmware is working out of the box now
<raky> so i saw a few tweaks for VMware in 7.04.  were they as small as i saw - only like fixing 5 or so lines with command line?
<mralphabet> I did it ~4 months ago and at the time it took less than 2 minutes
<raky> great, thanks
<raky> and what OSs have you made into VMs?
<mralphabet> 2003 server, xp, vista
<raky> sweet
<raky> i'll just need to virt. some other linux distros
<mralphabet> should work great
<raky> thanks, mralphabet
<mralphabet> np
<osmosis> anyone using Xen ?
<osmosis> I wanted to know how it is.
<raky> I'm gonna choose a gui for Xen in a few days, there are several to choose from.
<raky> osmosis, xen has a lot more variables to choose from than vmware.  and virtualization techniques are a little diff between the two.
<raky> oh, and see ##xen for more details, osmosis
<osmosis> raky: okay cool
<raky> i have hardware RAID 5.  is install ubuntu to hdd diff from install to first hdd on the first prompt?
<mralphabet> raky: should be fine, software raid is troublesome, however
<dho_ragus> raky: i've had some issues with hardware raid and found solutions for all of them.  i don't understand what you're asking though.
<raky> i just wondered if the install process was diff for first hdd versus install on hdd, dho_ragus
<raky> everything looks like its working now, dho_ragus , but i'm curious about alternatives
<raky> why is the new server image upgrade offereing not authenticated?  whta's the deal?
<raky> if i'm going to run vmware, will i need this?  i know support has been increasing lately, so it may be good for that
<mralphabet> 13:49 < raky> why is the new server image upgrade offereing not authenticated?  whta's the deal?
<mralphabet> what?
<raky> it may be a eth0/1 problem, i'm looking at my connection
<raky> i feared this, this connection thing is a problem, i have to work with the gui, maybe/maybe not
<mralphabet> this connection thing?
<raky> network settings
<raky> ok, i see, confused eth0 and eth1
<raky> n/m, good-to-go, but about the authentication, mralphabet i'm checking it again
<raky> yeah, the linux-server-image 2.6.20-16-server is not authenticated, mralphabet
<raky> i didn't expect server images to be not authenticated, and wonder why
<raky> i'm in no hurry now, i'll be back around 11 or 12 PM, or tomorrow morning.
<mralphabet> interesting
<mralphabet> I didn't have anything like that come up
<raky> hmm.  well, later, ttyl
<WaVeR> Hello
<Yahooadam> do you have a problem WaVeR - or just saying hi ?
<kraut> i have a problem with your plenking!
<Yahooadam> ... ?
<kraut> aaaaaaah!
<kraut> stop it!
<WaVeR> Yahooadam>  Just saying hi, and I try to offer my help.
<kraut> WaVeR: please help Yahooadam to stop plenking ;)
<WaVeR> I just joined the Ubuntu-server team so I'm here to see ^^
<WaVeR> I can't ^^
<kraut> damn it
<kraut> Yahooadam: please read this! http://en.wikipedia.org/wiki/Plenk
<Yahooadam> wtf is plenking :S
<Yahooadam> ah
<Yahooadam> ./shrug
<Yahooadam> its teh interwebs, who cares about the space between punctuation :p
<kraut> is this plenking? <- no
<kraut> is this plenking ? <- yes
<kraut> web2.0 doesn't support plenking!
<Yahooadam> it so does !
<WaVeR> LoL
<Yahooadam> see :p
<kraut> GAH ! ! ! 1 1 one eleven
<kraut> that was prell-plenking :P
<mathiaz> WaVeR: did you apply for membership on launchpad ?
<kraut> i did and i am still waiting for replies :/
<WaVeR> Yes mathiaz
<WaVeR> And I think you approved me
<mathiaz> WaVeR: ah ok. I thought you said you can't join the Ubuntu-server team.
<WaVeR> I'm trying to find out how I can help. Now I'm looking at the Ubuntu-server wiki page
<ajmitch> morning
<ajmitch> mathiaz: just wrote to the MC list for you
<mathiaz> ajmitch: hi. I've seen it. Thanks for your support.
<ajmitch> np
<mathiaz> WaVeR: great. Let me know if it's confusing.
<mathiaz> WaVeR: or if you have any question
<WaVeR> Thanks for your support mathiaz. I will inform you if I have trouble
<Yahooadam> plenked ?
<Yahooadam> :p
<halcyonCorsair> ajmitch: ping
<ajmitch> hello
<halcyonCorsair> ajmitch: so now that i know that changing FD_SETSIZE works, and is horrible
<ajmitch> heh
<halcyonCorsair> how do i get rid of the packages I created and revert back to the official version?
<ajmitch> you could remove & reinstall, or try apt-get --reinstall
<ajmitch> the latter is preferable
#ubuntu-server 2007-08-30
<halcyonCorsair> ajmitch: Depends: libdns21 (= 1:9.3.2-2ubuntu1.3) but 1:9.3.2-2ubuntu1.4 is to be installed
<halcyonCorsair> there are about four of those messages
<ajmitch> great
<ajmitch> that's usually security or updates
<halcyonCorsair> ajmitch: no, it's the original version of the package vs. the one that i built, i'm downgrading by specifying each one to be the older version...
<ajmitch> did you bump the version to 1:9.3.2-2ubuntu1.4?
<keescook> halcyonCorsair: yeah, that's what I've done in the past.  used "apt-cache madison PKG" to figure out the prior version, and then specified each one on the apt-get install line
<ajmitch> apt won't automatically downgrade
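keescook's recipe, sketched as a shell session (the package name and version are taken from the dependency error above; substitute your own rebuilt packages):

```shell
# list every version of the package apt knows about
apt-cache madison libdns21
# apt never downgrades on its own, so each package must be pinned
# back to the archive version explicitly; repeat for each rebuilt package
sudo apt-get install libdns21=1:9.3.2-2ubuntu1.3
```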
<ajmitch> hello keescook
<keescook> hiya ajmitch!
<keescook> how goes it?
<ajmitch> good, how are you?
<keescook> good, wading through a kernel update.  :)
<ajmitch> I see that we finally have a name for 8.04
<ajmitch> heh
<keescook> Yeah, I was curious if "H" was going to be reused
<keescook> I wonder why "C" was skipped.
<ajmitch> but noone remembers hoary now :)
<halcyonCorsair> i guess those voting for "Hungry Hippo" lost out in the end :)
<keescook> that would have rocked.
<ajmitch> how's the search for a security guy going?
<keescook> ajmitch: good, I think we're almost done.
<ajmitch> that's great
<ajmitch> I won't bother applying then :)
<halcyonCorsair> hmm....how do i remove my custom packages to make sure they don't interfere with anything anymore?
<ajmitch> I thought you just did that?
<halcyonCorsair> i downgraded them to the original version yeah...so if i do a apt-get upgrade, they're not gonna come up to haunt me?
<halcyonCorsair>  ah...it seems they won't, way cool
<ajmitch> no, since they're not in an apt repository in your sources.list
<ajmitch> so apt forgets all about them
<halcyonCorsair> learn something new every day...
<keescook> ajmitch: you keep a table of high/med/low changes between debian unstable and ubuntu, is that right?
<ajmitch> keescook: rc bugs fixed in debian that aren't in ubuntu
<osmosis> can anyone recommend a small boxen...something like a mac mini, but with hardware that doesn't fail.
<keescook> I just got burned by an update in debian that was prior to Etch, but after Feisty.  (i.e. fixed in gutsy)
<ajmitch> keescook: django.ajmitch.net.nz/rcbugs
<keescook> ajmitch: and that's just gutsy, right?
<ajmitch> yeah, just gutsy right now
<ajmitch> you want it for multiple distros?
<keescook> ajmitch: yeah, but I'm trying to figure out how to specify what I'm looking for.
<keescook> because it's "changes between a stable release of ubuntu's version and the following stable release of debian"...
<keescook> anyway, might not be needed, I guess.  A CVE should have been assigned for this bug.
<halcyonCorsair> how can i get apt-get to install config files over the top?
<ajmitch> overwrite existing config files?
<ajmitch> keescook: if there's something I can help with, just ask - I've got a mirror of debian's BTS which I rsync
<keescook> ajmitch: okay, I'll ponder it.  :)
<ajmitch> halcyonCorsair: it's generally not a good thing - you can purge & reinstall if you really need it
<ajmitch> or remove the config file & reinstall with apt-get -o DPkg::Options::="--force-confmiss"
<halcyonCorsair> mmm, yeah i did the purge and reinstall...although it kinda felt like i was taking to it with a hatchet
<keescook> ajmitch: any chance I could convince you to add a tiny column to your rcbugs output for main/restricted/universe/multiverse?  like maybe a single character: m, r, u, v ?
<ajmitch> keescook: yeah, though this list is only for universe right now :)
<ajmitch> I'll have to change some code around, should only be a few lines
<keescook> oh! heh. then can I convince you to add main?  :)
<ajmitch> it needs burnt down & rewritten anyway
<keescook> hehe
<ajmitch> it's seriously crap code that I can hardly understand
<ajmitch> must have been some late nights there
<keescook> heh
<halcyonCorsair> hey, can anyone tell me why my feisty server doesn't have colours when I ssh in? (i don't think it had colours with local logon either)
<halcyonCorsair> i mean, .bashrc says it'll be fancy and colourful if it detects xterm-color, but dapper seems to ignore that and be colourful anyway....
<halcyonCorsair> why not feisty?
<halcyonCorsair> hmm...ok, it's not that part...it's the ls alias somehow...
<halcyonCorsair> no, that's wrong
<halcyonCorsair> vim's not displaying colours...that's what's wrong
<halcyonCorsair> *sigh* i'm talking to myself here...
<Kamping_Kaiser> hehe
<ajmitch> halcyonCorsair: ls displays colour, vim doesn't?
<ajmitch> if so, there were changes from vim 6 to 7, check /etc/vim/vimrc
<halcyonCorsair> yes....it seems feisty comes fully loaded with vim-tiny which could be the problem...
<ajmitch> quite
<ajmitch> that was done for a reason, of course :)
<halcyonCorsair> oh?
<ajmitch> disk space for the desktop cd
<Kamping_Kaiser> bleh @ the excuse
<ajmitch> I presume that noone cared enough to reseed the full vim for the server
<ajmitch> Kamping_Kaiser: you'll live
<halcyonCorsair> best way to get vim in colour would be to update to full blown vim, or try and readd colour conf to /etc/vim/vimrc?
<Kamping_Kaiser> ajmitch, potentially
<ajmitch> ok, lunchtime, back later
<Kamping_Kaiser> you need real vim to get colouring
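For reference, the usual fix: install the full editor (vim-tiny is built without syntax-highlighting support, so a vimrc alone cannot help) with `sudo apt-get install vim`, then enable colours in a minimal ~/.vimrc:

```
" ~/.vimrc -- minimal colour setup (a sketch; adjust to taste)
syntax on
set background=dark   " tell vim the terminal background is dark
```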
<osmosis> can anyone tell my why feisty has different xen packages for desktop and server ?
<osmosis> what is PAE ?
<halcyonCorsair> Kamping_Kaiser: what do i need for real vim to start colorizing?
<Kamping_Kaiser> halcyonCorsair, a ~/.vimrc or /etc/vim/vimrc
<Kamping_Kaiser> afaik
<halcyonCorsair> crap....so i'm gonna have to do "THAT"!
<Kamping_Kaiser> hm?
<halcyonCorsair> i know nothing about configuring vimrc
<halcyonCorsair> oh....that was easy...hehe!@
<CharlieSu> Does anyone know of a linux program that is basically like WinSCP that uses an ncurses interface?
<sommer> hey all is it bad form to compare PEAR to CPAN and RubyGems?
<pschulz01> halcyonCorsair: Howdy..
<halcyonCorsair> pschulz01: sup
<pschulz01> halcyonCorsair: Did you find an answer to the 'files' problem?
<halcyonCorsair> pschulz01: yes, two solutions, one that is totally evil, and one that is pretty good
<halcyonCorsair> i still have to test them though
<pschulz01> :-)
<halcyonCorsair> pschulz01: soln1: alter /usr/include/linux/posix_types.h and /usr/include/bits/typesizes.h to set FD_SETSIZE to be larger than 1024 (EVIL!!!)
<pschulz01> halcyonCorsair: Yup.
<halcyonCorsair> soln2: add a listen-to directive to named.conf.options for each physical interface, and then have iptables allow access to that address for dhcp
<ajmitch> 2) still sounds a bit painful
<halcyonCorsair> ajmitch: not really
<halcyonCorsair> ajmitch: can you envision anything less painful?
<pschulz01> What does FD_SETSIZE do?
<pschulz01> Is it just used as an option? can it be overwritten within named?
<ajmitch> halcyonCorsair: listening on 1 interface & relying solely on the networking stack to get the packets to the right place
<ajmitch> which has its own problems
<halcyonCorsair> ajmitch: sure, I could do that, but my way means i have to modify fewer scripts :)
<ajmitch> it depends whether bind needs to open a socket for every interface it explicitly listens on
<halcyonCorsair> i think it does
<halcyonCorsair> either way, listening on 5 interfaces is a lot better than listening on 1000
<ajmitch> right, I was thinking that you meant  listening on every vlan interface
<ajmitch> but if you're listening on just 5, then it sounds much better
<halcyonCorsair> no, just the PHYSICAL interfaces
<halcyonCorsair> ie. eth2 covers eth2.2 - eth2.254, etc
<ajmitch> right, I missed that when reading it earlier
<pschulz01> halcyonCorsair: Option3. hack bind
<halcyonCorsair> when i first made the change, i forgot to listen on 127.0.0.1 :) wondered why dns suddenly didn't work on the server
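halcyonCorsair's soln2, sketched as a named.conf.options fragment (the BIND directive is actually spelled listen-on; addresses here are hypothetical, one per physical interface plus the loopback he mentions):

```
options {
    listen-on {
        127.0.0.1;      // without this, lookups on the server itself break
        192.168.2.1;    // eth2 -- covers eth2.2 .. eth2.254
        192.168.3.1;    // eth3
    };
};
```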
<ajmitch> ls -la
<ajmitch> hm
<ajmitch> (ssh is very lagged)
<pschulz01> halcyonCorsair: port/*/include/fd_setsize.h <-- from the CHANGES  file.. I'm looking now,
<ajmitch> pschulz01: hacking bind or rebuilding should be the last option
<halcyonCorsair> pschulz01: eh? not really....what our system was doing in the first place was kinda stupid
<ajmitch> since it means an unsupportable package
<pschulz01> ajmitch: first step in the  'get stuff fixed' process -> ..create patch.. submit patch..
<ajmitch> pschulz01: only if it's necessary
<pschulz01> halcyonCorsair: I can't see the file in the bind source anyway for 'linux' so.. hmm..
<halcyonCorsair> pschulz01: the files i modified aren't part of bind
<pschulz01> halcyonCorsair: That is the main problem! :-)
<halcyonCorsair> ajmitch: what package is /usr/include/linux/posix_types.h from?
<pschulz01> halcyonCorsair: Your entire system is now unsupported.
<halcyonCorsair> pschulz01: i'm not sure you understand the nature of what i'm talking about here...
<halcyonCorsair> either that, or you're making a joke that i missed
<pschulz01> halcyonCorsair: dpkg -S /usr/include/linux/posix_types.h
<pschulz01> linux-libc-dev
<pschulz01> halcyonCorsair: (Just a little one)
<pschulz01> halcyonCorsair: (joke.. that is)
<halcyonCorsair> right :)
<pschulz01> If bind is not able to do what you want it to do, then there are probably going to be other people in the same situation.. so rather than a workaround, fixing it properly is probably a 'nice thing to do'.
<halcyonCorsair> pschulz01: its not a problem that should be encountered if bind is configured in a more "correct" way
<pschulz01> (Please correct me if I'm wrong ajmitch) Unless FD_SETSIZE is a system limit, in which case there should be a kernel configuration parameter to change it. If it is a problem with bind, then.. well.. that was where I was heading.
<pschulz01> halcyonCorsair: Just rephrasing your problem...
<pschulz01> halcyonCorsair: You have a single 'bind' listening on lots of interfaces (1000+) and you want bind to open a socket to listen individually for each one?
<halcyonCorsair> pschulz01: that's what the system was doing, however it's not really very desirable behaviour when you think about it
<halcyonCorsair> i just hadn't questioned the logic of what it was doing is all
<halcyonCorsair> how can i check the version of a running module?
<ajmitch> halcyonCorsair: kernel module?
<halcyonCorsair> ajmitch: yes, i've just installed an updated intel e1000 driver and want to check that the new one is running
<ajmitch> hm, I think modinfo just tells you details of the one on disk
<ajmitch> so dmesg or syslog will be your best bet
<halcyonCorsair> *sigh* it didn't fix the problem....
<halcyonCorsair> damn HP for not supporting ubuntu
<halcyonCorsair> ....luckily i don't think its a showstopper
<ajmitch> what's the problem with it?
<halcyonCorsair> e1000: eth3: e1000_request_irq: Unable to allocate MSI interrupt Error: -22
<halcyonCorsair> i guess i'll find out shortly though
<ajmitch> does it cause issues, or just give an error?
<halcyonCorsair> just give an error so far, i'm about to start testing
<ph1zzle> hey guys, I am looking for wan failover software or a mechanism, basically designed for a machine with two nics, each hooked up to a separate wan, and when one wan goes down, it fails over to the other, any thoughts?
<halcyonCorsair> ph1zzle: what sort of NICs? (not that it really matters, just curious)
<halcyonCorsair> ph1zzle: look up under your kernel source, look up Documentation/networking/bonding.txt and then the "Configuring Bonding for High Availability"
<ph1zzle> sure, I actually just now found something easier, and I have been looking at this for a while
<ph1zzle> basically what was stated was: define two default routes, each with a different network card; if one route fails, the second will be used, therefore each nic can be on a different wan
<halcyonCorsair> oh true :) i was thinking more enterprisey :)
<ph1zzle> and you have your primary link defined as the link on wan 1 and your secondary on wan 2 then when the router on wan 1 is no good, then wan 2 should suffice
<ph1zzle> yeah, well at this point I just want a solution, lol
<ph1zzle> this office only contains 20 people, we have an adsl link that seems to fail constantly, and a sprint ev-do link as a backup and frankly... I just want it to work
<halcyonCorsair> http://www.epr.ch/brb/linux/backroute.php
<halcyonCorsair> that might be another option, i've never used it though
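The two-default-routes idea ph1zzle describes, as an iproute2 sketch (interface names and gateway addresses are invented; run as root):

```shell
ip route add default via 10.0.0.1 dev eth0 metric 10   # ADSL, preferred
ip route add default via 10.1.0.1 dev eth1 metric 20   # EV-DO, backup
```

One caveat: the kernel only falls back when the primary route actually disappears (e.g. the interface goes down). A dead upstream router with a healthy local link still needs a small watchdog that pings through eth0 and deletes the route on failure.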
* ph1zzle looks and goes to bed in a min, cheers
<ph1zzle> huh, that actually looks damned cool, thanks halcyonCorsair
<halcyonCorsair> ph1zzle: hope it works
<kraut> moin
<nealmcb> kraut: any idea where people get the idea of adding spaces before punctuation??
<kraut> because they want to annoy other people with it?
<nealmcb> hmm - looks like it is the default in french punctuation - I wonder when that started and why.  not much info here:  http://en.wikipedia.org/wiki/Punctuation_in_French
<radicall> hi all
<ScottK> Hellol
<radicall> has anyone installed Sun Application Server on Ubuntu Server 6.06?
<raky> net boot: client is set in bios to boot from network, but doesn't.  http://pastebin.com/d36396a38 this is my dhcpd.conf file
<sommer> raky: I've actually done a net boot, but don't you need to add an entry for your tftp server?
<raky> i'm following the howtoforge guide here http://www.howtoforge.com/ubuntu_pxe_install_server
* sommer checking
<sommer> raky: I think you need a next-server entry to point to your tftp server.  Check the last highlight box in the guide.
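sommer's suggestion, as a dhcpd.conf fragment (all addresses and the filename are placeholders from a typical PXE setup, not raky's actual values; next-server is the tftp server's address, filename the boot image):

```
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
  filename "pxelinux.0";
  next-server 192.168.1.2;
}
```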
<osmosis> what is the difference between xen i386 and xen PAE ?
<necrite_> hi all
<necrite_> i have too many "dummy internal connection" on my webserver
<necrite_> any idea why apache make this connections?
<osmosis> how do I make a xen initrd ?
<ajmitch> good morning
<raky> i keep getting a message that eth0 subnet is not configured, here's the paste:  http://pastebin.com/d36396a38
<soren> raky: Pointing out that it's a dhcp server config file might help..
<soren> raky: What's the IP of your eth0 ?
<mralphabet> soren: took me a minute too
<soren> mralphabet: Well, it's pretty obvious when I actually clicked the link, but until then..
<ajmitch> hello soren
<soren> ajmitch: Hey, Andrew.
* ajmitch wonders when the person who's modified authtool will actually contact the author about it :)
* mralphabet clicked the link and was expecting /etc/network/interfaces
<raky> 169.254.4.178 / 192.168.1.2 mralphabet
<soren> raky: Um... Which one is it?
<soren> raky: It can only have one.
<raky> 169.254.4.178
<ajmitch> sounds like a link-local address
<ajmitch> ie it's not been configured
<mralphabet> raky: pastebin your /etc/network/interfaces
<ajmitch> you had a mention of eth0:avahi in the dhcp configuration?
<soren> raky: If you have an error about eth0 not being configured properly, it often helps to provide information about said interface.
<raky> this is when configuring the dhcpd server
<raky> /etc/network/interfaces http://pastebin.com/d3c0769e1
<soren> raky: That config will not work.
<soren> raky: Your gateway needs to be on the same subnet as you.
<soren> Er..
<soren> Oh, it says dhcp.
<soren> That's a pretty strange interfaces file :)
<raky> lol, i need to understand more, another person said the same thing was wrong
<ajmitch> I hope this isn't on the same box that you're running a dhcp server on?
<ajmitch> the dhcp server needs to have a configured interface (statically assigned IP address) to hand out leases
<ajmitch> and you shouldn't assign 169.254 addresses :)
<mralphabet> raky: normal dhcp client http://pastebin.com/m740da274
<mralphabet> raky: are you expecting the DHCP server to assign the IP to its own interface?
<soren> ajmitch: Strictly speaking, the dhcp server can get an address from dhcp. It's a bit strange, but doable.
<ajmitch> soren: from itself?
<soren> ajmitch: No, from something else.
<soren> ajmitch: ...which will only hand out an IP to that machine.
<soren> ajmitch: It's probably not what he wants, but it's possible :)
<ajmitch> right, but in the common case it's not done :)
<soren> Indeed :)
<ajmitch> and assigning link-local addresses is still bad :)
<mralphabet> very
<soren> It doesn't get assigned as it's a dhcp interface.
<ajmitch> soren: no, but that was the apparant intent, at least
<soren> Right. I think we should stop trying to make sense of that interfaces file.
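For comparison, what ajmitch described earlier, a statically configured interface suitable for a machine that runs dhcpd itself, looks like this in /etc/network/interfaces (addresses are examples only):

```
auto eth0
iface eth0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.1
```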
* ajmitch might actually work on some code this weekend
<ajmitch> maybe if I look at authtool I'll get an email from the soc student who made some changes :)
#ubuntu-server 2007-08-31
<raky> is this error message the subnet error that soren has been talking about?  if it is, i've been going in completely the wrong direction!
<raky> The error was:
<raky> Address range 192.168.0.10 to 192.168.0.49, netmask 255.255.255.225 spans multiple subnets!
<raky> so the subnet is the addresses the dhcp-server gives out, not what the server is operating within or creating another subnet set?
<nijaba> raky, did you check your subnet with an ip range calculator ?
<nijaba> in this range you only have 30 addresses
<nijaba> and in your case you can go from 0 to 29
<nijaba> not over
<nijaba> unless you have made a mistake in the range and it should end with 255 instead of 225
<nijaba> actually 29 (from 1)
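The error raky pasted is reproducible by hand: 255.255.255.225 is not even a contiguous netmask, and with the presumably intended 255.255.255.224 (/27) each subnet is 32 addresses wide, so .10 and .49 really do land in different subnets, which is exactly what dhcpd complains about. The arithmetic, assuming the /27 mask:

```shell
# the network a host sits in is its last octet ANDed with the mask octet
echo "192.168.0.$(( 10 & 224 ))/27"   # 192.168.0.0/27
echo "192.168.0.$(( 49 & 224 ))/27"   # 192.168.0.32/27 -- a different subnet
```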
<osmosis> what was the last version of ubuntu that had php4 ?  feisty doesnt have it.
<nijaba> 6.10 has it I believe
<nijaba> let me check
<nijaba> confirmed
<nijaba> ttgtb...  good night
<rakyr> what is the diff between a netmask and subnet mask?
<mathiaz> rakyr: it's the same thing.
<Nafallo> morning
<Nafallo> mathiaz: around? :-)
<Nafallo> I'm sitting with a Dell PowerEdge 2850 trying to update the BIOS. Dell only supports RH and SuSE, and the RH BIN has something in the script called typeset. do we have a wrapper for that application somewhere in Ubuntu?
<mathiaz> Nafallo: is the script using zsh ?
<mathiaz> Nafallo: typeset seems to be part of zsh
<Nafallo> mathiaz: ah. thanks. I'll try that :-)
<Nafallo> mathiaz: cheers! :-)
<mathiaz> Nafallo: np
<Nafallo> *sighs*
<Nafallo> vps1# PATH=/bin:/usr/bin:/sbin:/usr/sbin zsh PE2850_BIOS_LX_A06.BIN
<Nafallo> PE2850_BIOS_LX_A06.BIN Error: Cannot find utilities on the system to execute
<Nafallo> package.
<Nafallo> Make sure the following utilities are in the path: tar gzip tail rm mkdir
<Nafallo> chmod ls basename wc
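A hedged correction to the typeset answer: typeset is a shell builtin in ksh and zsh, and bash accepts it too (as a synonym for declare). It is not a standalone program, which is why the Dell script fails under a plain POSIX sh but should run under bash or zsh:

```shell
# typeset is a builtin, not something found on $PATH
bash -c 'type typeset'                  # reports: typeset is a shell builtin
# -i declares an integer variable, so the assignment is evaluated
bash -c 'typeset -i n=41+1; echo $n'    # prints 42
```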
<rakyr> fopen failed getting a static address.  UDP socket: address in use.  do i need to open a port?  http://pastebin.com/d79ad7005
<rakyr> this is the interface file for eth0 for a pxe-install  http://pastebin.com/m234a809c
<rakyr> this is my dhcpd.conf file http://pastebin.com/d42d7c3b3
<osmosis> anyone know why xen is using a generic kernel?
<computerman> Hello I have a question about Apache server on Ubuntu
<computerman> I am trying to connect to transfer files from a Windows Vista box to a Ubuntu box.
<kraut> moin
<jdstrand> cool stuff...
<jdstrand> just upgraded to feisty from etch
<jdstrand> no, that was not a typo!
<soren> Sounds scary :)
<jdstrand> I wouldn't say it was 'smooth', but the fact I did it at all is pretty cool!
<gamble6x> and it worked?
<soren> jdstrand: Should you encounter any bugs, *please*, *please*, *please* remember to mention it in the bug report :)
<jdstrand> soren: of course
<jdstrand> soren: I wouldn't be reporting bugs for this-- this is a temporary situation
<jdstrand> gamble6x: it didn't 'work' in the traditional 'apt-get dist-upgrade' with no problems way
<jdstrand> gamble6x: manually installed dpkg and sysv* from feisty
<jdstrand> apt-get dist-upgrade
<Kamping_Kaiser> ugly
<jdstrand> deborphan several times
<jdstrand> downgraded through apt-pinning
<jdstrand> then ran a little script to see what was left to do manually
<jdstrand> all feisty packages now, and it rebooted and services are running.  :)
<jdstrand> don't try this at home kids
<jdstrand> ;)
<Kamping_Kaiser> no kidding :o
<jdstrand> really I did it all because I didn't want to go behind my desk and move the cables around to connect a keyboard and monitor.  Then I was really just curious if I could pull it off.
<jdstrand> I knew I could-- I have come across some pretty hosed systems and was able to fix them, so this was really not hard
<jdstrand> anyhoo... thought others might find it mildly entertaining
<gamble6x> wow, impressive.
<gamble6x> time consuming, and scary.  But impressive.
<jdstrand> it actually wasn't too scary-- this is not a production machine and it is sitting right next to me, so I could just go for it
<jdstrand> I should probably mention this is a server, not a desktop
* jdstrand doesn't want anyone to get too 'inpired' by this
<jdstrand> s/inpired/inspired/
<Lin> hi there all
<Lin> I will repost a message from ubuntu+1 here
<Lin> This isn't a REAL BUG. Is there any easy configurator to manage samba server configs? As: create an SMB user, set up a domain.
<Lin> cause shares-admin (smb) can add a share, ok. But the smb users are not created. Users think it is not working at all.  and there is no easy way to change the domain name or join an existing domain.
<_ruben> swat
<Lin> if there is no such tool, I will try to build one this weekend. trying with python-gtk
<_ruben> samba web admin tool
<Lin> _ruben: user tool.
<Lin> they are gnome integrated?
<Lin> I want something to user, not admin, just simple things as that you can fine in a windows host.
<Lin> s/fine/find/
<Lin> as admin I never used swat.
<Lin> just vi.
<_ruben> i dont think there's a separation between user and admin when it comes to configuring samba ..
<_ruben> then again, i dont use samba that much
<Lin> _ruben: in gnome we have shares-admin. Its possible to share something in SMB or NFS
<Lin> good. BUT! It dont create smb users or set a domain for authentication
<Lin> users share the directories but nobody can access.
<_ruben> heh, that's kinda sucky
<Lin> _ruben: A LOT
<Lin> that is the reason to ask for a gnome/user friendly tool.
<_ruben> but the only config tool i know of is swat, so im afraid i cant help you on this
<Lin> _ruben: I know swat too. I dont like but I know.
<soren> Lin: There is something.
<soren> Lin: system-config-samba.
<Lin> soren: good. where?
<soren> Lin: Not packaged for Ubuntu, I'm afraid.
<soren> Lin: Well, that's not entirely true.
<soren> Lin: I packaged it, but the upstream tarball was not properly redistributable, so it didn't get accepted.
<Lin> seems like a RH tool
<Lin> soren: why isn't it properly redistributable? license?
<jdstrand> soren: licensing problems?
<soren> It's sort of ridiculous, actually.
<Lin> is a RH tool. ;-)
<soren> It's all GPL, but they forgot to include the actual GPL license in the tarball.
<Lin> a tool like this isn't a big deal for developers. But can change lives for users
<Lin> I will not play WoW this weekend.
<Lin> I will try to build a free redistributable one.
<jdstrand> soren: you could fork it, and add the license!  :)
<Lin> AFK
<soren> Lin: If I were you, I wouldn't bother.
<soren> jdstrand: The lack of license makes that impossible. It's the license that grants me that right and since it's not there..
<soren> Lin: It's a question of bugging upstream until they add the license to the tarball and that's it.
<soren> Hang on, I'll find my package.
<jdstrand> soren: I wonder what license it would fall under then?  It says its GPL, but doesn't have the license, but the code is out there...
<ScottK> soren: For cases like that the archive will allow you to repack the tarball to add the license text as long as it's clear what the upstream intended.
<soren> ScottK: I know.
<ScottK> OK.
<soren> ScottK: I just never got around to that, I'm afraid.
<ScottK> Understand.
<Lin> soren: ok.
* jdstrand now gets it too
<soren> Hmm... There's a new upstream tarball.
<Lin> soren: maybe with the license.txt
<Lin> I will be back soon.. AFK
<soren> perhaps
<soren> Nope
* jdstrand wonders if that could get packaged, then it and samba could be used as the File Server task for ubuntu-server
<mathiaz> jdstrand: I'm sure. As the configuration app runs in a X environment.
<mathiaz> jdstrand: I'm *not* sure.
<jdstrand> mathiaz: I thought rh were pretty good about those being both cli and gui
<jdstrand> mathiaz: eg system-config-securitylevel is gui and system-config-securitylevel-tui is cli
* jdstrand hasn't looked at the code, and is only putting the idea out there
<mathiaz> jdstrand: hum.. yes - that's right.
<mathiaz> jdstrand: they usually have a curses version of their system-config-* apps.
<jdstrand> mathiaz: exactly
* mathiaz hasn't looked at the code either
<soren> No, system-config-samba needs X.
<jdstrand> bummer
* jdstrand is thinking a user could run it remotely via ssh, but this would hardly be optimal
<kbrooks> how do you update ubuntu server
<ScottK> There are a number of ways to do it.  sudo apt-get update && sudo apt-get upgrade would be one.
<raky> how do i find my network card driver?
<raky> what do i grep to get my hardware specs?
<mralphabet> dmesg
<mralphabet> /proc/pci
<osmosis> apache isnt processing my .php files, its pushing them to my browser for download. What to do ?
<sommer_> osmosis: do you have libapache2-mod-php5 installed?
<osmosis> sommer: affirmative.   libapache2-mod-php5                        5.2.1-0ubuntu1.4
<osmosis> here is my vhost config  http://dpaste.com/18275/
<sommer_> did you enable php?
<sommer_> with a2enmod
<sommer_> or create a symlink in /etc/apache2/mods-enabled?
<mralphabet> did you restart apache?
<sommer_> that too..heh
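The checklist sommer_ and mralphabet walk through, as commands (assuming the feisty-era libapache2-mod-php5 package, whose apache module name is php5):

```shell
sudo a2enmod php5                      # symlink php5.load/php5.conf into mods-enabled
sudo /etc/init.d/apache2 force-reload  # make apache pick the module up
```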
<osmosis> strangest thing...now my sites are all showing...  SSH-2.0-OpenSSH_4.3p2 Debian-8ubuntu1   when I try and access them via http. I dont know what gives.
<osmosis>  "Apache/2.2.3 (Ubuntu) DAV/2 SVN/1.4.3 mod_python/3.2.10 Python/2.5.1 PHP/5.2.1 (internal dummy connection)"
<osmosis> sommer
<osmosis> sommer_
<osmosis> what is   internal dummy connection
<sommer_> not sure
<sommer_> I've never seen that before, did you get the default example page when you installed apache?
<sommer_> what do you get if you go to http://localhost?
<osmosis> sommer_: yah ive been running a working apache server for months. not sure what change caused this.
<osmosis> sommer_:  if I go to localhost, i get a valid page...as it should be.
<osmosis> sommer_: maybe im port forwarding to the wrong place? to port 22 maybe ?
<sommer_> Mmmmm...so is it just php pages that are messed up?
<sommer_> you might try changing <VirtualHost *> to <VirtualHost *:80>
<osmosis> yup
<osmosis> thats what it was...my router was sending 80 -> 22
<sommer_> ah
<osmosis> thanks
<sommer_> np
<pteague_work> maybe it's a new php module?
<Lin> im back
<leonel>  was webmin removed from ubuntu ?
<ScottK> Yes
<ScottK> long long ago it was terminated with extreme prejudice.
<ScottK> !webmin | leonel
<ubotu> leonel: webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system
<leonel> ScottK:  never liked it, but a friend asked me for webmin for ubuntu
<leonel> thanks
<pteague_work> webmin is nifty, but it seems to mess up config files that you've manually altered
<ScottK> nifty isn't the word I would use, but whatever.
<pteague_work> nifty for people that enjoy a gui... it frustrated the hell out of me when i tried to use it
<jbrouhard> webmin is a NIGHTMARE
<jbrouhard> ebox looks like a much better alternative
<ScottK> ebox is the one that soren is working on getting into Ubuntu.
<jbrouhard> yup
<kshah> what combination of email daemons are being used generally?
<kshah> should i use postfix or exim4?
<nealmcb> kshah: the default is postfix, so that is a good bet unless you need features of exim4
<kshah> k
<nealmcb> kshah: what sort of deployment are you looking at?  need pop3, imap etc?
<kshah> imap
<kshah> I guess it doesn't really really matter, small company, not like I'm setting up anything for hundreds of people to use
<kshah> the postfix config is a beast
<nealmcb> dovecot-imapd seems popular
#ubuntu-server 2007-09-01
<nealmcb> kshah: `sudo dpkg-reconfigure postfix` does simple configurations
<kshah> the ubuntu server guide is bringing me through a crazy SMTP authentication config
<kshah> i'm messing with ssl keys and whatnot
<kshah> I did this before I believe.. (then the server aka laptop died)
<kbrooks> kshah, huh? server == lappy? wtf.....
<nealmcb> kshah: I have found that supporting email is getting increasingly tiresome over the years.  keeping up in the war on spam et al is a nightmare.  outsourcing to google apps or fastmail can be appealing
<kshah> that'll probably be the end solution once things get going
<kshah> kbrooks: not my choice
<kshah> but it resulted in a proper server being bought
<nealmcb> kbrooks: what do you mean?  built-in ups that lasts for hours!!  :-)
<kbrooks> nealmcb, ups?
<nealmcb> (aka battery)
<nealmcb> how long does the UPS on your server last :-)
<kbrooks> nealmcb, but laptops are prone to overheating due to their small size
<nealmcb> note the smiley faces.  but I bet they are a good choice in some circumstances....
<kshah> yeah in mexico
<Pyretic> it even has a built in console :)
<Anarch> How can I tell whether `cat /proc/cpuinfo` is telling me about 2 physical CPUs or about a hyperthreaded CPU?
<Kamping_Kaiser> look for an HT switch
<Kamping_Kaiser> *flag
<Kamping_Kaiser> iirc
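Expanding on the HT-flag hint: /proc/cpuinfo can answer Anarch's question directly by comparing logical processors against distinct physical packages, roughly like this:

```shell
grep -c '^processor' /proc/cpuinfo                    # logical CPUs seen by the kernel
grep '^physical id' /proc/cpuinfo | sort -u | wc -l   # distinct physical packages
grep -m1 '^flags' /proc/cpuinfo | grep -wo ht || echo "no ht flag"
```

Two logical CPUs sharing one physical id (with the ht flag set) means hyperthreading; two distinct physical ids means two real CPUs. On older single-socket kernels the physical id field may be absent entirely.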
<pteague> could there be an issue with trying to install ubuntu server on an intel?
<kshah> no bro
<justinwray> pteague: Well, it is built for i1368, so doubtful, what sort of error?
<kshah> what's your prob?
<justinwray> i386* Sorry for the typo
<pteague> i keep getting file corruption errors on deb packages when trying to install... the md5sum checks out & i've burned 3 CDs & all gave me the same error, but different deb files... the last i burned at 4x
<justinwray> And you do the CD check on boot, which comes back fine?
<justinwray> Run memtest on that system?
<pteague> cd check? guess not
<justinwray> When you boot, you have the option of checking the integrity of the CD.
<pteague> k, checking disk
<justinwray> Okay.
<pteague> console-tools_0.2.3dbs-65ubuntu3_i386.deb failed the md5 checksum... why would the iso pass an md5 check, but then would corrupt the disk in different places when it's burned?
<justinwray> Run memtest on that system?
<pteague> ok, apparently i can't even burn the small net installer for debian without it being corrupt
<kbrooks> i have a question. i would like to monitor everything (only silently) that is sent over suspect ports. For example, if i type
<kbrooks> telnet mail.server.com 25
<kbrooks> i would like to see the normal stuff sent by the server, except logged
<kbrooks> how would i do this?
<soren> tcpdump
<kbrooks> soren, What if a user has finished sending spam to the mail server when I run tcpdump?
<soren> what?
<Kamping_Kaiser> kbrooks, look in the mail server logs?
<kbrooks> "suspect ports"
<soren> kbrooks: Are you asking if Ubuntu saves all your network traffic so that you can look at it later?
<kbrooks> soren, No. It doesn't. I would like it to.
<soren> kbrooks: tcpdump
<soren> kbrooks: But you need to start it before it does anything..
<kbrooks> soren, Does tcpdump log user information like who connected to a particular server?
<soren> kbrooks: It logs plain network traffic. You can look at it afterwards (e.g. with wireshark or something).
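For the kind of after-the-fact inspection soren mentions, a typical invocation writes raw SMTP packets to a capture file for later reading in wireshark. This is a sketch, not from the log itself: it requires root, and the interface name `eth0` and output path are assumptions.

```shell
# Log every packet to/from TCP port 25 on eth0, full packet length (-s 0),
# into a pcap file that wireshark or "tcpdump -r" can open later.
tcpdump -i eth0 -s 0 -w /var/log/smtp.pcap tcp port 25
```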
<kbrooks> soren, what about the user info? Please don't evade questions
<soren> kbrooks: Why would I be evading questions? I'm trying my best to answer your rather odd questions.
<soren> kbrooks: First you call it suspect ports, then it's telnet, then it's spam.. What are you trying to do
<soren> ?
<soren> kbrooks: If you want answers, you've got to ask proper questions.
<kbrooks> soren, port 25 is a suspect port because it can be used to send spam.
<soren> kbrooks: Well, I can choose to send nuclear missile launch codes over port 80 if I choose to. That's beside the point.
<soren> kbrooks: What are you trying to do?
<soren> kbrooks: Please don't evade the question.
<soren> :)
<kbrooks> soren, monitor specific inbound and outbound ports for questionable content, and pinpoint the user who is sending or receiving the content at which port so that i can enforce rules
<soren> kbrooks: You seriously want to sit around and stare at network traffic all day?
<kbrooks> soren, No. hence the log request.
<infinity> kbrooks: You realise that going from packet filtering to spam filtering is far from trivial, right?
<soren> kbrooks: Ok, so you want to log it all day and stare at it all night. That's not much different.
<infinity> ie: The contents of individual packets aren't all that interesting.
<soren> kbrooks: Is it at all possible that the question you want to ask is: "How do I avoid spam?"
<soren> kbrooks: If so, please ask that instead.
<infinity> And knowing when a message begins and ends, when you didn't initiate the connection, is.  Special.
<kbrooks> infinity, thats why there is  wireshark.
<infinity> Or, "how do I avoid untrusted users sending spam from my machine?"
<kbrooks> soren, um, i think i want to ask that yes
<infinity> To which the answer should almost surely be "don't let untrusted users use your machine".
<infinity> Fighting spam (inbound or outbound) at the packet level seems very much the wrong way to go about things.
<soren> kbrooks: Alright. Is this spam you want to avoid coming into your machine or going out from it? That seems to be a bit unclear.
<kbrooks> soren, going out from my machine by a user.
<soren> kbrooks: Wack him with a stick?
<kbrooks> soren, how do i whack him when i don't know if he/she hasnt sent spam?
<infinity> (We return to my "why are you allowing untrusted users access to your machine?" question)
<kbrooks> because i am doing a free shell/web hosting service.
<infinity> Ngh.
<infinity> Alright.
<soren> Do you use it as a mail server?
<infinity> Block 25 outbound from all but the local MTA, forcing all users to send mail via the MTA, and do your spam filtering at the MTA level.
<infinity> (filtering, monitoring, logging, whatever)
<kbrooks> infinity, i would do that, and i will - in a second - but i don't have a local MTA. i would like one so that I can support checking e-mail via the CLI.
<soren> kbrooks: You don't need an MTA to check e-mail from the cli.
<soren> kbrooks: And if you don't have an MTA, just block outgoing connections to port 25.
<kbrooks> OK.
<kbrooks> thank you for breaking down my question earlier. Now, how do i block outgoing connections to port 25?
<soren> iptables -I OUTPUT -j DROP -p tcp --dport 25
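A sketch combining soren's rule with infinity's earlier suggestion (block everyone's direct outbound SMTP, but let a local MTA through). Requires root; the MTA user name `postfix` is an assumption, and since kbrooks has no local MTA, the single DROP rule alone is enough in his case.

```shell
# Drop all outbound SMTP from this host...
iptables -I OUTPUT -p tcp --dport 25 -j DROP
# ...then (only if a local MTA exists) exempt its user so relayed mail
# still flows. Inserted second with -I, so it is checked before the DROP.
iptables -I OUTPUT -p tcp --dport 25 -m owner --uid-owner postfix -j ACCEPT
```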
<kbrooks> also, ulimit didnt seem to work to stop fork bombs. i set the ulimit thru /etc/security/limits.conf
<kbrooks> and logged out and in
<kbrooks> but when i ran :(){ :|:& };:
<kbrooks> the ulimit didnt kick in
<soren> In the future, asking questions like: "I want to achieve X. I've thought doing that by doing B and C, but have run into problem D..."   makes everything much easier for everyone.
<soren> That would have saved us the first 15 minutes of this conversation :)
<infinity> @untrusted hard nproc 20
<infinity> ^^ something like that?
<kbrooks> infinity, where @untrusted  means?
<infinity> kbrooks: A UNIX group that you add your untrusted users to.
<infinity> (was just an example)
<kbrooks> *                hard    nproc           4000 # i did this, but this doesnt work
<infinity> I'm sure it works fine.
<infinity> 4000 is a lot of forks.
<soren> 4000 is enough for a fork bomb to be quite annoying.
<kbrooks> infinity, then why don't  i think it works fine?
<infinity> Because it's bringing the machine to its knees before it gets to 4000?
<kbrooks> ok.
<infinity> Have you ever had 4000 processes running on that machine?
<kbrooks> no
<infinity> Exactly.
<infinity> Your users probably don't need more than 10 processes at a time, really. :)
<soren> For the kind of users you're looking at, 20 should be plenty.
<kbrooks> i set that limit, now i logout and in?
<infinity> You don't want that as a global limit, though.
<infinity> It'll choke apache and such, as well.
<kbrooks> infinity, fair enough....
<kbrooks> k good
<kbrooks> works.
<kbrooks> thanks!
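The limit infinity sketched lives in /etc/security/limits.conf; scoping it to a group avoids the global `*` entry that, as noted above, would also choke apache and other daemons. The group name `untrusted` and the value 20 are illustrative:

```
# /etc/security/limits.conf (illustrative entry)
# Cap processes for members of the "untrusted" group only; daemons and
# admins are unaffected. Users must log out and back in to pick it up.
@untrusted    hard    nproc    20
```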
<kbrooks> sigh
<kbrooks> okay. so in tcpdump, i see something about a IRC server IP
<kbrooks> and i get suspicious
<kbrooks> so i use netstat and notice that a user is possibly connected to the irc server (only a suspicion so far, dont know yet)
<kbrooks> so i attempt to connect to the server and get kicked off bc of clones.
<kbrooks> the lines in netstat i'm talking about are
<kbrooks> tcp        0      0 192.168.0.101:4424      8.7.233.44:ircd         ESTABLISHED
<kbrooks> tcp        0      0 192.168.0.101:3367      ffneural.net:13333      ESTABLISHED
<kbrooks> so i type something like, ps aux and notice that the user in question runs a eggdrop
<kbrooks> and THEN i sudo as the user, cd to the configuration, read it, and notice that....
<kbrooks> the bot is named fswh, and it connects to a hub
<kbrooks> and the irc server ip
<kbrooks> i dont know what the hub is for, maybe exploits? i dunno.
<kbrooks> either way, i suspect abuse, and i will block the ports (except for IRC, which I will disconnect and talk to the irc server admin about) mentioned in the config file
* kbrooks scrolls up
<raky> i have to make a decision on whether to install xen or vmware on my LUG server.  its a compaq proliant with 2 P3s and 5x 19.2GB drives with 1 gig ram
<raky> i have seen the virt. techniques - para vs. full virt, and of course i'm limited by the server.
<raky> i just want to virtualize a linux distro or two
<raky> personally, i would choose the vm software that looks like the market will choose, and that's probably vmware, but things change so often
<raky> has feisty been fixed for xen and vmware yet?
<ivoks> fixed?
<ivoks> i would choose xen...
<raky> i've seen tutorials on fixing ubuntu for vmware. i didn't know if xen was the same
<ivoks> um... what's there to fix?
<raky> just like 5 cli edits
<raky> for vmware, that is
<ivoks> do you have link for that tutorial?
<raky> in a bit
<raky> sorry, can't find it.  i think the edits were before vmware was in the ubuntu repos
<ivoks> vmware isn't in ubuntu repo
<raky> ivoks, so for a noob, learning xen is not a whole lot more time consuming than learning vmware?
<ivoks> eh... vmware is probably a lot easier for a beginner
<raky> i have googled for plenty of tutorials over the past few weeks.  is vmware easier because of the GUI, or the virtualization techniques?
<ivoks> it's easier cause of the gui, of course
<raky> hehe, of course, ok
<raky> i have like 3 real days or 2.5 days worth of hours to get the vmsoftware working.
<ivoks> http://www.ubuntugeek.com/how-to-install-vmware-server-from-canonical-commercial-repository-in-ubuntu-feisty.html
<raky> ok, thanks, haha
<ivoks> no fixing here
<ivoks> i guess you won't be using vi for editing sources.list :D
<raky> lol
<ivoks> bbl bye
<steve____> ok
<steve____> something is really, REALLY messed up on my server.
<steve____> whenever it boots up, it doesnt have a net connection
<steve____> the cat5 is plugged in, its configured for a static ip
<steve____> but ifconfig eth0 doesnt show any ip
<steve____> /etc/init.d/networking restart doesnt do anything
<steve____> it gets an ip if i say ifup eth0
<steve____> but its still not connected to the net even then
<steve____> it says network is unreachable
<steve____> .......
<steve____> ok scratch that
<steve____> doing ifup eth0 makes it work
<steve____> but why doesnt it do that from the start?
<steve____> brb
<stiv2k> ok
<stiv2k> ughhhhhh
<stiv2k> anyone here???
<stiv2k> ok
<stiv2k> sorry my server is going haywire
<stiv2k> please help
<kbrooks> stiv2k, ask
<stiv2k> kbrooks: i asked already
<stiv2k> ill ask it again
<stiv2k> :P
<stiv2k> hokay, so we just had a power outage at my apartment here
<stiv2k> and when i booted my server back up, eth0 is down by default
<stiv2k> and every time when i boot it up
<stiv2k> eth0 is down
<stiv2k> /etc/init.d/networking restart does nothing
<stiv2k> i can only bring it up by saying ifup eth0
<stiv2k> but shouldn't it be up when the machine boots
<stiv2k> i dont want to have to manually bring it up every time
<kbrooks> pastebin your /etc/network/interfaces file
<stiv2k> ok
<stiv2k> kbrooks: its only 4 lines
<stiv2k> ill paste it here if its ok
<kbrooks> ok
<kbrooks> go on
<stiv2k> well i have these 2
<stiv2k> auto lo
<stiv2k> iface lo inet loopback
<stiv2k> iface eth0 inet static
<stiv2k> address 192.168.1.10
<stiv2k> netmask 255.255.255.0
<stiv2k> gateway 192.168.1.1
<kbrooks> stiv2k, notice any differences?
<stiv2k> auto eth0 ?
<kbrooks> correct
<stiv2k> yikes
<kbrooks> add that and youre off to go
<stiv2k> ok
<stiv2k> straight
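Putting the exchange together, a working /etc/network/interfaces for this box reads (addresses as given above):

```
auto lo
iface lo inet loopback

# "auto eth0" is the missing line: without it, the interface only comes
# up when ifup is run by hand, not at boot.
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```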
<stiv2k> kbrooks: one more thing im not sure if you can help me w/ this
<kbrooks> ask
<stiv2k> my bopm wont start with this error
<stiv2k> [Sep 01 19:07:47 2007]  MAIN -> Error opening /var/run/bopm/bopm.pid: No such file or directory
<ScottK> Does /var/run/bopm exist?
<stiv2k> no
<kbrooks> stiv2k, create it.
<ScottK> mkdir /var/run/bopm and try again
<stiv2k> but i was under the impression that it gets created by the program
<ScottK> Sort of
<ScottK> /var/run is a tmpfs in Ubuntu so it needs to get tested for at every start, not just install
<ScottK> Is bopm from the Ubuntu repositories?
<stiv2k> oh
<stiv2k> um
<stiv2k> yeah
<ScottK> It's not a tmpfs in Debian by default, so a lot of programs miss this.
<ScottK> Assuming recreating the dir works, please file a bug against the package.
<stiv2k> ok
<stiv2k> wtf
<stiv2k> it works if i start bopm directly with `sudo bopm -dd`
<stiv2k> but it doesnt work if i call the script
<stiv2k> sudo /etc/init.d/bopm start
<ScottK> Right, because the script tries to create the PID file and can't.  When you call it directly you don't do that.
<stiv2k> ScottK: what should i do
<ScottK> Did the init script work when you recreated the dir?
<stiv2k> i dont know i have no way of looking at the debug output
<stiv2k> when using the init script
<stiv2k> -rw-r--r-- 1 root root 4 2007-09-01 15:12 /var/run/bopm/bopm.pid
<stiv2k> it exists now though
<ScottK> ps -AF|grep bopm and see if it's running
<stiv2k> steve@galatea:~$ ps -AF|grep bopm
<stiv2k> steve     4701  4589  0   720   748   0 15:15 pts/2    00:00:00 grep bopm
<kbrooks> bbl
<ScottK> look in the PID file for the process number and see if that process is running.
<ScottK> OK.  Gotta run for a bit too.
<stiv2k> ScottK: it's not
<ScottK> That's a separate problem then.  Definitely file a bug on var/run/bopm not getting recreated by the init.
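The fix ScottK is describing usually takes the form of a small guard at the top of the init script's start action. This is a runnable sketch, not the actual bopm script: the path is redirected to /tmp so it can run unprivileged, whereas the real script would use /var/run/bopm and chown it to the daemon user.

```shell
#!/bin/sh
# /var/run is a tmpfs on Ubuntu, so a PID directory created at package
# install time vanishes on reboot; recreate it on every start.
RUNDIR="${RUNDIR:-/tmp/demo-bopm-run}"   # /var/run/bopm in the real script

if [ ! -d "$RUNDIR" ]; then
    mkdir -p "$RUNDIR"
    # chown bopm:bopm "$RUNDIR"   # give the directory to the daemon user
fi
echo "pid dir ready: $RUNDIR"
```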
<stiv2k> ScottK: file it to who?
<ScottK> Not sure how much help I can be beyond that.  I don't even know what bopm is....
<ScottK> Launchpad
<stiv2k> ScottK: it was working fine before my server lost power
<stiv2k> ok
<ScottK> Gotta run.
<stiv2k> cya
<danp> does feisty preseeding support software RAID?
#ubuntu-server 2007-09-02
<marktangowaltz> I am needing assistance with using ubuntu 7.04 as a file server for my two Mac G4's
<m12> hi all
<m12> i need to enable TUN/TAP with modconf but i dont know how to install it
<soren> m12: For hat?
<soren> m12: er... for what?
<m12> for hotspot
<m12> when i do modconf, it dont list TUN under KERNEL/DRIVERS/NET/...
<soren> the module is already there.
<soren> Just add it to /etc/modules
<m12> sudo modprobe tun ?
<soren> m12: Just tun
<soren> m12: Look in the file. It should all make sense then.
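soren's two steps in command form (root required); the `tun` module ships with the stock Ubuntu kernel, so nothing needs installing:

```shell
# Load the module now...
modprobe tun
# ...and list it in /etc/modules so it is loaded again at every boot.
echo tun >> /etc/modules
```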
<m12> i have restarted server and lost connection to it :(
<m12> cant get to it for 1h or more :/
<soren> Well, loading tun will not do that. You must have done something else.
<Kamping_Kaiser> could i build feisty+gutsy chroots using debian etch? (anyone here tried it?)
<m12> will load it for 1h then i check, tnx for help soren :)
<soren> Kamping_Kaiser: That shouldn't be a problem.
<Kamping_Kaiser> soren, i know older versions of both systems (sarge and dapper) cant build 'forward' does etch do something special?
<Kamping_Kaiser> s/etch/debootstrap
<soren> Kamping_Kaiser: You can just grab the debootstrap package from gutsy, install it and use that.
<Kamping_Kaiser> sounds bad. i'll try it out Some Time
<kraut> moin
<Kamping_Kaiser> hi
<soren> Kamping_Kaiser: No, it's really no problem.
<soren> Kamping_Kaiser: The difference between the one in Etch and the one in Gutsy is just gutsy and feisty install scripts, iirc.
<disposable> i followed a chapter in a book "SAMBA by example" to set up a simple anonymous server. my smb.conf is at http://pastebin.com/d64b6c38 The problem is, i am now getting cca 800KBps-2MBps throughput with samba on a 100Mbit network. It's definitely not a network problem, because NFS runs at 10-11MBps. Can somebody tell me which options i need to put in?
<mikubuntu> well, i'm sorry i din't find this room b4 i stayed up all nite and going crosseyed already, *but* i'll be back when everybodys a little more chatty and i can see and/or think straight.   arrrrrgggghhhhhh. nite guys.
#ubuntu-server 2008-08-25
<Dedi> anyone has a good postfix reject_rbl_client config for recipient/helo/sender restriction?
<luckyone> I have set up a samba share on my ubuntu server. My windows pc can connect to it just fine. However, my ubuntu desktop can't seem to connect/mount it
<fujin> hiya, anyone here familiar with reprepro?
<pschulz01> Greetings..
<vk5foss> allo
<pschulz01> Is there a 'pdf' for the 'Ubuntu Server Guide anywhere?
<nxvl> i think it is
<nxvl> somewhere
<nxvl> not sure where
<azteech> pschulz01 - try this link dl2.foss-id.web.id/dokumen/ubuntu/serverguide.pdf
<azteech> or this link ... http://www.linuxinet.com/free-linux-ebooks/download-free-pdf-ebooks-ubuntu-server-guide.html
<pschulz01> azteech: Ta.
<pschulz01> azteech: The first link seems a leetle slow.
<azteech> all the pdf's i have tried are slow ... but then again, i am on dial ...
<azteech> if you want to find another one ... google for it and use pdf+ubuntu server guide ....
<pschulz01> azteech: Second link was quick.. done..
<azteech> k
<pschulz01> azteech: Looks a little dated as well.
<pschulz01> Copyright 2006
<azteech> well the ubuntu server doc site only shows the 5.10 and 6.10 versions ... of the doc ... so the pdf's might be as well ...
<azteech> the 8.04 version I found a little while ago is still in draft form ..
<azteech> and as such, probably not a pdf out there for it ..
<pschulz01> Ta. Looks like it should submit some updates :-)
<azteech> agreed ... and am sure they will when draft doc is done ..
<Ontolog> Is there an ImageMagick page that doesn't rely on X?
<jgiorgi> i am planning on install ubuntu server on my desktop for testing purposes, i understand that for security no GUI is installed by default, i agree that is a great idea for security on most servers but is there a way (easy way) to install a gui and it be all set up like it was default so i dont have to get another computer
<jgiorgi> i was guessing sudo apt-get install gnome
<vk5foss> thats an #ubunttu question (the answer is to run tasksel and chose the desktop task)
<vk5foss> * #ubuntu
<jgiorgi> lol sorry
<_ruben> or just use some virtualization product
<kraut> moin
<ghaleb> hello all, I have installed a pptp VPN server, but my clients suffer from very slow connections, is it necessary for the VPN server to be a gateway ?
<uvirtbot> New bug: #261066 in mysql-dfsg-5.0 (main) "mysql-server  5.0.67-0ubuntu1 not starts" [Undecided,New] https://launchpad.net/bugs/261066
<Koryo> Can anyone in here help me setup BIRD please? i have downloaded and installed it through KPackage Manager, but i am unsure of how to proceed from here.
<Adakos> hey folks. IM putting together a server, does anyone have a list of compatible motherboards, or recommendations on a relatively cheap (entry level) motherboard?
<uvirtbot> New bug: #260291 in openvpn (universe) "Network Manager 0.7, openvpn, VPN Connection Failed" [High,Confirmed] https://launchpad.net/bugs/260291
<jcastro> kirkland: update-motd is great, it'd make an excellent little blog post subject
<gsalah> hello all, I really have a big problem, I posted many times, I have a very slow pptp vpn connection and don't know why, any leading points?
<zul> Koon: ping
<Koon> zul: pong
<zul> Koon: does this look sane to you: http://pastebin.com/d38ca8862
<zul> its for the openvpn/network manager fix
 * Koon looks
<Koon> zul: i don't know that much rc7, but that looks sane, yes
<zul> coolio
<zul> thanks
<Koon> (haven't tested it so ymmv) :)
<zul> Koon: I already have
<Koon> ok cool ;)
<gsalah__> hello all, I really have a big problem, I posted many times, I have a very slow pptp vpn connection and don't know why, any leading points?
<Pupeno> Hello.
<Pupeno> Anyone running a mail server with imap and server-side filtering?
<uvirtbot> New bug: #261198 in apache2 (main) "Add UFW profile integration with apache2" [Undecided,Confirmed] https://launchpad.net/bugs/261198
<soren> Pupeno: Yes.
<ChrisGilmore> hey, I've installed Ubuntu Server through parallels on my mac successfully, but how would I access the localhost through my mac when it is on the Ubuntu server?
<LMJ> is anyone tried mod_authnz_external with Apache 2.2 ?
<LMJ> Got a 500 error
<themoleste> i'm not sure if i should ask here, or main chan
<themoleste> i'm having some issues making a minimalist desktop for a mpc
<themoleste> htpc
<themoleste> the software I'm using is xbmc and I set up a dual boot between ubuntu server and ubuntu desktop for testing
<themoleste> on desktop, the program works fine, on server it is very slow and xorg uses 100% cpu
<themoleste> so, i'm missing something on server, that is not included with apt-get install xorg
<themoleste> also, i'm trying to avoid a window manager, so while installing gdm might fix it, I would rather not
<themoleste> if anyone is here, i found out the difference... on the desktop glxinfo says it is using direct rendering... on server it is indirect
<sommer> jdstrand: I updated the ufw sections this weekend, wasn't 100% sure I understood the --add-new option for the application integration
<jdstrand> sommer: you are not the only person to have said that, so I clearly need to be more, well, clear
<jdstrand> sommer: if you don't mind entertaining me-- how do you understand it?
<jdstrand> (it may help me to clarify it if I know how people are currently interpreting it)
<sommer> jdstrand: if you add a port rule to an application profile it won't be enabled until you specifically enable it, unless you set the option to default
<sommer> jdstrand: or at least that's close to my interpretation :)
<jdstrand> sommer: yes, if I understand your phrasing correctly :)
<jdstrand> sommer: adding a profile in /etc/ufw/applications.d simply makes the profile available to use when adding rules
<jdstrand> sommer: eg 'ufw allow Apache'
<leonel> ScottK: https://bugs.edge.launchpad.net/ubuntu/+source/clamav/+bug/261249
<uvirtbot> Launchpad bug 261249 in clamav "MIR report for CLAMAV" [Undecided,New]
<jdstrand> sommer: the 'app update' command allows a maintainer or admin to change the profile, and then update all referenced rules with one command
<jdstrand> sommer: eg, user does 'ufw allow WebApp', which simply opens port 8080
<jdstrand> sommer: admin then changes the profile to have '8080,8081'
<jdstrand> sommer: a user can then run 'ufw app update WebApp' and now both 8080 and 8081 are open
<sommer> jdstrand: ah, that makes sense, and if they've set --add-new to default they don't need to do the "app update" part?
<jdstrand> sommer: a package maintainer can take advantage of this for long-term maintainablity
<jdstrand> sommer: exactly
<jdstrand> sommer: 'update --add-new' does all that 'update' does, and it will add a new rule to the firewall depending on how DEFAULT_APPLICATION_POLICY is set
<jdstrand> (which is skip by default)
<sommer> jdstrand: okay makes sense now, I'll double check what I wrote... probably needs clarified, heh
<jdstrand> sommer: when the DEFAULT_APPLICATION_POLICY is skip, 'app update' and 'app update --add-new' are equivalent
<sommer> jdstrand: gotcha
<jdstrand> sommer: the nice thing about this is for example samba
<jdstrand> sommer: it might have a profile that is 137,138/udp|139,445/tcp
<jdstrand> sommer: if a user did:
<jdstrand> ufw allow to any app Samba from 192.168.0.0/16
<jdstrand> ufw allow to any app Samba from 10.0.0.3
<jdstrand> then if you change the profile, you only have to run 1 command, as opposed to (at least) 8
<jdstrand> (4 ports * 2 rules)
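A profile of the kind being discussed is just a small INI-style file under /etc/ufw/applications.d/. This sketch uses the Samba ports from jdstrand's example; the file name, title, and description strings are illustrative:

```
# /etc/ufw/applications.d/samba (illustrative)
[Samba]
title=Samba
description=SMB/CIFS file and printer sharing
ports=137,138/udp|139,445/tcp
```

With that in place, `ufw allow to any app Samba from 192.168.0.0/16` covers all four ports in a single rule.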
<sommer> cool, so are any profiles currently shipping?  I believe I saw a bug for apache...
<sommer> jdstrand: something to use as an example anyway
<jdstrand> sommer: a community member has approached me and said he'd work on apache and openssh initially
<Pupeno> soren: What do you use for that?
<jdstrand> sommer: I might also add that the average user will likely only need to use 'app list', 'app info <profile>' and then references the profiles in the rules
<jdstrand> sommer: the 'app update' command is really to make sure there is some way a maintainer can update the profiles sanely
<jdstrand> though, an admin may find it useful...
<sommer> jdstrand: for new profiles is the procedure to file a bug against the application?
<soren> Pupeno: Dovecot.
<jdstrand> sommer: yes, you can see UbuntuFirewall for developing them, and ServerTeam/Roadmap for targeted applications
<Pupeno> soren: and postfix I presume, right?
<sommer> jdstrand: awesome, I'll update the guide this evening, thanks
<soren> Pupeno: Postfix is somewhat involved, yes. IMAP and server side filtering is all dovecot, though.
<jdstrand> sommer: thank you! :)
<Pupeno> soren: I'm reading Dovecot page at the moment. How do you perform and/or set the server-side filtering?
<ScottK> leonel: Thanks.
<soren> Pupeno: I tell postfix to use Dovecot's LDA. Dovecot then reads a sieve script from my home directory.
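A minimal example of the sieve script soren refers to, as read by Dovecot's LDA from the user's home directory (commonly ~/.dovecot.sieve); the header match and folder name here are made up:

```
# Illustrative ~/.dovecot.sieve
require ["fileinto"];

# File list traffic into its own folder; everything else lands in INBOX.
if header :contains "List-Id" "ubuntu-server" {
    fileinto "lists.ubuntu-server";
}
```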
<didrocks> jdstrand: are you there?
<jdstrand> didrocks: yes, hi!
<didrocks> hi :)
<didrocks> I think the first debdiff is ready
<jdstrand> didrocks: for apache? I haven't looked at it yet, but saw it come in
<didrocks> I just have one question before: is it delibaretly for ufw that we do not remove the rules after purging the package ?
<didrocks> (I haven't uploaded it at the moment :))
<jdstrand> didrocks: well, we remove the profile, but don't remove the rule if the user had DEFAULT_APPLICATION_POLICY set to something other than skip and a rule was automatically added
<didrocks> yes, that's what I say
<didrocks> /say/saw
<didrocks> this one is located on /var/lib/ufw/user.rules apparently :)
<jdstrand> didrocks: oh, I thought we were talking about application rules
<didrocks> no no, firewall rules, sorry for the misleading term :)
<jdstrand> didrocks: so /var/lib/ufw is not deleted on purge?
<didrocks> yes, the rule corresponding to the removed profile is not removed (as described in https://wiki.ubuntu.com/UbuntuFirewall#Integrating%20UFW%20with%20Packages)
<jdstrand> didrocks: I think we are talking about different things
<didrocks> hum, I am speaking about rules visible through ufw status
<jdstrand> didrocks: on apt-get remove --purge ufw, what are you expecting to happen, and what is happening?
<Goosemoose> hi. does anyone have a good hardy preseed file. especially one that joins an AD domain?
<didrocks> jdstrand: on --purge, the profile (so, the file in /etc/ufw/applications.d/... is removed), but the firewall rule (which is shown by "ufw status") is not purged
<didrocks> and I don't find it logical (for me, both has to be removed)
<jdstrand> didrocks: ok-- now I know what you are talking about
<jdstrand> didrocks: 'apt-get remove --purge apache2.2-common' removes the profile, but does not update the running firewall
<jdstrand> didrocks: that is intentional
<didrocks> why?
<jdstrand> didrocks: the stance I took is that if the administrator added rules referencing a profile, then the admin should update the firewall when that package is removed
<jdstrand> didrocks: I was trying to avoid ufw making policy decisions on its own, which it could very easily do wrong
<didrocks> yes, that's understandable
<jdstrand> didrocks: it's also possible that a profile might not be package specific for some people/distributions
<didrocks> that makes sense. ok, that was my last question I think. I have just uploaded the apache2 debdiff. It builds well and I tried it on my intrepid VM (playing with ufw ;))
<jdstrand> didrocks: eg, profiles could be distributed by someone in a 'ufw-profiles' package, that has things such as 'Web Server'.
<didrocks> yes, some common profiles between packages
<didrocks> jdstrand: thanks a lot :)
<jdstrand> then they install apache, do "ufw allow 'Web Server'", then remove the ufw-profiles package and apache is no longer accessible
<jdstrand> (if we automatically removed the rules on profile removal)
<didrocks> yes yes, I understand
<didrocks> regarding terms
<didrocks> rules is for firewall
<didrocks> profiles for application-port association
<didrocks> is it correct?
<jdstrand> didrocks: yes
<jdstrand> didrocks: rules as in 'iptables rules'
<didrocks> jdstrand: that's what I was assuming :)
<didrocks> it's a really great tool as iptables configuration is very difficult the first time… (then, it seems easier, but well ^^)
<jdstrand> thanks! :)
<didrocks> jdstrand: when you will have time (after your move, for instance ;)). Keep me in touch about this package and when it will be ok (maybe some changes has to be done), I will try to put some efforts on other packages
<didrocks> it is very straightforward
<jdstrand> didrocks: I sure will-- I will definitely look at it this week
<jdstrand> didrocks: thanks again for your work!
<jdstrand> :)
<didrocks> jdstrand: you're welcome :)
<didrocks> have a good evening (or day) :)
<jdstrand> you too
<didrocks> thanks
<kirkland> emgent: thanks ;-)
<kirkland> sommer: thanks ;-)
<didrocks> jdstrand: someone uploaded ubuntu1 during this time, I have to update my package :/
<didrocks> kees is the guilty ^^
<kees> didrocks: which package?
<didrocks> apache2 :)
<kees> oh, did we just collide?
<kees> I _just_ uploaded an ubuntu1 version :(
<didrocks> yes, but no problem, I will provide a new version :)
<didrocks> (and hate you secretly :))
<kees> okay, cool, sorry about that.  I didn't realize one was in the works.
<kees> hehe
<kees> mathiaz: hrm, slapd isn't installable for me under intrepid.  it yells about existing directories.
<nxvl> kirkland: yay for superpowers! Congratulations!
<mathiaz> kees: hm - which ones ?
<PumpkinPie> is anyone using a lot of bandwidth ?
<nxvl> PumpkinPie: what for?
<kees> mathiaz: http://pastebin.osuosl.org/21869
<mathiaz> kees: seems like an issue with the rootpassword - can you share the root password ?
<mathiaz> kees: or look into /tmp/slapd_init.ldif.SAhzi17406 ?
<kees> mathiaz: er, there's no root password (running in a schroot)
<kees> 105 lines in that tmp file... what am I looking for?
 * kees starts over in his schroot
<mathiaz> kees: olcRootPW
<mathiaz> kees: no root password -> that's why
<mathiaz> kees: there is probably a line similar to this: olcRootPW:
<uvirtbot> New bug: #261274 in redhat-cluster-suite (main) "clustat(8) truncates columnar output when stdout is not a TTY" [Undecided,New] https://launchpad.net/bugs/261274
<kees> mathiaz:
<kees> olcRootDN: cn=admin,cn=config
<kees> olcRootPW:
<kees> # olcRootDN: cn=admin,dc=outflux,dc=net
<kees> # olcRootPW:
<kees> what should I do to work around this?
<mathiaz> kees: can you set a root password ?
<mathiaz> kees: or is it part of an automated install ?
<kees> this is an automated install.
<kees>         db_get slapd/internal/adminpw
<kees>         adminpass=$(echo $RET | sed -e 's|/|\\/|g')
<kees> that appears to be the place it's getting that password from?
<mathiaz> kees: correct - slapd/internal/adminpw is the debconf template
<kees> mathiaz: so this is a debconf prompting level problem?
<mathiaz> kees: hm - at which level are you running debconf ?
<mathiaz> kees: the password is prompted at level high
<kees> mathiaz: critical.  ;)
<kees> mk-sbuild-lv uses:
<kees> echo set debconf/frontend Noninteractive | debconf-communicate
<kees> echo set debconf/priority critical | debconf-communicate
<mathiaz> kees: right
<mathiaz> kees: so you
<mathiaz> kees: so you'd have to preseed the admin pw value
<_jpierre> Guys I have setup DHCP, but my clients can't access the internet. Can anyone help
<mathiaz> kees: that should work
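Preseeding the template mathiaz names can be sketched like this. The password is a placeholder, and an automated install would generate a random one (per kees' point) before feeding the file to debconf-set-selections in the target root:

```shell
# Build a preseed line for slapd's internal admin password.
# "s3cret" is a placeholder -- generate a random value in real automation.
printf 'slapd slapd/internal/adminpw password %s\n' 's3cret' \
    > /tmp/slapd.preseed
# In the install environment you would then run:
#   debconf-set-selections /tmp/slapd.preseed
cat /tmp/slapd.preseed
```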
<kees> mathiaz: I think this is a bit of a bug, actually -- a randomized default should be chosen in the case that debconf can't prompt.
<osmosis> Anyone know if this affects ubuntu?  http://blog.vipul.net/2008/08/24/redhat-perl-what-a-tragedy/
<kees> mathiaz: and this needs a -p I think:   Creating initial slapd configuration... mkdir: cannot create directory `/etc/ldap/slapd.d/': File exists
<kees> mathiaz: hrm, seems that test-openldap.py in the regression testing suite still fails on intrepid.  it's complaining about a missing pid file note.
<jdstrand> osmosis: someone said in a comment that debian isn't affected (which is where we get our perl). I tried the sample code and have:
<jdstrand> real	0m0.314s
<jdstrand> user	0m0.136s
<jdstrand> sys	0m0.004s
<jdstrand> (so under a second and not affected)
<mathiaz> kees: I'll have a look at it
<kees> mathiaz: okay, thanks.
<antdedyet> Anyone around that would know who may be aware of possible ubuntu mailing list chokes? For the past couple of months there has been a several-day delay in receiving email from some of the addresses to my mailbox (ubuntu-devel is the most recent one and I don't encounter similar issues with any other incoming email).
<kees> zul: looks like mysql failed on amd64: http://launchpadlibrarian.net/17049756/buildlog_ubuntu-intrepid-amd64.mysql-dfsg-5.0_5.0.67-0ubuntu1_FAILEDTOBUILD.txt.gz
<kees> antdedyet: check with folks in #canonical-sysadmin -- they manage the infrastructure machines
<kees> zul: but I wasn't able to reproduce it on my amd64.
<antdedyet> kees: alright. thanks!
<PumpkinPie> baffle: ?
#ubuntu-server 2008-08-26
<zul> kees: yep it seems to only fail on the builds for that one test on amd64
<zul> but it built fine on my ppa
<zul> kees: its on my todo list :)
<kees> zul: oop, it just failed on i386 too, I've hit "retry" on it.
<zul> kees: *sigh* ok ill fix it right now
<kees> zul: well, I think it might be a transient bug.  :(
<zul> did it fail in the samespot?
<kees> zul: nope, the next test, ironically
<zul> wtf?
<zul> kees: can you look at the 60_disabled_test.dpatch?
<zul> i think its broken
<kees> ?
<kees> seems okay?
<zul> its missing the @DPATCH@ part isnt it?
<kees> hrm, so it is -- does it matter for modern dpatch?
<zul> dunno thats why I asked you :)
<kees> let me check logs
<zul> anyways I regenerated the patch with disabling the failed test
<kees> applying patch 60_disabled_tests to ./ ... ok.
<kees> seems okay
<kees> umm... that's not a fix.  :P
<zul> sec
<zul> well its a temporary fix :P
<owh> Salutations all. vmware-server and kernel modules has been a thorn in my side for some time. I've just found out that there is a vmware-server-kernel-source package. I've just recompiled for gutsy 2.6.22-15, but I don't have a hardy or intrepid install handy to test and was wondering if someone here might be able to help.
<excalibas> hello, I have internet with my wireless connection (wlan0) i want to connect to my wired connection (eth0) at the same time. is there a simple way to do this?
<owh> The process is outlined here: https://answers.launchpad.net/ubuntu/+question/36849
<owh> excalibas: I would think that you can create a bridge between wlan0 and eth0 and use that for your connection, but I've not done it and I don't know if wlan supports bridging.
<excalibas> owh: thanks, and how do i create this bridge?
<owh> excalibas: Just to make sure here, you want to have connectivity to the network via both pipes at the same time, or is that not what you're asking?
<owh> excalibas: Or are you routing traffic from the LAN to WiFI?
<excalibas> well, im not sure what is the difference, i want to have internet from the WiFi and network from Lan on my laptop at the same time.
<owh> excalibas: Hmm, so you're not talking about a redundant link on a server then?
<excalibas> owh: no, i dont want to connect to the same server wifi is for a router and the lan is for a computer (ssh)
<owh> excalibas: Sorry, that's not what I meant - this is the ubuntu-server room and I based my advice on that you are running a server which you want to configure - however, that does not seem to be the case - you seem to be talking about a workstation that you want to connect to the 'net twice - via wlan and eth. Is that correct?
<MattJ> Is upgrading 7.04 direct to 8.04 discouraged?
<excalibas> owh: hehe sorry this is true, I want to do this on a workstation, not a server
<MattJ> do-release-upgrade goes to download gutsy
<owh> MattJ: I think that the only "jump" update is from Dapper to Hardy, all the others are incremental, but I may be wrong.
<MattJ> Ok, seems to be the case
<MattJ> Any idea how likely things are to break during upgrades? :)
<owh> MattJ: You know the answer to that already I'm sure -- it depends :)
<MattJ> Guessed so :)
 * MattJ crosses his fingers tight
<owh> MattJ: Make a backup first.
<MattJ> Good idea
<owh> MattJ: rsync is your friend :)
<MattJ> I have daily incremental ones, but I think I'll make a full one
<MattJ> custom scripts > rsync :P
<owh> MattJ: That depends also -- I use the custom script at the rsync server end :)
<MattJ> Heh
<owh> excalibas: I've been looking for a GUI way to achieve what you want, but have not found one. Depending on your skill-level there are many ways to create what you're asking for, but I'm not tempted to walk you through that given that it is likely to make you disconnect from the 'net in the middle of the process.
<excalibas> owh: ok i was hoping for a GUI way, mabie i try firestarter
<vk5foss> excalibas: what you want A. should be asked in #ubuntu, and B. requires the other end of the wifi+ethernet connections to know they are sharing. (option C. involves lots of iptables magic, and other hackery)
<jdstrand> MattJ: 6.06 -> 8.04 and 7.10 -> 8.04 are supported. any other upgrades are not supported
<jdstrand> MattJ: anything else and you are on your own.
<nxvl> or you can make upload by upload
<nxvl> :p
<nxvl> 6.10 -> 7.06 -> 7.10 -> 8.04
<nxvl> s/upload/upgrade
<uvirtbot> New bug: #261326 in ipsec-tools (main) "racoon init script fairs poorly when using an unpriv user" [Undecided,New] https://launchpad.net/bugs/261326
<jdstrand> that said (and I am not recommending this ;), I once performed an etch -> gutsy up/sidegrade that was tremendously hairy, but knowing the problems I would likely face and how to fix them, I was able to come out the other end with a 7.10 system
<jdstrand> MattJ: I also performed several 7.04 -> 7.10 -> 8.04 upgrades and they went very smoothly overall (and likely worth the double download time)
<vk5foss> nxvl: 7.04 :p
<nxvl> jdstrand: yeah, i once did that and i almost died using the system
<owh> For those contemplating helping me with my vmware-server test -- don't bother, I just realised that Hardy and Intrepid don't even have vmware-server yet :(
<MattJ> jdstrand: Thanks for that
<sommer> mathiaz: I was just wondering if you had an easy way of adding schemas to slapd in cn=config mode?
<sommer> mathiaz: more specifically slapd in cn=config mode configured with n-way replication
<mathiaz> sommer: not yet - as in there isn't any scripts to automate that
<sommer> mathiaz: heh, okay
<mathiaz> sommer: the good news is that you can add the schema with a simple ldap add command
<mathiaz> sommer: to the master, and it should replicate automatically to the slaves
<mathiaz> sommer: although I've never tested n-way replication of cn=config
<sommer> mathiaz: right, I guess I was thinking more of converting a .schema file to ldif
<sommer> mathiaz: I've documented the n-way in the bzr branch :)
<sommer> I did find some instructions in the openldap mailing list archive, so I can just document those... it's not too bad, but not polished either
<mathiaz> sommer: interesting the idea of converting a .schema file to .ldif
<mathiaz> sommer: may be worth writing a script to do that :)
<sommer> mathiaz: well don't you need to do that before adding a schema to cn=config?
<sommer> mathiaz: http://markmail.org/message/ulxz4gnpzz3eg2md#query:openldap%20cn%3Dconfig%20convert%20schema+page:1+mid:pa3clnnt33ynxlwf+state:results
<mathiaz> sommer: well - it depends how you get the schema
<mathiaz> sommer: right - that's what I was suggesting
<sommer> those are the instructions I found... I guess the other thing would be to convert the schemas to ldif and supply the ldifs with the package?
<mathiaz> sommer: using slaptest to do the conversion
<mathiaz> sommer: right
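The slaptest conversion the two are discussing can be sketched as below. This is an illustrative sequence, not the eventual packaged script: the schema name `myapp.schema`, the config file name, and the output directory are all made-up placeholders, and the generated LDIF still needs its `dn`/`cn` attributes cleaned up before it can be fed to `ldapadd` against cn=config.

```shell
# Render a .schema file as cn=config LDIF using slaptest.
# "myapp.schema", convert.conf and ldif_out are hypothetical names.
mkdir -p ldif_out
cat > convert.conf <<'EOF'
include /etc/ldap/schema/core.schema
include ./myapp.schema
EOF
# slaptest writes the schema as LDIF under ldif_out/cn=config/cn=schema/
if command -v slaptest >/dev/null 2>&1; then
    slaptest -f convert.conf -F ldif_out
else
    echo "slaptest not installed; only convert.conf was written"
fi
echo "conversion config ready"
```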
<sommer> mathiaz: cool, so are you going to add a script to the package?
<mathiaz> sommer: probably not before FeatureFreeze
<sommer> mathiaz: either way, I'm just trying to update the Samba and LDAP section
<mathiaz> sommer: I'm currently testing the nss slapd overlay
<sommer> ah nice, were you thinking a shell script for the ldif conversion?
<mathiaz> sommer: seems that it's simple enough for a shell script
<sommer> I could probably work on something... FF is Friday?
<mathiaz> sommer: thursday
<sommer> mathiaz: doh, well I can just document the process :-)
<mathiaz> sommer: all of our Freezes and other important dates are on Thursday
<mathiaz> sommer: *thursdays*
 * sommer was too lazy to look at a calendar, heh
<sommer> mathiaz: whoa... did you just talk to someone about the cn=config schema issue?
<mathiaz> sommer: nope ;)
<mathiaz> sommer: I was about to mention the thread on the openldap-pkg-maintainer
<mathiaz> sommer: mailing list that is discussing the same issue
<mathiaz> sommer: discussion is ongoing currently
<sommer> heh, small world :)
<jameswf-home> greetings
<jameswf-home> anyone use aiccu
<henkjan> jameswf-home: i'm using it at my home route (wrt54gl)
<lukehasnoname> how would I format the rsync command to use a non-standard ssh port?
<kraut> moin
<skep> henkjan: rsync --port=xyz
<skep> err.. lukehasnoname
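A note on the answer above: `--port` selects the TCP port of an rsync *daemon* (the `rsync://` protocol), not the ssh port. For rsync over ssh on a non-standard port, the port is handed to ssh via `-e`. A sketch, where the host `backup.example.com`, port 2222, and the paths are all made-up values:

```shell
# --port=NNNN applies only to the rsync:// daemon protocol.
# For rsync over ssh on a non-standard port, pass the port to ssh via -e.
# Host, port and paths below are illustrative, not from the discussion.
RSYNC_CMD="rsync -avz -e 'ssh -p 2222' ./data/ user@backup.example.com:/srv/backup/"
printf '%s\n' "$RSYNC_CMD" > rsync-cmd.txt
cat rsync-cmd.txt
```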
<henkjan> np
<slim_> hello all,
<slim_> i asked the question in #ubuntu but got no answer, so i came here. i'm trying to set up a router; i searched and found a script, but it seems something is still missing because it doesn't work. this is the script i'm using > http://phpfi.com/347352
<macgerhard> when I run sar, I'm getting this error: Cannot open /var/log/sysstat/sa26: No such file or directory
<macgerhard> I'm on 8.04. Any ideas why? Google is clueless...
<ScottK> sommer: I reviewed you MIR and subscribed ubuntu-mir.  Good work.  Please do more.
<ScottK> sommer: I'd particularly appreciate arj and unzoo.
<soren> ScottK: You consider them essential? Wow. I don't even think I have them installed.
<soren> hm, apparently, I do.
<ScottK> soren: They're recommends for clamav, so they need MIR to support that one.
<sommer> ScottK: you're talking about the clamav MIRs?  those are due thursday, correct?
<ScottK> sommer: Yes.
<ScottK> They all are.
<ScottK> clamav is just a lot closer to done.
<sommer> ScottK: cool, I may have some time tonight
<ScottK> Great.
<sommer> ScottK: I have a quick question for you... I've packaged up the latest release of ldapscripts, and wanted to make sure that the next step is to get it sponsored?
<ScottK> yes.
<sommer> cool, I've also updated the debian package, and the original maintainer has orphaned it... so maybe they'll let me be that guy :)
<BUGabundo> hello
<BUGabundo> does anyone have the build link for 2.6.27 ?
<zul> for the kernel?
<BUGabundo> yes
<zul> http://kernel.ubuntu.com/pub
<BUGabundo> can't find it on https://edge.launchpad.net/ubuntu/+builds?build_text=2.6.27&build_state=all
<BUGabundo> didn't it get upload on Saturday?
<zul> still hasnt built yet probably
<BUGabundo> but I can't find it on LP builds
<zul> BUGabundo: it probably hasnt been accepted yet
<BUGabundo> I guess
<BUGabundo> https://edge.launchpad.net/ubuntu/+source/linux
<BUGabundo> but its there zul
<BUGabundo> any md5sum for http://kernel.ubuntu.com/pub/next/2.6.27-rc3/intrepid/linux-image-2.6.27-1-generic_2.6.27-1.1_amd64.deb ?
<BUGabundo> mine came corrupted
<CrummyGummy> Hi all, I've noticed now that HP supports Debian. Has anyone here gotten their hpasm software to run on Ubuntu?
<CrummyGummy> I'm soooo glad I found this I almost installed Centos *shiver*
<Fenix|work> Greetings
 * delcoyote hi
<zul> mathiaz: ping
<nijaba> Server Team meeting in #ubuntu-meeting in 10min
<mathiaz> zul: hello
<zul> mathiaz: so mysql everything builds except it fails for one test on 386 but it doesnt fail anywhere else
<zul> I talked to kees about this last night and he thought I shouldnt disable the test because he thinks it is a timing error but according to the buildlogs the mysql server is timing out when doing the test
<zul> and im not sure what to do about it
<mathiaz> zul: does it also fail in your local build environment ?
<zul> mathiaz: no I cant reproduce it here
<mathiaz> zul: could you try to increase the timeout ?
<zul> yeah I could do that, thats in the mysql-test suite perl program isnt it?
<zul> ill just double it
<mathiaz> zul: not sure - I'd guess it's in the test definition somewhere
<zul> mathiaz: gotcha
<Adakos> hey, anyone here good with NIC bonding? My attempts have lead to fatal kernel errors
<incorrect> i've had it working for years
<incorrect> i wrote a page on it sometime ago
<Adakos> incorrect: i have tried three times in the past 24 hours on my test system and all have ended in kernel errors
<Adakos> you dont have the link do you?
<incorrect> give me a sec
<Adakos> alright. I appreciate it
<Adakos> ls
<Adakos> wrong window, sorry !
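For readers hitting the same bonding question: a minimal `/etc/network/interfaces` sketch, assuming the ifenslave package is installed. The address, mode, and slave interface names are illustrative, and the exact stanza names vary between ifenslave versions (older releases set the same options in `/etc/modprobe.d` instead).

```
# illustrative bonded interface; values are placeholders
auto bond0
iface bond0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode balance-rr
    bond-miimon 100
```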
<lukehasnoname> thanks skep
<nxvl> good morning!
<Adakos> morning
<zul> nxvl: do you want to split up the ufw list?
<Xetrov`> i have a dual quad core xeon server i  need to run vmware server on.  my plan is to load ubuntu 6.06.2 as the base os.  will vmware run on the x64 kernel or should i use the x86?
<nxvl> zul: ok
<nxvl> zul: i was going to start now
<zul> nxvl: i was too after the meeting
<nxvl> zul: ok, lets start after the meeting
<nxvl> :D
<nxvl> jdstrand: a profile is just what's described under "Package Integration" at w.u.c/UbuntuFirewall, isn't it?
<jdstrand> nxvl: profiles are technically entries in files located in /etc/ufw/applications.d
<jdstrand> nxvl: so if you look at https://wiki.ubuntu.com/UbuntuFirewall#Package%20Integration
<jdstrand> nxvl: the apache bits might be in /etc/ufw/applications.d/apache and the samba ones in /etc/ufw/applications.d/samba
<nxvl> jdstrand: yep, but i mean, the contents of those files
<jdstrand> nxvl: /etc/ufw/applications.d/apache has 3 profiles in it, and samba has one
<soren> macd: Put the output of "apt-cache policy fail2ban" on pastebin.
<jdstrand> nxvl: each stanza is a profile
<nxvl> as it is under package integration
<nxvl> ok
<nxvl> that was the question
<nxvl> :D
<nxvl> jdstrand: thank you
<jdstrand> nxvl: np :)
<jdstrand> nxvl: and thanks for working on it!
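As an illustration of the stanza format jdstrand describes, a file like `/etc/ufw/applications.d/samba` might read as follows; the title, description, and port list here are plausible examples, not necessarily the exact contents the package ships.

```ini
[Samba]
title=LAN file and printer sharing
description=SMB/CIFS file and printer server
ports=137,138/udp|139,445/tcp
```

Each `[Name]` stanza is one profile, so a single file (e.g. the apache one) can define several.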
<didrocks> (hi ;))
<jdstrand> nxvl: I should mention that didrocks is aready doing apache and openssh-- be sure to check ServerTeam/Roadmap for available applications, and assign yourself accordingly
<nxvl> jdstrand: yes, i'm doing that
<jdstrand> \m/
<jdstrand> hi didrocks :)
<didrocks> hi jdstrand \o/
<macd> soren, I goofed, I was looking for the intrepid version in hardy, but there is an SRU filed against the package in hardy to backport from intrepid, looks like it has some SRU acks just needs upload: bug 222804
<uvirtbot> Launchpad bug 222804 in fail2ban "[SRU] fail2ban fails to start after reboot" [Undecided,Confirmed] https://launchpad.net/bugs/222804
<soren> macd: Mkay.
<macd> having hostnames with the release in them might help me ;)
<nxvl> didrocks: is there any package already with profile?
<nxvl> jdstrand: ?
<jdstrand> nxvl: not yet, but didrocks submitted a debdiff for apache I believe
<didrocks> nxvl: see bug #261198
<uvirtbot> Launchpad bug 261198 in apache2 "Add UFW profile integration with apache2" [Undecided,Confirmed] https://launchpad.net/bugs/261198
<didrocks> but it has to be checked :)
<nxvl> that's what i was looking for
<nxvl> :D
<nxvl> didrocks: shouldn't apache use Apache full profile instead of just Apache?
<nxvl> oh! i forgot that apache doesn't come with ssl out-of-the-box
<didrocks> nxvl: you have your answer :)
<Xetrov`> do any of you have experience with vmware server atop ubuntu?
<zul> nxvl: it does for intrepid
<nxvl> zul: generates the certificate and runs https?
<zul> nxvl: think so
<nxvl> zul: i doubt it, for generating the certificate you need a lot of information
<foolano> snake oil certificates are not that picky :)
<mathiaz> nxvl: as of intrepid, apache2 comes with ssl ootb
<mathiaz> nxvl: apache2 depends on ssl-cert and uses the snake-oil-cert IIRC
<nxvl> didrocks: then my comment is valid
<didrocks> mathiaz, nxvl: ok, I will update my package so :)
<didrocks> mathiaz: are you sure?
<didrocks> oh yes, got it, sorry :)
<didrocks> nxvl: new version of the patch uploaded
<didrocks> soren: I think you are a little bit nervous regarding Michael's mail (and that's understandable…) :)
<nxvl> didrocks: i'm not a core-dev so you need to find other to sponsor it :P
<zul> mathiaz: increasing the timeout doesnt help, im going to disable the test for now
<mathiaz> zul: have you looked into MySQL upstream bug tracker to see if there is a bug report about this issue ?
<didrocks> nxvl: yes, I know (I saw your application for MOTU). It's just to keep you informed :)
<nxvl> :D
<zul> mathiaz: im looking there as well
<nxvl> jdstrand: if i install a package and then install ufw i won't get the profile added, will i?
<jdstrand> nxvl: well, ufw will know about it, but nothing else will happen.
<zul> mathiaz: cant find anything about it :9
<jdstrand> nxvl: however, ufw is Recommended by ubuntu-standard, so it is installed by default
<nxvl> jdstrand: yeah, that part i know, just checking
<nxvl> :D
<nxvl> samba complete and tested
<nxvl> btw
<nxvl> how do i check it's enabled?
<jdstrand> \o/ -- can you file a bug with the debdiff?
<didrocks> nxvl: with my current behavior, the profile is added in /etc/ufw/applications.d/ but no rule has been added to ufw (as it has not been installed)
<nxvl> jdstrand: yep i will now
<jdstrand> nxvl and didrocks: there is a 'Testing' section under 'Integrating UFW with Packages'
<didrocks> jdstrand: do you want me to continue to work on it even if the first debdiff is not ack yet ?
<didrocks> jdstrand: yes, I tried that on my VM (plus some home added tests) :)
<jdstrand> didrocks: oh yes, keep going-- it looks like you followed my directions, so keep going
<didrocks> ok, so, OpenSSL here we go :)
<nxvl> mm
<nxvl> i seems wrong
<nxvl> :S
<nxvl> jdstrand: http://paste.ubuntu.com/40710/
<nxvl> jdstrand: that's not what i expect, is it?
<jdstrand> nxvl, didrocks: you can also use 'ufw app default allow', then do 'dpkg-reconfigure <package>' and then do 'ufw status' and 'ufw status verbose' to see if the rule got added
<jdstrand> nxvl: something is not right there. can you post the output of 'ufw app list'?
<nxvl> http://paste.ubuntu.com/40712/
<nxvl> mm now it works
<zul> mathiaz: can you have a look at this when you get a chance?
<nxvl> for some reason after i do 'ufw app default allow' it was skipping profiles
<nxvl> jdstrand: http://paste.ubuntu.com/40713/
<jdstrand> nxvl: what you pasted seems correct to me
<jdstrand> am I missing something?
<mathiaz> zul: sure
<nxvl> jdstrand: at the end, read it complete
<nxvl> jdstrand: first i run status and it doesn't show me anything
<nxvl> but after running 'app default allow' it started to work
<nxvl> that's the expected behavior?
<jdstrand> nxvl: what I see from your paste, is that you did 'status', it showed nothing. that is correct. then you changed the default application policy to 'allow', then did status and no rules. that is correct
<jdstrand> nxvl: then did dpkg-reconfigure, which added a rule, and status showed it worked
<jdstrand> that is all correct
<nxvl> ok
<nxvl> then it's correct
<jdstrand> nxvl: if the default application policy is 'skip' no new rules are added with '--add-new'
<jdstrand> nxvl: the first status shows it was indeed skip
<nxvl> jdstrand: yep, and skip is the default policy for applications then
<jdstrand> nxvl: yes, because ufw shouldn't set the policy for application integration
<nxvl> ok then
<nxvl> Bug 261544
<uvirtbot> Launchpad bug 261544 in samba "Please add UFW profile integration with Samba" [Undecided,New] https://launchpad.net/bugs/261544
<jdstrand> nxvl: 'ufw allow Samba ; ufw status verbose' should have worked though
<jdstrand> nxvl: did that not work, or did I misinterpret things?
<nxvl> mmm
 * nxvl checks
<jdstrand> (all this after the package was installed, of course)
<nxvl> jdstrand: http://paste.ubuntu.com/40717/
<nxvl> jdstrand: here it is the complete history (after installing the package)
<jdstrand> nxvl: can you purge both and start over?
<nxvl> yep
<nxvl> is a chroot
<nxvl> so i just need to restart it
<nxvl> :D
<jdstrand> nxvl: ie, purge both, then install both, then do 'ufw disable ; ufw enable ; ufw allow Samba ; ufw status verbose'
<uvirtbot> New bug: #260687 in samba (main) "pam_smbpass.so segfaults" [Undecided,New] https://launchpad.net/bugs/260687
<jdstrand> nxvl: it is highly possible that the chroot is grabbing your firewall configuration from your host
<jdstrand> (hence the disable and enable)
<nxvl> jdstrand: is pbuilder
<nxvl> it doesn't even have ufw installed
<uvirtbot> New bug: #261544 in samba (main) "Please add UFW profile integration with Samba" [Undecided,New] https://launchpad.net/bugs/261544
<jdstrand> nxvl: do you have ufw running on your host? (sudo ufw status)
<jdstrand> (outside the chroot)
<nxvl> yep
<nxvl> http://paste.ubuntu.com/40718/
<jdstrand> nxvl: pbuilder doesn't have its own kernel, so when ufw does its querying, it is from your live firewall
<nxvl> mmm
<nxvl> will try on a vm
<jdstrand> nxvl: that would be best, yes
<jdstrand> nxvl: you may want to do 'sudo /etc/init.d/ufw force-reload' on your host to make sure it is in a consistent state
<nxvl> jdstrand: not enabled (i'm on hardy)
<jdstrand> nxvl: well, hardy does have ufw-- but if you don't want it enabled, just do 'sudo ufw disable' and you are good to go
<jdstrand> nxvl: if using another firewall tool, restart it
<didrocks> jdstrand: I have a weird behavior with reconfigure the package: http://ubuntu.pastebin.com/d4f46009f
<nxvl> jdstrand: i don't really use it, or any other
<jdstrand> didrocks: you did 'ufw default deny', not 'ufw app default deny'
<didrocks> oupsss
<didrocks> sorry ;)
<didrocks> jdstrand: surprisingly, it works better :)
<jdstrand> np-- there are two different policies-- one for the default policy of the firewall if no matching packets are found, and one for newly added application rules
<nxvl> jdstrand: ok installes
<nxvl> installed
<nxvl> new profiles: skip
<didrocks> jdstrand: the services discovered in "/etc/services" and "protocol:port" relies on the default policy?
<nxvl> same behavior as chroot
<jdstrand> nxvl: please post the entire history for this
<jdstrand> didrocks: I don't understand your question
<nxvl> after sudo ufw disable; sudo ufw enable it worked
<didrocks> jdstrand: I made some tests and it's ok. I found my answers :)
<nxvl> i think we should add that into the postinst
<jdstrand> nxvl: I am thoroughly confused-- can you post your complete command history?
<nxvl> mm
<nxvl> complicated, but i can try
<jdstrand> you didn't ssh into the vm?
<nxvl> nop
<nxvl> :D
<jdstrand> nxvl: we should not disable and enable the firewall in postinst-- people may not have ufw enabled so we shouldn't enable it
<nxvl> mm
<nxvl> right
<didrocks> nxvl: for me, everything's fine. The new rule is added for the profile corresponding to the default app policy after installing it or reconfiguring it
<nxvl> restarting the process in my -desktop vm to post the history
<jdstrand> nxvl: I need to see the command history to determine if there is a bug, or something else
<nxvl> jdstrand: yep, restarting the process
<jdstrand> nxvl, didrocks: I updated the application integration testing section to clarify some things. please refresh it and use the new procedures (if needed)
<jdstrand> (in UbuntuFirewall)
<nxvl> ok
<didrocks> jdstrand: ok, with the skip policy as ufw default
<jdstrand> didrocks: ah, good point-- updated again to explicitly set the default app policy to 'skip' at the beginning of the tests
<jdstrand> nxvl: ^^
<nxvl> jdstrand: http://paste.ubuntu.com/40724/
<nxvl> :(
<nxvl> restarting test
<jdstrand> nxvl: turn that frown upside down, looks good to me-- unless I am missing something
<nxvl> nop
<nxvl> it's good
<nxvl> i just haven't tested the default skip thing
<jdstrand> *phew* ;)
<didrocks> jdstrand: I tried to clean all my rules and the one which has been added by /etc/services seems to not be taken into account (http://ubuntu.pastebin.com/d20743bb6)
<nxvl> moving to next package
<jdstrand> didrocks: yes, this is expected, though a bit of a bug
<nxvl> jdstrand: would you like to sponsor it? Bug #261544
<uvirtbot> Launchpad bug 261544 in samba "Please add UFW profile integration with Samba" [Undecided,New] https://launchpad.net/bugs/261544
<jdstrand> didrocks: basically, you added something without specifying a protocol, but then tried to delete something that does specify it
<jdstrand> didrocks: this is treated as two different things by ufw
<soren> didrocks: Nervous?
<nxvl> mm
<didrocks> ok, so I must make sudo ufw delete deny ssh, so that it checks again on /etc/services to see what to update
<nxvl> jdstrand: for dovecot would you recommend just one dovecot-common with all protocols on it, or a profile for -imap, -pop, etc..?
<jdstrand> didrocks: yes. this happens because /etc/services lists both udp and tcp for 'ssh', so ufw groups those together as one rule
<didrocks> soren: your answer to Michael about "boot from degraded raid" (and I understand from the way he is taking this…) :)
<Adakos> does anyone here use port/nic trunking/bonding ?
<nxvl> just one with multiple profiles described on it, isn't it?
<jdstrand> didrocks: you really want 'ufw allow ssh/tcp'
<soren> didrocks: I just don't understand what you mean by "nervous"?
<didrocks> jdstrand: that makes sense
<jdstrand> (in the first place)
 * soren wanders off for dinner things.
<didrocks> soren: hum, probably bad word, let's say angry against Michael :)
<jdstrand> nxvl: I have an example in the source for dovecot, that you could use as a template
<jdstrand> nxvl: in the source of ufw for dovecot that is
<nxvl> oh ok
<didrocks> jdstrand: so, for the openssh package, I won't call the profile ssh (it will be a bad idea because of the corresponding service name)
<jdstrand> didrocks: now I understand your question from long ago
<didrocks> hum, which one? :)
<jdstrand> didrocks: the profile name and the 'service' name from /etc/services are completely different
<didrocks> oh ok, one of my last questions :)
<didrocks> yes yes, I see that now
<Adakos> I am using my server as a router, what is the best web interface for a new admin to use ?
<didrocks> jdstrand: all the tests are ok
<jmedina> Adakos: simple routing or advance?
<didrocks> I will work on the packages tomorrow (it's getting late and I want some rest ^^)
<jmedina> Adakos: what do you want to admin via web?
<jmedina> for routing I would recomend VYATTA
<Adakos> jmedina: i dont need VPN support, but I need a basic replacement for a dying router (and i have this box laying around) port forwarding/masquerading, etc
<Adakos> dhcp,  that sort of thing
<jmedina> I don't know a good WUI, but for those things I prefer to do it by hand; if I needed more features I would use vyatta
<jmedina> maybe webmin has something
<Adakos> yeah i was looking at webmin
<Adakos> i'll poke around with VYATTA though, thanks :)
<jdstrand> didrocks: there is a chance for collision using the simple syntax. eg, profile uses [ssh] and /etc/services has 'ssh'. In this case, /etc/services wins
<jdstrand> didrocks: I should add a warning when that happens...
<didrocks> jdstrand: do you think I really have to keep the ssh name for the profile?
<jdstrand> didrocks: no-- I recommend you use OpenSSH
<didrocks> jdstrand: we agree :)
<didrocks> also the profiles are case sensitive apparently
<zul> nxvl: the samba ufw profile has been tested?
<jdstrand> didrocks and nxvl: there is also a 'Profile Recommendations' in UbuntuFirewall that may be helpful
<zul> jdstrand: what no one uses telnet anymore?
<didrocks> but "ufw allow apache" or "ufw allow Apache" would be use for me, not regarding the case of the wordâ¦
<Adakos> jdstrand: ssh is a much better solution to almost any problem than telnet (unless you are diagnosing mail server woes) iirc
<jdstrand> zul: heh-- if you want to update the package, go for it
<zul> jdstrand: no no no :)
<jdstrand> Adakos: yes, zul was just kidding around
<Adakos> oh , hah
<jdstrand> (I hope)
<jdstrand> ;P
<zul> maybe..
<jdstrand> didrocks: well, the main point of the profile name is to uniquely identify the profile, and using upstream's capitalization or first letter capitalized seems to be the easiest way to stay consistent
<jdstrand> didrocks: currently it is case sensitive. if you feel strongly about it being case insensitive, I'll take your patch :)
<didrocks> jdstrand: ok :) The guidelines are clear enough so that there is no deviation from them
<jdstrand> s/take/consider, review and possibly use/
<didrocks> jdstrand: in which langage is ufw written?
<jdstrand> python
<didrocks> ok, I will take a look then :)
<didrocks> first update some packages and then look at it.
<didrocks> good night everyone ;)
<jdstrand> goodnight didrocks
<nxvl> zul: yep, check the changelog
<nxvl> err
<nxvl> check the backlog
<nxvl> jdstrand: for dovecot, i will need to add the ufw line to dovecot-$service.postinst, won't i?
<nxvl> since i want a different profile to be added depending on the package i install
<nxvl> btw is dovecot ssled by default?
<jdstrand> nxvl: it is not ssl'd by default last I checked...
<zul> jdstrand: it does both now
<nxvl> yep
<nxvl> it does
<nxvl> using snakeoil
<nxvl> just checked
<jdstrand> nxvl: dovecot-common should probably add the file to /etc/ufw/applications.d, and then both dovecot-imapd and dovecot-pop3d should each do 'ufw app update --add-new ...'
<nxvl> that's what i meant
<nxvl> :D
<nxvl> thank you
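The split jdstrand outlines (dovecot-common ships the profile file, each daemon package registers its own stanza) would put a small hook in each daemon's postinst. A hypothetical sketch of a `dovecot-imapd.postinst` fragment follows; the profile name 'Dovecot IMAP' is made up for illustration, and the guard keeps the install from failing where ufw is absent or unusable.

```shell
#!/bin/sh
# hypothetical dovecot-imapd.postinst fragment: tell ufw about the
# newly shipped profile, but never fail the install because of it
set -e
if command -v ufw >/dev/null 2>&1; then
    # 'Dovecot IMAP' is a placeholder profile name
    ufw app update --add-new 'Dovecot IMAP' || true
fi
echo "ufw postinst hook finished" > hook.log
cat hook.log
```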
<soren> didrocks: He's not exactly famous for being easy to work with..
 * soren reads his response and concludes that not much has changed *sigh*
<slim_> hello all, i have an ubuntu server with two NICs: eth0 connects to the internet and eth1 to the internal network. how can i enable the internal network to connect to the internet? after searching i found the script > http://phpfi.com/347516 but it's still not working, can anyone tell me what is missing?
<zul> soren: and thats why you use hardware raid
 * soren doesn't like hardware raid at all
<slim__> back , sorry disconnected, i asked a question before disconnect :)
<slim__> is about that i want to use ubuntu-server as a router
<sommer> slim__: you might take a look at: https://help.ubuntu.com/8.04/serverguide/C/firewall.html#ip-masquerade-ufw
<slim__> thanks sommer
<sommer> np
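Beyond the serverguide link, the masquerading setup slim__ is after boils down to two pieces: enabling kernel forwarding and a NAT rule. The sketch below only writes the commands out to a script for review (they need root to run), and the interface roles are assumptions taken from the question (eth0 = internet, eth1 = LAN).

```shell
# Write a minimal masquerading-router setup to a script; running it
# requires root. eth0/eth1 roles are assumed from the question above.
cat > enable-masq.sh <<'EOF'
#!/bin/sh
# let the kernel forward packets between interfaces
sysctl -w net.ipv4.ip_forward=1
# rewrite LAN source addresses to the internet-facing interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# pass LAN-originated traffic out, and established replies back in
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
EOF
chmod +x enable-masq.sh
echo "wrote enable-masq.sh"
```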
<nxvl> zul: Chuck
<nxvl> zul: your last revision of mysql FTBFS on AMD64 due to an issue with a test
<nxvl> zul: and it's making courier FTB also
<nxvl> zul: did you know the issue or have plans for it?
<mathiaz> nxvl: yes - one of the test fails on i386
<mathiaz> nxvl: it seems related to a timeout in the ndb test
<nxvl> mathiaz: failed: 157: Could not connect to storage engine
<nxvl> mathiaz: it seems that it's trying to connect to the server, or something :S
<mathiaz> nxvl: right - the test should have a sleep somewhere so that the storage server can start
<mathiaz> nxvl: the timeout may not be long enough - zul reported it worked correctly for his local builds
<mathiaz> nxvl: could you try to build it on your local machine and see if the test fails ?
<nxvl> mathiaz: i can't right now, i'm at the university on a windows machine
<nxvl> mathiaz: i will on the recess (in an hour maybe)
<nxvl> mathiaz: it's also odd that they are completely different tests
<nxvl> and they seem to be architecture independent
<nxvl> zul: have you retried the build?
<nxvl> maybe is a random issue
<nxvl> oh! i love FF, it makes things funnier and my brain to blow
<nxvl> :D
<Xetrov`> might there be a list of standardized server hardware that ubuntu itself supports?  im having an issue with there being no support for hardware raid with an intel server board we are attempting to turn into a vmware server
<nxvl> there is a wiki page
 * nxvl searchs
<ivoks> khm... no support for on-board hardware raid?
<nxvl> https://wiki.ubuntu.com/Hardware
<ivoks> there's no such thing as on-board hardware raid :)
<ivoks> nxvl: wazup? :)
<nxvl> ivoks: i think he is refering to a hardware raid + an intel board
<nxvl> ivoks: at the university in the only interesting course i have this period
<ivoks> oh, nice...
<nxvl> yep
<nxvl> networking stuff
<ivoks> Xetrov`: which hardware raid do you have?
<Xetrov`> well its an intel s5000psl
<nxvl> but as in electronic communications
<ivoks> Xetrov`: that's fake raid
<nxvl> ivoks: every raid is a fake feeling of security
<nxvl> :D
<ivoks> eh
<nxvl> there no better thing than backups
<ivoks> hardware raids usually cost twice as much as the MB :)
<nxvl> there is*
<ivoks> nxvl: well, raid and backup are different things
<Xetrov`> hmmm, i see, software raid on board
<nxvl> ivoks: and lot more of money
<nxvl> ivoks: yep, but at the end they just ensure your data will be ok
<ivoks> Xetrov`: disable that and buy a real hardware raid, or use linux raid
<Xetrov`> so ive never done software raid with ubuntu, is it simple enough?
<nxvl> yep
<ivoks> nxvl: no, raid ensures your server will keep on going in case of disk damage
<nxvl> is the same with every linux distribution
<ivoks> Xetrov`: what kind of raid would you like to setup?
<nxvl> ivoks: mmmm
<Xetrov`> raid 5
<nxvl> ivoks: with a backup server i can do the same
<Xetrov`> my boss actually jumped the gun and bought a hardware key to unlock raid 5 through the motherboard
<ivoks> Xetrov`: ok, there's a problem with booting from software raid 5
<nxvl> ivoks: but what i meant is that they just support business continuity plans
<Xetrov`> so im trying to figure a way to let the board handle raid just as it would everywhere else
<ivoks> Xetrov`: you need special partition for /boot, outside of raid
<nxvl> ivoks: AND it depends on what raid you use
<Xetrov`> hmmm
<nxvl> and as /boot is only used at boot time, it won't hurt
<nxvl> :D
<ivoks> Xetrov`: how many disks do you have?
<ivoks> 3?
<Xetrov`> 4 if i remember correctly
<ivoks> ok, 4
<ivoks> you can do this:
<ivoks> disk 1 - two partitions - 1GB and the rest, both raid
<ivoks> disk 2 - two partitions - 1GB and the rest, both raid
<ivoks> disk 3 - two partitions - 1GB and the rest, smaller swap, other raid
<ivoks> disk 4 - two partitions - 1GB and the rest, smaller swap, other raid
<ivoks> with two small partitions from first two disks, create mirror raid and create /boot on it
<ivoks> with bigger partitions on all disks, create raid 5
<ivoks> this way you have redundant /boot
<ivoks> and raid5 for rest of the system
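ivoks' four-disk layout, sketched as mdadm commands. The device names and the ext3 choice are assumptions, the partitions must already exist and be typed as Linux RAID autodetect, and this needs root on real disks, so it is a sketch, not something to run verbatim:

```shell
# small partitions on disks 1 and 2 -> RAID1 mirror, becomes /boot
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# small partitions on disks 3 and 4 -> swap
mkswap /dev/sdc1
mkswap /dev/sdd1
# big partitions on all four disks -> RAID5 for the rest of the system
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
mkfs.ext3 /dev/md0   # mounted as /boot
mkfs.ext3 /dev/md1   # mounted as /
```

With this shape the bootloader only ever has to read the simple RAID1 mirror, which is why booting works even though the root filesystem sits on RAID5.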
<Xetrov`> hmmmm
<Xetrov`> assuming there really isnt any support for the onboard
<ivoks> there isn't
<ivoks> cause that's not raid
<Xetrov`> understood
<ivoks> if you go and install windows, it will recognize all 4 disks
<ivoks> same goes with linux
<Xetrov`> windows sees 1, as i have them configured that way
<ivoks> and then driver inside the kernel wraps it into one disk
<ivoks> that's cause you installed the driver
<Xetrov`> true.
<Xetrov`> so its either that or buy a true raid card
<ivoks> fake raids use processor for all the calculations
<ivoks> they don't have raid logic in themselves
<ivoks> they are just a marketing trick
<Xetrov`> looks like he gets to return the unlocking key  :)
<ivoks> those are SATA or SAS disks?
<ivoks> sata i guess
<Xetrov`> sas
<ivoks> heh
<ivoks> i missed :D
<ivoks> 3ware has some really nice raid controllers
<ivoks> and has more than great support for linux
<Xetrov`> price is an issue as well hah
<Xetrov`> oh im sure
<Xetrov`> i had great success with dell perc as well
<ivoks> dell perc works, true...
<ivoks> there's also intel/LSI logic
<Xetrov`> hehe
<ivoks> or whatever it is called these days
<ivoks> but those have very poor performance
<ivoks> i would say that linux raid is better than those lsi logic
<Xetrov`> well im gonna grab a bite, thanks for the help, ill probably bug you more in a bit  hah
<ivoks> well, i think not
<ivoks> cause it's midnight over here :)
<Xetrov`> where is here
<ivoks> croatia
<Xetrov`> ahhhh
<Xetrov`> maybe not then haha
<ivoks> hehe
<ivoks> well, take care guys...
<ivoks> umm... Xetrov`
<ivoks> Xetrov`: according to http://www.intel.com/Products/Server/Motherboards/S5000PSL/S5000PSL-specifications.htm
<ivoks> Xetrov`: your MB has a LSI raid controller
<ivoks> which should work in ubuntu
<Xetrov`> odd
<Xetrov`> it tells me it finds no disk drive and asks which driver to use
#ubuntu-server 2008-08-27
<nxvl> mathiaz: just tried mysql build, it failed but in a different test
<mathiaz> nxvl: the test was also related to the NDB engine ?
<nxvl> ndb_config
<nxvl> but it is on i386, not amd64
<nxvl> and i386 on buildd failed on subselect
<mathiaz> nxvl: right - so all the failure are related to the set of ndb tests ?
<nxvl> don't really think so
<nxvl> i *think* subselect is not ndb
 * nxvl checks
<nxvl> yep
<nxvl> subselect is a different issue
<nxvl> mysqltest: At line 3057: query '$start $end' failed with wrong errno 2013: 'Lost connection to MySQL server during query', instead of 0...
<zul> mathiaz: should it be changed so that if the tests failed it shouldnt error out?
<uvirtbot> New bug: #261689 in dovecot (main) "Please add UFW profile integration with dovecot" [Undecided,Confirmed] https://launchpad.net/bugs/261689
<nxvl> kees: around?
<nxvl> jdstrand: i'm done with the primary ufw integration package
<nxvl> packages*
<nxvl> jdstrand: they are on the main queue waiting for sponsorship
<uvirtbot> New bug: #261698 in postfix (main) "Please add UFW profile integration with postfix" [Undecided,Confirmed] https://launchpad.net/bugs/261698
<uvirtbot> New bug: #261699 in bind9 (main) "Please add UFW profile integration with Bind9" [Undecided,Confirmed] https://launchpad.net/bugs/261699
<kees> nxvl: just heading out the door
<tominglis> hey i have xubuntu 8.04, i have installed postfix, apache2, mysql, and php5 (including gd / mysql); i have chowned /var/www to www-data, but when i try and install drupal, my browser tries to download a phtml file, so presumably it is not being parsed?; i have checked to see that the php5 module is installed and that the files for libapache2-mod-php5 are installed; i have reinstalled php5 and restarted apache
<tominglis> i have also tried installing libapache2-mod-auth-mysql and loading that?
<tominglis> does anyone know what i can do? sorry but i am a bit of a noobie
<sommer> ScottK: new clamassassin mir: bug #261705
<uvirtbot> Launchpad bug 261705 in libsys-hostname-long-perl "MIR for libsys-hostname-long-perl" [Undecided,New] https://launchpad.net/bugs/261705
<jdstrand> tominglis: I'm not a drupal guy, but your situation typically means that AddHandler wasn't setup right
<jdstrand> tominglis: I'm sorry, AddType
<tominglis> ok
<tominglis> jdstrand: do you know where i can look to tell me how to set this up?
<tominglis> should this be done by one of the packages i installed?
<jdstrand> tominglis: libapache2-mod-php5 should have done it for you. make sure you have php5 enabled with 'sudo a2enmod php5'
<tominglis> yes, i have checked that it is enabled
<jdstrand> tominglis: otherwise, check in /var/log/apache2/error.log
<tominglis> if i purge that package and install it again will it redo the stuff?
<jdstrand> tominglis: after doing the above, you should reload/restart apache. purging shouldn't be required unless you think you messed up your configuration
<tominglis> i didn't do any configuring except installing the postfix, apache2, mysql5, and php5 packages
<tominglis> i tried reinstalling libapache2-mod-php5, but it didn't have any effect
<tominglis> but if i purge it will that help?
<tominglis> i have restarted apache each time
<jdstrand> tominglis: based on what you said you did, no.
<tominglis> oh :(
<tominglis> so what do you think i should do. can i find out what the libapache2-mod-php5 package would have written and write it myself?
<jdstrand> tominglis: does putting this in /var/www/test.php work?
<jdstrand> <?php phpinfo(); ?>
<tominglis> jdstrand: oh, yes! so is there something i am maybe doing wrong with drupal. should the files have different permissions?
<tominglis> oh
<jdstrand> tominglis: does renaming that file to test.phtml work?
<tominglis> i tried doing http://localhost/index.php and that loads up drupal installer
<tominglis> is there some way to make apache read php files by default
<jdstrand> tominglis: I don't understand your question
<jdstrand> (your test.php shows it is working)
<tominglis> yes, and drupal installer works if you go to that specific link
<tominglis> but if you just do http://localhost it tries to download a phtml file
<tominglis> or open it
<jdstrand> tominglis: need to adjust DirectoryIndex
<jdstrand> tominglis: eg DirectoryIndex index.html index.htm index.shtml index.php
<tominglis> should that be in the apache2.conf file?
<tominglis> or in the relevant file in /sites-enabled
<jdstrand> tominglis: the latter
<jdstrand> tominglis: you'll need to reload apache after making the change
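jdstrand's DirectoryIndex advice, as it might look inside the site file under /etc/apache2/sites-enabled/. The vhost shown is a sketch, not tominglis's actual config:

```apache
<VirtualHost *:80>
    DocumentRoot /var/www
    # serve index.php when a bare directory URL like http://localhost/ is requested
    DirectoryIndex index.html index.htm index.shtml index.php
</VirtualHost>
```

Followed, as jdstrand says, by reloading apache (sudo /etc/init.d/apache2 reload on hardy).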
<tominglis> weird, i tried doing http://127.0.0.1 before i made the change, and that seems to work now, but http://localhost doesn't
<tominglis> is that normal?
<tominglis> also also, do i actually need libapache2-mod-auth-mysql?
<jdstrand> tominglis: I bet http://localhost/ works. it has to do with name resolution. google for apache and trailing slash for how to fix that
<jdstrand> tominglis: as for libapache2-mod-auth-mysql, I have no idea (not a drupal user)
<tominglis> i just installed it on the off chance it did something useful (it isn't required by drupal)
<jdstrand> tominglis: gotta go! hopefully I helped somewhat. good luck!
<tominglis> thanks loads!
<jdstrand> np
<tominglis> does anyone know why http://localhost/ or http://localhost fails to load index.php from my drupal install, but http://127.0.0.1 or / works fine?
<sommer> tominglis: ping localhost, it's probably going to 127.0.1.1
<sommer> tominglis: just edit your /etc/hosts file
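The /etc/hosts shape sommer is referring to: Ubuntu maps the machine's own hostname to 127.0.1.1 by default, and if localhost misbehaves it is the first line below that is usually missing or mangled ("myhostname" is a placeholder for the real hostname):

```
127.0.0.1   localhost
127.0.1.1   myhostname
```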
<ScottK> sommer: Thanks.
<sommer> ScottK: and another #261722
<sommer> bug #261722
<uvirtbot> Launchpad bug 261722 in libtest-distribution-perl "MIR for libtest-distribution-perl" [Undecided,New] https://launchpad.net/bugs/261722
<ScottK> Great.
<FuRom> Has anyone in here ever setup something called "phusion passenger"? I'm having issues with making it work on ubuntu hardy heron.
<ScottK> sommer: ubuntu-mir subscribed on both.  I didn't make any changes.  Please keep going.
<ScottK> FuRom: Someone is currently trying to get it included in the official archive.
<ScottK> A couple of us looked at the code today and it was stunningly scary.
<ScottK> At least from a package management perspective.
<FuRom> Stunningly scary? xD
<ScottK> Pretty well ignores the way good packages are supposed to interact with the package management system.
<ScottK> Mind you this is not rare in the RoR world, but it doesn't surprise me that one might have problems.
<FuRom> =/ It's been driving me nuts for the past 5 hours or so. I've had to redo how PHP works and all, just to make it work without removing stuff that I need to work. I've gotten it all installed, for the most part. I just need to really configure it, I guess.
<ScottK> Can't help you there.
<FuRom> I've come across the problem that, it's not as simple as mod_ruby =/
<FuRom> lol
<wantok> perhaps an exception to the 'packages is best' motto? :)
<FuRom> Hell, I can't even get normal rails installed, because "sudo gem install rails" tells me it doesn't exist in the repository.
<ScottK> wantok: I'd say rather the opposite.
<wantok> ScottK: ah ok.
<wantok> i understood this to be dodgy 3rd party packages
<wantok> back in 15, afk
<ScottK> Maybe I misunderstood you.
<ScottK> The whole RoR management system conflicts pretty seriously with any notion of how package management works on Debian and derived systems.
<wantok> my misunderstanding
<wantok> :)
<FuRom> I just came up with an interesting question.... (#apache seems to be dead, so, I guess this would be my 2nd best place to ask), anyways, is there a good way to turn apache into a multi-server? Like.... If the request type is xttp instead of http, it bridges to another server?
<kraut> morning
<uvirtbot> New bug: #261809 in freeradius (universe) "freeradius listen on random port" [Undecided,New] https://launchpad.net/bugs/261809
<CrummyGummy> K, report back. The HP software doesn't work on Hardy 64 bit.
 * CrummyGummy considers installing Debian boxen.
<maswan> what HP software was that? some kind of weird san stuff?
<maswan> hpacucli works at least, that's the only hp software I need. :)
<ahasenack> are you guys aware of some issue with intrepid in kvm (host hardy) and vesa framebuffer? I can't get "vga=773" to work within that kvm
<ahasenack> 773 is 1024x768 IIRC, I use it with other guests
<soren> How does it fail?
<ahasenack> let me try to get a screenshot
<ahasenack> soren: the kvm window switches to the right size, but all black
<ahasenack> soren: it does boot and work apparently
<ahasenack> but nothing can be seen
<ahasenack> soren: when booting normally, there are some errors which seem related to framebuffer, I'll try to get those
<ahasenack> soren: in a normal boot, I can see two logs from "uvesafb"
<ahasenack> "getting vbe info block failed (eax=0x4f00, err=-3)
<ahasenack> "vbe_init() failed with -22"
<soren> Ok.
<Abracadabra> Hi, what is the command to untar a file ?
<sommer> tar -xf file_name, if it's been compressed you add "z" for gzip and "j" for bzip2
<Abracadabra> the file is filename.tar
<sommer> just xf should do it then
<Abracadabra> ty
<sommer> np
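sommer's tar flags in a runnable round trip. The file names are made up for the demo; as he says, a plain .tar needs neither z nor j:

```shell
mkdir -p demo && echo "hello" > demo/file.txt
tar -cf demo.tar demo        # -c create, -f archive file
rm -rf demo                  # throw away the original
tar -xf demo.tar             # -x extract; add z for .tar.gz, j for .tar.bz2
cat demo/file.txt            # prints "hello"
```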
<CrummyGummy> maswan: The proliant stuff, hpacm and all. I've eventually come to the conclusion that it doesn't support my server anyway. DL 120 (simple gw boxen)
<CrummyGummy> Any ideas why the pxe boot on a server iso would install a generic kernel?
<CrummyGummy> cheers
<maswan> CrummyGummy: Ah, never heard of it. :)
<zul> mathiaz: ping did you get anywhere with mysql?
<mathiaz> zul: not yet - I finished the nss overlay upload for openldap
<zul> mathiaz: k
<mathiaz> zul: is it critical from a FF point of view ?
<didrocks> hi,
<didrocks> colin watson is the Maintainer of openssh package. Do I still have to change it to core dev?
<ScottK> Yes.
<didrocks> is it still from the old nomenclature?
<zul> mathiaz: kind of but I dont think the build should fail because of a stupid testsuite that apparently breaks in different spots
<mathiaz> zul: well - it may be related to the PIE upload from kees
<mathiaz> zul: if tests are failing randomly it may be that sort of issues
<mathiaz> zul: have you tried to rebuild it on the buildd a couple of times ?
<zul> mathiaz: it could be, it could also be the buildds
<mathiaz> zul: does it always fail on the same test ?
<mathiaz> zul: or it's just random tests that fails
<zul> no it doesnt
<zul> it failed on the very first test as well
<zul> ill take another look at it though
<jdstrand_> didrocks: fyi, I am going to change your debdiffs based on changes I made to ufw and UbuntuFirewall. The only changes are removing some things. I am going to use dpkg triggers so we don't have to play with the maintainer scripts
<jdstrand_> didrocks: we lose a little bit of functionality (ie, don't use '--add-new'), but it will be much easier to maintain and palatable to Debian
<jdstrand_> didrocks: when dpkg triggers supports arguments, I'll revisit --add-new functionality
<didrocks> jdstrand_: ok. Keep me in touch about the change so that I can update my others patches
<jdstrand_> didrocks: https://wiki.ubuntu.com/UbuntuFirewall#Integrating%20UFW%20with%20Packages
<jdstrand_> didrocks: but don't change anything yet, as I haven't uploaded the new ufw yet
<didrocks> ok, do you want me to change my debdiff when you upload ufw?
<jdstrand_> didrocks: there is a bit of a time crunch, so I'll just do it all in one shot
<didrocks> jdstrand_: so, for the moment, I shouldn't upload the other debdiffs (they were almost ready)?
<ScottK> sommer: All good.  Thanks.  Keep going.
<mathiaz> zul: I'll also try to build it on my build server
<mathiaz> zul: I'd also ask infinity about the buildd
<mathiaz> zul: something may have changed there that affects mysql tests.
<zul> mathiaz: k
<jdstrand_> didrocks: please upload-- it is much easier to pull out the bits
<didrocks> jdstrand_: ok, sorry, it will be the old with postinst and postrm ;)
<didrocks> (but I am interested in seeing with dpkg triggers :))
<jdstrand_> didrocks: no sorries needed-- it is a last minute change
<Abracadabra> Anyone here also in #ubuntu chan ?
<Abracadabra> I typed in caps, by mistake, and got banned :(
<uvirtbot> New bug: #261884 in openssh (main) "Add UFW profile integration with OpenSSH" [Undecided,New] https://launchpad.net/bugs/261884
<_ruben> hmm .. seems my perl got a bit confused .. now debmirror wont run
<_ruben> IO::Compress::Gzip version 2.008 required--this is only version 2.005 at /usr/share/perl5/Compress/Zlib.pm line 13.
<_ruben> yet /usr/share/perl5/IO/Compress/Gzip.pm shows being version 2.008 .. kinda odd i'd say
<ScottK-laptop> sommer: I think for arj you linked the wrong wiki page.
<sommer> ScottK-laptop: gar... I'll fix it
<sommer> good to go
<ScottK-laptop> Thanks.
<leonel> ScottK-laptop: anyone working in Mir for  unzoo ??
<ScottK-laptop> leonel: Maybe sommer?
<ScottK-laptop> He's been doing a bunch.
<ScottK-laptop> sommer: If leonel wants to do unzoo, how about you head on to spamassassin itself?
<ScottK-laptop> leonel: Did you see my changes in your clamav MIR?
<sommer> okay, I haven't done unzoo, so that'd be great
<sommer> I can throw down on spamassassin
<ScottK-laptop> Great.
<ScottK-laptop> leonel: unzoo is all yours.
<leonel> ScottK-laptop: yes and sorry for those errors ..
<leonel> ScottK-laptop:  ok
 * leonel goes to the zoo...
<ScottK-laptop> sommer: Only one small change on arj.  I think it was just copy/paste error.
<ScottK-laptop> leonel: No problem.  Thank you for helping.  This is a learning process, so no one expects it perfect the first time.
<sommer> ScottK-laptop: cool
<didrocks> jdstrand_: I will upload openssh and cups in a few minutes
<didrocks> I'm finishing some additional tests :)
<ScottK-laptop> sommer: Instead of dropping madcoder's "This package was debianized by ..." add a line under it that says it is currently maintained by you.
<sommer> ScottK-laptop: which file is that?
<jdstrand_> didrocks: thanks!
<jdstrand_> didrocks: I'm processing them now
<ScottK-laptop> sommer: debian/copyright.
<sommer> ScottK-laptop: ah will do
<jdstrand_> didrocks: if you're planning on doing any others, just check UbuntuFirewall for what to do
<didrocks> jdstrand_: hum, not others at the moment. I will wait for your modification :)
<didrocks> just openssh and cups
<ScottK-laptop> sommer: DEB_PATCHDIRS := debian/patches isn't required for simple.patchsys.  Only for quilt.
<jdstrand_> didrocks: ok. fyi-- ufw 0.21 is uploaded and all modifications in place
<jdstrand_> (though it may not be mirrored yet)
<sommer> ScottK-laptop: gotcha, that was in there from the original... removed
<ScottK-laptop> sommer: Did you combine debian/runtime.debian and debian/ldapscripts.conf?
<didrocks> I don't have it yet on the french intrepid server :)
<ScottK-laptop> If so, why? And you need to document it.
<sommer> ScottK-laptop: the debian/runtime.debian simply had some extra code to pull the bind suffix and binddn from libnss-ldap, and the new version doesn't need it
<sommer> ScottK-laptop: ah, I'll add that to the changelog
<ScottK-laptop> sommer: Right.
<ScottK-laptop> sommer: Generally, what I'd suggest is grab the current 1.7.1 package and yours and then diff the debian dirs for each.
<ScottK-laptop> sommer: From the changelog, you should generally be able to understand why all that diff is there.
<sommer> ScottK-laptop: makes sense
 * sommer diffing 
<ScottK-laptop> sommer: Feel free to ping me when you have an updated diff.
<didrocks> jdstrand_: I assigned you the second. Both patches are uploaded
<sommer> ScottK-laptop: cool will do, thanks man
<didrocks> jdstrand_: but there is a mystery in the cups package. There is no dh_installdirs instruction in debian/rules but cups.dirs is used (I tested it)
<jdstrand_> didrocks: I'm assuming the file gets installed ok?
<didrocks> jdstrand_: yes, everything all right, even if /etc/ufw/applications.d does not exist :)
<jdstrand_> didrocks: well, there are quite a few ways to do this sort of thing...
<didrocks> jdstrand_: the package seems to use debhelper. But I found no occurrence of *.dirs
<zul> kees: how does mysql use the hardening-wrapper?
<didrocks> (back in a few minutes, dinner time)
<kees> zul: the wrapper is installed as a build-dep, and enabled via the rules file's "export DEB_BUILD_HARDENING=1"
<zul> kees: thanks
<kees> zul: there's talk on #u-d about it right now (doesn't seem to be the hardening bits)
<sommer> ScottK-laptop: diff updated
 * ScottK-laptop looks
<ScottK-laptop> sommer: Why did you add the dirs file?
<sommer> ScottK-laptop: I followed the packaging guide
<sommer> ScottK-laptop: and whichever command, I forget, adds those files... so I thought why not :)
<sommer> or are they not really needed?
<ScottK-laptop> I don't think it told you that or it needs to be fixed.  You only need to list empty dirs that the package needs.  If you install the files, the dirs get created automagically.  The only other case is if you are mv'ing files into the dir.
<ScottK-laptop> I'm fairly certain if that were actually needed in this case, the package would have failed to build before.
<sommer> ah I see, I'll get rid of em
<ScottK-laptop> Is Removed debian.runtime ... related to Package no longer needs .... ?
<ScottK-laptop> sommer: ^^
<Koryo> hi all ^^
<sommer> ScottK-laptop: yes, I was trying to explain the removed dependencies... so I guess I don't need to re-explain the debian.runtime file?
<ScottK-laptop> I was thinking make one a sub entry of the other.
<sommer> sure, makes sense
<ScottK-laptop> sommer: Something like this (if it's accurate) http://paste.ubuntu.com/40970/
<sommer> ScottK-laptop: that says it very well... updating
<ScottK-laptop> sommer: Also, instead of just Fixes LP #49007, LP #114006, LP #126759, and LP #256091., a few words (a line maybe) about each.  End users can get debian/changelog presented to them when they install.
<uvirtbot> Launchpad bug 49007 in ldapscripts "The script is looking for /etc/ldap.secret" [Undecided,Confirmed] https://launchpad.net/bugs/49007
<uvirtbot> Launchpad bug 114006 in ldapscripts "Dependancies may be too rigid." [Undecided,Incomplete] https://launchpad.net/bugs/114006
<uvirtbot> Launchpad bug 126759 in ldapscripts "_ldapinit doesn't work with default slapd configuration" [Undecided,New] https://launchpad.net/bugs/126759
<uvirtbot> Launchpad bug 256091 in ldapscripts "ldapscripts in hardy tries to read /etc/pam_ldap.conf" [Undecided,New] https://launchpad.net/bugs/256091
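ScottK-laptop's "a line about each" suggestion, applied to the four bugs uvirtbot just expanded, might read like this in debian/changelog (the version string and the exact wording are placeholders, the bug numbers are the real ones):

```
ldapscripts (2.0.0-1ubuntu1) intrepid; urgency=low

  * New upstream release, fixing among others:
    - no longer looks for /etc/ldap.secret (LP: #49007)
    - overly rigid dependencies relaxed (LP: #114006)
    - _ldapinit works with the default slapd configuration (LP: #126759)
    - no longer tries to read /etc/pam_ldap.conf (LP: #256091)
```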
<leonel> ScottK-laptop: LP #261938
<uvirtbot> Launchpad bug 261938 in unzoo "MIR report for unzoo" [Undecided,New] https://launchpad.net/bugs/261938
<ScottK-laptop> Looking.  Thanks.
<ScottK-laptop> leonel: Looks pretty good.  My only comment is that it's a recommends of clamav, not a depends.  Please change that and I'll subscribe ubuntu-mir.
<leonel> ScottK-laptop:  done
<ScottK-laptop> OK.  Thanks.
<ScottK-laptop> Now we wait and be ready to answer questions for the MIR team.
<leonel> Great !
<ScottK-laptop> sommer: Since you don't actually maintain the package in Ubuntu, the thing about currently maintaining in debian/copyright should go only in the Debian version of your package.  Just delete the line here.
<didrocks> jdstrand_: are you still there?
<ScottK-laptop> sommer: Assuming it actually builds and works, I think that's it.
<sommer> ScottK-laptop: I'll upload another diff in a second, just to be sure
<ScottK-laptop> OK.
<jdstrand_> didrocks: I am
<jdstrand_> didrocks: apache uploaded
<didrocks> jdstrand_: yeah, I saw that, thanks :)
<jdstrand_> thank you!
<didrocks> but because mysql is broken for all architectures apart from i386, there is a FTBFS for all architectures but i386
<didrocks> :(
<zul> didrocks: yes we know
<jdstrand_> that'll clearly have to wait ;)
<didrocks> jdstrand_: and you will ask for a rebuild then ?
<didrocks> (with the same version number)
<sommer> ScottK-laptop: diff updated
<jdstrand_> didrocks: others are taking care of it. once the dust settles, we can resubmit
<jdstrand_> it'll have to be a FFe
<didrocks> jdstrand_: ok :)
<ScottK-laptop> jdstrand_: Did you see my last comment on Postfix and UFW profile?
<ScottK-laptop> sommer: Looking.
<jdstrand_> ScottK-laptop: regarding 'Submission'
<jdstrand_> ?
<ScottK-laptop> jdstrand_: Yes.
<jdstrand_> ScottK-laptop: yes :)
<ScottK-laptop> jdstrand_: Can you have multiple profiles per package?
<jdstrand_> ScottK-laptop: also, ufw uses dpkg triggers now, so we don't have to fiddle with maintainer scripts
<jdstrand_> ScottK-laptop: you can-- apache has 3
<jdstrand_> ScottK-laptop: I already uploaded postfix, but we can tweak it later if needed
<ScottK-laptop> jdstrand_: Then postfix might want postfix, postfix-smtps, and postfix-submission.
<ScottK-laptop> OK
<jdstrand_> ScottK-laptop: right now, it has 'Postfix' and 'Postfix Submission'
<jdstrand_> ScottK-laptop: we also don't use the '--add-new' at this point, so no ports are opened via package installation (regardless of what DEFAULT_APPLICATION_POLICY is)
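The profiles jdstrand_ is describing are small INI-style files dropped into /etc/ufw/applications.d (mentioned earlier in the channel). A sketch of what a 'Postfix Submission' profile could look like, with the real field names but illustrative title/description text (587/tcp is the standard submission port):

```
[Postfix Submission]
title=Mail server (Submission)
description=Postfix submission service
ports=587/tcp
```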
<ScottK-laptop> OK.
<jdstrand_> that'll come later-- as in intrepid+1 or later depending on the status of dpkg triggers functionality
<ScottK-laptop> Right.
<egze-qype> Hi guys
<ScottK-laptop> I think postfix and postfix-submission is probably enough.
<ScottK-laptop> smtps is deprecated anyway.
<egze-qype> can someone help me with configuring postfix please?
<ScottK-laptop> egze-qype: Perhaps.  What's your question.
<jdstrand_> ScottK-laptop: well, we have time to think about it :) the idea behind the profiles is to not have too many-- just the most common (otherwise there are so many choices and it is no easier for the user)
<ScottK-laptop> Tight.
<ScottK-laptop> Err
<egze-qype> i'm testing my setup in telnet. sending mails to localhost works fine. but when I try to send a mail to my gmail account, I get this 550 5.1.1 <my_email@gmail.com>: Recipient address rejected: gmail.com
<ScottK-laptop> Right.
<ScottK-laptop> egze-qype: Find in /var/log/mail.log where that happens and pastebin it and the several lines before it.
<egze-qype> ScottK-laptop http://pastebin.com/m7b1b3b8a
<egze-qype> any ideas?
 * ScottK-laptop looks
<egze-qype> want me to paste also my conf file?
<ScottK-laptop> sommer: I'm doing some more formatting work on your debian/changelog.  I'll just fix it and then you can look at it if you don't mind.
<ScottK-laptop> egze-qype: Just a moment.
<uvirtbot> New bug: #261952 in samba (main) "passwd does not work whith special smb.conf options (maybe a libpam-smbpass bug?)" [Undecided,New] https://launchpad.net/bugs/261952
<sommer> ScottK-laptop: heh, sure... a couple of the bugs are resolved by removing the old dependencies, so I just moved them up :)
<leonel> ScottK-laptop: in case clamav gets into main, what about the patches? do we keep sending diffs as we do today?
<ScottK-laptop> egze-qype: Yes.  Please pastebin the output of postconf -n
<ScottK-laptop> leonel: Yes.  Canonical will do it if we don't, but they appreciate the help.
<leonel> great
<ScottK-laptop> Also there will be USN whenever it's updated.
<leonel> perfect
<uvirtbot> New bug: #261956 in mysql-dfsg-5.0 (main) "libmysqlclient15off requires mysql-common >= 5.0.67-0ubuntu2" [Undecided,New] https://launchpad.net/bugs/261956
<ScottK-laptop> sommer: Try running lintian on the .deb (use the current Intrepid version).
<ScottK-laptop> sommer: I think E: ldapscripts: FSSTND-dir-in-usr usr/man/ must be fixed before an upload.  The rest can wait for a rev 2 if you want.
<ScottK-laptop> egze-qype: Are you getting your postconf -n?
<sommer> ScottK-laptop: spamassassin done, bug #261968
<uvirtbot> Launchpad bug 261968 in spamassassin "MIR for spamassassin" [Undecided,New] https://launchpad.net/bugs/261968
<ScottK-laptop> sommer: Great.
 * lamont mutters at jdstrand, makes a note to go fetch postfix and push the changes into the VCS tree
<ScottK-laptop> hey boss.  I tried to hold them off until you showed up.
<jdstrand_> lamont: now hold on a sec :) there were already changes made to that tree that weren't in git IIRC (and bind9)
<jdstrand_> lamont: I was gonna get you the git stuff later, but I'm under a serious time crunch atm
<lamont> heh. no worries
<lamont> jdstrand_: ah... it's kees' fault.
<jdstrand_> lamont: do you want me to give you the debdiffs I uploaded for postfix and bind9? or get you the git stuff when I have time (probably tomorrow)
<jdstrand_> (the debdiffs are different than what was submitted)
<kees> erg?
<lamont> jdstrand_: no worries - it's more a question of if you had multiple changes, getting them as separate diffs would be wonderful...
<lamont> kees: just blaming you for the ubuntu1 bind9 version (hardening)
<lamont> which is really just me being a slacker and not uploading it before you did
<lamont> btw, thanks for testing that for me. :-)
<kees> ah! heh, sorry.  I wasn't sure with some of the packages which were in VCS.  the control files can be vague.  :P
<lamont> LOL
 * lamont looks around for his trusty trout
<lamont> jdstrand_: with the exception of apparmor being overly aggressive to certain bind9 instances (due to untested integration issues with certain universe packages), you haven't given me any cause to worry about your changes.
<lamont> kees has done even better. :)
<jdstrand_> \o/
<jdstrand_> lamont: if you give me bug reports, I can update our qa-regression-testing scripts accordingly
<jdstrand_> lamont: to fix up the bind9 apparmor stuff
<lamont> jdstrand_: dnscvsutil is the one that had things, um, beyond what we expected.
<kees> jdstrand_: while you're in there, could you add a rw rule for /var/log/named ?  I've seen that location used for custom logging.
<lamont> and that got fixed
<lamont> oh.  cool
<lamont> kees: I'll make time to get that change in before the freeze-after-tomorrow :-)
<lamont> if jdstrand_ isn't already still pending-upload on bind9
<kees> lamont: ah, very good.
 * lamont wonders when the bzr frontend to talk to a git repo will be done
<jdstrand_> lamont: bind9 is uploaded (for the ufw stuff)
<lamont> right
<kees> zul: this upstream bug fix for mysql is giving me a headache.
<jdstrand_> I won't get to the apparmor today, but made a note of it
<lamont> I just pulled the source for all versions-extant from archive.u.c, so I can rationalize the source
<jdstrand_> (it's not FF sensitive anyway)
<lamont> jdstrand_: right.  for that matter, just blat a bug into launchpad, since I scrape that before I upload anyway
<zul> kees: how so?
<jdstrand_> ok
<kees> zul: they hardcoded their build path in the error messages....
<kees> +mysqltest: At line 1: expecting a SQL-state (00000) from query 'remove_file /misc/mysql/forest/23921/51-23921/mysql-test/var/tmp/test_nonexistent.tmp' which cannot produce one...
<kees> which, obviously fails when I try to build... *roll eyes*
<zul> kees: heh sounds like quality stuff
<kees> zul: yeah, I'm digging around in their vcs to see if they've improved on it...
<zul> kees: im guessing they havent
<kees> and LP keeps timing out.
<egze-qype> ScottK-laptop sorry, was out getting food. here's my postconf http://pastebin.com/m54a909de
 * ScottK-laptop looks
<lamont> kees/jdstrand_: I assume there's no reason that your changes would be a bad thing to upload to debian?
<ScottK-laptop> egze-qype: Why did you set default_transport = error and inet_interfaces = loopback-only ?
<egze-qype> ScottK-laptop it was set by default like this. Is this wrong? I only want to send emails, and not to receive them
<egze-qype> and only send from localhost
<ScottK-laptop> egze-qype: When you installed postfix, what option did you pick?
<jdstrand_> lamont: mine would be odd at this point because ufw isn't in debian yet. on todo list for after debian freeze
<ScottK-laptop> egze-qype: inet_interfaces and default_transport affect sending.
<egze-qype> ScottK-laptop Local only
<kees> lamont: afaik, it should be fine.
<lamont> thanks
<ScottK-laptop> egze-qype: inet_interfaces would be how you want it to send the mail out.  With loopback only it won't get far.
<ScottK-laptop> egze-qype: You don't want local only if you want to send mail out.
<egze-qype> ScottK-laptop should I pick Internet-Website then?
<fostermarkd> of course it can send mail out
<fostermarkd> inet_interfaces refers to what IP it _listens_ on
<fostermarkd> which is for accepting mail, not for sending it
<ScottK-laptop> egze-qype: Run sudo dpkg-reconfigure postfix and then pick internet.
<lamont> jdstrand_: there's a difference between "odd" and "broken" :-)
<jdstrand_> lamont: it certainly won't break anything-- just drops a file somewhere :)
<ScottK-laptop> fostermarkd: Right.  Sorry.
<ScottK-laptop> egze-qype: He's right.
<egze-qype> wohoo. reconfigured as Internet and the sending works
<ScottK-laptop> Since only loopback addresses are in mynetworks anyway, it won't actually change anything to make inet_interfaces loopback only.
<ScottK-laptop> OK.
<lamont> jdstrand_: exactly
<egze-qype> How can I make sure that the mails can be sent only from localhost?
<ScottK-laptop> egze-qype: mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 does that for you already.
<didrocks> jdstrand_: thanks for the upload :)
<jdstrand_> np!
<egze-qype> ScottK-laptop nice! So I don't need to protect it with a password if it only accepts requests from localhost? Or I should still think about security?
<ScottK-laptop> Your config didn't have any SASL in it, so the only place it will accept mail from is one of those addresses (which are all localhost addresses).
<egze-qype> ScottK-laptop thanks a lot!
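The send-only setup that emerged from this exchange, sketched with postconf (values copied from the discussion; run after `sudo dpkg-reconfigure postfix` and picking "Internet Site" — exact defaults vary by release):

```shell
# Send-only postfix, accepting mail from localhost clients only (a sketch
# of the settings discussed above, not a complete main.cf).
sudo postconf -e 'inet_interfaces = loopback-only'   # nothing external can connect
sudo postconf -e 'mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128'
# default_transport = error is what blocked outbound mail in the paste;
# smtp is the normal default and restores outbound delivery:
sudo postconf -e 'default_transport = smtp'
sudo /etc/init.d/postfix reload
```

With no SASL configured, only the loopback addresses in mynetworks can relay, which is why no password is needed for a localhost-only sender.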
<sommer> ScottK-laptop: new diff uploaded
 * sommer didn't know lintian worked on .debs, coolness
<lamont> sommer: it uses the .changes just far enough to get the list of debs. :-)
<sommer> ah
<sommer> I sometimes feel like lintian is just waiting to betray me :-)
<kees> fetching all of the mysql bzr tree: not quick.
<basicshwa> I need help with two things
<basicshwa> I installed ubuntu server on my laptop. I then installed fluxbox because I have a taste for minimalism. I need to now make sure fluxbox starts on bootup and I need to learn how to install programs via apt-get
<basicshwa> I only need basic programs to basically do simple things such as play media. thats about it really
<slicslak> proftpd still the recommended ftp server?
<nxvl> jdstrand_: jamie, are you updating Samba's patch or waiting for me to work on it?
<jdstrand_> nxvl: I am updating it
<jdstrand_> nxvl: everything else is uploaded
<jdstrand_> nxvl: thanks for your help on this
<jdstrand_> nxvl: if you plan to do more, be sure to see the new procedures in UbuntuFirewall (it's simpler now, cause I'm using dpkg triggers)
<jdstrand_> though note, I won't be able to sponsor more today :)
<ScottK-laptop> sommer: Uploaded.  Thank you for your contribution to Ubuntu.  Please fix up the rest of the lintian warnings (except the no versioned depends on debhelper one) and make that a revision 2.
<ScottK-laptop> sommer: I recommend you pull your package off of mentors and fix this stuff first.
<sommer> ScottK-laptop: awesome will do, thanks again
<ScottK-laptop> slicslak: My recommendation is to use another, more secure protocol instead.
<sommer> ScottK-laptop: when I've fixed the errors should I just file a new bug?
<slicslak> ScottK-laptop, :)  ssh(fs) might be more secure from an outside attack but less secure from an inside attack.
<nxvl> jdstrand_: yeah, i saw the uploads, thank you for sponsoring them
<nxvl> :D
<uvirtbot> New bug: #261999 in mysql-dfsg-5.0 (main) "libmysqlclient15off broken in intrepid amd64" [Undecided,New] https://launchpad.net/bugs/261999
<nxvl> jdstrand_: you have plans to add ufw to debian?
<jdstrand_> nxvl: yes
<ScottK-laptop> slicslak: From my perspective anything that transmits a password in the clear is a non-starter, so for anything non-anonymous, no ftp I say.
<basicshwa> ok, So, I installed ubuntu server edition. Then I sudo apt-get install xorg fluxbox fluxconf in command line. I then installed mplayer by apt-get install mplayer in command line. I went to go to systems in fluxbox and.... systems was not there
<slicslak> ScottK-laptop, true, but you can enforce TLS
<ScottK-laptop> I suppose that works.  It's been long enough since I used ftp, is it common that clients support that?
<ScottK-laptop> basicshwa: As soon as you said 'installed fluxbox' it wasn't really a server question anymore.  I suggest #ubuntu
<slicslak> old clients don't - or do so buggily.  but new ones do.  and clients like filezilla will automatically try to use TLS without being told to.  so i always enforce it and then tell clients to d/l filezilla (which is a great ftp app anyway)
<Goosemoose> anyone have a good preseed file example for hardy? In particular I'm interested in getting the computers to join an AD Domain
<Goosemoose> and login authentication via AD
<freaky[t]> can anyone help me with this: http://main.freakyy.de/raid.txt i get this when starting my server it's something about a raid array ... http://main.freakyy.de/sda.txt <-- this is the smartctl -d ata -a /dev/sda output
<maw> anyone running snort on Ubuntu 8.04?
<ScottK-laptop> sommer: FYI, my review of your package also resulted in Debian Bug#496826 being filed.
<lukehasnoname> so apparently murmur is NOT console server capable
#ubuntu-server 2008-08-28
<lukehasnoname> Can I config sshd to listen on multiple ports?
<Kamping_Kaiser> yes
<lukehasnoname> Like, in /etc/ssh/sshd_config, put in more than one "Port" listing?
<Kamping_Kaiser> have multiple Port lines
<lukehasnoname> k
<lukehasnoname> Because not all programs that remote in take a custom port switch, but when I'm off my dorm LAN I can't get on port 22
<Kamping_Kaiser> i suspect yourdoingitallwrong (tm)
<lukehasnoname> ;_;
<lukehasnoname> Kamping_Kaiser: how then, do I get virt-manager to tunnel ssh on a custom port?
<Kamping_Kaiser> lukehasnoname, no idea what that is (and i cant stay around to help, because i'm off to work
<Kamping_Kaiser> gl with it
<lukehasnoname> later.
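Kamping_Kaiser's suggestion as a concrete sketch (port 443 is only an example of a port that campus firewalls rarely block):

```shell
# sshd accepts repeated Port directives and binds every one of them.
# Append a second Port line to sshd_config and restart sshd:
echo 'Port 443' | sudo tee -a /etc/ssh/sshd_config
sudo /etc/init.d/ssh restart
# Verify both listeners are up:
sudo netstat -tlnp | grep sshd
```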
<leonel> packages.ubuntu.org says that hardy has postgresql 8.3.1 and really has 8.3.3; same for gutsy: it says postgresql 8.2.6 and really has 8.2.9
<nxvl> mathiaz: it looks like lucas is mad with you
<nxvl> :S
<ScottK> nxvl: lucas isn't the only one.
<ScottK> mathiaz: That gems change got no support when it was discussed.  Why in the world did you upload it?
<nxvl> ScottK: being mad doesn't help fix things; it's better to discuss it calmed down
<ScottK> nxvl: I'm not upset now.  I think it showed very poor judgement and disrespect for the community.
<mathiaz> ScottK: I reread the thread and I didn't find there was strong disagreement about not doing it.
<ScottK> mathiaz: Nobody was in favor.
<ScottK> mathiaz: It is wrong all the way around.
<ScottK> OK.  I take that back.   Darren Hinderer liked it.
<ScottK> But given that he's a RoR developer he would.
<ScottK> nxvl: Propose a fix then.
<nxvl> what did I break now?
<mathiaz> ScottK: IIUC, your main concern is that binaries installed by the gem commands are located in /usr/local/bin and that would take precedence over binaries in /usr/bin ?
<ScottK> The RoR thing.
<ScottK> From a packaging/technical perspective yes.
<ScottK> Not having reviewed the code, I didn't understand the degree to which you were forking the package.
<ScottK> That's a serious concern too.
<mathiaz> ScottK: so you have two serious concerns ?
<ScottK> Yes.
<mathiaz> ScottK: or only the fact that there is a serious fork ?
<ScottK> The fork and random versions taking precedence over system installed packages.
<ScottK> Two.
<mathiaz> ScottK: on the packaging bits, we're using hooks that are in upstream code repository - so when 1.3.0 is out, the debian maintainer will be able to use it.
<mathiaz> ScottK: I agree that changing the patch system is the best move.
<ScottK> I think that was VERY bad.
<mathiaz> ScottK: OTOH all patches in 1.2.0 are included in upstream code.
<ScottK> That's not relevant.
<mathiaz> ScottK: so once 1.3.0 is packaged there isn't a patch system needed anymore
<ScottK> Say lucas thought your change was wonderful and wanted to incorporate it?
<ScottK> You've built your change set in a way that's incompatible with his package and made it much harder than it needs to be.
<mathiaz> ScottK: incorporate it in the current version of debian (lenny) ?
<ScottK> It shows you've no intent of working with Debian.
<ScottK> Possibly.  The point is you did your change in a way that makes it hard to feed back to Debian.
<ScottK> Whether he tries to get a freeze exception or not is up to him.
<ScottK> We shouldn't presume.
<ScottK> Ubuntu tries to show it works hard to push things back to Debian and then incidents like this put us in a very bad light.
<ScottK> I think as far as that goes lucas' reaction to the change speaks for itself.
<mathiaz> ScottK: if Lucas wants to incorporate our work, he can just grab the debian/operating_system.rb
<ScottK> Keep in mind that as Debian developers go, he's very pro-Ubuntu.
<ScottK> What possible benefit was there to changing the patch system?
<mathiaz> ScottK: that's where all the update-alternatives plumbing is done.
<mathiaz> ScottK: none - it was not a good move, as I stated before.
<ScottK> OK, but he still has to redo the patches or redo his package.
<ScottK> ?? [22:01] <mathiaz> ScottK: I agree that changing the patch system is the best move.
<mathiaz> ScottK: there isn't any patches to do to implement the update-alternatives system.
<ScottK> But there are patches and you did change the patch system.
<mathiaz> ScottK: debian/operating_system.rb relies on hooks that are already in upstream.
<mathiaz> ScottK: correct - there is 1 patch.
<ScottK> Right.  So lucas could, if he wanted, either wait and get the new upstream or redo all the patches; what he cannot do is take advantage of the Ubuntu patch without rework.
<mathiaz> ScottK: the current 1 patch in Ubuntu relies on upstream hooks IIRC - so lucas would have to update to the new upstream version first.
<ScottK> OK.
<ScottK> He still can't use the patch without rework.
<soren> What's the name of the offensive package again?
<mathiaz> ScottK: correct - as I said, changing the patch system wasn't a good idea
<mathiaz> soren: libgems-ruby
<sommer> mathiaz: I think you may have had a typo earlier
<sommer> mathiaz: about the patch system
<mathiaz> ScottK: sommer correct
<mathiaz> ScottK: I made a typo before - unfortunately it was a typo at a bad moment
<ScottK> OK.
<mathiaz> ScottK: so what was your other concern ?
<ScottK> files installed by gems taking precedence over ones installed through the packaging system.
<ScottK> This isn't the usual /usr/local situation where we can assume that if the admin installs something in there he wants it to take precedence.
<mathiaz> ScottK: so binaries installed by gems are available in /usr/local/bin/ which takes precedence over package system.
<ScottK> Yes.
<mathiaz> ScottK: why - he is using the gem command to install something that is not in the archive ?
<soren> How are these different scenarios? I know nothing about gems or the issue at hand. I'm just curious.
<ScottK> mathiaz: Gems embed the entire application stack that they need.
<nxvl> soren: ubuntu-motu ML Lucas mail
<ScottK> They have no idea what's already installed.
<mathiaz> soren: imagine that you use easy_install and get a script in /usr/local/bin automatically.
<ScottK> Except more crackish, less integrated, and more opaque.
<ScottK> So a Gem always brings everything IT needs.  Regardless of the installed state of the system.
<mathiaz> ScottK: right - and ?
<ScottK> The Gem installing stuff in a location where it can find it, but is not in the system path is a reasonable compromise and about the best one could do today.
<ScottK> mathiaz: So it's the RoR equivalent of DLL hell.
<soren> ScottK: Ok. Even stuff that's already installed? It only assumes the presence of a ruby interpreter?
<ScottK> soren: As I understand it, yes.
<mathiaz> soren: yes - the gem command doesn't know about the ruby libraries installed by dpkg/apt.
<ScottK> ez_install has at least been patched into submission to know about stuff that's already installed through that package management system.
<ScottK> This is much worse.
<mathiaz> soren: it still knows which one it has installed though.
<soren> mathiaz: Well, it wouldn't have to know about packaging systems. It could just do an "import foo" (or whatever it's called in ruby) and see if it works. If not, go fetch the missing crack.
<ScottK> The problem is that other packages will use the selected alternative in /usr/local/bin.
<mathiaz> ScottK: ok - so if a package relies on a binary from a specific gem, and there is another version of the gem installed with the gem command, the deb package would use the version in /usr/local/bin instead of the one provided by the corresponding package.
<ScottK> Right.  Or if there happened to be the same file installed via the packaging system (recall \sh's imagemagick example).
<mathiaz> ScottK: don't remember - could you give me example ?
<ScottK> The example he gave was, "I had this at one occasion, there was this imagemagick gem and this module was only working with a special imagemagick version, so it shipped it together with the other cruft, but instead of installing it somewhere where this imagemagick lib didn't hurt, it was just a smartass and installed it in /usr/lib, overwriting the distro imagemagick."
<mathiaz> ScottK: right - that was a problem with the upstream gem, which was bad. I don't see how the version I've uploaded would have made things worse in that case ?
<ScottK> Right, but if it'd let it be installed in a 'normal' place instead of forcing things, then with your change, it's in /usr/local and even though it doesn't overwrite the distro version, the effect is the same.
<mathiaz> ScottK: I take this as an example that there exists upstream gems that are wrong.
<mathiaz> ScottK: I don't understand what you meant. what would be in /usr/local ?
<mathiaz> ScottK: only the binaries declared by the gems would have a symlink in /usr/local/bin
<mathiaz> ScottK: all the rest would end up in the usual place (/var/lib/ruby1.X/gems/imagemagick) if the gem was using the gem calls
<mathiaz> ScottK: or in /usr/lib if the gem was bypassing everything
<NCommander> nothing should be added to /usr/local by any package
<NCommander> The right fix isn't fixing gems, it's packaging ruby gems into APT, just like the python and perl debian groups do
<mathiaz> NCommander: the libruby-gem package doesn't add anything to /usr/local/. The gem command does everything.
<NCommander> mathiaz, it symlinks things into /usr/local
<NCommander> That's enough to be a policy violation
<NCommander> or should I say
<NCommander> It configures gems to do that
<mathiaz> NCommander: *it* = the gem command, not while installing the libruby-gem package.
<NCommander> A bad policy at best since then apt can't remove everything correctly
<NCommander> If you want to wreck the consistency of users' systems, then having gems' package manager exist at all is a miserable idea
<NCommander> Neither perls' CPAN or php's PEAR are supported
<NCommander> We repackage all those modules
<mathiaz> NCommander: slangasek responded on bug https://bugs.launchpad.net/ubuntu/+source/libgems-ruby/+bug/262063
<uvirtbot> Launchpad bug 262063 in libgems-ruby "rubygems bin in PATH potentially breaks other applications and violates all sense of decency in packaging." [High,Confirmed]
<ScottK> I think slangasek is right about that.
<ScottK> We aren't actually installing stuff in /usr/local.  We're just a facilitator.
<NCommander> Ok
<NCommander> I concede the point
<NCommander> But I still think it's bad practice to allow gems to install packages outside of APT's control
<ScottK> Doesn't make it a good idea however.
<NCommander> Having multiple package managers is kludgy at best
<ScottK> Just because you are dealing the crack instead of injecting it doesn't make you innocent.
<mathiaz> Can you install CPAN module directly from perl ?
<NCommander> yes, you can, but it's deprecated
<NCommander> It's only recommended if you're running unstable, and the module you need is not available via any other means, and even then it's still discouraged
<NCommander> (it ends up in /usr/local I believe, so it will still override any package installed by APT later on)
<ScottK> Just as you can install Python modules through distutils or ez_setup.  One of the big differences though is that even ez_setup knows what modules are already installed.
<mathiaz> right - but it's still possible. What is the difference between doing a gem install of something and downloading a tarball and running ./configure; make; make install ?
<NCommander> Because that should be a method of last resort
<NCommander> If a package is not available via APT
<NCommander> Fine, install it via gems
<NCommander> But thats the only case
<NCommander> The correct and proper solution is to package the gems individually
<ScottK> But don't have the gems functionally replace stuff that's installed through the package system.
<mathiaz> NCommander: I don't deny that. but what if they're not available ?
<ScottK> mathiaz: Then don't put it in the path where it can mess other stuff up.
<NCommander> mathiaz, the user should be able to use gems, I agree, but it should not go into the PATH, and not affect the general usage
<NCommander> Make sure gems prints a giant warning label
<mathiaz> ScottK: well - as stated by NCommander, if you install a CPAN module, it will override the system modules.
<NCommander> I consider that a bug in CPAN
<NCommander> Its because Perl offers no alternative
<ScottK> mathiaz: That doesn't make it a good idea.
<mathiaz> so what's the point of having /usr/local/bin on the path then ?
<NCommander> mathiaz, that's for things users install themselves. It's dictated by the FHS
<NCommander> that may have actually changed in recent years, I haven't needed a perl module that wasn't packaged in APT in a very long time
<mathiaz> NCommander: exactly - and how does an end user install a ruby library ?
<NCommander> sudo apt-get install libruby-gems-*name*
<mathiaz> NCommander: via the gem command, instead of ./configure; make; make install
<NCommander> I'm telling you
<mathiaz> NCommander: and if it's not available ?
 * NCommander feels like a broken record
<ScottK> Right, but it doesn't just bring itself, it brings an entire application stack.
<ScottK> And that's the difference.
<NCommander> Then it's the user's responsibility to install it and possibly shoot themselves in the foot
<mathiaz> ScottK: it brings the dependencies needed to make the gem run.
<NCommander> Which can include binary modules
<ScottK> Yes, whether they are already installed or not.
<NCommander> Which may have bugs or break the ABI with things already installed
<NCommander> Assume it installs an expat update that breaks the ABI
<mathiaz> NCommander: correct - via the gem command - we're not trying to support ruby libraries installed via gem.
<NCommander> Suddenly GNOME doesn't work on the next restart
<ScottK> mathiaz: If I install a perl module or make install something I see what I get.
<mathiaz> ScottK: you can also do that with the gem command
<ScottK> If I install a gem, there's a whole train behind it that I don't necessarily get to see until it's too late.
<ScottK> mathiaz: But that's not the typical RoR usage.
<ScottK> mathiaz: I don't understand why each Gem can't just live in its own private namespace and not disturb anything.
<ScottK> It's going to bring the whole stack anyway, so it's not like it causes more code duplication.
<mathiaz> ScottK: what's the typical RoR usage ?
<ScottK> Developer bangs out cool looking application, stuffs it into a gem module and delivers it and moves onto the next project.
<NCommander> you install a gem
<NCommander> gem installs libraries
<mathiaz> ScottK: I agree that being able to teach the gem command to check if there is already a ruby library installed by dpkg/apt is another step in the right direction.
<NCommander> YOu install another gem which installs another version of the same library
<ScottK> mathiaz: I just don't understand why being in the path is of any benefit.
<ScottK> All I see is downside risk.
<NCommander> Having gems as a package manager exist separately is insane if the distribution provides a package manager
<mathiaz> ScottK: I would argue for user friendliness
<mathiaz> ScottK: if you install the rails gem, you'll get a rails binary
<mathiaz> ScottK: the rails binary relies on the rake binary to be called.
<mathiaz> ScottK: so the rails command doesn't work by default if you haven't modified your path
<ScottK> mathiaz: So modify the path.
<ScottK> I'm not at all convinced I shouldn't just revert this entire upload.
<ScottK> It doesn't make any sense at all.
<ScottK> Note: I'm not actually doing that.  That's just my perspective.
<mathiaz> ScottK: well - the reason for doing this upload is so that you won't have to modify the path.
<ScottK> Yes, you can just destroy your system instead and that will be better.
<ScottK> mathiaz: Doesn't ruby have the equivalent of sys.path.append?
<ScottK> In Python at least this is the most trivial thing to do in the world.
<mathiaz> ScottK: by destroying the system you mean that the end user could install random binaries in /usr/local/bin ? How is that different from an end user using ./configure; make; make install ?
<ScottK> If that had the potential to drag in lots of not clearly related files and supersede system functions, I'd agree.  It generally doesn't.
<ScottK> The problem is that Gems aren't at all transparent about what they will bring with them.
<ScottK> So the admin sees X and wants it and doesn't know about Y, Z, and AA.
<ScottK> AA causes problems and he didn't even know it was there.
<mathiaz> ScottK: there is the dependency command in gems.
<ScottK> So you're telling me that any admin who installs a gem has a clear understanding of the dependencies he's bringing and what that might affect?
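For reference, the dependency command mathiaz is referring to (the gem name is only an example):

```shell
# Show what a gem declares as dependencies before trusting it:
gem dependency rails
# For gems not yet installed locally, query the remote index instead:
gem dependency rails --remote
```

Note this only lists *declared* gem dependencies; it says nothing about binary libraries a gem may compile or bundle, which is the opacity ScottK is objecting to.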
<ScottK> mathiaz: What's the problem with an installed gem extending its path to include what it needs?
<mathiaz> ScottK: hm - it may be possible to patch the ruby interpreter to include /var/lib/ruby1.X/bin/ in its path if it exists.
<ScottK> That would resolve most of my technical concern.
<mathiaz> ScottK: however any shell scripts coming with the gem would *not* work
<ScottK> Right.
<mathiaz> ScottK: but that would still require the end user to modify its PATH to include /var/lib/ruby1.X/bin/
<mathiaz> ScottK: so that he can use the gem binaries directly.
<mathiaz> ScottK: that's the issue the upload is trying to solve.
<ScottK> Surely we can figure a way to add that to the environment for that user/gem.
<mathiaz> ScottK: well - according to the LSB you can drop things in /etc/profile.d/
<ScottK> Put the main application in /usr/local/bin and then stuff all the dependencies in /var/lib/ruby1.X/bin/
<mathiaz> ScottK: however, this violates the debian policy.
<ScottK> Then the gem can be started, we just need a way to have it notice /var/lib/ruby1.X/bin/.
<mathiaz> ScottK: well then all the binaries would end up in /usr/local/bin/
<mathiaz> ScottK: let's take the example of rails
<mathiaz> ScottK: the rails gem provides the rails binary.
<mathiaz> ScottK: and depends on the rake gem.
<mathiaz> ScottK: but you'd also want to have the rake binary available on the command line.
<mathiaz> ScottK: so while installing the rails gem, it would pull in the rake gem
<mathiaz> ScottK: the rake gem should install the rake command on the PATH since an end user may want to be able to use the rake command.
<ScottK> Which puts us on the road to perdition.
<ScottK> I see you want it entirely the way it is then.
<ScottK> I guess there's no point in further discussion.
<ScottK> I'll just have to consider if I want to take the heat for reverting it or not.
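The /etc/profile.d/ idea mathiaz floats, combined with ScottK's precedence concern, might look like this (the ruby1.8 path is an assumption; appending rather than prepending keeps packaged binaries in /usr/bin winning):

```shell
# Hypothetical /etc/profile.d/rubygems.sh: expose gem-installed binaries
# without symlinking them into /usr/local/bin.  Appending (not prepending)
# means anything installed by dpkg in /usr/bin still takes precedence.
if [ -d /var/lib/ruby1.8/bin ]; then
    PATH="$PATH:/var/lib/ruby1.8/bin"
fi
export PATH
```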
<deleter> (linux newb) I'm trying to install Ubuntu server 8.04.1, but I keep running into this error - "Please insert the disc labeled: 'Ubuntu-Server 8.04.1 _Hardy Heron_ - Release i386 (20080701)' in the drive '/cdrom/' and press enter. Media change
<deleter> i md5d the iso, verified the disc integrity, and am pretty sure the cdrom drive is not at fault, as it always fails at the same point (78% into the base system installation)
<deleter> I tried the forums, but although I found others with the error, I could not make out a solution
<deleter> anyone know what to do / have any ideas? Thanks
<ScottK-laptop> deleter: Did you try more than one CD anyway?
<deleter> yeah I'm on my 4th one...
<ScottK-laptop> OK.
<ScottK-laptop> No great ideas on my part then.
<azteech> have you attempted to download the iso from a different location and burn it again?
<deleter> yes, I don't think its the iso though because the hash matched the one online
<Zelut> not sure if this is the right place, but can anyone tell me where my inputted ufw rules are stored?
<wantok> /etc/ufw/ iirc
<Zelut> I see before.rules and after.rules, but don't see any of my custom rules in those files.
<wantok> you wont see the rules per-se, just the iptables
<azteech> deleter, then suggest trying another machine to do the download and burn and see if that helps ... if it doesn't .. then it could be the drive you are using to read the disk ...
<Zelut> wantok: right.  I'd like to manually edit a few of the iptables lines (i have a few in incorrect order)
<Zelut> problem is I just can't see any of my rules in ufw or iptables syntax
<Zelut> ahh, it's in /var/lib/ufw.
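For reference, the locations that turned up in this exchange (as found on the ufw of that era; later versions moved the user rules file):

```shell
ls /etc/ufw/           # framework rules: before.rules, after.rules, etc.
ls /var/lib/ufw/       # rules added via `ufw allow ...` end up here
sudo ufw status        # human-readable list of the rules you added
```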
<nealmcb2> ScottK, one question I have is what will best serve the average ruby or rails developer.  if gems is really popular, along with capistrano, and keeps things in sync itself, and deals with security issues, and works in a nice cross-platform way and allows users to track upstream better than ubuntu is likely to do, then we have to deal with that use case.  we could either explain to users how to do it ENTIRELY outside the package 
<deleter> turns out it was the cddrive, use of a different one led to success, thanks for the ideas
<ScottK> nealmcb2: None of which is an argument for putting every single gem that gets brought in when you install something in the system path.
<ScottK> nealmcb2: Not to mention the social aspect of blowing off every developer who gave comment on the ML and gratuitously forking the package from Debian.
<azteech> deleter: you are welcome ....
<nealmcb2> I certainly hear the frustration, and also agree with steve that it is a tough situation.  I haven't looked enough at the issues to say what the best path forward right now is
<nealmcb2> but I suspect we do want to figure out how to make it easy for users to use gems and capistrano
<nealmcb2> too bad the conversation that was started back a few months ago never really got off the ground
<nealmcb2> ScottK: finding ways to leverage the expertise of both ubuntu devs and ruby/gems devs is one challenge, from what I have seen
<nealmcb2> (and debian :)
<ScottK> We have a package management system and in the event of a conflict between that and something else, I have no doubt which we should go with.
<nealmcb2> then at this rate from what I'm hearing I'm guessing that we won't be much of a platform for ruby.  but that is just a guess, since I haven't looked at it in detail
<ScottK> The problem is that no attempt was made to try and make it work with the packaging system.
<ScottK> Every language that has a packaging system has to do this.
<ScottK> It's painful, but necessary work and they wanted a shortcut.
<nealmcb2> that could well be the case.  but dealing with conflicts between packaging systems is even harder than dealing with conflicts within a single packaging system
<nealmcb2> anyway I'm hardly the packaging expert.  I'm mainly trying to hold up a common use case and hoping we can address it
<ScottK> Right.  That's the hard part.
<ScottK> I thought we were having a good discussion towards compromise and all of a sudden he pulled back.
<NCommander> nealmcb2, I'm not upset with you over this patch, I'm upset with the people who approved it.
<NCommander> nealmcb2, and I can see it from your side of things, and at first glance, your solution isn't that bad until you realize what it means for APT :-)
<NCommander> and I hope this doesn't discourage you from further ruby contributions
<ScottK> nealmcb2: Was this your idea?
<nealmcb2> and I'm hoping we can appreciate folks for putting possible solutions forward, and recognize the inherent difficulty of the problems, without too much unhelpful venting.
 * NCommander re-reads the original bug description
<nealmcb2> ScottK: nope - not my idea....
<ScottK> OK.
<NCommander> Well, I personally want to see the bad patch get wiped
<nealmcb2> I did facilitate the server team meeting where it came up last week, but I was concentrating on the agenda, not the technical decisions
<NCommander> The proper method is we collaborate with Debian on packaging gems individually
<nealmcb> NCommander: how much do you know about how the average ruby and/or rails user works?  My sense is that gems is very widespread, but I haven't researched it a lot
<NCommander> Very little
<NCommander> I've tried Ruby on Rails
<nxvl> are you still fighting about the gems issue?
<NCommander> But I felt like I was fighting the tool more than anything else
<nealmcb> yeah - I think that's our problem - lots of ubuntu/debian expertise and not enough ruby user perspective
<NCommander> I find ruby on rails ATM to be more hype than being super-revolutionary.
<NCommander> http://www.oreillynet.com/ruby/blog/2007/09/7_reasons_i_switched_back_to_p_1.html
<NCommander> nealmcb, the way ruby does things with gems is the same problem we have when a user uses CPAN or PEAR directly
<nealmcb> of course.  and it is a hard problem.
<NCommander> (I'm familiar enough with gems to understand why the current setup is a bad thing, but I can't say I could build a gem now)
<nxvl> Django is kewl
<NCommander> nealmcb, the right (not the easy) solution would be to build a framework that can take a gem, and convert it to a debian package
<nealmcb> NCommander: and have folks installing packages from random repos?  with no security backing?
<NCommander> We can generate source packages that are part of the archive
<NCommander> The Debian perl group have a set of scripts for quickly debianizing cpan modules
<nealmcb> I think one aspect of this is that the ruby world is still moving very fast.  perhaps it will mature enough that our packaging will catch up.  anyway, I'm just hoping we can come up with a good answer, sooner rather than later
<wantok> for non-root users doesn't cpan install into ~/.cpan?
<NCommander> wantok, yeah
<NCommander> My personal feelings on ruby ATM though are generally that it's more hype than anything else
<NCommander> It requires you to think in a specific methodology
<NCommander> Reminds me of MFC actually
<wantok> i dont know how ruby's thing works at all, but if it doesnt do the same thing its not really the same as cpan at all
<NCommander> wantok, gems doesn't support (to my knowledge) local user installations
<NCommander> Perl and CPAN have almost 30 years of code behind them
<wantok> i dont deny it, i just felt i should note the difference between the perl 'you can shoot everyone or just yourself' and the (perceived) gems 'shoot everyone, or anyone'
<NCommander> I think with perl it is
<NCommander> "You can shoot yourself in the foot, but six months later, you'll have no idea how you did it"
<wantok> nm. you have regex to save yourself ;) *mwhwhahahaah*
<NCommander> well, python is working to kill perl as the glue language
<toshko> hi all
<toshko> Soft RAID1 problems (invalid raid superblock magic), ubuntu server 8.04.1, anyone?
<mm_202> Hey guys. I have a rather stupid question.  If I have a dir A, with dirs a,b,c, and a dir B, with dirs d,e,f.  If I do a mv B/ A/ will it overwrite dirs a,b,c?
<owh> mm_202: No, it will move directory B inside of A, giving you A/a A/b A/c and A/B/d A/B/e A/B/f
<mm_202> Sorry, I meant mv A/ B
<mm_202> to just move the contents of A to B.
<owh> mm_202: Well, go into the /tmp directory, then run mkdir -p A/a A/b A/c ... etc and test it for yourself.
<mm_202> okay, will do.
<mm_202> thanks, owh.
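owh's test-it-yourself suggestion, spelled out (safe to run; everything stays inside a temp directory):

```shell
# mv B A/ moves B *inside* A; it never merges into or overwrites A's contents.
cd "$(mktemp -d)"
mkdir -p A/a A/b A/c B/d B/e B/f
mv B A/
ls A                    # a, b, c plus the moved-in directory B
# To merge the *contents* of one tree into another, move the entries explicitly:
mv A/B/* A/ && rmdir A/B
ls A                    # now a, b, c, d, e, f side by side
```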
<owh> toshko: I cannot help you directly, but perhaps if you asked an actual question, someone here might be able to.
<toshko> owh: well this is the problem: install, configure raid1 with sda,sdb and the following message appears on a random basis at the start up screen (this is from the syslog file but the same is shown at the start).
<toshko> md: invalid raid superblock magic on sda
<toshko> md: sda does not have a valid v0.90 superblock, not importing!
<toshko> this is since ubuntu server 7.10 for me on several machines
<owh> toshko: Are the machines running the same hardware?
<toshko> no, it is not hardware issue, because the machines are different and i changed the mb and psu on the last one
<toshko> tested the hdds - no problem
<owh> Given that you've been having this issue for some time, I'd recommend putting your question with relevant background information, including the things you've tried and the hardware involved into an email and sending it to the ubuntu-server list.
<toshko> did it in the forum but will do in the list also
<toshko> thanks
<owh> toshko: Also, make sure that you don't get stuck into a single thought pattern, as-in "it's not hardware because...", you may well find the solution in a place where you didn't expect it.
<CrummyGummy> Elo
<CrummyGummy> In ubuntu-8.04-server-amd64.iso the pxeboot install installs the server kernel. Any ideas how I can change this?
<owh> CrummyGummy: The last time I looked at this was a little while ago, so what I'm telling you is not going to be accurate, but IIRC, you can configure exactly what happens with the appropriate config file. I recall setting it all up with several boot images and menu options. As I said, this isn't directly going to help.
<owh> CrummyGummy: The pxeboot process from memory works like a TFTP server which you can configure to use different boot images.
<owh> CrummyGummy: It sounds like you're using a ubuntu-server boot image, rather than a workstation.
<CrummyGummy> Sorry, I'm half a sleep still.
<CrummyGummy> The problem is that the server iso is installing a generic kernel, not the server kernel as expected.
<owh> CrummyGummy: Uhm, which boot stanza are you using because I think it might be pointing at the wrong thing.
<owh> (Bear in mind that as I said before, I've not done this for some time...)
<CrummyGummy> This is the current http://pastebin.com/m1c460e6a
<CrummyGummy> pxelinux.cfg
<owh> CrummyGummy: Well that's using the ubuntu-installer initrd, so I'd not be surprised if it's using the workstation kernel.
<owh> CrummyGummy: Where did the initrd come from?
<CrummyGummy> That initrd came from the ubuntu-installer directory on the server iso.
<CrummyGummy> (sorry, was on the phone)
<owh> CrummyGummy: You're sure that's where it came from, as-in, no mistake?
<owh> And while we're at it, there isn't another ubuntu-installer directory lying around anywhere?
<owh> CrummyGummy: The way I implemented this at the time was to loop mount an .iso of the required installer and make sym-links to the right bits, so I could just change the iso mount and make it install something else.
<CrummyGummy> I'm pretty sure that it came from that iso. Its the only amd64 iso I have on this server.
<CrummyGummy> as in no workstation ones. I'll try again with the symlinks as suggested.
<owh> CrummyGummy: There are no stray ubuntu-installer directories?
<CrummyGummy> Running find.
<owh> use locate -i
<elnewb> Hey guys.  I tried installing Ubuntu 8.04 Server on an old P3 Box.  I put the disc in and booted, then it loaded the cd and displayed the menu.  When I clicked install, not the "try ubuntu livecd" option, it still took me to the live cd.
<owh> elnewb: Well, the server CD doesn't have a LiveCD, so you're in the wrong room :)
<elnewb> Owh: are you sure?  it gave me the option?
<elnewb> it booted into the comand line but it was running off the disc.
<owh> It's possible that I'm getting old and grey and I'm wrong.
<elnewb> Wait you are right..... just loaded the ubuntu iso in vmware
<owh> Pfew. Thought my brain had finally had it :)
<elnewb> This was at school today.  My teacher must have downloaded the wrong version.
<owh> elnewb: Well before you download another one, check the MD5
<elnewb> Nah I don't think thats the problem.  I asked my teacher to download the server version for me cause the students access to the internet is filtered so we are limited to direct downloading of documents (.doc .xls and pdf).
<owh> elnewb: No, I mean, check which CD you have.
<owh> elnewb: Not if it's corrupt or not :)
<elnewb> owh: the iso will still be on the desktop of the computer that he downloaded it to
<elnewb> probably be like 8.04-desktop.iso
<kraut> moin
<owh> tag
<owh> Or should that be 'tag ?
<CrummyGummy> Okay, I've done it with symlinks to pxeboot stuff from the right iso. In the install process I chose Openssh server only. The kernel installed is linux-generic. Is this right?
<nxvl> good morning
<uvirtbot> New bug: #262244 in mysql-dfsg-5.0 (main) "Post-installation script error reinstalling mysql" [Undecided,New] https://launchpad.net/bugs/262244
<milestone> I have setup a mailserver (postfix+maildrop)
<milestone> as a mailbox_command I have defined maildrop
<milestone> so the user needs a $HOME/.mailfilter to function properly
<milestone> since i made it generic, I have copied the .mailfilter to /etc/skel
<milestone> when I create a new User, the file gets copied, but the permissions stay on root:root
<milestone> any suggestions on where to tell that the permissions need to be updated as well?
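No one answers milestone in the log; for what it's worth, Ubuntu's adduser normally chowns files copied from /etc/skel to the new user, so root:root suggests the account was created some other way. A hedged repair sketch (the stand-in variables make it runnable without root; on a real system it would be a sudo chown -R over the new user's home):

```shell
# Stand-ins: $user and $home play the role of the new user and their home.
user="$(id -un)"
home="$(mktemp -d)"
touch "$home/.mailfilter"            # the file copied from /etc/skel
chown -R "$user" "$home"             # what adduser does after copying skel
find "$home" ! -user "$user"         # empty output = ownership is consistent
```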
<acemo> is it possible to only reinstall grub and the mbr with the server cd?
<uvirtbot> New bug: #262251 in php5 (main) "phpize: configure:8184: error: possibly undefined macro: _LT_SET_OPTIONS" [Undecided,New] https://launchpad.net/bugs/262251
<uvirtbot> New bug: #262264 in likewise-open (main) "Fails to join a domain: Unknown pam configuration" [High,New] https://launchpad.net/bugs/262264
<slicslak> i have two shadow files: /etc/shadow and /etc/shadow-   i need to copy some existing users from another system.  which file should i edit to put the password hashes in?
<uvirtbot> New bug: #262291 in samba (main) "hardy: update samba to 3.0.32" [Undecided,New] https://launchpad.net/bugs/262291
<toyotafosgate> hey does anybody here know anything about raid?
<toyotafosgate> anyone here?
<toyotafosgate> had to sound like a retart
<toyotafosgate> *retard
<toyotafosgate> but i'm using pidgin for the first time to connect to IRC
<Koon> toyotafosgate: well, I know something about raid.
<toyotafosgate> fair enough
<Koon> but maybe not enough, depends on your real question
<toyotafosgate> so heres the deal: I have two hard drives mirrored (raid1)
<toyotafosgate> they are mirroring only one partition
<toyotafosgate> they are mirroring /home
<toyotafosgate> i noticed that they were not synced
<toyotafosgate> so i synced them
<Koon> sorry, got to go now -- i'll see your question later if nobody else picks it up before
<toyotafosgate> this caused the server to go down
<toyotafosgate> alright
<toyotafosgate> when it came back up the drive was no longer mounted
<toyotafosgate> anyone have any ideas?
<toyotafosgate> hey does anybody here know anything about raid?
<toyotafosgate> i've got a pretty serious issue and i could really use someones help
<Brazen> toyotafosgate:I know how to set up md raid and that's about it.  If I ever had a failure, I'd have to pull out some google-fu.
<Brazen> toyotafosgate: but, have to checked to make sure there is an entry in /etc/fstab for the /home partition?
<Brazen> oops
<Brazen> s/have to checked/have you checked/
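Brazen's fstab check, plus the usual md status commands (the array name /dev/md0 is an assumption, not from the log):

```shell
# Is /home still listed, and is the array assembled?
grep /home /etc/fstab            # fstab entry for the mountpoint
cat /proc/mdstat                 # the kernel's view of the md arrays
sudo mdadm --detail /dev/md0     # per-array detail (array name assumed)
```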
<kees> zul: I hatez mysql.
<kees> zul: amd64 is randomly failing still.  I just keep clicking "rebuild".  first 6 failures, then 1.
<kees> zul: now 3.
<kees> zul: though they are all ndb
<kees> zul: now 3.loaddata_autocom_ndb ndb_alter_table2 ndb_auto_increment ndb_autodiscover ndb_autodiscover2 strict_autoinc_5ndb
<kees> gAh
<kees> zul: so far it's always been a subset of loaddata_autocom_ndb ndb_alter_table2 ndb_auto_increment ndb_autodiscover ndb_autodiscover2 strict_autoinc_5ndb
<Goosemoose> anyone have a hardy preseed file?
<jmedina> Goosemoose: what is a preseed file?
<Goosemoose> a file used to push installs of ubuntu via a server to clients over pxe
<Goosemoose> could be used via cd too
<jmedina> Goosemoose: good, do you have any document about that?
<Goosemoose> only for gutsy not hardy
<jmedina> Goosemoose: share it please
<lamont> jdstrand: if you're feeling generous, you can upload bind9 to address 252675, as mentioned in the bug
<lamont> :-D
<lamont> otherwise, I'll have to actually go figure out SRU stuff :-)
<jdstrand> lamont: hmmm-- didn't debian decide not to actually do that in their security queue?
<lamont> jdstrand: debian has 9.3.4 and 9.5.0
<lamont> and disclaims 9.2.4 (sarge)
<jdstrand> I must have just misremembered it then-- so only 9.4.2-P1 has the performance hit?
<lamont> and, IIRC, upstream didn't actually bother to do the performance-improved P2 for 9.3
<lamont> if they did, I'd still say "9.3.5-P2 for etch, or someone else can backport the fix to 9.3.4" :)
<lamont> it's an uglier patch, and I think ISC makes good decisions on what to put into fix-version point releases
<lamont> jdstrand: also, 175316, aka debbugs 459010 probably wants some security-review-like activity, as per the comments in 459010:
<lamont> Is there security support for this part of BIND 9?
<jdstrand> lamont: I'm inclined to upload it, but would like kees' opinion. also, have you used the patch in production anywhere?
 * lamont is 9.5.0 everywhere
<lamont> I see no reason that it should bypass SRU
<lamont> I just don't want to be the one to deal with it... :-)
<kees> I think it makes sense to SRU it.
<jdstrand> I see. though I also see the argument that there is a regression, particularly if it affects a lot of people. but IIUC it is only for very high load servers-- is this accurate?
<lamont> something like that
<jdstrand> lamont: I'll do the SRU dance for you... this time ;)
<jdstrand> lamont: but likely not today
<lamont> well, I figured as long as you were being a defacto bind9-uploader.... :-)
<jdstrand> heh
<jdstrand> sounds more like punishment for bypassing git again :P
<jdstrand> (though I did ask kees about it before doing it :)
<lamont> for the next question... do you care if I merge your changelog entries into the 9.5.0.dfsg.P2-2 changelog and drop the ubuntu ones?
<jdstrand> lamont: not at all
<zul> kees: *grumble*
<lamont> I'll at least ack which version they went into
<lamont> 9.5.0-P1 had the perf hit, fixed (and migrated to testing) in 9.5.0-P2
<kees> zul: yeah, once this fourth rebuild attempt finishes, I'm just going to upload with a mess disabled -- each test has passed at least once, so there's no single culprit.
<zul> fun fun :)
 * lamont wonders if -server cares enough about bug 175316 that we want to fix it in intrepid, rather than Jay
<uvirtbot> Launchpad bug 175316 in bind9 "no IDN in nslookup and host" [Undecided,Confirmed] https://launchpad.net/bugs/175316
<lamont> heh. and that's blocked on security-review as per above.
 * lamont goes back to working
<lamont> kees: was that /var/log/named/ that you wanted rw ?
<kees> lamont: yawp -- it's at least where I put logs, and at least one other person I know.
<kees> it was the only AA change I had to make when moving my DNS to hardy.
<jdstrand> kees: are you the one with the dnscvsutil issues?
<kees> jdstrand: nope, not I.
<jdstrand> oh, I guess not
<lamont>   /var/log/named/** rw,
<jdstrand> lamont: gotcha
<lamont> jdstrand: dnscvsutil is me and a few buddies
<lamont> though not my house
<jdstrand> lamont: do you have the required apparmor rules for that too? (I haven't used dnscvsutil)
<lamont> jdstrand: they're already in 9.4.2-13 or so
<lamont> which uh, is not an ancestor of 9.4.2-10ubuntu0.1 et al
<lamont> also in 9.5.0
<jdstrand> lamont: ok, I'll add that to the SRU too
<lamont> you might just look at 9.4.2-13 and see if it makes sense to just migrate things to there...
 * lamont looks to see how much pain that would be
<lamont> kees: and no bug for the apparmor change.. for shame.
<lamont> but don't file one now - that'd just be annoying
<kees> lamont: you want me to make one?  :)
<lamont> no
<lamont> I already committed without the tag to generate the closure
<kees> heh
<lamont> oh, cool.
<lamont> 9.4.2.dfsg.P2-1 _IS_ a descendant of 9.4.2-13
<lamont> kees: do you want +sigchase in dig for 9.4.2 SRU?
<kees> lamont: uhm, I don't know what that is.  :)
<lamont> bug 257682
<uvirtbot> Launchpad bug 257682 in bind9 "dig compiled without -DDIG_SIGCHASE!" [Undecided,Fix committed] https://launchpad.net/bugs/257682
<lamont> hardening?
 * lamont adds sigchase - it only has potential issues if someone uses it, no change for the unaware (like, say, me before this morning)
<lamont> and do you want the default named.conf.options to lose the "query-source ... port 53" comment block?
<lamont> I'm inclined to say "no" to that one, because I don't like dpkg "replace this conffile" questions, especially on a -security/-updates upgrade
<jdstrand> lamont: it'd be nice to have that removed, but I agree with your caution
<lamont> http://paste.ubuntu.com/41298/ is the current changelog-to-be, modulo a little more cleanup
<lamont> and it'll be NEW.  go sonames!!
<looseparts> Hello. How might I do security updates without doing a 'apt-get dist-upgrade' ? - I don't want to upgrade every single app, just the ones that have security patches.
<jdstrand> looseparts: disable -updates
<looseparts> huh ?
<toyotafosgate> brazen you still there?
<lamont> kees/jdstrand: if you want to see the current proposal: git clone git://git.debian.org/~lamont/bind9.git; cd bind9; git checkout -b stable/v9.4.2 stable/v9.4.2
<kees> lamont: seems like losing the "port 53" part would be nice.
<lamont> and then it's just a question of whether -updates will squawk at 1:9.4.2.dfsg.P2-2 instead of 1:9.4.2-10ubuntu0.2
<lamont> kees: yeah... that's a "your call" item... trivial to cherry-pick the patch back to 9.4.2.. I just loathe questions, and named.conf.options is a frequently-tweaked file --> lots of users touched by it
<Goosemoose> so no one knows where there's a good hardy preseed file huh? the last one published is a few years old
<lamont> kees: so if you say "DO IT JONES", it's done.  otherwise I'm chicken. :-)
<arpu> hello
<looseparts> jdstrand: would you please tell me what you mean when you say 'disable -updates' ?
<arpu> i ask on #ubuntu but no help
<arpu> i have this problem
<arpu> Creation of temporary crontab file failed - aborting as user on ubuntu hardy server
<lamont> looseparts: in /etc/apt/sources.list, comment out the hardy-updates lines
<lamont> I expect there's some nice GUI-way to do that
<looseparts> if i was running a GUI i'm be asking another list
<looseparts> ; - )
<kees> lamont: let me just double-check it in a minute...
<lamont> kees: no worries - it'll be $HALFDAY before I get to it
 * kees nods
<arpu> this is the whole output
<arpu>  crontab -e
<arpu> no crontab for rails - using an empty one
<arpu> /tmp/crontab.aH5Mbo: Permission denied
<arpu> Creation of temporary crontab file failed - aborting
<toyotafosgate> hey does anyone know if you can change the jfs filesystem (raid drive) to another in order to retrieve the data?
<lamont> looseparts: cool.
<looseparts> lamont: thank you. just to clarify, comment out the hardy-updates lines,
<looseparts> run apt-get update
<lamont> and then dist-upgrade should just pull down hardy-security
<looseparts> then run apt-get dist-upgrade ?
<lamont> for bonus points, "apt-get -ud dist-upgrade"
<lamont> that'll show you what it's doing in more detail, and download without actually installing.
<looseparts> purrrfect : - )
<looseparts> thanks a lot
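lamont's recipe, scripted as a sketch (back up sources.list first; the release names are the ones from the conversation):

```shell
# Comment out hardy-updates so dist-upgrade pulls only hardy-security.
f=/etc/apt/sources.list
sudo sed -i.bak '/hardy-updates/ s/^deb/# deb/' "$f"
sudo apt-get update
sudo apt-get -ud dist-upgrade    # -d: download only; -u: list the upgrades
```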
<toyotafosgate> does anyone know if you can change the jfs filesystem (raid drive) to another in order to retrieve the data?
<arpu> no on an idea about the crontab problem ?
<didrocks> jdstrand: as there is the feature freeze, do we have to continue the work on the "second zone packages" for integrating ufw?
<arpu> what is the standard permission of the /tmp directory ?
<arpu> 755 is not right ?
<lamont> arpu: 1777
<arpu> ok than this is a bug in ubuntu hardy
<arpu> :-/
<arpu> in ubuntu ists 755
<\sh> arpu: drwxrwxrwt   6 root root        122 Aug 28 19:56 tmp
<\sh> it's what lamont said
<lamont> the t == 1000
<lamont> arpu: not on my machine it isn't... nor any other hardy box I've installed..
<arpu> hmm this is a new hardy server install
<lamont> it's entirely possible that something blatted it after the base install... stupid package or some such
<\sh> arpu: hardy server tells me the same as I posted..1777 drwxrwxrwt
<arpu> strange
<arpu> drwxr-xr-x  3 root root  4096 2008-08-28 16:43 tmp
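Mode 1777 is world-writable plus the sticky bit (the trailing `t` in \sh's listing), which is exactly why crontab's temp file failed under arpu's 755. A sketch on a scratch directory (on the real system the fix would be `chmod 1777 /tmp` as root):

```shell
# Demonstrate the expected /tmp permissions on a scratch directory.
d="$(mktemp -d)"
chmod 1777 "$d"
stat -c '%a %A' "$d"    # -> 1777 drwxrwxrwt  (trailing t = sticky bit)
```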
<toyotafosgate> still none who has any idea about raid?
<toyotafosgate> ?
<daedra> what duz I need to mount usb flashdrives?
<daedra> `sudo mount -t vfat /dev/sdf1 /media/fl` isn't working
<daedra> fdisk -l /dev/sdf says there is a W95 FAT32 device at /dev/sdf1, and /media/fl exists
<lamont> daedra: modprobe vfat?
<lamont> or maybe the actual error message...
<daedra> lamont: mount: wrong fs type, bad option, bad superblock on /dev/sdf1, missing codepage or other error In some cases useful info is found in syslog - try dmesg | tail  or so
<lamont> right.  modprobe is probably your friend
<maw> anyone familiar with a tool similar to "tripwire" but for windows?
<maw> working on PCI compliance here at work :|
<daedra> same output after modprobe vfat
<_ruben> grrr .. wonder wassup with perl on this box .. it thinks a certain module is still at 2.0005 and required 2.0008, yet it is already 2.0008 .. stupid .pm caching
<jdstrand> didrocks: it's up to you. it would certainly be nice, but now those packages need to go through a feature freeze exception process
<lamont> daedra: and fdisk /dev/sdf tells you that sdf1 is a vfat partition?
<daedra> dosfstools did it
<daedra> had to make a new filesystem on the device. lost the original contents :( but after 6 rewrites it now mounts
<Goosemoose> where do i need to upload the preseed.cfg file on the server for network boot?
<didrocks> jdstrand: and do you think it worth it and will be able to have the feature freeze exception?
<jdstrand> didrocks: personally, I don't find mysql and postgresql super interesting, as they only listen on the localhost
<jdstrand> didrocks: squid would probably be good though
<jdstrand> s/localhost/loopback/
<didrocks> jdstrand: yes, that was my first concerned (about postgres and mysql) and I didn't understand why include it by default
<didrocks> squid is probably great, yes
<jdstrand> didrocks: I added them simply because they are part of Ubuntu server's tasksel
<didrocks> so, if for the feature freeze exception, I need someone advocating in my side, I can count on you?
<jdstrand> absolutely
<didrocks> (not sure it is very correct english ^^)
<didrocks> :)
<jdstrand> (it's such a cmall change in the packages now anyway)
<jdstrand> s/cmall/small/
<didrocks> for sure
<didrocks> I will take a look this week-end to try to make the profile case insensitive and also branch your code with bzr (I think this is better than proposing you a patch, isn't it?)
<jdstrand> didrocks: yes
<jdstrand> didrocks: wrt case insensitivity-- I feel pretty strongly about the presentation of the profile name with what is presented with 'status'
<jdstrand> didrocks: because it has a bit to do with branding (eg OpenSSH)
<didrocks> I will try to make some trick this week-end, but be indulgent if I make something wrong with bzr, I am not used to it (I am in charge at my company of a proprietary VCS and also used to CVS/SVN)
<didrocks> hum
<jdstrand> didrocks: as for the user interface, if the user can type 'ufw allow openssh' or 'ufw allow OpEnSsH', that seems to be ok
<didrocks> oh, yes, it is just for the user interface
<didrocks> ufw allow/deny profile
<didrocks> ufw status profile
<jdstrand> didrocks: ufw status
<didrocks> ufw app update profile (--add-new)
<jdstrand> (you don't specify the profile with 'status')
<didrocks> hum, no status filtered in just one profile ?*
<didrocks> sorry, my bad :)
<jdstrand> didrocks: status is the status of the ufw command managed parts of the firewall
<didrocks> jdstrand: yes, but I thought it was possible to filter a rule from a profile to the status
<jdstrand> didrocks: no, you might be thinking of 'status verbose' which gives a different view of application rules
<didrocks> so, just 3: ufw allow/deny profile, ufw app update profile (--add-new) and ufw app info profile
<didrocks> jdstrand: yes, the verbose mode give the associated port currently recorded in the firewall, isn't?
<jdstrand> didrocks: well, there is 'limit' too-- but you'll likely be able to change just a couple lines
<jdstrand> didrocks: 'status verbose' shows the port/protocol instead of the profile name, yes
<didrocks> jdstrand: I just gave a quick look and the only matter is that you use the profile as a key. But I have my idea to do (beautifully, of course) the trick :)
<jdstrand> cool :)
<didrocks> jdstrand: btw, I will keep you in touch. Have a good evening!
<didrocks> (or day)
<jdstrand> didrocks: you too! (and I bet you'll grow to love bzr :)
<didrocks> jdstrand: thx (I love already bzr just having read the full user guide one month ago, but had no time to practice :))
<tarab> hello? guys
<tarab> i use ubuntu 7.10, i already installed bind9 then how to configure dns (bind) server/
<Brazen> There are some commands I want to run at the end of the bootup process.  Would the correct method for this be to add the commands to "/etc/rc.local" and then "chmod +x /etc/rc.local" ?
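Brazen's plan is the standard one: on Ubuntu, an executable /etc/rc.local is run at the very end of boot (the S99 slot in rc2.d). A minimal sketch of its contents (the marker file is a hypothetical example):

```shell
#!/bin/sh -e
# /etc/rc.local -- executed at the end of each multiuser runlevel.
# Put your commands before 'exit 0'; a nonzero exit aborts the script.
date > /var/run/last-boot    # hypothetical example command
exit 0
```

Then `sudo chmod +x /etc/rc.local`, exactly as Brazen proposed.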
<mathiaz> sommer: is doc.ubuntu.com up-to-date now ?
<mathiaz> sommer: I think you said so in the meeting last tuesday
<iongion> anyone knowing small embedded lamp devices that could run ubuntu server ?
<iongion> or at least if u know small ... little noise devices/computers that could be used as ubuntu/apache/php/mysql home servers ?
<Brazen> iongion: check out the "gos dev kit" (google for it), it's a mobo with embedded via x86 processor.  it's supposed to be very low power and runs linux very well, and it's super cheap.
<Brazen> iongion: I only wish it had vt extensions :(
<NCommander> Brazen, what chip does that mobo have?
<NCommander> Most newer ones have VT on it
<Brazen> NCommander: it's a VIA C7-D.  I'm positive I double checked a while ago, and it does not have VT.
<NCommander> Yeah
<NCommander> VIA is about the only one that doesn't have VTx
<NCommander> I had to mod my BIOS to get it on my laptop though
<Brazen> nice
<NCommander> FreeDOS FTW
<NCommander> Upgrading the BIOS on this machine was a nightmare
<kees> lamont: instead of "port 53", use "port *", I think.
<kEiNsTeiN^^> hello.
<lolufail> hi
<lolufail> I need to know how to extract the xen initramfs to /etc/initramfs-tools, so I can add md-raid support.
<lolufail> because I _just_ registered this nick, the question again, dont kill me plz if it appears twice ;): I need to know how to extract the xen initramfs to /etc/initramfs-tools, so I can add md-raid support.
<lolufail> and that is xfs over lvm over dm-crypt over md-raid to be exact
<acemo> is it possible to have virtual servers on the same ip? while not having a domain name
<lolufail> acemo: depends on what you want to do. you would have to do port-forwarding on the host, to the veths of the VMs
<acemo> veths of the vms what do you mean?
<lolufail> yes
<lolufail> actually I mean their IPs
<lolufail> I have the same layout, using xen and iptables on dom0
<acemo> but its on the same computer, no virtual machines
<lolufail> then what do you want to do?
<acemo> hmm
<acemo> like
<lolufail> just apache or what?
<acemo> when you go to 127.0.0.1/acemo it should use /home/acemo/www as root and when going to 127.0.0.1/hitoi it should use /home/hitoi/www as root
<lolufail> oh, just for http
<lolufail> sure
<acemo> yep
<lolufail> I dont know how ;) but it's simple. google for ...
<lolufail> uhm
<lolufail> apache jail
<lolufail> maybe
<lolufail> chroot?
<lolufail> sry ;)
<lolufail> ill be quiet
<acemo> thanks ill try searching for that
<Goosemoose> anyone have a preseed more practical than https://help.ubuntu.com/8.04/installation-guide/example-preseed.txt
<Goosemoose> I also can't remember where to save this file, I haven't set a server up using preseed in about 16 months, and the docs don't say where to save it
<lolufail> damnit, Imma try the gentoo channel -.-
<acemo> o.o
<acemo> seems chroot would just jail the whole apache to a directory
<lolufail> yeah, but it's more secure
<lolufail> otherwise, use simple vhosts
<acemo> yeah but it wont do any good for what i want right now
<acemo> virtual hosts seem to not allow me to do what it want.. or probably.. i dont know how to do it
<lolufail> acemo: how about you give details?
<qhartman> I'm preparing to deploy a virtual server host, which I would like to do on Ubuntu. However, KVM just doesn't feel like a good server-oriented virtualization system right now. Maybe it will be someday, but for now it seems distinctly half-baked. Does Xen officially have a future on Ubuntu server?
<acemo> am using webmin.. i go to create virtual host, i get to see this.. http://i37.tinypic.com/jszwcg.png
<acemo> i guess ill have to fill in /home/acemo/www at the document root.. but i have no idea what to fill in at the address part
#ubuntu-server 2008-08-29
<lolufail> does the standard mkinitramfs command detect a xfs over lvm over dm-crypt over md-raid setup?
<lolufail> heeelo?
<acemo> hey lolufail
<acemo> i got it to work :)
<lolufail> congratz
<acemo> it was mod_userdir that i needed
<lolufail> I'm actually looking for help myself ;)
<acemo> only its <ip>/~username instead of <ip>/username now but thats fine
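The working setup, sketched: mod_userdir maps /~user to ~/public_html by default, so pointing the UserDir directive at www gives acemo's /home/&lt;user&gt;/www layout (standard Ubuntu file paths assumed, not confirmed in the log):

```shell
sudo a2enmod userdir
sudo tee /etc/apache2/mods-available/userdir.conf >/dev/null <<'EOF'
<IfModule mod_userdir.c>
    # serve http://<ip>/~user from /home/<user>/www
    UserDir www
    UserDir disabled root
</IfModule>
EOF
sudo /etc/init.d/apache2 reload
```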
<acemo> whats your problem?
<lolufail> "does the standard mkinitramfs command detect a xfs over lvm over dm-crypt over md-raid setup?"
<acemo> ok i lost it at mkinitramfs, sorry hope someone else can help you.. you tried your best to help me so i think you should be helped too! <tries to do some research for you>
<lolufail> ;)
<lolufail> thanks :)
<linuxpoet> any ubuntu folks in here familiar with postgresql
<NCommander> linuxpoet, I am
<linuxpoet> I need someone who would be willing to give a talk on optimize and configuring postgresql, the ubuntu way
<NCommander> for what, UDW?
<linuxpoet> http://www.postgresqlconference.org/
<Karamon> How can I begin to troubleshoot a networking issue with ubuntu server 8.10 running it w/VMWare player with the official ubuntu image, I can run the desktop version with no issues but the server edition isn't working for me
<NCommander> linuxpoet, I can't claim an being an expert on postgres optimization
<NCommander> I'm sorry
<linuxpoet> That's ok. I would give the talk myself but I am running the conference :P
<NCommander> rofl
<NCommander> I'd also assume I'd need to be in person which would be troublesome unless its based in NYS
<linuxpoet> NYS?
<linuxpoet> Oh NY State
<linuxpoet> actually that isn't as much of a problem, that is what sponsorship is for
<NCommander> Well, I can give the configuration part
<NCommander> But I'm not that much of an expert in optimization
<NCommander> (I do use postgresql in production environments, I'm one of the coders of Ubuntu REVU which is Postgres based)
 * linuxpoet is one of the lead consultants for http://www.commandprompt.com/
<NCommander> I am however a very quick study ;-)
<Karamon> Anyone know how to troubleshoot a VMWare network issue?  eth0 won't come up with some weird errors.
<linuxpoet> Karamon: perhaps an explanation of the errors would be helpful
<NCommander> linuxpoet, what exactly did you want in this conference?
<Karamon> http://pastebin.com/d4c20c99e
<linuxpoet> Well obviously it is a postgresql conference
<linuxpoet> but mostly I am trying to show postgresql's flexibility
<NCommander> I realize that :-P
<NCommander> I mean what did you want an Ubuntu developer to talk about specifically, I can give configuration (and given some time, optimization)
<linuxpoet> That is a tough one because obviously linux is linux (for this consideration)
<linuxpoet> But since IMO Ubuntu is the shiznit I wanted Ubuntu to represent
<NCommander> Well, Postgres is configured similarly enough between Ubuntu and Debian
<NCommander> Wrapping the two together shouldn't be that hard
<linuxpoet> Karamon: what happens if you /sbin/ifconfig
<linuxpoet> and I assume this is on the guest? not the host?
<Karamon> Running the server inside windows (I think that is what you're asking)
<Karamon> linuxpoet: Only thing that really sucks is I have to hand type out the output
<linuxpoet> you are running windows as the host?
<linuxpoet> wow
<NCommander> Gutsy ;-)
<Karamon> I had to give up my dedicated box that I used to run my web server on, now I'm trying out the virtual appliance
<lolufail> does the standard mkinitramfs command detect a xfs over lvm over dm-crypt over md-raid setup?
<linuxpoet> gutsy is one term for it
<linuxpoet> or pun
<Karamon> linuxpoet: http://pastebin.com/d2c59df38
<linuxpoet> well that is certainly a non detected eth0
<linuxpoet> check your dmesg to see if it sees anything at all
<Karamon> linuxpoet: http://pastebin.com/d1904e45c
<linuxpoet> ok it sees it
<linuxpoet> so try this
<linuxpoet> ->/sbin/ifconfig eth0
<Karamon> eth0: error fetching interface information: Device not found
<lolufail> dmesg | grep eth0
<lolufail> oh nevermind ;)
<lolufail> but wait, that's weird:
<lolufail> cat dmesg | grep eth0
<lolufail> [  64.136139] e1000: e1000_probe: Intel(R) PRO/1000 Network Connection
<linuxpoet> lolufail: dmesg|grep would have worked too
<lolufail> tell me where you see "eth0" there?
<lolufail> the output is forged ;)
<Karamon> Yeah I have to hand type it, that was a mistake on my part
<Karamon> I can't cut and paste from a VM appliance
<lolufail> do a:
<lolufail> brctl show
<Karamon> not installed and I can't install it without a network connection :P
<lolufail> k
<lolufail> I think your eth0 has been renamed, that's the only thing I can think of
<lolufail> grep eth /var/log/messages
<lolufail> grep eth /var/log/kern.log
<lolufail> what VM software are you using?
<Karamon> VMWare Player.. 2.0.4 build 93057
<Karamon> All references in log say eth0
<lolufail> hm, I can only speak from my xen experience... if eth0 suddenly disappears, the bridge script failed...
<lolufail> what does ifconfig -a say?
<lolufail> any new interfaces?
<Karamon> Yes there is eth1, but ifup doesn't work says not configured
<Karamon> How do I configure it?
<Deeps> edit /etc/network/interfaces
<Karamon> ... that was strange, but I'm glad it was that simple
<Karamon> Thanks for the help everyone :)
<lolufail> np :D
<lolufail> uhm there is a cache file
<lolufail> wait
<lolufail> ah
<lolufail> yeah
<lolufail> udev is the evil person
<lolufail> aaaat
<lolufail>  /etc/udev/rules.d/70-persistent-net.rules
<lolufail> should be an entry making your network eth1
<lolufail> ->eth0.
<lolufail> done.
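lolufail's udev fix, sketched: the rules file records each MAC-to-name mapping, so when a VM's MAC changes the stale eth0 entry lingers and the "new" card gets eth1. Deleting the stale line and renaming eth1 back restores eth0 (file path from the conversation; run as root, then reboot):

```shell
f=/etc/udev/rules.d/70-persistent-net.rules
sed -i.bak -e '/NAME="eth0"/d' -e 's/NAME="eth1"/NAME="eth0"/' "$f"
# After a reboot, the card comes back up as eth0.
```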
<Karamon> updateing/upgrading then I will check it out, thanks :)
<tarvid> what's the easiest way to get bind9 to start at boot
<lolufail> I like sysv-rc-conf
<lolufail> though it should start automatically if im not mistaken
<tarvid> it should
<nodoubt> can bind run correctly from /var/lib/named/etc ? is this a newbie friendly location for bind to live?
<tarvid> lolufail, puzzled. restart at 20:58:04
<lolufail> hm?
<tarvid> 21:27:07 bind starts (manually)
<tarvid> sysv marked for 2,3,4,5
<tarvid> no " bind" messages inbetween
<lolufail> tarvid: hm.
<lolufail> well, are you in one of those runlevels ;) ?
<tarvid> I am in gnome and bind9 looks like mysql, cvsd, ntop in sysv but bind9 does not start automatically but starts manually
<lolufail> sry, I dont know. I'm more a gentoo guy ;)
<lolufail> rc-status would be helpful ;)
<tarvid> how do I get that
<lolufail> you dont
<lolufail> gentoo tool :P
<lolufail> did you check all the logs for messages at boot time coming from bind?
<tarvid> there are none
<tarvid> there is an amazing amount of crap I could probably live without
<tarvid> bind may be stuck behind something else like winbindd
<tarvid> rc-alert is impressively long
<tarvid> I guess I can get used to starting it manually until I get around to reloading
<tarvid> several upgrades without a reload and this is the box where things get vetted
<tarvid> i set bind up as a caching nameserver because my upstream server is funky
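For the record, the sysv links tarvid is squinting at are managed with update-rc.d; a hedged sketch of the usual checks:

```shell
# Is bind9 linked into the default runlevel (2 on Ubuntu)?
ls /etc/rc2.d | grep bind9           # expect an S-link, e.g. S15bind9
sudo update-rc.d bind9 defaults      # (re)create the links if missing
# Boot-time failures usually show up here:
grep named /var/log/daemon.log | tail
```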
<favro> I use an old comp at home for a file server - is it healthy to have the hard disks spinning down when not in use?
<tarvid> favro, an ancient argument
<tarvid> I never spin down
<tarvid> but I have had a few hard drive failures
<tarvid> I suspect the risk in spinning up and down is greater than leaving them spinning
<favro> tarvid: I changed it in bios this morn and later I accessed one drive and it fired up with a grating noise...
<tarvid> backup now
<favro> so I'm concerned
<tarvid> get another drive
<favro> tarvid: already done
<tarvid> good man
<tarvid> raspy power outages seem particularly dangerous too
<tarvid> I put all my systems on a UPS so the ups and downs are minimized
<favro> my disks on the little server can go a few days without being accessed so I wanted to save some power
<tarvid> little used and stable data might be served well by flash
<tarvid> looking at little falls boards for low power servers
<favro> that's why I'm in two minds - it is a file server/torrent box/mythtv backend - it can be busy for days or idla for days
<favro> *idle
<tarvid> intel little falls should be good for that
<tarvid> haven't looked for low power hard disks
<tarvid> capturing video can get huge
<tarvid> lolufail, thanks for the counsel, installed bum, it's cute
<lamont> ScottK: how is it that your debdiff doesn't change debian/rules?
<ScottK> lamont: Because I stared at your debian/rules for a while and then concluded I didn't understand it well enough to integrate it.
<lamont> heh. ok.
<lamont> you thinking /usr/sbin for it?
<ScottK> lamont: I left this to you (we've discussed this a while ago)
<ScottK> Yes.
<ScottK> I tried to give them specific enough names that would be OK.
<lamont> and I forgive you for having a changelog diff in your diff. :-)
<ScottK> Well it applied at the time I uploaded it ...
<ScottK> Easy enough to edit out anyway.
<lamont> yeah, but having changelog in commits is guaranteed non-mergability
<lamont> yep
<lamont>   [Scott Kitterman]
<lamont>   * Add postfix-add-{filter,policy} scripts for easier integration of
<lamont>     same.  LP: #247332
<ScottK> You and your new fangled VCS stuff.... Bah.
<ScottK> Great.
<lamont> committed in 2.5.4-2, and building to test here
<ScottK> Great.  Did you add Cocker's dechroot script?
<lamont> meh
<lamont> I added your debdiff from your last comment on 247332
<ScottK> It was suggested to me on postfix-users that such a script would be a good idea.  I said I'd talk to you about it.  I've done that.
<ScottK> Thanks.
<lamont> heh.  I haven't seen his script, so I don't exactly have enough to render an opinion on the subject
<lamont> OTOH, I do need to go get some shuteye so that I can wake up in the morning and all that
<ScottK> Bah.  That's what coffee is for.
<lukehasnoname> I just ate a snickers, drank a Rock Star Espresso, and am considering staying up all night
<lukehasnoname> but if I do that, I want to be productive.
<lukehasnoname> like learning Python, or PyGTK, or packaging debs
<_ba> can someone help me out here with a sanity check: i'm trying to firewall my box, but when i apply rules to incoming traffic i lose connectivity out. does this make any sense?
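A common cause of the lockout _ba describes is a default-deny INPUT chain with no rule for reply packets; a minimal sketch of a stateful ruleset (the SSH port is an assumed example, and these commands need root):

```shell
# Allow already-established connections back in -- without this,
# replies to your own outbound traffic get dropped, and you "lose
# connectivity out" even though the OUTPUT chain is untouched.
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # keep SSH open
sudo iptables -P INPUT DROP
```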
<spiekey> hello!
<spiekey> i would like to use pidstat but i read i need to build my own kernel for it.
<spiekey> ...With its -d option, pidstat can report I/O statistics, providing that you have a recent Linux kernel (2.6.20+) with the option CONFIG_TASK_IO_ACCOUNTING compiled in...
<spiekey> however, there is no such option in the current 2.6.25 release
<spiekey> any idea?
<nxvl> good morning everyone!
<Koon> hey nxvl, I guess congratulations are in order
 * Koon orders Pisco for the whole table
<zul> hey hxvl
<nxvl> Koon: heh, thank you
<Koon> coffeedude: hello Jerry. That's Thierry here. Did you have time to look at our pam configuration issue ?
<coffeedude> Koon: Hey. I'm working on it.    Just not done yet.  Is thre a bug in LP on it I should update ?  Or just send you mail?
<Koon> coffedude: bug 262264
<uvirtbot> Launchpad bug 262264 in likewise-open "Fails to join a domain: Unknown pam configuration" [High,New] https://launchpad.net/bugs/262264
<coffeedude> Thanks.
<Koon> coffeedude: if you want to take advantage of the new pam-updating tools of intrepid and need some introduction to them, I can set something up with someone that knows them better than I do
<coffeedude> Koon: Thanks. I might do that.  Just need a bit more time to dig into it.
<dendrobates> Koon: have you retested likewise-open in the last few days?
<Koon> dendrobates: I tested the one in intrepid a couple of days ago
<dendrobates> Koon: I just wanted to make sure it wasn't fixed in Steve's last upload of the pam stuff.
<Koon> dendrobates: I retested it just before filing the bug. But there may have been another pam upload after
 * nijaba takes a break
<nijaba> I am dropping Stella at a friend and will be back
<tacone> hello, in virt-manager I get "Paravirtualized" grayed out. doesn't that work with qemu ?
<Deeps> dont think so
<tacone> ok, thanks
<Deeps> you'll want kvm or vmware for paravirtualisation i think
<nijaba> oops that was the wrong chan
<tacone> Deeps: I installed kvm, but it doesn't show in the list. I only have xen and qemu
<tacone> oh, maybe I need a modified kernel on the host machine ? I thought it was about the guest kernel.
<Deeps> afaik kvm support's integrated into the stock ubuntu kernel
<Deeps> i could be wrong
<tacone> uhm. weird. ok let's leave it alone. fully virt will be ok as well.
<uvirtbot> New bug: #262691 in openssh (main) "can't login with a special character password" [Undecided,New] https://launchpad.net/bugs/262691
<uvirtbot> New bug: #262715 in mysql-dfsg-5.0 (main) "missing ref in mysql library causes iodbc to fail" [Undecided,New] https://launchpad.net/bugs/262715
<zul> mathiaz: ping
<mathiaz> zul: sure I'm around !
<zul> mathiaz: for #262715 mysql_odbc_escape_string has been removed in 5.0.67 according to the release notes should I put that as wont-fix?
<mathiaz> zul: well -it seems that odbc is broken then
<mathiaz> zul: the bug may be invalid for mysql-dfsg-5.0, but valid for odbc
<zul> mathiaz: *sigh* :)
<mathiaz> zul: it seems that myodbc would be the correct package.
<zul> mathiaz: ok, we should update that package then
 * mathiaz nods
<zul> ill put it on my todo list
<macd_> mathiaz, here?
<mathiaz> macd_: yes
<macd_> I just caught up with the libgems discussion
<macd_> actually be back in a sec, work calls
<macd_> sorry
<zul> lallala i hate libtool
<soren> I need a DNS wiz to explain something to me.
<soren> I've registered a domain, jungberg.dk. If I do a "host -t ns jungberg.dk", I get the correct response.
<soren> However...
<soren> If I do what I thought a DNS server would do to find that answer for me, I come up empty.
<soren> Here's what I think it does:
<soren> It finds the name server for the .dk tld.
<soren> ...so I do a "host -t ns dk.", and get [abclps].nic.dk.
<soren> Then, it asks one of those for the NS for jungberg.dk.
<soren> ...so I do a "host -t ns jungberg.dk. a.nic.dk".
<soren> And that's where my world falls apart.
<soren> I get jungberg.dk has no NS record
<soren> I get the same result if I ask [bclps].nic.dk.
<soren> whiskey... tango... foxtrot...
<soren> Can someone explain this to me?
<uvirtbot> New bug: #262789 in mysql-dfsg-5.0 (main) "package mysql-server-5.0 None [modified: /var/lib/dpkg/info/mysql-server-5.0.list] failed to install/upgrade: subprocess pre-removal script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/262789
<soren> Can I at least hear someone say that "yes, that really *does* look weird"?
<jmedina> Yes, that really does look weird, dont say thanks...
<soren> jmedina: Don't worry. I won't :)
<nxvl> \o/
<fromport> soren: since a.nic.dk is not the authoritative name server for your domain, it will respond with the "no NS record"
<fromport> only  host -t ns jungberg.dk ns[1 till 5].gratisdns.dk. will answer that positively
<qhartman> so, I've been working with getting some Xen stuff going, and I noticed today that my hardy VM's stopped in the boot process just after "Reloading OpenBSD Secure Shell server's configuration sshd"
<qhartman> I figured it was a Xen thing, but I rebooted the host, and now it's doing the same thing. Any thoughts?
<qhartman> I can still get in via SSH, but logging into the local tty is impossible.
<qhartman> huh, it looks like it is somehow related to Xen.... when I forced a reboot some error messages including "xm " flashed by real quick... wheee!
<soren> fromport: Erm.. No.
<soren> fromport: Ok, answer me this: How did you come up with ns[1-5].gratisdns.dk?
<soren> fromport: (supposed) answer: based on the ns responses from a.nic.dk, right?
<soren> *those* are the ones, I'm not receiving.
<soren> And *that's* what I don't get.
<soren> fromport: It wouldn't work if the only way to find out which server was the authoritative one, was to ask the authoritative one.
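The delegation walk soren is doing by hand can be automated; `dig +trace` follows the same root -> TLD -> zone chain and shows which server returned each answer:

```shell
# Follow the delegation chain from the root servers down
dig +trace ns jungberg.dk.

# Ask a specific TLD server directly; +norecurse shows exactly
# what that server itself hands out
dig @a.nic.dk. ns jungberg.dk. +norecurse
```

One likely explanation for the "no NS record" result: a TLD server answers a query for a delegated zone with a referral, putting the NS records in the AUTHORITY section rather than the ANSWER section, and `host` only reports the latter. `dig` prints the full packet, authority section included.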
#ubuntu-server 2008-08-30
<Haydn1> Is there anyone around that wants to help a complete noob?
<PrivateVoid> I can try...
<PrivateVoid> what is up?
<Haydn1> Well I'm a CCNA student and am trying to break into the IT field of networking.
<Haydn1> I'm wondering if setting up an Ubuntu Server with a spare computer would be educational
<PrivateVoid> educational in what way?
<Haydn1> I mean what are the limits of U Server at home
<PrivateVoid> um... not really following what you mean...
<Haydn1> I have an extra computer and all I do is run F@H on it and SSH into it.
<PrivateVoid> I would assume given the right home network the limits of the Ubuntu server would be the same as any other place...
<Haydn1> I am wondering if setting up U Server would provide me with some good IT experience
<PrivateVoid> What kind of IT are you looking to get in too?
<PrivateVoid> networking as in file servers or networking as in Cisco / HP Procurve kinda gear?
<Haydn1> Cisco at this point
<Haydn1> I'm studying for the CCNA right now.
<PrivateVoid> To be honest then I am not sure if running an Ubuntu server would help with that at all...
<PrivateVoid> Unless you setup a router / switch at home and made vlans
<PrivateVoid> then looked at what different settings did when you connected to the file server...
<Haydn1> Yeah I know it wont help in that area but... running a server would help me get acquainted with the OS
<PrivateVoid> yes...
<PrivateVoid> but the OS has little or nothing to do with switches and routers
<Haydn1> Yeah I know... Sorry to throw that in the mix. I just wanted to establish a baseline of knowledge.
<PrivateVoid> if you were doing one of the security Cisco certs it might help... you could run some VMs and take pokes at different services and OSes
<PrivateVoid> Personally... if you want a good overview of servers...
<PrivateVoid> download both Ubuntu Server (or Debian) and Windows Server 2008 and set both of them up
<PrivateVoid> Windows server 2008 will allow 180 days of evaluation
<PrivateVoid> which should be long enough to play with the OS
<PrivateVoid> You are also far more likely to run in to that in the field
<PrivateVoid> Ubuntu does have some interesting stuff... but it likely will not affect you in your CCNA
<PrivateVoid> if you want to build some knowledge build a LAMP server -- first by selecting it in the build...
<PrivateVoid> then try doing it manually.
<Haydn1> LAMP?
<PrivateVoid> Linux, Apache, MySQL, PHP
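On Ubuntu the LAMP stack PrivateVoid describes can be pulled in as a task rather than package by package; a sketch (package names are those of the 8.04-era archive, and both commands need root):

```shell
# Install the whole stack in one go
sudo tasksel install lamp-server

# Or the manual route PrivateVoid suggests trying afterwards
sudo apt-get install apache2 mysql-server php5 libapache2-mod-php5
```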
<Haydn1> Ah cool. So what is your involvement with U Server
<PrivateVoid> Very little at the moment
<PrivateVoid> may set one up soon though
<PrivateVoid> I have focused on the desktop thus far...
<PrivateVoid> I am a sys admin by day -- running Windows and OS X servers
<Haydn1> Ah cool, did you get certified? And if so was that before or after getting a job?
<PrivateVoid> my plan with Ubuntu server will be to learn how to run a file server, and directory server and the Apache/MySql combo
<PrivateVoid> I have no certs
<PrivateVoid> most people get them before they get hired...
<PrivateVoid> unless they work for a training company or consultant
<PrivateVoid> then they get them both before and after hiring
<Haydn1> Ok I'm busting my ass off over here, I hope it pays off
<PrivateVoid> where are you?
<Haydn1> NC
<PrivateVoid> Yeah... the CCNA and most Cisco certs are well respected unlike the Microsoft Certs
<PrivateVoid> NC is a good area for tech... much better than where I am
<PrivateVoid> I might actually try to move to NC
<Haydn1> LOL, where you at?
<PrivateVoid> Western NY
<Haydn1> Huh, that is surprising. I have a friend that runs a web hosting business in NYC just to be on the east coast backbone
<PrivateVoid> NYC is not Western NY
<PrivateVoid> that is a few hundred miles from me
<Haydn1> No I know. I'm from CT so...
<PrivateVoid> and NYC drains the rest of the state of money
<PrivateVoid> cool...
<Haydn1> I just moved to NC a couple months ago
<Haydn1> Are you out by Buffalo
<PrivateVoid> I have a few friends down in NC and they are all very happy with the area... they are in the Chapel Hill area
<PrivateVoid> Not Buffalo... Rochester
<Haydn1> Yeah that's a really nice area of the state
<PrivateVoid> Yeah... its nice... but it is going through some really tough economic times...
<Haydn1> Ok got cha
<PrivateVoid> the major companies are all suffering
<Haydn1> I think everyone is
<PrivateVoid> Kodak, Xerox, etc.
<PrivateVoid> Well... some areas are more resistant too it
<Haydn1> True
<PrivateVoid> Syracuse lost its major employers in the 80s
<Haydn1> Where do your friends work down here
<Haydn1> Who was that?
<PrivateVoid> so economic down turns seem to affect it less than Rochester
<PrivateVoid> they work for the state
<PrivateVoid> Syracuse Lost Carrier, GM, and Lockheed Martin
<haydn1> Sorry about that xchat crashed
<haydn1> Well it was good talking to you. Take care.
<uvirtbot> New bug: #262899 in dovecot (main) "Please retry on all failed architectures" [Undecided,New] https://launchpad.net/bugs/262899
<uvirtbot> New bug: #262892 in openssh (main) "Possible bug found in SSHD and tunneling(Based on question number 43605)" [Undecided,Confirmed] https://launchpad.net/bugs/262892
<osmosis> how do I assign two IPs to a single nic?
<fromport> edit /etc/network/interfaces and use eth0:[number eg 1] for the next ip adress
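fromport's alias-interface suggestion would look like this in /etc/network/interfaces; the addresses here are illustrative:

```
# Primary address
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1

# Second address on the same NIC, as alias eth0:1
auto eth0:1
iface eth0:1 inet static
    address 192.168.1.11
    netmask 255.255.255.0
```

After editing, `sudo /etc/init.d/networking restart` (or `ifup eth0:1`) brings the alias up.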
<dbouls_>  Hey, I'm looking for some suggestions for a ubuntu box in my geek frat house http://ubuntuforums.org/showthread.php?p=5692225#post5692225 Check it out and ANY suggestions would be greatly appreciated for my little project
<alexcf> 20
 * delcoyote hi
<nxvl> kirkland: around?
<nxvl> kirkland: i have a problem with EPD
<nxvl> kirkland: http://paste.ubuntu.com/41905/
<nxvl> kirkland: i'm not sure if it's a bug or that the wiki is not well described
<nxvl> or out-dated
<adammw111> Hi, what's the quickest way to install ALSA?
<azteech> if it wasn't installed at time of install, "sudo apt-get install alsa" should do it, or do it from synaptic
<adammw111> thanks..
<azteech> adammw11: yw
<darkscrypt> i'm trying to create a ftpd that has a default directory of /var/www
<darkscrypt> and files are uploaded as user httpd-data
<darkscrypt> that way it is a root access to the website
<darkscrypt> any ideas, suggestions, or places to look
<darkscrypt> i'm at a complete loss
<darkscrypt> i need a ftp user that is dedicated to /var/www
<ScottK> Would adduser --home do what you need (see man adduser for details)?
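ScottK's adduser suggestion might be sketched like this; the user name and the use of the www-data group are assumptions, not something darkscrypt specified (commands need root):

```shell
# Create an FTP account whose home directory is the web root
sudo adduser --home /var/www --no-create-home --ingroup www-data webftp

# Deny it an interactive shell; note that many ftpds (vsftpd via
# pam_shells, for example) require the shell to be listed in
# /etc/shells for FTP logins to keep working
sudo usermod -s /usr/sbin/nologin webftp
```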
<SpaceBass> hey folks
<SpaceBass> since upgrading to Hardy server, my ldap authenticated user can no longer sudo ... it's like it's not recognizing the network group in the sudoers list
<okaratas> hello channel
<ErikWestrup> How do I get the current time from the terminal?
<SpaceBass> time
<SpaceBass> oops ... its date
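For completeness, `date` also takes strftime-style format strings:

```shell
# Current date and time in the default format
date

# Just the wall-clock time
date +%H:%M:%S

# Seconds since the Unix epoch, handy in scripts
date +%s
```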
<SpaceBass> since upgrading to Hardy server, my ldap authenticated user can no longer sudo ... it's like it's not recognizing the network group in the sudoers list...anyone know why that might be?
#ubuntu-server 2008-08-31
<omaru> Hi everyone, I have a noob question, I was wondering if it is possible to add a user group to another user group, does anyone know?
<omaru> I mean if is it valid
<SpaceBass> since upgrading to Hardy server, my ldap authenticated user can no longer sudo ... it's like it's not recognizing the network group in the sudoers list...anyone know why that might be?
<Kamping_Kaiser> omaru, yes you can, a group is just a group :)
<omaru> Kamping_Kaiser, sweet, thanks :D
<Kamping_Kaiser> no worries
<myr> greetings, need help installing on a dell gx1, i get a error saying it can't read files from the disc. can anyone help? more details here http://ubuntuforums.org/showthread.php?p=5697083
<jameswf-home> anyone have any thoughts on why, though i have specified keyboard lang etc in the .seed file, it still prompts?
<FuRom> Would anyone know where I could read up about managing my own DNS alt root? It's hard to find out information on. There's a wikipedia page about it, but it's not very useful =/.
<uvirtbot> New bug: #263228 in samba (main) "problems with file-permissions using linux-cifs-client" [Undecided,New] https://launchpad.net/bugs/263228
<oxmox_> hi, i have a problem with postgresql: the egroupware setup tells me that i have no pgsql support, but i have installed php5 with pgsql support. postgresql is already running with another application
<RainCT> Hi
<RainCT> I'm trying to setup polipo but it doesn't work on any computer beside that one where it is installed... http://paste.ubuntu.com/42148/plain/  Any idea what the problem could be?
<RainCT> (it isn't iptables, after "sudo iptables -F" it still doesn't work)
 * delcoyote hi
<okaratas> delcoyote, hello
<delcoyote> hi de hi okaratas
<okaratas> :)
<okaratas> I am read to Linux Kernel in Nutshell
<okaratas> sorry, away messages..
<e-jat> could anyone help me up with this : http://paste.ubuntu.com/42166/ ?
<okaratas> tspc_2.1.1-6.1ubuntu1_i386.deb package any tspc programs with ipv6 running..
<okaratas> apt-get remove --purge tspc
<okaratas> modprobe ipv6
<okaratas> apt-get install tspc
<okaratas> what is your build tspc packages?
<jameswf-home> anyone have any thoughts on why, though i have specified keyboard lang etc in the .seed file, it still prompts? also, how do I skip user creation?
<jameswf-home> so it seems ubuntu wont let you skip user creation
<scientus__> how do i replace my server certificate with a real one
<scientus__> ?
<flea> i am noticing that iptables provided with apt does not seem to provide 'pptp-conntrack-nat'  is this available anywhere w/o patching source ?
<okaratas> flea: upgrade your kernel and iptables with the "layer7" patch
<okaratas> curl http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.26.3.tar.bz2 -o linux-2.6.26.3.tar.bz2
<okaratas> tar jxvf linux-2.6.26.3.tar.bz2
<okaratas> mkdir ~/linux
<okaratas> mv linux-2.6.26.3.tar.bz2 ~/linux/
<okaratas> apt-get install libncurses5-dev fakeroot kernel-package bzip2
<okaratas> make menuconfig
<okaratas> okey flea ?
<okaratas> Networking -> Network Options -> search the Layer7 enabled (*)
<okaratas> layer7 patching please..
<okaratas> find a sourceforge.net with l7-filter (patch)
<okaratas> the idea: with the layer7 patch and an upgraded kernel, you get pptp-conntrack-nat options for iptables filtering..
<okaratas> and logging :)
<lukehasnoname> Is ufw a frontend for iptables, or its own program?
<guntbert_> lukehasnoname: just a frontend
#ubuntu-server 2009-08-24
<leaf-sheep> Help. For some reason, my Ubuntu installation is unable to go beyond 33% of partitions formatting for 2TB HDD. I'm wondering if I did something wrong.  Maybe I'm supposed to set up RAID or something
<qman__> leaf-sheep, RAID is only for multiple disk setups
<qman__> and only in certain cases
<qman__> what filesystem are you using?
<PhotoJim> I'm googling, but I'm not aware of any issues with drives that size.
<qman__> 2TB is a lot of data, and would take a very long time to format in ext3
<qman__> how long did you let it set?
<PhotoJim> it took me the better part of an hour to format a one terabyte drive, but that was on an older PIII server.
<qman__> I would guess 2TB would take about two hours to format in ext3, based on my experience in setting up my RAID
<PhotoJim> ext3 should be fine.  max. vol. size depending on implementation is 2-16 TiB.  and a 2 TB drive would not be 2 TiB, it'd be 2 trillion bytes, significantly smaller.
<qman__> yeah, it just takes a while to format the drive
<qman__> I have a 3.3TB RAID array
<qman__> in ext3
<PhotoJim> depends on blocksize, apparently.  1 KiB blocksize caps at about 2 terabytes.
<qman__> yeah, I know the limit of a default ext3 setup is ~8TB
<PhotoJim> with 4 KiB blocksize, which is the max. on most platforms.
<PhotoJim> so that makes sense.
<PhotoJim> so either it just needs more patience... or it's a bad drive or controller.
<qman__> I need to decide on a new filesystem to use, my array is full and I need to upgrade
<qman__> right now the choices are reiser and ext4
<PhotoJim> you can't stick with ext3?
<qman__> only to 8TB, but if I upgrade I don't think that'll be big enough
<PhotoJim> ext4 is still a bit new for me. and reiser's future is in some doubt due to Herr Reiser's incarceration.
<leaf-sheep> qman__: EXT4.
<PhotoJim> ahh.
<qman__> I trust ext3
<qman__> I don't trust XFS
<qman__> the others are up in the air
<giovani> xfs is relatively unstable on a regular basis
<giovani> ext4 is the future for ubuntu
<giovani> although you may want to be aware of some options that ensure secure journaling
<giovani> which aren't default in ubuntu afaik
<leaf-sheep> It's not just formatting.  See, when I started the installation process... to the point of formatting and setting up disks to my liking, it'll start right away at 33% and idle -- but I can't be certain about background activity, since 2TB is a large HDD.
<giovani> leaf-sheep: you can go to the console
<giovani> and look at activity
<giovani> via iostat, etc
<PhotoJim> looks like ZFS is an option too
<qman__> leaf-sheep, the progress bar doesn't show the formatting process
<giovani> or, if you're physically at the machine, obviously looking at the HD/CD activity will tell you if something's going on
<qman__> it's divided up into how many partitions you have
<qman__> so if you have a /boot, /, and swap
<leaf-sheep> giovani: Console? I came in here because I know people in here maintain servers all times (lot of HDDS).
<giovani> ZFS support in linux isn't significant
<qman__> so it's logical it would sit there on a large partition
<leaf-sheep> I'm setting this up for HTPC.
<giovani> leaf-sheep: understood -- but you can switch into the console during installation and check for yourself if it's active
<qman__> ...one of my lines didn't show up
<giovani> since you were unclear if it was actively writing to the disk
<qman__> if you have a /boot, /, and swap
<qman__>  /boot is 33%, / is the next 33%, swap is the final 33%
<giovani> qman__: depending on the disk order, no?
<qman__> even if /boot is a few megs, / is almost 2TB, and swap is a gig
<qman__> yes
<leaf-sheep> 100MB boot, Encryption --> LVM --> [2GB swap, 10GB root, Remaining for home]
<qman__> oh, you're using LVM
<qman__> that's why
<giovani> and encryption :)
<qman__> it'll probably take about two hours to do the LVM
<leaf-sheep> Encryption, indeed. ;)
<qman__> then a while for the encryption
<giovani> nah, it shouldn't take that long
<giovani> I just did a new 1.5TB drive on LVM
<giovani> took 10-15 min
<qman__> wow
<qman__> that's quick
<qman__> but yeah, the process is generally slow, just let it do its thing
<giovani> and ... why speculate
<qman__> if it's not done in a few hours, then something's wrong
<giovani> just check yourself if there's disk io
<giovani> there's no need to wait and guess
<leaf-sheep> Oh I'm installing Ubuntu Minimal from USB -- Would that cause issues?
<giovani> not during disk partitioning
<qman__> no
<PhotoJim> shouldn't.
<giovani> why are we still speculating?
<leaf-sheep> There are no official minimal usb other than UNR.
<giovani> leaf-sheep: please switch into the console and run iostat
<giovani> this will answer your question
<leaf-sheep> Is it possible to obtain the "Release file" for my local apt-mirror?  Getting base packages from the official us.archive.ubuntu.com takes long. I set up apt-mirror because I'm sick of waiting for packages and getting a failed installation every time.
<qman__> I usually skip networking during the install to speed things up
<leaf-sheep> giovani: I'm getting jaunty usb. Somehow I ended up with karmic version.
<qman__> then update later
<giovani> leaf-sheep: sure ... set up your sources.list -- and it'll work
<giovani> leaf-sheep: that's not good -- karmic is not stable at all
<leaf-sheep> giovani: I know. I clicked at the bottom and I assumed it was Jaunty. :<
<qman__> but yeah, setting up 2TB that way is a slow process, and the progress bar does not indicate how far along each step is, just to which step you're at
<leaf-sheep> Yeah, but to start off at 33% right away?
<qman__> normal
<qman__> your 100MB boot partition takes a split second
<giovani> leaf-sheep: have you not been listening to what qman__ has been explaining?
<qman__> also
<qman__> I suggest you up that to about 256MB
<qman__> you'll run out of room for kernel updates
<qman__> just my opinion though
<giovani> 100MB is standard, and fine, imo
<giovani> why would you need more than 3-4 back kernels?
<qman__> you don't, but it keeps them automatically
<giovani> sure
<qman__> it's just an annoyance
<giovani> but by that logic, presuming you upgrade to every offered one ... you'll run out in short order anyway
<PhotoJim> I agree to make it bigger, you won't miss 156 MB of disk space and it saves hassles if you don't punctually delete old kernels.
<PhotoJim> not critical.  but convenient.
<giovani> heh
<giovani> but then he'll just run out later
<giovani> same process, different timing
<PhotoJim> significantly later.
<qman__> yeah, but you don't have to clean out the old kernels as often
<giovani> 1.5 times longer
<PhotoJim> 2.5.
<giovani> no, 2.5 total
<leaf-sheep> giovani: I'm putting it on usb... almost done.
<giovani> 1.5 longer
<PhotoJim> 256/100 = 2.56 actually.
<giovani> anyway
<PhotoJim> 3 hours is 3 times as long as 1 hour, not 2 times as long.
<leaf-sheep> qman__: I checked my laptop kernels where I kept them. It's in total of 46MB.  3 or 4 kernels there.
<giovani> PhotoJim: but it's 2 times longER
<giovani> 3 times AS lon
<giovani> long*
<PhotoJim> "times" = multiplication.
<giovani> yes
<PhotoJim> by definition.
<giovani> longER
<giovani> implies in addition to
<PhotoJim> you're using it wrong.
<qman__> I've got only one kernel on my router
<PhotoJim> 256% as much time, but 156% additional, if you must.
<qman__>  /boot is using 24M
<qman__> like I said, 100M is enough, it's just my preference to use 256
<qman__> won't miss that extra space and updates are less annoying that way
<PhotoJim> my /boot hasn't been cleaned out in awhile, 64 MB
<PhotoJim> that's only 5 kernels.
<PhotoJim> oldest April 1/09.
<leaf-sheep> giovani: What did you say about iostat? I'm at install prompt.  Do you mean for me to start a VT in boot process afterward?
<giovani> leaf-sheep: I meant to go into the console during the installer, since you're curious what it's doing
<giovani> you can do a ps aux
<giovani> or if you have iostat you can run that
<giovani> and both will make it clear what the installer is doing, if it's doing anything, or if it's hung
<leaf-sheep> Console? That's CTRL + ALT + F[1-6] we're talking?
<qman__> just alt
<qman__> control is only necessary if you're in X
<giovani> alt-f2 or f3, i forget which is which
<giovani> in the installer
<leaf-sheep> Okay. I'll set up everything right to the 33% part.
<giovani> this lack of experimentation and hand-holding will get old quickly
<leaf-sheep> giovani: Meaning I can point /etc/apt/sources.list at my local mirror? although I tried a different mirror which lacks the Ubuntu Release file (something) for verification
<leaf-sheep> I guess not.  Now it's getting all packages from server... again.
<leaf-sheep> We'll sit tight. Please don't go anywhere. I'm frustrated and would love to get this done with. I have been trying this all night (last night) to this morning.  Fell asleep. Woke up and I'm at it again. :)
<HellMind> how tf do i port forward with UFW, I HATE it :(
<qman__> heh
<qman__> port forwarding with iptables takes four lines per port forward
<qman__> I've got a nice loop in my script that reads from a file
<jdstrand> HellMind: ufw does not support port forwarding with the command line interface. Please read the man page for what it supports. You may use iptables-restore style rules in /etc/ufw/before.rules to achieve anything iptables can do. See https://help.ubuntu.com/9.04/serverguide/C/firewall.html for details
<HellMind> I hate that
<jdstrand> HellMind: patches are welcome
<HellMind> my patch will be rm -fr ufw
<jdstrand> HellMind: you are free to use any firewall application you wish. if ufw does not suit your needs, try another listed in https://help.ubuntu.com/9.04/serverguide/C/firewall.html
<HellMind> Im using 8.04
<jdstrand> https://help.ubuntu.com/8.04/serverguide/C/firewall.html
<HellMind> stop pasting that
<jdstrand> *8.04*
<HellMind> its the same doc -_-
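For reference, the /etc/ufw/before.rules route jdstrand describes might look like this; the interface, port, and target address are illustrative examples:

```
# /etc/ufw/before.rules -- add a *nat table block near the top,
# before the existing *filter section (assumed example values)
*nat
:PREROUTING ACCEPT [0:0]
# Forward incoming port 8080 on eth0 to an internal host
-A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 192.168.0.5:80
COMMIT
```

After saving, `sudo ufw disable && sudo ufw enable` (or `ufw reload` on releases that have it) applies the rules.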
<oh_noes> is it possible to create a VM with two virtual hard disks in python-vm-builder?
<uvirtbot> New bug: #293548 in gvfs (main) "can't save file in samba share (dup-of: 286828)" [Low,Fix released] https://launchpad.net/bugs/293548
<rosa> Hi there, I have an ubuntu domain member with samba and it was working fine for a while but lost the connection... the users can't see the samba folder and I need to restart the server so they can see it again...
<rosa> somebody know what can be the problem?
<rosa> hi, does somebody know: if I have a domain member server which loses the connection with the domain server every 5 hours, what could be the problem?
<jmarsden> rosa: I'm not at all an expert on SAMBA domain stuff, but look at the samba log files for clues.  If necessary, turn up samba logging so you get more info in the logs to work from.
<rosa> i am not sure if the problem is that but i am getting this error: winbindd: Exceeding 200 client connections, no idle connection found
<jmarsden> OK, sounds like you need to configure samba to allow more simultaneous winbindd client connections :)
<leaf-sheep> I wonder if Ubuntu-Server.iso would benefit me more for pure XBMC edition -- with occasional samba and cups sharing, even its being a torrentbox?
<jmarsden> leaf-sheep: https://help.ubuntu.com/community/ServerFaq#What%27s%20the%20difference%20between%20desktop%20and%20server?
<leaf-sheep> jmarsden: I'm looking for something of minimal disc with at least all packages for ubuntu-minimal with it.  Minimal disc itself retrieve packages from the Internet.
<leaf-sheep> jmarsden: I'm wondering if Server edition is what I'm looking for... as I'm sick of failed attempts and having to retrieve all packages from Internet more than 10 times. :|
<jmarsden> Can't you do a minimal install from the "normal" Desktop install CD?
<leaf-sheep> jmarsden: HTPC does not have a cdrom. I'm doing this from Ubuntu, and with the Desktop install CD it'll install the full ubuntu-desktop.
<PhotoJim> Alternate CD, I think, not Desktop.
<leaf-sheep> PhotoJim: It still installs ubuntu-desktop.
<jmarsden> I'll play in a VM...
<ScottK> Basic server install is a lot smaller than a desktop install
<leaf-sheep> Does Server edition installer supports LVM + Encryption?
<leaf-sheep> LVM, yes. I know this for sure.
<PhotoJim> leaf-sheep: I'm reasonably sure it has the option of doing a very basic installation.  if you can't get Ubuntu to do that, I know you can do that with Debian.  so it would surprise me that you couldn't with Ubuntu.
<rosa> jmarsden, do you know how i can do that?
<leaf-sheep> PhotoJim: I'm confidently sure that alternative disk is same as desktop disk minus the GUI, and plus the lvm + encryption.  That's all. I use it many times -- but yeah, they should prompt tasksel by default (and that's in minimal disc).
<leaf-sheep> I think I'll remaster one in the future for ubuntu-minimal as default packages instead of ubuntu-desktop in the future for USB.
<PhotoJim> leaf-sheep: alright.  can't say I've done it, so I'll defer to your experience.
 * leaf-sheep says future twice... Silly me. :<
 * leaf-sheep gets server edition because he's certain that ubuntu-minimal is installed by default.
<twb> The alternative CD ought to be identical to the desktop CD in the set of packages installed.
<twb> The difference is chiefly debian-installer vs. ubiquity + live CD
<jmarsden> rosa: No... try googling that error message
<twb> However the differences between ubuntu-server and alternative are the set of packages cached on the CD (so they don't have to be downloaded), and the tasksel and locale packages installed by default.  See the preseed/ directory on the server CD.
<twb> There's no fundamental reason why you couldn't install either desktop or server using e.g. the mini.iso
<jmarsden> rosa: Maybe try the ideas in http://magazine.redhat.com/2008/06/02/tips-and-tricks-i-get-the-error-winbindd-exceeding-200-client-connections-no-idle-connection-found/
<jmarsden> Looks like the 200 is a compiled in value, see http://fixunix.com/samba/348340-samba-winbindd-exceeding-200-client-connections-no-idle-connection-found.html for some discussion.
<leaf-sheep> twb: Slow network. I'm trying to install an HTPC from a laptop (bridged).
<twb> leaf-sheep: HTPC?
<leaf-sheep> It takes awhile to get a base installation.  (Need a long cable to my room).
<leaf-sheep> twb: Yes. Home Theater PC.
<rosa> thanks, the weird thing is i dont have 200 users... does that mean connections? but still i dont think i have more than 200 connections
<jmarsden> See the RedHat article for how to get it to list the connections...
<rosa> thanks i will check it
<rosa> l
<cemc> can I somehow refresh the list in /dev/disk/by-label ?
<jmarsden> Well, a reboot will most likely do that :)
<jmarsden> Probably umounting and then mounting the volumes concerned would also do it.
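For reference: the /dev/disk/by-label symlinks are maintained by udev, so besides a reboot or remount you can usually ask udev to regenerate them. A sketch, assuming a udev recent enough to ship `udevadm` (older releases used separate `udevtrigger`/`udevsettle` binaries):

```
# re-read the partition tables, then have udev re-run its rules
# so the by-label symlinks are rebuilt
sudo partprobe
sudo udevadm trigger --subsystem-match=block
```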
<rosa> hi somebody know where i can configurate it ?WINBINDD_MAX_SIMULTANEOUS_CLIENTS ?
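For reference: later Samba 3.x releases expose this limit as the `winbind max clients` smb.conf parameter; in older builds WINBINDD_MAX_SIMULTANEOUS_CLIENTS is a compile-time constant, as the fixunix thread above notes. A sketch, assuming a Samba version that supports the parameter:

```
# /etc/samba/smb.conf
[global]
    # default is 200; raise it if winbindd logs
    # "Exceeding 200 client connections, no idle connection found"
    winbind max clients = 500
```

followed by restarting winbind, e.g. `sudo /etc/init.d/winbind restart`.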
<rosa> somebody here?
<uvirtbot> New bug: #343738 in vsftpd (main) "vsftpd max username length too small" [Medium,Fix released] https://launchpad.net/bugs/343738
<johe|work> good morning
<soren> ttx: Any idea what might be causing this? http://launchpadlibrarian.net/30714354/buildlog_ubuntu-karmic-i386.eucalyptus_1.6%7Ebzr452-0ubuntu2_FAILEDTOBUILD.txt.gz
<ttx> soren: looking
<soren> It's during the java build, where it errs out with a:
<soren>       [ERROR] Unexpected internal compiler error
<soren> java.lang.StackOverflowError
<ttx> soren: strange, it built on lpia
<ttx> even stranger, the GWT stuff is compiled arch:all
<soren> hm?
 * ttx compares build logs, just a sec
<ttx> soren: that's strange... This should behave the same whatever the arch, it's some java compile and even the JARs used are arch:all
<ttx> soren: so apart from a i386-specific openjdk-6 issue...
 * soren is tempted to just retry the build
<ttx> soren: that idea crossed my mind as well :)
<soren> but first, I'll see if I can reproduce it locally.
<ttx> soren: could you reproduce it ...
<ttx> you read my mind
<ttx> soren: hmm. http://extjs.com/forum/showthread.php?t=73875
<ttx> soren: If you can reproduce it locally, I'll PPA this fix in GWT so that you can confirm it fixes the issue: http://code.google.com/p/google-web-toolkit/source/detail?r=5262
<soren> ttx: So... Do I need to patch gwt or can I just pass the greater stack size during the eucalyptus build?
 * soren tries it locally now.
<ttx> The first solution is the fix, the second solution is the dirty workaround
 * soren nods
<ttx> the fact that the error triggers only on i386 tends to prove you're at the trigger limit anyway
<ttx> since there is probably nothing arch-specific tere
<ttx> there
<ttx> soren: let me rephrase, the second solution is not a dirty workaround. It's perfectly acceptable
<ttx> soren: current GWT uses more stack than it should. Increase stack size to make it compile is perfectly ok.
<ttx> soren: not sure you have easy access to JVM parameters in the build though... so fixing GWT so that it's leaner might be a better solution.
<soren> ttx: Well, there's an ant xml snippet in that extjs thread.. I just don't know where to put it.
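For reference: the workaround discussed here amounts to forking the GWT compiler into its own JVM with a larger thread stack. A sketch of such an ant fragment (the compiler class name, -Xss value, and module name are assumptions; older GWT releases used com.google.gwt.dev.GWTCompiler, newer ones com.google.gwt.dev.Compiler):

```
<!-- build.xml fragment: fork the GWT compiler with a bigger stack -->
<java classname="com.google.gwt.dev.GWTCompiler" fork="true" failonerror="true">
  <jvmarg value="-Xss16M"/>
  <arg value="com.example.MyModule"/>  <!-- hypothetical module name -->
</java>
```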
<soren> ttx: Could you try sticking that gwt patch in your ppa?
<ttx> sure, I'm on it
<soren> I could not reproduce it locally, by the way.
 * soren tries it again, just for kicks.
<ttx> heh, sounds like retrying the build might work, then :)
<ttx> soren: could you file a quick bug against GWT, so that I reference the fix ?
<soren> sure.
<ttx> soren: Is there any point in PPAing it, if you can't test the fix locally ?
<soren> bug #418022
<uvirtbot> Launchpad bug 418022 in gwt "GWT builds use too much stack space" [Undecided,New] https://launchpad.net/bugs/418022
<soren> ttx: Well... I could try uploading it to my ppa to test it.
<ttx> ok
<soren> ttx: To be honest, at this point, I don't see any particular reason to be so careful.
<soren> ttx: Just upload it, I'll retry the build, and we'll see how it goes.
<ttx> ok
<soren> We've got plenty of time to fix stuff.
<soren> It succeeded again locally, it seems.
<soren> Yup, just finished.
<soren> ttx: Ok, it seems only axis2c and rampart are missing MIRs.
<soren> I'll see if I can get someone to file it for me. I'd like to get crackin' on the Elasticfox thing.
 * twb wonders when -server became -devel
<twb> or -motu or whatever
<soren> twb: It always was :)
<soren> twb: Up until a couple of years ago, it was /only/ about server development.
<twb> Ha
<ttx> twb: we just don't use the channel as much as we should.
<soren> twb: We've just not been very good at using it for development stuff.
<soren> Heh :)
<ttx> soren: your fix is in.
<kwork> yes use it for development then i can scam off all the knowledge
<maswan> Use it for development to fix all my pet issues, even better! ;)
<kwork> so any services use upstart with karmic ?
<roxy09> hi there, i want to upgrade my version 8.04 to 9.xx.. is it possible to do easily or do i need to install and configure everything again?
<twb> roxy09: upgrading in-place is a well-supported model.
<twb> Unlike those RHEL idiots...
<roxy09> jaja
<roxy09> sorry haha (in english)
<roxy09> what is the command to upgrade to the last version?
<\sh> roxy09: do-release-upgrade ?
<mattt> twb: it's possible on RHEL, just not commonly done :)
<roxy09> thanks :)
<twb> mattt: it's not SUPPORTED on RHEL
<kwork> roxy09, just did it few days ago from 8.04 to 8.10 and then to 9.04 worked like a charm :)
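For reference: non-LTS releases can't be skipped, so the path kwork describes is stepwise. A sketch, assuming /etc/update-manager/release-upgrades has Prompt=normal so that non-LTS releases are offered:

```
sudo apt-get install update-manager-core
sudo do-release-upgrade      # 8.04 -> 8.10, then reboot
sudo do-release-upgrade      # 8.10 -> 9.04, then reboot
```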
<kwork> thou one thing confuzed me, is 9.04 lts ?
<\sh> kwork: nope
<leaf-sheep> !lts | kwork
<ubottu> kwork: LTS means Long Term Support. LTS versions of Ubuntu will be supported for 3 years on the desktop, and 5 years on the server. The current LTS version of Ubuntu is !Hardy (Hardy Heron 8.04).  The next LTS release is scheduled to be 10.04
<kwork> ah okey then no confuzion
<roxy09> my problem is i am not sure if the bugs that 8.04 has are solved in 9.04, or will i have more problems with this version?
<kwork> you need to consult launchpad on that
<kwork> find your bug id and see where its fixed
<kwork> or whats the bug status
<roxy09> i have samba bugs
<kwork> im not bug database
<roxy09> i need to install version 3.4
<kwork> search the db
<kwork> !llaunchpad
<ubottu> Sorry, I don't know anything about llaunchpad
<kwork> !launchpad
<ubottu> Launchpad is a collection of development services for Open Source projects. It's Ubuntu's bug tracker, and much more; see https://launchpad.net/
<kwork> !bug tracker
<kwork> !bug
<ubottu> If you find a bug in Ubuntu or any of its derivatives, please file a bug using the command « ubuntu-bug <package> » If that fails, you can report bugs manually at https://bugs.launchpad.net/ubuntu/+filebug - Bugs in/wishes for the bots can be filed at http://bugs.launchpad.net/ubuntu-bots
<roxy09> dont stress!!! i asked u before u said that...
<kwork> no stress here yet :P
<roxy09> somebody know how i can replicate and configurate ldap server?
<roxy09> i mean condigurate a ldap backup server
<twb> roxy09: "configure"
<roxy09> yes
<roxy09> somebody know how i can configure a ldap backup server?
<roxy09> hi i tried to upgrade from 8.04 to 9.04 but i can't, somebody know the command... i tried before with do-release-upgrade
<roxy09> somebody know why i can't upgrade my version? when i run it, it tells me i have the latest version but still i have 8.04
<\sh> do-release-upgrade -d
<\sh> brb
<AlexC_> morning,
<AlexC_> SSH has suddenly stopped/crashed on a Lenny server of mine, of which is 200 miles physically away from me. I have zero access to this server now, as even the Dell Remote Access Controller has given up (given XML parse errors). Is there any way you can think of that I can purposely crash this server to get it to restart?
<AlexC_> sorry it's non-ubuntu, thought there would be someone with a clever idea, though no worries if it's off-topic I'll leave it
<twb> AlexC_: call your colo guy and get him to kick it
<AlexC_> see, thing is - the keys to the server are also sitting to my left :P
<twb> AlexC_: even the power cable?
<roxy09> thanks sh, do you know how long it takes?
<AlexC_> twb, true, true - that'd work
<twb> AlexC_: or maybe serial cable to a box they have that you CAN ssh into and run screen /dev/ttyS0 on
<roxy09> also, if i do that what happen with my kubuntu ?
<AlexC_> twb, will have to see if they can do that. I'm not even sure if our guy will be in the area today, if not - any sane way of crashing it?
<twb> AlexC_: is it running PHP?
<AlexC_> twb, yes, though scripts are running as their own user - and the downside to me being security concious, there are no exploits that I could use to bring it down
<twb> AlexC_: none you know of, anyway
<AlexC_> ;)
<maswan> roxy09: do-release-upgrade is roughly the same as the graphical upgrade, and the time depends mostly on the number of packages you have installed
<twb> And the number of updates, and the speed of the mirror
<roxy09> :) thanks a lot mates!
<roxy09> meanwhile... i am having another problem, amavis is using a lot of CPU, somebody know about that?
<twb> Isn't that amavis' job?
<roxy09> yes, but 99% of the CPU, and it stops the mail services from doing their job
<soren> ttx: ..rebuilding eucalyptus on i386.
<soren> ttx: \o/
<roxy09> hi somebody know what devecop does?
<Alblasco1702> Hallo roxy09 did you mean dovecot?
<roxy09> yes
<roxy09> i am upgrading ubuntu, should i keep the conf files or let the upgrade replace them?
<Alblasco1702> dovecot is an IMAP server
<Alblasco1702> roxy09 i make a backup of my configuration files and then let the upgrade replace them, but that's up to you
<roxy09> hi i am getting an error with spamassassin... when i restart it tells me it can't find some files, i tried to reinstall but still the same problem, it says it can't find some perl lib
<roxy09> hi there, i am having a problem restarting my spamassassin application, i got the error:  Can't locate MLDBM/Sync.pm
<kwork> install the lib ?
<kwork> or perl module even
<roxy09> which one... i tried to install MLDBM but it said i have the latest one
<kwork> http://cpansearch.perl.org/src/CHAMAS/MLDBM-Sync-0.30/Sync.pm
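For reference: on Ubuntu the missing Perl module is usually easiest to get as a distro package. A sketch (the package name is an assumption; verify with `apt-cache search mldbm`):

```
sudo apt-get install libmldbm-sync-perl
# or, if no package is available, via CPAN:
sudo cpan MLDBM::Sync
```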
<roxy09> hi i got a problem with amavisd-new using a lot of CPU, somebody know about this problem?
<leaf-sheep> giovani, qman__ PhotoJim: Yoohoo. It's just me and the installer's lack of feedback. I tried it again and watched a few shows. Came back and it was done. Lulz. ;3
<roxy09> :S I changed my version to ubuntu 9.04 and now samba doesn't run...
<kwork> you should really get people to upgrade your box who knows what they are doing :P
<kwork> and on the subject i upgraded from 8.04 to 8.10 -> 9.04 and my samba works just fine
<roxy09> thanks !!!
<Sky[x]> #samba ? :D
<uvirtbot> New bug: #418117 in postfix (main) "package postfix 2.5.5-1.1 failed to install/upgrade: subprocess post-installation script returned error exit status 75" [Undecided,New] https://launchpad.net/bugs/418117
<ivoks> could someone look at 416970?
<ivoks> but 416970
<ivoks> lol... bug 416970
<uvirtbot> Launchpad bug 416970 in openais "Please sync openais 1.0.0-3 (main) from Debian experimental" [Undecided,New] https://launchpad.net/bugs/416970
<roxy09> im getting this error: final write to client failed: Broken pipe
<roxy09> somebody know about that?
<garymc> Hi. Anyone know how I replace a faulty hard drive? Im using HP Proliant G3 boxes with 6 hotswap scsi drives in each. One has gone faulty
<Faust-C> garymc, are you using mdadm?
<garymc> i wouldnt know?
<Faust-C> well unless youre using some kind of raid i don't know what to tell you
<garymc> how would i know. what is mdadm
<garymc> ?
<Faust-C> mdadm is linux's built in software raid
<ball> garymc: on a machine like that you should have someone to hand who knows about these things.
<ball> Are you responsible for the care and feeding of this beast?
<garymc> yep, would should could, cant afford them
<garymc> so trial and error for me
<Faust-C> garymc, well normally i would agree w/ the "trial and error" method
<ball> ...I hope it's nothing mission-critical then.
<Faust-C> but its very hard to help someone who is not familiar w/ the system in question
<Faust-C> but if its not mission critical ill try to help
<garymc> cool
<garymc> it aint mission critical yet
<Zerosan> Hello
<garymc> so need to know how to do it for when it is
<garymc> see my hard disk was displaying a red light on the bay. so normally means some sort of fault
<garymc> I took it out put another in its place system didnt work
<garymc> so replaced the faulty disk back in the bay restarted the server
<ball> Did the machine continue to run when you pulled the drive?
<garymc> No i shut it down first
<Faust-C> garymc, do you know if you are using any kind of raid or LVM
<garymc> when i restarted it said press F1 to recover hard disk, i did and now no light showing a fault
<Zerosan> Can someone help me with scanner sharing using sane? I'm using 9.04 and the scanner can already be found in the network, but when I try to access it using xsane or scanimage as a client, I always get the message that the Access has been denied.
<ball> garymc: when you powered it back up, did the RAID controller tell you it was rebuilding the array?
<garymc> Im using RAID 5
<garymc> ball: no it never
<garymc> Ball: it said recover hard disk data or something
<ball> garymc: when you power on, do you see a message to the effect of "hit F8 for RAID configuration"?
<Faust-C> Zerosan, is the scanner setup to allow remote connections
<ball> (may be some other key than F8)
<garymc> yes
<Zerosan> Faust-C: yes it is, .... wait
<garymc> i do
<Faust-C> Zerosan, also check the logs and see what they say
<ball> garymc: did you press that key?
<garymc> with new hard drive in?
<Zerosan> cat /var/log/syslog | grep sane says that access to my client has been granted
<Zerosan> "cat /var/log/syslog | grep sane"
<garymc> i did with new hard drive in and it said there where no configured logical drive did i want to create one
<ball> garymc: yes, power down the system, insert the new hard disk, power the system on and hit that key to enter the RAID Array setup program
<garymc> ok i did that and (ABOVE) is what it said
<Zerosan> Faust-C: "cat /var/log/syslog | grep sane" tells me that the access has been granted to my client
<ball> garymc: you may have lost all the data on your array then, unless someone set it up to use software RAID for some reason.
<garymc> I then shut down server and put faulty one back in. I thought maybe that would have killed the whole system
<Faust-C> Zerosan, hmm and you still cant scan
<garymc> once i put old faulty drive in it worked fine after data recovery
<Faust-C> funny part is i was working w/ a client the other day that had a scanner that works flawlessly w/ linux
<Zerosan> Faust-C: yes, what bothers me is the fact that there is no group with the name "scanner"
<Faust-C> Zerosan, maybe create that group and try it that way
<Zerosan> Faust-C: I'll do that now, brb
<Faust-C> Zerosan, i havent had much luck w/ linux and network scanners/printers
<ball> garymc: Assuming your data is backed up, I'd wipe that box and start from scratch.
<Faust-C> kk report back success or failure
<garymc> lol
<Faust-C> garymc, no seriously
<Faust-C> that way you can set it up correctly
<garymc> ball: What would i do when im using the system for real and a hard drive goes?
<garymc> setup the raid?
<Faust-C> well raid will be setup before hand
<Faust-C> then when hdd dies you will know how to use mdadm to replace the drive
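For reference: with mdadm, the drive swap Faust-C describes is a fail/remove/add sequence. A sketch (the array and partition names are hypothetical):

```
sudo mdadm --manage /dev/md0 --fail /dev/sdb1     # mark the dying member failed
sudo mdadm --manage /dev/md0 --remove /dev/sdb1   # detach it from the array
# physically swap the disk, partition it like the old one, then:
sudo mdadm --manage /dev/md0 --add /dev/sdb1      # rebuild starts automatically
cat /proc/mdstat                                  # watch rebuild progress
```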
<garymc> yes its setup, so why would i start from scratch?
<ball> garymc: I would run tests before you put it into production
<Faust-C> garymc, because you need to be able to fix it w/o any help
<ball> Faust-C: Is mdadm just for software RAID though?
<garymc> ball: thats the plan
<Zerosan> Faust-C: I just created the group: "scanner" and added the user: "saned" to it. Still, no go | getting the message "access denied"
<ball> garymc: I'd try using the RAID array setup program to build an array.  How large are your drives?
<Faust-C> ball, yes, depending on the raid card it would be better to use built in raid and not fake raid
<Faust-C> Zerosan, what is the make/model of the scanner in question
<garymc> ball: I have 146gb and 72gb drives
<Zerosan> Faust-C: Doesn't matter so much, scanning works flawlessly when I try scanimage as SU on the server
<ball> Faust-C: he's using SCSI and an HP ProLiant I think, so odds are good it's a real RAID controller
<garymc> 72Gb seem to go faulty
<garymc> but now its showing ok?
<ball> garymc: how many of the 146 Gbyte drives do you have?
<Zerosan> Faust-C: but not over the network or as a normal user
<Faust-C> Zerosan, hmm odd
<Zerosan> Faust-C: yup
<garymc> only one in this system
<Faust-C> ball oh yeah didnt think of that
<ball> garymc: how many of the 72 Gbyte?
<Faust-C> Zerosan, .... man i cant think of what would cause it not to work
<garymc> ball: 5
<Faust-C> login and back out w/ user you just added to scanner group
<ball> garymc: will your RAID controller let you build a RAID 1+0 array?
<Zerosan> Faust-C: k, wait
<Faust-C> and if al else fails submit help request to forum
<garymc> yes, but i heard that wasnt best for my system?
<garymc> not that thats true though
<Faust-C> garymc, raid 10 is general purpose imo
<Zerosan> Faust-C: I restarted saned, now it works, hehe | thanks for the help
<Faust-C> lol ok
<Faust-C> Zerosan, if you could document your success for others
<garymc> so you think I should run all my servers on RAID 1+0
<Zerosan> or not xD
<Faust-C> i, at least, would greatly appreciate it
<Zerosan> Faust-C: didn't work.... *sob*
<Zerosan> Faust-C: Only difference now is that I can see the list in xsane, but when I select one scanner and press okay, still getting access denied
<ball> garymc: That depends on the machine in question.  If your data fits on a 72 Gbyte disk, you could just mirror three of those.
<Faust-C> Zerosan, heh
<ball> ...or mirror two and have a third as a hot standby
<garymc> yeah, like a backup drive?
<giovani> xsane, on a serveR?
<Zerosan> giovani: no, sane on a server | xsane on the client
<giovani> ah
<Zerosan> giovani: to be precise saned
<giovani> scan-to-server saves everyone's lives :)
<ball> garymc: not a backup drive, but a drive that the server can use in the event that one of the other drives fails
<garymc> whats the diff between RAID 5 and RAID 1+0
<Zerosan> right now it only hurts :P
<ball> (backups are something else)
<Zerosan> giovani: do you know how to make it work properly?
<garymc> ball: ok im not sure how I would do that
<_ruben> garymc: a lot of performance
<ball> RAID 5 requires three drives.  RAID 1+0 requires 4, but is slightly faster.
<giovani> garymc: massive difference -- google has lots of answers -- it's not simple to explain quickly
<garymc> but sounds like thats what I should do
<ball> ...and arguably a little more resilient.
<ball> garymc: do you have a manual for your RAID controller?
<garymc> nope
<giovani> RAID10 is less efficient with space
<garymc> I bought the servers second hand
<_ruben> raid5 is nice for bulk storage (fileserver) .. raid10 is good for performance (virtual machines/databases/etc)
<ball> garymc: how large is your data set?
<garymc> not so large yet
<giovani> RAID5 'wastes' 1/Nth of the space you have -- where N is the number of disks
<giovani> RAID10 'wastes' half of the space you have
<ball> giovani: it's not wasted ;-)
<giovani> ball: hence the ''
<ball> It's used to save your ass ;-)
<_ruben> raid5 'wastes' your performance as well ;)
<giovani> I thought that was very clear
<_ruben> (when writing, not when reading)
<garymc> infact i think its the other way around with disk space
<Zerosan> why not use the word "reduces" instead of waste?
<garymc> lol
<ball> How about "uses"
<giovani> garymc: no ... it's not the other way around
<garymc> RAID 5 i have less space than with RAID 1+0
<giovani> false
<Zerosan> or "takes up"
<ball> garymc: if you're on 72 Gbyte drives, space is probably a non-issue for you
<ball> ...or you would be upgrading to larger disks.
<Zerosan> now, does anyone have expierence with saned on ubuntu-server 9.04?
<giovani> if you have 3 1TB drives in RAID5, 1TB will be used for parity, and 2TB will be usable
<ball> (which might be an idea anyway)
<garymc> yes, well I got someone picking up 6 146gb scsi right now
<giovani> if you use 4 1TB drives in RAID10, 2TB will be used for mirroring, 2TB will be usable
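giovani's parity arithmetic above generalizes to any drive count; a quick shell sketch of the capacity math (the drive count and size here are illustrative, not from the discussion):

```shell
# Usable capacity for N identical drives of size S (in GB).
N=4
S=1000
raid5_usable=$(( (N - 1) * S ))   # RAID5: one drive's worth goes to parity
raid10_usable=$(( N / 2 * S ))    # RAID10: half the drives hold mirrors
echo "RAID5: ${raid5_usable} GB usable, RAID10: ${raid10_usable} GB usable"
```

With 4 x 1TB drives this prints 3000 GB usable for RAID5 versus 2000 GB for RAID10, matching the trade-off described above.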
<garymc> which i will use for my asterisk server
<garymc> thats another story
<garymc> :)
<ball> garymc: Also be aware that every disk you put in the server adds to the heat inside the case.
<garymc> ahh right ok
<ball> ...so use the minimum number that gives you the reliability and performance that you need.
<garymc> i got them in an air conditioned room
<giovani> honestly
<giovani> there are plenty of studies
<ball> garymc: that doesn't matter ;-)
<giovani> that show that slightly hotter temps don't affect drive longevity
<garymc> no?
<giovani> so if the server was properly cooled with 2 drives, adding 4 more drives won't be a problem
<giovani> this obsession with super-cooling servers is wasteful
<garymc> not my electricity so not too bothered
<garymc> anyway I need to change or check a faulty disk drive
<giovani> well is the cooling auto-adjusted?
<garymc> whats the best way to go about it?
<ball> giovani: heat is the enemy of reliability though, not just for disk drives but for RAM and processors too.
<giovani> ball: that's a concept that's been taken beyond reality
<ball> giovani: not the way I do it.
<giovani> yes, in extreme temperatures, longevity/reliability are issues
<garymc> giovani: if you mean my Air con then yes, it keeps the room and 16 degrees
<garymc> *at
<giovani> but the suggestion that adding a few drives will in any significant way affect their longevity, is false
<ball> 16C seems excessively cold
<ball> Anyway
<ball> garymc: build a RAID array using the utility on the RAID controller card.
<giovani> 16C is FAR below standard
<garymc> well.... not a prob right now
<_ruben> garymc: find out the type of raid controller and then go find a linux managment tool for it
<giovani> I would not recommend wasting energy that way
<ball> garymc: then test that array.
<giovani> 22-23C is standard in most DCs
<garymc> is a raid array like create a new logical drive? etc?
<giovani> and even that is often not required
<ball> 23C sounds about right
<giovani> 72F is standard here
<ball> garymc: some RAID controllers refer to it that way
<garymc> yeah i can set it to 23C
 * ball breaks out the xcalc to do F->C
<giovani> the issue has more to do with airflow management
<giovani> and less to do with temperature
<garymc> right ok
<giovani> google runs their datacenter about 10-15 degrees hotter than "standard"
<giovani> because they know how to manage airflow
<garymc> so build an array using RAID 1+0
<ball> giovani: 72F ~= 22C
<ball> ...so that sounds good.
<ball> garymc: try RAID 1 with three disk drives, if it will let you.
<garymc> I wonder how big googles data centre is
<garymc> ok
<ball> garymc: they have several
<garymc> what about sparE?
<giovani> heh, 'several'
<giovani> they have dozens
<ball> garymc: RAID 1 with three disk drives writes the same data to three drives.  Up to two drives can fail before you lose all your data
<garymc> yep they are worth alot of dosh now
<ball> giovani: dozens == several
<garymc> and if one drive fails?
<giovani> then you replace it
<ball> garymc: if one drive fails the array controller should notify you of that
<garymc> how?
<giovani> by taking out the old one
<giovani> and putting in a new one
 * giovani smacks head
<ball> ...you replace it and tell the RAID controller to rebuild the array
<garymc> ball: when the system boots?
<ball> garymc: ideally you should run software that talks to the RAID controller and asks it about the health of the array.
<garymc> so a drive fails, while system is up. I see red light on drive
<garymc> ok so what software can i download in ubuntu server?
<garymc> and run in the gui?
<giovani> there is no gui
<giovani> if your raid controller supports hotswap
<giovani> you replace the drive, while the system is on
<garymc> hmmm?
<giovani> unless you *want* to take the system offline
<ball> garymc: You *may* be able to ask the RAID controller to rebuild the array without bringing the server down.
<garymc> ok just to let you know Im running LTSP
<garymc> hmm ok
<ball> ...if not, then you'll have to reboot, hit that key to get into the software on the RAID card and then rebuild the array that way
<giovani> if your raid controller supports hotswap (all decent ones do)
<ball> (which shouldn't take long on a 72 Gbyte disk)
<giovani> you don't take the system offline
<ball> giovani: right, I'm allowing for a possible lack of Linux tools for his RAID controller)
<garymc> rebuild mean create new logical drive?
<ball> garymc: no.
<garymc> cos I will try that now
<garymc> ahh ok
<giovani> ball: well then I'd ask/investigate the controller, rather than assuming
<ball> giovani: we've asked already ;-)
 * ball asks again
<Steve[mbp]> morning everyone!
<ball> garymc: when you power up your system, do you see a line of text that says "HP SmartArray" something-or-other?
<garymc> yes
<ball> garymc: please type here the entirity of that line.
<garymc> ok ill be back in a sec with that
<ball> giovani: in fairness, I'm also not used to Linux, so I don't know how functional the available tools for a SmartArray controller are.
<ball> At a minimum I'd like them to tell me about the health of my array.
<ball> It would be *really* nice if I could also ask it to rebuild the array in the event of a drive failure.
<garymc> right here we go
<garymc> slot 0: HP SMART ARRAY 5i Controller ........... initializing
<ball> garymc: hopefully someone here knows about Linux tools for those.
<garymc> ok. Does the line give any indicatioins?
<ball> garymc: Yes, it tells us what your RAID controller is.
<ball> (HP Smart Array 5i)
<ball> ^- you may want to write that down.
<uvirtbot> ball: Error: "-" is not a valid command.
<maswan> ball: You want hpacucli from the hp websites to manage arrays from the command line in the OS
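For reference: hpacucli can query and manage Smart Array controllers from the running OS. A sketch of typical health-check invocations (the slot number is an assumption based on the boot message quoted above):

```
sudo hpacucli ctrl all show status        # overall controller/array health
sudo hpacucli ctrl slot=0 show config     # logical and physical drive layout
sudo hpacucli ctrl slot=0 pd all show     # per-disk status, e.g. spotting a failed drive
```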
<garymc> ball: is it any good?
<ruben23> hi
<leaf-sheep> Hi alligators. :)  Is it possible to mirror from a website using rsync --no-parent (eg, getting all images on the said directory)?
<ball> maswan: good to know, thanks.
<Faust-C> leaf-sheep, look into using wget for that
<Faust-C> wget has a web spider feature iirc
<ball> garymc: I don't know, but I imagine it's adequate.
<leaf-sheep> Faust-C: I already have some local files, and when I run wget I see that it overwrites the files instead of checking whether they're the same.
<Faust-C> leaf-sheep, iirc there are options to _not_ overwrite
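For reference: wget can indeed skip files it already has. A sketch (the URL is hypothetical):

```
# -m mirror recursively, -np don't ascend to the parent directory,
# -N only download files newer than the local copy (timestamping)
wget -m -np -N http://example.com/images/
```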
<ruben23> i have installed samba on my ubuntu desktop trying to access a windows workgroup..but get error when trying ot access windows pc on gui--stating unable to mount network location.
<pmatulis> ruben23: gvfs should handle that, shouldn't need samba
<ruben23> pmatulis: whats that...?
<pmatulis> ruben23: Places (in the Panel)
<ruben23> pmatulis: can you give guide on it how do i do it..
<pmatulis> ruben23: sure, Places > Connect to Server
<ruben23> then..? thats all--->ill be able to connect to my windows client PC..?
 * pmatulis is taking the risk of getting swatted by the local server nazis
<leaf-sheep> ruben23: Try "network://" in the Nautilus address bar.  That may be all you need.
<gst> what is Nautilus?
<ball> Gnome's file browser I think
<roxy10> Hi there .. i can't connect from my windows client to the samba server, i get an error that the route to the machine wasn't found
<roxy10> i updated my version of ubuntu and i got this error, before it was working ok, somebody know what could have happened?
<garymc> ball: so what should I do. Do you know?
<roxy10> i really need help is 1 am and i need to resolve this problem ...please help!
<ball> garymc: How far did you get?
<garymc> i never
<ball> garymc: do you have any document files (or other data that you want to keep) on this server now?
<garymc> not really no
<garymc> but it took me a while to set it all up
<ball> garymc: set up in terms of software?
<garymc> yes... easily done again
<garymc> are you suggesting re install using RAID 1+0?
<garymc> not RAID %?
<ball> garymc: okay.  Are you using a different computer to access IRC?
<garymc> *5
<garymc> yes
<ball> Good.  How many 72 Gbyte disks are in the machine now?
<garymc> 5
<garymc> +1 146
<garymc> i could put 2 more 146 in instead
<ball> garymc: save them for when your data outgrows the 72 Gbyte drives
<garymc> right
<garymc> so.......................
<ball> gary: I'm thinking.
<garymc> cool
<ball> Does your RAID controller give you the option of using four disks to create a RAID 1+0 array?
<garymc> i go and check now
<ball> Thinking about it, if your data set is smaller than 72 Gbytes, I'd just shove two drives in there and mirror them.
<garymc> you dont want me to use 6?
<ball> (RAID level 1)
<ball> why use 6 if your data will fit on two?
<garymc> well im hoping my databse will become vast eventually
<garymc> my data?
<garymc> like the OS?
<ball> garymc: Use two now and then upgrade as it expands.
<garymc> well since i got loads of hard drives does it not make sense to just bung them all in?
<ball> Remove three 72 Gbyte disk drives and use the software on the RAID card to create one 72 Gbyte logical drive from a pair of 72 Gbyte physical drives (RAID-1)
<ball> garymc: no, that does not make sense.
<garymc> lol ok
<garymc> so i just bought loads of drives for no reason?
<ball> garymc: spares == a good reason.
<garymc> ok
<ball> You may want to create a second RAID-1 pair to backup onto (in addition to the tape backups you'll hopefully be making)
<garymc> you mean i need tape driveS?
<ball> ...either way, by using two drives now, you're leaving some drive bays empty for when you outgrow that 72 Gbyte array
<ball> garymc: a tape drive is a sensible investment.
<jmarsden> ball: Or set up RAID1 with a hot spare , if you really have plenty of drives available?
<garymc> i do have plenty of drives
<ball> jmarsden: I did think about that.
<garymc> ok
<ball> jmarsden: I like the thought that he could sustain two drives failing at once.
<garymc> so if I have say 4 drives and two hot spares?
<ball> garymc: will your RAID controller let you create a RAID1 array with three disk drives?
<ball> brb
<ball> (phone)
<garymc> hold on ill check
<garymc> ok
<roxy10> I can't browse from windows to the samba server, somebody know how i can figure out what the problem is?
<IvanCosta> Hi, guys.
<IvanCosta> Does someone know the difference between buffered and unbuffered ECC memory?
<jmarsden> garymc: I'd be looking to create a RAID1 array (two drives) plus a hot spare (third drive).  If you want another one for backup... duplicate that.  Now you have used 6 drives, which seems to be what you want to do anyway :)
<jmarsden> IvanCosta: Different technology.  Use the one your motherboard supports.  One has buffers on the modules, one does not...
<garymc> Ball: It will only let me create RAID 5 or RAID 0 with 3 drives in
<giovani> garymc: no, RAID5 requires 3+ drives
<giovani> RAID0 requires 2+ drives
<giovani> RAID1/RAID10 require 2+ drives in even increments; 2,4,6 etc
<garymc> what does RAID 1+0 require?
<jmarsden> garymc: 4 drives
<IvanCosta> Jmarsden: If I use one in place of the other, will it break the machine?
<giovani> err, RAID1 is 2 drives only rather, RAID10 requires 4,6,8,etc
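The drive-count rules above map directly onto usable capacity. A quick sketch (shell arithmetic, assuming equal-size drives; the 72 GB figure is from garymc's hardware, the functions are illustrative):

```shell
# Usable capacity for n equal drives of a given size (GB) -- a rough sketch.
raid0_usable()  { echo $(( $1 * $2 )); }        # striping: all capacity, no redundancy
raid1_usable()  { echo "$1"; }                  # mirror: one drive's worth
raid5_usable()  { echo $(( $1 * ($2 - 1) )); }  # one drive's worth lost to parity
raid10_usable() { echo $(( $1 * $2 / 2 )); }    # mirrored stripes: half the total

raid1_usable 72     # two 72 GB drives mirrored -> 72
raid5_usable 72 3   # three 72 GB drives -> 144
raid10_usable 72 4  # four 72 GB drives -> 144
```

So a 2-drive mirror plus a hot spare trades capacity for the ability to survive a failure and start rebuilding immediately, which is the trade-off being discussed here.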
<garymc> ok so Bill is on the phone at the minute
<jmarsden> IvanCosta: Probably not, it just won't work... I think!  But don't deliberately put the wrong kind of RAM in a motherboard, that's just not wise...
<garymc> he was asking if my RAID array will let me create RAID 1 with 3 drives. The answer is no
<giovani> garymc: no, RAID1 only works with 2 drives
<jmarsden> garymc: OK.  You may be able to create ie RAID1 with 2 and then add a hot spare drive to it?
<giovani> or just use RAID5
<ball> giovani: usually.
<garymc> so should I remove a drive and start again?
<garymc> I only get 3 RAID options
<garymc> RAID 5 - RAID 1+0 - RAID 0
<jmarsden> Deselect one of the drives and pick RAID1.
<IvanCosta> Jmarsden: Yes, you're right. But I only have buffered memory and it's very difficult to find unbuffered...
<ball> garymc: okay, so it looks as though your RAID controller wants to use three or four drives
<giovani> selecting RAID1+0 may have RAID1 as a suboption
<jmarsden> ball: He has 3 selected... :)
<giovani> look into it
<ball> garymc: does jmarsden's suggestion work if you select just two drives?
<garymc> yes i can select RAID 1+0 and use one as a spare
<ball> garymc: that's strange.
<garymc> so TWO as RAID 1+0 and 1 as a spare
<garymc> it is?
<ball> RAID 1+0 requires four drives (minimum).  Hopefully they've just mislabelled RAID-1 by putting it under the same heading.
<ball> If it lets you pair two drives as RAID-1, with one spare... that's a sensible choice.
<garymc> well i cant find any choice for RAID 1, unless you're abbreviating RAID 1+0
<ball> btw, that phone call was my boss saying he'd plugged something into the hub.  Unfortunately he plugged it into a port that he should never touch.
<garymc> cos I cant find any RAID 1
<garymc> ball: stupid bosses
<garymc> :)
<giovani> garymc: it's not going to offer you RAID1 with an odd number of drives selected
<giovani> it shouldn't
<ball> garymc: I think HP have just (confusingly) lumped RAID 1 and RAID 1+0 into one menu option.
<giovani> this is so not an ubuntu issue though
<kinnaz> you can create raid1 with one hot spare
<kinnaz> so 3 drives :P
<giovani> indeed
<giovani> we've mentioned this a number of times
<garymc> ok lets assume i have
<ball> giovani and kinnaz are right
<giovani> garymc: we've been hand-holding you this entire time
<giovani> I think it should stop
<giovani> call HP, use google, poke around the menus yourself
<giovani> we don't know your server, this isn't ubuntu-related at all
<ball> ...or /msg me.  We are tying up the channel.
<ball> Thanks everyone for being so patient with us.
<garymc> thanks
<garymc> sorry
<kinnaz> no worries
<giovani> anyone here running ext4 in production?
<sgsax> I hear it's stable
<kinnaz> too scared yet
<kinnaz> ill let you guys try it out :P
<giovani> heh, I've been running it in testing for a few weeks
<kinnaz> for sake of the testing
<giovani> but was talking in #ubuntu+1 about how data=ordered might have severe consequences for power loss
<kinnaz> try to poweroff the box
<Sam-I-Am> mathiaz: you around?
<kinnaz> giovani, so have you tested it with powerloss ?
<giovani> kinnaz: not yet
<giovani> I'll need to set up a new box for that
<giovani> it seems data-ordered is the issue to be concerned with, and can be changed
<kinnaz> i havent really dug into it, but is there a noticeable performance increase?
<mathiaz> Sam-I-Am: hey
<Sam-I-Am> mathiaz: fixed the nssov bug
<mathiaz> Sam-I-Am: \o/
<giovani> kinnaz: there may be some -- there's the potential for online-defrag, which is a huge deal to me
<mathiaz> Sam-I-Am: what was the issue?
<giovani> the online-defrag code is still unofficial at this point though
<Sam-I-Am> mathiaz: turns out it was calling an undefined symbol
<kinnaz> i think i havent ever defragged an ext partition, if fsck doesnt do it
<mathiaz> Sam-I-Am: which library is missing?
<Sam-I-Am> mathiaz: gave output from LD_DEBUG to hyc... he fixed it in HEAD... quanah is going to commit the fix to 2.4.18.
<mathiaz> Sam-I-Am: and how did you debug it?
<Sam-I-Am> mathiaz: it was just a bad function call... something that was apparently renamed
<Sam-I-Am> which oddly translated to 'file not found' as far as slapd's module loader was concerned
<mathiaz> Sam-I-Am: ok - so it was upstream, rather than the packaging
<Sam-I-Am> yes
<mathiaz> Sam-I-Am: great - thanks for taking this up to upstream, debugging it and reporting back
<Sam-I-Am> so the question is... do we want to wait for the fix in 2.4.18 or want me to submit patches for 2.4.17?
<mathiaz> Sam-I-Am: good question - we'll probably wait for 2.4.18
<mathiaz> Sam-I-Am: 2.4.18 will have support for disconnected mode in the pcache overlay
<Sam-I-Am> ok
<mathiaz> Sam-I-Am: which is something that was discussed at the last Ubuntu Developer Summit
<Sam-I-Am> what should we do with the ubuntu bug report i filed?
<mathiaz> Sam-I-Am: leave it open - I'll fix it with the upload of 2.4.18
<Sam-I-Am> sure
<mathiaz> Sam-I-Am: it's a bug in the current package and should be kept on track
<Sam-I-Am> guess we're good then... however, i will get some patches going to fix the nssov build/cleanup
<mathiaz> Sam-I-Am: if you could add a note to the bug stating that it was fixed upstream, with a link to the commit/ITS it would be helpful
<mathiaz> Sam-I-Am: did you talk with upstream about this too?
<Sam-I-Am> sure... not sure hyc got an its # yet, i'll ask
<Sam-I-Am> mathiaz: well, the nssov makefile could use a  'clean
<Sam-I-Am> er...
<mathiaz> Sam-I-Am: ok - even if there isn't an ITS, a link to the upstream cvs commit could be helpful
<mathiaz> Sam-I-Am: or at least a note that it has been fixed in 2.4.18
<Sam-I-Am> 'clean' target... but also since the deb build doesnt copy the contrib tree into the build dir, it doesnt remove the temporary build files
<mathiaz> Sam-I-Am: just to keep track of where things are and where to look for if needed
<Sam-I-Am> so i was thinking about adding a clean target to nssov... then in debian/rules, rm the temp files.. or call make clean (if thats possible)
<Sam-I-Am> sure
<mathiaz> Sam-I-Am: isn't a clean target in the Makefile + rm in debian/rules redundant?
<mathiaz> Sam-I-Am: I though the two options were:
<mathiaz> Sam-I-Am: 1. add clean to nssov Makefile and call make clean in debian/rules
<Sam-I-Am> yeah it would be... i just didnt know if it was good practice to call another makefile directly from rules
<mathiaz> Sam-I-Am: 2. Add rm in debian/rules
<Sam-I-Am> depends on if we want to touch the nssov makefile or just debian/rules
<mathiaz> Sam-I-Am: option 1. would be suited for inclusion in upstream source
<mathiaz> Sam-I-Am: while option 2. would be debian specific
<Sam-I-Am> yup
<Sam-I-Am> your choice... i've done both
<mathiaz> Sam-I-Am: let's *try* option 1 and ask upstream about their opinion - if they take the patch, then go for 1.
<Sam-I-Am> will do
<Sam-I-Am> guess i'll file the its... dont see one yet
<Sam-I-Am> hyc will just bless it
<roxy10> hi i got this error with winbind, "Exceeding 200 client connections, no idle connection found" ...does somebody know about it?
<zklaus> Hi there, I have kerberos problems and could use some help.
<zklaus> I am trying to auth against my university's KDC and the kinit succeeds.
<zklaus> But I fail to find the configuration that allows me to login.
<zklaus> The auth.log says attempting authentication...
<zklaus> success
<zklaus> but then: FAILED LOGIN.
<zklaus> Any ideas?
<roxy10> somebody know what broken pipe means?
<KillMeNow> usually it means you're trying to Pipe something to another application
<KillMeNow> you can do it where email comes in and "pipes" it to say RT
<KillMeNow> other wise google the error message
<KillMeNow> could mean something completely opposite than what i'm telling you
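KillMeNow's description is right in outline: "broken pipe" (EPIPE/SIGPIPE) means a process wrote to a pipe or socket whose reader had already gone away. A minimal reproduction in bash (nothing here is specific to roxy10's winbind setup):

```shell
# 'head -1' exits after reading one line; 'yes' keeps writing into the
# dead pipe and is killed by SIGPIPE -- that signal IS the "broken pipe".
yes | head -1 > /dev/null
status=${PIPESTATUS[0]}                 # bash-only: exit status of 'yes'
echo "yes exited with status $status"   # 141 = 128 + 13 (SIGPIPE)
```

In a daemon log the message usually just means the peer closed its end mid-write; the interesting question is why the other side went away.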
<slestak> cjwatson: hiya man.  good weekend?
<zklaus> roxy10: What's the full command, that led to that?
<slestak> cjwatson: got some interesting (to me at least) putty testing input
<cjwatson> slestak: ok ...
<slestak> cjwatson: i am using your karmic package with 9.04.
<slestak> cjwatson: i get visible artifacts when scrolling through text files with less, vi, and even when using my ERP system (ssh'd to aix, System Builder GUI toolkit)
<slestak> the artifacts only occur when I use compiz,
<slestak> I disabled all effects, and they have gone away
<slestak> i am using an nvidia card with the binary nvidia driver
<slestak> does not occur with gnome terminal
<slestak> vim is the strangest, because whole lines can disappear or reappear as you use j-k to go up and down in a file
<slestak> can we refresh the package soon so I can get the Font Translation Selection patch included?
<slestak> i should be able to do some comparisons to other jaunty machines.  I have another machine here that has an intel chipset, wonder if it shows the same issue.
<slestak> its a netbook, so I do not use it nearly as much
<slestak> cjwatson: i had to reset sth, sorry.  did i miss a post?
<cjwatson> slestak: the artifacts really sound like Somebody Else's Problem, perhaps compiz's; I don't use compiz so I have no idea what might be going on there
<cjwatson> slestak: I've got a refresh lying around on disk, planning to upload this week
<cjwatson> thanks for reminding me
<slestak> I've since turned it off on my workstation.  i had it on bc I love gnome-do.  should I post it somewhere in launchpad (Maybe Answers?) in case someone else runs into the issue, or if someone can corroborate this on another Nvidia machine
<twint_> Good afternoon
<twint_> can anyone help me?
<KillMeNow> say what?
<guntbert> !br | twint_
<ubottu> twint_: Please use #ubuntu-br for help in Portuguese. Thank you.
<twint_> I'd like to know how to set up apache and php and mysql
<kinnaz> !lamp
<ubottu> LAMP is an acronym for Linux-Apache-MySQL-PHP. However, the term is often used for setups using alternative but different software, such as Perl or Python instead of PHP, and Postgres instead of MySQL. For help with setting up LAMP on Ubuntu, see  https://help.ubuntu.com/community/ApacheMySQLPHP - See also the Server CD installation process (different in Edgy+)
<twint_> what would LAMP be?
<twint_> say what?
<twint_> I will look into this link a little, thanks
<roxy10> hi sorry, i lost the connection... my winbindd is showing me this error: request location of privileged pipe
<twint_> #ubuntu-br
<Sam-I-Am> mathiaz: with all the recent problems reported on 17 by richton, quanah says 18 might not be ready by the feature freeze for karmic...
<mathiaz> Sam-I-Am: right - I've emailed howard about that
<mathiaz> Sam-I-Am: the question is when 2.4.18 could be released
<mathiaz> Sam-I-Am: we could ask for a Feature Freeze Exception if 2.4.18 is released within the next two weeks
<Sam-I-Am> mathiaz: ok... just figured id let you know
<Sam-I-Am> mathiaz: got an ITS # for the nssov bug... updating ubuntu bug.
<mathiaz> Sam-I-Am: thanks for your work on this.
<Sam-I-Am> no problem
<Sam-I-Am> i hang out in #openldap and #openldap-devel :)
<mathiaz> Sam-I-Am: yeah - I should do the same - I'm on the mailing list
<StefanWray> seeking info on cloning a hard drive with ubuntu server
<andresmujica1> partimage, dd, dd_rescue, rsync
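Of the tools andresmujica1 lists, dd is the bluntest: it copies bytes from one device to another. The mechanics can be tried safely on throwaway files first (a sketch; for real disks the same invocation takes block devices, e.g. if=/dev/sda of=/dev/sdb, and both must be unmounted, with the target at least as large as the source):

```shell
# Stand-in "disks": two temp files instead of real block devices
img=$(mktemp) clone=$(mktemp)
dd if=/dev/urandom of="$img" bs=1M count=4 2>/dev/null   # fake source disk, 4 MB
# The clone step itself -- identical for real devices (run as root, carefully)
dd if="$img" of="$clone" bs=1M 2>/dev/null
cmp -s "$img" "$clone" && echo "images identical"
```

Clonezilla and partimage are smarter (they skip unused blocks), but dd is the fallback that is always installed.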
<SJr> With intel speed step, my CPU seems to only ever be running at 2.0 GHz instead of 2.83 GHz, how do I clock it up.
<PhotoJim> SJr: put it under load?  it might just idle at the lower speed to conserve electricity, and step up when load demand requires it.
<SJr> I don't think so PhotoJim it's still staying there
<SJr> oh there we go
<Doonz> hey guys was wondering if someone could help me set up my nic's to be static and not dhcp. i only have command line interface so i need some guidance
<VirtualDisaster> Doonz, if i point you to the proper doc will you use it?
<Doonz> very much so
<Doonz> rather read then be babied
<VirtualDisaster> Doonz, which version? 8.04 LTS or juanty
<Doonz> 9.04
<VirtualDisaster> https://help.ubuntu.com/9.04/serverguide/C/index.html
<Doonz> thanx
<VirtualDisaster> it has a good article on setting static interfaces
<VirtualDisaster> yw, and good luck
<StefanWray> anyone use clonezilla
<giovani> I've used it a few times, yeah
<StefanWray> giovani: i'm intending to use clonezilla live off a usb flash drive. any issues i should watch for?
<giovani> not that I know of
<giovani> that's how I've used it
<giovani> but I wouldn't necessarily remember everything
<StefanWray> giovani: can you recommend best tutorial or instructions?
<giovani> nope ...
<giovani> why would you need a tutorial?
<giovani> it's just formatting the usb stick, setting it bootable and copying files
 * FastZ is away: Away from keyboard
<StefanWray> ok, so it's pretty straightforward then
<giovani> should be
<StefanWray> thanks
<trothigar> Hi, I'm having trouble installing jaunty x64 on virtual box on jaunty desktop (x64). It keeps freezing at 50% when scanning the mirrors.
<giovani> trothigar: do you have networking properly set up?
<trothigar> giovani, a NATed Network adaptor should do the trick shouldn't it?
<giovani> trothigar: should ... if you have another NAT to the internet, it can get nasty, but often works
<trothigar> giovani, hm i'm behind a router, so i do have double NATing as it were.
<giovani> yeah, try bridging or something
<trothigar> giovani, Surely the jaunty install should time out eventually?
<giovani> could be unrelated
<giovani> it should
<giovani> I'd switch into the console
<giovani> and check what it's doing while "frozen"
<trothigar> is "Host interface" what they now call bridging?
<giovani> maybe
<giovani> try it out
<giovani> I don't use virtualbox
<Sam-I-Am> sommer: you around?
<sommer> Sam-I-Am: yeppers
<Sam-I-Am> sommer: so i hear you're the documentation guy
<sommer> sure, I do what I can :)
<Sam-I-Am> well, i tend to set up a lot of openldap-samba-kerberos-dhcp-bind systems (infrastructure stuff) and wondered if i can help with either a) the server guide and/or b) wiki pages for various things
<sommer> Sam-I-Am: sure, all help is greatly appreciated
<Sam-I-Am> so how would i suggest updates?
<sommer> Sam-I-Am: one thing that would be great is if you could review the openldap, kerberos, and ldap and kerberos sections of the server guide
<Sam-I-Am> no problem
<Sam-I-Am> one thing i noticed is no mention of sasl with openldap, yet the ldaputils stuff tries sasl by default... might be nice to have a basic setup
<sommer> Sam-I-Am: if you find any issues - grammar, syntax, etc - you can report them in LP
<Sam-I-Am> so just report things as bugs?
<Sam-I-Am> and suggested changes...
<sommer> Sam-I-Am: ya, I haven't done much with sasl, but if you have experience with that a new section would be great
<Sam-I-Am> sure
<sommer> there's a quick guide to get the docbook xml source here: https://wiki.ubuntu.com/ServerTeam/KnowledgeBase#Documentor%20resources
<Sam-I-Am> i'm writing ground-up docs here at work, so they might apply well to the public
<sommer> Sam-I-Am: cool, any help would be awesome
<Sam-I-Am> figure why keep stuff internal :)
<Sam-I-Am> especially if i'm going through the pain of writing this so just about anyone can plug-n-chug
<mathiaz> Sam-I-Am: I've been working on some script to integrate kerberos+openldap
<sommer> heh, ya sometimes writing things up can be painful... but every time I refer back for a command I can't remember it all becomes worth it :-)
<mathiaz> Sam-I-Am: https://code.launchpad.net/~mathiaz/openldap-dit/dynamic-backend
<sommer> mathiaz: sweetness :)
<mathiaz> Sam-I-Am: have you integrated dhcp/dns to use openldap as the backend?
<Sam-I-Am> i've integrated dhcp
<mathiaz> sommer: it probably won't make it in time for karmic though
<Sam-I-Am> however, theres some caveats with that
<mathiaz> Sam-I-Am: are you using isc dhcp?
<Sam-I-Am> yes
<mathiaz> Sam-I-Am: with the ldap patch?
<Sam-I-Am> yes
<Sam-I-Am> i'm also trying to package dhcp4
<Sam-I-Am> with the ldap patch
<mathiaz> Sam-I-Am: IIRC this patch is not in debian
<mathiaz> Sam-I-Am: has the ldap patch been submitted to upstream?
<Sam-I-Am> yeah, for a while now
<Sam-I-Am> they dont even put it in contrib
<mathiaz> Sam-I-Am: IIRC there was some push back from the debian maintainer to not include the ldap patch
<Sam-I-Am> i see a dhcp-server-ldap in debian
<mathiaz> Sam-I-Am: where?
<Sam-I-Am> http://packages.debian.org/lenny/dhcp3-server-ldap
<mathiaz> Sam-I-Am: oh right - I remember now
<Sam-I-Am> the dhcp ldap schema is fairly well formed, but in testing i've only found a few things that actually work in ldap... particularly host management, which is really what we want... although config would be a nice touch
<mathiaz> Sam-I-Am: right - my plan was to integrate dns+dhcp via a shared backend (openldap)
<mathiaz> Sam-I-Am: and throw kerberos in the mix too
<Sam-I-Am> yeah
<Sam-I-Am> i havent deployed dns in ldap in production yet, but i have it in testing
<mathiaz> Sam-I-Am: cool.
<Sam-I-Am> yeah
<Sam-I-Am> everything in ldap!
<mathiaz> Sam-I-Am: If you could document this in a wiki page (to start)
<Sam-I-Am> btw, i'm one of the dhcp maintainers :)
<mathiaz> Sam-I-Am: then we can start on packaging all of this to create an OOTB experience
<Sam-I-Am> i have several dozen wiki pages here at work... once i'm done documenting, i plan to post them publically somewhere once i clean up the company-specific hoopla
<Sam-I-Am> btw, speaking of OOTB, should probably put a post-install note for slapd on how to access cn=config with .17 :)
<Sam-I-Am> since it doesnt ask for a password anymore, it might confuse people who dont know about sasl external
<mathiaz> Sam-I-Am: right - that's a good idea and can be documented after FeatureFreeze
<mathiaz> Sam-I-Am: all the scripts I posted above use the new EXTERNAL method
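For anyone hitting the situation Sam-I-Am describes -- slapd 2.4.17 no longer prompting for a cn=config password -- the usual way in is SASL EXTERNAL over the local ldapi socket (a sketch; it assumes the stock Ubuntu slapd ACLs that map the local root identity to the config admin):

```shell
# Query cn=config over the unix-domain socket; authentication is by
# uid/gid of the connecting process (SASL EXTERNAL), not a password,
# which is why running it under sudo matters.
sudo ldapsearch -Q -Y EXTERNAL -H ldapi:/// -b cn=config dn
```

If the ACL mapping differs on a given install, the same command works with whatever identity the authz-regexp maps to the config admin.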
<mathiaz> Sam-I-Am: the missing part is that kerberos cannot use sasl EXTERNAL
<Sam-I-Am> because you're using it for config?
<mathiaz> Sam-I-Am: I should file a feature request with the upstream folks to support that
<mathiaz> Sam-I-Am: when you run krb5_ldap_utils to create the realm you need to give a DN and a password
<mathiaz> Sam-I-Am: is there another way to create a realm besides using krb5_ldap_utils?
<Sam-I-Am> well, to be honest i'm using heimdal here... i just call kadmin to generate a realm after editing krb5.conf
<mathiaz> Sam-I-Am: another point I'd like to investigate is to see if the two users that the krb5kdc use could use SASL external to connect to the slapd daemon over ldapi
<Sam-I-Am> yeah, ldapi is a bit limited right now
<mathiaz> Sam-I-Am: right - does heimdal kadmin support SASL external?
<Sam-I-Am> yes
<Sam-I-Am> i'm almost sure heimdal only supports ldapi
<Sam-I-Am> it of course comes in as root...
<Sam-I-Am> you can have other ldapi users and the proper authzregexp for them... but all the kerberos stuff runs as root
<mathiaz> Sam-I-Am: right - supporting SASL EXTERNAL would mean running the different kdc under different accounts
<mathiaz> Sam-I-Am: and then map this via AuthzMap to the correct dn
<Sam-I-Am> yeh
<Sam-I-Am> i'm still torn between mit and heimdal...
<mathiaz> Sam-I-Am: IIUC this is not supported by heimdal?
<Sam-I-Am> heimdal right now has the added benefit of updating samba hashes in ldap as it finds them
<Sam-I-Am> heimdal supports ldapi and sasl external
<mathiaz> Sam-I-Am: I'll post a feature request to the MIT folks as I'm in contact with the dev team
<Sam-I-Am> does mit not support sasl external?
<Sam-I-Am> its on my list of things to configure in the testbed...
<Sam-I-Am> i design and plunk infrastructure systems down all the time and think in some cases heimdal or mit would better suit a particular customer
<julius> hi
<Sam-I-Am> howdy
<julius> how can i close a bug reported by me on launchpad?
<Sam-I-Am> just say its been fixed and the maintainer will close it as needed...
<Sam-I-Am> post some detail about how it was fixed..
<Sam-I-Am> i wonder if mit supports tls or just ssl...
<julius> so i as the owner cant close it?
<Sam-I-Am> i... dont think so.
<julius> thx
<Sam-I-Am> mathiaz: whats your priority on a dhcp4 package w/ ldap patch?
<Sam-I-Am> i have dhcp4 built against sid/karmic with the patch... just not packaged yet since they more or less replaced the entire build internals
<Sam-I-Am> probably something for 10.04
<mathiaz> Sam-I-Am: not a target for 9.10
<mathiaz> Sam-I-Am: having ldap+krb5 integrated was a low priority target for 9.10
<mathiaz> Sam-I-Am: but things haven't moved as quickly as expected
<Sam-I-Am> that seems to be a common case
<mathiaz> Sam-I-Am: (mainly blocked on administration tools)
<Sam-I-Am> if i could spend every day working on open source stuff i would... but it doesnt work that way
<mathiaz> Sam-I-Am: well - you're *already* spending more time on open source stuff than most of the people :)
<Sam-I-Am> the best i can do is make my stuff as generic as possible, push docs back out, and get things patched
<mathiaz> Sam-I-Am: that's one of the best approaches
<Sam-I-Am> and working closely with the upstream developers for what i use
<Sam-I-Am> it literally took 5 minutes to get the nssov patch from howard
<Sam-I-Am> now if only gnutls worked that quickly... :/
<Sam-I-Am> i found a bug in that which essentially breaks typical certs with subjectaltname...
<Doonz> heya all
<Sam-I-Am> howdy
<Doonz> need some help with dns on my server
<Doonz> i set the 2 nics to static ip's
<Doonz> but now i cant resolve host names ie: cant ping google.com
<Sam-I-Am> is there a default route?
<Doonz> i changed my /etc/resolv.conf to show my isp dns servers
<Doonz> Sam-I-Am: sorry im new with this
<Sam-I-Am> can you ping those dns server IPs?
<Doonz> hmm never thought of that
<Doonz> one sec
<luxos> hi
<luxos> how's it going
<luxos> question
<luxos> i want to install a printer on the linux server and share it on the network
<mathiaz> Sam-I-Am: well gnutls may be a bit buggy but it's the only option we have from a licensing point of view
<luxos> how do i do that
<Doonz> Sam-I-Am: i cant ping the nameservers either
<mathiaz> !es | luxos
<ubottu> luxos: Most Ubuntu channels communicate in English. For help in Spanish, please join #ubuntu-es or #kubuntu-es.
<Doonz> i can only ping my router and other pcs on this network
<Sam-I-Am> can you get an ip of google.com from another machine and try to ping it from the machine in question?
<KillMeNow> Doonz, did you check to make sure you have a good gateway set?
<Doonz> the gateway is set to 192.168.1.1 wich it is
<Doonz> but i cant ping outside of my network
<KillMeNow> can you ping outside from your gateway?
<Sam-I-Am> are other machines using that gateway working?
<Doonz> i can ping everything from this laptop
<KillMeNow> what is your gateway?  is it a linux box or a little router?
<Doonz> its a router
<Doonz> but i have my resolv.conf file set up with the ips of my isp dns servers
<Sam-I-Am> same as the laptop which works?
<KillMeNow> run a route command
<KillMeNow> from the box that is having the issue
<KillMeNow> can you ping those?
<Sam-I-Am> you also mentioned something about having 2 nics or at least two IPs on this machine in question
<Sam-I-Am> depending on how those are configured you could have a problem
<Doonz> http://pastebin.com/m59313df1
<Doonz> thats the output from route
<Doonz> both nics have different ip's
<KillMeNow> anyone else seeing 2 gateways?
<KillMeNow> do a ifconfig please
<Doonz> ok
<Sam-I-Am> yeah i'm seeing two
<Sam-I-Am> you shouldnt have two default gateways...
<Doonz> http://pastebin.com/mebc428a
 * Doonz is a noob
<KillMeNow> remove the gateway statement from one of your interfaces
<KillMeNow> or set one statically
<Doonz> ...
<KillMeNow> but i guess you see the problem now
<Doonz> NO I DONT
<Doonz> bah sorry
<KillMeNow> ok give me the config for each of your eth interfaces...  are they DHCP or static?
<Doonz> static
<Doonz> http://pastebin.com/me4fae7
<KillMeNow> pick one of your interfaces to be the default and then edit your config and remove one of the gateway addresses
<KillMeNow> the whole gateway statement
<Doonz> so on eth1 just remove the gateway part?
<KillMeNow> yea i updated your pastebin
<Doonz> ok
<KillMeNow> using eth0 as the main interface
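The shape KillMeNow is describing in /etc/network/interfaces looks roughly like this (addresses are made up for illustration, since the pastebin is gone; the point is that only eth0 keeps a gateway line):

```
# /etc/network/interfaces -- two static NICs, ONE default gateway
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1    # the only gateway line

auto eth1
iface eth1 inet static
    address 192.168.1.11
    netmask 255.255.255.0
    # no gateway here -- a second default route is what broke things
```

Two `gateway` lines produce two default routes, and traffic can go out the wrong interface, which is exactly the symptom in the `route` output above.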
<KillMeNow> what are you trying to do?  you trying to bind them together for better bandwidth?  like get 2 gbps by joining 2 nics?
<Doonz> no
<KillMeNow> ok
<Doonz> just 2 seperate ips
<KillMeNow> once you have it edited, restart your network
<Doonz> want to keep media traffic on one nic and the rest on the other nic
<KillMeNow> by issuing:  /etc/init.d/network restart
<Doonz> hmm
<Doonz> it doesnt like that command
<hggdh> or 'service network restart'
<Doonz> @server:/etc/init.d# service network restart
<Doonz> $network: unrecognized service
<KillMeNow> sorry
<KillMeNow> use networking
<KillMeNow> s/network/networking/
<Doonz> oh ok
<hggdh> yeah. Same mistake I made ;-)
<Doonz> hehe
<KillMeNow> nah, i work in RHEL / Ubuntu
<KillMeNow> get the two mixed up sometimes
<Doonz> YAY
<Doonz> thanx all good now
<KillMeNow> congrats
<Doonz> brb
<Doonz> yaya
<Doonz> woohoo
<Doonz> all good
<KillMeNow> i'm glad
<Sam-I-Am> mathiaz: so whats the idea for security regarding all this stuff talking to ldap... like dhcp, dns, etc... most of them don't support ldapi, and even if they did, they'd come in as the root user.  i dont think all of these services would need write access to the whole ldap tree.
<VirtualDisaster> and you dont want to make dhcp/dns dependant on ldap
<VirtualDisaster> use ldap for auth and use radius for items like dhcp clients
<mathiaz> Sam-I-Am: right. One of the ideas in Ubuntu is to try to run most daemons as non-root
<mathiaz> Sam-I-Am: so given that goal, having the services running under their own account would solve the problem
<Sam-I-Am> yes
<mathiaz> Sam-I-Am: IIRC the dhcp server doesn't run as root
<Sam-I-Am> nope
<Sam-I-Am> nor does bind to some extent...
<Sam-I-Am> they do at one point or another
<Sam-I-Am> reading through your todo...
<mathiaz> Sam-I-Am: another idea wrt to security is to try to use SASL External + kerberos to authenticate and encrypt communication
<mathiaz> Sam-I-Am: from my openldap-dit branch?
<Sam-I-Am> yeah
<mathiaz> Sam-I-Am: the first lines may not be relevant anymore - they're related to another DIT
<mathiaz> Sam-I-Am: my branch is actually based on the trunk from the openldap-dit LP project
<mathiaz> Sam-I-Am: which was started by andreas
<mathiaz> Sam-I-Am: it's based on the Mandriva Directory Service
<Sam-I-Am> ah
<Sam-I-Am> noticed the heimdal branch question
<mathiaz> Sam-I-Am: andreas used to work for Mandriva
<mathiaz> Sam-I-Am: yes - heimdal was the choice then
<Sam-I-Am> i store heimdal machine/service credentials in ou=computers with the samba machines
<Sam-I-Am> and person creds in ou=people
<mathiaz> Sam-I-Am: I've removed a lot of the original work to reduce the scope
<mathiaz> Sam-I-Am: right - I moved everything under accounts
<Sam-I-Am> yea, guess thats irrelevant now if we're using mit heh
<mathiaz> Sam-I-Am: as from the kerberos perspective they're all principals
<Sam-I-Am> yeah
<Sam-I-Am> mit clients talk to heimdal kdcs just fine
<Sam-I-Am> its just kadmin that doesnt work
<mathiaz> Sam-I-Am: does heimdal make a difference between host principals, service principals and user principals?
<mathiaz> Sam-I-Am: ie can it store them in different sub-trees?
<Sam-I-Am> it only allows you to configure one place to store principals in the config file, but i usually use a script to add mine so they go into the appropriate locations... and the heimdal service account can read the entire ldap tree
<mathiaz> Sam-I-Am: I haven't played enough with the DIT where all principals are stored in the same OU
<Sam-I-Am> it'll work all in the same ou, it just looks messy
<mathiaz> Sam-I-Am: right - that comes back to the issue of administration tools
<mathiaz> Sam-I-Am: right - the DIT isn't supposed to be read by end users
<mathiaz> Sam-I-Am: one of the design principles I follow is to have a shallow tree with as little hierarchy as possible
<Sam-I-Am> with heimdal, if your kerberos attributes are not in the same DN as your other user stuff, it wont also update your samba hash upon kerberos key change
<Sam-I-Am> mine are pretty limited too... mostly nis-like with some extras for heimdal, dhcp, dns, and samba
<mathiaz> Sam-I-Am: right - that would be covered by the krb5smb openldap overlay
<Sam-I-Am> yes, which should probably get built along with nssov
<mathiaz> Sam-I-Am: which would be responsible for maintaining all passwords in sync
<mathiaz> Sam-I-Am: it doesn't support MIT kerberos though
<Sam-I-Am> ahh yes, another problem...
<Sam-I-Am> as of a few months ago it didnt like compiling against gnutls either
<mathiaz> Sam-I-Am: and I know that howard is working on updating a couple of IETF drafts wrt to password policies and kerberos schema
<Sam-I-Am> yeah, thats cool
<mathiaz> Sam-I-Am: so it may worth waiting a bit to see what comes out of this
<Sam-I-Am> someone in openldap-devel sent me his diffs for building smbk5pwd within the ubuntu package, so that might be useful either way...
<mathiaz> Sam-I-Am: oh cool.
<Sam-I-Am> i think that was part of the reason i chose heimdal
<Sam-I-Am> that and its potential integration with samba4... although samba4 appears to run its own kdc
<mathiaz> Sam-I-Am: yes - it makes sense.
<mathiaz> Sam-I-Am: samba4 is being ported to MIT kerberos
<Sam-I-Am> ha
<mathiaz> Sam-I-Am: they've started with heimdal but work is done to support MIT as well.
<Sam-I-Am> would be nice if it could just compile against one or the other... just like openldap and gnutls vs. openssl vs. moznss
<Sam-I-Am> although that sort of thing makes the code a bit more complicated
<Sam-I-Am> especially if you have to work around implementation specific bugs... which are fairly common with openldap-gnutls
<Sam-I-Am> also, while i'm thinking about it, how does one get approved for the server team?
<mathiaz> Sam-I-Am: hm - when I process the pending requests :D
<Sam-I-Am> ah, ok
<mathiaz> Sam-I-Am: I haven't done that for a while
<Sam-I-Am> i should be one of them
<Sam-I-Am> i sure wish there were more hours in the day...
<Sam-I-Am> so many interesting things to do, so little time
<Sam-I-Am> also saw your note about autofs... several schema for that :/
<Sam-I-Am> depending on what kind of compatibility you want
<Sam-I-Am> time to head home... bbl
#ubuntu-server 2009-08-25
<cheeseboy> i cant get my ethernet working :(
<cheeseboy> says " eth0: link is not ready"
<giovani> then either the cable isn't plugged in on one side, the cable is bad, the switch/hub is bad, or the network card is bad
<cheeseboy> none of the above?
<giovani> how do you know?
<cheeseboy> cause ive checked em all
<giovani> how did you check if the network card is bad?
<KillMeNow> i'm thinking it's a cross over cable and switch tried to auto negotiate it and roll it but was made of fail
<giovani> KillMeNow: out of the dozens of possible causes, why that one?
<cheeseboy> giovani, because its fine
<giovani> cheeseboy: that doesn't sound like a
<giovani> *test*
<cheeseboy> its fine
<giovani> ok, so you're lying -- you haven't tested it
<giovani> feel free to get help from someone else
<cheeseboy> "it obviously doesnt work cause I say so"
<ball> Ugh.  Wish users wouldn't put '*' in filenames
<ball> it makes my head hurt
<Sam-I-Am> or name them -rf
<Sam-I-Am> or.. put carriage returns in them
<ball> I get a lot of &s and 's...a few *s
<ball> I'm going to have to install something like mc
 * ball sighs
<Sam-I-Am> what are you trying to do?
<ball> Ah, it's alright.  The *s weren't part of the filenames, they were ls -F showing me that some files were marked executable
<ball> chmod ftw
<Sam-I-Am> ah
<ball> Still have files with ' and & in the names though
<uvirtbot> New bug: #418396 in mysql-dfsg-5.1 (main) "need to port 38_scripts__mysqld_safe.sh__signals.dpatch from mysql server 5.0" [Undecided,New] https://launchpad.net/bugs/418396
<mase_work> hi guys, i am setting up rssh to allow chrooted sftp access to users and was wondering why /bin/bash needs to be accessable in the chroot. I was under the impression the purpose of rssh was to replace the shell
<giovani> mase_work: who said bash needed to be in the chroot?
<mase_work> giovani: well it fails if it is not in the chroot.
<giovani> did you set the user's shell to rssh?
<mase_work> yes
<giovani> where is the failure error message?
<giovani> i.e. how did you come to know it failed?
<mase_work> it is in /var/log/user and it immediately disconnects the sftp sessions
<giovani> let's see the error then
<mase_work> ok 10 secs
<mase_work> giovani: ignore me. it seems i have one set up here without /bin/bash which is working. there is obviously something else going on as well.
<mushroomtwo> anyone know a way to limit ssh login attempts to 3 per hour?
<giovani> mushroomtwo: do you mean per source ip?
<mushroomblue> that would work too.
<giovani> there's no built-in mechanism for that -- you'd have to use a script that watches for logins (failed, presumably) and then actively bans that IP -- which is essentially an IPS (Intrusion Prevention System)
<giovani> well why not describe what your goal is ...
<mushroomblue> someone has compromised me over wifi
<giovani> your initial description was an output ... not really a motivation
<mushroomblue> and I'm pretty sure there's been a man-in-the-middle attack going on via the router.
<Wicked> hello all. im wondering how i can have a user start up a program at boot....id like for it to start a screen session. i tried adding this to the crontab "@reboot /usr/bin/screen &" ...but it does not work.
<mushroomblue> so all my machines are more than likely compromised.
<giovani> Wicked: you should add it to the proper runlevels
<giovani> mushroomblue: how is this related to wifi?
<Wicked> giovani, well i just want to start screen for a certain user at boot...i really dont want to add a init script or something like that.
<mushroomblue> I've narrowed down the ways the attacker has accessed the system, and it appears it happened over wifi
<Wicked> there has to be a way to have a user be able to start something at or after the machine boots
<mushroomblue> which means, I have a douche of a neighbor
<giovani> Wicked: ok ... "at boot" and "for a user" are entirely different, and not compatible methods
<Wicked> how about...id like a program to start under a certain user after the machine is fully booted.
<giovani> the proper way to do anything like that is with an init script
<giovani> it's just that simple
<Wicked> hmm ok
<giovani> but I really don't think that you should be doing it -- I think you should be looking for a better method to achieve the output you want
<giovani> mushroomblue: fail2ban, denyhosts, etc -- a bunch of scripts to monitor failed logins
<giovani> I'm not sure how limiting logins to 3 per hour has anything to do with preventing an attacker who already has a password
<mushroomblue> it wouldn't. but it would discourage bruteforce
<giovani> but if they already have hacked your system
<giovani> there won't be any bruteforcing
<giovani> so you need to define your parameters here
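The per-source throttling mushroomtwo asked about can also be done in the kernel with iptables' `recent` match, without a log-watching daemon; a sketch, assuming the match is available (the list name "SSH" is arbitrary):

```shell
# Track new SSH connections per source IP, and drop a source that opens
# a 4th connection within an hour of its first.
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
         -m recent --set --name SSH
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
         -m recent --update --seconds 3600 --hitcount 4 --name SSH -j DROP
```

As giovani notes, this only slows brute force; it does nothing against an attacker who already holds a valid password.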
<Wicked> i run irssi and a few other things inside a screen session. i have screen setup to start the programs when screen is started. after the machine is rebooted i would like screen to be started so then my programs will always be running...then i can connect via ssh and reconnect to the screen session
<giovani> Wicked: why would the system be shut down?
<Wicked> i thought setting a crontab to do that at boot would work...but it fails to start screen
<Wicked> giovani, lots of reasons. mostly...power outages
<Wicked> or new kernels
<mushroomblue> giovani: make no mistake, I'm in the process of reinstalling all my servers atm, and flashing the router. I'm just trying to come up with ideas so that there isn't a next time.
<Wicked> or someone trips over the power cord...
<mushroomblue> I can't afford another full system rebuild.
<mushroomblue> s/system/network/
<mushroomblue> but thanks for the tips.
<giovani> mushroomblue: ok -- but without understanding the attack you're trying to prevent, you're not going to get anywhere
<giovani> Wicked: alright, well if you want to use the crontab -- there's no need to add the '&' at the end -- the process isn't going to ever be in the foreground anyway
<grim76> Wicked: Why not setup .bash_profile to launch screen for the user upon login?
<giovani> I presume you added that to your user's crontab?
<giovani> grim76: because he doesn't want a new screen session on every login, obviously
<Wicked> yea
<giovani> Wicked: and you tested this with a -reboot- and not a clean startup? I'm not sure if it's specific to the former
<Wicked> and i just want screen to start at boot so all my programs in screen will be running as soon as the computer boots.
<Wicked> http://rpatterson.net/blog/screen-sessions-at-boot
<Wicked> im trying that now
<Wicked> giovani, ive tried both
<giovani> both what?
<giovani> ok well that url is using an rc script
<giovani> like I recommended
<giovani> did you remove the '&' from the end and try a reboot?
<Wicked> yes
<Wicked> i added the & because without it it did nothing.
<giovani> that seems unlikely, but alright
<giovani> it's possible that @reboot isn't supported in your version of cron
<giovani> I don't see it in my manpages
<Wicked> yea.
<Wicked> thats what i was thinking...that its a newer feature thats not in 8.04
<giovani> or a custom-compiled one in a major distro, who knows
<giovani> but I don't see it documented outside of casual mentions in google
<giovani> i don't see manpages containing it
<Wicked> yea me neither. i actually 1st saw the @reboot mentioned on a debian page
<Wicked> cant remember which one.
<giovani> yeah, looks like ubuntu contains cron 3.0
<giovani> and the only manpage I can find mentioning @reboot is for 4.1
<ball> Is that Gnu Cron?
<giovani> no ... isc/vixie
<giovani> the standard one :)
<Wicked> ah
<ball> Thanks
<giovani> gnu cron doesn't even seem to exist
<giovani> except in intention
<giovani> http://www.gnu.org/software/gcron/main.html
<Wicked> nice that did just what i wanted
<ball> "This project has been decommissioned and is no longer developed."
<ball> http://directory.fsf.org/project/gcron/
<jmarsden> Ummm   man 5 crontab |less +/@reboot      shows @reboot in the Ubuntu 8.04 crontab(5) manpage, for me at least...
<giovani> jmarsden: I stand corrected -- you're right, somehow I missed that
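Since @reboot does exist in 8.04's crontab(5), the likelier culprit is that screen exits when started without a terminal; a crontab sketch (the session name "main" is illustrative):

```shell
# user crontab entry (edit with: crontab -e)
# -d -m starts screen detached, which it needs under cron because
# cron jobs have no controlling terminal; -S names the session
@reboot /usr/bin/screen -dmS main
```

Reattach later with `screen -r main` over ssh.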
<twb> Debian, at least, defaults to Vixie cron
<giovani> yeah, as does ubuntu
<jtimberman> i think theres a vixie cron on aix even
<Alex_21> I am having trouble with Webdav. I get the Error:  You cannot connect to this server because it cannot be found on the network. Try again later or try a different URL.
<Alex_21> Here is my Virtual Host configuration: http://paste.ubuntu.com/259068/
<Alex_21> Any help is appreciated.
<Alex_21> Please
<slap> I have a private lan with a domain name gandalf.lan. I'm trying to configure /etc/exports with the following line /sharing *.tolkien.lan(blablabla). But it doesn't work Can someone telling me what I'm doing wrong ?
<slap> If I write /sharing *(blabalbla) everything is ok
<Sam-I-Am> probably a name resolution problem
<slap> How can I check?
<Sam-I-Am> can either host ping the other on tolkien.lan and gandalf.lan?
<slap> I'm able to ping gandalf.tolkien.lan from the client
<Sam-I-Am> whats the clients name?
<slap> hal9000
<Sam-I-Am> can the server resolve that to hal9000.tolkien.lan ?
<Alex_21> I am using: http://www.howtoforge.com/how-to-set-up-webdav-with-apache2-on-ubuntu-8.10
<slap> No !
<Alex_21> To set up Apache for Webdav
<Sam-I-Am> yeah, so name resolution needs to work both ways
<Sam-I-Am> otherwise use IPs
<slap> How can I do that both way ?
<Sam-I-Am> if you dont have a dns server, make sure they're all defined in /etc/hosts
<slap> I've already installed DNS.
<slap> Seems to work ok
<Alex_21> Any ideas about my Webdav?
<Sam-I-Am> apparently its not if things arent resolving...
<slap> 'Cause I can ping gandalf.tolkien.lan www.tolkien.lan and tolkien.lan from the client
<Sam-I-Am> of course, all your hosts need to point to the same dns server
<Alex_21> I see no difference but the paths in my file from the tutorial's pages
<Sam-I-Am> but the server..
<Sam-I-Am> Alex_21: i havent messed with webdav in a long time
<Alex_21> Can someone check to see if my config file is correct?
<slap> Where do I start to configure the server to recognize clients
<Sam-I-Am> slap: make sure it can resolve your hostnames
<Alex_21> It is a mission critical webdav share
<Sam-I-Am> like... either configure it to point to your dns server in /etc/resolv.conf or edit /etc/hosts ... probably the same way you configured the client
<slap> Totally forgot to edit resolv! ( I remember reading something about that) Thanks a lot
<Sam-I-Am> are you trying to get to the webdav server using the URL as configured?
<Sam-I-Am> its going to be some.domain/base
<slap> Ok, now I can ping it both ways. But I still have an access denied when I try to mount the share ? Any clues ?
<Sam-I-Am> look in the logs
<slap> It says that there is a mount request for an unknown host ??
<Alex_21> I figured it out.
<slap> found it
<Alex_21> I shouldn't have been so chicken about looking in Apache's logs
<slap> add the client's name in /etc/hosts
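The pieces slap ended up touching can be sketched as follows; hostnames are from the discussion, the addresses are illustrative:

```shell
# /etc/hosts on the NFS server (addresses are made up):
#   192.168.1.10  gandalf.tolkien.lan  gandalf
#   192.168.1.20  hal9000.tolkien.lan  hal9000
#
# /etc/exports, allowing any client that resolves into the domain:
#   /sharing  *.tolkien.lan(rw,sync,no_subtree_check)
#
# then reload the export table:
exportfs -ra
```

The wildcard form only works once the server can reverse-resolve each client's IP back to a name in tolkien.lan, which is why the earlier `*(...)` export worked while `*.tolkien.lan(...)` did not.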
<slap> Thanks Sam-I-Am
<Alex_21> Now, I can enjoy this webda share
<Sam-I-Am> logs are good
<Alex_21> Yes.
<Alex_21> Not pleasant when you use a screen reader, but good all the same
<slap2> Sam-I-Am I'm learning networking on linux. I have one thing I would like to know. If I want a share on a medium-sized network with a DHCP service, how can I configure bind and nfs if I have dynamic addresses. Do I have to use something else ?
<slap2> Is it possible ?
<Sam-I-Am> you can either make sure hosts keep the same name independent of IP... using dynamic DNS... or you statically assign IPs to hosts
<slap2> no, I would like to try with dynamic IPs for hosts. So I have to take a look at dynamic DNS, right ?
<Sam-I-Am> yup
<Sam-I-Am> or just allow entire ranges of IPs for nfs
<Sam-I-Am> or domains
<slap2> Do I have to use bind9 with a different configuration, or is there another daemon
<Sam-I-Am> isc dhcp talks to bind9
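A rough sketch of the wiring Sam-I-Am means, for the era's dhcp3 + BIND 9 packages; the key name, secret, zone, and file paths are all illustrative, the details live in dhcpd.conf(5) and the BIND 9 ARM:

```shell
# /etc/bind/named.conf.local -- accept updates signed with a shared key:
#   key ddns-key { algorithm hmac-md5; secret "..."; };
#   zone "tolkien.lan" {
#       type master; file "db.tolkien.lan";
#       allow-update { key ddns-key; };
#   };
#
# /etc/dhcp3/dhcpd.conf -- have the DHCP server send those updates:
#   ddns-update-style interim;
#   key ddns-key { algorithm hmac-md5; secret "..."; };
#   zone tolkien.lan. { primary 127.0.0.1; key ddns-key; }
```

With that in place, leases handed out by DHCP get A/PTR records added automatically, so NFS exports and /etc/hosts entries can keep using names.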
<slap2> No. How can I do that ?
<Sam-I-Am> theres documentation out there
<Sam-I-Am> time for bed...
<Alex_21> Thanks for all your help
<Alex_21> Good night.
<uvirtbot> New bug: #416093 in samba (main) "karmic hangs during booting" [Undecided,New] https://launchpad.net/bugs/416093
<roxy08> Hi there, I just recovered my samba server but I cannot access all of my shared folders even though I have permissions, and still I get errors such as cli_lsa_lookup_sids_noalloc(): out of memory
<uvirtbot> New bug: #418276 in samba (main) "package samba-common 2:3.4.0-3ubuntu1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,Invalid] https://launchpad.net/bugs/418276
<johe|work> short question, what time is it (in UTC)?
<andol> johe|work: date -u
<johe|work> too easy :-)
<johe|work> i have a problem here with the ldap client auth, user login works now, but he does not get the groups
<soloslinger> is there a decent guide that someone can point me to regarding 9.04 and hardware raid?
<garymc> Hi, anyone know anything about RAID. See im not sure what config im best using
<garymc> I see RAID 1 is all drives are the same and if one goes down you just replace it. But that means that if I have 6 72gb drives in, im only gonna get about 68gb of space?
<garymc> which is ok for my LTSP server, but
<garymc> i need to setup an asterisk server and store lots of calls. Am i best using RAID 5 for that?
<\sh> garymc: if you want  raid1 over 6 drives, you use RAID10
<\sh> garymc: which is a mix between raid0 and raid1
<garymc> thanks, but thats my issue i dont really understand raid
<garymc> so dont know which one is best to use
<\sh> garymc: http://www.cuddletech.com/veritas/raidtheory/x31.html
<garymc> im just looking to update the controller array 5i firmware
<\sh> garymc: it depends on your work setup ... raid5 + databases could be a problem in some cases...then you have raid6 which is similar to raid5 but tolerates two drives failing at the same time
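For garymc's earlier numbers (six 72 GB drives), the usable capacity under the levels being discussed works out as:

```shell
# usable capacity for six 72 GB drives under common RAID levels
drives=6 size=72
echo "RAID 0 : $((drives * size)) GB"          # striping only, no redundancy
echo "RAID 1 : $size GB"                       # all six hold one drive's worth
echo "RAID 10: $((drives / 2 * size)) GB"      # stripe across 3 mirrored pairs
echo "RAID 5 : $(( (drives - 1) * size )) GB"  # one drive's worth of parity
echo "RAID 6 : $(( (drives - 2) * size )) GB"  # two drives' worth of parity
```

That is the trade-off \sh is describing: RAID 10 gives 216 GB with mirroring, RAID 5 gives 360 GB but survives only one failure, RAID 6 gives 288 GB and survives two.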
<\sh> garymc: HP?
<garymc> yes
<garymc> HP
 * \sh is mostly using raid10 for more diskspace with mirroring...and having at least one more redundancy via drbd or directly using HP MSA 60/70 or iscsi solutions for lots of diskspace...
<garymc> hmm lol, i better read up more on it all
<garymc> im looking to download the firmware for my server, but theres that many different packages on the HP site not sure which one to get.
<garymc> What do you think of this one?
<garymc> Systems ROMPaq Firmware Upgrade Diskette for HP ProLiant DL380 G3 (P29) Servers
<garymc> would that be the correct stuff to download and boot my server with?
<\sh> garymc: there is normally bios firmware updates and smart array updates packages...and there is at least one CD ISO where the latest stuff is on..depending on your configuration...
<\sh> smart array 5i is old afaik
<garymc> so do i have to download the 5i firmwarE?
<garymc> or can i upgrade my firmware to someting else
<\sh> garymc: for your smart array 5i http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=266599&taskId=135&prodTypeId=329290&prodSeriesId=374803&lang=en&cc=us this is the right page
<\sh> for your dl380 there is some other page....search hp...
<garymc> thanks
<garymc> :)
<garymc> trying to load that link now
<\sh> garymc: for your ROM update...you need http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareIndex.jsp?lang=en&cc=us&prodNameId=266599&prodTypeId=329290&prodSeriesId=374803&swLang=8&taskId=135&swEnvOID=1085 this page
<garymc> whats a rom update?
<\sh> garymc: the smart array bios firmware stuff...it's the bios of your smart array controller
<garymc> that first link you gave me lists operating systems
<garymc> im using ubuntu 9.04?
<garymc> or Centos
<\sh> garymc: yes and there is a smartstart maintenance - ROM update...that's what you need
<garymc> do i just burn them to CD rom?
<garymc> and boot the serveR? with it in?
<\sh> garymc: dunno...I have a debian OOB installation for this kind of update..cause I'm not really convinced that HP is supporting my ubuntu 64bit setup ;)
<garymc> ahh ko
<garymc> ok
<garymc> well im running 32bit as 64 bit wouldnt work?
<garymc> or wouldnt go on
<\sh> garymc: ah no...hp is supporting debian 32 + 64bit ... but as there are some kernel modules which I didn't test against ubuntu 32 or 64bit I can't say if it works...problem is that my HP supporter always tells me "You know more about this crap than I do" and I don't have the time to test it
<garymc> ok lol
<\sh> garymc: next target of my work is to implement HP SIM for our server infrastructure and then I have the time to test HP drivers on Ubuntu
<garymc> cool, busy busy :)
<Daviey> h
<jpds> Daviey: ...
<sergevn> hi
<sergevn> i have a question, how can I install and configure truetype fonts for php?
<Daviey> sergevn: I'm kinda confused what you want to achieve.. php will produce valid output, but the fonts really need to be client side..
<Daviey> Unless you are trying to create images or similar sergevn ?
<sergevn> Daviey: yeah images :)
<Daviey> ah
<sergevn> i am trying to use imagettfbbox() but getting an error that it can't find the font.
<Daviey> sergevn: unless i'm mistaken, it uses a full path
<Daviey> you can't just say $font = "somefont.ttf";
<Daviey> so you could just drop the somefont.ttf to the same location as the .php script
<sergevn> yeah im doing that.
<sergevn> still error
<sergevn> that's the weird part
<sergevn> and on "another" webhoster it does work
<Daviey> sergevn: http://uk.php.net/manual/en/function.imagettftext.php#90580
<Daviey> sergevn: you might get more specifc help in a #php channel.
<sergevn> Daviey: you saved my day
<sergevn> it works :)
<Daviey> heh
<soloslinger> is there a decent guide to making 9.04 work with hardware raid?
<soloslinger> everything seems to want to send a person to software raid
<tarekeldeeb> Hello all, I want to build an ubuntu based lab, with fat-client config. Login+home mount should be authenticated using LDAP. Can any1 guide me to the needed howto documents?
<Daviey> soloslinger: to get hardware raid *working* is often irrelevant to the OS.
<pmatulis> soloslinger: that's b/c with hardware raid the OS does not know anything about the raid configuration.  it will see one drive when there are actually two (RAID1)
<pmatulis> (for instance)
<Daviey> soloslinger: hardware raid just presents the raid disk as a standard block device.
<soloslinger> pmatulis: I have configured hardware raids for use with other operating systems, but I can't seem to get ubuntu's server installer to want to recognize it.
<tarekeldeeb> Can any1 support me, please ?
<pmatulis> soloslinger: k, that's different
<Daviey> tarekeldeeb: however, monitoring the raid device is normally the responsibility of the OS.
<Daviey> soloslinger: What make/model raid controller are we talking about here?
<pmatulis> soloslinger: maybe ubuntu just doesn't support that particular controller
<soloslinger> pmatulis: that is my sneaking suspicion(sp?) unfortunately...  Daviey: I'd hafta reboot the box and look, it's in a fairly new 1 u server, an hp dl160 g5
<Daviey> soloslinger: ahh, is it hp raid?
<pmatulis> The spelling of suspicion is correct
<soloslinger> Daviey: yeah.
<Daviey> soloslinger: Then it is probably cciss, and i dislike it :)
<soloslinger> pmatulis: ty  =)
<Daviey> soloslinger: What version of Ubuntu are you trying to use?
<soloslinger> Daviey: most likely it is.  Am I SoL tryin to get the ubuntu installer to see it?   9.04
<Daviey> no.. it should work.
<soloslinger> kk
<soloslinger> I'll hafta play with it when I have the opportunity to bring the server down for longer.  Is there some docs somewhere that you (or someone else) could point me to?
<soloslinger> Did some googling for that, but everything comes back with software raid tips
<Daviey> soloslinger: The "disk" should show under /dev/cciss
<soloslinger> yeah
<soloslinger> But as the installer moves along, even if I want to do partitioning manually, I don't see it prompt me to specify a device to use as a disk
<Daviey> ie, $ ls /dev/cciss/
<Daviey> c0d0  c0d0p1  c0d0p2
<Daviey> soloslinger: Can you confirm you are using the Ubuntu Server disk, or the Ubuntu Desktop?
<soloslinger> Daviey: I am fairly certain I am, nobody here runs Ubuntu for a desktop, so there shouldn't be a cd of it laying around.  If I install to one drive it doesn't try to get me to put a WM on like the desktop installs I did once upon a time.
<Daviey> ok, great.
<soloslinger> Daviey: I would say I am fairly certain I am using a Server disk.
<pmatulis> soloslinger: is it graphical or ncurses (blue)?
<soloslinger> pmatulis: blue
<Daviey> soloslinger: When you get to the disk management area, what do you see?
<soloslinger> Daviey: It first prompts me for something to the effect of "Do you want to enable Serial ATA RAID" with a yes, no and cancel.  Regardless of if I select yes or no, the next screen where normally the partitions are built is blank.  In other words, there isn't a "disk" there to divide up the partitions.
<maswan> One trick could be to switch to a shell prompt (alt+f2) and type lspci to find out what storage controller it actually is
<maswan> I'm not sure that the dl160 has a real cciss
<Daviey> i'll bet it's the P400.
<soloslinger> Daviey: You'll hafta forgive me too, i needed something up by the time I left today so I am going from memory an hour or so ago.
<Daviey> soloslinger: no worries, can you try what maswan said?
<tarekeldeeb> sorry for duplicating, but i need support
<tarekeldeeb> Hello all, I want to build an ubuntu based lab, with fat-client config. Login+home mount should be authenticated using LDAP. Can any1 guide me to the needed howto documents?
<soloslinger> If I switch to a shell prompt and build the partitions via Fdisk, could I switch back into the installer and carry on like normal?  I am comfortable using Fdisk.
<tarekeldeeb> :(
<maswan> soloslinger: I'm not sure offhand if fdisk is included in the environment, parted and probably sfdisk is
<maswan> tarekeldeeb: Hm. Don't really know. We use kerberos auth.
<soloslinger> Daviey: I will try the next time I can bring the server down for any time.  I am fortunate that it's in a cluster of a couple other servers, so losing a disk will suck, but it could be worse.
<Daviey> tarekeldeeb: There are a bunch of ways, perhaps if you hit the ubuntu-server mailing list, you'll get a better response.
<Daviey> soloslinger: ok.. what OS is it at the moment?
<soloslinger> ubuntu 9.04, I just have it running on one disk atm
<soloslinger> Daviey: you want the output from a lspci ?
<Daviey> ahh, you can just ssh in and run some commands :)
<tarekeldeeb> kerboros, is it better than LDAP? Or you seek windows clients interoperability ?
<Daviey> soloslinger: please.
<Daviey> soloslinger: Also, fdisk -l would be handy
<tarekeldeeb> Daviey: thanks ..
<soloslinger> Daviey:    lspci = http://pastebin.com/d6e9ca9a4
<Daviey> 631xESB hmm
<soloslinger> Daviey:  fdisk -l = http://pastebin.com/dd2f5c3c
<Daviey> i was wrong :)
<Daviey> soloslinger: for a two disk setup, is there a reason you dont want to use software raid?
<soloslinger> Daviey: haven't really weighed the pros and cons out yet.  Figured it had the hardware for a hardware raid, might as well use it.
<Daviey> soloslinger: I can see your point, but for 2 disks - there is no real extra overhead for a mirror / RAID-1
<Daviey> and actually, you get some bonus.. you can pull the disk out and put it in any server, not locked to the RAID controller card
<Daviey> it's easy to monitor.
<soloslinger> Daviey:  I suppose so.  What are you seeing that put the nail in the coffin to the hardware raid idea for my future reference?
<Daviey> soloslinger: I haven't used hardware raid with 631xESB.. but it was the fdisk -l, that showed 2 disks made me mention it.
<soloslinger> Daviey:  ah.  Gotcha
<Daviey> soloslinger: mdadm really is your new best friend :).
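A minimal two-disk mirror with mdadm looks roughly like this; the device names are placeholders, so run it against the real disks only after backing up:

```shell
# create a RAID-1 array from two whole disks (sdb/sdc are illustrative)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# watch the initial resync progress
cat /proc/mdstat
# persist the array definition so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

This is the portability bonus Daviey mentions: an md mirror is readable on any Linux box, not tied to one controller card.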
<soloslinger> Daviey, maswan, pmatulis: I appreciate your time and help.  I get to get outta here and get some sleep!
<soloslinger> Daviey:  Yuup.  Sounds like I got some reading to do.
<Daviey> nn soloslinger, let us know how you get on.
<soloslinger> will do
<garymc> yo, im installing new firmware for my HP server and its stopped at 82% :(
<garymc> should i just wait?
<pmatulis> firmware?  for what device?
<garymc> Array controller 5i
<garymc> RAID
<pmatulis> ok
<garymc> all seemed to be going well then it stops, just sitting there :(
<garymc> :)
<garymc> now its done lol
<pmatulis> ok
<garymc> anyone tell me if I can have 4 X 146gb drives in my Server
<garymc> 3 of them setup as a logical drive as RAID 1 and one of them spare
<garymc> and the fourth 146 as Logical drive on its own as RAID 0
<garymc> ?
<_ruben> both raid0 and single drives arent raid
<garymc> so would the RAID 0 drive be its own drive for extra storage?
<garymc> so I could remove it  when backing up?
<garymc> Or should I set the 4 DRives up as RAID 1+0 and have two of them as Spares?
<garymc> decisions decisions
<garymc> its for a UBUNTU LTSP server
<_ruben> you cant do raid10 with 2 spares with just 4 disks
<_ruben> raid10 has a 4disk minimum without spares
<_ruben> (except for the software raid10 in linux, which can do fancy raid10 setups with odd number of drives for instance)
<smoser> jdstrand, so yesterday you mentioned to me that passing sensitive data to a application via environment was not secure ? why is that? i'm not aware of a way in which a non-root user can view environment of another users' processes.
<_ruben> it pretty much comes down to how much performance and how much storage you need
<garymc> well my RAID config is letting me setup RAID 1+0 with 2 drives and two spares?
<uvirtbot> New bug: #418342 in mysql-dfsg-5.1 (main) "akonadi-server prevents install of mysql-server-5.0" [Undecided,New] https://launchpad.net/bugs/418342
<garymc> what linux software can i get that does that _ruben
<garymc> is it in the GUI or command prompt?
<garymc> im better with gui
<garymc> :)
<_ruben> garymc: mdadm .. not sure if there's a gui for it
<garymc> ok
<garymc> is RAID 0 just pure disk space?
<Tom_Ass> and performance
<_ruben> raid0 isnt raid (as the r in raid stands for redundant, and raid0 doesnt offer redundancy) .. it basically 'glues' 2 or more drives together .. same as with using multiple pv's in lvm
<garymc> ok well ive setup two logical drives in the array controller
<garymc> one RAID 0 146gb drive
<garymc> and 3 Raid 1+0 (2 plus a spare)
<_ruben> performance is one of the most common reasons for raid0 .. its nice for video editing for instance
<garymc> Will this work?
<garymc> I want to use the single 146 as a data backup
<_ruben> like i said, a single drive isnt raid, not even raid0
<garymc> will it show up as a separate drive in ubuntu?
<_ruben> and im not sure how your raid controller does it, but in my book, raid10 over 2 disks isnt possible
<garymc> hmm posible its RAID 1?
<_ruben> could be
<garymc> thats over 3
<garymc> one as spare
<garymc> just trying to work out the best setup of this stuff
<_ruben> raid1 with 2+1 disks is fairly common
<garymc> cool but i got a RAID 0 on there too
<garymc> in the fourth slot
<garymc> will this work?
<jdstrand> smoser: it is an unnecessary risk imo. subprocesses will inherit it, which might not be what you want. the same user can get to it (of course), root can get to it (of course). There might be other ways I'm not thinking of. IMO it will be difficult to get right, and if you do get it right, you may not have covered everything for another platform
<jdstrand> smoser: I also googled it to see if I was missing anything and came across: http://dev.mysql.com/doc/refman/5.0/en/password-security-user.html
<_ruben> garymc: it probably would
<smoser> on linux the "extremely insecure" would seem a bit incorrect
<smoser> as "If you set MYSQL_PWD, your password is exposed to any other user who runs ps." is as far as i can tell explicitly wrong
<smoser> so anyway... i agree that without care, environment is longer lived than data written to a pipe.  but loads better than other ways.
<smoser> and actually, i *think* that mysql was where i first saw this... if you do run '-pMYPASSWORD' as a command line argument, it reads that, and then writes over its argv space, so that the 'ps' only shows 'XXXXXXX'
<jdstrand> smoser: yes, but it is in ps output for a split second in that case
<smoser> yeah
<smoser> s/second/millisecond/ but yeah
<smoser> :)
<jdstrand> smoser: it's not particularly hard to win those kind of races
<smoser> fair.
<smoser> jdstrand, do you know, is it actually the case that environment is intended to be "secure" on linux ?
<smoser> ie, as a general goal rather than just happenstance that the /proc/<pid>/environ has secure perms
<jdstrand> smoser: I think it is hoped to be, based on /proc/<pid>/environ
<smoser> (i realize not pure happenstance, but wonder if it would be considered a bug if it would be treated as a security issue if that were leaked elsewhere)
<smoser> jdstrand, thanks.
<jdstrand> smoser: of course, I've seen kernel bugs where the permissions end up wrong in /proc/<pid>
<jdstrand> smoser: based on /proc/<pid>/environ, you *probably* are ok to use it, but still are dealing with subprocesses and root. I don't know exactly what you are trying to do, but if it is ec2, protecting against root where feasible would be a good defensive stance
<jdstrand> smoser: I don't claim to know every means of getting at the environment, but it doesn't feel right
<jdstrand> smoser: kees and mdeslaur may want to weigh in as well
<smoser> jdstrand, it was just a general question, really, but came up in context of ec2-ami-tools (from amazon) which expose "secret-key" on cmdline and provide no other way to pass it (that i'm aware of)
<smoser> how much hope does one actually have for protecting against root ? is that a reasonable goal?
<smoser> jdstrand, i would say that if you were a program reading sensitive data from environment you should take care to un-set that data after reading
<mdeslaur> smoser: a real fix would be to use a config file, as mysql does. You should file a bug with amazon to get that implemented.
<jdstrand> (I recommended that initially)
<smoser> so how is a permanently stored plaintext password in a config file more secure than just about anything... definitely not safe from root.
<mdeslaur> oh...good luck protecting from root
<jdstrand> smoser: those are two different points
 * mdeslaur has not followed the whole thread
<kwork> you should always protect your stuff from root
<kwork> because root is the evil man
<smoser> it would seem to me that plain text password in a text file is not really more secure than environment on linux
<jdstrand> smoser: you can't fully from root, cause there could be a keylogger
<jdstrand> smoser: and of course root has access to the file as well as /proc/../environ
<mdeslaur> smoser: most "debugging" scripts people run grab the user's environment
<smoser> right. thats what i was thinking... doesn't really matter what you've done if there is evil root
<jdstrand> smoser: I don't know the application, but prompting for the password and piping it into a command via a shell built-in is safe from all but a key logger and memory attacks
<jdstrand> smoser: my point wrt to the env is that 'ps auxwwe' will show all the environment as root. if the perms in /proc/.../environ happen to be wrong (I'm assuming ps is looking at that), you're screwed
<jdstrand> smoser: if you don't use the environment, you are protected from such matters
<mdeslaur> what's the debate here? storing passwords in environment variables?
<jdstrand> smoser: imho, when dealing with ec2 private keys, I think a defense in depth is in order
<smoser> mainly, yes.
<jdstrand> mdeslaur: it's not so much a debate as a discussion
<smoser> i'd say an education of smoser more than debate
<jdstrand> mdeslaur: but that is what I thought you might want to weigh in on. it makes me feel queezy
<jdstrand> queazy?
<mdeslaur> well, environment variables aren't usually considered confidential. A whole slew of scripts pick up the environment variables when submitting bug reports, etc.
 * jdstrand goes with qweezee
<smoser> just fyi, i verified that /proc/<pid>/environ is apparently read-only even to <pid> and recorded at invocation time
<smoser>  PASSWORD=bar bash -c 'PASSWORD=XXX; unset PASSWORD; sleep 30' &
<smoser> [4] 6530
<smoser> looking at environ of 6530 still shows "PASSWORD=bar"
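smoser's check can be reproduced in one line: Linux snapshots /proc/<pid>/environ at exec time, so unsetting the variable inside the process does not scrub it:

```shell
# the child unsets PASSWORD, yet its /proc/<pid>/environ still holds the
# value it was exec'd with (entries are NUL-separated, hence the tr)
PASSWORD=bar sh -c 'unset PASSWORD; tr "\0" "\n" < /proc/$$/environ | grep "^PASSWORD"'
# prints: PASSWORD=bar
```

Which is exactly jdstrand's point: clearing the variable after reading it helps child processes, but the original environment block remains visible to the same user and to root.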
<jdstrand> mdeslaur: that is an excellent point
<jdstrand> if there is a bug, a user would happily run 'sudo apport-collect -p ...' if asked to in a bug report
<jdstrand> now you have to make sure your apport hook doesn't submit it, but then you can't protect against ad hoc debugging scripts
<mdeslaur> like "ipsec barf"
<smoser> jdstrand, apport-collect does tell the user "please look for confidential data"
<mdeslaur> and phpinfo()
<smoser> but yeah, probably they're not going to :)
<mdeslaur> and whatever else
<jdstrand> smoser: apport-collect does? I'm pretty sure it just runs and submits (note, apport-collect is for adding info to a bug after it is filed, and separate from apport-cli and apport-gtk)
<smoser> jdstrand, ok... i'm probably wrong yet again... i was thinking of 'ubuntu-bug foo'... i swear it used to say "if you weren't doing anything confidential..."
<smoser> but it doesn't seem to say that now. it really should.
<jdstrand> smoser: you are right about that-- it will give you a chance to look at it :)
<jdstrand> smoser: it does give you a chance to look at it though
<jdstrand> smoser: regardless, using env seems brittle at best
<smoser> mdeslaur, jdstrand thank you for environment-variable-security-101 class.
<jdstrand> smoser: I think apport will say the confidential bits if it grabs a crash. ubuntu-bug may not prompt for anything, but gives you a way to see what you are submitting. apport-collect just does it
<mdeslaur> smoser: the other thing, is environment variables being private may not be portable
<jdstrand> I'm not an apport expert, but pretty sure that is how it works
<smoser> mdeslaur, i agree, its not portable to other unixes
<smoser> but does anyone care about other unixes ? :)
 * smoser ducks
<mdeslaur> smoser: what other unixes? :P
<mdeslaur> smoser: amazon may care
<smoser> i think larry ellison owns a unix
<smoser> he owns a lot of stuff
<mdeslaur> also, some people use some legacy OS called "Windows"
<soren> jdstrand: It's "queasy" isn't it?
<jdstrand> soren: aha! that's it
<jdstrand> soren: though, I think qweezee is more fun
 * soren agrees
<jdstrand> at least as fun as feeling 'barfy' can be
<Tom_Ass> it's not fair to other legacy software to call Windows legacy ;D
<soren> mdeslaur: Environment variables under Windows are freaky in all sorts of ways, though.
<soren> mdeslaur: Processes that can modify each other's environment makes me cringe.
<soren> jdstrand: Heh... "barfy". That's good.
<jdstrand> :)
<ScottK> FYI, I just put in a big stack of sync requests that should result in a working turbogears2 for Karmic.
<garymc> I just installed Ubuntu LTSP on my server with the RAID config above ^^ 3 RAID 1 , one as a spare and One 146gb as RAID 0.
<garymc> Now the 146gb RAID 0 drive shows up in the file system, but it says unable to mount when i click on it
<ball> hello garymc
<ball> A single RAID 0 drive *makes no sense*!
<Tom_Ass> garymc, is that 5 disks total?
<ivoks> ball: remove 'A single' and 'drive'
<ball> ivoks: There are probably a few applications where non-redundant striping is justified (e.g. video editing scratchpad).
<ttx> ivoks: anything special I should know about before acking your openais sync request ?
<ball> ...but not on a single drive ;-)
<garymc> lol ball, i was just thinking maybe using it as a mass storage device you know for extra disk space
<ball> garymc: that's not RAID-0 though.
<ivoks> ttx: nothing special...
<ball> that's just a disk.
<garymc> no? it says it is in the Array utility, just updated firmware too
<garymc> ohh ok
<garymc> I cant mount it anyhow in my Ubuntu Ltsp setup
<ball> garymc: you may want to remove it from your RAID controller's list of drives to use.
<garymc> and i know now it aint a good idea
<garymc> i will
<garymc> just delete it and remove it?
<garymc> or can i do something else with it
<garymc> maybe make it another spare?
<ivoks> ttx: there will most probably be additional syncs from debian later
<ivoks> ttx: but i'd like to have everything in place before FF
<ttx> ivoks: sounds like a good idea
<ball> garymc: are the three disks (RAID-1 mirror and a standby drive) all 72 Gbytes?
<garymc> yes but 146gb now
<ivoks> sound of dell t300 booting
<ttx> ivoks: acked
<ivoks> ttx: :* thanks
<ttx> ivoks: in all cases it sounded better than what was already there :)
<ivoks> right... i'll upload pacemaker once openais settles down
<ivoks> and somebody will need to sync rhcs
<ivoks> and that's it
<ivoks> 0 time spent on mail stack :( :( :(
<garymc> anyone know where MySql likes to save the databases you create?
<ivoks>  /var/lib/mysql
<Steve[mbp]> morning everyone
 * ball waves
<ball> oooh, we have a meeting today, right?
<ivoks> yep
<ball> Is that in here or in #ubuntu-meeting ?
<ewook> ooh. Meetings.
<ewook> what time?
<ttx> ball: #ubuntu-meeting
<ttx> in 30 minutes
<ball> Thanks
<heath|work> what package do I need to install to be able to compile apache modules with apxs ?
<heath|work> I see, the development headers
<Sam-I-Am> join #ubuntu-meeting
<Sam-I-Am> oops
<andol> Sam-I-Am: Thanks for the reminder :)
<Sam-I-Am> haha
<giovani> hmm, so crontab in ubuntu (and other linuxes) doesn't seem to support a backslash for multiple lines -- but freebsd does -- despite using the "same version" of cron
<Jeeves_> If you need more than one line, wouldn't a script be nicer?
<giovani> in some cases
<giovani> not always -- just annoying that it's available somewhere and seems simple to have
<garymc> stgraber is this documentaion, does it work with Ubuntu LTSP or do I need to install Ubuntu server on the one I want to add as a cluster to my main LTSP server?
<Daviey> chaps -> meeting
<soren> giovani: If it's the same version, they'll act the same.
<garymc> stgraber: O k I need to follow those instructions youve done to the tee
<garymc> pretty good rigth up and thanks for providing this
<garymc> *write
<jbernard__> ive just uploaded libcgroup (http://revu.ubuntuwire.com/p/libcgroup) to REVU, if anyone has some cycles to review it I'd be very grateful
<garymc> sorry wrong room :S
<ewook> scary dealio with how meetings are conducted.
<giovani> soren: putting "same version" in quotes wasn't accidental -- and that's false -- different distros apply patches and keep the upstream versions the same all the time
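As Jeeves_ suggests, the portable fix is to keep the crontab entry on a single line and move the multi-step job into a script. Everything below (the path, the step commands) is a hypothetical sketch:

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/nightly-job -- the crontab entry then
# stays on one line, with no backslash continuation needed:
#   0 2 * * * /usr/local/bin/nightly-job
set -e
run_job() {
    echo "step 1: dump the database"
    echo "step 2: compress the dump"
    echo "step 3: rotate old archives"
}
run_job
```

This also sidesteps the portability question entirely, since a one-line entry invoking a script works the same under every cron variant.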
<ewook> I don't get the idea of holding back bacula 3.0.x.
<ivoks> FF is day after tomorrow
<ivoks> and nobody looked at bacula and the merge isn't quite trivial
<ewook> FF?
<ivoks> feature freeze
<ewook> mind you, this is my first meeting.
<ewook> ah.
<mdz> soren/smoser: I wanted to check on how testing is done of the UEC/EC2 images for milestones
<mdz> I looked in the test tracker at http://iso.qa.ubuntu.com/ but they aren't there
<mdz> where do you report the test results?
<soren> mdz: It's manual at the moment. there's a testing matrix on the iso tracker.
<mdz> soren: oh, did I overlook them?
<aubre> Daviey: is Thursday the last day to submit tips for karmic?
<soren> mdz: To be honest, I've not used it. I just talked to slangasek about it, and he said he was putting it up there.
<mdz> I looked at the left in the list of products, and under the server section
<Daviey> aubre: no.. should still be ok after that..
<mdz> soren: who performs the testing?
<Daviey> aubre: it's not been uploaded yet anyway.
<aubre> Daviey: ok, it's not like it would break anything. Is it going to be in karmic?
<soren> mdz: For Alpha4, I did. I forget who else.
<Daviey> aubre: should be, just resolving some issues before it gets uploaded.
<soren> mdz: They're listed as UEC images.
<mdz> soren: I have looked all around and can't find them. can you give me the URL if they're there?
<ivoks> take care
<soren> mdz: In light of the... um.. interesting kernel issues we're seeing on EC2, we should probably track those separately.
<soren> mdz: I chose the "All" filter.
<aubre> are the images working ok in UEC?
<soren> aubre: I'm not familiar with the filtering magic in the ISO tracker. Perhaps "All" is the only way to see it.
<mdz> soren: weird, they do show up there, but not anywhere else that i see
<mdz> soren: in any case, it shows no test results submitted
<mdz> soren: and most of those tests are nonsensical for UEC and EC2
<mdz> so we need to get the test tracker sorted out, and then make sure that all milestone test results get reported there
<soren> mdz: It's something slangasek and I agreed to revisit for the next alpha.
<soren> mdz: Alpha4 did have a bit of a "this is the first time we're doing alphas of these things" feel to it.
<ttx> mdz: I'm about to call it a day, did you have any question not EC2/UEC related that you wanted answers for ?
<mdz> soren: we had discussed the possibility of getting the daily UEC images into EC2 as AMIs.  where does that stand?
<mdz> ttx: yes
<ttx> mdz: fire
<mdz> ttx: what is the status of "user login leverages directory infrastructure" in Karmic today?
<soren> mdz: It was a useful experience, though. It uncovered a lot of things, such as the need to specific tests we need to put on the ISO tracker, for instance.
<mdz> ttx: should I address all EC2/UEC questions to soren then?
<mathiaz> mdz: waiting on openldap upstream to deliver the feature
<mathiaz> mdz: will require a FFe for 2.4.18
<mdz> mathiaz: what other work is remaining in Karmic for that spec?
<mdz> other than openldap 2.4.18?
<ttx> mdz: yes.
<mathiaz> mdz: package sssd - it's under testing right now
<dendrobates> mathiaz: is that correctly reflected in the status page?
<ttx> mathiaz: should be done sometime today, right?
<mathiaz> mdz: as upstream released their version yesterday - it should be in karmic before FF
<ttx> dendrobates: it's up to date on the Canonical Server team Operational plan page, yes
<mathiaz> dendrobates: nope - last time I updated the status page was on Friday
<smoser> mdz, regarding daily UEC images into EC2 as AMIs, I was planning on looking at adding code to do that to the automated builds
<mdz> dendrobates: the status page shows it as "on track" but we are 2 days away from "missed the deadline" so I thought I'd check
<ttx> mathiaz: I updated it for you.
<mathiaz> dendrobates: and the news I got arrived yesterday late (my time-EST)
<soren> mdz: We haven't had time to look into that. I've scripted the entire process, though, so just need to find out if a) we can pull the images at will (so we don't end up having thousands of images on EC2 for no good reason), and b) where we'll be running the scripts to do this automagically.
<mathiaz> ttx: ah ok - thanks.
<smoser> mdz, regarding updated-ness of ec2 mirrors, I just verified that at least the us is up to date and sources.list is functional to point at it
<smoser> http://pastebin.com/m2601b85c
<ttx> mdz: status page shows "Has issues that need to be resolved in order to make the release" for that spec
<mdz> soren: do you think it is worth doing for 9.10, or should we put it off until Karmic+1?
<clusty> hey
<mdz> smoser: thank you. can you clarify who is responsible for the operation of those mirrors? is it canonical IS?
<soren> smoser: I think it's worth revisiting for this release.
<soren> Err..
<soren> mdz: I think it's worth revisiting for this release.
<smoser> mdz, i don't know that.... soren said IS, i have no info to refute
<dendrobates> soren: I think we should defer that to 10.04
<mdz> soren: do you have a name for someone in IS who is responsible?
<soren> mdz: For what, exactly? The mirrors? I've only talked to elmo about it, I think.
<mdz> soren: yes, the mirrors. OK.
<ttx> mdz: anything else ?
<clusty> i am having some issues using LDAP as a source for usernames (network auth). I installed the LDAP server and can query it locally with ldapsearch. Also installed lib-nss-ldap and configured it to the best of my knowledge
<clusty> https://help.ubuntu.com/community/LDAPClientAuthentication
<mdz> ttx: lots, but all the rest is EC2/UEC, so if you don't have information on those projects, I guess there is no need to stick around
<clusty> unfortunately when I do a getend i don't see any user from LDAP
<clusty> any ways how i can debug what is going on?
<ttx> mdz: I have some, but soren has much fresher information.
<ttx> mdz: what TZ are you in currently ?
<mdz> soren: what's the story on EC2UpgradesSpec?  it's low priority and 0 of 4 work items are complete, so it sounds like "defer to post-Karmic"
<soren> ttx: Have a nice evening, talk to you tomorrow.
<mdz> ttx: UTC+1
<ttx> mdz: ok, so we should be able to discuss any remaining questions with you tomorrow morning.
<mdz> but I will be staying around until I have a clear idea of what's done vs. not done for feature freeze
<mdz> ttx: the rest of this week I will be UTC-7 FYI
<soren> mdz: It's been blocked on the whole kernel thing. It's something I'd really, really like to do for this release, I have most of the code on my side, it just needs testing on EC2. It's very much worth an FFe.
<mdz> soren: what's "the whole kernel thing"?  the bug(s) which cause boot failure?
<soren> mdz: Yes-ish :)
<soren> mdz: The absence of a karmic ec2 kernel.
<soren> mdz: ...which is caused by the boot failure bug(s).
<mdz> soren: absence?  I thought it was just unreliable
<soren> mdz: That's the Jaunty kernel.
<mdz> soren: what's the story with the karmic kernel (bug numbers?)
<soren> mdz: I'm not sure there is a bug report on it. It has simply not been delivered.
<mdz> soren: what's included in the karmic UEC images then?
<mdz> oh, a non-xen kernel of course
<soren> mdz: UEC images do not contain kernels.
<mdz> soren: so what's in the karmic alpha 4 AMIs on EC2?
<mdz> the amazon kernel?
<soren> mdz: An Intrepid kernel.
<mdz> oh dear
<soren> ...from Amazon.
<soren> Well, the one we've been using for Intrepid on Amazon.
<soren> "oh dear" indeed.
<soren> :(
<soren> I'm well aware that this is bad in more ways than I'd like to enumerate. We needed a kernel that worked.
<mdz> soren: under the circumstances, I'd like to propose that we drop the kernel upgrades project, since we have our hands full getting even one kernel working
<soren> mdz: Well, getting a kernel working is the kernel team's job.
<soren> mdz: So that does not take time from us.
<soren> mdz: Other than the poking and nudging and all that.
<mdz> soren: moving on to UEC, what's the status of eucalyptus 1.6?  I see there's a snapshot from bzr in karmic
<soren> mdz: Correct.
<mdz> soren: how is it working?
<soren> mdz: It's still settling somewhat.
<soren> mdz: There are some upgrade issues (conffile handling) that are causing problems.
<foolano> mathiaz: sorry i was away during the meeting, i though I would be available but I couldnt make it :(
<soren> mdz: ..and some other bits and pieces.
<mdz> soren: does it pass a smoke test on a fresh install?
<soren> mdz: The reason for the bzr snapshot is that upstream hasn't actually tagged their 1.6 release yet. They are also stabilising. I'll grab the freshest code tomorrow evening or Thursday morning, so we're as close as possible to their final release before FF.
<soren> mdz: I've not had a chance to test that yet.
<soren> mdz: ...since it doesn't really work very well in a VM.
<mdz> soren: when will the seed changes land to put eucalyptus on the CD?
<soren> mdz: The NC because it needs to run virtual machines (which is tricky inside a virtual machine) and the other stuff because the networking is special.
<soren> mdz: They landed a while ago.
<soren> mdz: I think.
 * soren checks
<soren> mdz: cd-build-logs agrees.
<mdz> committer: Colin Watson <cjwatson@canonical.com>
<mdz> branch nick: ubuntu.karmic
<mdz> timestamp: Tue 2009-08-11 13:57:54 +0100
<mdz> message:
<mdz>   add eucalyptus-simple-cluster and eucalyptus-node tasks, for foundations-karmic-cloud-setup-in-server-installer
<soren> Yup.
<mdz> soren: so that's blocked on eucalyptus getting promoted to main?
<soren> Yup.
<mdz> what's holding that back?
<soren> Eyes and hands.
<soren> We're waiting for the MIR team.
<soren> Except for a few cases where I need to fix up a few things. The vast majority is java dependencies which simply need a review from the MIR team.
<mdz> soren: who is working on that?
<soren> mdz: lool and pitti, I believe.
<mathiaz> foolano: np - we've got the update
<soren> mdz: I seem to remember overhearing something about the rest of the team being on holiday.
<soren> mdz: Don't quote me on that, though.
<mdz> soren: since we're almost out of time, I think we need to be tracking it more closely
<mdz> especially if the people who would normally work on it happen to be on holiday
<mdz> how can I help move it forward?
<soren> mdz: I'm not sure what more we can do. We poke and prod as much as we can without being excessively annoying. :)
<ttx> mdz: the whole MIR team is working on that
<ttx> though so far only asac did some reviews.
<ttx> and lool on the C ones.
<mathiaz> ttx: hm - regarding sssd
<mathiaz> ttx: the packaging is ready
<mdz> ttx: there is a fine line between "everyone is accountable" and "no one is accountable" ;-)
<ttx> mdz: indeed
<soren> mdz: :)
<mdz> if no one is working on it, it will not get done, no matter how many people are on the MIR team
<mathiaz> ttx: but sssd doesn't work on karmic - related to dbus being broken
<mathiaz> ttx: does it make sense to upload the package to karmic even if it's broken?
<mathiaz> ttx: with upstream being aware of the issue?
<mdz> it sounds like what you need is a commitment to get the remaining reviews processed, and the packages promoted to main, in the next two days
<soren> mdz: Correct me if I'm wrong, but I don't believe we usually consider main promotions covered by feature freeze?
<ttx> mdz: during my meeting with the MIR team, they said it was ok to do MIR stuff after FF... though in our case it clearly screws up the delivery of the feature on the CD.
<mdz> soren: they're correct in that we can promote things to main post-FF. however, you can't deliver your features until those MIRs are processed.
<ttx> mathiaz: would it require a fix in sssd to be working, or just a future fix in dbus ?
<soren> mdz: Ah, yeah, we do have the "*-on-a-cd" spec.
<mdz> soren: and if I'm not mistaken, the foundations team has a feature goal which depends on eucalyptus being on the CD, and they won't be able to land that either
<mdz> (the cloud installer)
<soren> mdz: *nod*
<mdz> so it's not OK to put that off
<mdz> next, the virtual appliance project
<mdz> I see that alfresco has landed in partner, which is great
<soren> Oh, we're not actively delaying it :)
<mathiaz> ttx: I don't know - bug is under investigation
<mdz> but who is working on making the appliance?
<soren> I am.
<ttx> mathiaz: if the dbus bug is tracked and milestoned appropriately, I'd upload the package, not working but not broken in itself
<mdz> soren: sounds like you have a lot on your plate
<mdz> soren: is there anyone else who can work on some of these things, to let you focus more?
<ttx> ok, now I really need to go or I'll die.
<mathiaz> ttx: hm - well the package doesn't install since the daemon is not able to start
<mathiaz> ttx: ok - I'll keep investigatin this issue
<soren> mdz: I'm not sure I can split it up, explain and train someone else to work on it within the given timeframe.
<soren> ttx: Take care!
<ttx> mathiaz: mdz knows what FF should cover. Ask him which alternative is the least bad.
<soren> mdz: I do have a lot on my plate. No argument there :)
<cjwatson> mdz: for the record, I do not need euca on the CD to develop my side of this
<cjwatson> it can be landed for netboot at least
<mdz> cjwatson: only to test it? ;-)
 * soren suspects cjwatson can roll a CD with eucalyptus on it if he wants to test it :)
<cjwatson> lots of people will be using this with netboot anyway
<mdz> indeed
<cjwatson> so I can develop *and test* it that way
<cjwatson> as it happens I'm debugging it into existence at the moment
<mdz> cjwatson: should I be nervous that you're not talking about it in the past tense?
<mdz> ah
<cjwatson> it is not exactly terribly early, but I think I have it under control
<mdz> cjwatson: is there anything I can do to help?
<cjwatson> mdz: I've bounced you the mail I sent about it before going on holiday last week
<mdz> soren: what sort of explanation or training would be needed to work on the appliance?
<mdz> soren: I think I could explain it to someone pretty easily if that would help
<mdz> I'm happy to put the time in if it would move things forward
<cjwatson> the main things that are left are additional debconf configuration and the thing to write and publish the preseed file on the cluster
<mdz> cjwatson:  I can take back that TB action if needed ;-)
<soren> mdz: There are two sides to this, really. There's the one where we build a UEC image with the alfresco package preinstalled. This is quite simple.
<cjwatson> if what I have here does anything sensible at all, I'll upload it hopefully today
<cjwatson> mdz: I have two weeks for it, right? :)
<mdz> soren: that's the part I'm asking about . that is, in fact, the whole reason for this exercise. it was the original requirement: a reference appliance :-)
<cjwatson> (yes, I know)
<mdz> cjwatson: :-(
<mdz> I wish I could say that it could be deferred, but it can't
<cjwatson> I mean for the TB action
<mdz> oh
<mdz> phew
<cjwatson> err, you can come and babysit so I don't have a child trying to climb over me? :)
<mdz> cjwatson: if I didn't have a plane to board tomorrow...
<soren> mdz: The other is the recipe based version (or as I like to call it:  the one that is actually useful).
<cjwatson> don't panic, captain mainwaringg
<cjwatson> -g
<soren> mdz: A CMS is no fun if it's going to throw away all your content when it's rebooted.
<mdz> soren: why does that require a recipe? I figured it would just use an EBS volume or whatever
<soren> mdz: You just answered your own question :)
<soren> mdz: Because it needs and EBS volume.
<soren> -d
<mdz> soren: I guess my understanding of what we mean by recipes is a bit weak
<cjwatson> you know, what would really help me that's basically decoupled would be having the eucalyptus package in revision control in a useful way
<soren> mdz: Recipes are needed if you need to take actions that require the user's credentials.
<mdz> soren: this is not as urgent as the other items; if it requires hacks which can't go in past FF, we can hide those away in the image
<cjwatson> which probably only takes making sure that James' import is sane and then branching it and agreeing to use it
<soren> mdz: Since those are not going to be available in a generic image.
<soren> mdz: ...so we need something that holds the user's credentials  to set up the EBS volume and all that for them.
<mdz> soren: is there anyone in particular assigned to packaging landscape this cycle?
<soren> mdz: Packaging... landscape..?
<soren> mdz: Oh, the client stuff?
<mdz> soren: yes, landscape-client, sorry
<soren> I'm not sure.
<mdz> dendrobates: ?
<soren> Not that I know, but that is not to say there isn't :)
<dendrobates> mdz: not that I am aware of.  I will ask mathiaz.
<soren> cjwatson: The eucalyptus packaging should be on launchpad already?
 * soren checks
<mathiaz> mdz: I uploaded the latest version of landscape-client two weeks ago IIRC
<mathiaz> mdz: I haven't heard anything from the team since then
<mdz> mathiaz: can you tell me if there is another code drop they expect to land for feature freeze?
<cjwatson> soren: there's no vcs-bzr field in the package
<mathiaz> mdz: I don't know.
<cjwatson> which is, at the moment, an indicator that the branch is actually being used
<soren> cjwatson: Ah, sorry about that.
<cjwatson> I don't want to commit to a vacuum
<soren> cjwatson: https://edge.launchpad.net/~ubuntu-core-dev/eucalyptus/ubuntu [+]
<cjwatson> aha
<soren> s/\[\+\]//g
<cjwatson> right then, I will get that into shape with my recent uploads
<soren> cjwatson: That would be fantastic. Thank you.
<soren> mdz: Did you want anything else from me? I've got dinner waiting.
<cjwatson> (out for half an hour while this test install runs)
<soren> mdz: Alternatively, I'll be back in a couple of hours.
<mdz> mathiaz, dendrobates: I just remembered Gustavo was here in the office :-)
<mdz> he says that he doesn't think there are any features outstanding, that they expect to be in bug fix only mode at this point
<mdz> but he will confirm 100% and get back to us
<mathiaz> mdz: great :)
<cjwatson> hmm, actually, I can take the laptop downstairs with me ... the magic of wireless
<mdz> mathiaz, dendrobates: he says they normally release the client once per month.  we should make sure that we track those monthly releases and sponsor them
<mdz> Free on the landscape team is apparently working on the packaging
<mdz> so hopefully they just need review and sponsorship
<mathiaz> mdz: yes - I'm in contact with him
 * soren needs to run
<mathiaz> mdz: he pings whenever he needs sponsoring
 * soren will check back later
<mdz> mathiaz: ok, good. so I'll just get confirmation that there is no more feature work expected to land in karmic
<mdz> soren: thanks for all your help
<cjwatson> oh GOD I hate CDBS
<mdz> mathiaz: regarding the directory stuff, so AIUI you hope to upload sssd before FF, and seek an FFe for openldap...and there is nothing else to do but fix bugs?
<mathiaz> mdz: that's is correct.
<mdz> mathiaz: thank you
<cjwatson> soren: hmm, not to be picky, but do you think you could remember to commit your *own* changes to that branch? :-) 0ubuntu2 doesn't seem to be there ...
 * cjwatson syncs it up
<clusty> where in /etc/ is the place where i can change kernel params?
<kinnaz> sysctl.conf
<kinnaz> maybe :P
<cjwatson> that depends on the kernel parameter
<cjwatson> if you mean the kind you put on the kernel command line, then that goes in bootloader configuration - /boot/grub/menu.lst in <=9.04, or /etc/default/grub in >=9.10
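To make cjwatson's distinction concrete: tunable parameters go through sysctl, while boot command-line parameters go through the bootloader config. vm.swappiness below is just an example knob:

```shell
#!/bin/sh
# Runtime ("sysctl") parameters: read now, change now, persist in
# /etc/sysctl.conf.  Falls back to /proc/sys if sysctl isn't in PATH.
current=$(sysctl -n vm.swappiness 2>/dev/null || cat /proc/sys/vm/swappiness)
echo "vm.swappiness is currently $current"
# sudo sysctl -w vm.swappiness=10               # change it until next reboot
# echo 'vm.swappiness = 10' >> /etc/sysctl.conf # make it persistent (as root)
#
# Boot-time *command-line* parameters (e.g. "quiet") are different:
# they go in the bootloader config -- /boot/grub/menu.lst on <=9.04,
# or GRUB_CMDLINE_LINUX in /etc/default/grub (then run update-grub)
# on >=9.10.
```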
<cjwatson> soren: synced up to match the archive now
<clusty> got it
<clusty> kinnaz, got it thanks.
<kees> smoser: leaking environment variables would be considered a security issue, yes.
<kees> smoser: also, /proc/$pid/environ, as you saw, is only the invocation environment, since later env changes need to use different regions of memory, IIRC
<heath|work> What software is being used for spam scanning now a days?
<giovani> heath|work: it's always been spamassassin for content scanning
<giovani> there are many other spam-prevention techniques
<giovani> but spamassassin has dominated the content scanning market
<heath|work> giovani, What about Mail Scanner? It looks like it uses spamassassin and clamav
<giovani> well it's just a package
<giovani> it's not a scanner application itself
<giovani> it uses some blacklists, etc
<giovani> I see no point in using it over using the individual applications yourself -- which gives you a lot more flexibility and customization
<VirtualDisaster> i have 2 repos with the same package name in each
<VirtualDisaster> however the one i need is from a custom repo, how would i install a package explicitly from the custom repo and not brick apt-get
<genii> !pinning
<ubottu> pinning is an advanced feature that APT can use to prefer particular packages over others. See https://help.ubuntu.com/community/PinningHowto
<genii> VirtualDisaster: ^
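A minimal pin for VirtualDisaster's case could look like this in /etc/apt/preferences (the package name and origin below are placeholders; check the custom repo's Release file for its actual Origin value):

```
Package: mypackage
Pin: origin "repo.example.com"
Pin-Priority: 1001
```

`apt-cache policy mypackage` then shows which candidate wins; a priority above 1000 even allows apt to downgrade to the pinned repo's version.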
<VirtualDisaster> genii, ah ty
<genii> np
<heath|work> genii, thanks
<heath|work> sorry giovani ^^
 * genii hands out more mugs of coffee
<soren> cjwatson: *blush* Sorry about that :(
 * genii scrolls back up to see what all the blushing is about
<jpds> kirkland: I still have the padlock_sha.ko problem.
<kirkland> jpds: oh, really!
<kirkland> jpds: herm
<jpds> kirkland: What kernel is your fresh install running?
<kirkland> jpds: i just updated it to latest; rebooting
<kirkland> jpds: boom ... it's back
<jpds> Installed: 2.6.31-7.27
<kirkland> jpds: yeah, -7 breaks
<kirkland> jpds: -6 works
<jpds> I know.
<kirkland> jpds: are you encrypting swap?
<jpds> Otherwise I wouldn't be here. ;-)
<jpds> kirkland: /home, with LUKS.
<kirkland> jpds: interesting, okay
<cjwatson> soren: I'd very much appreciate it if you could review lp:~cjwatson/eucalyptus/installer - would there be a major problem with merging and uploading this as a step on the road?
<cjwatson> I'm pretty sure it's not all the way there
<cjwatson> mdz: ^- or indeed you might like to
<cjwatson> this is basically the patch from a week ago with basic bug-fixes applied following smoke-testing
<cjwatson> and with euca_find_cluster updated to output IP addresses rather than hostnames
<mathiaz> kirkland: hey - http://people.canonical.com/~mathiaz/cgroup.boot.png
<mathiaz> kirkland: Am I supposed to do something special to make cgroup working?
<kirkland> jbernard__: cgroups
<kirkland> jbernard__: what needs to be done to make it work out-of-the-box?
<jbernard__> kirkland: ahh
<jbernard__> you need to edit /etc/cgconfig.conf
<kirkland> jbernard__: to say what
<jbernard__> and at least uncomment the lines referring to 'mount'
<kirkland> is it something that we could ship a more sensible default?
<kirkland> jbernard__: or disable the init script by default, if config must be done?
<jbernard__> yes
<jbernard__> we could go both ways
<jbernard__> the current package installs everything as the upstream author intended it, but I really think we can do a bit better on default configuration
<jbernard__> what are your thoughts there?
<mathiaz> jbernard__: is there a reason to *not* have the mount lines uncommented?
<mathiaz> jbernard__: the idea being that a default install should work OOTB - ie have a minimal working configuration
<jbernard__> the directory must exist, in this case '/mnt/cgroups'
<jbernard__> so we could ship with those lines uncommented and create the dir if it doesn't exist
<mathiaz> jbernard__: if every user will have to uncomment the lines, then it makes sense to enable them by default
<jbernard__> i agree
<jbernard__> i can make that change quickly
<kirkland> jbernard__: cool, thanks
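For reference, the stanza being discussed in /etc/cgconfig.conf looks roughly like this once the mount lines are uncommented (the controller list is illustrative; only the /mnt/cgroups path comes from the conversation above, so treat this as a sketch rather than the shipped default):

```
mount {
    cpu    = /mnt/cgroups/cpu;
    memory = /mnt/cgroups/memory;
}
```

With the directory created at install time, the init script can mount the controllers on boot without any manual editing.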
<kirkland> jbernard__: please bump the ubuntu version, as i uploaded the other one already
<jbernard__> other one?
<jbernard__> from 0ubuntu1 to 1ubuntu1 you mean?
<kirkland> i uploaded what you put in revu
<kirkland> jbernard__: you did a nice job packaging this, thanks ;-)
<kirkland> jbernard__: i asked mathiaz to additionally review
<jbernard__> no problem
<mathiaz> jbernard__: 0ubuntu2
<jbernard__> so the only request I'm hearing is a sane default configuration
<kirkland> jbernard__: his comments on the init script/config are correct, but that can be solved between now and beta
<jbernard__> ok, so this solves the in-before-freeze issue
<jbernard__> or is the clock still ticking?
<mathiaz> I think that's ok to fix the default configuration between now and beta
<mathiaz> It's not a blocker for FeatureFreeze IMO
<jbernard__> ok, thats good to hear
<jbernard__> i will make the change then so the initscripts can be run at install successfully
<mathiaz> jbernard__: right - I noticed that the init scripts weren't installed via dh_installinit
<mathiaz> jbernard__: so they're only installed, and not run
<mathiaz> jbernard__: which is kind of odd for an init script
<jbernard__> correct, i removed the running of them from the postinst
<aubre> excited - I got my hardware in and tomorrow I will be racking it and getting ready for my UEC proof of concept cloud for Auburn University
<jbernard__> because it requires a working configuration
<mathiaz> jbernard__: the result is that either the system needs to be rebooted or the init script needs to be run manually afterwards
<mathiaz> jbernard__: right - I think that providing a default working configuration would fix this issue
<mathiaz> jbernard__: it makes sense to install an init script and then start it
<mathiaz> jbernard__: one of the Ubuntu policy is to have a default configuration working OOTB
<jbernard__> i agree, ill make that change
<kirkland> mathiaz: okay, where's the sssd branch?
<jbernard__> james just mentioned that he's inclined to reject it from the queue, so ill try to get these changes tested an uploaded this evening
<kirkland> mathiaz: sorry, i'm sure it's in my history :-)
<kirkland> mathiaz: it's been a crazy busy day
<mathiaz> jbernard__: note that I don't have enough knowledge about cgroups to actually figure out whether it makes sense to have a default configuration working for 80% of the systems
<aubre> Since this setup is proof of concept, should I stick with jaunty, or go ahead with karmic?
<mathiaz> kirkland: https://code.launchpad.net/~mathiaz/sssd/ubuntu-pkg-release-tarball
<mathiaz> kirkland: yeah - welcome to pre-FeatureFreeze crazyness...
<aubre> it certainly won't be in production until after karmic
<mathiaz> aubre: UEC testing in Karmic is more than welcome
<mathiaz> aubre: however you may encounter rough edges
<mathiaz> aubre: just be aware of that.
<aubre> mathiaz:  great, I'm prepared for that
<mathiaz> aubre: like - it may break, not work at all, or eat your data
<aubre> mathiaz: actually this setup is for testing, it won't have anything of value on it
<mathiaz> aubre: if you're up for it and ready to debug/report bugs, testing UEC in Karmic is welcome
<aubre> mathiaz:  made for breaking, I am ready to help with bugs, I have my launchpad account all set up
<mathiaz> aubre: awesome - welcome to the factory floor..
<aubre> mathiaz: hehe thanks - I have 3 machines , one will be cloud & cluster controller and 2 will be ncs.
<aubre> mathiaz: and the white paper was really helpful in ironing things out for me
<aubre> mathiaz: but the most important thing is I have a supportive director
<jbernard__> mathiaz: i tend to agree
<jbernard__> mathiaz: but it's probably nice to not see a boot-time error
<jbernard__> even though no cgroups are configured for the default installation
<kirkland> mathiaz:         mv $(CURDIR)/debian/tmp/usr/lib/libnss_sss.so.2 $(CURDIR)/debian/tmp/lib/libnss_sss.so.2
<kirkland> mathiaz: what's that doing?
<kirkland> mathiaz: moving it out of the way, i see
<mathiaz> kirkland: nss libraries need to be in /lib rather than /usr/lib
<mathiaz> kirkland: or at least that's where most of them are located
<kirkland> mathiaz: ah
<kirkland> mathiaz: fair enough
<mathiaz> kirkland: yeah - I'm not sure about my first statement
<mathiaz> kirkland: there may be a use case to have the nss libraries in the ramdisk?
<kirkland> mathiaz: perhaps required for boot somehow?
<jbernard__> does anyone have a package example laying around where dh_installinit installs an initscript from outside the debian directory?
<jbernard__> i couldn't seem to get it to work right, but maybe I was just tired
<mathiaz> jbernard__: hm - dh_installinit only works from the debian/ directory
<kirkland> mathiaz: you could solve a couple of bashisms in the sssd initscript with /bin/kill
<mathiaz> jbernard__: you could copy the init scripts from scripts/init.d/ to debian/libcgroup1.(name).init and then call dh_installinit --name
<jbernard__> mathiaz: that would work too, but it would also mean upstream changes would require a manual sync
<kirkland> jbernard__: if you want, just send me a debdiff
<anAngel> Hello. I have some problems configuring amavis with postfix and dovecot with virtual domains and users. It doesn't scan/put any Headers in my mail. Anyone can help me?
<jbernard__> kirkland: for the license updates and default configuration?
<kirkland> mathiaz: your sssd package looks good to me
<mathiaz> jbernard__: even if you copy the file from scripts/init.d/ *during* the build process?
<jbernard__> mathiaz: oh, i hadn't thought of that
<jbernard__> mathiaz: ill look into that, good idea
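Sketched out, the copy-during-build idea might land in debian/rules along these lines (the script name cgconfig and the dh override style are assumptions, not taken from the actual package):

```
# debian/rules fragment (sketch): copy the upstream init script into
# debian/ at build time, then let dh_installinit pick it up by name,
# so upstream changes to the script are picked up on every build.
override_dh_installinit:
	cp scripts/init.d/cgconfig debian/libcgroup1.cgconfig.init
	dh_installinit --name=cgconfig
```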
<kirkland> mathiaz: note that I didn't build it, since I didn't have the tarball
<anAngel> Hello. I have some problems configuring amavis with postfix and dovecot with virtual domains and users. It doesn't scan/put any Headers in my mail. Anyone can help me?
<cjwatson> jbernard__: symlink in debian/ ?
<mathiaz> kirkland: https://fedorahosted.org/released/sssd/sssd-0.5.0.tar.gz
<mathiaz> kirkland: ^^ upstream tarbal
<mathiaz> kirkland: ^^ upstream tarball
<cjwatson> jbernard__: though unless it's a native package you'll probably have to make the symlink during the build, which isn't really much better than copying the file during the build
<jbernard__> cjwatson: good grief, that'd be even awesomer
<jbernard__> cjwatson: true
<cjwatson> I don't think dpkg-source will preserve symlinks in any particularly sane way, yet
<jbernard__> but at least i would have to override dh_installinit target
<jbernard__> but actually i have to anyway to put the name in there
<anAngel> Hello. I have some problems configuring amavis. It doesn't scan/put any Headers in my mail. Anyone can help me?
<mathiaz> !volunteers | anAngel
<ubottu> Sorry, I don't know anything about volunteers
<mathiaz> !volunteer | anAngel
<ubottu> Sorry, I don't know anything about volunteer
<mathiaz> !behavior | anAngel
<ubottu> anAngel: The people here are volunteers, your attitude should reflect that. Answers are not always available. See http://wiki.ubuntu.com/IrcGuidelines
<jbernard__> does the absence of a license on a source file prevent a package from being included in the release?
<jbernard__> I assume this has to be resolved before it can be uploaded? Or are there exceptions to this?
<cjwatson> jbernard__: personally I think it's OK as long as the licence of the whole package is clear (if it's just "all these source files are licensed under ..." kind of thing). There are some differences among archive admins about this, I think
<cjwatson> I tend to think of it as "would you be able to convince a court that there was any reasonable doubt about this licence?"
<cjwatson> (indeed civil cases are on balance of probabilities so if it's that strong you're on pretty solid ground I feel)
<jbernard__> Yes, it's just a few, 7 actually, but 5 of those are standalone test programs in /test
<cjwatson> test files without explicit licences are really very common - is the licence of the whole package clear?
<jbernard__> i would assume they just forgot to put a license on those, there does exist /COPYING which is LGPL and 85% of the source files have that license in the header
<cjwatson> then I reckon it's ok
<jbernard__> in that case, what would be the proper documentation in debian/copyright to reflect this?
<jbernard__> or can you point me to a doc and/or example where this has been done
<jbernard__> oh, nevermind, dh_make has a snippet at the bottom of the template that I can use
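The dh_make snippet jbernard__ mentions boils down to a clause like this in debian/copyright (the wording here is illustrative, not the actual template text):

```
The standalone test programs under tests/ carry no licence header of
their own; per the project-wide COPYING file they are covered by the
same licence as the rest of the source:

  This library is licensed under the GNU Lesser General Public License
  version 2.1; see `/usr/share/common-licenses/LGPL-2.1'.
```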
<jbernard__> cjwatson: thanks for the input
<soren> cjwatson: I'm curious why you didn't use start-stop-daemon for this: http://bazaar.launchpad.net/~cjwatson/eucalyptus/installer/revision/457 ?
<cjwatson> soren: maybe I should have done
<cjwatson> I didn't think of it
<cjwatson> (of course s-s-d renders it Debian-specific, but ...)
<cjwatson> the kill stuff was extending something that was already there, though
<soren> cjwatson: Ah, right, the dhcp server stuff?
<soren> cjwatson: meh, not important.
<soren> cjwatson: Apart from that, I must admit I was hoping for something that wouldn't be installer specific.
<cjwatson> http://paste.ubuntu.com/259535/ ? untested
<cjwatson> not installer-specific for which bit?
<cjwatson> we do need to have installer integration
<ZachMan> hello, i am trying to install unbuntu server and it fails to load the CD driver, it is a standard IDE DVD/CD rom
<soren> cjwatson: re the patch> I was also thinking for the killing part. s-s-d implements the wait-for-a-bit-and-then-SIGKILL-it quite nicely.
<cjwatson> it does, but I would have had to change other code
<soren> cjwatson: Oh, ok.
<cjwatson> if that other code is updated to use start-stop-daemon, then I'd be happy to follow suit
<cjwatson> it would certainly be cleaner
<soren> cjwatson: Ah, you mean the existing code from upstream?
<cjwatson> I mean the stuff that kills the pid listed in /var/run/eucalyptus/eucalyptus-cc.pid
<cjwatson> in debian/eucalyptus-cc.init, and likewise for the nc
<soren> Right, that's straight from upstream.
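For reference, the wait-then-SIGKILL behaviour soren describes is start-stop-daemon's --retry schedule; a sketch of what the upstream kill code could be replaced with (the flags are standard s-s-d options, and the pidfile path is the one cjwatson quotes):

```
# "stop" sketch: send TERM, wait up to 30s for exit, then KILL, wait 5s
start-stop-daemon --stop --quiet --oknodo \
    --pidfile /var/run/eucalyptus/eucalyptus-cc.pid \
    --retry TERM/30/KILL/5
```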
<soren> cjwatson: As for the installer specificness (specificity?), I just mean it'd be nice if you got roughly the same experience if you installed eucalyptus-nc on an existing system.
<cjwatson> ZachMan: more often than not this turns out to be a kernel bug. To verify, try booting an Ubuntu live CD and see if it starts up successfully
<cjwatson> soren: oh, right. so, one of the things I mentioned in my mail as being missing is some euca_conf integration to do the scanning on a running system as well
<cjwatson> I don't see any reason euca_find_cluster couldn't be integrated into that
<ZachMan> cjwatson where can i get the live cd?
<cjwatson> ZachMan: http://releases.ubuntu.com/
<mushroomblue> ZachMan: have you tried installing via usb stick?
<cjwatson> pick your release, grab the desktop CD
<soren> cjwatson: Makes sense.
<ZachMan> mushroomblue no
<mushroomblue> ZachMan: I only install by CDROM on systems that don't support usb boot.
<cjwatson> I realise you probably don't want a desktop installation, but it's just to verify whether the CD detection problem lives in the kernel
<cjwatson> soren: the libd-i dependency is a little ugly, although it would *work* on a regular system since libdebian-installer4 exists as a .deb - it wouldn't be upstreamable that way though. It ought to be replaced with some other hashtable implementation
<ZachMan> how can i boot via usb stick
<cjwatson> I was just in a tearing hurry and that was the first canned hash impl that came to hand
<soren> cjwatson: glib isn't anywhere in the dependency stack, is it? /me thinks not
<mushroomblue> ZachMan: sudo apt-get install usb-imagewriter
<cjwatson> soren: no, I was quite careful about that
<mushroomblue> ZachMan: I think there's a windows utility to burn a .iso file to usb
<soren> cjwatson: I meant in Eucalyptus' existing dependency tree. It's not per se, but you'd be hard pressed to find a system that runs Eucalyptus that doesn't already have glib. :)
<TViYH> how hard is it to install ubuntu server?
<cjwatson> soren: oh, right. or it's not like it actually needs a hash for all it does; walking a list would do fine.
<mushroomblue> TViYH: easy as pie
<mushroomblue> unless you're afraid of a console-based GUI and arrow keys.
<mushroomblue> you might have to press enter and tab occasionally, if that's a problem. :)
<mushroomblue> apparently, that was too much to bear.
<ahasenack> quick question: are USNs issued for packages in universe? I think not, right?
<mathiaz> ahasenack: USNs are issued for packages in universe.
<ahasenack> mathiaz: hmm, my theory is off then
<mathiaz> ahasenack: sorry about that. I didn't mean to ruin your whole life...
<ahasenack> mathiaz: a guy just told me he saw apache2-mpm-itk in the hardy security repository but that it had no USN
<mathiaz> ahasenack: right - that's probably because apache2-mpm-itk needs to be rebuilt every time apache2 is published
<mathiaz> ahasenack: it's a special case
<ahasenack> mathiaz: ok, so it's a dependency, and it doesn't come from the same source
<mathiaz> ahasenack: yop.
#ubuntu-server 2009-08-26
<Bilge> derp
<jdstrand> apache2-mpm-itk is also in universe, so no USN
<jbernard__> kirkland, mathiaz: i've just uploaded libcgroup to REVU that addresses all of the current issues
<jbernard__> kirkland, mathiaz: I think it's super solid now, any feedback you've got is much appreciated
<mathiaz> jdstrand: hm - so USN are not published for packages in universe?
<jdstrand> mathiaz: that is correct
<mathiaz> jbernard__: since the 0ubuntu1 version of libcgroup has been rejected
<jbernard__> i already bumped it
<jbernard__> to 0ubuntu2
<mathiaz> jbernard__: you should just upload a new version of 0ubuntu1 to revu
<jbernard__> ahh
<jbernard__> good call
<jbernard__> im on it
<mathiaz> jbernard__: package versions are only relevant once they're in the archive
<mathiaz> jbernard__: but REVU is used before the archive - and thus should only have 0ubuntu1 versions
<jbernard__> mathiaz: ok, good to know
<mathiaz> jbernard__: and a changelog entry just stating Initial Release
<mathiaz> jbernard__: once the package is in the archive, REVU is not used anymore
<mathiaz> jbernard__: new version should be sponsored via LP bugs(branches soon)
<jbernard__> mathiaz: ok, upload complete, should show up on the next pulse
<mathiaz> jbernard__: it seems that most of the code is actually licensed under LGPL 2.1
<jbernard__> mathiaz: yes
<mathiaz> jbernard__: while debian/copyright states LGPL 2
<mathiaz> jbernard__: COPYING in the src tree is LGPL 2.1
<jbernard__> yeah, as i understood it, LGPL-2 == 2.1
<jbernard__> since it was the first successor to GPL-2
<mathiaz> jbernard__: hm - I don't think so
<jbernard__> mathiaz: i was reading the second para from the top in COPYING
<mathiaz> jbernard__: /usr/share/common-licenses/ has a file for LGPL-2 and another one for LGPL-2.1
<jbernard__> mathiaz: indeed it does, oops ;)
<jbernard__> ill fix it up, gimme a sec
<mathiaz> jbernard__: right - that paragraph is confusing
<jbernard__> nothing gets by you guys ;)
<mathiaz> jbernard__: licensecheck
<mathiaz> jbernard__: ^^ help script to figure out the licenses of every file in the src tree.
<jbernard__> mathiaz: i ran that, but it seems to be very particular about header format
<jbernard__> mathiaz: so i ended up having to go through most of it by hand
<mathiaz> jbernard__: I usually run this command http://paste.ubuntu.com/259563/
<mathiaz> jbernard__: which is just for loop
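The pasted script itself isn't in the log, but a for loop of that flavour (entirely illustrative, not the actual paste) can be sketched like this, here run against two throwaway files:

```shell
# Illustrative licence-survey loop: for each source file, record the
# first line that mentions a licence, or a marker when none is found.
set -e
tmp=$(mktemp -d)
printf '/* Licensed under the LGPL 2.1 */\nint x;\n' > "$tmp/a.c"
printf 'int y;\n' > "$tmp/b.c"
report=""
for f in "$tmp"/*.c; do
    hit=$(grep -i -m1 'licen[cs]e' "$f" || echo '(no licence found)')
    report="$report${f##*/}: $hit
"
done
printf '%s' "$report"
rm -r "$tmp"
```

As jbernard__ notes, a quick grep like this misses unusual header formats, so the output still needs a manual once-over.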
<jbernard__> mathiaz: oh nice
<jbernard__> mathiaz: ok, upload is done
<jbernard__> mathiaz: i have to run out for a bit, but leave comments here and I'll pick them up on my return
<jbernard__> mathiaz: thanks for all the time, I really appreciate it
<mathiaz> kirkland: ^^ - looks good to me now
<mathiaz> kirkland: you can have another look at the packaging - (0ubuntu1 has been rejected from the NEW queue)
<Doonz> hey guys can someone recommend a program to monitor my network cards. im having connection issues with my server and i believe i have a bad nic in it but need to justify getting a new one
<KillMeNow> wireshark maybe?
<KillMeNow> hrm...  are you getting a bunch of errors when you do a ifconfig?
<giovani> wireshark on a server? no ...
<giovani> tcpdump will work fine -- but it won't tell you if the problem is at the NIC
<giovani> you can run some traffic tests locally on the LAN
<giovani> but unless you test the same switch ports, etc -- you can't eliminate all of the variables
<KillMeNow> how many times have you seen a NIC fail giovani?
<KillMeNow> it's pretty rare
<oh_noes> is it possible to tell lvextend "extend to the end of the physical disk" instead of saying "+400M"
<twb> oh_noes: I guess you could just keep extending it until you run out of extents.
<twb> pvs will tell you how much is left
<twb> Ah, the manpage mentions %FREE
<twb> So try --size 100%FREE
<twb> +100%FREE, rather
<oh_noes> nice, thanks!! Let me try
<twb> oh_noes: I worked this out by READING THE MANUAL
<twb> It's an approach I highly recommend.
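Worth noting: in lvextend the percentage forms go with -l/--extents rather than -L/--size, so the working invocation looks roughly like this (the VG/LV names here are made up for illustration):

```
# grow the LV into all remaining free extents of its VG
sudo lvextend -l +100%FREE /dev/myvg/mylv
# then grow the filesystem inside it (ext3/ext4)
sudo resize2fs /dev/myvg/mylv
```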
<oh_noes> twb: you're right, I can use 100%FREE which works, however I can't lvextend it because the VG is still the same size
<oh_noes> any thoughts on how to tell vgextend to "extend to the end of the existing disk you already have"
<twb> oh_noes: that doesn't make sense.
<twb> a VG is a bag of PVs.  PVs are either disks or (more commonly) partitions.
<twb> Pastebin the output of "pvs", "vgs" and "lvs".
<oh_noes> twb: http://pastebin.com/meeb9974
<twb> oh_noes: so you have one PV, /dev/sdb1
<oh_noes> twb: in my example, I have extended the 'end' of an existing disk.  So theres sdb1 which was originally 100% (8GB), then I added another 8GB to sdb, so I want to extend sdb1 to include the new free space at the end of the disk
<twb> oh_noes: if there is space on sdb not allocated to sdb1, you can't use it.
<twb> You would either need to extend the PV, or add another partition (sdb2) and make that a second PV, and add that new PV to the VG.
<oh_noes> twb: hrm, ok thanks.  What Im trying to do is handle a "VMware size increase".  ie. the OS is shutdown, then sdb is brought back with additional space at the end.  I want to be able to handle this and increase /foobar
<oh_noes> I thought LVM would handle this, but maybe not.  As you said, it appears my problem is that sdb1 isnt associated with that free space.
<oh_noes> can I use anything to resize and expand sdb1 to take up the free space?
<twb> You can delete the partition from the partition table and create a new one in the same place, with a different size.  I do not know if LVM will cope with that.
<oh_noes> so there isnt a 'proper' way to handle this?
<oh_noes> I prefer not to create a new partition, because I don't know how many more times it'll be extended (and thus new partitions) will be needed
<twb> Because you're running in vmware there's not a lot of point in using LVM anyway
<twb> You might as well just make a filesystem directly on /dev/sdb1 and then use resize2fs when you grow it.
<oh_noes> twb: but wont I have the same problem?  the partition is 8GB with unallocated space at the end of the disk.
<oh_noes> I will still need to resize the actual partition, right?
<twb> Oh, yeah, I guess.  But I *know* ext3 won't care if you delete and create a new partition
<oh_noes> theres no GNU tool that can resize a partition into unallocated space?
<oh_noes> gparted will do it, but I dont want to boot into a live CD. I was hoping to do it somewhere in rc2.d before applications come up (so they can use the new size)
<giovani> "resizing" = bad!
<giovani> oh_noes: you can't resize a normal partition while an OS is running from it
<giovani> not even windows can accomplish that
<oh_noes> maybe i can manually delete the partition with fdisk and create it again (but bigger)
<giovani> yeah ... back up your data first
<oh_noes> giovani: whats the definition of a 'normal' partition? Whats the difference between creating sdb1 on a new unused sdb and extending sdb1 when sdb is unused?
<giovani> a "normal" partition is one that isn't LVM
<giovani> your sdb/sdb1 example makes no sense
<oh_noes> giovani: yep i get that, in my case the partition in question is /foobar
<oh_noes> OS is *not* running from it, sdb1 isnt mounted.
<giovani> ok -- so then why do you need to do it in rc2?
<giovani> you can always modify partitions that don't have the OS on them when the system is fully booted
<giovani> no need to use a livecd
<oh_noes> i was hoping i could 'detect' the VMware disk has been increased, then resize it before apps come online
<giovani> what?
<giovani> sigh
<giovani> I have no idea what you're talking about
<oh_noes> giovani: what can I use to modify the existing partition into unallocated space?
<giovani> what does "modify the existing partition into unallocated space" mean?
<oh_noes> it means, sdb is 0-1000 blocks
<oh_noes> sdb1 is using 0-300
<oh_noes> I want to make sdb1 take up the entire 0-1000
<giovani> you don't
<giovani> particularly if it uses ext3
<ball> giovani: Is that like spreading peanut butter too thin?
<ball> (run out of inodes etc?)
<giovani> ext3 won't resize
<giovani> period
<giovani> the only way to resize a ext3 partition is to convert it to ext2
<giovani> and then back
<giovani> and I would never recommend it
<giovani> just move the data off -- make it LVM for future use
<giovani> and move the data back
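For completeness: resize2fs can in fact grow an ext3 filesystem in place, even mounted on reasonably modern kernels (shrinking is what requires an unmount), so the usual sequence after VMware enlarges the disk is roughly the following sketch (device and VG/LV names are hypothetical, and rewriting a partition table warrants a backup first):

```
# 1. recreate sdb1 with the same start sector but a larger end
#    (in fdisk: delete partition 1, make a new one, keep type 8e for LVM)
sudo fdisk /dev/sdb
sudo partprobe /dev/sdb        # or reboot, so the kernel rereads the table
# 2. grow the PV, then the LV, then finally the filesystem
sudo pvresize /dev/sdb1
sudo lvextend -l +100%FREE /dev/myvg/mylv
sudo resize2fs /dev/myvg/mylv
```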
<ball> hello pw_thirdfloor_
<Tim__Reichhart> is there any way that I can configure squirrelmail to accept just the username and not the full email address?
<uvirtbot> New bug: #419053 in samba (main) "package samba 2:3.3.2-1ubuntu3.1 failed to install/upgrade: le sous-processus post-installation script a retourné une erreur de sortie d'état 1" [Undecided,New] https://launchpad.net/bugs/419053
<martinjh99> Is there a package for mod_security for Apache2 on hardy lts?
<\sh> martinjh99: doesn't look like it...libapache2-mod-security first occurred in jaunty (if packages.ubuntu.com is to be trusted)
<martinjh99> poo - Just had someone putting phishing files on my server... Trying to harden it up a bit...
<martinjh99> setup ufw to allow http
<martinjh99> and ssh only from local network...
<martinjh99> Also added a password to root.  Is there anything else I should be doing?
<\sh> martinjh99: I would fix the application first
<martinjh99> Everything is updated from ubuntu repos - Updated Joomla from their website to the latest version.
<Boohbah> martinjh99: don't use vulnerable web applications
<ewook> lol
<martinjh99> yes that might be a good idea...
<ewook> joomla is indeed targeted - or, foremost the badly written plugins.
<martinjh99> Ah ok - suggestions for a CMS then??
<martinjh99> So might be my choice of cms thats the problem...
<\sh> drupal is good...typo3 is also having issues mostly plugin wise
<martinjh99> thanks for those - Could never get my head round Drupal when I tried it before plus there are no examples like the ones in Joomla
<\sh> martinjh99: drupal has a lot of examples and good books out in the field
<martinjh99> :) I'll google and have a look... Thanks
<dayo> anyone has any experience with snort?
<jtimberman> dayo: you might try #snort :)
<dayo> jtimberman: in there right now. spookily quiet, though
<jtimberman> dayo: more active during the day in the US, I think.
<dayo> jtimberman: true. forgot the timezone thing lol
<jtimberman> dayo: me too, its apparently after 3am.
<dayo> i'll check back later, then
<dayo> lol
<alvin> Are there known issues when using ext4 filesystems in qcow2 images? I'm seeing a lot of corruption when using a Karmic kvm host.
<alvin> What I actually want is building a stable production system:
<alvin> What Ubuntu version has the most stable version of kvm, what filesystem and what type of images should I use?
<a_ok> I did a release update however the php gd.so lib disappeared. how is this possible? (fixed it by installing the package php-gd)
<mattt> a_ok: the only reason i could think it'd get removed was if gd was now built into the base php package, and not required as a module
<mattt> a_ok: which release are you using?
<a_ok> hardy
<a_ok> mattt: yeah guess that must be it. dangerous though as these kind of changes can break stuff
<mattt> a_ok: hmmm, i don't think that's it :/
<mattt> a_ok: maybe you want to look through /var/log/dpkg.log to see if it offers any insight
<a_ok> mattt: 2009-08-26 07:32:23 upgrade php5-gd 5.2.3-1ubuntu6.3 5.2.4-2ubuntu5.7
<a_ok> when updating it just removed it and did not install the new one
<a_ok> i checked with apt-cache policy and there simply was no php5-gd installed anymore. i guess something is broken in that package
<a_ok> mattt: or 2009-08-26 07:40:47 remove php5-gd 5.2.4-2ubuntu5.7 5.2.4-2ubuntu5.7
<a_ok> removed a bit too much
<cjwatson> soren: so, if I fix that start-stop-daemon thing (at least on start) can I go ahead and upload that?
<soren> cjwatson: Yes, please do.
<soren> cjwatson: Sorry, didn't I say that last night before I passed out?
<soren> cjwatson: scrollback suggests I did not. Sorry.
<cjwatson> no problem, I was beating on RAID this morning anyway
<spiekey> Hi!
<spiekey> if i boot with 8.04 or 9.04 live cd, i get a sda device. If i boot with my own kernel, i have a hda device.
<spiekey> this sucks since i want to do automatic cloning :)
<spiekey> is there a way to turn off/on this scsi emulation?
<cjwatson> only by recompiling the kernel, usually
<cjwatson> it's probably CONFIG_ATA plus CONFIG_PATA_WHATEVERDRIVERRUNSYOURCONTROLLER
<cjwatson> the old IDE stuff is decreasingly well maintained so in general it's a good idea to try to switch away from it
<spiekey> okay, thanks
<soren> spiekey: What exactly are you trying to do?
<spiekey> soren: Boot up Ubuntu-Live CD ---> Run a Script that mounts, rsyncs, writes grub, reboots the Cloned PC.
<spiekey> my problem: When it boots up i get hda, in the Live CD its sda
<spiekey> so my search-and-replace script will fail...well it works, but then i run into a kernel panic :)
<heath|work> I need to find a script that has a name in it. So far I have: find / -type f -regex .*\.sh -exec grep smith {} \;
<heath|work> It spits out the line the name is on, but not the file name. What do I need to add to view the filename?
<pmatulis> heath|work: try 'grep -l'
<heath|work> pmatulis, thanks, I will
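pmatulis's suggestion, demonstrated on a throwaway tree (note the -regex argument also wants quoting so the shell doesn't expand it):

```shell
set -e
tmp=$(mktemp -d)
printf '#!/bin/sh\necho smith\n' > "$tmp/hit.sh"
printf '#!/bin/sh\necho jones\n' > "$tmp/miss.sh"
# grep -l prints the names of matching files instead of the matching
# lines; quote '.*\.sh' so the shell passes the regex to find untouched
found=$(find "$tmp" -type f -regex '.*\.sh' -exec grep -l smith {} \;)
echo "$found"
rm -r "$tmp"
```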
<szczym> Hello all, i have a problem installing ubuntu server on a box with no cdrom (i do it from a usb stick). install stops on detecting cdrom (demands floppy drivers). could anyone help me please?
<pmatulis> szczym: does your BIOS support booting from USB?
<heath|work> pmatulis, Thanks that worked!
<pmatulis> heath|work: good stuff
<szczym> yes, i did booted fine from usb
<pmatulis> szczym: that's strange, what release are you tring to install?
<pmatulis> *trying
<szczym> 8.04.03 installer works from usb but dont go forward after message "No common CD-ROM drive was detected"
<szczym> the failing item is: Detect and mount CD-ROM
<pmatulis> szczym: maybe add comment to bug 378442
<uvirtbot> Launchpad bug 378442 in linux "Error 'No common CD-ROM drive was detected' when installing Ubuntu Server via USB drive" [Undecided,New] https://launchpad.net/bugs/378442
<szczym> there are several bugs issued on that topic, but no solution ;(
<pmatulis> szczym: i know, but commenting to it is the proper thing to do
<szczym> yes, thanx for the info. in my humble opinion it's a very embarrassing problem given debian/ubuntu excellence ...
<pmatulis> szczym: agreed
<pmatulis> szczym: you may want to dig dipper by asking in #ubuntu-installer
<pmatulis> *deeper
<szczym> thanx, i will
<uvirtbot> New bug: #419191 in php5 (main) "Mailing from php5 module fails when number of Apache VirtualHosts is above +/-1500" [Undecided,New] https://launchpad.net/bugs/419191
<giovani> above +/-1500?
<soren> giovani: I'm guessing "+/-" is a poor man's "~".
<mattt> :)
<giovani> soren: ah, it baffled me, honestly
<pmatulis> szczym: looks like you got some good info over there
<szczym> yes indeed it helped me partially
<szczym> but anyway i will try to install 8.04 on another box onto a pendrive and then boot the machine from it, because on 9.04 i had problems with gphoto2
<ahasenack> is there a command line tool that checks for available security updates and matches them with USNs?
<ahasenack>  /usr/lib/update-notifier/apt-check uses just the repository name. If it ends in -security, the update is considered a security one
<ahasenack> (and it has to come from "ubuntu")
<sgsax> ahasenack: I was looking for just such a thing a month or so ago, but never found one
<garymc> sorry guys, forgot the command to add a user in the terminal is it "sudo aduser john" ?
<garymc>  "sudo adduser john" ?
<sommer> garymc: yeppers
<garymc> thanks
<sgsax> useradd is the "classic" tool
<sgsax> but adduser is the kinder gentler interactive debian tool
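The difference in practice (both need root; "john" is just the example name from the question):

```
# Debian's interactive wrapper: creates the home directory, applies
# sane defaults, and prompts for a password and GECOS details
sudo adduser john

# the low-level classic tool: prompts for nothing, so spell it out
sudo useradd -m -s /bin/bash john
sudo passwd john
```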
<traemccombs> hey guys... morning.
<traemccombs> Anyone use clonezilla  and know if there is a problem with doing 4 machines @ once over a network?
<traemccombs> I've got a single image on a box... and I want to get said image down to 4 machines. I wouldn't think there'd be a problem with accessing the same file from 4 machines
<josephpiche> I was wondering if could have someone look at ubuntu bug 396632. I filed it because I feel like I should be able to run `service lighttpd status` as an unprivileged user, but I'm not familiar enough with init scripts to know what to modify in order to write a patch.
<uvirtbot> Launchpad bug 396632 in lighttpd "lighttpd init file should not chown or chmod on status check" [Undecided,Incomplete] https://launchpad.net/bugs/396632
<sgsax> traemccombs: accessing the image file shouldn't be a problem, but iirc, clonezilla uses udp broadcast, which could conceivably cause your network to get real slow
<traemccombs> sgsax: yeah.... that could be bad. :)
<traemccombs> maybe I'll just take them off the network and stick em' on a solo switch
<frojnd> Hello there.
<frojnd> I desperately need someone that has experience with linux
<frojnd> I've done some "terrible" things
<frojnd> First I've removed my primary rtorrent 0.8.0
<frojnd> And I've installed it with ./configure and make commands
<frojnd> All fine and well but I accidentally removed the source dir :S
<frojnd> So I thought why not download again and ./configure and make again..
<frojnd> all good and fine only that I missed the stable version; instead I selected the development version :S
<frojnd> I tried to make uninstall
<frojnd> but I couldn't
<frojnd> i kept getting errors like: make: *** No rule to make target `uninstall'.  Stop.
<frojnd> and if that wasn't enough I've tried to install svn and from it rtorrent
<frojnd> http://www.howtoforge.com/compile-rtorrent-from-svn-ubuntu-8.04-hardy-heron <- I follewed that guide
<frojnd> all good and fine, only that now I have a non-working 0.8.0 version that only the sudo user is allowed to run :S
<frojnd> Is there anyone brave enough to help me solve this mess? I need to remove newest version of rtorrent which is 0.8.4
<frojnd> and possibly the latest svn version 0.8.5
<frojnd> and maybe even version 0.8.0
<frojnd> any brave ppl here?
<frojnd> I guess not :S
<ilowe> frojnd: give people a chance to respond.... we are all multi-tasking
<frojnd> ilowe: ok, I'm multi-tasking myself, so I'll wait and hope someone responds.
<ilowe> frojnd: I take it the version in the repos is no good for you?
<mushroomblue> has anyone actually successfully set up OpenLDAP using the server guide?
<virtualdisaster> mushroomblue, no :)
<virtualdisaster> mushroomblue, but i got a book that helped
<frojnd> ilowe: no
<frojnd> ilowe: I have 8.04 LTS and it is only 0.8.0
<mushroomblue> virtualdisaster: wait. so the book helped you _not_ set up openldap?
<mushroomblue> virtualdisaster: what'd you end up doing to set it up?
<virtualdisaster> mushroomblue, one sec
<ilowe> frojnd: what version do you *require*?
<mattt> mushroomblue: i just set it up on lenny :)
<mushroomblue> hah.
<mushroomblue> doesn't lenny also require GnuTLS?
<mattt> i do have libgnutls26 installed
<frojnd> ilowe: 0.8.4
<frojnd> ilowe: it has features that I need
<frojnd> ilowe: well 0.8.4+
<mushroomblue> mattt: didja use a howto, or are you merely strong with the force and set it up from memory?
<virtualdisaster> mushroomblue, mastering openldap
<virtualdisaster> good book
<mushroomblue> virtualdisaster: relevant to 9.04?
<virtualdisaster> should be
<mushroomblue> hmm. seems most books on openldap I found require modifying a deprecated slapd.conf
<mattt> mushroomblue: i know nothing about ldap, used some guides online
<mattt> mushroomblue: what problem are you running into?
<mattt> mushroomblue: i want to use ldap for vsftpd authentication -- used these two (fortunately book-marked them): http://www.howtoforge.com/linux_ldap_authentication http://www.debuntu.org/ldap-server-and-linux-ldap-clients
<mattt> mushroomblue: it's working from what i can tell, i managed to use libpam_ldap to connect ldap and vsftpd
<mushroomblue> mattt: that might solve it.
<mushroomblue> I think my first mistake was using the ubuntu server guide
<mushroomblue> my first issue was trying to figure out GnuTLS in order to set up a certificate authority. documentation was slim
<mushroomblue> now slapd seems to be working with it
<mushroomblue> but I can't log in as a user from the directory.
<mushroomblue> I'll stop talking and RTFM a bit more. thanks for the links.
<frojnd> ilowe: I'm doing programming myself just out of curiousity, u still with me despite multi-tasking stuff? :P
<ilowe> yup; just checked out the SVN trunk, I want to see if I can throw together a quick deb for you
<mattt> mushroomblue: you're trying to log in via what?
<mushroomblue> ssh.
<mattt> mushroomblue: you took care of /etc/nsswitch.conf?
<frojnd> ilowe: great, just wannt u to know that I appreciate this.
<mushroomblue> I think so.
<ilowe> frojnd: you got it :)
<mushroomblue> should ldap be before files?
<mushroomblue> in nsswitch.conf?
<mattt> yeah
<mushroomblue> mmkay.
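For what it's worth, the more defensive ordering keeps files first, so local accounts (root included) still resolve when the LDAP server is down or slapd hasn't started yet; a typical /etc/nsswitch.conf fragment:

```
passwd:         files ldap
group:          files ldap
shadow:         files ldap
```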
<mattt> mushroomblue: when you installed libldap on the client, did you specify the ldap server correctly?
<mushroomblue> it appears not.
<mushroomblue> I haven't even gotten to clients yet.
<mushroomblue> I'm still trying to get the server to auth.
<mushroomblue> turns out, nss-ldap is spitting out a ton of errors on startup.
<mushroomblue> apparently, ldaps://127.0.0.1 doesn't exist
<mushroomblue> it's also trying to find these things long before slapd starts
<mushroomblue> mattt: out of curiosity, did you just create a slapd.conf file?
<mattt> mushroomblue: i'd start without the SSL stuff first, and then move on to that :/
<mushroomblue> fair enough.
<garchotron> hello
<mattt> mushroomblue: nope, i had one in /etc/ldap, provided by the ldap server ... you don't have that file?
<mattt> mushroomblue: the only thing i can't figure out is when you configure libldap, where it stores that info about the server (as my /etc/ldap/ldap.conf file on the client isn't updated)
<garchotron> i'm having trouble using refresh-ims directive in squid refresh patterns in ubuntu server 8.04
<garchotron> why isn't the directive supported?
<cPF> is there anyone experiencing this issue? ctrl+c not interrupting processes and various zombies https://bugs.launchpad.net/ubuntu/+bug/402973
<uvirtbot> Launchpad bug 402973 in ubuntu "ssh terminal on Jaunty doesnt process ctrl-C ctrl-D ctrl-Z and so on (dup-of: 317948)" [Undecided,New]
<uvirtbot> Launchpad bug 317948 in gnome-terminal "ctrl+c doesn't interrupt running process" [Low,Triaged]
<mattt> cPF: thankfully not :)
<cPF> it's totally driving me nuts.. been like that for 2 months now
<cPF> hmm, seems like i just found a duplicate bug getting more attention https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/407428
<uvirtbot> Launchpad bug 407428 in openssh "sshd zombie processes and strange behavior after karmic upgrade" [High,Confirmed]
<mushroomblue> matt: slapd.conf is deprecated as of 8.10. 9.04 uses cn=config by default.
<mushroomblue> mattt: is it storing it in /etc/ldap/cn=config ?
<mattt> mushroomblue: nope, all i have in there is a default ldap.conf file
<kirkland> mathiaz: hey, you noticed that libcgroup was rejected too....
<kirkland> mathiaz: i couldn't find any specific feedback, or who did the rejection
<kirkland> mathiaz: could you?
<mathiaz> kirkland: hm - no. I usually don't know why it has been rejected.
<mathiaz> kirkland: I just ask the AA of the day
<ilowe> frojnd: will rtorrent work with latest repo libtorrent or does it depend on more recent versions?
<mathiaz> kirkland: hm - the last karmic kernel update broke my karmic vms.
<mathiaz> kirkland: the block device is not found anymore - have you heard of something similar
<kirkland> mathiaz: hrm, not really
<kirkland> mathiaz: dpkg -S `which kvm`
<frojnd> ilowe: it needs more recent libraries
<frojnd> ilowe: but to tell you the truth I don't know which ones since I have a mess here
<mathiaz> kirkland: kvm
<mathiaz> kirkland: the host is running hardy
<frojnd> ilowe: I would need someone that can fix my problems.. because to tell you the truth I don't know where to begin
<frojnd> ilowe: I can give you sudo user and you can fix it
<mathiaz> kirkland: it was working correctly until today when I saw a kernel upgrade
<ilowe> frojnd: LOL.... that's a little familiar for a first date
<kirkland> mathiaz: interesting;  hardy kernel too?
<frojnd> ilowe: under screen x :)
<kirkland> mathiaz: and your host is using the kvm-84 dkms module?
<kirkland> mathiaz: so the problem is clearly in the karmic guest?
<mathiaz> kirkland: kvm on the host: 1:84+dfsg-0ubuntu12.1~rc5ppa1
<mathiaz> kirkland: yes - I think so
<mathiaz> kirkland: my first guess is that virtio block device are no longer recognized by the kernel
<mathiaz> kirkland: the guest is running karmic
<kirkland> mathiaz: interesting ... the latest kernel busted my desktop too
<kirkland> mathiaz: something about crypto swap it doesn't like
<mathiaz> kirkland: hm - block device related too?
<kirkland> mathiaz: maybe so...
<kirkland> mathiaz: can you confirm that booting the older kernel allows you to boot?
<kirkland> jbernard__: ping
<mathiaz> kirkland: it was working correctly yesterday  - let me try with an old kernel
<lssd> hello guys! I just set up my ubuntu 64bit server as follows: 2 disks of 160GB each with the same partitions.. 1x20GB, 1x1GB, and the remaining 139GB ...the first 2x20GB made md0 and they are the / filesystem, the other 2x139GB are lvm and the 2x1GB are swap... Is there a way to check they work properly ?
<mathiaz> kirkland: yes - it works
<mathiaz> kirkland: 2.6.31-6-server works
<mathiaz> kirkland: 2.6.31-6-server boots
<lssd> i ask because i think when i first booted it gave me an error :(
<lssd> is there a way to check my server errors during booting ?
<kirkland> mathiaz: let's talk to rtg in -devel
<mathiaz> kirkland: right - I won't have so much time to debug this today though
<mathiaz> kirkland: I need my karmic vms working correctly for package testing before FF
<kirkland> mathiaz: right, i'm very swamped as well
<kirkland> mathiaz: we need the kernel team to get -7 into shape
<ilowe> frojnd: you still with me?
<garchotron> i tried to use reload-ims with the default squid (oooooold 2.6) in 8.04 -- to no avail, since it is not implemented in that version. now i've installed a "squid3" package from the official repos, and it seems to work with my previous config. It hasn't replaced squid2 tho...should i uninstall it? or expect any problems?
<kirkland> mathiaz: one more question ....
<kirkland> mathiaz: can you try booting with -7 kernel, and virtio off?
<kirkland> mathiaz: to see if it's a virtio problem specifically?
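One way to run the test kirkland suggests, assuming the guest is started with kvm directly as mentioned earlier (the memory size and image path here are illustrative):

```shell
# Failing case: guest disk attached via virtio
kvm -m 512 -drive file=karmic.img,if=virtio

# Control case: same image on emulated IDE; if this one boots,
# the regression is specific to virtio-blk
kvm -m 512 -drive file=karmic.img,if=ide
```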
<garymc> anyone know the command to uninstall?
<garymc> Im trying to uninstall or remove twinkle from the server
<garymc> i tried sudo remove twinkle
<garymc> it doesnt work
<virtualdisaster> garymc, sudo apt-get remove twinkle
<garymc> thanks
<virtualdisaster> garymc, i suggest you consult the handbook on items such as this
<garymc> ok sorry
<virtualdisaster> garymc, no biggie we all are learning :P
<virtualdisaster> i just learned how to "pin" packages
<garymc> :)
<frojnd> ilowe: yes still with u
<frojnd> ilowe: I just went out to check on workers...
<ilowe> frojnd: OK, I have a bunch of debs for you
<frojnd> ilowe: can't wait :)
<ilowe> frojnd: where can I put them?
<frojnd> ilowe: rapidshare, firewire, u can put em on my server via ssh
<ilowe> frojnd: ssh is cool
<frojnd> ilowe: ok let me create user for ya
<mathiaz> kirkland: I'll give it a try later today
<cjwatson> cPF: I intend to look into that, but only in a few days once I've done the feature work I have to get done first for karmic
<cPF> cjwatson: nice to know.. but just guess what, it started working normally after aptitude reinstall openssh-server :o
<cjwatson> thanks for erasing the evidence ;-)
<josephpiche> I was wondering if I could have someone look at ubuntu bug 396632. I filed it because I feel like I should be able to run `service lighttpd status` as an unprivileged user since I can run `service mysql status` unprivileged, but I'm not familiar enough with init scripts to know what to do
<uvirtbot> Launchpad bug 396632 in lighttpd "lighttpd init file should not chown or chmod on status check" [Undecided,Incomplete] https://launchpad.net/bugs/396632
<cPF> maybe i'll reboot and see what happens
<cjwatson> *shrug* don't worry too much, I'm not going to be able to investigate today anyway and there's already a bunch of information on the bug
<cPF> i had it go away for a while in the past too
<thefish> anyone know the difference between an IDC insertion tool and a punchdown tool?
<cPF> but last time i thought it was related to bash or pam upgrades
<cjwatson> entirely possible
<cjwatson> it'll be something to do with the precise nature of the environment in which the daemon is started
<cjwatson> restarting the daemon in a different environment will make it go away
<cPF> i checked the signalmask bits before the miracle happened (as shown in the bug info) but they were all zeroes, as they should be
<cPF> hmm, that could be it
<aubre> well I got my machines racked and powered, I'm setting up the cloud/cluster controller now and will be setting up the node controllers shortly. Now just awaiting the networking folks to tell me how to lay out my private network and bridge and we'll be ready to start testing the cloud at Auburn University!
<aubre> hoping to help test and help debug the karmic UEC instances
<kinnaz> pacemaker ?
<kinnaz> or whats controlling them
<aubre> they aren't up just yet, I'm going to try a few things, maybe RightScale, Cohesive
<aubre> maybe I should look at pacemaker
<aubre> I'm formatting 1.5 tb of disk space so I am taking a lunch break
<cPF> cjwatson: it works after reboot no matter what i do... crazy
<cjwatson> cPF: sure, it's probably the automatic restart from networkmanager when new interfaces appear
<cjwatson> I doubt you needed to reinstall - just restarting the daemon from a console (not an ssh session) should have done the trick
<cPF> hmm, launching pptp vpn shouldn't trigger that?
<cjwatson> dunno tbh, sorry, no time to look now.
<cPF> ok, vpn doesn't seem to make a difference... i'm now pretty confident it's fixed by the reinstall
<rtg_> kirkland, 'sudo ecryptfs-setup-swap' gets me 'ERROR: Please'. Perhaps it should be "Please sir, may I have another?"
<rtg_> clean A4 install with full upgrade
<kirkland> rtg_: hrm
<kirkland> rtg_: it should say:
<kirkland> rtg_: Please install cryptsetup
<rtg_> kirkland, how come its not a depends?
<rtg_> rtg@xps1330:~$ sudo ecryptfs-setup-swap
<rtg_> dm_task_set_name: Device /dev/sda5 not found
<rtg_> Command failed
<rtg_> kind of scary.
<kirkland> rtg_: do you not have a /dev/sda5 ?
<kirkland> rtg_: it's not a Depends because ecryptfs doesn't actually depend on you having encrypted swap
<kirkland> rtg_: merely strongly recommended
<kirkland> rtg_: though, I think you're probably right ... I could probably, at this point, depend on it
<kirkland> rtg_: when the error message prints correctly, it should tell you exactly what you need to install to get it working
<rtg_> kirkland, it does have a /dev/sda5, and its marked as a swap partition.
<kirkland> rtg_: cat /proc/swaps
<rtg_> kirkland, nothing in it, but I answered yes to 'Do you want to proceed with encrypting your swap? [y/N]: y', and its been running awhile now.
<rtg_> INFO: Setting
<rtg_> WARNING: Commented out your unencrypted swap from /etc/fstab
<rtg_>  * Stopping remaining crypto disks... * cryptswap1 (stopped)... [ OK ]
<rtg_>  * Starting remaining crypto disks... * cryptswap1 (starting)
<kirkland> rtg_: right, i'm hanging there too
<kirkland> rtg_: only with the -7 kernel
<rtg_> kirkland, hmm, this is with a 2.6.31-7 kernel.
<kirkland> rtg_: with the -6 kernel, it starts fine
<kirkland> rtg_: let me strace that
<kirkland> rtg_: anything in dmesg interesting while you're hanging at *starting*
<rtg_> kirkland, just complaints about the various encryption engines not starting.
<kirkland> rtg_: still think those are red herrings ?
<rtg_> kirkland, yeah, but I'm gonna have to prove it.
<kirkland> rtg_: if you reboot at this point, you will hang there, waiting for cryptdisks to finish
<rtg_> kirkland, think I'll go get some brain food before wrecking this thing.
<kirkland> rtg_: it's hanging on the cryptsetup call
<kirkland> rtg_: i'll try to get an strace
<SockPants> hello
<SockPants> i have an older machine to which i can't connect a cd-drive or external harddrive, and it won't boot from USB.
<giovani> SockPants: ok, so?
<sgsax> SockPants: got a floppy drive?
<giovani> PXE boot, or floppy
<SockPants> i installed ubuntu server 8.04 on a virtual machine on my mac, and copied the virtual drive to the older machine's internal drive using a usb adapter and DD
<giovani> it has to have ONE of the above
<SockPants> that seems to work
<SockPants> the system boots
<SockPants> but now it doesn't detect the computer's NIC
<giovani> yeah, I wouldn't have done that
<SockPants> is there any way i can make it re-scan stuff like that
<SockPants> its the only thing that really matters, the rest seems to work already
<SockPants> its listed in lspci
<SockPants> i just dont know how to make it work
<giovani> SockPants: do you know that it's a supported chipset?
<giovani> i.e. what's the driver that it should be using?
<SockPants> ummm
<SockPants> its a 3com 3c905c-tx/tx-m
<giovani> umm ... find the driver name
<SockPants> "Works fine with the standard 3c59x kernel driver
<SockPants> "
<giovani> ok, so try modprobing that driver
<giovani> and see if it works
<SockPants> so
<SockPants> in modprobe
<SockPants> it lists the 3c59x.ko
<sgsax> SockPants: that's right
<sgsax> now you should be able to do "ifconfig -a" and see eth0 listed
<sgsax> it's probably not configured, but should be listed
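The sequence giovani and sgsax describe, gathered into a sketch (needs root; the driver name comes from the 3Com documentation quoted above):

```shell
sudo modprobe 3c59x   # load the driver for the 3c905C-TX/TX-M
ifconfig -a           # the card should now be listed, even if unconfigured
dmesg | tail          # any probe errors from the driver land here
```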
<kirkland> rtg_: do you see any crypt changes in the diff between -6 and -7
<rtg_> kirkland, I surveyed commits yesterday, but none (of the several hundred) leapt out at me.
<SockPants> it lists eth1
<SockPants> and lo
<SockPants> but no eth0
<SockPants> that would probably be the virtual one then
<SockPants> that isnt present now
<SockPants> right?
<kirkland> rtg_: is it worth building a few and bisecting?
<sgsax> ok, so udev has probably reserved eth0 for whatever was in the box you copied the drive image from
<SockPants> ok
<sgsax> you need to edit /etc/udev/rules.d/70-persistent-net.rules
<rtg_> kirkland, well, I'm starting to look at the code a bit. kcryptd seems to be active, so I wanna figure out what its doing
<sgsax> remove any lines referring to the old interface
<sgsax> then when you reboot, udev should discover the new card and update that file for you
<kirkland> rtg_: k
<rtg_> kirkland, cryptsetup is running 100%, so its likely something there.
<kirkland> rtg_: you can strace -p $PID to see what it's doing
<sgsax> alternative to a full reboot, restarting the udev service may be sufficient, but I can't say for sure
<SockPants> ok, i'll reboot and see
<SockPants> there was already a line for the other nic in that file though
<SockPants> should i have deleted that too?
<sgsax> yeah, it was probably eth1
<SockPants> yeah
<SockPants> should i have deleted both or kept the eth1
<sgsax> you can probably just change it to eth0, but you'll have to either reboot again, or probably just "rmmod 3c59x" and "modprobe 3c59x" again
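For context, entries in /etc/udev/rules.d/70-persistent-net.rules look roughly like this (the MAC addresses here are made up):

```
# NIC inherited from the VM image -- the line to delete:
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:0c:29:aa:bb:cc", KERNEL=="eth*", NAME="eth0"
# The 3Com card; change NAME to "eth0" as sgsax suggests:
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:04:76:dd:ee:ff", KERNEL=="eth*", NAME="eth1"
```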
<uvirtbot> New bug: #419398 in samba (main) "package samba-common 2:3.3.2-1ubuntu3.1 failed to install/upgrade: post-installation subprocess script returned error value 255" [Undecided,New] https://launchpad.net/bugs/419398
<SockPants> and all is good
<SockPants> :D thanks
<SockPants> now, how can i change the system time
<SockPants> oh, no need
<SockPants> haha, anyway, great :)
<guntbert> SockPants: have you seen https://help.ubuntu.com/9.04/serverguide/C/NTP.html ?
<guntbert> and he left... :-)
<sgsax> giovani: sorry if I stomped on you there, I just jumped in without looking at the scrollback
<kirkland> mathiaz: fyi, i can boot karmic guest on virtio okay
<kirkland> mathiaz: using kvm directly (no libvirt)
<giovani> sgsax: not at all -- I come and go -- being at work and such -- I appreciate the assistance :)
<soren> kirkland, mathiaz: What's the problem you're discussing?
<mathiaz> soren: after upgrading to 2.6.31-7 today my vms no longer boot
<uvirtbot> New bug: #419400 in mysql-dfsg-5.1 (main) "[Karmic] MySQL security problem" [Undecided,New] https://launchpad.net/bugs/419400
<mathiaz> soren: the block device is not recognized by the kernel anymore
<soren> mathiaz: In the guest?
<mathiaz> soren: yes
<soren> mathiaz: How far do you get?
<mathiaz> soren: booting the guest with 2.6.31-6 works correctly
<mathiaz> soren: I get dropped to the ramdisk
<mathiaz> soren: with a message stating that /dev/by-uuid/kXXXX is not found
<kirkland> soren: i'm also having block device issues with the new -7 kernel, specifically encrypted swap is busted, won't boot
 * soren looks around
<soren> mathiaz: Which of the virtio modules do you have in the initramfs?
<mathiaz> soren: how can I tell?
<rtg_> soren, kirkland: I'm installing mainline -rc7 just to make sure, then I'm gonna have to start bisecting (I think). It happens on bare metal as well.
<soren> mathiaz: find /lib/modules/ -name 'virtio*'
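Spelled out, the check covers both places the module could be missing (kernel version taken from this log; lsinitramfs ships with initramfs-tools):

```shell
# Is the module in the installed module tree at all?
find /lib/modules/2.6.31-7-server -name 'virtio*'

# Did it make it into the initramfs the guest actually boots from?
lsinitramfs /boot/initrd.img-2.6.31-7-server | grep virtio
```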
<kirkland> rtg_: yes, i saw it first on my laptop, bare metal
<soren> device mapper problems?
<mathiaz> soren: nothing
<soren> mathiaz: Umm... Ok.
<soren> mathiaz: There's your problem :)
<mathiaz> soren: http://people.canonical.com/~mathiaz/karmic-2.6.31-7.fail.png
<soren> mathiaz: uname -a
<mathiaz> soren: refresh the image above
<rtg_> kirkland, when you boot -rc7, do you get _any_ swap device?
<kirkland> rtg_: it never finishes the boot
<kirkland> rtg_: or drops to busybox
<kirkland> rtg_: it just hangs on startup, trying to cryptsetup the device
<soren> mathiaz: ta
<soren> mathiaz: Is this a freshly installed system or an upgraded one?
<rtg_> kirkland, I'm not getting a swap device on an unencrypted platform.
<mathiaz> soren: upgraded one
<soren> mathiaz: Alright.
<kirkland> rtg_: do you have a swap partition?
<mathiaz> soren: 2.6.31-6 is booting correctly
<kirkland> rtg_: a non-encrypted swap?
<rtg_> kirkland, used to
<rtg_> trying -rc6 next
<kirkland> rtg_: right, ecryptfs-setup-swap converted your swap to be encrypted to protect your data
<soren> mathiaz: Can you boot into the 2.6.31-6 and run a command for me?
<mathiaz> soren: sure
<kirkland> rtg_: if you want a non-encrypted swap, you'll need to comment out one line from /etc/fstab and /etc/crypttab
<rtg_> kirkland, did that already
<soren> mathiaz: find /lib/modules/2.6.31-{6,7}-* -name 'virtio*'
<kirkland> rtg_: and then edit /etc/fstab, copying the line you commented out, and changing the mount point
<kirkland> rtg_: to be the /dev/sda5 device, or whatever
<kirkland> rtg_: you'll also need to mkswap /dev/sda5
<kirkland> rtg_: and swapon /dev/sda5
<kirkland> rtg_: after that, you should have swap in /proc/swaps
<kirkland> rtg_: on reboot, then, you should have working cleartext swap at boot
<kirkland> rtg_: i think mkswap was the key bit you're missing
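kirkland's recipe, collected in one place as a sketch (run as root; /dev/sda5 is the example device from this exchange, and the crypttab/fstab edits are manual):

```shell
# 1. In /etc/crypttab, comment out the cryptswap1 line.
# 2. In /etc/fstab, comment out the /dev/mapper/cryptswap1 entry and
#    add a plain one instead, e.g.:
#       /dev/sda5  none  swap  sw  0  0
mkswap /dev/sda5    # rewrite the plain swap signature (the missing step)
swapon /dev/sda5
cat /proc/swaps     # /dev/sda5 should now be listed
```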
<mathiaz> soren: http://paste.ubuntu.com/260002/
<mathiaz> soren: seems like the virtio modules are not included in -7
<soren> mathiaz: Seems like it. i386 or amd64?
<soren> amd64.
<mathiaz> soren: amd64
<rtg_> mathiaz, debian.master/config/config.common.ubuntu:CONFIG_VIRTIO_BLK=m. I wonder where they all went?
<soren> The build log also says it's there.
<rtg_> soren, I have it in the generic image, just about to look in -server
<soren> I'm looking at the build logs from launchpad.
<rtg_> /lib/modules/2.6.31-7-server/kernel/drivers/block/virtio_blk.ko
<soren> the virtio modules are clearly listed as included in linux-image-2.6.31-7-server_2.6.31-7.27_amd64.deb:.
<soren> mathiaz: Oh!
<soren> Hang on, I have a hunch.
<soren> Yes, got it.
<camilojd> Hello all..  anyone know the best way to install postgresql 8.2 on jaunty server? i cannot use my app with 8.3
<soren> You guys are using the -virtual kernel, not the -server one.
<mathiaz> soren: yes
<soren> That image does not have the virtio modules anymore.
<rtg_> ah, did I drop some stuff from virt?
<soren> rtg_: Lots of stuff, apparently.
<rtg_> soren, it must have moved
<soren> rtg_: It doesn't look like it.
<soren> You said:
<soren> /lib/modules/2.6.31-7-server/kernel/drivers/block/virtio_blk.ko
<soren> I have:
<soren> /lib/modules/2.6.31-6-generic/kernel/drivers/block/virtio_blk.ko
<rtg_> soren, digging...
<camilojd> Jaunty server ships with PostgreSQL 8.3, which breaks my application. What is the best way to replace it with 8.2?
<soren> rtg_: Found it.
<rtg_> soren, wtf ?
<rtg_> my local builds are fine.
<soren> rtg_: ata_generic no longer exists.
<soren> rtg_: ...so it bails out when it gets to that one.
<soren> (it's explicitly listed in virtual.list)
<rtg_> soren, why on the buildd and not locally?
<soren> rtg_: Do you have build logs?
<Daviey> camilojd: use Ubuntu Hardy server
<rtg_> soren, no, but I can re-run a build quickly enough and make some logs
<soren> rtg_: Really? Launchpad takes three hours to do it :)
<rtg_> soren, 10-15 minutes
<soren> s/Launchpad/the buildds/
<soren> What's your secret?
<rtg_> dual quad-core nehalem w/18GB RAM
<rtg_> soren, ^^
<soren> So no cheating involved? No ccache or something?
<camilojd> Daviey, isn't there a better way? Like, recompile from sources?... I don't want to trash someone else's job on the server..
<rtg_> soren, well, of course I'm using ccache. Its much faster the 2nd time though
<soren> rtg_: Wow.
<soren> Well, let's see those build logs, then :)
<camilojd> Daviey, what's the "ubuntu way" to build and install from source?
<rtg_> soren, ok, build started. I'll see what I kind find.
<soren> rtg_: You don't have a stale ata_generic.ko lying around or something, do you?
<Daviey> camilojd: well sure, but it's already in Hardy - which is also an LTS..  Sure you could try and bring the old version into Jaunty.. but then you have the burden of maintaining it yourself.
<rtg_> soren, I typically scrub and re-clone
<soren> Yeah, I figured. *shrug*
<camilojd> Daviey, yeah i understand. Gotta go back to Hardy LTS then :-(
<guntbert> camilojd: not exactly what you asked for: but you could have a look at http://www.postgresql.org/docs/8.3/static/release-8-3.html to see why it breaks your app - and then change the app ...
<camilojd> guntbert: that looks interesting. I'll check it out!
<soren> rtg_: I totally understand why it fails. I completely don't understand why -6 didn't.
<guntbert> camilojd: good luck :-)
<rtg_> soren, it doesn't look like it stopped, even though it couldn't find drivers/ata/ata_generic.ko
 * soren has a hunch
<soren> rtg_: Is your system completely up-to-date?
<rtg_> soren, as of this morning
<soren> rtg_: Which version of bash?
<rtg_> GNU bash, version 4.0.28(1)-release (x86_64-pc-linux-gnu)
<soren> New bash was uploaded the day before yesterday. This is in the changelog:
<soren> l.  Changed behavior of shell when -e option is in effect to reflect consensus of Posix shell standardization working group.
<soren> Are you building in an sbuild or something?
<rtg_> soren, I should have thought of that. I found a find-utils bug early in the karmic cycle that took days to spot.
<soren> rtg_: I remember :)
<soren> rtg_: This time, though, it seems to be an intended change.
<rtg_> soren, I'm building in a straight chroot with dbuild
<soren> rtg_: And is /that/ completely up-to-date?
<rtg_> lemme check that the chroots are up to date
<soren> New bash on the 24th.
<rtg_> soren, check this out: GNU bash, version 3.2.48(1)-release (x86_64-pc-linux-gnu)
<soren> There were go.
<rtg_> thats my chroot version
<soren> There /we/ go, I mean.
<rtg_> ok, I'll update and see what happens.
<soren> I'm sure it'll fail. It makes sense :)
<soren> There's a non-zero return code in a subshell of a shell with -e enabled.
<rtg_> soren, thats a theory, or you have spotted it?
<soren> I've spotted it.
<soren> 20 minutes ago :)
<rtg_> in scripts/sub-flavour
<soren> The sub-flavour script has -e enabled. It fails ..
<soren> right.
<soren> ..to find ata_generic in a subshell.
<rtg_> remind me what -e does?
<soren> It bails out if anything has a non-zero return code.
<rtg_> I don't think thats what I want in this case.
<soren> Perhaps.
<soren> Well...
<soren> I guess what you really want is for the entire build to fail if this fails.
<soren> ...so that you'll notice that the module list is out of date.
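A minimal reproduction of what soren is describing (an illustrative script, not the actual sub-flavour code): under `set -e`, the first non-zero status aborts the shell, even when it comes from a subshell.

```shell
# Run a small script under set -e and capture what it manages to print.
out=$(bash -c '
set -e
echo before
(exit 1)     # stands in for the find that fails to locate ata_generic.ko
echo after   # never reached: set -e aborts on the non-zero status
')
rc=$?
echo "output=$out rc=$rc"
```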
 * soren takes a half hour break
<rtg_> soren, I guess thats fine too. why doesn't the make bail out?
<ewook> soren: in the middle of the night? :)
<martinjh99> How do I stop denyhosts from denying 192.168.0.0/24 ip addresses and how do I get into my server again..?
<KillMeNow> ummm console?
<sgsax> martinjh99: login as a different user
<sgsax> denyhosts blocks per IP *and* userid
<martinjh99> Killmenow I might have to do that
<KillMeNow> sgsax, i think he just did a block deny on class C IP range
<sgsax> then you have to delete lines from the denyhosts database files
<KillMeNow> which locks anyone from that IP subnet
<sgsax> didn't even realize you could do that, I just use it to block brute-force ssh attacks
<martinjh99> Killmenow got a report saying it just blocked my local network desktop ip 192.168.1.2
<martinjh99> I want to stop it from denying those ips...
<aubre> I have a dhcp server, its clients don't seem to be able to talk to the outside world
<KillMeNow> aubre:  did you set the router option in DHCP?
<sgsax> martinjh99: I can tell you what to do if it was a dynamic block, but if you specified a block on all IPs in that subnet (as KillMeNow suggested), you'll have to undo the setting somehow
<martinjh99> sax its only blocked 192.168.1.2 - Found a page that tells me I can keep denyhosts from blocking it by putting that ip in hosts.allow
<sgsax> that's fine, but you'll still need to remove it from the current database files
<sgsax> and hosts.deny
<martinjh99> thats what I'm going to do... Which other denyhosts files are there?
<sgsax> default work dir is /var/lib/denyhosts
<sgsax> check in your denyhosts.conf to see if yours is in a different location
<sgsax> remove any lines containing that IP in any files in the work dir
<sgsax> be sure to stop the denyhosts service before making changes to these files
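sgsax's cleanup, sketched as a script. It is demonstrated on a throwaway directory with made-up entries; on a real box WORKDIR would be /var/lib/denyhosts (plus /etc/hosts.deny), and you would `sudo service denyhosts stop` first.

```shell
IP="192.168.1.2"
WORKDIR=$(mktemp -d)    # stand-in for /var/lib/denyhosts

# Fake a couple of denyhosts data files containing the blocked IP
printf 'sshd: 10.0.0.5\nsshd: 192.168.1.2\n' > "$WORKDIR/hosts.deny"
printf '192.168.1.2:5\n203.0.113.9:12\n'     > "$WORKDIR/hosts"

# Strip every line mentioning the IP from every file in the work dir
for f in "$WORKDIR"/*; do
    grep -vF "$IP" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done

cat "$WORKDIR/hosts.deny"   # only the unrelated entry survives
```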
<martinjh99> Thanks Sax :) Will do that tomorrow!
<sgsax> have fun :)
<martinjh99> I will...
<martinjh99> :)
<sgsax> I've written a script to do this, I can post it if you're interested
<sgsax> ...or not
<qman__> what setup do you use for blocking SSH brute force attempts?
<qman__> I use iptables with the recent module to just slow them down
<KillMeNow> yea, i use IPtables and a counter
<KillMeNow> once it reaches X number of 22 connects, it locks them out for a while
<KillMeNow> totally blacklists their IP address
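The iptables "recent"-module approach qman__ and KillMeNow describe, as a sketch (needs root; the 4-hits-in-60-seconds threshold is illustrative):

```shell
# Track every new connection attempt to port 22...
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
         -m recent --set --name SSH
# ...and drop any source making 4 or more new connections within 60s
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
         -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP
```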
<uvirtbot> New bug: #419464 in ec2-api-tools (multiverse) "ec2-monitor-instances, ec2-unmonitor-instances fail with 'EC2_HOME not set'" [Undecided,New] https://launchpad.net/bugs/419464
<qman__> I'd like to come up with something that blocked anyone who attempted to use "Administrator" or "root" or "test", etc., automatically
<qman__> without a cron-based log parser
<giovani> so you have two log-checking methods
<giovani> either inode-notification, or a daemon
<giovani> I don't know which fail2ban uses
<giovani> I'd prefer to do it on the network level, rather than on the log level
<giovani> but that's just me
<giovani> denyhosts is another option
<giovani> ossec as well
<qman__> I'm not much of a coder, so a daemon might be too much
<qman__> but I'll look into inode notification
<rtg_> kirkland, so mainline -rc7 seems to work with encrypted swap. the substantive change that has likely caused this is 'SAUCE: (drop after 2.6.31) Added KSM from mmotm-2009-08-20-19-18' which is a bit of a change from -rc6.
<KillMeNow> i thought fail2ban parses the log file
<qman__> I don't have the performance to spare for cron-based log parsing, so it's out of the question, but something that parsed it as it logs would work
<kirkland> rtg_: okay, what does that patch do?
<giovani> qman__: well inode notification just saves you the i/o load of checking the file every X (mili)seconds
<giovani> I'm not sure why you think cron is a performance waster in and of itself
<rtg_> kirkland, its the virtual machine shared memory patch, but it may also have some impact on crypto. I'm gonna revert that and see
<qman__> well
<giovani> a daemon will keep memory allocated, when a cronjob wouldn't
<qman__> that's not what I meant, I meant that it has to reread the whole log
<giovani> no it doesn't
<giovani> only a fool would do that
<qman__> that's where the problem is
<giovani> that's unrelated to cron
<giovani> and related to how the app is coded
<kirkland> rtg_: cool, thanks
<giovani> you shouldn't be reading the entire log
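One way to avoid re-reading the whole log, as giovani says: remember the byte offset from the previous run and only read what was appended since. A self-contained sketch using temp files (a real version would keep the state file somewhere persistent):

```shell
log=$(mktemp)      # stand-in for /var/log/auth.log
state=$(mktemp)    # where the last-seen offset is stored

printf 'old entry 1\nold entry 2\n' > "$log"
stat -c %s "$log" > "$state"              # remember the current size

printf 'Failed password for root\n' >> "$log"   # new activity arrives

offset=$(cat "$state")
new=$(tail -c "+$((offset + 1))" "$log")  # read only the appended bytes
stat -c %s "$log" > "$state"              # advance the offset
echo "$new"
```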
<kirkland> rtg_: oh, duh
<kirkland> rtg_: yeah, KSM, right
<kirkland> rtg_: if we have to lose that one, i won't cry about
<rtg_> kirkland, does KSM in -rc6 work?
<kirkland> rtg_: good question, i haven't gotten around to it yet
<rtg_> kirkland, lemme verify first
<qman__> I just need something to intercept new entries like tail -f, only for a script, not to console output
<giovani> qman__: no ...
<giovani> tail -f is brutal on the disk
<giovani> that's the opposite of clean and efficient
<qman__> regardless of how it collects it, that's the data I need
<giovani> heh
<giovani> but you're very concerned about performance impact
<giovani> so I'm discussing the issues to reduce it
<qman__> yes
<qman__> the system is very old and slow, so performance is important
<kirkland> rtg_: i gotta run for a bit, will be back later
<giovani> then consider doing this on the network level
<rtg_> kirkland, me too, beer night.
<kirkland> rtg_: oh, that's a lot more fun
<rtg_> kirkland, biking, then beer.
<qman__> I have no idea how to intercept that data at the network level, since SSH is encrypted
<giovani> qman__: brute forces set up many different connections to SSH
<giovani> most IDS/IPS have rules for X number of connections per timeframe from a single host
<qman__> I already have that set up
<giovani> then what's the problem?
<qman__> I meant for triggering based on which usernames were used as well
<giovani> I don't see the need for that
<giovani> unless your system isn't catching a specific attack
<giovani> in which case you might want to adjust its threshhold
<qman__> well, it is catching them
<qman__> but it's really just slowing them down
<giovani> why isn't it stopping them? it should be blocking that IP at the firewall level
<qman__> and, if for any reason my iptables gets flushed, the flood gates open
<giovani> ... why are you flushing your iptables of your firewall?
<giovani> that's bad
<qman__> I'm not, but it's happened a few times
<nuckable> hey everyone, im working on a little ion based ubuntu server, and id really like to put ubuntu server on a usb flash drive and make it boot to ram
<giovani> alright, well, blocking a handful of countries (presuming you don't have a need to receive legit SSH connections from China, or Brazil, for example) will reduce a large percentage of the attacks
<giovani> the rest are mitigated through brute force detection
<nuckable> so i can fully use the hd space for the samba server
<nuckable> is that possible/smart?
<giovani> nuckable: that's a highly custom setup -- look into ramdisks
<giovani> it's possible ... smart is another matter
<nuckable> what speaks against it?
<giovani> livecds work this way
<giovani> it's messy to set up
<giovani> it's not standard
<nuckable> well sure its messy, but it stays after the initial setup
<qman__> and without considerable work, it's not persistent
<qman__> patching is difficult
<giovani> right
<nuckable> yeah, thats the biggest problem im having
<giovani> this is not really a way to run a server
<giovani> possibly an embedded device
<nuckable> when it loads into ram theres gotta be a way to change the files on the usb flash drive
<giovani> but not an active filesystem
<giovani> nuckable: sure ... many liveusb distros do this
<giovani> but it's complex
<nuckable> i see
<giovani> creating a rw filesystem for temp storage, and writing it back out to the ramdisk filesystem, etc
<giovani> it's not something I'd ever do on a server
<giovani> and it's not something we can really help you with here
<giovani> it's highly custom -- and will take a lot of experimentation
<nuckable> hmmmm
<nuckable> well what would you recommend then?
<qman__> if power consumption is the concern, low-capacity SSDs can be had reasonably cheap
<giovani> not doing that ...
<giovani> nuckable: what's the problem with keeping the OS on a disk?
<nuckable> qman__, that sounds interesting
<nuckable> for the ssd id need a pcie slot, or does pci work too?
<giovani> what?
<giovani> no, it's a drive
<qman__> SSDs connect to regular drive interfaces
<qman__> usually SATA
<giovani> SATA
<nuckable> oh :/
<nuckable> well the mobo only got 4 sata slots
<giovani> ...
<giovani> that sounds like more than enough
<nuckable> and id like to use it as a nas with extras
<giovani> oh boy
<giovani> another nas
<qman__> a 32/64GB disk can usually be had for about $80, though prices may have changed since I last looked
<qman__> does the board have IDE?
<giovani> if you have a pci slot
<nuckable> qman__, nope
<giovani> you can add a sata controller
<qman__> yeah
<giovani> with many more ports
<giovani> they can be had for cheap
<giovani> $20
<qman__> a four-port SATA controller, non-RAID, is about $40-60
<nuckable> im eyeing for the nvidia ion board
<qman__> more than four gets expensive
<nuckable> cause its got very low power consumption due to the absence of the intel chipset
<qman__> your largest power usage is going to be the disks themselves
<qman__> a SATA controller doesn't use much
<giovani> why is the power consumption so critical?
<nuckable> giovani, cause its gonna be running 24/7
<giovani> I mean, tons of machines will run on 50-60W these days
<giovani> sure ...
<qman__> unless you're on battery, a sata controller is going to be negligible
<giovani> Ion is definitely not the *most* efficient
<nuckable> qman__, its not the sata controller, more the case xD
<giovani> the case uses no power
<nuckable> giovani, but for the price it does the trick quite well
<nuckable> giovani, i was referring to the problem being there aint that much room in the case xD
<giovani> nuckable: well you can't have everything ...
<qman__> sata controllers aren't that big
<qman__> are you referring to space for disks?
<giovani> disks are about 10x the space of PCI cards
<nuckable> giovani, sure, at least i can try to get as good as possible no? =)
<giovani> nuckable: but you're being unrealistic
<nuckable> qman__, yup
 * virtualdisaster tries to remember the name of that ubuntu book that is free
<nuckable> giovani, so far i havent even decided anything, im just evaluating possibilities
<qman__> look into 2.5" hard drives then
<giovani> ok
<nuckable> so i cant be unrealistic yet xD
<qman__> you can fit a ton of them in the space
<giovani> qman__: not for a NAS ...
<giovani> clearly he's trying to maximize disk space
<giovani> for the NAS storage
<nuckable> exactly
<nuckable> 4 drives with as much space as possible
<giovani> so don't use such a tiny case
<qman__> yeah
<giovani> you need enough space for the system
<giovani> a SSD is small
<qman__> you need a bigger case
<giovani> as is a PCI card
<nuckable> true
<nuckable> im gonna have to recheck if the ion has a pci slot
<giovani> you can fit both in less space than you can a single 3.5" HD
<qman__> my file server is a large mid-tower
<nuckable> cause afaik it only has pcie
<giovani> nvidia ion is a chipset
<giovani> not a board
<nuckable> the boards name is ion something
<qman__> PCI or PCIe is irrelevant
<qman__> you can get controllers in either
<qman__> for roughly the same price
<qman__> you just need to know which you have
<qman__> also
<qman__> make sure you're providing adequate cooling for the hard drives
<qman__> they don't need much, but if you're not getting any air through, they will have a considerably shorter life
<giovani> heh
<giovani> that's debatable
<giovani> HDs have no air intakes at all
<giovani> heat is not an issue for most drives
<giovani> vibration is
<giovani> unless the heat you're talking about is 110+F
<qman__> it also depends on the drives
<nuckable> yeah im not penny-pinching with the hdd themselves
<qman__> in my experience, WD drives run very hot
<giovani> nuckable: this has nothing to do with drive costs
<qman__> this is about case airflow
<qman__> small cases tend to not have any
<nuckable> true
<nuckable> i guess a midi tower wouldnt hurt
<giovani> chenbro
<giovani> makes a really nice NAS case
 * nuckable looks it up
<qman__> I have some 250GB WD drives that have gotten to 50C before I installed more fans
<nuckable> qman__, but in general would you recommend wd?
<qman__> I like seagate
<nuckable> cause so far i had no problems with wd drives
<giovani> seagates are awesome
<giovani> I've hated WDs I've owned
<nuckable> i already had 2 seagates failing on me
<nuckable> during the years
<giovani> http://usa.chenbro.com/corporatesite/products_detail.php?sku=78
<giovani> there's the 4-drive case
<giovani> they have 2-drive cases as well
<bptk421> Has anyone tried WD's Green drives for a NAS?
<giovani> bptk421: they're awful, stay away
<giovani> green drives are very low-end
<bptk421> In what way?
<qman__> yeah
<giovani> in that we've had 50% of them fail in our fileserver
<giovani> over a 6 month period
<bptk421> ouch
<giovani> (24-drive fileserver)
<qman__> I would never buy a drive marketed that way
<qman__> performance and reliability are more important
<giovani> green drives are the lowest-end for WD
<nuckable> giovani, problem with that case is i wont be able to fit an ssd in it
<giovani> they're appropriate for grandma's internet machine
<giovani> nuckable: ... that's not true
<giovani> SSDs are small
<giovani> you can velcro it to the side of the case
<giovani> that's what I do
<nuckable> velcro?
<soren> rtg_: Did you find an answer to your question, or do you want me to look?
<giovani> yes ... velcro
<nuckable> (sorry not english native)
<giovani> they're light and small
<giovani> google it
<qman__> velcro, double sided tape, even drill your own mounting holes
<qman__> not up for a little modding?
<nuckable> oh lol
<giovani> velcro is the most portable imo
<rtg_> soren, which question? I found that the KSM patches in -rc7 are scrogging encrypted swap.
<nuckable> the stuff children use to fasten their shoes
<nuckable> xD
<giovani> easy to take the drive out
<qman__> yeah
<giovani> nuckable: yes, it's awesome stuff
<nuckable> hehe
<soren> rtg_: Why make doesn't bail out.
<qman__> velcro is a good idea, since SSDs don't care about shock or heat
<giovani> exactly
<giovani> and they're light
<soren> rtg_: KSM break encrypted swap? Sounds like fun.
<rtg_> soren, dunno yet, I'll have to get back to that question later tonight.
<giovani> I put velcro in every server I build now
<nuckable> im gonna have to check if chenbro is available in my area
<giovani> and stick the SSD on it
<soren> rtg_: Alright.
<giovani> nuckable: in your area? no
<giovani> you order online, from a reseller
<rtg_> soren, I'm on a beer mission right now.
<soren> rtg_: Sounds good. Wish I was too.
<nuckable> giovani, nope its available
<giovani> btw, nuckable
<rtg_> soren, why are you up so late?
<giovani> if you can only get a minipcie slot on your ion board
<soren> rtg_: feature freeze
<giovani> there are SSDs that fit into mini-pcie
<soren> rtg_: It's not that late yet, really, though.
<rtg_> 11P?
<soren> rtg_: 11 PM is not unusually late.
<sgsax> beer misson, I like that
<rtg_> ah, well I'm usually done by 2P
<sgsax> apt-get install beer
<soren> rtg_: When do you start?
<rtg_> 0600
<soren> rtg_: Ah. I don't start until some time between 7 and 9.
<rtg_> anyways, gots to go.
<soren> rtg_: And most of my team is 6 hours behind, so if I want to work just a little bit with them...
<nuckable> ok thank you very much giovani and qman__
<gene420> good evening everyone, and would anyone be familiar with setting up ubuntu snmp as a client .....I seem to need help with snmp.conf since snmpwalk works locally
<virtualdisaster> gene420, make sure firewall allows snmp out etc, also man there is something you need to run to get it to work properly
<virtualdisaster> there is a program to make messing with snmp simpler but i cant recall atm
<gene420> hmm I don't have a firewall open and I'm sure there is something missing since I don't see any netstat ports open 161 or services running for snmp
<gene420> opps that open shouldn't be there...no firewall on the unit.....
<gene420> fyi ahh here is the setup script to run snmpconf -g basic_setup
<kirkland> ScottK: ping
<kirkland> ScottK: i'm processing sync requests
<kirkland> ScottK: i see a stack of yours for new packages from debian
<kirkland> ScottK: new-source doesn't seem to know anything about these
<kirkland> ScottK: do you know what's up?
#ubuntu-server 2009-08-27
<clusty> hey. i am having some trouble getting LDAP based auth to work. i installed and imported data in the ldap server, installed on the client ldap-nss
<clusty> and configured it to the best of my knowledge, but getent still does not give out any new entries
<clusty> how can I check if it's a server issue or a client conf issue?
<clusty> rather stuck i gotta say :(
<LiraNuna> is there a way to run apache2 with mod_chroot and specify which user to drop privileges to per vhost?
<twb> clusty: you must use auth-client-config to enable the lac_example template
<clusty> twb, would you please tell me how to do it? :D
<twb> No.
<twb> You need to work from hints because I am too busy to hold your hand.  Sorry.
<clusty> twb, that profile does not exist
<clusty> twb, only: 'ldap_example', 'lac_ldap'
<clusty> that seems related
<twb> clusty: I can't remember which
<clusty> twb, if i enable both?
<qman__> how do I disable the automatic installation of 'recommended' packages? I can't seem to find what I'm looking for on google
<qman__> I have a package that wants to install X and a ton of deps, and I don't think it needs them
<qman__> figured it out, it actually does depend on X
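[For reference: the setting qman__ was hunting for does exist. A hedged sketch of an apt configuration snippet; the filename 99norecommends under /etc/apt/apt.conf.d/ is an arbitrary choice.]

```
# /etc/apt/apt.conf.d/99norecommends
# Stop apt from installing "Recommends" automatically; hard Depends
# (like the X dependency qman__ found) will still be pulled in.
APT::Install-Recommends "false";
```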
<ScottK> kirkland: I think now I know better than to rely on packages.qa.debian.org for Ubuntu status of a package.  Sorry for the trouble.
<twb> They're m
<twb> Bleh, network problems here
<twb> qman__: disable it in aptitude?
<clusty> twb, i got the two parts to talk to each other nicely (including PAM). now i got the problem that the passwords in the LDAP db are bad (all set to {crypt}x)
<clusty> if i remember correctly those are not good for logging in
<twb> clusty: there's a filthy hack in /etc/passwd
<twb> clusty: if the hash isn't valid, then there are no passwords which will match it
<clusty> any clue how can i replace the passwd info from LDAP with the right stuff from /etc/passwd?
<twb> clusty: I would not advise following that hack in LDAP
<clusty> twb, what would you suggest i do?
<twb> Lock the account with the appropriate attribute
<twb> I don't remember which it is offhand
<clusty> twb, no no. i want accounts to be loggable. the import ldiff file must have been bad
<clusty> ldiff was generated by migrationtools
<twb> Shrug
<clusty> i want all in LDAP to actyually be able to log in
<twb> I haven't had to migrate users, so I don't know
<twb> Oh, I see what happened
<twb> In passwd, if the password is x it's because you're using shadow passwords
<twb> You need to teach migrationtools to get the password from /etc/shadow
<clusty> ohhhh gosh
<clusty> so i should be able to log in with x D:?
<twb> It's probably just a CLI option
<twb> clusty: no, not at all.
<twb> clusty: in /etc/passwd, x has special meaning.  It doesn't in LDAP AFAIK.
<clusty> got it. lemme look at migrationtools
<twb> x is a hack because passwd is a file and you can't restrict read access to individual fields in a file, like you can in LDAP.  Hence /etc/shadow.
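[To make twb's point concrete, here are illustrative entries; the user name, hash, and dates are made up.]

```
# /etc/passwd: the second field is just "x", meaning "the real password
# is in /etc/shadow" -- it is NOT a literal password of "x"
alice:x:1000:1000:Alice Example:/home/alice:/bin/bash

# /etc/shadow: readable by root only; the real crypt(3) hash lives here,
# which is what migrationtools needs to export into LDAP
alice:$6$examplesalt$examplehash:14845:0:99999:7:::
```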
<clusty> twb, but you would not know how I can replace the stuff ?
<twb> clusty: no
<twb> Rather: I do not know.
<clusty> thanks a bunch
<uvirtbot> New bug: #346875 in unixodbc (main) "package unixodbc 2.2.11-16build2 failed to install/upgrade: package unixodbc is already installed and configured" [Undecided,New] https://launchpad.net/bugs/346875
<\sh> hmm...why don't we have in our heartbeat package the iscsi resource.d script? looks like it's gone missing or was never there
<ivoks> which version?
<ivoks> in karmic heartbeat is just a part of pacemaker
<\sh> ivoks: jaunty heartbeat 2.1.4-2
<\sh> dpkg -L heartbeat|grep resource.d -> no iscsi resource script ;)
<\sh> oh wait
<\sh> it's in /usr/lib/ocf/resource.d/ not in /etc/ha.d/resource.d
<\sh> ivoks: do you think that this ocf powered shell script can be used in a heartbeat 1 haresource configuration?
<ivoks> don't know :/
<a_ok> why do i get these errors from /etc/cron.daily/apt: No value set for `/system/http_proxy/use_http_proxy'
<cjwatson> soren: where would be a good place on the filesystem to put the node preseed file generated at cluster installation time?
<mattt> anyone here using LDAP to authenticate system users?
<a_ok> how can I see what package pulled in gconf2?
<alvin_> a_ok: aptitude why gconf2
<a_ok> alvin_: thank you
<soren> cjwatson: hmm.. Does it have any information that the user might want to edit?
<cjwatson> it's possible, yes
<soren> cjwatson: In that case, I think I'd stick it in /etc/eucalyptus somewhere.
<cjwatson> ok
<cjwatson> /etc/eucalyptus/node-preseed.cfg?
<soren> cjwatson: Sounds good.
<cjwatson> or I suppose .conf is more in line with everything else there
<cjwatson> I was then planning to add a <Location/> section to httpd-cc.conf to make /node.preseed point to that
<soren> I was thinking of using an Alias, but that's just a matter of style.
<cjwatson> soren: yeah, I realised that Alias was better myself, and then came back and read your comment :)
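[A rough sketch of the Alias approach cjwatson and soren settle on for httpd-cc.conf; the access directives are illustrative Apache 2.2 style and not from the discussion.]

```
# Expose the generated node preseed file at /node.preseed
Alias /node.preseed /etc/eucalyptus/node-preseed.conf
<Location /node.preseed>
    Order allow,deny
    Allow from all
</Location>
```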
<cjwatson> soren: lp:~cjwatson/eucalyptus/installer ?
<stefan___> n
<soren> cjwatson: Sorry, I'm sorting out a trainwreck that demands my full concentration. Give me 20 minutes.
<soren> cjwatson: Looks good (I had a spare few minutes waiting for a build to complete).
<soren> cjwatson: Feel free to upload. It will ftbfs, though, due to aforementioned trainwreck.
<cjwatson> soren: ok. not tested at all anyway. I'm still working on some of the preseeding details
<cjwatson> soren: I've done the bit to stick the cluster's public key on each node. I think the last major remaining bit is the code to discover avahi-advertised nodes
<cjwatson> soren: any thoughts on the interface? euca_conf --register-nodes already exists - maybe if it's given no nodes it should fall back to avahi? or should it be a different option?
<rtg_> kirkland, are there any userspace packages that take advantage of KSM yet? I'm considering ripping it out completely.
<cjwatson> soren: (this is all for when you have a moment) I'm having real difficulty setting up a proper test environment for this, though. I only really have one kvm-capable machine and it's a bit low on memory, so setting up VMs takes ages. I heard you have a nice quick server-testing setup ...?
<soren> rtg_: kvm does.
<mattt> trying to understand if libpam-ldap is actually required to authenticate ALL logins against LDAP
<rtg_> soren, did it work OK with -rc6 ?
<soren> rtg_: It's hard to tell.
<garymc> anyone know how I could in stall xlite softphone?
<soren> rtg_: I don't know how to tell (other than loading up a bunch of identical VMs and see how it affects the total memory usage.
<soren> cjwatson: I do. It won't work for this, though.
<soren> cjwatson: How about euca_conf --discover-nodes?
<rtg_> soren, ok, perhaps I'll restore KSM to the way it was for -rc6
<soren> rtg_: I'm curious how it manages to break encrypted swap.
<garymc> ive tried sudo apt-get install x-lite
<cjwatson> soren: oh, are your VMs not bridged or something?
<rtg_> soren, its too deep for me to figure out in a short while, but the KSM stuff reaches into the VM, so it seems likely.
<cjwatson> soren: --discover-nodes would be OK
<soren> cjwatson: They are, but the server-testing setup won't help you. It's just good for testing a whole bunch of installs really quickly.
<soren> rtg_: I'm just surprised encrypted swap breaks, but regular swap doesn't, /because/ it's in the VM layer. I would have thought the vm layer was agnostic as to whether or not the underlying swap device was encrypted or not.
<cjwatson> soren: well, maybe I didn't mean the actual server-testing setup, I was wondering if you had machines beefy enough to be able to test this stuff before the heat death of the universe
<soren> cjwatson: I suppose I do.
<soren> cjwatson: They're just busy building stuff for me these days.
<cjwatson> I'll see what I can do, but I've tended to end up doing yak-shaving to try to cobble together enough disk space and such when I've tried ...
<soren> cjwatson: I don't actually need the vm extensions to do the install, I suppose, so if you have other hardware for testing, you can just use that?
<soren> cjwatson: Err... "/You/ don't actually need" etc.
<cjwatson> not very conveniently :-( I keep having too much to do to maintain the computers around here properly ...
<cjwatson> I'll see what I can do but don't hold out a lot of hope just now
<soren> Alright.
<cjwatson> hmm, the iMac over <- there has the VM extensions, that might help
<a_ok> i have a jaunty server that is very slow at establishing incoming connections but only from an external source. listing arp is very slow as well. what could be wrong here?
<a_ok> once connection is established its fast though
<cjwatson> a_ok: usually slow or broken reverse DNS
<a_ok> cjwatson: but it uses the same DNS server etc as the rest
<cjwatson> just a guess to point you in the right direction, afraid I don't have time to dig into details
<a_ok> cjwatson: besides arp -a should still show entries almost on the fly regardless of dns
<soren> a_ok: No.
<soren> a_ok: arp also tries to resolve IP addresses to host names.
<a_ok> soren: ?
<soren> a_ok: Try: arp a-n
<soren> Err..
<soren> arp -an
<soren> -n tells arp to not attempt DNS resolution.
<a_ok> ahh
<a_ok> brb
<ewook> a_ok: is it running apache?
<ewook> lol
<ewook> oh, wrong person.
<ewook> a_ok: disable the lookup in apache and it will run faster.
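[The Apache directive ewook is referring to; a hedged sketch, and note that "Off" has long been the Apache default, so this only matters if it was enabled.]

```
# /etc/apache2/apache2.conf (or a vhost config)
# With this On, Apache does a reverse DNS lookup per request for logging,
# which is the slowdown being described.
HostnameLookups Off
```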
<alvin_> What's the alternative for XDMCP nowadays?
<a_ok> ewook: nope tomcat, ssh, mysql
<ewook> well, is tomcat doing lookups on connections?
<a_ok> ewook: don't know but will check if arp -an is fast in a min
<henkjan> alvin_: http://en.wikipedia.org/wiki/NX_technology
<alvin_> NX is cool. (I'm using it already), but I'm really looking for (more or less) the same functionality as XDMCP. Give multiple users a desktop environment. Server and clients are Ubuntu.
<alvin_> We're migrating from Solaris CDE desktops to Linux KDE. People are used to XDMCP, but I've noticed  it's hard to set it up on Ubuntu Server.
<henkjan> alvin_: check the ltsp-server project
<soren> alvin_: Really? I find it extremely straightforward.
<alvin_> henkjan: ltsp is another cool project, but we will not be using diskless clients. (A mix of Ubuntu clients (well, X servers...) and Exceed on Windows)
<soren> alvin_: Find the [xdmcp] section in your gdm.conf. Add "enable=true". Done.
<soren> alvin_: What's the problem you're running into?
<alvin_> soren: Well, there are several. We're not using gdm, but kdm (Kubuntu) or just X. I'd like to be able to use something like 'exec X -indirect {name-of-server}'
<soren> alvin_: I don't know how kdm does it. If it helps, you can easily use gdm even though the desktop session will run KDE.
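[soren's two-line recipe as a hedged sketch; the file path and key casing vary by gdm version (on Ubuntu of this era the file is typically /etc/gdm/gdm.conf or gdm.conf-custom).]

```
[xdmcp]
Enable=true
```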
<VirtualDisaster> alvin_, freeNX?
<VirtualDisaster> sounds better imo
<alvin_> VirtualDisaster: NX is not an option. They suffer from a bug which crashes TK windows. Some of our software is TK based.
<alvin_> Thanks soren. I'll do a few tests
<VirtualDisaster> alvin_, oic, well good luck
<ScottK> alvin_: You might ask Riddell in #kubuntu-devel.  I think he knows about such stuff too.
<VirtualDisaster> alvin_, btw thx for mentioning Exceed, never heard of it before
<alvin_> ScottK: Thanks, I will
<alvin_> VirtualDisaster: Well, it's costly, but has a lot of functions and works very good
<VirtualDisaster> yeah
<uvirtbot> New bug: #419919 in libapache-mod-auth-kerb (universe) "Segfault when using apache2-mpm-worker." [Undecided,New] https://launchpad.net/bugs/419919
<a_ok> alvin_: so how do I fix reverse dns stuff?
<a_ok> its just a test machine in a localnetwork so no need for DNS stuff at all
<henkjan> alvin_: ssh -X also works great
<alvin_> henkjan: yes, but only for single applications. I really want a full desktop (even dual screen)
<alvin_> In the meantime, it works. I had forgotten that I had these problems before: (Bug 86021)
<uvirtbot> Launchpad bug 86021 in libxdmcp "XDMCP does not work without reverse dns" [Low,Confirmed] https://launchpad.net/bugs/86021
<garymc> when i ssh to my other server connected to my ethernet switch, how do i bring up a terminal window for that server? im in ubuntu 9.04 LTSP
<garymc> nevermind done it
<hattoricaca1> hello i have a firewall with filter policy DROP on INPUT and FORWARD. Would i do better dropping in PREROUTING-nat? (as the packets don't have to travel all prerouting, conntrack, routing and most of the input/forward chains to be discarded)
<aubre> is there some more definitive guidance on how to set up Eucalyptus on jaunty? The community Eucalyptus documentation leaves me with a lot of questions.
<giovani> by "the community documentation" do you mean help.ubuntu.com?
<giovani> aubre: *
<aubre> giovani: yes
<aubre> I think I need to step back , reinstall and follow the instructions more closely
<BrixSat_> Hello how can i view the resources used by a process?
<clusty> BrixSat_, like ram/cputime... ?
<BrixSat_> clusty yes
<clusty> BrixSat_, i use htop for that
<clusty> you have to install it
<clusty> top is an ugly version
<BrixSat_> ok ;)
<BrixSat_> do yo know any free shell?
<monteith> my connection to proftpd times out/fails to retrieve a directory listing while trying to connect
<monteith> anybody know what config setting i likely missed?
<monteith> Aug 27 15:41:31 fiveunder proftpd[8104] fiveunder (c-24-118-215-40.hsd1.mn.comcast.net[::ffff:24.118.215.40]): Passive data transfer failed, possibly due to network issues
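[Passive-mode failures like this are commonly NAT/firewall related: the server advertises a data port the client can't reach. A hedged proftpd.conf sketch; the port range and IP are placeholders, and both directives need matching firewall rules.]

```
# /etc/proftpd/proftpd.conf
PassivePorts 49152 50000        # pick a range and open it on the firewall
MasqueradeAddress 203.0.113.7   # public IP, if the server sits behind NAT
```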
<clusty> hey
<clusty> is there some guide how to get NFS homes working?
<sgsax> clusty: many ways to do that
<sgsax> we use the automount daemon
<clusty> sgsax, so i just managed to get auth working with ldap
<sgsax> congrats, that can be a pain
<clusty> sgsax, is it that the mount is a post-auth step and unmount is delog?
<clusty> or the mounts are persistent?
<sgsax> amd leaves the share mounted for a period after it is no longer needed, I don't know for sure what the timeout is
<sgsax> honestly, this was setup before I got here, so I don't know what the reasoning was insted of just nfs-mounting /home directly
<clusty> sgsax, ok. lemme set the server to export the /home and test by hand. will come back with questions after that :D
<sgsax> sure, that's easy enough :)
<sgsax> you can skip amd by just adding the nfs mountpoint in your /etc/fstab
<clusty> do i need to worry about permissions over nfs?
<sgsax> nope, they are carried forward
<sgsax> uids/gids are looked up in ldap
<clusty> great
<clusty> sgsax, you have any suggestion for rsize and wsize or however that stuff is called?
<sgsax> clusty: nope, I don't set those values, so it's using whatever the default is
<clusty> sgsax, hard or soft mount?
<sgsax> I think hard is the default
<sgsax> I don't specify this either
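[The fstab route sgsax mentions would look something like this; the server name and options are illustrative, with hard mounting shown explicitly since that was the question.]

```
# /etc/fstab -- mount home directories from the NFS server at boot
# (hard = retry forever on server outage; intr = allow interrupting waits)
fileserver:/home  /home  nfs  rw,hard,intr  0  0
```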
<clusty> sgsax, is there any advantage to have automount do it's thing?
<giovani> aubre: the instructions seem very adequate to me -- they're for getting it installed, they're not meant to be a howto on how to use it in general -- the official project docs are for that
<aubre> giovani: I'm making progress now, thanks
<aubre> I may raise some bugs later though
<sgsax> like I said, I don't know what the justification was for going that route
<chmac> Ubuntu 8.04. Jobs created through `crontab -e` work for my user, but jobs created via `sudo crontab -e` are not being run. Any suggestions on how to debug?
<Oracle11g> good day guys...
<Oracle11g> I have a question on partitioning... is advisable install swap space in the lower cylinders?
<chmac> Hola Oracle11g
<Oracle11g> hola chmac :)
<chmac> Oracle11g: Do you have a single disk?
<Oracle11g> chmac: theoretically, yes :P
<chmac> Oracle11g: :-)
<chmac> Oracle11g: Personally, I would lay out my disk in a nice fashion orderly fashion purely for recovery purposes, etc
<chmac> Oracle11g: I think seek times are based on where the read head is now and where it needs to get to, so the start or the end of the disk probably isn't any quicker
<chmac> Oracle11g: Statistically, the centre of the disk is probably the fastest to access, it's only ever 50% away from the head, whereas one edge could be 100% from the head
<chmac> Oracle11g: Know anything about debugging cron? root's crontab is not being called on my server for some reason :-(
<Oracle11g> sorry chmac I don't know about debugging cron :(
<Oracle11g> chmac: I don't remember where I read about is a good experience install swap and /var in the partitions that could be the edge of the disk
<jmedina> chmac: probably it is about env varibables
<jmedina> I remember there is a option for sudoers or something...
<chmac> jmedina: Ok, thanks, I'll do some searching
<jmedina> chmac: are the cron entry actually written ?
<jmedina> try sudo crontab -l
<chmac> jmedina: Yeah, it's being written ok, or at least read back by crontab -l
<jmedina> chmac: chec your logs
<chmac> jmedina: Good thinking :-)
<jmedina> chmac: you need to uncomment the line
<jmedina> #cron.*                         /var/log/cron.log
<jmedina> from /etc/syslog.conf and restart sysklogd and then check your log :)
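[jmedina's suggestion, spelled out as a hedged sketch for the sysklogd-era config being discussed.]

```
# /etc/syslog.conf -- remove the leading "#" from this line, then restart
# syslog (sudo /etc/init.d/sysklogd restart) and watch /var/log/cron.log:
cron.*                          /var/log/cron.log
```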
<chmac> Hmm, I see "User account has expired" in the logs
<chmac> Looks like because the root account is disabled, cron won't run it's tasks
<chmac> Logs, logs, logs, always a good place to start. :-)
<chmac> Turns out the workaround is `sudo passwd --unlock root` then `sudo usermod --lock root` :-)
<philsturgeon> hey guys. im no linux noob i have just completely forgotten how to go about this. my server has no network connection after a period of inactivity. how to even start debugging?
<philsturgeon> my inactivity, i mean i havent fired it up in a few months, now it doesnt work
<chmac> philsturgeon: Can you connect to the machine in another way?
<chmac> philsturgeon: Are you physically in front of it?
<philsturgeon> yes
<chmac> philsturgeon: I'd start with `sudo /etc/init.d/networking restart`
<philsturgeon> got a monitor and keyboard
<chmac> philsturgeon: Run that command without the back ticks :-)
<Linaxys> hey I'm trying to install ubuntuserver on my new server, it's the first time I install a server, my harddisks are detected in RAID, and when I'm at the partitions table screen setup, I see no partition table at all, I can't install...
<Linaxys> has anyone got an idea please ?
<Linaxys> when I use fdisk, it detects my hard drive :/
<clusty> Linaxys, hardware raid?
<philsturgeon> chmac: still no luck
<chmac> Linaxys: Does fdisk show any partitions?
<philsturgeon> chmac: no
<chmac> philsturgeon: sudo ifconfig eth0
<philsturgeon> chmac: No DHCPOFFERS received
<Linaxys> chmac, it shows the hard disk, but I was able to create a new partition inside it with "n" and save with "w"
<chmac> philsturgeon: Is your server configured to get an IP address dynamically?
<Linaxys> clusty, I think yes, it's hardware raid
<philsturgeon> should do
<Linaxys> clusty, I'm not sure how to enable/disable it in the bios
<chmac> Linaxys: When you go through the installer, can you create partitions as part of the setup process?
<philsturgeon> chmac: eth0 shows results
<chmac> philsturgeon: sudo ifconfig
<clusty> Linaxys, well you are tryign to install/boot from the raid?
<philsturgeon> chmac: only eth0 and lo
<Linaxys> chmac, I really can't at all, it says like "Help for partition setup" "[white space]" "Cancel every partitions modifications", "Back"
<clusty> Linaxys, if yes, you need to init the raid ARRAY before you start installation
<Linaxys> clusty, I don't really know, I'm a pure newbie with servers and raids...
<clusty> Linaxys, ok. tell me how you know it's a raid? :D
<chmac> Linaxys: Sounds like clusty can help, I'm out of my depth with raid stuff :-)
<Linaxys> hmm, something that handles the hard disk and prevent it from crashing ?
<clusty> Linaxys, no no. not what is a raid. how do you know your disks are in raid now?
<chmac> philsturgeon: The error message suggests to me that eth0 is trying to get an IP dynamically
<Linaxys> aaaah
<chmac> philsturgeon: Does it show the IP via sudo ifconfig eth0 ? If not, you could set it manually
<Linaxys> yeah because I saw in my config my hard disks are in raid, and the setup always asks me something about raid
<philsturgeon> chmac: my bad dude its a physical networking issue
<clusty> Linaxys, using intel matrix storage by any chance?
<Linaxys> hmmm, I've never seen that, but my serverboard and cpu are both intel
<Linaxys> gonna check it out in the bios
<clusty> Linaxys, where are your HDD's plugged in?
<clusty> in the mobo or in some funny extra card?>
<Linaxys> i don't know exactly, I've checked that out, I haven't seen a SATA plug into the mainboard
<Linaxys> but maybe it's some funny extra card
<clusty> when you boot the PC what do you see? any mentions of stuff like: "Loading BIOS" press alt+i or whatever toi enter RAID BIOS ?
<Linaxys> I try that
<Linaxys> so, in the bios config, SATA Mode, I see : Enhanced, Compatibility, Ahci, SW RAID
<Linaxys> should I use SW RAID ? When I use Enhanced, I see my hard disk in the bios config list, I see it two times, same name, same space :/
<clusty> Linaxys, is it some ASUS type of mobo?
<Linaxys> hmm, I think it's Intel
<Linaxys> completely intel, it show the "Intel Serverboard" logo on the boot, I see no other ASUS
<clusty> i am a bit lost. did you configure the type of raid?
<clusty> Linaxys, like RAID 0,1,5,10 ?
<clusty> the RAID scheme to use
<Linaxys> hmm I haven't touched anything once I received the server
<Linaxys> it was plugged, ready..
<Linaxys> soo, do I have to unplug the hard drive ? which RAID mode should I use ?
<ScottK> If it's SW RAID, then it needs a 'driver'.  Consult Google to see if one exists (or might be in the Linux kernel already)
<ScottK> If you don't find one, SW RAID is definitely not what you want.
<clusty> well worst case you can use MD
<clusty> that is the linux soft raid thing
<clusty> but that will probably suck for a lot of reasons
<deathtech> Hello All
<deathtech> Were looking to possibly implement a linux based solution ( LTSP ) as a terminal server. I have been tasked with finding out the best software packages to use for linux that would allow us to manage the systems at an enterprise level
<deathtech> Patching, security, remote usage, reporting, etc
<pmatulis> deathtech: begin here -> https://help.ubuntu.com/community/UbuntuLTSP/
<deathtech> Landscape appears to do something similar to this, but i was wondering if there are alternative s?
<deathtech> yes, the LTSP part i have done, its just "Ok , it works great. Now how do we manage 1500 of them ?"
<pmatulis> 1500 LTSP servers?
<stefg> deathtech: you mean load balancing?
<pmatulis> kirkland: do you have much experience in converting from vmdk to quemu disk formats?
<kirkland> pmatulis: sorry, none at all
<Linaxys> yessss
<Linaxys> I've just switched to Enhanced SATA mode
<Linaxys> then when the setup asked me to enable the RAID service stuff, I said no
<Linaxys> and I'm formatting the hard disk now :D
<Linaxys> well, I hope I won't need RAID in the future
<stefg> deathtech: http://www.nomachine.com/products.php might be worth a look, too
<deathtech> Sorry , no not load balancing
<deathtech> Conf call  :p
<deathtech> just mass server management basically
<stefg> deathtech: http://www.ubuntu.com/news/landscape-system-management-tool
<stefg> deathtech: but forget 'bout the sparc machines... http://www.phoronix.com/scan.php?page=news_item&px=NjM5MA
<deathtech> and that is EXACTLY what im looking for. however, i have looked at that, and im looking for other alternatives, so i have a comparison
<clusty> deathtech, i am planing to use puppet
<clusty> deathtech, to clone confs accros multiple servers
<sgsax> deathtech: you looked at cfengine?
<Linaxys> hmm I don't really understand... how can I connect to my openssh server with winscp ? I've just done a little sudo apt-get install openssh-server, and rebooted, "sudo /etc/init.d/ssh start", and I try to login inside it it doesn't reply at all :S
<Linaxys> do i have anything else to do once I've installed it ?
<deathtech> No sir
<deathtech> cfengine ?
<sgsax> Linaxys: out of the box config should be sufficient
<VirtualDisaster> Linaxys, ssh is pretty much plug and play
<VirtualDisaster> install and then connect
<sgsax> deathtech: http://www.cfengine.org/
<VirtualDisaster> Linaxys, make sure youre using the proper IP and username etc
<deathtech> Linaxys : Also check your Routing :)
<Linaxys> hmm yeah I'm using a router, I don't know if it blocks ports but I am in the same network, like 192.168.2.50...
<sgsax> Linaxys: and make sure apparmor isn't getting in the way
<VirtualDisaster> sgsax, apparmor?
<sgsax> Linaxys: if you can connect via ssh, you should be able to connect via scp
<VirtualDisaster> ubuntu doesnt use that by default does it?
<sgsax> I believe 8.04 does
<VirtualDisaster> oic
<jdstrand> apparmor does not confine ssh
<jdstrand> https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/AppArmorProfiles
<jdstrand> VirtualDisaster, sgsax: ^
<VirtualDisaster> ah nice ty
<sgsax> jdstrand: good to know
<aubre> for some reason when I try to gunzip ubuntu-uec-jaunty-amd64.img.gz (a 600mb file) it grows until it fills the hard disk partition or I kill it
<aubre> last time I killed it it had gotten up to 30 GB
<nick125> aubre: That's possible..
<aubre> well it seemed odd
<nick125> You can compress a lot of zeros down to nothing :)
<aubre> I got those from here http://uec-images.ubuntu.com/jaunty/current/
<aubre> are they supposed to be that large?
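nick125's point is easy to demonstrate: long runs of zeros compress to almost nothing, so a 600 MB .img.gz of a mostly-empty disk image can legitimately expand to tens of GB. A quick illustration (the file path is arbitrary):

```shell
# Write 32 MB of zeros, then gzip it: the result is only a few tens of KB.
dd if=/dev/zero of=/tmp/zeros.img bs=1M count=32 2>/dev/null
gzip -f /tmp/zeros.img
wc -c < /tmp/zeros.img.gz    # far smaller than the 33554432 bytes written
```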
<Sam-I-Am> hmm
<Sam-I-Am> anyone here use xubuntu karmic?
<Sam-I-Am> er, wrong chan...
<aubre> hmm maybe I was a decimal point off with that lol
<ruben23> hi
<ruben23> hi guys i ran apt-get as root and i got this output -----> http://pastebin.ca/1545008  it's like there's no activity, it's not downloading any packages or files..
<ruben23> anyone have ideas on this
<ruben23> i run apt-get update
<ruben23> what i mean sorry
<qman__> looks like it updated fine
<qman__> if you want to install updated packages, you now need to run sudo apt-get upgrade
<qman__> apt-get update updates the database information, apt-get upgrade installs new package versions
<qman__> you need to run both, in that order, to fully update your system
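qman__'s two-step sequence, sketched as shell; the wrapper function names are illustrative, not standard tools, and both steps need root:

```shell
# Hedged sketch of the full-update sequence described above.
# Only the apt-get commands themselves are standard; the wrappers are illustrative.
apt_refresh() { sudo apt-get update; }        # step 1: refresh the package database
apt_apply()   { sudo apt-get upgrade; }       # step 2: install newer package versions
full_update() { apt_refresh && apt_apply; }   # both steps, in the required order
# Usage (needs root and network access):
#   full_update
```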
<cemc> is there a way to boot with a non-default kernel once, so that grub selects that, but only at the next reboot?
<cemc> after that it reverts back to the default kernel
<kinnaz> you can use e to edit the boot line
<kinnaz> at boot time
<kinnaz> or modify default for one boot
<giovani> kinnaz: he might be interested in testing a new kernel remotely
<kinnaz> testing new kernel without drac :P?
<kinnaz> sounds like fail
<giovani> freebsd has a feature for this, where you can boot a kernel once, and if it fails to boot, it'll boot the last kernel next
<giovani> kinnaz: let's not get manuf-specific
<giovani> using non-standard IPMI cards is fail
<clusty> i am trying to get DNS working on the local host. so far I can ping the DNS server from the rest of network, but i cannot ping the rest of the network from the DNS machine (ping=resolve DNS)
<giovani> but not everyone has thatl uxury
<clusty> any got a clue what could be going on?
<giovani> s/thatl uxury/that luxury/
<clusty> :D
<cemc> yeah, it's like that. It's a remote server and I have IPMI to reset it,
<giovani> clusty: that you haven't set /etc/resolv.conf correctly?
<cemc> but I don't have SOL (yet)
<giovani> cemc: SOL?
<cemc> Serial Over LAN, to control grub
<clusty> giovani, the server has a search debian.lan and a nameserver stanza
<giovani> gotcha -- just hadn't heard the acronym used much
<clusty> debian.lan being the local domain
<kinnaz> serial over lan
<kinnaz> never heard of that
<giovani> clusty: I don't know what stanza is to you ... but you shouldn't be putting DNS names in /etc/resolv.conf
<giovani> kinnaz: it's standard in good IPMI cards
<giovani> it's serial console for the 21st century
<kinnaz> hmmmz i do have serials on some servers never figured out for what to use them thou
<kinnaz> how does it work ?
<giovani> kinnaz: google it
<kinnaz> do i need some extra hw for that ?
<kinnaz> not including the card itself
<giovani> if you don't have an IPMI card that provides serial console, then yes, you need one
<giovani> or you can do it the way it's been done for a decade or two
<giovani> which is to use a serial terminal server
<giovani> but then you're using the physical serial port
<cemc> back to my original question :) can I boot with a kernel once ?
<giovani> not a virtual serial interface over the internet
<giovani> cemc: I'm not sure -- I'd google it
<giovani> you're using grub, I presume?
<giovani> look at that
<giovani> first google hit
<giovani> http://www.gnu.org/software/grub/manual/html_node/Booting-once_002donly.html
<giovani> why I had to do that I don't know
<clusty> giovani, http://pastebin.com/m78044328
<clusty> 192.168.0.1 is localhost
<cemc> giovani: mhm, found it also
<giovani> clusty: nope, it's 192.168.0.1
<clusty> other is the DSL router
<giovani> that's not the loopback
<giovani> netstat -anp | grep :53
<cemc> grub ftw ;)
<clusty> should i switch to loopback?
<giovani> ... sigh
<giovani> no
<giovani> type 127.0.0.1 in there
<giovani> but I don't know that your DNS server is listening on that interface
<giovani> so run the netstat line I gave you above
<clusty> giovani, not working: http://pastebin.com/m71c13e13
<clusty> will run now
<clusty> giovani, http://pastebin.com/m464beee7
<clusty> seems to be running
<giovani> of course it's working
<giovani> you just pasted output showing that it's working
<clusty> yeah
<giovani> I don't know why you're not providing dig with a FQDN
<giovani> that's silly
<clusty> wanna see zone file?
<giovani> nope
<clusty> and the dns conf?
<giovani> nope
<giovani> I'm not gonna help with configuring that
<giovani> maybe someone else has the time
<giovani> you haven't demonstrated that anything's wrong
<kinnaz> money can buy anything
<clusty> giovani, well i spent maybe 1 week to get dns up to this stage :D
<cemc> giovani: it worked, with this: http://www.gnu.org/software/grub/manual/html_node/Booting-fallback-systems.html#Booting-fallback-systems
<cemc> thanks
<giovani> cemc: is that a different method than the url I pasted? (just curious)
<cemc> just a little bit different, it's actually on the same page as what you pasted
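For reference, the mechanism on that manual page looks roughly like this in GRUB legacy's menu.lst; a hedged sketch, with entry titles, kernel paths, and entry numbers as examples only:

```
default saved          # boot whichever entry was last saved
fallback 1             # if entry 0 fails to load, try entry 1

title  Ubuntu, test kernel              # entry 0
  kernel /boot/vmlinuz-test root=/dev/sda1 ro
  savedefault fallback                  # save the fallback entry as the next default

title  Ubuntu, known-good kernel        # entry 1
  kernel /boot/vmlinuz-stable root=/dev/sda1 ro
  savedefault                           # keep this entry as the default
```

The effect is that the test kernel is tried once; whether it comes up or hangs, the next (remote-reset) boot lands on the known-good entry.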
#ubuntu-server 2009-08-28
<uccadmin> I was asking this in #ubuntu but I feel this channel is more appropriate. Does anyone have any experience with installing and configuring DRBL?
<tonyyarusso> Is there a way through mail agents or related things to strip all attachments off of e-mails so they aren't stored in the e-mail databases?  (ie, move them into a folder separately instead)
<tonyyarusso> uccadmin: That's the remote boot thing, right?  I haven't worked with it on its own, but have run it through the Clonezilla suite of tools.
<uccadmin> Yes it is
<uccadmin> I'm actually trying to get to that point (A Clonezilla setup)
<tonyyarusso> ah
<tonyyarusso> Well, it's been a little while, but I might be able to help you - what's the holdup?
<qman__> I've only ever used LTSP
<uccadmin> I have 6 machine that need to be automatically reimaged, so I'm looking into DRBL and Clonezilla. My main problem is that we already have a DHCP server on the network and don't really want to mess with it. I don't know how important it is to have DRBL assign the IPs and I'm wondering if we'll be okay if we have the clients assigned static IPs via MAC address via our Windows DHCP server
<uccadmin> I've found a bit in the way of DRBL support on the issue, but not enough to make me confident enough to deploy DRBL and Clonezilla
<tonyyarusso> The best way is to unplug yourself from the rest of the network and just work with an isolated room of those six.
<tonyyarusso> Even if you can deal with the DHCP thing, your DRBL server will be broadcasting a ton of traffic that will congest the rest of the network if it's not detached.
<uccadmin> Well, the imaging will only be done around 4AM, and at that time there won't be much other network traffic. I figure the impact should be minimal as the server will only image 6 machines and the image is only 6GB or so.
<tonyyarusso> Are the six machines physically/network proximate to each other?
<poseidon> I put in a usb drive.  How can I mount the drive?  I think I need to use the mount command.  However, I don't know what I need to mount
<uccadmin> Yes. They're all in a single room
<tonyyarusso> uccadmin: Well, ultimately the answer is: yes, I believe it's possible to do what you're envisioning, but it will be WAY easier to just unplug them from their uplink and do that room alone.
<uccadmin> That makes sense. I don't know why I never thought of that
<tonyyarusso> poseidon: it will be in as /dev/sd<something> - you'll just have to figure out which one.
<uccadmin> Though I will have to procure another machine to serve as the DRBL server. The one I was going to install it on is already being used for other things and can't be repurposed/moved at this time
<tonyyarusso> Yeah, when I did it we just grabbed an old desktop and put it on a rolling cart so it could be moved around the building.
<uccadmin> I think I have some hardware that will work well. Time to scavenge...
<uccadmin> Thank you for all of your help
<poseidon> tonyyarusso, when I plug in the usb drive an sdb and sdb1 show up in /dev/  When I try mount -t ntfs /dev/sdb /mnt/ it says that an NTFS signature is not found
<poseidon> same with sdb1
<kinnaz> fdisk /dev/sdb
<tonyyarusso> poseidon: Are you sure it's ntfs?
<tonyyarusso> poseidon: You will want to be mounting the sdb1 version - sdb is the whole drive, while sdb1 is the partition.
<tonyyarusso> poseidon: Usually you won't need to specify the type at all - it can usually figure it out.
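tonyyarusso's advice in command form; a hedged sketch where the device name is an example and mounting needs root:

```shell
# Find the device the kernel assigned; the tail of the kernel log usually
# names it (look for something like "sdb: sdb1"):
#   dmesg | tail
# Mount the partition (sdb1), not the whole disk (sdb); with no -t option,
# mount can usually autodetect the filesystem type.
mount_usb() { sudo mount /dev/sdb1 /mnt; }   # illustrative wrapper, not a standard tool
# Usage:  mount_usb    (and 'sudo umount /mnt' when finished)
```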
<poseidon> It looks like it's fat32
<poseidon> got it to work
<poseidon> thanks
<Alysum> hey guys - is mysql server 5.1 available on Jaunty?
<Alysum> jaunty-updates/main mysql-server 5.1.30really5.0.75-0ubuntu10.2 [57.2kB]
<Alysum> not sure if it's really 5.1 ?
<Alysum> the name is confusing.....
<Alysum> HELLO???
<samd> hi, im trying to set up an ircd-hybrid irc server, but can't connect from the outside world, i've already routed port 6667 to my server
<ball> hello Deevz
<uvirtbot> New bug: #420277 in openldap (main) "ldap tls refusing to initialize" [Undecided,New] https://launchpad.net/bugs/420277
<poseidon> I installed ubuntu-server 9.04.  I got the wireless driver installed, but now I want to set it up so I can use it as a home server and store stuff on it from my ubuntu desktop.
<poseidon> What do I need to do?
<qman__> there's plenty of options for file servers
<qman__> if you want to share with windows, samba is your best choice
<qman__> if not, openssh and sftp should do it
<PhotoJim> NFS is best if you're networking Linux/BSD machines together, as far as file sharing goes
<qman__> NFS is horribly insecure though, so make sure to only use it on a secured LAN
<qman__> since you're using wireless I'd advise against it
<PhotoJim> WPA-secured WiFi is fine.
<PhotoJim> Slow.  but fine.
<qman__> as long as you're using AES and not TKIP, and have a strong passphrase or use some other authentication
<qman__> TKIP is broken and all WPA is vulnerable to dictionary attacks, so weak passphrases are not acceptable
<poseidon> I have a laptop and a desktop both running ubuntu.  I want them to be able to store files and retrieve files from it.  Is NFS the best way to go about doing that?
<qman__> not really, NFS is better when you want to use files from the server
<qman__> if you just want to send and receive, more like a backup, sftp/scp/ssh is better
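qman__'s send-and-receive suggestion as commands; the hostname and paths are examples, and the wrapper names are illustrative:

```shell
# Back up to the server and restore from it over ssh (authenticated, encrypted).
push_backup() { scp ./photos.tar.gz user@homeserver:/srv/backups/; }   # example paths
pull_backup() { scp user@homeserver:/srv/backups/photos.tar.gz ./; }
# 'sftp user@homeserver' gives an interactive session over the same channel.
```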
<PhotoJim> in which case you're not really using a server.  you're just moving files between machines.
<PhotoJim> for a server, NFS.
<PhotoJim> for a central server, that is.
<PhotoJim> I have a machine in my basement.  all my important stuff is on it.  NFS and Samba, for Linux/BSD and Windows, respectively.
<PhotoJim> WPA-secured WiFi, and wired access.
<qman__> I disagree, and advise against using NFS
<PhotoJim> Secure WiFi + NFS > insecure WiFi + any other method of transferring files.
<PhotoJim> IMHO.
<qman__> there's really no such thing as a truly secure wifi
<PhotoJim> Securing file transfer without securing e.g. Internet access is still an issue.
<PhotoJim> WPA is highly secure.
<qman__> not really
<qman__> it's very vulnerable to dictionary attacks
<PhotoJim> Evidence?
<PhotoJim> Oh, don't use dictionary words then.
<qman__> it hasn't been broken yet, but the track record is not very good
<PhotoJim> Then we need to worry about his login passwords, too, then.
<qman__> NFS is a security nightmare, because the way it works is not nice to firewalls
<PhotoJim> My NFS server doesn't listen to the WAN NIC.  it only listens to the LAN NIC.
<qman__> it also doesn't support user authentication, only IP-based
<PhotoJim> WAN NFS is insecure.
<PhotoJim> Oh, I agree that it's imperfect.  but this guy isn't the FBI.  he's a home user with two machines.
<PhotoJim> you can't spoof an IP from my LAN unless you break into my non-dictionary-word password-protected WPA-encrypted WLAN, or you break into my house and plug into my switch.
<qman__> in any case, sshfs is a good alternative where NFS-like behavior is needed
<PhotoJim> Slower.  Agree that it's a good alternative.
<PhotoJim> I see it as being of primary value for WAN filesharing.
<PhotoJim> when you need to mount filesystems, at least.
 * ScottK generally uses sftp.
<PhotoJim> I use OpenVPN tunneling and NFS.
<PhotoJim> I use sftp to move stuff around when I'm not here and particularly when I'm not using one of my own machines.
<PhotoJim> but there is convenience to having a filesystem mounted.
<qman__> I have a mixed network, so I use samba and scp
<twb> sshfs is built on top of sftp, fwiw
<PhotoJim> Samba isn't really that much more secure than NFS though.
<twb> CIFS is *less* secure than NFSv4 with kerberos
<qman__> it has user-based authentication
<PhotoJim> Still vulnerable to dictionary attacks.
<qman__> of course, the only reason I use samba is to support windows clients
<qman__> otherwise it'd all be scp/sshfs/etc...
<PhotoJim> slower, that's all.
<PhotoJim> if you have good security in other ways, you can get the speed of NFS working for you.
<PhotoJim> no need to encrypt/decrypt data on a GigE LAN connection.
<PhotoJim> or 100BaseTX for that matter.
<twb> PhotoJim: if all you care about is speed, use socat
<twb> Or even firehose
<PhotoJim> twb: not familiar with either of those.
<twb> They turn data streams into network streams, and vice-versa.
<twb> ssh receiver nc -l 1234 \>foo & ssh sender nc receiver 1234 \<foo
<twb> Or something like that.
<PhotoJim> NFS saturates my links.  what would be the advantage?
<PhotoJim> oh.  but ssh adds encryption/decryption.  great for WAN, not great for LAN.
<twb> PhotoJim: NFS has NFS overhead, obviously
<PhotoJim> twb: some overhead.  but not ridiculous.
<twb> PhotoJim: ssh is only being used to run a command, not for the data channel.
<PhotoJim> ahh.
<PhotoJim> ahh, I see.
<PhotoJim> I didn't read it carefully.
<qman__> scp and samba saturate my links too
<qman__> that's not a good measure
<PhotoJim> oh well.  do what you like. :)
<PhotoJim> I'll stick to NFS.
<qman__> samba has a lot of network overhead
<twb> I'm not telling you otherwise.
<qman__> where ssh has a lot of CPU overhead
<qman__> the real question is how much data gets through while your link is saturated
<twb> I'm just saying that your arguments for NFS aren't especially valid.
<twb> For me NFSv3 wins because it's very easy to deploy in a homogeneous Unix environment
<twb> NFSv3 and NIS, I'm afraid
<PhotoJim> and it works well with OS X, also.
<twb> It certainly isn't secure, and it is only moderately efficient in terms of raw throughput
<PhotoJim> although that's a Unix of sorts.
<twb> OS X is BSD with heavy makeup standing under a streetlight at 2am
<qman__> yep
<twb> I like to think my tastes are more refined ;-)
<nick125> Mmh. Anyone here know how to get rid of these "Bad source address from client, dropping packet" errors in OpenVPN?
<PhotoJim> I have a VAX in the basement running OpenBSD (sorry, Ubuntu, you don't work on VAX :) ).  That gets me some taste points.
<PhotoJim> nick125: I've not seen that problem.
<nick125> I'm trying to route all of my internet traffic on my laptop through my VPN...it connects and I can connect to the VPN host, I just can't connect to any machines beyond that.
<qman__> is ip forwarding enabled in the kernel?
<nick125> I believe so.
 * nick125 checks
<qman__> check the routes on the system too
<qman__> make sure it knows where all the relevant networks are
<twb> "bad source address" doesn't sound like "you aren't a router" to me
<ball> PhotoJim: Which VAX?
<PhotoJim> if you can ping the other side of the VPN, it should be easy to set the default route to a router on the other side.
<PhotoJim> ball: VAXstation 4000/60.
<ball> PhotoJim: Not bad.
<karex> Hi, why can't my homepage on localhost be updated when I'm not connected to the internet?
<PhotoJim> ball: it's slow.  but it's interesting.
<karex> help, newbie
<ball> PhotoJim: I tell myself I'll never buy a VAX, but if a VAXstation 2000 or a 4000/VLC came along, I'd be sorely tempted (for different reasons)
<ball> I miss the 6000 series though
<qman__> karex, this is #ubuntu-server, if your problem is on ubuntu desktop, join #ubuntu
<PhotoJim> ball: I got this one for free, essentially.  it was hard to say no.
<karex> qman__: my problem is about apache+php+mysql. help me please
<twb> I'm sufficiently weary of hardware to view doing ANYTHING outside of a VM as a waste of my time
<twb> Real PDPs and VAXen are obviously in that list
<PhotoJim> twb: duly noted.
<qman__> karex, try to explain your problem more specifically
<twb> Especially since I could emulate them with $100 of gear.
<PhotoJim> Flying a flight simulator is not the same as operating a jet.
<twb> I mean, OK, if I didn't spend ALL DAY dealing with bloody computers, it might be a refreshing challenge
<PhotoJim> Vms are useful, but they're not particularly interesting.
<PhotoJim> yes.  good point.
<PhotoJim> Computers aren't my day job.  They're my hobby.
<twb> Lemme guess... you're a chemist
<karex> qman__: when it's not connected to the internet, my http://localhost/ homepage isn't updated (I'd updated /var/www/index.php)
<PhotoJim> if you mean chemist as in a person who is in the business of the science of chemistry, no.
<PhotoJim> if you mean chemist as in a person who is in the business of pharmacy, also no.
<twb> PhotoJim: what is your day job, then?
<PhotoJim> I'm an insurance broker, and a part-time lecturer at a university business school.
<twb> Ha!
<qman__> karex, what do you mean by "couldn't be updated"?
<qman__> karex, your index.php, is it a web page you've written, or a piece of software you've installed?
<karex> qman__: it was not changed after I rewrote /var/www/index.php (but it does change when I connect to the internet or ad-hoc)
<karex> index.php is the default page
<nick125> Hmm. I have eth0 and eth0:0 and I want to setup a bridge with eth0 in it for OpenVPN. How would I do those IP aliases?
<qman__> karex, if the fact that you are connected to a network is the determining factor, it is most likely either a problem with your web browser's offline mode, or a problem with the web application you are trying to use
<qman__> karex, other places to check are the website configuration in /etc/apache2/sites-available/ and your general apache configuration (if non-default)
<PhotoJim> nick125: if you bridge tun0 (or whatever your VPN device is) and eth0, then there is no need to include eth0:0.  it is on eth0 in hardware, so it will be bridged.  but better to route than bridge.
<karex_> qman__: Thanx...
<uvirtbot> New bug: #420194 in php5 (main) "pecl.php.net is using a unsupported protocal" [Undecided,Invalid] https://launchpad.net/bugs/420194
<luckyone> hello all
<uvirtbot> New bug: #420310 in openldap (main) "tried to installed openldap,not installed completely, I don't need it now , but havaing pb because i tried to installed." [Undecided,New] https://launchpad.net/bugs/420310
<luckyone> howdy - anyone in here with a lot of RAID experience?
<luckyone> I am quite newb with this and need some help recovering/mounting the other half of my RAID1 array
<oh_noes> Why does it take ubuntu server hardy 5 mins before NTP corrects my clock?
<oh_noes> Is it possible to tell it to "immediately fix when you start"
<soren> oh_noes: That's what ntpdate does.
<uvirtbot> New bug: #420366 in mysql-dfsg-5.0 (main) "package mysql-server-5.0 5.1.30really5.0.75-0ubuntu10.2 failed to install/upgrade: subprocess new pre-removal script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/420366
<uvirtbot> New bug: #420389 in php5 (main) "php 5.2.6 'random' segfault" [Undecided,New] https://launchpad.net/bugs/420389
<force1_> hi all, i am searching for something cool to do with my server
<force1_> something that isn't easy, and probably won't work out of the box
<force1_> something i have to do some research for....
<force1_> my own idea is to sync an iphone with the server's LDAP database
<kaushal> hi
<kaushal> i get http://paste.ubuntu.com/260795/ in dmesg. is it a hardware issue ?
<gnuyoga> force1_: installing windows in linux perhaps
<force1_> what do you mean? installing windows?
<force1_> you mean run a XP image in a virtual machine, with linux as the host?
<gnuyoga> kaushal: looks like hardware issue (hdd perhaps)
<gnuyoga> force1_: no no , i was kidding ...
<kaushal> gnuyoga: ok
<kaushal> so is it due to busted hdd or busted controller ?
<gnuyoga> kaushal: do u have a failed hdd or something ?
<kaushal> it doesn't respond
<gnuyoga> i have seen this error in multiple cases
<gnuyoga> hard disk failure, cable fault, bus failure
<gnuyoga> 50% its cable and 50% is hdd
<kaushal> I am able to log in but when i do ls or pwd it doesn't respond
<gnuyoga> so some sectors have problem, replace hdd :)
<kaushal> ok
<kaushal> gnuyoga: can we say it 100% as hdd issue ?
<kaushal> :)
<gnuyoga> force1_: do u plan to code as well ?
<gnuyoga> kaushal: perhaps 99% ... 1% we should leave it for doubt ;)
<kaushal> ok
<kaushal> ... means ?
<kaushal> yes or no :)
<kaushal> I need to generate a report :)
<kaushal> about the issue
<force1_> gnuyoga: i can write c, and php, so it certanly an option :D
<colclough> Hi, is there a way for me to copy output from a server console to a text file on my desktop? The server is being run in virtualbox.
 * gnuyoga suggest force1_ to look at Gnu Hurd Project ... its really coool
<gnuyoga> colclough: u can perhaps do a folder sharing ;)
<gnuyoga> colclough: is ur server linux or windows ?
<colclough> linux, ubuntu 8.04
<gnuyoga> and host OS == ?
<colclough> latest ubuntu
<colclough> I can't remember the version
<gnuyoga> no problem, so both are linux ...
<kaushal> gnuyoga: Thanks
<gnuyoga> kaushal: np
<gnuyoga> colclough: VirtualBox gives an option of shared folder
<gnuyoga> colclough: the idea is u can attach a host OS folder to guest OS
<gnuyoga> for this we need to install the guest additions i suppose (we need a new command mount.vboxsf)
<Maleko> how do we kill particular established connections, without terminating daemon?
<gnuyoga> colclough: search for VBoxGuestAdditions.iso it will be in /usr/share if i remember correctly
<colclough> There is an option in my vbox devices->shared folders - is that what you are talking about?
<gnuyoga> Maleko: did u try tcpkill
<gnuyoga> ??
<gnuyoga> Maleko: http://www.cyberciti.biz/howto/question/linux/kill-tcp-connection-using-linux-netstat.php
<gnuyoga> colclough: yes
<colclough> cool, I've made a new folder but under access it says its full
<gnuyoga> Maleko: to get tcpkill, apt-get install dsniff
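The recipe from that link, sketched as commands; the interface, host, and port are examples, tcpkill comes from the dsniff package, and it needs root:

```shell
# List established TCP connections, then pick the one to cut:
list_conns() { netstat -tn | grep ESTABLISHED; }
# tcpkill injects RSTs into matching traffic until interrupted; the daemon
# that owns the socket keeps running, only the matched connection dies.
kill_conn() { sudo tcpkill -i eth0 host 192.168.2.50 and port 3306; }  # example filter
```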
<gnuyoga> colclough: k
<gnuyoga> colclough: now u need to enable guest addition
<Maleko> gnuyoga: thanks. no wonder i couldnt find tcpkill package
<gnuyoga> Maleko: ;)
<gnuyoga> colclough: now install guest addition,
<gnuyoga> http://ubuntu-tutorials.com/2007/10/13/installing-guest-additions-for-ubuntu-guests-in-virtualbox/
<Aison> hello
<Aison> I installed ebox on my ubuntu server (Jaunty)
<Aison> I can login to the web appllication
<Aison> but I cant enable the modules
<Aison> e.g. when I try to enable the network module, the web application somehow hangs
<Aison> eg. ntp, network, etc...
<Aison> events was possible to enable
<colclough> gnuyoga: I'm clicking on the install guest additions option but nothing happens, I know it says that it takes a minute or two but it's been a while now. Is there another way to install the guest additions?
<gnuyoga> colclough: did u find the iso image ?
<colclough> Aison: do you have an ad blocker installed in your browser? I'm using ebox too and I've noticed that my ad blocker affects changes too. Also you could join #ebox, you'll get a lot of support from the guys there.
<Aison> colclough, no, no ad blocker. I also tried kopete instead of firefox
<gnuyoga> Aison: why dont u ask this in #ebox channel, u might get a better response
<Aison> gnuyoga, I will now ;)
<colclough> it hasn't mentioned anything to me about an iso,
<gnuyoga> Aison: not sure if it's this bug https://bugs.launchpad.net/ubuntu/+source/ebox/+bug/376054
<uvirtbot> Launchpad bug 376054 in ebox "[karmic] installation of ebox hangs at: restarting eBox module apache" [Undecided,New]
<gnuyoga> colclough: another way, search for VBoxGuestAdditions.iso in ur host machine
<colclough> ok
<gnuyoga> colclough: normally it will be in /usr/share/virtualbox
<gnuyoga> Aison: what version of ebox ? 1.2 or 1.3
<gnuyoga> Maleko: tcpkill did the job for you ?
<Maleko> yes
<gnuyoga> Maleko: great !
<gnuyoga> colclough: did u see that ISO ?
<colclough> yeah I've found it, I'm just trying to get vbox to use it.
<gnuyoga> colclough: if the machine is started u can go to "Device -> mount CD/DVD Rom image"
<colclough> yeah thats what i've done
<Aison> gnuyoga, i'm using the one from apt
<Aison> gnuyoga, in ebox summary it says eBox version    0.12.2
<gnuyoga> colclough: in the guest: mount /dev/cdrom /mnt
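Putting gnuyoga's guest-side steps together; a hedged sketch to run inside the guest, where the share name, mount point, and installer filename are examples (the .run name varies by VirtualBox version and architecture):

```shell
# After attaching VBoxGuestAdditions.iso as the guest's CD drive:
install_additions() {
    sudo mount /dev/cdrom /mnt &&
    sudo sh /mnt/VBoxLinuxAdditions*.run    # installer name varies by version/arch
}
# Then mount a shared folder defined in the VirtualBox UI (here named "myshare"):
mount_share() { sudo mount -t vboxsf myshare /mnt/share; }
```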
<gnuyoga> Aison: k
<gnuyoga> Aison: i suggest u check here as well http://forum.ebox-platform.com/
<force1_> yup
<uvirtbot> New bug: #420451 in mysql-dfsg-5.0 (main) "package mysql-server-5.0 5.1.30really5.0.75-0ubuntu10.2 failed to install/upgrade: subprocess post-installation script killed by signal (Interrupt)" [Undecided,New] https://launchpad.net/bugs/420451
<uvirtbot> New bug: #420459 in mysql-dfsg-5.0 (main) "package mysql-server-5.0 5.1.30really5.0.75-0ubuntu10.2 failed to install/upgrade: subprocess post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/420459
<godmod> hello guys
<ricdanger> hi
<ricdanger> does anyone know how to setup nss-ldap to support round-robin connections, and fallback?
<doc777> Has anyone successfully gotten the broadcom Netextreme II going under 8.04 LTS?  lshw shows the 2 ports there as "unclaimed" and bnx2 is loaded. Also I have installed the firmware-nonfree-bnx2 package
<doc777> I formatted the machine and installed 9.04 server and it worked perfectly... But zimbra, which is the purpose of this machine, will not run on that and requires 8.04
<Sam-I-Am> doc777: it probably needs the bnx2x driver
<Sam-I-Am> doc777: which is newer than 8.04
<Sam-I-Am> nothing says you can't install a newer kernel on 8.04
<doc777> Sam, I have attempted that and on boot I get the "cannot load bnx2 firmware" and it still does not load. Also, this box has no network connection so every task is great fun ;)
<doc777> I actually have about 3 days sunk into this thing now as I have formatted it 3 times, tried a couple of kernels, working with 2.6.30 which gives "Can't load firmware file bnx2/bnx2-mips-09-4.6.17.fw"
<Sam-I-Am> do you have the bnx2x firmware?
<Sam-I-Am> (broadcom stuff sux btw heh)
<doc777> yea I downloaded and installed the firmware-nonfree-bnx2 package
<roxy09> hi there, i have a question, i am purchasing an HP DL G6 server, is it compatible with ubuntu, for example hot-swap hard drives?
<uvirtbot> New bug: #420545 in vtun (universe) "package vtun 3.0.2-2ubuntu1 [modified: usr/sbin/vtund] failed to install/upgrade: " [Undecided,New] https://launchpad.net/bugs/420545
<doc777> Is there  a way to download a DVD with all the packages for 8.04.3 LTS server for an offline install? Trying to build a new kernel without a network connection is just great fun  ;)
<giovani> doc777: all of the packages? doubtful -- universe and multiverse are huge
<giovani> you can do a mirror copy though
<giovani> and keep a repo locally
 * genii thinks about the ~35Gb of repo data
<giovani> heh, yeah
<giovani> putting all possible packages on a DVD isn't possible
<genii> You could span 7 DVDs and use apt-cdrom to add the set, conceivably. I haven't actually done this however
<maswan> easier to use a usb drive and put it all in there
<maswan> or figure out what you actually need
<giovani> it would be easier to just have a local network repo
<maswan> it's even easier to just put the machine on the net. :)
<doc777> I need to install 8.04.3 but it will not recognize the Dell T410 NICs = BCM5716.  So I have no net access.  When I try and compile a new kernel it fails for various reasons, which is probably some package missing.  Working without internet is rough ;)
<doc777> The new T410 has no PCI slots, all PCI-X, and I do not happen to have a PCI-X nic laying about.
<doc777> After 3 days I am about ready to start drinking ;)
<Boohbah> does dell provide a driver for it?
<doc777> yes, but it will not seem to build on this system at the moment. It is a "loop".  It seems ubuntu has all sorts of bugs with the bnx2 driver at the moment.
<Sam-I-Am> more like the bnx2* driver has all sorts of bugs :/
<Sam-I-Am> broadcom is known for code blobs
<Boohbah> darn binary blobs :(
<doc777> Yes, I agree. I am debating on centos or overnight an INTEL PCI-X nic
<Sam-I-Am> those are nice
<Sam-I-Am> anything but broadcom and a few other ho-hum companies
<doc777> I have 8.04 LTS on 3 other Dell power edges but this new T410 seems to use something else. They are all broadcom netextreme II, it says, just the new will not work ;)
<doc777> Thanks for the help all. I just ordered an Intel NIC and will disable the broadcom trash for now ;)  Take care.
<aubre> when I do a sudo euca_conf -addnode nodename   , it wants a password, and asks for the eucalyptus user's password, but the eucalyptus user doesn't have a password - what do I do?
<aubre> those broadcom netextreme II work fine with jaunty
<KillMeNow> i'm running broadcom on my Dell 1u boxes
<KillMeNow> with Jaunty
<aubre> I have a R610 and 2 R410s no problem
<SirDerigo> hi people, i need to know what hosting control panel you actually use?
<uvirtbot> New bug: #417996 in apache2 (main) "segfault in libphp5.so, I think its because of zend-framework" [Undecided,New] https://launchpad.net/bugs/417996
<giovani> SirDerigo: be more specific -- do you mean like a commercial hosting platform? or something along the lines of webmin/ebox
<SirDerigo> giovani: something like cpanel but free
<SirDerigo> a thing that can deal with subdomains, a script autoinstaller, things like that
<giovani> I don't know if one exists
<giovani> google free hosting control panel
<genii> ISPConfig isn't too horrible.
<SirDerigo> genii: no, it isn't, but i want to know if others exist
<uvirtbot> New bug: #420639 in php5 (main) "php-pear package problems (Karmic)" [Undecided,New] https://launchpad.net/bugs/420639
<genii> I wonder if that bug affects Horde
<baccenfutter> I just realized that `ssh -D 8080 -Nf domain.tld` seems to break down after too long an idle time... regular ssh connections don't, though. Does the socks env need a certain config in sshd_config to stay up while idling?
<baccenfutter>  Read from remote host shell.c-base.org: Connection reset by peer
<MTeck> How can I make a pointer so when somebody goes to one page, they're actually looking at a different one? So wget site1.com/file.tgz is actually downloading site2.com/long/path/name/file.tgz
<uvirtbot> New bug: #419487 in vsftpd (main) "package vsftpd 2.0.7-0ubuntu1 failed to install/upgrade: subprocess post-installation script returned error exit status 1 (dup-of: 349469)" [Undecided,New] https://launchpad.net/bugs/419487
<armando_> How's it going
<armando_> I just set up an Ubuntu Server with Samba, but every 5 minutes or so it throws messages saying it's restarting
<armando_> I have a problem with my Samba server. I set up Ubuntu Server 9.04, but when I start the server, the shell shows Samba is restarting. Does anybody know how to fix this problem?
<soren> armando_: It's not a problem. It's perfectly normal.
<soren> armando_: It's usually because your network interface takes a little bit before it gets configured, and at that point samba gets restarted.
<armando_> ok, but can I remove this message from the screen? I don't know how to hide the message.
<omnydevi> greetings, trying to install jaunty server on a microdot mobo. seems it cannot detect my hdd's. i googled a bit and found a pci=nomsi option, and tried it at install, but still no luck. any ideas?
<omnydevi> they are sata drives
<omnydevi> meh, i'll just try 8.04
<BrixSat_> hello
<BrixSat_> I'm inside a machine using ssh, and the machine can't ping or access Ubuntu updates, as if it had no network
<BrixSat_> I'm on the machine over the net
<BrixSat_> :s I can enter the machine via ssh using its domain
<BrixSat_> and once inside I can't ping anything
<BrixSat_> any idea?
<ruben23> hi how do i make an application startup automatically upon reboot on ubuntu-server
<ruben23> hi
<ruben23> anyone have idea
<jmarsden|work> ruben23: If it is a daemon, make sure it has an initscript and use that to start it.
<jmarsden|work> If not, look at editing /etc/rc.local or maybe a crontab entry with a @reboot time value
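The two approaches jmarsden|work mentions can be sketched like this (the `/usr/local/bin/myapp` path is a placeholder for whatever program ruben23 wants started):

```shell
# Option 1: start the command from /etc/rc.local
# (add the line before the final "exit 0"; runs as root at the end of boot)
/usr/local/bin/myapp &

# Option 2: a crontab entry with the @reboot time value
# (add via `crontab -e`; runs as that user when cron starts at boot)
@reboot /usr/local/bin/myapp
```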
<ruben23> ok thanks
<jmarsden|work> No problem.
#ubuntu-server 2009-08-29
<uvirtbot> New bug: #420813 in openssh (main) "ssh blacklisting of private keys 9.04_64" [Undecided,New] https://launchpad.net/bugs/420813
<jumbers> Does anybody know of a VPN server that's easy to set up? Doing some packet sniffing on my local network has made me paranoid of what could be accomplished in public
<qman__> jumbers, I use openVPN, though I wouldn't say it's particularly easy to set up
<qman__> it helps if you understand how SSL and certificates work, and routing
<qman__> if you're just looking for something simpler for a particular use, ssh tunneling goes a long way
<twb> Hear, hear.
<jumbers> If I SSH tunnel, can I do it so that it tunnels traffic on all ports?
<jumbers> I know it can be done on a single port basis
<qman__> no, only one port per tunnel
<qman__> for more than a couple, you'll need a full-on VPN, like openVPN
<qman__> SSH tunneling also can't do UDP ports (to my knowledge)
<jumbers> Hmm, then it looks like I'm going to need openVPN
<twb> ssh is also tcp-based, so you get the tcp-over-tcp resend hysteresis problem.
<twb> However you CAN do UDP over ssh tunnels (-w), but not simple port forwarding (-R/-L).
<twb> I did that for syslog once, it was fugly.
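The tunneling options discussed above, as a sketch (host names and port numbers are illustrative):

```shell
# Forward a single local TCP port over ssh: connections to localhost:8143
# are carried to imap.example.org:143 as seen from the server "gateway"
ssh -N -L 8143:imap.example.org:143 user@gateway

# A dynamic SOCKS proxy (the -D form baccenfutter used earlier): any
# SOCKS-aware app can reach arbitrary TCP destinations through the tunnel
ssh -N -D 8080 user@gateway

# -w creates a pair of tun devices (a real IP tunnel), which is how UDP
# can be carried over ssh; needs root and "PermitTunnel yes" on the server
ssh -w 0:0 root@gateway
```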
 * ScottK waits impatiently for lamont to upload the new Postfix release.
<phylogenesis> I have a server setup with ubuntu server edition, but I'm having trouble with the network.  I connected directly to it via a crossover ethernet cable, but when I copy more than a few hundred megabytes of files, somewhere randomly in the middle, the connection will drop and I get a "No route to host" error until I restart the server.  (I've tried restarting ntp, ssh, and proftp all to no avail)  I've used sftp to try to copy files as well as
<phylogenesis> scp and both freeze and drop connection.
<phylogenesis> How do I stop it from freezing?  Is there a setting which prevents more than a certain amount of data?
<phylogenesis> I still have 30GB of hdd space on the server, and I've only used about 16 so far copying on to it.
<phylogenesis> The server is running Ubuntu Server edition 9 I think, and I'm trying to copy data onto it from Fedora 11.
<ScottK> Ubuntu versions have a month and a year.  Do you mean 9.04?
<phylogenesis> I believe so
<phylogenesis> Yes, cat /etc/issue says 9.04
<ScottK> This is generally a pretty quiet time of day.
<ScottK> I'd recommend seeing what network card you have using lspci and then Google.
<phylogenesis> Is 802.11bg a wireless thing or just a network thing?  (I see two, and I want to see which is for the wired)
<PhotoJim> 802.11bg is WiFi (wireless)
<PhotoJim> lspci | grep Ethernet
<phylogenesis> Nothing online will tell me what the ethernet card is, I suspect it's onboard.  lspci tells me "Realtek Semiconductor Co., Ltd. RTL-8139...
<phylogenesis> (It's a Toshiba Satellite A105-S2061 btw)
<PhotoJim> that's a common on-board chipset.
<PhotoJim> my old router has an RTL-8139 apparently.
<phylogenesis> I don't suppose there's simple setting I can change that will allow me to copy lots of data onto the server without disconnecting?  (As in, is there something that prevents large data transfer?)
<PhotoJim> over a LAN?  you need to figure out what's causing the disconnections first.
<phylogenesis> over a direct cable connection via a crossover ethernet cable
<PhotoJim> the obvious thing to check is the cable.
<PhotoJim> I can move gigabytes of data on my LAN (through a switch, mind, not computer-to-computer) without interruptions.
<phylogenesis> Do you have any ideas as to what could be doing it?  Any ideas on how I could test it in other ways?  What can I check about the cable?  It's hardly been used, it should be in great shape.
<PhotoJim> there's nothing wrong with the RTL-8139.
<PhotoJim> It could be a defective cable.  It's not a common fault, but it's possible.
<PhotoJim> I guess one thing you could do is to install ethtool, and see if your NICs are in full duplex mode.
<PhotoJim> connecting directly, they might have issues autonegotiating speed and duplex.
<PhotoJim> (just a theory.)
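Checking speed/duplex with ethtool, per PhotoJim's theory (the output shown is typical, not from this machine):

```shell
sudo apt-get install ethtool
sudo ethtool eth0
# Look for lines like:
#   Speed: 100Mb/s
#   Duplex: Full
# "Half" on one end with "Full" on the other is the classic autonegotiation
# mismatch that causes stalls under sustained transfer
```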
<phylogenesis> If it were the cable, wouldn't it be an all or nothing thing, rather than failing after copying tons of data successfully?
<PhotoJim> not necessarily.
<PhotoJim> but usually, you're right.
<phylogenesis> I'll switch to a standard cable and try again
<twb> If neither NIC is 1000baseT, you can't rely on them to autonegotiate sensibly
<PhotoJim> a standard cable won't work unless you use a hub or switch.
<twb> PhotoJim: wrong
<PhotoJim> twb: did your mother not teach you how to be polite? there are more respectful ways to disagree.
<twb> Even recent 100baseT NICs will negotiate crossover correctly.
<PhotoJim> twb: unless you know he has two of those, then you don't know that that's relevant.
<twb> A standard cable *might* work as a crossover cable, if at least one NIC is sane.
<PhotoJim> alright.  might, not won't.
<ScottK> twb: Well since he HAS a crossover cable, how is that relevant.
<twb> ScottK: because the cable might be bad.
<PhotoJim> I suspect it's a speed/duplex mismatch.  probably duplex.  ethtool will tell us.
<twb> PhotoJim: I agree.
<phylogenesis> I do have to do "sudo ifconfig eth0 192.168.2.101" to get my IPs to match so that I can connect at all.
<twb> phylogenesis: nothing wrong with that
<phylogenesis> damn
<PhotoJim> I'm not a big fan of connecting NIC to NIC, but I can't tell you that I feel that way based on logic.  I just like the flexibility of being able to add additional machines to the network.
<twb> PhotoJim: I don't like it because all my gear is too old and shit to autonegotiate, and a switch is easier to find than a crossover cable
<phylogenesis> Does the fact that I have to completely restart the server after a failed transfer say anything?
<phylogenesis> If I don't I just keep getting a "No route to host" error
<twb> phylogenesis: are you running NetworkManager?
<phylogenesis> Man I wish I hadn't left my router at home :-/
<PhotoJim> twb: good reason.  I have everything from stuff on AUI 10BaseT to gigabit stuff on my switch.
<phylogenesis> twb: on the server I only have command line so I'm not sure.  I'm not specifically running that but it may be running automatically.
<PhotoJim> phylogenesis: you need to get yourself a small switch and keep it in your bag o' bits. :)
<twb> phylogenesis: pgrep Network
<twb> phylogenesis: do you get a number?
<PhotoJim> I think Network Manager runs by default on Ubuntu.
<phylogenesis> one sec, gotta wait to the machine starts back up
<twb> PhotoJim: only on desktop installs IME
<phylogenesis> proftpd is taking a very long time to start now
<twb> Use SFTP and/or vsftpd, not proftpd.
<PhotoJim> twb: that could be.  I'm not 100% sure it installed on my server.  I had issues with it, so I think it did but I didn't make notes.
<phylogenesis> I was, but I installed proftpd when I set up the machine
<phylogenesis> so it starts up at startup
<phylogenesis> "pgrep Network" returns nothing
<twb> phylogenesis: good.  Then the problem is probably something NEARLY as dumb.
<PhotoJim> DHCP?  could it interfere?
<PhotoJim> it should fail elegantly.
<twb> phylogenesis: instead of doing ifconfig, configure the static network in /etc/network/interfaces
<phylogenesis> Oh, my server is using a static IP, I set it to use 192.168.2.100 when I had my router, and it seems to have kept that up (I don't see why it wouldn't just pointing it out)
<twb> PhotoJim: dhclient will break him if he's doing-it-wrong, though it's surprising that he's tripping it.
<PhotoJim> twb: might be wise to disable it here, to rule it out.
<twb> phylogenesis: "no route to host" means that something is eating your point-to-point route
<phylogenesis> I'm getting behind here:  What should I try next?
<phylogenesis> edit /etc/network/interfaces?
<twb> phylogenesis: yes.
<twb> Back it up first
<twb> phylogenesis: oh, first: do you have physical access to this machine?
<phylogenesis> Edit it how?  should I just comment out the iface eth0 inet static and following lines?
<phylogenesis> yes
<phylogenesis> I have two laptops in front of me, this one is my main one (client, running Fedora 11), and the other is the server (running Ubuntu Server 9.04)
<twb> You want it to basically contain a single interface, "iface eth0 inet static" "address 192.168.2.100" "pointtopoint 192.168.2.101" where .2.101 is your other box's IP
<twb> Ah, I guess you need "netmask 255.255.0.0" or so, too.  It's a bodge, but it should suffice for now.
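twb's suggested stanza written out as an /etc/network/interfaces fragment (addresses from the conversation; note the interfaces(5) keyword is spelled `pointopoint`, with one "t", and the /16 netmask is the bodge he mentions):

```shell
# /etc/network/interfaces on the server
auto eth0
iface eth0 inet static
    address 192.168.2.100
    netmask 255.255.0.0
    pointopoint 192.168.2.101
```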
<phylogenesis> Can you pastebin what I should have in it?  I have pretty close to what you said already.
<twb> phylogenesis: pastebin what you have now
<phylogenesis> http://pastebin.com/d24f1ccc1
<phylogenesis> My router was 192.168.2.1 when I had it here
<phylogenesis> twb, should I install that eth tool?
<twb> No, that makes sense.
<twb> I dunno what's going wrong.
<twb> You definitely got "no route to host", and DIDN'T get "destination host unreachable"?
<phylogenesis> "scp -r /files/Music phylogenesis@192.168.2.100:/home/phylogenesis/backup/Music"   returns   "ssh: connect to host 192.168.2.100 port 22: No route to host\nlost connection"
<PhotoJim> "ip route" ... ?
<phylogenesis> 192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.101         10.32.2.0/24 dev eth1  proto kernel  scope link  src 10.32.2.55  metric 2         default via 10.32.2.1 dev eth1  proto static
<phylogenesis> I'm sorry, that's on my client machine
<phylogenesis> so is the "no route to host" message
<phylogenesis> is that what you were asking for?
<phylogenesis> on the server I get:  "192.168.2.0/24 dev eth0  proto kernel  scope link   src 192.168.2.100\ndefault via 192.168.2.1 dev eth0  metric 100"
<phylogenesis> twb, someone in #fedora suggested that it's an sshd issue
<twb> Sorry, I'm a bit busy
<twb> phylogenesis: I assume you try ping(1) when ssh fails.  Does ping work?
<twb> Does your route table change before vs. after the failure?
<twb> Are you transferring data with scp, or doing something more involved with ssh directly?  In the latter case, perhaps you have an idle connection that is being interrupted.
<phylogenesis> ping 192.168.2.100 returns "Destination Host Unreachable"       whereas it works when I first setup the connection, before I transfer any files
<twb> I'm also assuming you're checking BOTH ends of the connection -- have you confirmed that the machine you're connecting the Ubuntu server *to* is not borked?
<phylogenesis> np about being busy, I'm patient
<phylogenesis> Yeah I went and asked about that, and am trying to talk to them at the same time, which is pretty hard :-P
<phylogenesis> twb, How can I check my routing table?  I have standard internet going via wireless at the same time (on the client (fedora) machine only).  I'm transferring with scp and not doing anything else with ssh.
<twb> phylogenesis: route -n
<phylogenesis> What should I see there?  I see two connections:  Destination=192.168.2.0 and 0.0.0.0 (I don't see 192.168.2.101 anywhere)
<twb> pastebin it
<twb> Or better, compare it before and after the failure
<phylogenesis> Okay, but that'll take a bit.  (gotta restart, get it to work, and then fail)
<phylogenesis> I'll post when I have that
<phylogenesis> twb, sigh... How can I tell if my computer can connect to the internet?  I tried sudo apt-get install ethtool and it failed to connect.  I'm thinking, gee maybe this has something to do with it?  I'm connected to the internet via an ethernet cable (yet another one).
<phylogenesis> ping www.google.com gives me "Destination Host Unreachable" error
<phylogenesis> *errors*
<twb> You have two NICs?
<phylogenesis> Nvm, it gives me nothing, but ping 66.102.7.104 does (which is googles IP according to this computer)
<phylogenesis> I have one onboard ethernet nic and a wireless one
<phylogenesis> I dare not try to connect wirelessly as the school's wireless is very difficult to get working on any OS
<phylogenesis> and so far I can't connect with my own school credentials (right now I'm on wireless with my client machine thanks to a friend putting his in).  However, the wired connection works immediately and easily on my client machine when I use it.
<phylogenesis> twb, it failed again.  I plugged in my server to the internet via ethernet.  I changed /etc/network/interfaces to use iface eth0 inet dhcp, then downloaded updates and ethtool.  I then restarted the server and ran sudo ifconfig eth0 192.168.2.100.  On the client I ran sudo ifconfig eth0 192.168.2.101, then "scp -r /files/backup phylogenesis@192.168.2.100:/home/phylogenesis".  It ran for about ten minutes then stopped with: "[DB]_Bleach_182_[
<phylogenesis> 1BD395A7].avi                  80%  117MB   0.0KB/s - stalled"  The routing table on the server was the same before and after, I forgot to check the routing table on the client before, but the after one is listed:  http://pastebin.com/dbcf823
<phylogenesis> ifconfig on the client still tells me inet: 192.168.2.101 for eth0
<phylogenesis> neither routing table changed
<samd> hi, I'm trying to set up an IRC server, but I can't connect to it from the outside. I've already forwarded port 6667, any ideas?
<twb> samd: forwarding the port, and then connecting to 127.0.0.1, should be sufficient.
<twb> samd: assuming, of course, that the IRC daemon is running and listening to lo
<samd> twb, i have  to be connected to 127.0.0.1, from the server in order to connect from the outside?
<twb> samd: OK, suppose your server's name is "fs", and it's accepting ssh connections from anywhere, but only accepting loopback connections to the IRC server.
<twb> Then ssh fs -xfL 6667:127.0.0.1:6667 && irssi -c 127.0.0.1
<samd> ill try that
<Boohbah> twb: if samd connects to the irc server on the localhost interface there is no routing or port forwarding needed. unless you want it accessible outside
<samd> right, so that command basically connected me through ssh and then connected to the IRC server as local?
<samd> Boohbah, twb, yeah, I want it to be open to some friends.. is there any other port supposed to be opened/routed apart from 6667?
<twb> samd: the command I gave will connect you to the IRC server from anywhere, without opening access to the IRC server to the world.
<twb> It assumes you already have a shell account on the box, of course.
<twb> If you WANT to let arbitrary people connect, you only need port 6667 open AFAIK.
<samd> twb right, I'll check on that then. yeah, I have an ssh account on the server, it's an old computer running as a personal server here at home
 * Boohbah missed the ssh proxy part
<uvirtbot> New bug: #420957 in samba (main) "package libpam-smbpass 2:3.3.2-1ubuntu3.1 failed to install/upgrade: subprocess post-installation script returned error exit status 139" [Undecided,New] https://launchpad.net/bugs/420957
<uvirtbot> New bug: #421005 in freeradius (universe) "can't compile version from repository (on 64bit)" [Undecided,New] https://launchpad.net/bugs/421005
<uvirtbot> New bug: #408306 in dhcp3 (main) "Wireless not connecting to AP, no IP address assigned through DHCP" [Undecided,New] https://launchpad.net/bugs/408306
<roxy09> hi there, does somebody know about Pound or another web proxy server?
<mattt> roxy09: a little?
<roxy09> i need to install a webmail server and I would like to know which proxy or other extra web tools I need to install to make access safe and fast
<mattt> roxy09: web mail server?
<roxy09> yes
<mattt> roxy09: are you talking about squirrelmail or something to that effect?
<roxy09> hi there, does somebody have experience setting up an email server with security stuff? i mean antivirus, antispam, etc?
<roxy09> also, which is the best mail delivery agent?
<roxy09> another question: amavisd-new or greylisting?
<NorthByNorthWest> Anyone here who can help me with an SMB-issue... the samba channel is pretty dead...
<lamont> ScottK: 2.6.5 already?  sigh
<lamont> this weekend is swamped - prolly early next week
<slap> I'm reading the manual, and i'm not sure to understand what the line ( domain Local domain name ) is for in the resolv.conf. Can someone help ?
<slap> I understand that nameserver retrieves the server by its IP
<Boohbah> slap: that is the search domain appended to any domain names not found at your resolver
<Boohbah> slap: for example, an internal domain
<slap> that's what i'm trying to do...
<slap> So if I have a mynetwork.lan...
<Boohbah> search my.local int.my.com
<slap> And I look for mycomputer, the resolver will try to resolv mycomputer.mynetwork.lan ?
<Boohbah> yes
<slap> But if there's already a nameserver 192.168.1.1, do I still need the line domain, or resolv will try with the default server address to map a local machine?
<Boohbah> no, if you have DNS records for mynetwork.lan on your NS at 192.168.1.1 you don't need the search line
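What the exchange above boils down to, as an example /etc/resolv.conf (the domain name comes from slap's example):

```shell
# /etc/resolv.conf
# resolver that holds the records for mynetwork.lan
nameserver 192.168.1.1
# appended to bare names: "mycomputer" is looked up as
# mycomputer.mynetwork.lan before being tried as-is
search mynetwork.lan
```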
<slap> Right to the point. Thanks a lot.
<Boohbah> welcome
<mini_tis> anyone here having odd PAM login issues since the latest patches in LTS 8.10?
<mini_tis> hmm I mean 8.04
<tosh> hi everyone
<ScottK> lamont: Yep.
<tosh> anyone know how to get procmail to run on messages after amavis has filtered them?
<tosh> I want spam messages to go into user's junk mail folder
<MatBoy> I have a VPN router in front of a ubuntu box and I want to give users rights on shares and so on when they VPN to it... what should be the best way for this ? users use windows clients
<tosh> does the router handle all the vpn login stuff?
<tosh> anyone know how to get procmail to run on messages after amavis has filtered them?
<_ruben> tosh: that'd be default behaviour for postfix configured with amavis as content_filter and procmail as delivery agent
<tosh> yeah
<tosh> _ruben, I don't think the issue is with amavis, but that postfix isn't using procmail for the virtual mailboxes
<Abdullah9> i need help guys
<Abdullah9> ?
<Abdullah9> any one ?
<Abdullah9>  ?
<tosh> Abdullah9, whats the problem?
<Abdullah9> i installed a new driver for my wireless adapter, "rt2870", but it seems it doesn't work
<tosh> where did you get the driver?
<Abdullah9> http://www.ralinktech.com/ralink/Home/Support/Linux.html
<tosh> did you try just going to System->Administration->Hardware Drivers to see whats there?
<Abdullah9> yes
<tosh> is it listed there?
<Abdullah9> i can't find it
<tosh> is this a laptop?
<Abdullah9> yes
<Abdullah9> but i am useing USB port
<helix2301> whats up guys?
<phylogenesis> Hi, I'm having trouble with a connection timing out.  I have a laptop running Ubuntu Server 9.04 (server).  I have another laptop (client), this one, which runs Fedora 11, that I try to connect to the server with via a direct crossover ethernet cable.  Both machines use dhcp so first I run sudo ifconfig eth0 192.168.2.101 on the client and sudo ifconfig eth0 192.168.2.100 on the server.  I then use scp to move files from the client hdd to the
<phylogenesis> server hdd, and it works for about five minutes, but then stalls.  If I attempt to rerun the command, I get "No route to host".  How can I prevent the connection from timing out?
<phylogenesis> Both machines are fully updated, and I'm attempting to copy over 20GB of data.  However, I've also tried using Nautilus in Fedora 11 to transfer data via ssh and it works the same way.  Just as with scp, it can copy up to a couple GB before stalling, but even if I copy small chunks at a time, it will eventually stall on one of them.
<simplexio> phylogenesis: .. could be a few things
<phylogenesis> I attempted the copy with neither computer connected to the internet and both having the wireless cards switched off, but the exact same thing happened.
<phylogenesis> simplexio, please, any ideas are welcome
<simplexio> phylogenesis: is dhcp still running? sounds like you manually defined the ip. and did you define a default route?
<simplexio> if the net stuff is ok, i would next check that you are moving files to a linux fs (those all support big files) and not something like fat
<phylogenesis> I don't think I did.  Although, initially the server was using iface eth0 inet static\naddress 192.168.2.100\n etc
<phylogenesis> simplexio, both machines are linux, with the client having a Windows 7 partition, but that isn't where the files are (they are on an ext4 partition I believe)
<simplexio> phylogenesis: but it shouldn't actually matter.. if you define an ip + mask for an interface, then that interface gets the route for that subnet
<simplexio> phylogenesis: does dmesg give anything funny about network interfaces
<phylogenesis> Can you explain what exactly to do?  (I just spent about 10 hours on this yesterday, so clearly I don't know exactly what to put where)
<phylogenesis> Can I try dmesg without the connection being in place?
<phylogenesis> Unfortunately, my wireless reset on me and I cannot reconnect so I either have my internet connection here, or the crossover connection.
<simplexio> phylogenesis: at any time.. dmesg tells you about stuff that happens in the kernel, so if for some reason a network interface dies or something, it's there
<phylogenesis> on which machine, the server?
<simplexio> phylogenesis: or both
<phylogenesis> what am I looking for, and would there be anything considering I have not retried the transfer since both computers were restarted?
<phylogenesis> (Btw I have physical access to the server)
<simplexio> phylogenesis: something about eth0/1 .. or about ssh server
<simplexio> phylogenesis: then you could of course use mii-tool to check that the cards have negotiated the right speed
<phylogenesis> I did dmesg | grep eth0 and I got this line for some reason: "eth0: Broadcom BCM4328 802.11 Wireless Controller 5.10.91.9"    even though this is my ethernet card that I'm connected to the internet with right now
<phylogenesis> iwconfig says: "eth0      no wireless extensions."
<simplexio> phylogenesis: you probably have eth0 and eth1...
<phylogenesis> on my client machine, yes
<phylogenesis> I'm checking the server dmesg output now
<simplexio> what mii-tool says about speed
<simplexio> phylogenesis: btw... ubuntu pins interface names across boots, so that they stay the same at every boot
<phylogenesis> Client:  "SIOCGMIIPHY on 'eth0' failed: Operation not supported"   (even as root):   Server:  "eth0: no link"
<simplexio> phylogenesis: you should get a link up/down notice from dmesg if you unplug the wire
<phylogenesis> ifconfig on the server gives eth0 and lo.  on the client it gives eth0, eth1, and lo.  I have no idea what eth1 is for.
<phylogenesis> checking
<simplexio> phylogenesis: client probably has wireless and wire network cards on
<phylogenesis> yes, but wouldn't that be wlan0?
<simplexio> dunno.. it doesn't need to be named wlan
<phylogenesis> Oh, you're right, even iwconfig shows it as eth1, anyway:   On the client dmesg gives me "eth0: link down\neth0 link up", and the server gives "eth0: link down".       However, they are not connected right now.  I'd have to disconnect from the internet to connect them.  Should I test it now?
<phylogenesis> More specifically: the last lines of dmesg on the server are:  "[##.###] eth0: link down" then "[###.###] ACPI Error (evevent-0303): No installed handler for fixed event..."
<phylogenesis> Anyway, I think I'm getting off track.  What should I test?  Perhaps connect the computers, then set the IPs as before, then run mii-tool and pastebin the output?
<phylogenesis> should I set my server to have a static IP (in /etc/network/interfaces) instead?
<simplexio> phylogenesis: probably better
<phylogenesis> simplexio, should I also do that on the client?
<simplexio> phylogenesis: yeah
<phylogenesis> Any idea where to set it in Fedora 11?
<phylogenesis> I'll figure it out or ask elsewhere if not, I'm just wondering
<simplexio> phylogenesis: just run /etc/init.d/networking stop, then do ip addr add 192.168.0.1/24 dev eth0 (or whatever eth interface it is) and the same on the server
<simplexio> phylogenesis: no idea.. the easiest way is probably just to call the network init script with stop
<simplexio> phylogenesis: or if you want you could just add an eth alias to those interfaces and define your own network on that
<phylogenesis> How would that help?  (I really mean to ask, not to suggest you are wrong)
<simplexio> phylogenesis: i have noticed that ubuntu doesn't like me if i define stuff manually with ifconfig, but ip addr works fine
<phylogenesis> Can I have my old dhcp configuration, then use the alias for a static one?
<simplexio> phylogenesis: if dhcp is still running when you define the interface manually, it could change back to some default after the lease times out
<simplexio> phylogenesis: yes
<phylogenesis> simplexio, now I'm confused :-/  What should I be trying?  The static IP on both, the dhcp with alias, replacing the dhcp with the alias?
<simplexio> phylogenesis: in ubuntu, /etc/network/interfaces.. line: auto eth1:1 \n iface eth1:1 inet static .... etc etc
<simplexio> in that case at least the dhcp stuff won't be the reason why it doesn't work
<simplexio> phylogenesis: i assume that your wired interface is eth1; in that case you define those auto eth1:1 lines on both systems (no idea how to do it in fedora)
<phylogenesis> and I use eth1 even though ifconfig doesn't show eth1, right?
<phylogenesis> my wire interface is eth0 on both
<simplexio> phylogenesis: no, you use whatever your wired interface is.. in that case eth0:1
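simplexio's alias-interface idea, as a concrete /etc/network/interfaces fragment: keep DHCP on eth0 and add a static address on the eth0:1 alias (addresses are the ones from the conversation):

```shell
# /etc/network/interfaces on the server
auto eth0
iface eth0 inet dhcp

# alias interface carrying the static crossover address;
# this way a dhcp timeout can't take the static address down
auto eth0:1
iface eth0:1 inet static
    address 192.168.2.100
    netmask 255.255.255.0
```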
<phylogenesis> Okay, I will try that and come back.  It'll take a good 20 minutes I think.  Thanks.
<simplexio> phylogenesis: and check that the wire is ok
<phylogenesis> How?
<phylogenesis> I only have one crossover, and I can't even ping each other using standard ethernet.
<phylogenesis> This'll be my new interfaces file, is this correct or am I doing something stupid :-P:    http://pastebin.com/d22a5ce85
<simplexio> pretty much like that
<phylogenesis> and how do I indicate that the scp should use eth0:1 instead of eth0?
<phylogenesis> will the server just know to use that when it receives input from the connection pointed toward the static IP?
<simplexio> phylogenesis: scp user@host , just like before
<simplexio> you just keep your old dhcp config on the side and "working" when you define another ip on the same interface
<phylogenesis> I'm gonna try to figure out the fedora side of this now
<simplexio> may or may not fix the problem :) it's computers
<phylogenesis> Should I use address=192.168.2.101, subnetmask=255.255.255.0, and gateway=       for the client?  In network configuration, I chose eth0, which is dhcp, but there is a tab "Route" that says: "Static network routes" so I clicked to add one and it asks for those things.
<simplexio> sure
<Nafallo> andol: what's the magic for disabling recommends to be depends again?
<Nafallo> andol: nvm. found it.
<phylogenesis> Here is the output from "sudo ethtool eth0" on both machines:    using mii-tool on the client didn't work so someone suggested ethtool   http://pastebin.com/d54b6387d
<phylogenesis> simplexio, sorry, I left a different channel and it totally disconnected me instead
<simplexio> phylogenesis: use irssi ;)
<simplexio> phylogenesis: it seems that it's negotiated just right.. is it still doing those stalls?
<phylogenesis> simplexio, I might, I don't like xchat much.  As for the stalls, yes.  This time it didn't take very long at all, maybe one minute.  I didn't start the scp until being connected for a few minutes.  It seems like it disconnects after a certain amount of time from being connected.
<phylogenesis> Also, I still had to do the ifconfig thing on both machines before I could even ping one another
<simplexio> phylogenesis: it shouldn't be that hard...
<simplexio> phylogenesis: i mean disable dhcp / manually configure the interfaces to the same subnet, (maybe restart the openssh server) and use scp * user@host
<phylogenesis> simplexio, as in, "figures" or literally
<simplexio> maybe fedora has some broken config :)
<phylogenesis> I worked on that last night, according to the people on IRC in #fedora, I have everything set up like I should
<simplexio> i mean, i have two ubuntu boxes in use here and i haven't seen that kind of problem for a long time
<simplexio> phylogenesis: dmesg or cat /var/log/syslog.0 doesn't tell you anything funny ...
<simplexio> phylogenesis: do you have TCPKeepAlive yes in sshd_config ... that's pretty much the only option that could affect it
<phylogenesis> nothing that means anything to me.  the last line of dmesg is eth0: link down, but I had to disconnect the two to get back on here
<simplexio> phylogenesis: does ifconfig report about collision or overruns
<phylogenesis> all 0
<phylogenesis> (checking TCPKeepAlive)
<simplexio> then i have no other idea except that your network cards are bad or their drivers are bad
<phylogenesis> where is sshd_config?
<simplexio>  /etc/ssh/
<phylogenesis> I see "TCPKeepAlive yes" :-/
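Beyond the server's TCPKeepAlive, client-side application-level keepalives are sometimes worth trying for stalling transfers; a sketch for ~/.ssh/config on the client (the interval values are illustrative):

```shell
# ~/.ssh/config on the client
Host 192.168.2.100
    ServerAliveInterval 15   # send an in-band keepalive probe every 15s
    ServerAliveCountMax 3    # drop the connection after 3 unanswered probes
```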
<phylogenesis> Well, I'm baffled.  Thank you for all your time.  I'm gonna try it with a friend's switch.  Hopefully that'll help...
<simplexio> phylogenesis: i'm just testing it, and it just works: scp -4r user@host: .
<phylogenesis> What is -4r?  (I use just -r)
<simplexio> force ipv4 addr
<phylogenesis> Mind that I've had it run for up to five minutes and transfer more than a few GB of data before failing.
<phylogenesis> Any chance that would help?
<phylogenesis> Also, do you know of a way I can test the cable itself?
<simplexio> if dmesg doesn't say that the hd timed out, that shouldn't be the problem
<simplexio> you could try running rsync
<simplexio> at least it can restart where it left off
<simplexio> btw.. is that few GB one big file or many small ones?
<simplexio> im testing with many small files
<phylogenesis> No thanks, I want to get it working.  Who knows what other problems it'll cause if I don't.
<phylogenesis> it's both, I have tons and tons of small files plus several very large ones
<simplexio> i can test with big file
<phylogenesis> I may ask you to later, but I'm going to try one more time with -4r and if that doesn't work I'm gonna head to a friends place and have him help.
<simplexio> between 2 ubuntu boxes, to see if it does the same on my computers
<phylogenesis> Thanks a million though, I'll let you know what happens if you wish.
<simplexio> 1% 2148MB  26.7MB/s 1:09:31 ETA....  going strong
<phylogenesis> nice connection, the best mine showed was 20MB/s, anywho I'm gonna try that -4r thing now, I'll be back on later
<linkxs> hi, i have a few questions regarding mail servers and ubuntu
<ScottK> !ask | linkxs
<ubottu> linkxs: Please don't ask to ask a question, simply ask the question (all on ONE line, so others can read and follow it easily). If anyone knows the answer they will most likely reply. :-)
<linkxs> first of all, how would i make my ubuntu server forward to another smtp server, for example, i want to specify smtp.linkxs.org in the mail client, and my server would forward to smtp.sbcglobal.net
<ScottK> It depends a lot on what mail transfer agent you select.  We generally recommend Postfix.
<linkxs> I've just finished installing dovecot
<linkxs> as far as i understand, dovecot doesn't do smtp
<ScottK> Dovecot is the mail delivery agent.
<ScottK> Postfix does.
<linkxs> so should i install postfix alongside dovecot and configure it to forward to another smtp server?
<ScottK> Yes
<ScottK> However ....
<linkxs> however...?
<ScottK> Any spam you forward is going to look to sbcglobal like it came from you, so if you don't want to get blacklisted by them, make sure you have that worked out first.
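The forwarding setup being discussed can be sketched in a couple of Postfix commands (hedged: `smtp.sbcglobal.net` is just the relay named above, and whether it accepts your relayed mail is exactly the blacklisting concern ScottK raises):

```shell
# Relay all outbound mail through the upstream SMTP server. The square
# brackets tell Postfix to skip the MX lookup and connect directly.
sudo postconf -e 'relayhost = [smtp.sbcglobal.net]'
sudo service postfix restart
```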
<linkxs> ah
<linkxs> good point
<linkxs> thanks
<linkxs> i think i might just get lazy and specify smtp.sbcglobal.net
<linkxs> thanks though
<linkxs> another question, i've just installed dovecot (as i said), and I want to configure squirrelmail. #ubuntu threw this link at me: https://help.ubuntu.com/8.04/serverguide/C/email-services.html about dovecot, but i don't see anything for squirrelmail there
<linkxs> is there a good tutorial for squirrelmail?
<ScottK> I haven't setup squirrelmail.
<linkxs> ok
<linkxs> another question.. after I installed dovecot, I tried doing 'telnet 192.168.1.8 pop3s' from another computer
<linkxs> Trying 192.168.1.8...
<linkxs> Connected to 192.168.1.8.
<linkxs> Escape character is '^]'.
<linkxs> is what i got, but i can't quit it now
<linkxs> ctrl+] does nothing, just prints it
<linkxs> same goes for imaps
<linkxs> ok, another question: i've followed this tutorial: https://help.ubuntu.com/8.04/serverguide/C/dovecot-server.html , what else do i need to do to be able to use, let's say, imap?
<tosh> dovecot does imap too
<linkxs> yeah, i know
<linkxs> i plan to use imaps and pop3s
<tosh> oh
<tosh> you have to set up certs for that
<linkxs> i did
<linkxs> self-signed
<tosh> ok
<linkxs> but what else do i need to do after that tut?
<spowers> linkxs: squirrelmail acts as an imap client
<tosh> hang on
<tosh> hmmm
<tosh> that tutorial doesn't have much in it
<tosh> but it should work
<tosh> yea its got protocols = pop3 pop3s imap imaps
<linkxs> spowers: i know, but before i deal with squirrelmail, i want to get dovecot to work with, say, thunderbird
<linkxs> tosh: i only enabled pop3s and imaps
<tosh> and the ssl stuff is there
<linkxs> yes, i did do the ssl stuff
<tosh> ok
<tosh> so it should work
<linkxs> don't i need to create users?
<tosh> well yeah
<linkxs> how would i go about doing that?
<tosh> you have postfix or some other mta?
<linkxs> uhm, no
<linkxs> i was planning on using squirrelmail
<tosh> how are you actually receiving mail?
<linkxs> dovecot?
<linkxs> as i understand, dovecot is the pop3 server
<tosh> yup
<spowers> dovecot will read mail from a maildir or mbox mail store and serve it to imap or pop clients
<linkxs> therefore, it handles it
<spowers> you need some way to get mail into those mail stores
<spowers> e.g postfix
<linkxs> oh
<tosh> but its not a mail transfer agent
<linkxs> ok
<linkxs> gotcha
<spowers> postfix is the mta, i think people refer to dovecot as an MDA
<spowers> mail delivery agent
<spowers> is that correct?
<tosh> it just takes the mail thats on the server and sends it to the user
<linkxs> gotcha
<linkxs> so i can use squirrel mail as the MTA?, right?
<tosh> but you have to get mail onto the server somehow
<linkxs> ok
<spowers> squirrel mail is a MUA
<spowers> mail user agent
<tosh> nah squirrel mail is just a web interface
<linkxs> ook
<linkxs> ok
<linkxs> so postfix it is
<linkxs> ima go install it then
<spowers> (rest of world) -> emails you via postfix -> delivers mail to mbox or maildir mail store -> dovecot grabs that and shoves it at the mail client -> thunderbird or squirrelmail shows the users their mail
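spowers' pipeline can be made concrete with a minimal, hypothetical Postfix-side fragment (the hostname is a placeholder, not from the discussion; the matching Dovecot setting is shown as a comment):

```shell
# Sketch: have the MTA (Postfix) deliver into per-user Maildirs, which
# the MDA (Dovecot) then serves to IMAP/POP3 clients like Thunderbird.
sudo postconf -e 'home_mailbox = Maildir/'        # deliver to ~/Maildir
sudo postconf -e 'myhostname = mail.example.com'  # placeholder hostname
sudo service postfix restart
# Dovecot must read the same store; in its config that would be:
#   mail_location = maildir:~/Maildir
```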
<linkxs> ok
<spowers> and squirrelmail will also attempt to use your local postfix server as a way to get outbound mail out, but it's probably configurable in SM just like for thunderbird or eudora or outhouse
<tosh> hey anyone know how to get procmail recipes to work with virtual addresses in postfix?
<linkxs> i'm using this tut for postfix: https://help.ubuntu.com/8.04/serverguide/C/postfix.html , and some things in there don't appeal to me
<linkxs> for example, i think it wants me to use procmail
<linkxs> and doesn't think i have a domain name
<linkxs> so it asks me to put NONE in the spot of a domain name
<linkxs> should i just follow the tut?
<tosh> are you using ubuntu 8.04?
<MatBoy> what is a good way to use ubuntu-server with windows clients and a VPNbox in front of the server ?
<linkxs> tosh: yes
<MatBoy> I want to secure folders...
<tosh> is the vpnbox able to get the windows users on the network or are you going to use ubuntu for that?
<MatBoy> I'm using the vpnbox for that, I think the linux box would make it easier indeed
<tosh> ok so you have the windows users on the network?
<tosh> not sure how windows vpn works but it would probably have another network device listed for the vpn
<tosh> make sure that device gets a proper ip
<tosh> hopefully the vpnbox uses dhcp so your ubuntu server will get an ip on the same network as the windows users connected to the vpn
<tosh> once you have that you can set up samba on ubuntu to share files
<tosh> do you have a windows domain controller on your network?
<MatBoy> tosh: yes, I think I need samba for sure
<MatBoy> tosh: nope, I can let samba do that... but how do I need to login.. that is the question... after the VPN has started
<tosh> hmmm
<tosh> yeah you can add users and groups in samba
<tosh> and configure samba to allow different permissions to different shares
<tosh> I think thats what you want
<quizme> how do you determine how much ram a process is using?
<genii> quizme: "top"  ?
<quizme> free -m ?
<linkxs> conky
<quizme> how about if i know the PID
<quizme> how do i get memory usage just for that PID ?
<quizme> in MB ?
<jbbarnes> I have let my ubuntu server (gutsy) reach end-of-life, so apt-get no longer works. What are my options for upgrading my existing server rather than overwriting and reconfiguring? Is there any way to dist-upgrade?
<danbhfive> jbbarnes: https://help.ubuntu.com/community/EOLUpgrades#7.10%20to%208.04%20(Gutsy%20to%20Hardy)
<jbbarnes> Thank you. I am reading that now.
<genii> quizme: top -p###     where ### is pid usually works
<quizme> genii thanks!
<jbbarnes> danbhfive: Okay, I have read through the upgrade instructions. It was straightforward. Question: How risky is this? Generally pretty safe, or is there a significant risk of hosing the system with a failed upgrade?
<danbhfive> jbbarnes: In theory, lots of people went through upgrades with those packages and did ok.  The only difference now is that those packages are no longer being updated with security fixes
<jbbarnes> I do not have the problematic kernel mentioned in that section, so I think I will go ahead and do it. Thank you.
<giovani> jbbarnes: upgrades often break things
<giovani> I wouldn't recommend it unless you feel comfortable troubleshooting weird issues for at least a few hours
<giovani> (it may go perfectly fine -- but rarely does)
<giovani> quizme: that's what ps is for
<jbbarnes> I think I will be sure to image the drive first so I can fail back if necessary. Thanks for the warning.
<giovani> jbbarnes: sounds like a decent plan
<jbbarnes> The alternative is to start from scratch and migrate data. That's a lot of work, anyway. I might as well give it a shot. I suppose it's possible for it to work perfectly or have just a few problems. I can hope, anyway.
<quizme> giovanni it lists the processes in memory
#ubuntu-server 2009-08-30
<giovani> quizme: yes, I know that ... so?
<giovani> jbbarnes: how is migrating data a lot of work?
<quizme> giovani: you asked what ps is for... oh nm.  i misread what you typed.
<giovani> quizme: ps gives you the output you're asking for
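A sketch of what giovani means, answering quizme's original question (per-PID memory in MB; `$$`, the shell's own PID, stands in for any PID you care about):

```shell
# ps reports RSS (resident set size) in KiB on Linux; divide by 1024
# to get MiB. "rss=" suppresses the column header.
pid=$$
rss_kb=$(ps -o rss= -p "$pid" | tr -d ' ')
echo "$((rss_kb / 1024)) MB"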
<linkxs> i was talking here earlier
<linkxs> i'll explain my situation again
<linkxs> i installed dovecot, and now, postfix. now, will pop3s and imaps work?
<linkxs> if i try to connect to my server from say another computer, from thunderbird
<linkxs> err, uhm
<linkxs> how would i go about making accounts?
<linkxs> and forward certain ones to one account?
<giovani> linkxs: read the postfix/dovecot documentation, on their respective websites
<irt> You should try it, and see if you can't, and them ask for a solution.
<linkxs> well, i was going to try it and then realised that i haven't made any accounts on the server for pop3 or imap
<jmarsden> linkxs: Read the documentation, and try with a normal unix user account first.
<phylogenesis> simplexio, By the way, it completed the transfer successfully.  I never had to even try the switch.  Thanks again for all the help.  (afk)
<linkxs> i'm trying to connect to it from LAN, and it doesn't work.
<linkxs> telnetting to it works
<linkxs> am i not specifying the settings for the server right?
<linkxs> i did send a message to myself on the server, it worked, i received it.
<ycy> how come when I try to create an ext3 partition on a 15000.0GB hard drive, the greatest partition that both fdisk and parted will make is 1806GB?
<giovani> ycy: can you be more clear, I don't understand
<jmarsden> ycy: I don't think many people make 15TeraByte hard drives, are you sure you have one?  In the consumer marketplace I think 2 TeraByte is the biggest I have seen...
<ycy> RAID
<jmarsden> Then that is not "a hard drive"...
<giovani> ok, that's not a 15TB hard drive though
<linkxs> lol
<twb> http://en.wikipedia.org/wiki/Disk_partitioning#PC_BIOS_partition_types
<twb> Can't see anything about upper limits there...
<linkxs> can anybody help me with my mail server?
<twb> !anyone
<ubottu> A large amount of the first questions asked in this channel start with "Does anyone/anybody..."  Why not ask your next question (the real one) and find out?
<twb> ycy: if that really is a problem with the underlying MS-DOS disk label format, you could either switch to some other label format (e.g. gpt), or create a bunch of 1.8TiB partitions and make them all separate PVs for LVM.
<linkxs> i, uhm, did ask the question
<twb> ycy: I guess another possibility is that you accidentally used RAID1 instead of RAID5 or something... check /proc/mdstat that you genuinely have 15TiB of effective storage.
<ycy> i don't want an MS-DOS disk label format
<twb> ycy: MS-DOS is what fdisk ALWAYS uses, and parted uses by default.
<ycy> i want to use ext3!
<twb> ycy: ext3 is a filesystem, not a disk label format.
<ycy> is it mandatory to create a disk label format?
<twb> What I am calling (per parted) "disk labels" are what you would call a "partition table" -- though it needn't be an actual table.
<twb> You *could* make the entire /dev/md1 device a LVM PV, but this is usually avoided because then other tools can think the disk is not in use.
<twb> This is assuming the RAID array is not your boot disk.
<ycy> yes it's not on my boot disk
<ycy> it's a RAID hardware
<ycy> I only see a giant /dev/sdb
<ycy> and I want to create a partition on that
<twb> You can't create a partition (i.e. /dev/sdb1) without using some kind of disk label.
<twb> All I'm saying is that you could use something other than the default ms-dos label format, if there is some kind of inherent limit in the size of its disk labels.
<giovani> yeah, the MBR has a maximum partition size of 2TB afaik
<twb> So use gpt
<twb> parted -s /dev/sdb mklabel gpt && parted -s /dev/sdb mkpart 0 0 100% ext3, or so
<twb> (Except of course you WILL be using LVM here, lest baby Amithaba cry.)
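Putting twb's advice together, a hedged sketch (DESTRUCTIVE: wipes the target disk; `/dev/sdb` and the volume names are placeholders for the 15 TB array being discussed):

```shell
# GPT label instead of the default MS-DOS/MBR one, which tops out around
# 2 TB per partition; then LVM on top, as twb recommends.
sudo parted -s /dev/sdb mklabel gpt
sudo parted -s /dev/sdb mkpart primary 0% 100%
sudo pvcreate /dev/sdb1                    # make the partition an LVM PV
sudo vgcreate bigvg /dev/sdb1              # volume group (name is arbitrary)
sudo lvcreate -l 100%FREE -n data bigvg    # one LV spanning the VG
sudo mkfs.ext3 /dev/bigvg/data
```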
<LiraNuna> I host several websites using virtual hosts on my server
<LiraNuna> currently, because all files are grouped (read only) as www-data, php scripts can read (and write if permissions are set) to folders in other websites directories
<LiraNuna> assume /var/www/www.website.com/stuff.php can read /var/www/www.another.com/index.php
<LiraNuna> how can I restrict - or even chroot apache (like mod_chroot), per virtual host ?
<LiraNuna> I don't really mind about php scripts reading outside of /var/www, as I use AppArmor to restrict read/writes to where needed
<giovani> LiraNuna: first of all ... #apache would be far more relevant ... second of all, I'm not sure that there's a good solution to your problem, presuming you're using name-based virtual hosting
<LiraNuna> sorry for asking in here, I thought it's a generic server discussion as well.
<giovani> it's an ubuntu server room
<giovani> this is a pretty specific, and specialized apache question -- you're more likely to get someone with an answer in #apache, that's all
<LiraNuna> yes, I use ubuntu server; I meant, I thought I'd find answer targeted for ubuntu server here
<giovani> the answer won't be specific to ubuntu
<LiraNuna> I understand, I can adapt
<LiraNuna> thank you for pointing me to the right direction
<giovani> anyway, I doubt there's a good solution
<giovani> virtual separation is never safe
<giovani> if you used ip-based hosting, you could start a separate apache process for each site ... however wasteful of resources that might be
<LiraNuna> I was hoping for an MPM that will chroot+suid
<LiraNuna> mpm-itk seems nice, but it won't chroot
<giovani> have you googled on this?
<LiraNuna> of course
<giovani> http://wiki.apache.org/httpd/PrivilegeSeparation
<giovani> this seems highly relevant
<giovani> and seems to have a bunch of solutions
<javaTN> hi, is anyone availiable here to help me out with a server question
<artificialexit> whats the question?
<jmarsden> javaTN: Don't ask to ask, just ask.  See /topic
<javaTN> Oh sorry, i didnt know if anyone was here, thats all.
<javaTN> I am interested in setting up an "IPCop" like setup on my Ubuntu Server 9.04. Any way to accomplish this w/o a reformat to the IPCop distro?
<giovani> sure ...
<giovani> but you need to be more specific about which features you're interested in
<jmarsden> javaTN: Add whatever applications IPCop has in it to your Ubuntu server... :)
<javaTN> The web based management / monitoring of traffic, etc
<javaTN> Thats the part that stumps me ^
<giovani> javaTN: the web-based management of what?
<giovani> it's probably custom-written
<giovani> feel free to ask them
<jmarsden> For traffic monitoring in real time, check out ntop
<javaTN> Well, since IPCop can be managed through the web, thats what I would like to be able to take advantage of on my ubuntu server
<giovani> but manage -what- through the web was the question
<jmarsden> javaTN: For basic server admin using the web, you can use ebox
<javaTN> just monitor network traffic
<jmarsden> Try ntop :)
<javaTN> I just googled it, it looks actually promising to my needs! im gonna check it out right now as im SSH'd
<jmarsden> sudo apt-get install ntop    and then browse to port 3000 (I think that's the port it uses by default)
<javaTN> ah, i just installed
<artificialexit> yea 3000 is default
<javaTN> dont know if i should be worried 'yet'... but its complaining about my eth0, but my internet is on eth1
<javaTN> "Starting network top daemon: eth0: error fetching interface information: Device not found
<javaTN> "
<jmarsden> Minot config tweak should handle that.
<jmarsden> s/Minot/Minor/
<giovani> that's probably because it presumes you want to use eth0
<javaTN> yeah, im reading the man. lol i hate when i dont have "defaults" as apps would like.
<javaTN> hmm.
<jmarsden> Then either use eth0 as your default interface, or else edit /var/lib/ntop/init.cfg and restart it.
<javaTN> yeah  im gonna do that, i just started it with ntop -i eth1
<javaTN> im afraid if i change eth1 to eth0, then its going to complain in other applications
<javaTN> oh! while im in here. maybe i can make a link? is that possible? link eth0 to eth1
<giovani> no, don't do that
<giovani> just reconfigure ntop ... it's simple
<javaTN> ok haha
<javaTN> ok now does ntop also provide a firewall setup?
<giovani> no ... ntop is a monitoring application
<giovani> the iptables firewall can be managed dozens of ways
<jmarsden> <javaTN> just monitor network traffic     -- that is what you said you wanted :)
<javaTN> yeah im sorry, but also to do a firewall *blush*
<artificialexit> try shorewall
<artificialexit> text file config though...
<javaTN> no web-based management?
<javaTN> im checking out this ebox too as someone mentioned
<artificialexit> webmin has support for shorewall
 * giovani goes to cry about what #ubuntu-server has become
<javaTN> ebox looks promising
<giovani> artificialexit: webmin is totally not supported by ubuntu
<javaTN> but this looks like a distro
<artificialexit> giovani: my bad...
<jmarsden> !ebox
<ubottu> ebox is a web-based GUI interface for administering a server. It is designed to work with Ubuntu/Debian style configuration management. See https://help.ubuntu.com/community/eBox
<javaTN> saweet! thanks im gonna try this now!
<artificialexit> giovani: never use it myself just heard about it
<javaTN> ubottu, ok how can i do this. the guide i was linked to i think has text formatting errors. to install all ebox packages, should i do "apt-get install ebox-*" ?
<ubottu> Error: I am only a bot, please don't think I'm intelligent :)
<javaTN> lol im such an idiot for messaging a bot. shows how often i use IRC.
<javaTN> im confusing myself so much! ah, maybe i shouldnt go with ebox. lol
<javaTN> now heres my other question as far as ntop goes... how can i have eth1 input the connection (from internet world), link to eth0 (to hub)?
<javaTN> anyone here?
<jmarsden> javaTN: ntop is monitoring whichever interface you ask it to monitor.  It will not link things together for you.
<jmarsden> What are you trying to do?
<javaTN> i think i asked that the wrong way. what i want to do is have eth1 input the internet connection
<javaTN> and eth0 share the connection to the network hub
<jmarsden> So... connect the Internet hookup on eth1 and the hub on eth0.  Job done.  Are you wanting the server to act as a router doing NAT for the client machines on eth0 ?
<javaTN> no, just share the connection.
<javaTN> basically the purpose of me using ntop is to monitor all network activity
<jmarsden> So the client on eth0 all have public Internet addresses?  And the server will just route to them?
<javaTN> yeah
<jmarsden> So... hook the Internet up on eth1, the hub and client PCs on eth0, enable routing.
<javaTN> internet cloud >(in from eth1) ntop server (out from eth0)> router > other computers on network
<jmarsden> Other computers all have PUBLIC IPs, right?
<jmarsden> Your ISP assigned you enough IP addresses you can give one to each client PC?
<javaTN> they are going to be LAN ip's. 192.168.64.XXX
<jmarsden> <jmarsden> So the client on eth0 all have public Internet addresses?  And the server will just route to them?
<jmarsden> <javaTN> yeah
<jmarsden> Do not say yes if you mean no!
<javaTN> oh my bad. i wasnt sure what that meant exactly :-x
<jmarsden> Then say so!
<javaTN> lol sorry
<jmarsden> Do you have a hardware device (router ) between the PCs and the Internet connection?  Or are you wanting the Ubuntu server to do that work?
<javaTN> i have a device to do that work
<javaTN> all i really need i guess is to route eth1 input to eth0 output on LAN. and ntop will monitor the eth1 connection
<jmarsden> I think you are confused... you can't have two subnets with the same IP addresses in them and expect routing between them to work.
<jmarsden> Well, you could try setting up the Server as a transparent proxy, but... I don't think that is really what you want or will be able to do...
<javaTN> how would it differ if i had no router and routed with the server?
<jmarsden> Then the server would *be* your router/firewall, and the eth1 address would be your public Internet IP, and the eth0 address would be a local private LAn address, so the two would be different.  Then you could set up the server to do NAT between the two networks.
<javaTN> oh i see
<javaTN>  what if i did a simple masquerade between eth1 and eth0
<javaTN> ?
<javaTN> would that do?
<jmarsden> As long as they are different subnets, yes.  NAT and "masquerade" are in essence the same thing.
<javaTN> subnet is the 255.255.255.0 kind of thing, right?
<javaTN> sorry, im still learning my way around networking
<jmarsden> I can't spend my evening teaching you basic networking.  http://en.tldp.org/HOWTO/Unix-and-Internet-Fundamentals-HOWTO/
<javaTN> jmarsden, http://www.howtoforge.com/nat_iptables do you think that guide would serve my needs to get this setup?
<artificialexit> cd howie
<artificialexit> damn wrong keyboard
<jmarsden> javaTN: I'm not sure, it seems fairly old, and I've never used it.  First learn enough to understand what you are trying to do, and then try to do it :)
<javaTN> well in theory i know what i want to do, but there are limitations apparently of things unbeknownst to me. like subnet issues.
<jmarsden> If you know what do to, then why do you need to follow a guide on howtoforge? :)  OK, try it if you want.
<javaTN> no its not that i know what to do, i know what i want to do.
<jbbarnes> I could use some help making a disk bootable. I copied two partitions (swap, and an ext3) to a replacement drive. The system naturally won't boot, so I think I need to install GRUB. I have booted from knoppix. How do I install grub to make hda2 bootable on this disk?
<jbbarnes> Once in knoppix, I believe I run grub and then issue a command to install it on hda. Is that right?
<kinnaz> i think so
<Maleko> can anyone comment on the last post in this topic
<Maleko> http://forum.openvz.org/index.php?t=msg&goto=37264&#msg_37264
<Maleko> ufw under ubuntu server
<javaTN> does anyone know how i can properly setup my ubuntu server as a checkpoint between my cable modem and router to monitor internet traffic on my network?
<javaTN> is anyone here able to assist me or link me to a guide on how to setup 9.04 ubuntu server as a NAT
<twb> You used to be able to simply aptitude install ipmasq
<javaTN> does thaat still work?
<twb> nowadays I guess it involves adding -t nat -A -i eth0 -o eth1 -j MASQUERADE or so.
<twb> javaTN: TIAS
<javaTN> now, im so confused when it comes to iptables. whats the best way to add that to the table and keep it saved?
<javaTN> iptables -t nat -A -i eth0 -o eth1 -j masquerade
<javaTN> and you're done?
<javaTN> or add it to /etc/rc.local
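twb's rule as quoted is missing the chain name; a more complete hedged sketch of basic NAT (interface roles are assumptions from the discussion: eth0 = LAN side, eth1 = Internet side):

```shell
# Enable IP forwarding, then masquerade LAN traffic out the Internet
# interface. Requires root; rules are lost on reboot unless saved.
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
sudo iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# To persist: dump with "iptables-save" and restore at boot (e.g. from
# /etc/rc.local, as suggested above).
```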
<javaTN> twb, what do you think about firestarter?
<twb> I don't trust iptables abstraction wrappers.
<javaTN> ok
<javaTN> i just found out firestarter is gui based, which is pointless for me being on ubuntu server with no GUI. haha
<Maleko> what about ufw
<javaTN> does that do NAT?
<twb> ufw is an iptables abstraction wrapper
<twb> It *can* do arbitrary iptables rules, but only by a human writing iptables-restore segments into the appropriate file -- not by using the ufw CLI as such
<javaTN> oh i see
<javaTN> ive spent the last 3 hours of my night trying to set this NAT/web traffic analyzer up- got no where. im frusterated trying to configure my server with my router and get them to talk to each other and the internet cloud.
<javaTN> what exactly is a broadcast and how does it differ from the gateway?
<kinnaz> javaTN, http://en.wikipedia.org/wiki/Broadcast_address
<javaTN> thanks
<foxeylady> my machine won't turn off
<foxeylady> sudo shutdown now yields this "recovery menu" screen or something
<foxeylady> i end up having to do a hard power-off every time
<foxeylady> any way to fix this?
<BrixSat> hello
<BrixSat> i always get 7 packages can be updated. 12 updates are security updates. but no matter how many updates i do they dont go away!
<foxeylady> this is what i'm getting when  I try to shut the machine down -> http://www.geeksquadwiki.com/gsw/images/e/ef/UbuntuReset6.jpg
<twb> foxeylady: the "recovery menu" is single-user mode
<foxeylady> twb: ahh, thanks.
<foxeylady> now is there any way to have it actually shut down when i tell it to do so, instead of going right into that?
<twb> I do not know why shutdown is using mode 1 instead of 0
<foxeylady> i think it may actually be shutting down first, and then re-booting into that
<twb> Have you asked dmesg and/or /var/log/* ?
<twb> foxeylady: if that's the case, then you have "single" in your default boot menu item
<BrixSat> is there any way to remove the updates on the main windows?
<BrixSat> when i enter via ssh
<foxeylady> twb: well, if it helps tell you anything useful, if i then do a hard-powerdown and then boot up the machine, it'll give me a regular login screen
<foxeylady> twb: does it?
<twb> BrixSat: what are main windows?
<twb> foxeylady: sorry, I still dunno what's wrong with your box.
<foxeylady> anyone else??
<BrixSat> twb when i login in ssh
<BrixSat> i always get 7 packages can be updated. 12 updates are security updates. but no matter how many updates i do they dont go away!
<twb> BrixSat: that's /etc/issue and /etc/motd, I don't know what updates it -- it was implemented after the last LTS
<BrixSat> last lts?
<foxeylady> BrixSat: *L*ong *T*erm *S*upport release, such as Hardy
<BrixSat> :)
<dayo> trying to set up group-based quotas for home directories mounted on nfs. how does this apply to 8.04? http://www.faqs.org/faqs/sgi/faq/admin/section-68.html
<dayo> !quota
<ubottu> Sorry, I don't know anything about quota
<dayo> hmm
<dayo> shame on u, ubottu. really.
<twb> dayo: the quotas are applied on the NFS server side
<twb> That URL is a bit confusing.  Certainly the rquotad stuff has to be working, but unless you have a firewall in the way, it should be automatic
<dayo> twb: so, if i set up quotas on the nfs server, it applies to client logging in from a desktop?
<twb> Yep
<twb> I suspect quotad is only there so when you get "write failed!" you get a more meaningful message, like "write failed!  You gots no space!"
<dayo> twb: i see. ok, thanks. well time to try it out. i'll let u know how it went.
<dayo> twb: i have this line in my fstab: UUID=d9089c53-5d3b-4693-879d-c81098f33ef4 /srv           ext3    relatime,usrquota,grpquota   I don't want to apply quota to /srv, i want to apply it to a subdirectory of /srv. how do i do that?
<aladin_> hello, i have a big problem.. someone can tell to me the equivalent to "dpkg --force all" in apt?
<twb> dayo: not possible
<twb> dayo: make a separate filesystem for the thing you DO want to have quotas on
<twb> dayo: this is the part where you learn that you should have used LVM when you installed your server.
<dayo> twb: i tried this but it's not working:  sudo quotatool -b -u testman -q 2072MB -l 2584MB -t 7days -v /srv
<dayo> quotatool: Wrong options for -t, please see manpage for usage instructions!
<dayo> twb: man page says number followed by seconds, minutes, days, weeks or months
<dayo> :-/
<twb> Dunno
<twb> dayo: I don't even have a quotatool, just quota and edquota
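A sketch of dayo's intended limits using the quota package's own tools rather than quotatool (assumes /srv is mounted with `usrquota` as in dayo's fstab line, and that quotas have been initialized; the numbers mirror the attempt above, in 1 KiB blocks):

```shell
# One-time initialization, then set soft/hard block limits for testman:
# 2097152 KiB ~= 2 GB soft, 2621440 KiB ~= 2.5 GB hard; inode limits 0 = none.
sudo quotacheck -cum /srv && sudo quotaon /srv
sudo setquota -u testman 2097152 2621440 0 0 /srv
sudo repquota -s /srv     # human-readable report of usage vs. limits
```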
<axio_> does anyone know if python is installed by default?
<twb> axio_: likely; it's a requirement for many things
<twb> axio_: but that might be python2.5-minimal rather than "batteries included"
<Claw6_> good day :)
<Claw6_> as a newbie i need some help with the topic of vhosts
<Nafallo> !de | Claw6_
<ubottu> Claw6_: In most Ubuntu channels, only English is spoken. For German-language help, please visit #ubuntu-de, #kubuntu-de, #edubuntu-de or #ubuntu-at. Just type /join #ubuntu-de! Thank you for your understanding.
<Claw6_> oh yes sure
<Claw6_> httpd (pid 4460?) not running
<Claw6_> (13)Permission denied: make_sock: could not bind to address 0.0.0.0:80
<Claw6_> no listening sockets available, shutting down
<Claw6_> Unable to open logs
<Aison> hello all
<Aison> i've got a software raid1
<Aison> but the problem is, that I can't boot from it
<Aison> somehow /dev/md/0 is not existing on boot
<Aison> it's created later
<Aison> how can I bring the kernel to use /dev/md/d0 as root?
<dayo> twb: and there is absolutely no way for me to add a line in fstab, underneath /srv that says /srv/home ?
<twb> You cannot have quotas for only part of a filesystem.
<twb> Nothing prevents you having a separate filesystem /srv/home
<dayo> :-(
<dayo> twb: yeah, i guess that's my most viable option right now
<twb> 00:13 <twb> dayo: this is the part where you learn that you should have used LVM when you installed your server.
<dayo> twb: so, what i'll do is take free space from /srv and create a partition called /srv/home
<axio_> twb: what would be missing from like the "python light" that you're talking about?
<twb> axio_: ask your package manager
<axio_> oh, right :)
<axio_> packages.ubuntu.com seems to be broken
<axio_> or having problems
<twb> dpkg -l, -L
<axio_> how 'bout RoR?
<javaTN> is it possible to have a samba server outside of a network, yet still accessible by the internal network? ie: samba > router > clients?
<giovani> javaTN: sure, just like any server
<javaTN> the thing that baffles me is, it wont be on the same network necessarily.
<javaTN> cable modem > samba server > router > clients
<javaTN> what does it mean when an ip is typed like this: 192.168.0.0/24? whats the /24?
<ScottK> javaTN: CIDR range.
<javaTN> ScottK, so essentially a CIDR range is a range of IP addresses?
<ScottK> javaTN: Yes.  Google knows all about it.
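The /24 notation asked about above can be unpacked with a line of shell arithmetic:

```shell
# A /24 suffix means the first 24 bits of the address are the network
# part; the remaining 32 - 24 = 8 bits number the hosts, so
# 192.168.0.0/24 spans 192.168.0.0 through 192.168.0.255.
prefix=24
host_bits=$((32 - prefix))
echo "$((1 << host_bits)) addresses in a /$prefix"   # prints "256 addresses in a /24"
```

The same arithmetic answers the later subnet-mask question: a /24 corresponds to the mask 255.255.255.0.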
<garymc> anyone here?
<garymc> just reinstalled chatzilla not sure this is working
<giovani> well it should tell you that you're connected
<giovani> but yes, clearly it's working
<garymc> yeah im just trying to join another channel and it says that i should have identified myself. I have but not working?
<garymc> yeah it works now thanks
<garymc> :)
<javaTN> how do subnets masks work? i am trying to setup my server as a NAT, however should the subnet mask of the router and eth2 (output to router) be the same?
<szczym> hello, i have a very strange problem: lsusb has given me nothing for 20 minutes. i have 2 devices plugged into a hardy server. please help
<szczym> but dmesg lists usb events ...
<MatBoy> I have a md array that failed.. it's the /tmp array, but now it mounted one of the partitions as /tmp... what should be the best to unmount /tmp ? as it's busy atm ofcourse... remap it in fstab and make a temp one on the / partition ?
<slap> I'm using jaunty and I would like to understand how to make a permanent change to /etc/resolv.conf
<slap> I read that '  If using DHCP, NetworkManager is _intended_ to replace default routes,' But I'm not using DHCP !!!
#ubuntu-server 2010-08-30
<dev001> Anybody here successfully launched Ubuntu 10.04.1 LTS Server in a Xen DomU, on a non-pvops Xen 4 host (e.g., OpenSUSE 11.3).  Initial/Stage1 install is fine, but a reboot & attempted launch of LTS Guest via pygrub, I get: ""Boot loader didn't return any data!".  Not any problems for non-Ubuntu guests.  Suspect pvops-related _sumthin_ ...
<jMyles> I need some help with port forwarding.  I'm just trying to forward a port to another computer.
<jMyles> Here are my rules: http://pastebin.com/mbVGzZbN
<tt33l3r>  Having a problem with "Keep Aligned" setting on gnome changing back to set on every reboot. Anyone know how to fix this?
<dominicdinada> Question..... I just flushed my firewall ruleset. And it is set to allow local traffic only. But for some reason Samba is not allowing local connections so what do I do ? Sudo -A -s 192.168.1.0/24 ACCEPT
<tf2ftw> I have a svn folder with perms drwxr-sr-x . My user is in the group that owns the folder yet SVN commit fails with permission error. any clues?
<chrismsnz> did you try logging in/out with that user or restarting apache?
<chrismsnz> sometimes takes a refresh/relogin for the groups to update
<tf2ftw> hmm well im sshing into the server. do i need to disconnect and reconnect?
<chrismsnz> yep
<tf2ftw> ok lemme try. thanks
<chrismsnz> hth
<tf2ftw> nope. that did not work
<tf2ftw> wait, no it's unknown command
<tf2ftw> oh, no that's xchat telling me Unknown command
<chrismsnz> heh
<tf2ftw> I have a user "svn" in a group "svn"
<tf2ftw> I have changed the owner of the svn folder to user svn o.O
<chrismsnz> that will work
<chrismsnz> drwxr-sr-x
<chrismsnz> = owner read/write/execute, group read/execute (plus setgid), world read/execute
<chrismsnz> also, did you mean make that setgid?
<tf2ftw> ah, actually yeah
<tf2ftw> sticky bit
<tf2ftw> i can successfully commit while on the server but not from my client
<chrismsnz> i always thought sticky was +t for some reason
<chrismsnz> ok, how are you committing, over DAV right?
<chrismsnz> using apache?
<tf2ftw> Right
<chrismsnz> you might need to add the www-data user to that group
<chrismsnz> cause apache's going to try and write to that dir on your behalf when you connect via it
<tf2ftw> ah, so i need to add apache to the svn group?
<chrismsnz> yes
<tf2ftw> thats usermod -a -G user group right?
<chrismsnz> usermod -a -G group user
<tf2ftw> right thanks
<chrismsnz> also, I'm not 100% but i think you may need to "chmod g+w <svndir>" on your repository dir to allow apache to write
<tf2ftw> oh wait a minute, user apache does not exist.. did they change the username to apache2  ?
<chrismsnz> i'm not 100% on the way that the sticky bit works
<chrismsnz> if it's ubuntu, it's www-data iirc
<tf2ftw> yeah ubuntu server
<chrismsnz> yeah, the user is www-data
<tf2ftw> same error. maybe an apache restart?
<chrismsnz> yes apache restart
<chrismsnz> can you also show me the permissions of your svn dir
<tf2ftw> drwxrwsr-x
<chrismsnz> should work ;)
<tf2ftw> still same perm error. im going to restart my client and recreate the svn folder
<tf2ftw> ok the new svn has perm r
<ChmEarl> www-data is right
<tf2ftw> drwxr-sr-x
<chrismsnz> right
<chrismsnz> so if apache is part of the SVN group, and your SVN user is part of the SVN group
<chrismsnz> then you should:
<chrismsnz> chgrp -R svn <svn directory>
<chrismsnz> chmod -R g+w <svn directory>
<chrismsnz> which will recursively set the files to be owned by group SVN, then recursively allow the group to make changes
<chrismsnz> your next problem will probably be that files are being created in the svn directory with permissions based on whoever does the saving, which is a new kettle of fish
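The chgrp/chmod recipe above can be tried safely in a scratch directory. 2775 is the octal form of drwxrwsr-x, where the leading 2 is the setgid bit that makes new files inherit the directory's group (the group name "svn" from the discussion is replaced here by whatever group the scratch directory already has):

```shell
#!/bin/sh
# Sketch of the setgid-directory setup being discussed, in a temp dir.
set -e
repo=$(mktemp -d)
chmod 2775 "$repo"             # drwxrwsr-x: rwx owner, rwx+setgid group, r-x world
stat -c '%a %A' "$repo"        # prints: 2775 drwxrwsr-x
touch "$repo/newfile"
# With setgid on the directory, the new file's group matches the directory's:
stat -c '%G' "$repo" "$repo/newfile"
rm -rf "$repo"
```

On a real repository you would additionally run the recursive `chgrp -R` / `chmod -R g+w` pair from above so existing files match.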
<tf2ftw> ok let me try that. the weird thing is i can check out the repo but i cannot commit to it (via dav)
<chrismsnz> I think I solved that with a DAV parameter somwhere
<tf2ftw> i added www-data to the svn group but still cannot commit  via dav
<tf2ftw> let me try your chgrp options
<tf2ftw> ok  so i tried what you just suggested, but still no joy. i logged in and out, and apache restart.
<tf2ftw> i can view the repo via http (with credentials)
<chrismsnz> right
<chrismsnz> is it your DAV configuration?
<tf2ftw> let me pull that up
<chrismsnz> chuck it in a pastebin and i'll take a look
<tf2ftw> ok thanks
<tf2ftw> http://pastebin.com/RrC4UvDY
<tf2ftw> everything else is commented out or not needed
<tf2ftw> such as <location /svn> etcc
<chrismsnz> seems fine, do you have any dav-related errors in /var/log/apache2/error_log?
<chrismsnz> error.log*
<tf2ftw> all permission denied errors XD
<chrismsnz> paste :D
<tf2ftw> http://pastebin.com/1zA09FB0
<chrismsnz> ok
<tf2ftw> i can rm the log to clear it right?
<chrismsnz> who is the owner/group of /var/svn/testfolder/db/ ?
<chrismsnz> just leave it, should be timestamped
<tf2ftw> ok db looks like this
<tf2ftw> drwxrwsr-x
<tf2ftw> which is 2775
<tf2ftw> wait, no its not
<chrismsnz> if you do "ls -ltr" it should show the user and group
<chrismsnz> both are SVN?
<chrismsnz> svn*
<tf2ftw> drwxrwsr-x root svn
<tf2ftw> everything from /var/svn down is root svn
<chrismsnz> can you paste me "cat /etc/group | grep svn"
<tf2ftw> http://pastebin.com/W0x3ch6n
<tf2ftw> wait a min, looks like www-data is not in the svn group
<tf2ftw> wtf
<chrismsnz> yep
<chrismsnz> the svn user is in the www-data group
<tf2ftw> hahaha
<tf2ftw> that's what happened, i had them backwards when i did the usermod
<tf2ftw> sorry bout that
<chrismsnz> np
<chrismsnz> usermod -a -G svn www-data
<chrismsnz> i think is what you need
<tf2ftw> yeah that's the one. lemme try a log out/in and apache restart
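For the record, the argument order that bit tf2ftw, as a sketch (user/group names are the ones from the discussion; the usermod line itself is commented out because it needs root):

```shell
#!/bin/sh
# usermod -a -G takes the GROUP first, then the USER:
#   sudo usermod -a -G svn www-data   # adds user www-data to group svn
# Reversing them (usermod -a -G www-data svn) adds svn to www-data instead,
# which is exactly the mixup above.
# Verify membership afterwards with id; "root" is queried here only because
# that account exists everywhere:
id -nG root
```

Remember the new membership only takes effect for processes started after the change, hence the apache restart.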
<chrismsnz> sorry, back
<tf2ftw> excellent! thanks a lot! im up and going now
<tf2ftw> man, its rare you dont get noob killers when you ask a question
<chrismsnz> haha
<tf2ftw> anyways, thanks again.
<chrismsnz> you caught me in procrastination mode :)
<chrismsnz> sweet - good luck, i hope you don't run into any more probs
<tf2ftw> thanks
<chrismsnz> getting two users part of a secondary group to write nicely to the same dir was a major pita for me
<tf2ftw> hmm.. im trying to break my partner of using FTP all the time.
<tf2ftw> so this should be interesting
<xampart> i did an usb-installation of 10.04 server, using usb-create. why does uname -a show 2.6.32-24-generic?
<ewook> darnit. what apache mod is it that runs not only cgi's as another user, but the "whole" (ie, different domain, different pidowner)
<ewook> I thought it was mpm
<ewook> but I cannot locate it with a2enmod.
<ewook> perhaps a question for #apache
<joschi> ewook: mpm_itk or mpm_peruser
<ewook> joschi: yeah, mpm_itk I thought I just got. but guess not.
<uvirtbot> New bug: #626728 in apache2 (main) "typing error in default-ssl" [Undecided,New] https://launchpad.net/bugs/626728
<uvirtbot> New bug: #626744 in bacula (main) "package bacula-director-mysql 5.0.1-1ubuntu1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/626744
<kaushal> hi
<kaushal> can someone please guide me about https://bugs.launchpad.net/ubuntu/+source/xen-meta/+bug/540110 ?
<uvirtbot> Launchpad bug 540110 in xen-meta "ubuntu-xen-server has broken dependencies (dup-of: 538917)" [Undecided,New]
<uvirtbot> Launchpad bug 538917 in xen-tools "xen-tools is not available in lucid" [Undecided,New]
<kaushal> checking in again for my query ?
<soren> kaushal: I'm not sure what your question is?
<kaushal> soren: is there a fix for it ?
<soren> kaushal: Are you asking if it has been fixed or if it can be fixed?
<kaushal> soren: yeah
<pmatulis> :)
<soren> :(
<soren> kaushal: Well, then yes.
<kaushal> soren: ok
 * soren questions the usefulness of this conversation
<kaushal> so i need to wait for it ?
<kaushal> soren: apologies
<kaushal> soren: is there a irc channel for LP ?
<soren> Sure.
<soren> I'll let you guess what it's called :)
<kaushal> LaunchPad :)
<pmatulis> bzzzt
<kaushal> pmatulis: any thing wrong from my end ?
<pmatulis> kaushal: no
<kaushal> oh ok
<soren> Like other IRC channels, the launchpad channel features a # prefix.
<kaushal> soren: i needed ubuntu specific
<kaushal> soren: so any related to LP bugs should be discussed in which irc channel ?
<soren> kaushal: Are you talking about bugs /about/ Launchpad or bugs /on/ Launchpad /about/ Ubuntu?
<kaushal> bugs /on/ Launchpad /about/ Ubuntu
<soren> It depends on the nature of the bug.
<soren> In some cases, it's this channel.
<soren> Other times, it'll be #ubuntu-desktop, #ubuntu, #ubuntu-devel, #kubuntu-devel, #ubuntu-motu..
<Pici> #ubuntu-bugs ...
<soren> And this is not an invitation to post some question in all of those at the same time.
<xfaf> morning
<soren> Pici: Occasionally, I suppose.
<kaushal> soren: ok
<soren> kaushal: You seriously need to learn to ask more specific questions.
<kaushal> soren: sure
<kaushal> soren: specific questions ?
<soren> epic
<kaushal> i did not understand that
<kaushal> I wanted to discuss about that LP Bug
<kaushal> hence the question
<kaushal> was there something wrong ?
<soren> When you ask a question... Try to be specific.
<soren> Explain what you want, what you're talking about, why you ask, etc.
<kaushal> ah ok
<kaushal> soren: thanks
<soren> np
<cemc> is there a nice logwatcher style something, that will watch auth.log for me and send me an email the moment somebody logs in through ssh ?
<zash> cemc: tail -f /var/log/auth.log | grep something about successful auth | while read line; echo "$line" | mail -s "someone logged in" you@example.com; done
<zash> or something
<cemc> zash: I know I can write my own, but I thought maybe there is one already, something nicer ;)
<cemc> hm, maybe I can add it to fail2ban
<cemc> instead of iptables filtering, it just sends an email
<uvirtbot> New bug: #626812 in unixodbc (main) "unixodbc-dev depends on libqt3-mt" [Undecided,New] https://launchpad.net/bugs/626812
<Crankygeek02> Good morning
<Crankygeek02> Anyone familiar with eGroupware?
<ivoks> soren: ping
<ivoks> soren: when --dns is used with vmbuilder, vmbuilder sets up dns in /etc/network/interfaces as dns-nameserver
<ivoks> soren: and that's great, but there's one problem
<ivoks> soren: to populate /etc/resolv.conf, that requires resolvconf
<ivoks> soren: so, vmbuilder should really write to /etc/resolv.conf or install resolvconf by default
<ivoks> soren: (if you are still maintaining that code :)
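For illustration, the kind of stanza ivoks is describing (a sketch with placeholder addresses); the dns-nameservers line is only copied into /etc/resolv.conf when the resolvconf package is installed, which is his point:

```
auto eth0
iface eth0 inet static
    address 192.168.1.111
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8
```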
<kpettit> Crankygeek02: I've used it a bit
<uvirtbot> New bug: #626822 in mysql-dfsg-5.1 (main) "mysql 5.1 don't want to shut down properly, crashes upgrades. Besides that, works like charm." [Undecided,New] https://launchpad.net/bugs/626822
<Crankygeek02> Kpettit: I am having an issue with the setup. I am passing all the config checks. But when I try and load the site, I am getting an error stating that I have not created the database tables. Click here for setup.
<kpettit> Ah sorry, couldn't help ya there.  You might want to check #egroupware
<Crankygeek02> You can see the error here:  http://www.intellitechgroup.com/egroupware
<Crankygeek02> Nobody answers in that channel  lol
<jMyles> I'm trying to forward a port to a specific computer on my network.  If I run iptables -t nat -vnL, it shows some packets (16 of them, seemingly about one for every time I try) but it doesn't seem to do anything.
<jMyles> My IPtables rules are here (not forwarding properly): http://pastebin.com/Kb9yCDcD
<patdk-wk> heh?
<jMyles> patdk-wk: Is that directed at me?
<patdk-wk> yep
 * patdk-wk wonders where the other half of the problem is
<patdk-wk> like what are you forwarding? from where? to where?
<jMyles> All external requests at port 8000 to a single computer on my network
<patdk-wk> well, I don't see any dnat rules, so ya, it shouldn't work
<patdk-wk> your nat table is also empty, dunno how nat works
<jMyles> patdk-wk: This can be a learning moment for me - I'm so lost.  I don't know what dnat is.  My nat table has a rule under prerouting: 16   776 DNAT       tcp  --  *      *       0.0.0.0/0            24.39.111.23        tcp dpt:8000 to:10.0.0.88:8000
<raubvogel> For those of you using rsyslog, how does the /etc/rsyslog.d dir works? I mean, do you start with the 50-default.conf and then add a, say, 55-mine.conf which adds the changes from the default you want to use?
<patdk-wk> jMyles, your pastbin is missing that
<patdk-wk> and I can't begin to help with improper info
<kpettit> Any recommendations for desktop search?  Need to index or be able to search for 500k files.  So I imagine I'll need something that keeps a database index
<jMyles> patdk-wk: It is on line 18 of my pastebin
<jMyles> http://pastebin.com/657BazSt
<jMyles> patdk-wk: Oh dear, I posted the wrong pastebin before, sorry.
<patdk-wk> I don't get the forward rule you have
<jMyles> patdk-wk: I'm not sure I do either - I cloned and modified it.  It is supposed to say "forward all traffic on port 8000 to 10.0.0.88"
<patdk-wk> actually, it's needed if you don't have a default policy of accept
<jMyles> So, since I have a default policy of accept, no forward rule is required?  Only the prerouting?
<patdk-wk> yep
<patdk-wk> I dunno why you would run default policy accept on a firewall though
<patdk-wk> with internet access
<jMyles> patdk-wk: Well we're not really terribly in need of a firewall anyway.  All I really want is to be able to easily forward traffic.
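For reference, the pieces jMyles ends up needing, sketched in iptables-restore format (the addresses come from the pastes above and should be treated as placeholders; the FORWARD rule is only required when that chain's policy is not ACCEPT, as patdk-wk notes):

```
*nat
# rewrite inbound port-8000 traffic to the internal host
-A PREROUTING -d 24.39.111.23/32 -p tcp --dport 8000 -j DNAT --to-destination 10.0.0.88:8000
COMMIT
*filter
# only needed if the FORWARD policy is DROP/REJECT
-A FORWARD -d 10.0.0.88 -p tcp --dport 8000 -j ACCEPT
COMMIT
```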
<uvirtbot> New bug: #626855 in openldap (main) "package slapd 2.4.21-0ubuntu5.3 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/626855
<jMyles> patdk-wk: I notice that the number of packets for the prerouting and forward rules are > 0 - does this mean they are working-ish?  Also, they are different from each other.  Is this a problem?
<patdk-wk> they mean two totally different things
<patdk-wk> it looks fine to me though
<jMyles> patdk-wk: OK.  So the traffic is supposedly making it through?
<patdk-wk> ya, as far as I can tell
<patdk-wk> use tcpdump
<jMyles> patdk-wk: What can I do, apart from netstat -ta, which doesn't show me any such traffic, to see what it happening on the computer?
<jMyles> ok, tcpdump
<jMyles> on the gateway or the endpoint?
<patdk-wk> heh, everything :)
<patdk-wk> first on your external interface
<patdk-wk> then internal interface
<patdk-wk> then on 10.0.0.88
<binBASH> Moin patdk-wk
<jMyles> patdk-wk: holy heck - it's a jungle!! :-)  Can I grep 8000 and get what I want?
<patdk-wk> sure
<ssureshot> Does ubuntu update packages during releases? Ieee... will lucid server ever have a newer version of samba than 3.4.7 ?
<ssureshot> in its repositories of course
<jMyles> patdk-wk: So grepping shows me nothing that I can identify as interesting.  :-\
<smoser> ssureshot, it is not likely
<patdk-wk> tcpdump -ni eth0 port 8000
<smoser> in general, updates for stable releases are governed by the SRU process (https://wiki.ubuntu.com/StableReleaseUpdates)
<patdk-wk> did you test it, while you had tcpdump going?
<smoser> if you want newer packages for a stable release, the right place is backports https://help.ubuntu.com/community/UbuntuBackports
<ssureshot> so packages are basically as is then.. I was curious just because I've had such an issue with 9.10 and samba 3.4.0..
 * patdk-wk is using samba on karmic and lucid, no issues
<ssureshot> samba worked good but the whole point and print issue was horrible
<ssureshot> thanks for the replies good info
<patdk-wk> hmm, I use it for printing at home
<patdk-wk> dunno about point
<ssureshot> the whole rpc service was broken in 3.4.0 with point and print ... wasn't fixed untill 3.4.5.. I researched that for a few weeks looking for a resolution ..
<smoser> ssureshot, the other option, if you need something newer or older is to do a rebuild of the source for yourself. you can do this locally, or use PPAs.
<smoser> ie, that would allow you to grab maverick's version and build for lucid, or some other combination
<patdk-wk> oh heh, strange
<patdk-wk> running lucid, pointnprint works for my wife's win7 laptop
<patdk-wk> I don't use samba for printing at work, so wouldn't know about more than just that one
<ssureshot> yup lucid works great :) was fixed
<ssureshot> I actually pulled samba from the lucid repository
<ssureshot> to make it work lol
<patdk-wk> karmic is almost dead now anyways
<jMyles> patdk-wk: OK!  So, on 10.0.0.88, I see this stuff whenever I try to hit it on 8000: 11:01:10.731058 IP 10.0.0.197.44208 > 10.0.0.88.8000: Flags [S], seq 4136560425, win 5840, options [mss 1460,sackOK,TS val 30137570 ecr 0,nop,wscale 5], length 0
<G> zul: yeah, just managed to work out bug 619302
<uvirtbot> Launchpad bug 619302 in qemu-kvm "kvm -initrd 'file' dumps core if 'file' does not exist" [Low,New] https://launchpad.net/bugs/619302
<ssureshot> yeah sure is... I just talked my boss into finally making the jump to 10.4 for Support reasons
<patdk-wk> jMyles, that sure isn't a firewall or even dnat issue, you're accessing it directly
<patdk-wk> sounds like whatever is supposed to be on port 8000 doesn't exist
<jMyles> patdk-wk: OK, great.  Thank you for your help!! :-)
<jca1981> Need Help :  my upgrade to lucid failed, getting this error when i do a dpkg --configure -a http://pastebin.ubuntu.com/485784/
<Jimboo> hi ppl need little help, i've installed apache2 with php5 via cgi, but from some reason i have when i try phpinfo() i have virtual directory support disabled... could anyone gimme advice where to enable it?....
<G> jca1981: by any chance are yuo still in a chroot of some sort?
<soren> jca1981: I asked you about this hours ago in #ubuntu-dk :) Are you running in metal?
<soren> s/in/on/
<patdk-wk> hmm, the minimalcd's for vmware are nice
<kpettit> I'm noticing a lot of newer versions of stuff I want is in the maverick but not lucid repositories.
<kpettit> Is there a way I can add packages from that repository without having to upgrade Ubuntu version or downloading packages manually?
<patdk-wk> uprade to maverick :)
<guntbert> !latest | kpettit
<ubottu> kpettit: Packages in Ubuntu may not be the latest. Ubuntu aims for stability, "latest" may not be a good idea. Post-release updates are only considered if they are: fixes for security vulnerabilities, high impact bug fixes, or unintrusive bug fixes with substantial benefit. See also !backports.
<patdk-wk> issues normally come in with the dependencies
<guntbert> patdk-wk: thats not really good advice
<kpettit> yeah I got that.  There is just a couple apps that I needed.
<patdk-wk> gnuyoga, dunno, I have been running maverick since the repository was created
<patdk-wk> it's been ok for me
<kpettit> usually I can find ppa repositories, but sometimes not.  Usually end up doing a lot of googling
<kpettit> patdk-wk, is your system lucid or maverick?
<patdk-wk> both
<patdk-wk> I have like 10 hoardy systems, probably 15 lucid, and 2 maverick
<kpettit> ah ok, cool thanks.
<Pici> hoary or hardy?
<patdk-wk> hardy? 8.04
<patdk-wk> the last of my xen systems
<jMyles> So, it appears pretty clear that packets are getting into my network to the forwarded computer, but they don't seem to be getting back out?  The computer in question sees the packets, but doesn't seem to issue a response.
<SpamapS> jMyles: would you mind restating your question, I don't see it in the backscroll...
<jMyles> SpamapS: I am trying to set up a (temporary) web server on a computer inside my network.  I'm trying to forward some port (any port, really, but 8000 for now) to that computer.  With a lot of awesome help from patdk-wk, I have determined that packets are now making it into that computer.  However, the connection is timing out.  http://pastebin.com/657BazSt
<SpamapS> jMyles: does the web server have a firewall configured?
<jMyles> SpamapS: Well, I ran tcpdump and verified that packets were incoming.  I even installed firestarter (which I think we're all lukewarm about) and watched as the packets came in and were blocked, and then enabled service for port 8000.
<jMyles> SpamapS: I still get messages like this (both input and output, input shown here): Input Dropped: IN=eth1 OUT= MAC=00:40:f4:58:73:8b:00:16:cb:9d:8b:da:08:00 SRC=10.0.0.88 DST=10.0.0.46 LEN=52 TOS=0x10 PREC=0x00 TTL=64 ID=16170 DF PROTO=TCP SPT=36693 DPT=22 WINDOW=1879 RES=0x00 ACK URGP=0
<jMyles> SpamapS: That's on the server.
<patdk-wk> that message is so unrelated, it's about port 22, not port 8000
<ChmEarl> maverick daily build CD is now i386 only. No amd64 for 2 days
<jMyles> Ahh, DPT.  I just learned something.
<SpamapS> jMyles: DPT=22 means ssh
 * SpamapS expresses his lag shame
<jMyles> patdk-wk and I determined that the packets were getting here, on time and in good shape, but it just doesn't work the same as it does if I access it from another computer on the network.  It's stone dead.
<patdk-wk> as far as I know, he has no service on port 8000
<patdk-wk> or, some firewall on it is blocking the replies
<ScottK> ChmEarl: We're in beta freeze so rebuilds are done as needed, not daily.
<ChmEarl> ty- i386 worked well
<jMyles> patdk-wk: Well I am watching the console on port 8000 (a django server / debugger - again super temporary) and it is doing nothing.
<SpamapS> jMyles: you may be running into "the NAT problem" if you're trying to access the external IP from a machine inside the same LAN as the web server.
<patdk-wk> na, his tests were internal on the local lan
<patdk-wk> and his nat rules are good
<SpamapS> is there a SNAT rule for  -s $SOURCE_NETWORK -d $EXTERNAL_IP -j SNAT --from $EXTERNAL_IP ?
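The hairpin rule SpamapS is sketching would look roughly like this in iptables-restore format (note the SNAT target's flag is --to-source, not --from; 10.0.0.1 here stands in for the gateway's LAN address and is an assumption):

```
*nat
# hairpin NAT: LAN clients hitting the external IP get their source rewritten,
# so the server's replies go back via the gateway instead of directly to the client
-A POSTROUTING -s 10.0.0.0/24 -d 10.0.0.88 -p tcp --dport 8000 -j SNAT --to-source 10.0.0.1
COMMIT
```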
<smoser> hallyn, where do you do development of qemu-kvm ?
<smoser> ie, if i want to propose a change, where should i grab source from ?
<cemc> hey guys. in ubuntu 10.04 is there some framework to manage a bunch of servers at once? for example I want to run a certain command or sequence of commands on say 30 servers, not necessarily with cssh. something like: put scripts somewhere, upload script with rsync, run script on servers, get result in mail or something
<zul> cemc: puppet
<cemc> zul: thanks, I'll take a look
<hallyn> smoser: you mean upstream?  git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git
<hallyn> smoser: our tree is at lp:ubuntu/qemu-kvm
<smoser> hallyn, thats what i wanted, i guess. I was wondering if you had a middle resting place for development of the packaged versions.
<smoser> ie, like the eucalyptus branch at https://code.launchpad.net/~ubuntu-virt
<smoser> anyway.
<smoser> next question:  are you expecting a new version of qemu-kvm for maverick ? hallyn
<smoser> i'm asking because if you are, then likely bug 619302 will be fixed magically. but if not, we should get a fix for it (and someone has attached patch to bug).
<uvirtbot> Launchpad bug 619302 in qemu-kvm "kvm -initrd 'file' dumps core if 'file' does not exist" [Low,New] https://launchpad.net/bugs/619302
<cemc> zul: how well is this puppet integrated in ubuntu? like... does it work? :)
<zul> cemc: yes it does afaik
<SpamapS> cemc: puppet is awesome, and works great on ubuntu.
<cemc> SpamapS: the one in lucid too? 0.25.4 ?
<yann2> cemc, works, but client daemon leaks a lot.
<yann2> better run puppet-run than puppetd
<yann2> not sure about lucid though, am mostly using 8.4 - 9.4
<hallyn> smoser: hm, i wasn't planning on a new version, but it's not out of the question
<smoser> by new version i meant new upstream qemu-kvm version
<smoser> ie, 0.13
<hallyn> right, but that's not out yet?
<hallyn> (or is it?)
<smoser> ok. just seeing if we were meaning the same thing.
<smoser> you're much more aware of when it is due and such.
<hallyn> 0.12.5 is still the latest, unfortunately
<hallyn> (just pulled and checked tags)
<smoser> ok. then you should probably pull that patch.
<smoser> if you'd like I can push a branch and request a merge.
<smoser> (the patch is both trivial and upstream)
<hallyn> smoser: looks fine to me
<smoser> so nothing required from me ?
<hallyn> smoser: since i don't have upload rights yet anyway, i'd say just subscribe whatever team does the FFEs to the bug...
<smoser> well, thats not a FFE
<hallyn> oh, true
<smoser> just normal sponsorship
<hallyn> that's the part i still have the hardest time with.  (gotta find how to do a sru for lxc for lucid too)
<hallyn> smoser: all right i'll create a bzr branch and propose it for merging, thx
<smoser> is Daviey around ?
<smoser> hallyn, well, really, you need to get PPU for qemu-kvm and lxc.
<smoser> hallyn, i might suggest you put a "staging" branch in the ~ubuntu-virt team
<smoser> for qemu-kvm
<smoser> then you (and I and others) can push things there, and then when you request kirkland or someone to pull, they just build that.
<smoser> that is what Daviey is doing for eucalyptus and it seems to work well
<hallyn> do i have write access to ~ubuntu-virt?
<smoser> i think so, i thought i'd seen you push to the ppa
<hallyn> smoser: i might do that then i guess
<tynot> help! can't get ubuntu 10.04.1 to save dns settings. just trying to use google's public dns server 8.8.8.8, but it keeps defaulting to the local router which is not running dns.
<delimiter> try removing the resolvconf package
<tynot> delimiter, where does it resolve to after resolvconf's been removed?
<delimiter> you'll need to fix /etc/resolv.conf one more time after removing it
<delimiter> to point at 8.8.8.8
<tynot> so, save 8.8.8.8 and then remove the package?
<Pici> err. A better way would be to modify your /etc/dhcp3/dhclient.conf file
<delimiter> other way around
<tynot> d'oh!
<tynot> pici, does modifying dhclient.conf hold settings better?
<delimiter> to each his own, I personally think having resolvconf running on a server is a dumb idea
<Pici> delimiter: Its a dependency of postfix
<tynot> I just need a working dns to download packages.
<delimiter> no, it's only suggested by postfix
<tynot> at the moment, we're pulling from isp's.
<Pici> oh, rdepends didn't specify :(
<Pici> tynot: Modify/edit the prepend domain-name-servers line in that file so whenever you get a new dhcp address it'll use the thing you specified there instead.
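Pici's suggestion as an /etc/dhcp3/dhclient.conf fragment (8.8.8.8 is the server from the question; `prepend` puts it ahead of whatever the DHCP server offers, while `supersede` would replace the offered list entirely):

```
prepend domain-name-servers 8.8.8.8;
```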
<tynot> i have a static local ip
<tynot> c class
<Pici> oh. Then delimiter's suggestion is probably better.
<Pici> Odd.
<tynot> not so for our purposes
<tynot> so.... change the nameserver line from 192.168.1.1 to 8.8.8.8, for instance... remove resolvconf then we're good to go?
<delimiter> tynot: yes
<tynot> would you mind giving me the syntax to remove resolvconf package?
<ScottK> ttx: http://launchpadlibrarian.net/54104562/buildlog_ubuntu-maverick-i386.cobertura-maven-plugin_2.3%2Bdfsg-1_FAILEDTOBUILD.txt.gz looks like it needs someone who understands a bit about maven to resolve, but would be a good FTBFS for someone to fix (would also solve some depwait too)
<tynot> would you mind providing the syntax to remove resolvconf package?
<tynot> #delimiter the command to remove resolvconf?
<RoyK> apt-get remove --purge resolvconf
<RoyK> --purge is optional, though
<tynot> will try, RoyK, thx.
<tynot> delimiter, Got "Package resolvconf is not installed, so not removed" Hmm.
<delimiter> tynot: strange
<tynot> very.
<lamont> delimiter: I agree with you that resolvconf makes no sense on a server... that's why postfix only suggests it
<delimiter> what else would overwrite /etc/resolv.conf ?
<tynot> but, it defaults back to the local router, 192.168.1.1, when we reboot.
<delimiter> is dhclient running?
<tynot> shouldn't be. how do I check? The local IP was set to static 192.168.1.111.
<RoyK> delimiter: you _are_ running ubuntu server, not desktop?
<delimiter> ps -ax | grep dhclient
<RoyK> delimiter: is there a dns server set in /etc/network/interfaces?
<delimiter> RoyK: fyi It's tynot who is in need of help not me :)
 * RoyK slaps RoyK 
<RoyK> tynot: see question above :)
<tynot> ok, delimiter, what are we looking for in the output to the screen? it says eth4.leases eth4. not sure what to make of that. no ip addresses. OK, RoyK, I'll check.
<tynot> RoyK, no apparent DNS in etc/network/interfaces.
<tynot> royK?
<delimiter> tynot: sounds like you still have a dhclient process around, which could be a problem as it might be overwriting your resolv.conf. Suggest to kill it or reboot to start fresh
<RoyK> tynot: killall dhclient
<RoyK> and dpkg -l | grep dhcl
<tynot> k
<tynot> no dhclient found
<tynot> no dhclient process found
<RoyK> ok, after killing dhclient, try rebooting if convenient
<tynot> sure
<tynot> h/o
<tynot> rebooted. now what?
<RoyK> check /etc/resolv.conf
<tynot> back to local router ip 192.168.1.1
<tynot> I did write out 8.8.8.8 in nano. went back to check b4 rebooted. hmm.
<tynot> it's been years since I admin'd a linux box. is there an ipconfig equivalent in linux?
<tynot> anyone?
<tynot> buehler?
<smoser> (it may be the case that no one knows what 'ipconfig' is)
<tynot> no prob
<tynot> it's been a while for me, so this seems foreign again.
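As an aside, the Linux counterparts of Windows' ipconfig (a quick sketch; `ip` comes from the iproute2 package, `ifconfig` from the older net-tools):

```shell
#!/bin/sh
ip addr show     # addresses on every interface (roughly ipconfig /all)
ip route show    # routing table
# legacy equivalent, if net-tools is installed:
# ifconfig -a
```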
<kees> zul: hey, just wanted to point out nagios-plugins 1.4.14-5ubuntu3; looks like hardening-wrapper got missed in the latest merge. I've fixed it, though.
<tynot> cannot figure why the /etc/resolv.conf file is being overwritten.
<zul> kees: oh...odd thanks
<tynot> delimiter? RoyK? still around? sounds like dhcp is running, but is not showing in processes.
<RoyK> tynot: grep -r resolv.conf /etc
<RoyK> perhaps you can find something from that
<tynot> k
<tynot> brb
<MTecknology> !lts
<ubottu> LTS means Long Term Support. LTS versions of Ubuntu will be supported for 3 years on the desktop, and 5 years on the server. The current LTS version of Ubuntu is !Lucid (Lucid Lynx 10.04)
<tynot> ok. I get: Restart nscd because resolv.conf has changed, although I'm not sure what we're looking for specifically. sorry.
<tynot> one line reads: in an old resolv.conf file exists, restore it
<tynot> oops, that's "IF and old resolv.conf file exists, restore it"
<tynot> bye
<smoser> jjohansen, ping
<dominicdinada> Need some advice. I accidentally knocked the server offline by doing a port scan. if the server is set to rate-limit/drop when a threshold is reached, will it usually resume connectivity after XXX amount of time ?
<dominicdinada> !ratelimit
<patdk-wk> no idea, depends on how you configure it
<dominicdinada> well i guess where would i look. in the logs it says syn flood detected. DROP packets... and the rules are set to drop packets if xxxx /min is reached but nowhere does it say to ban the computer from the network
<uvirtbot> New bug: #627032 in openldap (main) "package slapd 2.4.23-0ubuntu3 failed to install/upgrade: ErrorMessage: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/627032
<tacomaster> i was wondering will ubuntu server support being an update server for other os's?
<dominicdinada> still blocking me and i dont see any rules blocking me
<dominicdinada> just a standard rate limit to drop flood packets but doesnt it lift
<tynot> have dns resolved now, but can't seem to pull down packages. advice?
<tynot> brb
<smoser> what errors do you get  when you try to "pull down packages"
<tynot> back
<uvirtbot> New bug: #625005 in krb5 (main) "package libkadm5clnt6 1.7dfsg~beta3-1ubuntu0.6 failed to install/upgrade: subprocess installed post-installation script returned error exit status 2" [Undecided,New] https://launchpad.net/bugs/625005
<bdmurray> zul: could you be a bit more verbose in stuff like bug 626416?
<uvirtbot> Launchpad bug 626416 in ubuntu "package openbsd-inetd 0.20080125-4ubuntu2 failed to install/upgrade: sub-processo novo script pre-installation retornou estado de saÃ­da de erro 2" [Undecided,New] https://launchpad.net/bugs/626416
<tynot> smoser, for instance, getting "Couldn't find package <whatever>"
<uvirtbot> New bug: #622502 in postfix (main) "package postfix 2.7.0-1 failed to install/upgrade: el subproceso script post-installation instalado devolviÃ³ el cÃ³digo de salida de error 75" [Undecided,New] https://launchpad.net/bugs/622502
<smoser> tynot, hm.. you may need to 'sudo apt-get update' first
<uvirtbot> New bug: #619728 in samba (main) "package samba 2:3.4.7~dfsg-1ubuntu3.1 failed to install/upgrade: corrupted filesystem tarfile - corrupted package archive" [Undecided,New] https://launchpad.net/bugs/619728
<uvirtbot> New bug: #620445 in samba (main) "package samba-common 2:3.4.7~dfsg-1ubuntu3.1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/620445
<tynot> think I've tried that, smoser, but what the hell. give it another shot.
<smoser> well, what does it say when you try it ?
<tynot> smoser, THAT worked. however, I've been trying to pull down an Xwindow client, ala kubuntu-desktop, for days and get the previously-mentioned error.
<smoser> tynot, so you're doing that how ? 'apt-get install kubuntu-desktop' ?
<tynot> Wow! the update worked! Woo and hoo!
<tynot> yeah, smoser, it's pulling down and installing? kubuntu desktop. how does one run it when it's through? startx?
<smoser> if you get kubuntu-desktop (this is obviously not '#ubuntu-server' related) then i think a reboot will get you a kdm login prompt.
<smoser> not certain, but i think
<tynot> smoser, as no one's on it yet, it's worth a try.
<tynot> to anyone, really. Is there an easy-to-use x-window or mac desktop shell for a remote MacBook Pro?
<aetaric> tynot: you can use ssh... but OS X doesn't use X11 unless you install it. and even then...MOST apps don't use it
<tynot> aetaric, so other than, like, vnc, ssh would give a MacBook Pro user an X11 view of the server?
<aljosa> i'm trying to use eucalyptus on my workstation, having problems with running an instance - http://dpaste.com/236432/ - any ideas why?
<aetaric> no. you can use ssh to view VNC securely
<aetaric> VNC is the only way to see the actual desktop
<smoser> aljosa, well, you have no nodes registered.
<smoser> $ sudo euca_conf --list-nodes
<tynot> aetaric, thanks, man. so, once kubuntu-desktop is done installing, it should boot to a gui login, correct? from there, I think I can handle setting up vnc myself.
<aljosa> smoser: "euca_conf --discover-nodes" doesn't find anything although i have eucalyptus-nc running. any idea what can i do?
<aetaric> tynot: yeah... are you installing kubuntu on you macbook pro?
<smoser> aljosa, you can register it by hand
<tynot> aetaric, ha!! GOD, NO!!! xD we're installing it on an Ubuntu server 10.04.1.
<Lichte> I just installed mysql-server and I can't get logged in as root on either localhost or from another computer
<Lichte> what do I have to do to get logged in
<aetaric> tynot: oh...i was about to say.... that seems a bit daft.
<Lichte> ?
<smoser> aljosa, sudo euca_conf --register-nodes
<tynot> aetaric, just to confirm, it looks like the get/install of kubuntu desktop worked. Once rebooted, it should now come up in a gui, correct?
<Krenari> hello guys
<aetaric> tynot: yeah
<tynot> aetaric, coolness. thx, man. cheers!
<aetaric> tynot: and OS X has both a built-in VNC server and client
<tynot> it does?
<tynot> how? where?
<Krenari> i need a tutorial on how to configure and secure an ubuntu server as a web server
<tynot> how does one launch it?
<aljosa> smoser: http://dpaste.com/236438/ - i've tried but nothing changed, any ideas?
<Krenari> can anyone help me pls, i really need very good security configuration!
<DrNick_> hi.  have just upgraded a server to 10.4, since then it refuses to recognise the printer attached to the parallel port.  /dev/lp0 isn't created upon boot.  the lp module is loading, with the message "lp: driver loaded but not devices found".  have tried switching parallel port modes in the bios with no luck.  worked ok before the upgrade
<DrNick_> reloading the lp module presents the same message btw.  any ideas anyone?
<smoser> aljosa, this is lucid ?
<aljosa> smoser: yes
<aetaric> tynot: System prefs > sharing > screen sharing
<Krenari> can anyone help me pls
<smoser> aljosa, unfortunately at this point all i can suggest is restarting things.
<DrNick_> Krenari: try referring to the documentation on the website as a starting point
<smoser> sudo restart eucalyptus CLEAN=1
<aljosa> smoser: ok, thanks
<smoser> and on the node: sudo restart eucalyptus-nc CLEAN=1
<smoser> not that that is the right solution, or you couldn't figure it out, but that is what i would try at the moment.
<Krenari> DrNick_, i am very new to the opensource platform, can u guide me a little bit more ?
<tynot> aetaric, oh, I don't want to share MY MacBook's screen, I want to view the server's screen from my MacBook Pro. Still work the same, different IP addresses?
<Lichte> Why am I getting: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES), even after I set a password during the installation of mysql-server ???
<aetaric> tynot: yeah... there is a client, just not sure where it is
<tynot> aetaric, "Remote Management"?
<aetaric> tynot: pretty sure you see it when you click on the computer in the "Shared" area on the left of the finder window
<tynot> aetaric, thx. I'll look into it. appreciate the help. cheers!
<DrNick_> at the moment i'm considering looking out for a 2nd hand parallel port print server to just network the printer, although if i could get it working connected to the lp port again that would be great
<dominicdinada> !cron
<ubottu> cron is a way to schedule execution of software/scripts. See https://help.ubuntu.com/community/CronHowto  -  There is also a decent Howto at http://www.tech-geeks.org/contrib/mdrone/cron&crontab-howto.htm
<thesheff17> I'm trying to run virt-migrate with virsh migrate --live vm029 qemu+ssh://192.168.1.5/system...I see this on the debug mode on the source virtual machine msg=operation failed: migration to 'tcp:vmserver05:49155' failed: migration failed.  Any ideas?
<ChmEarl> thesheff17, you have ssh keys/passwordless setup for root?
<thesheff17> ChmEarl: yes
<thesheff17> ChmEarl: from source to dest...do I need dest -> source?
<ChmEarl> thesheff17, another sanity check is remote use of VirtManager
<uvirtbot> New bug: #627097 in dovecot (main) "dovecot crashes on sieve login" [Undecided,New] https://launchpad.net/bugs/627097
<thesheff17> ChmEarl: I think I got it to work...I think it was complaining about DNS stuff. I hard coded vmserver05 in the host file of the source virtual machine server
<Lichte> Why am I getting: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES), even after I set a password during the installation of mysql-server ???
<thesheff17> !ask
<ubottu> Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-)
<thesheff17> Lichte: try sudo /etc/init.d/mysql reset-password and make sure you are using the right password and using mysql -u root -p to connect.
<Lichte> thesheff17: that didn't work
<thesheff17> Lichte: http://ubuntu.flowconsult.at/en/mysql-set-change-reset-root-password/
<thesheff17> Lichte: it used to in ubuntu 9.10 and before
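For the record, the `reset-password` init action suggested above doesn't exist on Lucid; a sketch of two routes that do work, assuming Lucid's mysql-server-5.1 package ('newpass' is a placeholder):

```shell
# (1) Re-run the package configuration, which re-prompts for the root password:
sudo dpkg-reconfigure mysql-server-5.1

# (2) Manual route: start mysqld with grant tables disabled and set it by hand
sudo service mysql stop
sudo mysqld_safe --skip-grant-tables &
mysql -u root -e "UPDATE mysql.user SET Password=PASSWORD('newpass') WHERE User='root'; FLUSH PRIVILEGES;"
sudo service mysql restart
```

Afterwards `mysql -u root -p` should accept the new password.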
<delimiter> Does anyone think ondemand (cpu frequency scaling) is good for a server? I'm wondering if it could sap performance in any way
<cloakable> I don't think so, but there may be use cases where it would affect things.
<qman__> if you are using virtualization, CPU scaling can affect the RTC of the VMs
<qman__> that's the only real issue I've ever noticed with them, powernowd specifically
<qman__> performance in general is not a problem, just sometimes side effects like that show through
<yann2> is there any other way where to put known hosts than $HOME/.ssh/known_hosts ? I'd like to deploy it automatically and prevent a user from modifying it
<yann2> oh found it /etc/ssh_known_hosts2 , sorry
<yann2> mmh doesnt seem to work :(
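The likely reason /etc/ssh_known_hosts2 doesn't work: the OpenSSH client's default system-wide file is /etc/ssh/ssh_known_hosts. A sketch of the relevant ssh_config lines (setting UserKnownHostsFile to /dev/null is one way to stop users adding their own entries, at the cost of breaking per-user additions entirely):

```
# /etc/ssh/ssh_config (client-side, applies to all users)
GlobalKnownHostsFile /etc/ssh/ssh_known_hosts
UserKnownHostsFile /dev/null
```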
<Tim_R> has anybody installed any server versions onto a flash drive before?
#ubuntu-server 2010-08-31
<Psi-Jack> Has anyone worked much with AoE targets for HA networked storage mediums? I'm curious how fault tolerant it can be.
<fbc_> I have a server with a 10gig system partition and would like to move and remap /var to another drive(bigger) as that is where all the data goes.
<fbc_> could someone point me to a guide or how to?
<fbc_> don't all jump in all at once now.
<uvirtbot> New bug: #627142 in apache2 (main) "Apache2 init.d script runs 'stty sane', which will fail on script runs" [Undecided,New] https://launchpad.net/bugs/627142
<Andre_Gondim> I have a ubuntu server with cups installed, how do I access the interface in other pc in the same network
<pmatulis> Andre_Gondim: i think cups has a web interface
<Andre_Gondim> pmatulis, yeah, it has, but how may I do the accesss by other machine not localhost
<pmatulis> Andre_Gondim: port 631 i believe
<Andre_Gondim> I did, but it doesn't show me anything
<pmatulis> Andre_Gondim: what exactly did you do?
<Andre_Gondim> I installed ubuntu server, and cups, then on another machine I try ip_from_server:631 and it shows me an error page, like that page doesn't exist
<pmatulis> Andre_Gondim: i guess the cups daemon isn't running
<pmatulis> Andre_Gondim: you should get something like what i have on my system:
<pmatulis> oops
<pmatulis> cupsd   1938 root    7u  IPv4  25155      0t0  TCP 127.0.0.1:631 (LISTEN)
<Andre_Gondim> root     19103  0.0  0.1   6872  2620 ?        Ss   21:45   0:00 /usr/sbin/cupsd -C /etc/cups/cupsd.conf
<Andre_Gondim> pmatulis, like this?
<pmatulis> Andre_Gondim: for me, it's listening on localhost
<pmatulis> Andre_Gondim: change that in your config file
<pmatulis> Andre_Gondim: what output do you get to:
<pmatulis> sudo lsof -i4tcp:631 -nP
<Andre_Gondim> cupsd   19103 root    6u  IPv4 165972      0t0  TCP 127.0.0.1:631 (LISTEN)
<pmatulis> Andre_Gondim: there you go.  change the config file so it listens on your IP address
<Andre_Gondim> pmatulis, the  config file is cupsd.conf?
<pmatulis> Andre_Gondim: yeah
<Andre_Gondim> I have inserted Listen my_other_machine_ip:631 and restarted the cups service, but the result is the same
<pmatulis> Andre_Gondim: your other machine?
<pmatulis> Andre_Gondim: no, put the server's address
<Andre_Gondim> oh, I got it
<Andre_Gondim> pmatulis, many thanks, now I can setup my printer ;)
<pmatulis> Andre_Gondim: good work
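The exchange above boils down to a cupsd.conf sketch (addresses are placeholders; note that besides Listen, remote hosts also need an Allow rule, since the stock config admits localhost only):

```
# /etc/cups/cupsd.conf
Port 631                    # listen on all interfaces; or: Listen 192.168.1.10:631
<Location />
  Order allow,deny
  Allow from @LOCAL         # permit hosts on the local network
</Location>
```

Then restart with `sudo service cups restart` and browse to http://server_ip:631 from the other machine.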
<YDU_Remote> ...mornin y'all - anyone familiar with grub2 strangeness - i.e., on powerON (not reboot), grub just hangs with "out of disk" error...?
<pmatulis> YDU_Remote: when you power on your machine does not boot but when you reboot it does?
<YDU_Remote> pmatulis: Yeppers mate...exactly that. Strange, innit...
<pmatulis> YDU_Remote: you might have a power issue (power supply or wall outlet)
<YDU_Remote> pmatulis: All tested/checked/verified. This server installation also does the same thing at the lab, here, or anywhere else. Strange thing this - I've done the exact same installation/setup on several machines and this one is the only one that freaks like this. All the same hardware, same Ubu version, same everything.
<pmatulis> YDU_Remote: but you're using the same p/s, obviously
<pmatulis> YDU_Remote: or did you change it?
<YDU_Remote> Used the PSU from a different box (the same, but just from a different box) -> same issue.
<pmatulis> YDU_Remote: how many and what kind of drives?
<YDU_Remote> pmatulis: Single drive, internal, 1.5tb => Samsung
<pmatulis> YDU_Remote: swap in another one
<YDU_Remote> pmatulis: Hard to do whilst this site is active - have to try that at COB
<pmatulis> YDU_Remote: COB?
<YDU_Remote> pmatulis: My bad - it's a WD (COB - Close of business)
<pmatulis> YDU_Remote: alright
<JasonMSP> Is it possible to adjust some setting so that I don't see the entire directory structure at the command prompt?  maybe just the current folder name?
<YDU_Remote> These kinda niggly things really give me the shiites sometimes...re-installed grub2 so many times I feel like I'm dating it...
<pmatulis> JasonMSP: yes, of course
<pmatulis> JasonMSP: you want to "customize your bash prompt" (assuming you're using bash as shell)
<JasonMSP> pmatulis: yes
<pmatulis> JasonMSP: so google will tell you, basically you want to define the PS1 variable
<JasonMSP> thanks
<pmatulis> JasonMSP: your current prompt is given by 'echo $PS1'
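As a minimal sketch of what pmatulis describes (the escapes are bash's own; \W expands to just the current directory name):

```shell
# Show the current prompt definition
echo "PS1 is: $PS1"
# A short prompt: current directory name only, then $ (or # for root)
PS1='\W\$ '
# To make it permanent, append that assignment to ~/.bashrc
```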
<EvilPhoenix> i'm trying to configure lighttpd to work
<EvilPhoenix> i installed it with apt-get
<EvilPhoenix> and i'm getting 403 forbiddens everywhere
<EvilPhoenix> how can I fix that
<pmatulis> EvilPhoenix: i guess you need to configure it
<EvilPhoenix> no crap
<EvilPhoenix> i came here looking for support with that
<pmatulis> http://redmine.lighttpd.net/wiki/1/TutorialConfiguration
<pmatulis> EvilPhoenix: that should get you started
<tf2ftw> a
<tf2ftw> chrismsnz,
<|rt|> does anyone know of any reasons you can get really bad (4.3 MB/s) write speed and really good (187 MB/s) read with software raid in linux?
<chrismsnz> raid 5?
<|rt|> doesn't seem to matter what file system I use or kernel version
<|rt|> yes 5 disk raid 5
<chrismsnz> it's doing a lot of calculation per write
<stlsaint> any of you folks ever use psad with server edition?
<chrismsnz> i haven't seen it that bad, but it's bad
<|rt|> nah...cpu's can do that really fast
<|rt|> it's not cpu bound when this is happening
<|rt|> like 97% idle cpu
<|rt|> i've used 12 disk raid5 software raid without any issues in the past
<chrismsnz> is one of the disks dragging it down?
<chrismsnz> try to run hdparm on each individual device
<chrismsnz> hdparm -tT
<|rt|> yeah did that already...they are all about the same
<|rt|> 50-60 MB/s
<|rt|> but hdparm only tests reads....and my slowness is writes
<|rt|> I'll blow my raid volume away and run some bonnie++ tests on each drive
<|rt|> but these same drives did a good job with OpenSolaris and ZFS
<|rt|> if I can't get linux working right I'll try FreeBSD and just go back to ZFS
<chrismsnz> zfs is nice :)
<chrismsnz> if you're running a recent kernel the on-disk format for btrfs is now stable so might be worth a whack
<|rt|> it is....I was hoping to play with BTRFS but the user space tools seem to be a bit lacking
<chrismsnz> obviously not if it's production, though
<|rt|> and it's documentation
<chrismsnz> I prefer RAID 10 to raid 5
<|rt|> chrismsnz: I started off playing with BTRFS on Maverick on this system just to see what it was like
<|rt|> chrismsnz: yeah I agree only the raid 10 thing but these drives are only 250GB drives so for capacity I chose to stick to RAID5 for now
<chrismsnz> in fact, you could probably use Linux's "raid 10" on your array - it outperformed raid6 in my tests
<|rt|> but BTRFS doesn't do RAID5 yet
<|rt|> yeah...I may do that just to test this....but I need to figure out what about my system Linux isn't likely
<|rt|> liking
<chrismsnz> well, if you want the bleeding edge - i heard that native linux ZFS is being released shortly as a compilable kernel module
<|rt|> yeah I think i'll pass on that for now
<|rt|> hopefully maverick +1 has BTRFS and the documentation and user space tools are improved
<chrismsnz> natty narwhal D:
<|rt|> integration with apt and snapshots would be nice too :)
<chrismsnz> is your hardware exceedingly old or new?
<|rt|> it's middle of the road
<chrismsnz> a while ago linux was having trouble with an onboard sata controller I had and it affected speeds quite badly
<|rt|> it's probably a year old or so but it wasn't cutting edge then
<chrismsnz> shouldn't be a problem there :\
<|rt|> yeah I've been looking for information about the SB700 chipset and performance issues but haven't found anything
<|rt|> the system has an AMD Athlon 4850e cpu
<chrismsnz> you're using software raid by mdadm right?
<|rt|> yeah
 * chrismsnz boggles
<|rt|> i've tried the SATA controller in SATA mode and AHCI mode with the same results
<|rt|> yeah i've never seen anything like this
<|rt|> but i've tried a few older kernels on system rescue live cd's to make sure it wasn't a kernel issue
<|rt|> what ever is going on doesn't seem to be tied to any kernel version
<chrismsnz> you're using one parity disk?
<|rt|> well raid5 doesn't have a dedicated parity disk...but it is n+1 parity
<chrismsnz> err yeah
<chrismsnz> i mean, i can see why it's slow to write... 5 disks = 4 reads and 2 writes per block
<|rt|> I have plenty of pci slots open so one option may be to pick up a couple of 4 port sata controllers
<chrismsnz> but like i said i haven't seen it that slow
<chrismsnz> or 3 reads, hypothetically, plus parity calculation
<|rt|> if the raid hadn't rebuilt at 50MB/s my first thought would've been partition mis-alignment, but you normally don't see that much speed loss from that
<|rt|> just did a bigger test to make sure I wasn't seeing some caching effect on the read speed
<|rt|> wrote 7.4GB in 1767.36 seconds (4.2MB/s) and read 7.4GB in 49.046s (150MB/s) using dd
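|rt|'s dd benchmark can be reproduced in miniature like this (64 MiB to a temp file here; conv=fdatasync forces the data to disk so the write figure isn't just page cache):

```shell
# Write test: dd reports throughput when it finishes
dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=64 conv=fdatasync
# Read test (for a real benchmark, drop caches first as root:
#   echo 3 > /proc/sys/vm/drop_caches)
dd if=/tmp/ddtest.img of=/dev/null bs=1M
```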
<|rt|> time to blow away the raid and run bonnie++ on each drive to see what happens
<|rt|> b/c if it's one drive the read speed could still be pretty good with raid5 as it could work around the slow drive
<|rt|> not sure it's that intelligent though
<chrismsnz> good luck :\
<chrismsnz> also check dmesg and see if it's complaining about any of your hardware
<|rt|> yeah hopefully something jumps out with the individual disks
<|rt|> nothing in dmesg
<chrismsnz> bummer
<|rt|> chrismsnz: I think I found my problem
<chrismsnz> ooh
<|rt|> mkfs.ext4 on /dev/sda2 is going really slow....fast on all the other drives
<chrismsnz> give it a prod with smartctl
<chrismsnz> see if it's noticed a problem
<|rt|> hmm sdd2 is slow as well...may have spoken too soon
<|rt|> smart data is fine on all of the drives.....i'll have to swap what port the drives are plugged into and see if the problem follows ports, drives, or cables
<Datz> hi, it seems as though cron isn't working on my system. Can someone help me out with a diagnosis?
<chrismsnz> hi
<chrismsnz> Datz: what makes you say that?
<Datz> chrismsnz: I have cron jobs scheduled and they don't run
<chrismsnz> ok, where did you schedule them?
<chrismsnz> in crontab? cron.daily? cron.d?
<Datz> crontab
<chrismsnz> unprivileged user? or root user?
<Datz> user
<Datz> groups datz
<Datz> datz : datz adm dialout cdrom plugdev crontab sambashare lpadmin admin
<Datz> I added myself to crontab group
<Datz> in hopes to fix it
<chrismsnz> do you have an /etc/cron.deny or /etc/cron.allow file?
<chrismsnz> (it's ok if you don't but if you do it will use it)
<Datz> no, for either
<chrismsnz> ok
<chrismsnz> you're running 10.04?
<Datz> yes
<chrismsnz> type "status cron"
<Datz> ok
<Datz> status: Unable to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
<Datz> this is a VM btw
<Datz> don't know if it would matter in this case
<chrismsnz> no... is it a normal server install?
<Datz> yes
<Datz> Linux VMserv 2.6.32-24-server #39-Ubuntu SMP Wed Jul 28 06:21:40 UTC 2010 x86_64 GNU/Linux
<chrismsnz> does "ps -ef | grep cron" show you anything? is it running?
<Datz> yes
<Datz> root       757     1  0 Aug21 ?        00:00:01 cron
<Datz> datz     30006 27262  0 02:34 pts/0    00:00:00 grep --color=auto cron
<chrismsnz> okay
<chrismsnz> show me the crontab line you used to schedule the task
<Datz> I just used this for testing: * * * * * date > /tmp/foo.log
<Datz> nothing in /tmp/ yet
<chrismsnz> ok one sec, just going to check a box
<Andre_Gondim> Datz, try */3 * * * * date > /tmp/foo.log
<chrismsnz> try restarting the cron daemon in the meantime "restart cron"
<Andre_Gondim> every 3 minutes
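The test entries above as crontab lines (a bare * in the minute field fires every minute, */3 every third minute; appending with >> makes repeated firings visible):

```
# crontab -e
* * * * * date >> /tmp/foo.log 2>&1
*/3 * * * * date >> /tmp/foo3.log 2>&1
```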
<Datz> Andre_Gondim: I'll add that one as well
<Andre_Gondim> and reload the cron like chrislabeard said
<Datz> I'm getting the same error I got with status
<chrismsnz> ok
<chrismsnz> try restarting dbus
<chrismsnz> "restart dbus"
<chrismsnz> then "restart cron"
<Datz> should I try to restart cron with /etc/init.d/cron restart?
<Datz> ok
<chrismsnz> Datz: i don't know any more - half the friggen system is on upstart and half isn't
<Datz> it holds up on restart
<Andre_Gondim> Datz, yes
<Datz> kk
<chrismsnz> i'm testing these commands on a lucid box as i'm advising you, though :)
<chrismsnz> yeah the init.d file just restarts it via upstart anyway
<Datz> ah
<Datz> yea
<Datz> well I never restarted the system after some updates, and it keeps asking for a restart.. might this create a problem?
<Datz> humm, well now there is a foo.log :P
<Datz> so I think restarting cron worked :)
<Datz> maybe I should restart ....
<chrismsnz> yeah...
<Datz> heh hehe
<Datz> thanks
<chrismsnz> i've had dbus die after an update
<Datz> ah
<chrismsnz> whole system seems to run on it now hehe
<Datz> well.. bb in a few thanks again
<Datz> ah
<chrismsnz> cool
<Tim_R> I have a question about doing virtual ethernet onto my server how would I do that? I also got virtual hosting on couple pages how would I set all this up
<chrismsnz> you mean having aliased IP addresses on the same interface?
<Tim_R> yes
<chrismsnz> ok
<chrismsnz> Here's a good howto on adding aliases for an interface and how to configure them correctly to be permanent
<chrismsnz> http://www.cyberciti.biz/tips/ubuntu-linux-creating-ethernet-alias-for-eth0-network-device.html
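The howto boils down to an /etc/network/interfaces stanza like this (addresses are placeholders); `sudo ifup eth0:0` brings the alias up:

```
# /etc/network/interfaces -- second address on eth0 as alias eth0:0
auto eth0:0
iface eth0:0 inet static
    address 192.168.1.201
    netmask 255.255.255.0
```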
<chrismsnz> However!
<chrismsnz> If you just want it for virtual web hosting, you'll be better off relying on apache's virtualhost configuration
<Tim_R> well see I have virtual hosting now on my server
<Tim_R> alright thanks chris for your help
<chrismsnz> :o
<tf2ftw> anyone have any experiance with MAMP and enabling htaccess?
<chrismsnz> i don't use mamp, but the apache config to enable it should be the same
<tf2ftw> the problem im having is rewrite is not working
<tf2ftw> let me get the file one sec
<tf2ftw> its httpd.conf correct?
<chrismsnz> the htaccess file is usually ".htaccess"
<chrismsnz> httpd.conf configures the server
<tf2ftw> i mean to enable mod_rewrite
<chrismsnz> I'm not sure how mamp has it set up
<chrismsnz> yeah httpd.conf
<chrismsnz> (probably)
<tf2ftw> looks like MAMP loads a mod_rewrite.so
<tf2ftw> ffffuuuuu they have AllowOverride in more than one place in httpd.conf and you have to enable all of them
<chrismsnz> oh, weak
<chrismsnz> you should just be able to enable it for your "location"
<chrismsnz> htdocs or whatever
<tf2ftw> hmm yeah I enabled it with an apache restart still no joy... still digging
<tf2ftw> ah so you can use a ".htaccess" file under another name using AccessFileName. That would be good for security?
<tf2ftw> do i have to have the htaccess file in the root of my web server or can i just use it in the dir im working in (ex /htdocs/site1/) ?
<tf2ftw> Its always worked for me just having it in the working dir.
<chrismsnz> should be in the dir you're working in
<tf2ftw> chrismsnz, why would httpd.conf be empty in /etc/apache2/httpd.conf ?
<twb> tf2ftw: because most stuff happens in the .d's
<tf2ftw> is that in sites-enabled?
<chrismsnz> tf2ftw: twb is right, if you're setting it up on ubuntu/debian
<tf2ftw> yeah its ubuntu
<twb> chrismsnz: if it was C5 it'd be /etc/httpd :-/
<twb> Stupid apache people think theirs is the only httpd
<chrismsnz> yeah he was talking about mamp before
<tf2ftw> well looks like i might have found my htaccess problem
<tf2ftw> the script itself
<tf2ftw> does this look right? http://pastebin.com/F3UJ1a5k
<chrismsnz> not a regex expert, but the second ^ doesn't make sense to me?
<tf2ftw> let me look into that
<chrismsnz> well it's supposed to indicate the beginning of a string, so it doesn't make sense to have it in the brackets there when you're matching any character for the file name
<chrismsnz> again, not 100%
<chrismsnz> a rewrite of mine looks like this:
<chrismsnz> RewriteRule ^.*$ /index.php [NC,L]
<tf2ftw> you are correct about the second ^
<chrismsnz> so try changing yours to "RewriteRule ^.+\.html$ /$1.php [QSA,L]"
<chrismsnz> err you need the brackets actually to capture
<chrismsnz> RewriteRule ^(.+)\.html$ /$1.php [QSA,L]
<tf2ftw> aha!
<tf2ftw> take that "/" out before $1 and you are a winner
<tf2ftw> Thanks
<tf2ftw> The strange thing is i copied this htaccess file from another project that was working.. i might have copied the wrong one (obviously)
<tf2ftw> thanks
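Putting the thread's conclusion together, the working .htaccess (leading slash removed, as tf2ftw found) looks like:

```
# .htaccess in the directory being served
# (requires mod_rewrite and AllowOverride to permit it)
RewriteEngine On
RewriteRule ^(.+)\.html$ $1.php [QSA,L]
```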
<chrismsnz> _b
<tf2ftw> here is something weird, this htaccess works on my local MAMP but not my Ubuntu apache2..
<tf2ftw> getting an Internal Server Error
<chrismsnz> check the error log it normally sheds a little more light
<tf2ftw> ".htaccess: Invalid command 'RewriteEngine', perhaps misspelled or defined by a module not included in the server configuration"
<chrismsnz> heh
<tf2ftw> check this out, how the folder is not showing up on the web server
<chrismsnz> "sudo a2enmod rewrite"
<chrismsnz> then restart apache
<tf2ftw> you nailed it again. shit im going to have to start paying you
<tf2ftw> o.O
<tf2ftw> luckily i am at the end of my server journey here (i hope)
<chrismsnz> hehe, already getting paid but considering going consulting next year
<tf2ftw> good for you man. I took the jump recently.
<tf2ftw> this server is great for keeping everything on one box. no more passing thumb drives or emails.
<chrismsnz> yeah, testing the waters here in NZ to see if I can make a living as a python/django dev & sysadmin
<tf2ftw> good luck.
<tf2ftw> ok im checking out. Thanks again
<ohzie> MDADM: Who are the experts? =D
<uvirtbot> New bug: #627227 in apache2 (main) "ChrootDir(mpm_common) and DocumentRoot issues" [Undecided,New] https://launchpad.net/bugs/627227
<chrismsnz> shout out, if someone's around they'll let ya know :)
<chrismsnz> what's your prob?
<ohzie> Trying to figure out if I can do raid 6 with more than two parity volumes, also spreading parity blocks redundantly across drives instead of having specific drives handle it. I'm very new to software raid, and not sure how to proceed. =D
<chrismsnz> cool
<chrismsnz> well, you'll be happy to know that raid5 and raid6 have distributed parity, not dedicated disks
<ohzie> I did look over the manual, and I'm under the impression from a combination of the man page and the linux-raid wiki that I can define parity drives with -spare, but I wanted to talk to someone who has done it before I've done it.
<ohzie> before I do it*
<ohzie> chrismsnz, Sweet!
<ohzie> Then I think I read it wrong and that -spare is probably not what I thought it was.
<chrismsnz> a spare isn't a parity drive, it's a drive that is inactive until there's a problem with the array - at which point it will jump in and start reconstructing
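A sketch of what chrismsnz describes, with hypothetical device names: a five-disk RAID5 plus one hot spare that mdadm rebuilds onto automatically when a member fails:

```shell
# /dev/sd[b-g]1 are placeholders for real partitions;
# the sixth device becomes the hot spare
sudo mdadm --create /dev/md0 --level=5 --raid-devices=5 --spare-devices=1 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

# Watch array state and any rebuild in progress
cat /proc/mdstat
```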
<ohzie> chrismsnz, That's automatic?
<ohzie> Not even pillar does that crap
<ohzie> Why the hell am I using Pillar
<chrismsnz> what the hell is pillar D:
<chrismsnz> ah, san/nas brand
<ohzie> It's awesome
<ohzie> but it doesn't do that
<ohzie> I mean if we needed the iops I'd understand, but we really don't.
<chrismsnz> i think it probably does, they might call it something different
<chrismsnz> hot-spare is the other common name
<ohzie> When a drive failed in one of our bricks, I had to come in on a sunday to plug the new one in. :3
<chrismsnz> yeah, obviously it has to be configured as a spare at first - remember it's completely unused until there's a problem
<chrismsnz> how many disks are we talking?
<ohzie> I'm thinking 20.
<chrismsnz> linux supports raid5 and raid6, but depending on the circumstances I normally lean towards Linux raid 10 or raid 1 + 0
<ohzie> It depends on what they want to pay for.
<ohzie> Between 10 and 20
<ohzie> with the auto-rebuilding, i might be able to sell them on more
<ohzie> with raid 6 can you have more than one backup of a parity block?
<ohzie> like, losing three drives instead of two?
<ohzie> or is more redundancy required for that?
<chrismsnz> here's a big entry on raid levels http://en.wikipedia.org/wiki/RAID
<chrismsnz> covers everything including linux's non-spec raid 10
<chrismsnz> ohzie: not sure about that
<chrismsnz> you can specify whatever redundancy you like with linux special mdraid 10 http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10
<chrismsnz> ohzie: are you building an appliance?
<ohzie> chrismsnz, Yes. :D
<ohzie> Basically they want more storage in one of our locations, but they don't want to spend 80 thousand dollars on starting another pillar
<chrismsnz> ohzie: you should play with FreeBSD and ZFS
<chrismsnz> Linux has btrfs in the pipeline which is looking good, but not fully baked yet
<chrismsnz> http://en.wikipedia.org/wiki/Zfs ZFS is awesome at managing massive amounts of storage
<chrismsnz> under linux I'd probably set up the underlying RAID comprising all of the disks and plop a flexible LVM scheme on it
<cerberos> I've got a new ubuntu server 10.04 VPS and I can't get ssh to work with keys (I've been trying on and off for days). my key is /root/.ssh/authorized_keys/id_rsa.pub - here is /etc/ssh/sshd_config http://dpaste.com/236672/
<cerberos> doh, authorized_keys is a file
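cerberos's fix in sketch form: authorized_keys is a single file of public keys, not a directory, and sshd is picky about its permissions (the key material below is a placeholder):

```shell
# Create ~/.ssh and the authorized_keys *file* with the permissions sshd expects
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
echo "ssh-rsa AAAAB3NzaC1yc2E...placeholder user@host" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```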
<kaushal> hi
<kaushal> can i get help regarding xen on Ubuntu Server ?
<kaushal> or is there a irc channel for xen on Ubuntu ?
<jpds> kaushal: #ubuntu-virt ? As far as I know, KVM is the only supported option.
<kaushal> ok
<kaushal> jpds: I am looking out for vmlinuz-2.6.32-24.xen and initrd.img-2.6.32-24.xen
<kaushal> Cannot join to channel #ubuntu-virt (You must be invited)
<kaushal> jpds: sorry about asking basic question
<kaushal> so xen type is KVM ?
<kaushal> not sure i understand that
<jpds> kaushal: KVM is a different virtualisation technology.
<twb> KVM is hardware virtualization.  Xen is paravirtualization (though it will fall back to full virtualization).
<kaushal> ok
<kaushal> so whats the difference between them ?
<twb> The difference is in how virtualization is done.
<kaushal> twb: Thanks
<imyousuf> Hi
<imyousuf> I am trying to install sun-java6-jdk because Cloudera Distributed Hadoop has it as a dependency, but I can't because of a dependency failure
<twb> In Lucid, Sun Java is in the "partners" repo.  Have you enabled it?
<imyousuf> twb: let me check
<twb> (Incidentally, if you're trying to make proprietary "enterprise" crap work on linux, you might as well just shoot yourself now.  It's NOT FUN.)
<imyousuf> twb: CDH is Open Source :) they contribute back to Apache AFAIK
<twb> Everybody *says* that; I believe them when Debian ftpmasters allows it through the NEW queue.
<imyousuf> :-D
<twb> FWIW, "wnpp-check hadoop" has no hits
<\sh> moins
<siretart> \sh: OK
<\sh> siretart, cool...I just found the discussion about the freeze exception for fai
<\sh> siretart, http://www.mail-archive.com/ubuntu-motu@lists.ubuntu.com/msg04598.html but I don't read anything about a granting ;)
<siretart> \sh: there wasn't much disagreement either.
<siretart> ah, you mean it would annoy someone if we broke FF for fai?
<\sh> siretart, yes...but when we use the discussion from that time as a "granted FFE for FAI"...I'm happy to upload
<\sh> siretart, and only today I got a message on fai-linux that the PPA packages are working
<siretart> oh, what a shame
<siretart> \sh: TBH, I'd say just upload. the risk of breaking other unrelated packages is negligible AFAIUI.
<siretart> in terms of: 0
<\sh> siretart, it won't break anything...
<\sh> well, I'll prepare maverick packages + add the pointer to the ml archive for documentation...:) at the same time I'll do an FFe report for zend-framework ;)
<siretart> cool!
<qiyong>  reload squid
<qiyong>   reload: Unknown instance:
<drurew> anyone a cacti genius here? ; Im having a weird problem where my graphs restart collecting at 2am..
<twb> Most people here are humans.
<twb> Or at least anthropoids
<twb> (Oops.)
<drurew> hey twb...
<twb> wot?
<drurew> "hey" as in "drurew waves to twb"
<soren> jdstrand: It seems that the libvirt packaging branch finally has been succesfully imported. Do you want to start using it for our packaging work?
<Zikey> Hi, I'm running ubuntu 8.04LTS raid1 (MD+LVM), the first drive died, can I directly replace the first drive by a new one and boot up ?
<Zikey> I'm afraid of what will happen when ubuntu finds a new empty drive as the first drive, will it automatically ID it and use it as /dev/sda, or will it create a /dev/sdc ? (/dev/sdb being the healthy disk)
<twb> Zikey: that depends how grub is configured on the second drive
<Zikey> it's lilo
<twb> Zikey: basically, you can't rely on it working
<twb> Well, lilo I can't comment on
<twb> lilo doesn't work with LVM reliably anyway
<twb> (Since lilo remembers block offsets into the disk, and LVM can move extents around.)
<Zikey> with 8.04 we didn't have a chance to install a grub that worked with lvm
<twb> Zikey: that's because you tried to put /boot on LVM on RAID, which is 1) stupid; and 2) not supported by grub.
<Zikey> k
<twb> Note that this can happen accidentally due to a bug in d-i in 8.04
<Zikey> unfortunately I didn't have the choice
<\sh> siretart, bug #627337
<uvirtbot> Launchpad bug 627337 in ubuntu "[FFE] FAI 3.4 for Ubuntu Maverick" [Undecided,New] https://launchpad.net/bugs/627337
<twb> (Namely: if you allocate /boot to md0 in partman, then configure LVM, it will forget about md0's configuration, go through the WHOLE install, then put /boot in with LVroot and force lilo.)
<Zikey> can i boot the 8.04LTS server by pressing some key to drop in command line without starting everything ?
<NightDragon> hello all
<NightDragon> a quick question if you dont mind
<twb> Zikey: well, you can boot with "single" which will give you a root shell.
<NightDragon> i'm getting an error after setting up cups as a server,
<Zikey> twb, how do you do that ?
<twb> Zikey: it's usually the second option in the bootloader "rescue"
<NightDragon> Request from "129.93.xxx.xxx" using invalid Host: field "xxxx.xxx.edu:631"
<Zikey> what do I need to type when LILO boot ?
<NightDragon> (xxx's are censored of course)
<twb> Zikey: otherwise, manually add "single" to the boot parameters, however that is done
<NightDragon> when i try to access the website, i get a '400 bad request', the website being the cups web admin page
<twb> Zikey: I haven't used lilo this millennium, so I don't remember
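A sketch of what twb is describing, for both bootloaders. The "Linux" label and the kernel path are assumptions for illustration; check /etc/lilo.conf or the grub menu for the real names on your system.

```shell
# LILO: press Shift (or Tab) at boot to get the prompt, then append
# "single" to the image label ("Linux" is a common default label):
#   boot: Linux single
#
# GRUB (legacy): press 'e' on the menu entry, append "single" to the
# kernel line, then press 'b' to boot:
#   kernel /vmlinuz-2.6.24-29-server root=/dev/md1 ro single
```

Either way the kernel drops to single-user mode with a root shell instead of starting the full set of services.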
<twb> NightDragon: "censoring" a public IP isn't exactly going to protect you
<NightDragon> i know.
<NightDragon> matter of habit.
<twb> Bloody daft habit
<NightDragon> better safe than sorry mate
<NightDragon> oh look
<NightDragon> i fixed my own problem :X
<NightDragon> sorry guys
<NightDragon> lol
<twb> I expect you fat-fingered the URL in your browser
<NightDragon> no, actually i added the magic line in my conf file
<NightDragon> (some line that was apparently needed)
<NightDragon> in any case, while i'm here... does anybody know of anything that can enable one to get some really good control over printing on a print server? something that extends the basic functionality of CUPS perhaps?
<twb> Yeah, thingy
<twb> Some guy in europe makes it...
<twb> pykota, that's it
<NightDragon> twb: narrows it down a bit. :-P
<twb> It's *really* annoying to work with, but AFAIK nobody else has much along those lines
<twb> It does stuff like rasterizing the page and calculating ink usage
<NightDragon> actually, i think i've heard of this
<twb> And per-user ink/paper printing allowance, of course.
<NightDragon> yeah, kinda lame that they want to almost charge you for documentation
<twb> I think I basically said "I don't want to deal with this, give it to an intern to deploy"
<NightDragon> lol
<NightDragon> actually, i have a question for you
<NightDragon> i'm really sick and tired of our office (of which i'm the SA) having direct printing capabilities to all of the printers in the office
<NightDragon> the results of which are some pretty stupid crap
<NightDragon> ...such as the fact that lordy help us if an IP gets changed inadvertently, etc
<twb> So move the printers to a new subnet and have the router block it
<twb> *block access to it
<NightDragon> ...or i have to log on as administrator (a PITA in XP) to add a printer
<NightDragon> ...or when some fucktard prints off a thousand page job to the printer i use (a color printer, which happens to be located right next to me)
<NightDragon> so my question...  i want to convince my boss to move to a cups/samba print server. What are the talking points of this (advantages, disadvantages, etc)
<NightDragon> , and what should i really know that i might not already? :)
<twb> SPOF
<NightDragon> yeah
<NightDragon> it has to be failproof
<twb> Versus the administrative cost of you having to go around fixing people's printer configs
<NightDragon> if i work with a single print server, can i (1) add printers to the server, and have it reflect regionally across many users?
<twb> Yes
<twb> Er, depends what you mean by "regionally"
<NightDragon> 2) can i assign users into groups, and give them access to different printers depending on their groups
<NightDragon> (i cant say 'globally' if i'm going to segment by groups lol)
<twb> I *think* so; I haven't done it myself
<NightDragon> ooh :D
<twb> Obviously it'd also depend on whether you were going via IPP or Samba or both
<NightDragon> samba
<NightDragon> that way, it will use NT domain authentication for the usernames, won't it?
<twb> So you have a homogeneous Windows network?
<NightDragon> NTLM?
<twb> I imagine s
<twb> *so
<NightDragon> yeah, we sit within a windows domain
 * NightDragon is the stubborn bastard who will shoot himself before he picks a windows server over a good *nix server
<NightDragon> come hell or high water lol... i hate working at a windows workstation as much as it is
<NightDragon> well this is all moot, if you use samba you can authenticate over various things, including LDAP (which means AD), even if you're not using NTLM directly
<NightDragon> but question (4): Can you do cool things in cups like say... "users can print to this printer, but their jobs must have at least these many pages"
<twb> Samba can talk to AD, but it can't *be* AD except in Samba 4.
<twb> I don't know.
<twb> Talk to #cups or so
<NightDragon> and question 5)  Can you add imaging scanners into a print server? we have a really slick high-cap, high-speed scanner... and it would be nice to be able to network that so everyone can use it
<twb> ditto
<NightDragon> idd
<NightDragon> well this is good stuff
<NightDragon> ty
<twb> No worries
<NightDragon> of course, there is one added benefit to having a cups server... no waiting for 20 minutes while joe tool over there completes his book job
<NightDragon> >:)
<ttx> ScottK: re: cobertura-maven-plugin, I'll have a look -- however my maven foo is intentionally limited.
<Zikey> do you know if mdadm support replacing a raid 1 disk with a higher capacity one ?
<Zikey> basically can mdadm do raid1 with two different disk sizes ?
<Pici> I believe it's limited to the smallest disk size.
<Zikey> k
<patdk-wk> yep
<Zikey> I'm stuck, I boot up the livecd, got a terminal
<Zikey> unfortunately mdadm is not running
<Zikey> when I install/run it it doesn't detect my array
<twb> Zikey: don't use ubuntu desktop CD for recovery, it sucks
<twb> Find something that at least has mdadm and lvm pre-installed in the live CD
<twb> And yes: RAID1 of unequal nodes will use the smallest node size.
<Zikey> When i run fsck on the filesystem, it says the fs is clean right away, is there any way to force it to verify it ?
<patdk-wk> -f
<Zikey> thx :)
<patdk-wk> man fsck :)
<Zikey> there is no -f in the man :(
<Zikey> (ubuntu 8.04 LTS server)
<patdk-wk> odd
<patdk-wk> fsck --help :)
<Zikey> doesn't work either :)
<patdk-wk> it does on my 8.04 server
<twb> In what way does it "not work"?
<Zikey> not from the livecd it gives me an error (can not try again since it's checking now)
<Zikey> I will tell you the error when done :)
<Zikey> done
<Zikey> no error, it just gives me: fsck 1.40.8 (13-Mar-2008)
<Zikey> when i enter: fsck --help
<patdk-wk> must be a livecd thing, I am doing this on a live system
<twb> That's a really old CD
<twb> fsck here is 2.17.2
<patdk-wk> 1.40.8 on my 8.04 system
<twb> Huh, ok
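For the record, the `-f` patdk-wk mentions is an e2fsck option; the top-level fsck wrapper passes unrecognized options through to the filesystem-specific checker, which is why it works even though the 8.04 fsck(8) man page doesn't list it. A sketch with /dev/md0 as a placeholder device; the filesystem must be unmounted first.

```shell
# Force a full check even though the superblock says "clean":
e2fsck -f /dev/md0
# Equivalent via the wrapper, which hands -f down to e2fsck:
fsck -t ext3 -f /dev/md0
```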
<Zikey> anyone ever tried to replace a raid1 disk of an array (mdadm+lvm2) ?
<Zikey> I wonder if it's that easy to remove the first disk and put a new one
<Zikey> do you know if 8.04 support hot disk swapping ? can I remove the first disk and replace it while the system is running ?
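A rough sketch of the usual mdadm replacement procedure, assuming a two-disk /dev/md0 built from sda1 and sdb1 with sda failing; all device names are placeholders, so verify against mdadm(8) and your own layout before running anything.

```shell
mdadm --manage /dev/md0 --fail /dev/sda1     # mark the dying member failed
mdadm --manage /dev/md0 --remove /dev/sda1   # drop it from the array
# power down (or hot-swap, if the bus supports it) and fit the new disk,
# then mirror the partition layout from the surviving disk:
sfdisk -d /dev/sdb | sfdisk /dev/sda
mdadm --manage /dev/md0 --add /dev/sda1      # re-add; resync starts
cat /proc/mdstat                             # watch rebuild progress
```

Remember to reinstall the bootloader on the new disk afterwards, or the machine won't boot if the surviving disk dies next.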
<jdstrand> soren: if it actually works and has a sane workflow, sure
<soren> jdstrand: Does bound branches+dch+debcommit count as sane for you?
<patdk-wk> heh, hotswapping is a hardware issue, really
<jdstrand> soren: sure, assuming it works :)
<twb> patdk-wk: not just hardware
 * jdstrand is slightly jaded
<soren> jdstrand: We can always try and fail miserably.
<zul> slightly? :)
<soren> jdstrand: :)
<patdk-wk> ya, but linux has supported hotswapping drives for a while now
<jdstrand> :)
<twb> Buses that don't support hot-swapping natively sometimes have non-standard vendor hacks to add it, and that requires a Linux driver for that vendor
<twb> (Unless it's handled entirely in the hardware RAID card, in which case You Win!)
<patdk-wk> I do it on non-supported busses too, just force a bus reset after I swap
<Jeeves_> echo
<Jeeves_> echo '- - -' > /sys/block/dev/blah/host/blah/scan
<Jeeves_> orso :)
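The sysfs path Jeeves_ is half-remembering normally lives under /sys/class/scsi_host; the three dashes are wildcards for channel, target and LUN. A hedged sketch (needs root, and sdb is a placeholder):

```shell
# Ask every SCSI host adapter to rescan its bus for new devices:
for host in /sys/class/scsi_host/host*; do
    echo '- - -' > "$host/scan"
done
# Before pulling a disk, tell the kernel it is going away:
echo 1 > /sys/block/sdb/device/delete
```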
<tynot> anyone know the syntax to open port 5900 for VNC server?
<patdk-wk> run the vnc server?
<patdk-wk> if you mean firewall, guess it depends on what you use to manage your firewall
<tynot> nope. ubuntu's on a local network. just trying to vnc to it from a mac.
<patdk-wk> heh? no firewall?
<binBASH> living on the edge
<patdk-wk> in my case, hope you trust your coworkers, or family :)
<tynot> just for remote admin locally.
<soren> jdstrand: Alright, I think we're all set. I've just pushed a couple of changes.
<zul> kirkland: ping when you are around?
<tynot> anyone? can't get vine vnc to run on a MacBook Pro in LAN.
<jdstrand> soren: ok, so we are talking about lp:ubuntu/libvirt?
<soren> jdstrand: We are.
<soren> jdstrand: is that ok?
<jdstrand> soren: as much as I understand udd, sure -- I just wanted to make sure we were on the same page. I'm grabbing the branch now
<tynot> anyone care to share how to setup and run vncserver locally?
<soren> jdstrand: It's pretty simple, really.
<tynot> vnc4server's already installed and running?
<soren> jdstrand: When you check it out, you  "bzr checkout lp:ubuntu/libvirt".
<soren> jdstrand: If you've already branched it, you can "bzr bind lp:ubuntu/libvirt"
<soren> jdstrand: That does the same thing.
 * jdstrand nods
<tynot> soren, thx. that's the syntax to run it from a client or server side?
<soren> jdstrand: Oh, in the latter case, you also do a "bzr update" after binding, just in case. Otherwise it gets a bit confused.
<soren> tynot: huh?
<soren> tynot: Er... Talking to jdstrand about something completely different :)
<tynot> I've both a MacBook Pro and a Windoze 7 PC trying to run Vine and UltraVNC, respectively.
<soren> jdstrand: So, you make a change and use dch to add an entry to the changelog about it.
<soren> jdstrand: like so: dch "whatever you want to be in debian/changelog"
<tynot> sorry for the confusion, folks
<soren> jdstrand: When you've done that, you run debcommit.
<soren> jdstrand: This commits it to bzr with a commit log entry based on the debian changelog.
<soren> jdstrand: Being a bound branch, this change is pushed to launchpad immediately.
<soren> jdstrand: Once we want to roll a new upload, one of us will run "dch --release" which updates the timestamp in the changelog, and replaces "UNRELEASED" with "maverick" in the changelog.
<soren> jdstrand: after that, you run "debcommit --release".
<soren> jdstrand: This pushes the changes to the bzr branch and makes a tag.
<soren> jdstrand: Once this succeeds, you build the source package and upload.
<soren> jdstrand: It sounds tedious, but it's really a quite delightful workflow if more than one person is working on a package.
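soren's workflow above, condensed into one sequence; lp:ubuntu/libvirt is the branch under discussion and the changelog text is a placeholder.

```shell
bzr checkout lp:ubuntu/libvirt   # bound branch: commits push straight to LP
cd libvirt
# ...hack on the packaging...
dch "Describe the change for debian/changelog"
debcommit                        # bzr commit, message taken from the changelog
# when it is time to upload:
dch --release                    # stamp the date, UNRELEASED -> maverick
debcommit --release              # commit, tag, and push
# then build the source package and dput it as usual
```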
<jdstrand> thanks for the explanation. I've only ever tried the udd thing with merges, and always had trouble with it
<tynot> vnc, anyone?
<jdstrand> soren: it is basically the same workflow I've used on other stuff with a bzr branch, except the debcommit parts
<soren> tynot: What are you trying to do?
<jdstrand> soren: so it should be quite comfortable
<jdstrand> soren: how is the tag used in LP?
<soren> jdstrand: I'm not sure it is, really.
<soren> jdstrand: Well, you can see it in loggerhead.
<tynot> soren, uh, not be in front of the linux console? just trying to admin the ubuntu server locally from a floor below me through vnc.
<Zikey> If you change a disk (/dev/sdb) with another one, will it be mounted by default as /dev/sbc (because udev will keep /dev/sdb for the previous one) ?
<Lichte> I just installed mysql-server and client and set the root password during setup, but I get this when I try 'mysql -u root -p' :  ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES) ;   How do I get logged into mysql ???
<soren> tynot: Why do you think you need vnc?
<jdstrand> hallyn, kirkland: hey. fyi ^ please note we are going to try udd (Ubuntu Distributed Development) with libvirt. if you are unfamiliar with it, soren just gave a really good explanation of the workflow
<soren> tynot: ssh is what you need, mate.
<tynot> soren, clients are a Windoze 7 PC and a MacBook Pro. Either of them would be great to setup. I've UltraVNC on the PC and Vine on the MacBook. I don't know that I need VNC. I'm not married to it, although we're stuck in a PC world here without too much time.
<soren> tynot: so stop wasting time on vnc and install ssh. It's all you need.
<tynot> soren, ssh for Windoze?
<zoopster> tynot: came in late, but just use putty on the win7 pc and native terminal ssh from OSX
<zoopster> tynot: assuming you want to ssh into ubuntu server from those clients
<soren> tynot: The idea of <insert whatever here> for Windows is a bit foreign to me, but I know for a fact that there are ssh clients for Windows.
<tynot> I don't mind so long as we get a gui. Not adept enough to get around in a command line world anymore. I've already forgotten most of that stuff, unfort. VI was hell.
<soren> tynot: You don't get a gui.
<soren> tynot: If you want a gui, this is the wrong channel.
<tynot> bye
<soren> jdstrand: In fact, I'll roll a release straight away. The patch I added was just ACK'ed upstream, so I might as well.
<jdstrand> cool beans
<hggdh> Daviey: got a question re. euca2ools and Maverick
<zul> hggdh: no
<hggdh> Daviey: and good morning/afternoon
<hggdh> zul: oh yes
<hggdh> :-)
<zul> hggdh: no really...no
<hggdh> zul: absolutely yes. Just a small, tiny really, question
<Daviey> hggdh: o/
<soren> jdstrand: I suck.
<hggdh> Daviey: bug 615442 -- would this be fixed on 1.3? Just asking, I intend to check on it
<uvirtbot> Launchpad bug 615442 in euca2ools "euca-describe-users fails with ImportError: No module named euca_admin.users" [Medium,New] https://launchpad.net/bugs/615442
<Lichte> I just installed mysql-server and client on 10.04 Server and set the root password during setup, but I get this when I try 'mysql -u root -p' :  ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES) ;   How do I get logged into mysql ???
<Daviey> hggdh: I don't think that bug is directly related to euca2ools... i think it's eucalyptus
<todd> Lichte: did you set a root password when you installed mysql?
<Daviey> hggdh: But can be fixed with what we have already.
<todd> Try without the -p and see if it lets you in.
<hggdh> Daviey: I had a look at the eucalyptus source, and there is ./clc/tools/src/euca_admin that seems to have the imports, but not quite as python would like them
<Lichte> todd: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
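Two common ways out of Lichte's situation, sketched under the assumption of a stock 10.04 mysql-server-5.1 install; the package name and 'newpass' are placeholders.

```shell
# Easiest: let the maintainer scripts set the root password again.
sudo dpkg-reconfigure mysql-server-5.1

# Manual route: run mysqld with grant checks disabled and reset it by hand.
sudo service mysql stop
sudo mysqld_safe --skip-grant-tables &
mysql -u root -e "UPDATE mysql.user SET Password = PASSWORD('newpass')
                  WHERE User = 'root'; FLUSH PRIVILEGES;"
sudo killall mysqld              # stop the temporary instance
sudo service mysql start
```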
<soren> jdstrand: The quilt source packages are special. When I add a patch, I need to add it both to debian/patches /and/ apply it to the source tree.
<hggdh> Daviey: one thing that I would like to have is euca-add-user (instead of going to the web interface)...
<Daviey> hggdh: This is indeed the case... it's an issue that has been deferred whilst I wait for upstream to land a patch I sent them
<hggdh> Daviey: oh, OK. I am happy now
<hallyn> jdstrand: alright, thx for the heads-up
<Daviey> hggdh: We will have that working..
<soren> jdstrand: Uploaded. Let's see what happens.
<soren> jdstrand: Oh, bleh, we're frozen, aren't we?
<jdstrand> soren: for beta yeah
<soren> jdstrand: Oh well.
<jdstrand> it'll get in after
<soren> zul: do you happen to know the state of Xen dom0 in Maverick?
<Lichte> damn, am I going to have to go back to 9.10 /
<zul> soren: non existant
<zul> soren: it's something you will have to cobble together yourself
<smoser> Daviey, euca-add-user is broken ?
<Daviey> smoser: Depends..
<Daviey> smoser: I think it is working on the CLC
<smoser> ah.
<soren> zul: Lovely.
<soren> zul: Do you happen to know how Debian's Xen dom0 support is?
<zul> soren: better than ours i think
<daxroc> Afternoon all
<ttx> hallyn: about bug 599342, do you think we can have a fix in the next few hours (in time for Beta) or should I unmilestone it ?
<uvirtbot> Launchpad bug 599342 in libvirt "Temporary failure in name resolution" [Medium,Incomplete] https://launchpad.net/bugs/599342
<daxroc> Is the cloud version of ubuntu commercial or is there a free version?
<daxroc> or is it that you pay for the Management console ( landscape )?
<hallyn> ttx: we have a fix, we're waiting for confirmation that it fixes the bug reporter's original problem
<hallyn> i'll ping him one more time i guess
<ttx> daxroc: we don't do commercial. Our cloud "version" is free and open source
<hallyn> i suppose i'd argue that if no one else reports it, and he doesn't care for the fix, then we just leave it as is?
<ttx> hallyn: if it makes it before the euca upload/build/respin, then it'll be in beta, otherwise we'll fix it between beta and finalfreeze.
<ttx> I guess we won't respin beta candidate just for this one.
<hallyn> ttx: well the fix is there - should i jsut be proposing it for merge?
<ttx> hallyn: it fixes the bug as far as you reproduced it ?
<ttx> hallyn: looking at it, I think it can wait post-beta.
<hallyn> ttx: i didn't reproduce the bug.  it requires mucking with my dns.
<ttx> ok then we'll wait for OP comment, and fix it post-beta. Unmilestoning it.
<ttx> Daviey: that makes your eucalyptus upload the only expected respin, as things stand.
<ttx> so, the earlier, the better.
<hallyn> ttx: for my education, what does 'a respin' entail, in particular on Daviey's part?
<Lichte> I just installed mysql-server and client on 10.04 Server and set the root password during setup, but I get this when I try 'mysql -u root -p' :  ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES) ;   How do I get logged into mysql ???
<Daviey> ttx: Hmm.. Well - it could be deferred to post-beta.. Whilst I really would like as many of the small fixes in beta, it's not gonna kill the show.
<ttx> hallyn: to make it onto the ISO, packages uploaded during the freeze need acceptance by the release team, building, and publishing. Then a new ISO must be built (ISO must be respun), then published to the tracker
<ttx> the whole process can take a few hours
<ttx> Daviey: how many of those would qualify as regressions ?
<Daviey> ttx, possibly 0
<ttx> I was mostly concerned by bug 617053
<uvirtbot> Launchpad bug 617053 in eucalyptus "on upgrade from 1.6.2, euca_upgrade should preserve DISABLE_ISCSI="Y"" [Medium,Fix released] https://launchpad.net/bugs/617053
<ttx> being an upgrade issue
<ttx> fixreleased ?
<Daviey> good point.
<Daviey> ttx: Simple fix, but blocked on a more generic upgrade issue.
<ttx> Daviey: are we expecting upgrades to work with current ? with next upload ? Or after that ?
<Daviey> ttx next upload.
<ttx> ok, then I'd like to get that new euca in.
<Daviey> Whilst i'm not convinced many will be doing the upgrade initially, i'd really like that to be rock solid pre-beta
 * Daviey wonders if he is still "here"
<Zikey> what is the best way to make a perfect clone (identifier+mbr+partition table+data) of a disk on ubuntu ?
<Zikey> dd or ddrescue ?
<patdk-wk> they do the same thing :)
<patdk-wk> ddrescue is just for broken disks
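To Zikey's question: both tools make byte-for-byte copies (MBR, partition table, filesystem UUIDs, and data in one pass); ddrescue just retries and logs around bad sectors. A sketch with placeholder device names; run it with both disks unmounted and triple-check which disk is `if=` and which is `of=`.

```shell
dd if=/dev/sda of=/dev/sdc bs=1M conv=noerror,sync
# For a failing source disk, GNU ddrescue skips and retries bad areas
# and records progress in a log file so an interrupted run can resume:
ddrescue /dev/sda /dev/sdc /root/clone.log
```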
<tynot> softball ? syntax to promote user to admin, por favor?
<ScottK> ttx: Thanks.
<ttx> hggdh, smoser, kirkland, daviey: you have a few old TODOs on the https://blueprints.launchpad.net/ubuntu/+spec/server-maverick-dailytriage spec that you should mark DONE or POSTPONED
<ttx> mathiaz: that was my fear with those "tracking-only" specs: that they are not up to date and they corrupt the rest of the charts and completion rates
<ttx> adding work for the only benefit of getting the regular task in the same common list
<zul> smoser: where is the ttylinux images again?
 * Daviey screams at slow t'internet today.
<Daviey> this is so painful.
<smoser> zul, http://smoser.brickies.net/ubuntu/ttylinux-uec/
<tynot> anyone? realize it's a rudimentary question, but I've forgotten the syntax to promote a user to admin. help, please?
<mathiaz> ttx: hm - I wonder whether it helped people to actually do them
<mathiaz> ttx: ie: does the fact that they show up on the task list made them being worked on
<sherr> tynot: if you mean, add them to the "admin" group - see : man addgroup
<tynot> never mind. got it.
<mathiaz> ttx: if so I'd argue that the experiment was good
<tynot> sherr, thx.
<mathiaz> ttx: I'd argue that getting things done is more important than reporting being accurate
<tynot> needed to give a user on the linux install admin privileges is all.
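The syntax sherr points at, spelled out; "alice" is a placeholder username, and this assumes an Ubuntu release of this era where the "admin" group carries full sudo rights via /etc/sudoers.

```shell
sudo adduser alice admin   # add alice to the admin group
groups alice               # verify; takes effect at her next login
```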
<ttx> mathiaz: the experiment continues :)
<mathiaz> ttx: we shuld work on improving the reporting though
<mathiaz> ttx: *should*
<mathiaz> ttx: so that it's *also* useful on the reporting front
<ttx> mathiaz: I'd argue that weekly tasks don't mix that well with tasks that don't have to occur on a specific day
<ttx> (especially weekly tasks that you can potentially miss)
<tynot> sherr, btw, what's the syntax here to address someone (in red) like you've just done to me. irc newbie.
<mathiaz> ttx: true - tasks that should happen on a specific day should go on the calendar
<mathiaz> ttx: IIRC jiboumans created an all-day event for weekly summaries in our calendar
<mathiaz> ttx: that's another way to do it
<ttx> mathiaz: it kinda conflicts with the event I already have on Monday to take care of it :)
<mathiaz> ttx: lol - same here
<ttx> but yes, calendar events are the way to track weekly-that-you-can-miss type of activity
<qman__> anybody know of a good, recent samba performance tuning guide? everything I'm finding is for very old versions and assumes old network technology
<qman__> my whole network is gigabit and the servers in question have RAID, so I'm dealing with significantly higher speeds than the "Windows 95 and NT" setups mentioned in the guides
<_ruben> hmm .. wonder if i should look into the same, or just be happy with the like 300Mbps or so throughput i currently have
<hallyn> kirkland: are there kvm backports for hardy?  (i don't see them in the ubuntu-virt ppa)
<siretart> hm. I'm trying to create a new VM with virt-manager/libvirt both on maverick, but I get an "libvir: Security Labeling error : internal error error calling aa_change_profile()"
<siretart> (typos copied)
<siretart> is this a known issue?
<siretart> btw, I see no apparmor warnings in /var/log/kern.log
<qman__> and of course we have gigs of RAM where they had megs at the time, so throwing another 32k at a buffer here or there is wholly insignificant to the rest of the system, but might speed up transfers
<sbeattie> siretart: by any chance, do you have auditd running?
<smoser> ttx, do you know of any reason why i should not start testing ec2 20100830 ?
<smoser> err.. 20100831
<sbeattie> (it would cause apparmor messages to not appear in dmesg/kern.log out, but rather /var/log/audit/audit.log if you do)
<siretart> sbeattie: no, there is no process running with that name. how to start it?
<siretart> ah, installing the package auditd seems to help
<sbeattie> siretart: no worries, it's not installed by default, wanted to make sure it was the reason you weren't seeing messages.
<sbeattie> err, it wasn't the reason.
<siretart> ah, now I'm seeing these two messages in the audit.log:
<siretart> type=ANOM_PROMISCUOUS msg=audit(1283270700.144:20): dev=vnet0 prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295
<siretart> type=ANOM_PROMISCUOUS msg=audit(1283270700.152:21): dev=vnet0 prom=0 old_prom=256 auid=4294967295 uid=118 gid=127 ses=4294967295
<siretart> can someone translate this to english? ;-)
<qman__> vnet0 went in and then out of promiscuous mode
<siretart> and this is something that is not allowed? I want to create a new virtual machine, and eventually start it
<jdstrand> siretart: those messages aren't denials. apparmor denials will still have the familiar 'DENIED' in the output
<jdstrand> siretart: it is just letting you know that the process with uid=118 and gid=127 (presumably libvirt-qemu:kvm) put the vnet0 interface in promiscuous mode
<jdstrand> siretart: do you have the domain xml for the machine you are trying to create?
<siretart> jdstrand: I'm using maverick's virt-manager to create the machine
<siretart> I guess the xml is generated on the fly
<jdstrand> siretart: what is the name of the vm?
<siretart> 'debtest'
<jdstrand> siretart: can you paste the output of 'virsh dumpxml debtest'?
<jdstrand> it should already be defined
<jdstrand> (since the failure is in trying to start it)
<siretart> no, it has not been defined yet. this is the step that fails
<siretart> virsh list --all doesn't list any machine
<jdstrand> siretart: are you sure? 'virsh list --all' should show it
<jdstrand> hmm
<jdstrand> maybe virt-manager tries to be smart and undefines on failure...
<jdstrand> siretart: does the libvirt-qemu:kvm user have write permissions to the directory that you are trying to put the disk image?
<siretart> jdstrand: yes, and the image has been created successfully. I've done 'sudo chmod 1777 /var/lib/libvirt/images'
<siretart> it's owned by root:root
<jdstrand> siretart: I'm confused by this statement: "the image has been created successfully", yet you don't have xml?
<siretart> exactly
<siretart> 304214    4 drwxrwxrwt   2 root     root         4096 Aug 30 21:24 /var/lib/libvirt/images/
<siretart> 304331    0 -rw-------   1 root     root     8589934592 Aug 30 21:24 /var/lib/libvirt/images/debian.img
<siretart> but no output of virsh list --all
<jdstrand> siretart: you created /var/lib/libvirt/images/debian.img yesterday?
<siretart> yes, I did. via virt-manager connecting to 'localhost'
<jdstrand> siretart: what does /var/log/libvirt/qemu/debtest.log have to say?
<siretart> the unusual part of this setup is that this is actually a kvm virtual machine, that I'm using for my maverick tests. I wanted to see what improvements mavericks brings in the libvirt camp, so I've installed libvirt-bin and virt-manager and tried to setup a qemu VM
<kirkland> zul: yo!
<siretart> that logfile contains the commandline to qemu, and "libvir: Security Labeling error : internal error error calling aa_change_profile()"
<siretart> (the typo in 'libvir' is actually there)
<jdstrand> siretart: I'm not 100% sure virt-manager can be used in this way...
<jdstrand> siretart: can you paste the full command line?
<siretart> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu -S -M pc-0.12 -no-kvm -m 496 -smp 1,sockets=1,cores=1,threads=1 -name debtest -uuid 1e6f1921-e986-84c3-9d4e-9c52c20fa7be -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/debtest.monitor,server,nowait -mon chardev=monitor,mode=readline -rtc base=utc -no-reboot -boot d -drive file=/var/lib/libvi
<siretart> rt/images/debian.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive file=/var/lib/libvirt/images/debian-testing-i386-netinst.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -device rtl8139,vlan=0,id=net0,mac=52:54:00:ac:04:8e,bus=pci.0,addr=0x3 -net tap,fd=27,vlan=0,nam
<siretart> e=hostnet0 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -vga cirrus -device AC97,id=sound0,bus=pci.0,addr=0x4 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
<siretart> jdstrand: in what way? connecting to localhost and create a qemu VM?
<jdstrand> siretart: importing an existing image
<jdstrand> it might, I just don't know
<siretart> well, the gui offered me that
<jdstrand> ok
<siretart> I can also delete and retry it, but I do remember that the error message was exactly the same the first time I tried that
<jdstrand> siretart: can you paste the following: cat /etc/apparmor.d/libvirt/libvirt-1e6f1921-e986-84c3-9d4e-9c52c20fa7be*
<siretart> sure: cat: /etc/apparmor.d/libvirt/libvirt-1e6f1921-e986-84c3-9d4e-9c52c20fa7be*: No such file or directory
<siretart> root@siretart-desktop:/var/log/audit# find /etc/apparmor.d/libvirt/ -ls
<siretart> 656106    4 drwxr-xr-x   2 root     root         4096 Aug 30 21:20 /etc/apparmor.d/libvirt/
<siretart> 656107    4 -rw-r--r--   1 root     root          164 Aug 25 21:44 /etc/apparmor.d/libvirt/TEMPLATE
<jdstrand> siretart: my feeling is this suggests no xml and virt-manager/libvirt is not erroring out in the right place and we aren't getting a useful error
<jdstrand> siretart: can you file a bug with exact steps to reproduce. please file it against virt-manager for now
<jdstrand> siretart: if it is somewhere else, we can move it
<siretart> instructions are easy: install maverick, install virt-manager and libvirt-bin, try to install debian squeeze via netinst iso
<jdstrand> siretart: you already said that you did something 'different' though
<jdstrand> siretart: you used an existing image
<siretart> I'm removing it and try with a new one
<jdstrand> siretart: we need all those differences. virt-manager is confirmed to work with an iso install
<siretart> yepp, I've now tried with creating a new image, exactly the same problem
<jdstrand> siretart: please file a bug
<kirkland> hallyn: there's a kvm-84 for hardy
<kirkland> hallyn: we can't backport much beyond that as we dropped the dkms kernel module from the kvm package build
<kirkland> hallyn: i know our kernel team are supposed to be publishing backported kernels soon
<siretart> jdstrand: Bug #627514
<uvirtbot> Launchpad bug 627514 in virt-manager "[maverick] fails to create a new VM: libvir: Security Labeling error : internal error error calling aa_change_profile()" [Undecided,New] https://launchpad.net/bugs/627514
<kirkland> hallyn: with those, we could backport qemu-kvm, and depend on a particular kernel version, with a new enough kvm module
<kirkland> hallyn: however, i would strongly advise against anyone wanting to run production KVM against 8.04
<siretart> jdstrand: I've even included a screenshot of the error message
<kirkland> hallyn: strongly suggesting that they either upgrade to, or deploy 10.04
<kirkland> hallyn: since we have a newer LTS, with a production-ready hypervisor
<kirkland> hallyn: 8.04's KVM was tech-preview, at best
<kirkland> hallyn: and the backport package is visible at https://edge.launchpad.net/ubuntu/+source/kvm
<kirkland> hallyn: that's in the official backports repo
<jdstrand> siretart: thanks
<siretart> jdstrand: anything else I should add to the bug while I have the machine still on?
<jdstrand> siretart: not at this time, thanks
<hallyn> kirkland: i've got a testbed set up, meanwhile i made the suggestion
<hallyn> kirkland: i'll be honest, the kvm bug backlog is my primary source of stress and dismay these days
<mathiaz> ttx: heya
<mathiaz> ttx: was wondering about your findings about hudson?
 * SpamapS <heart> CI systems.
<mathiaz> SpamapS: do you know of other CI systems?
<SpamapS> mathiaz: CruiseControl is sort of the sad cousin of Hudson. ;)
<kirkland> hallyn: ack
<kirkland> hallyn: it's a PITA
<hallyn> kirkland: also a PITA? launchpad has been pinning my cpu since the last update
<hallyn> $*&%(*$&%(*
<kirkland> hallyn: launchpad?
<kirkland> hallyn: as in, rendering a web page?
<hallyn> yup
<hallyn> just started this morning
<hallyn> kirkland: would server team meeting today be a good time to bring up the question of updating lucid kvm to 0.12.5 or something?
<hallyn> kirkland: (that would, for instance, solve bug 574665)
<uvirtbot> Launchpad bug 574665 in qemu-kvm "kvm + virtio disk corrupts large volumes (>1TB)." [High,Triaged] https://launchpad.net/bugs/574665
<kim0> Hi folks .. I can't make today's meeting, since I have a commitment. There's no real updates from my side since I'm mostly just polishing the cloud portal
<kuttan_> hi is there any channels discussing snmp / oids thanls
<kuttan_> thanks
<kirkland> hallyn: you would have to upload 0.12.5 to lucid-backports
<kirkland> hallyn: you can't do a major version bump in an LTS
<kirkland> hallyn: but that's fine by me (uploading 0.12.5 to lucid-backports)
<hallyn> should i ask if there are objections at team mtg, or just do it?
<mathiaz> jjohansen: hi!
<jjohansen> mathiaz: hi
<mathiaz> jjohansen: is it normal that linux-virtual installs 124M of modules?
<jjohansen> mathiaz: define normal?
<mathiaz> jjohansen: one of the goals of creating the linux-virtual package was to have a stripped-down kernel
<mathiaz> jjohansen: in lucid linux-virtual modules would take less than 40 MB
<jjohansen> mathiaz: its a known issue, basically when -virtual was split off of server it lost its pared down modules config and I need to go in and trim again
<mathiaz> jjohansen: ok great
<mathiaz> jjohansen: is there a bug about that already?
<jjohansen> yeah, just a sec
<jjohansen> mathiaz: Bug #621175
<uvirtbot> Launchpad bug 621175 in linux "-virtual kernel contains too many modules" [Undecided,New] https://launchpad.net/bugs/621175
<mathiaz> jjohansen: great - thanks
<yann2> is it possible to use scp in combination with sudo? ie: i want to copy via ssh files from a remote server to my local server, the remote files belong to root:root and are readable only by root, and the root login is disabled via ssh
<ttx> mathiaz: I just looked quickly into it. It's quite basic but pushes towards best practices. I'd compare it to Nagios
<ttx> i.e. no black magic, but does what it should do, and well.
<mathiaz> ttx: but in a different domain right?
<yann2> is there anything shorter than copying the files to another location on the remote server, chmod them, copy them, move them again, and rechmod them?
<ttx> mathiaz: oh yes.
<mathiaz> ttx: were you looking at hudson as a product to package or to use in the team?
<ttx> mathiaz: more as part of my "staying current" objective. I already looked at it from a packaging perspective
<hallyn> yann2: so you want the target files owned by root:root?  Only way I can think of would be to do it as root from target machine, scp'ing with the source being remote :)
<mathiaz> ttx: right - I'm more interested as a CI to use in the server team
<mathiaz> ttx: I've just refactored my iso testing scripts
<ttx> mathiaz: it's your classic Java thing: too many deps to package, compared to the ease of use of "just" deploying the prebuilt war
<mathiaz> ttx: and I may look at using hudson to track all the iso testing
<mathiaz> ttx: now that I'm able to fully automate tests
<yann2> hallyn, yeah, but then I couldn't scp them to the folder I want, as the normal user wouldnt have access to there :)
<ttx> mathiaz: Hudson allows CI, but you can also use it to track any result
<yann2> also gets complicated with ssh agents, if I sudo -s to root on the remote host, I lose my agent connection
<mathiaz> ttx: right - does it allow attaching random pieces of information to results?
<hallyn> you do what?
<mathiaz> ttx: like installation logs?
<ttx> mathiaz: there is an interface that allows you to post a result, I think you could attach anything to it
<hallyn> yann2: so i think the answer is "no, because otherwise you lose any safety of disabling root remote logins in the first place" :)
<yann2> well not if the user has sudo anyway...
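A hedged sketch of the scp-with-sudo workaround being discussed: run the privileged read on the remote side and stream the bytes back over the existing ssh session. The hostname and paths below are placeholders, and this assumes sudo won't prompt for a password (a prompt would corrupt the stream).

```shell
# Copy a root-only remote file without enabling remote root logins:
# sudo runs only on the remote side; the file contents come back over stdout.
ssh user@remote.example.com 'sudo cat /etc/root-only.conf' > root-only.conf

# For a directory tree, a tar pipe keeps ownership and permissions
# when unpacked as root on the local side:
ssh user@remote.example.com 'sudo tar -C /etc -cf - ssl/private' | sudo tar -C /tmp -xf -
```

This also keeps the ssh agent usable, since there is never an interactive root session on the remote host.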
<kirkland> hallyn: fyi, https://help.ubuntu.com/community/UbuntuBackports
 * RoyK just wrote a perl thing to find duplicate files in a directory hierarchy - nice in case you have tons of mp3s or other large files on a chaotic filesystem
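RoyK's duplicate-finder idea can also be sketched in a few lines of shell, assuming GNU coreutils (`md5sum`, `uniq -w`/`-D`); the function name `dupes` is made up here:

```shell
# dupes DIR -- print every file in DIR whose content duplicates another file's,
# grouping by MD5 checksum (a rough shell stand-in for a perl duplicate finder)
dupes() {
    find "${1:-.}" -type f -print0 \
        | xargs -0 md5sum \
        | sort \
        | uniq -w32 -D    # -w32: compare only the 32-char hash column; -D: print all duplicates
}
```

Usage: `dupes ~/Music` lists each set of identical files on adjacent lines.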
<Krazyderek> looking for a good way to backup 12 google apps accounts, including sent mail
<Krazyderek> is postfix the best solution? or do i want something a bit more custom to dealing with google accounts?
<RoyK> fetchmail?
<Krazyderek> @RoyK have you used it for a while?
<RoyK> not with google
<Krazyderek> hmmm i guess i'd like something that will download everything at a set time, say 3am, to keep a backup of google, and in the event someone blows up google or something, that i could just install thunderbird on everyone's desktop and just point it at our linux box downstairs and keep on chuggin' along
<RoyK> Krazyderek: fetchmail can feed the mail into an MTA like postfix
<Krazyderek> RoyK: sorry MTA?
<kirkland> smoser: ping
<kirkland> smoser: have you gotten the ttylinux image running in UEC/Maverick yet?
<smoser> i think so , yeah
<kirkland> smoser: if not, do you have a running UEC where you can try it?
<kirkland> smoser: can you check?
<smoser> http://smoser.brickies.net/ubuntu/ttylinux-uec/
<kirkland> smoser: i'm trying to run it in a vm
<kirkland> smoser: ie, in qemu alone
<smoser> it should be fine.
<kirkland> smoser: hrm, it's not coming up ...
<kirkland> smoser: well, i used a really old one
<kirkland> smoser: do you have console working with these new images?
<RoyK> Krazyderek: mail transport agent
<smoser> what is console ?
<RoyK> Krazyderek: something like postfix or sendmail or whatever
<smoser> remember that maverick euca-console-output was recently broken.
<smoser> i'm not sure if its fixed now or not
<kirkland> smoser: euca-get-console
<kirkland> smoser: so that i can see where it's stuck, if it's stuck
<kirkland> smoser: what's the username/password in that image?
<kirkland> smoser: root/linux ?
<smoser> i dont know.
<Krazyderek> RoyK: ah so would i setup both now? or just fetchmail? looks like i'd be using ETRN or ODMR modes and both are unfamiliar to me
<RoyK> fetchmail can download your email easily using imap or pop3 and feed it to postfix, which can store it in a dovecot store or something
<smoser> i just verified: wget http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-i686-11.2_2.6.35-16_3.tar.gz -O out.tar.gz && uec-publish-tarball out.tar.gz foobucket i386 && euca-run-instances --key mykey $AMI
<smoser> and reached the instance fine.
<smoser> console output is present.
<RoyK> Krazyderek: postfix doesn't do stuff like imap or pop, it's a plain MTA, but a jolly good one. Dovecot can do the storage, as a POA (post office agent)
<Krazyderek> RoyK: i have an ubuntu server book here for the postfix and dovecot stuff, and i can probably feel my way through that for one user, it's just keeping everything in order for all 12 people that worries me
<kirkland> smoser: you rock, thanks
<RoyK> Krazyderek: dovecot can easily handle quite a few users
<RoyK> thousands
<Krazyderek> RoyK: but how does fetchmail?
<RoyK> fetchmail is just run with a cron job
<RoyK> it fetches mail, basically
<RoyK> for 12 users, that won't be much of a problem
<Krazyderek> hmm
<Krazyderek> RoyK: but i do need it, right? postfix can't do this for me?
<Krazyderek> RoyK: sorry first mail server
<cloakable> postfix won't fetch mail for you, mail needs to be sent to it.
<RoyK> afaik postfix can't do pop or imap, so it can't acquire email from another host
<cloakable> postfix does smtp only, yea
<RoyK> yes, as cloakable said, use something like fetchmail to download the stuff
<pmatulis> Krazyderek: consider setting up a typical mail server and then have google accounts send a copy of all mail to it
<RoyK> feed it into postfix, and have postfix send it to dovecot (or something)
<RoyK> pmatulis: given you have a static IP, that's probably the best
<cloakable> Unless your isp has a smarthost, of course :)
<pmatulis> cloakable: smarthost is for outgoing mail, not incoming
<Krazyderek> we'd have to pay at least $15 a month extra for a static
<RoyK> Krazyderek: then try fetchmail
<pmatulis> Krazyderek: i don't understand, you're worrying about google blowing up but you can't afford a static IP?  ;)
<Krazyderek> hmmm fetchmail has I9 in their FAQ saying it doesn't play too well with gmail as of April 2008
<cloakable> Krazyderek, pmatulis: Which would be a problem if your MX record was an ip address :P
<RoyK> fetchmail -> postfix -> dovecot
 * cloakable receives mail on a dynamic ip
<pmatulis> Krazyderek: you can easily use the dynamic ip checker method - works very well
<cloakable> Mmmm
<JasonMSP> already did a google search.  Im unfamiliar with terminology so my search isn't great.  Im trying to understand SSL in order to configure VSFTPD for secure transfers (so passwords aren't sent in the clear).  I can easily setup the VSFTP.conf options, but I need help with what else to make happen for the connection.  Certificates, etc.. Can someone point me to a clear post that I can read up on?
<Krazyderek> i do have a no-ip account for remote desktop and vpn addressing
<kirkland> smoser: hrm, well, okay, i just published your latest tarball to my all-in-one-UEC-running-in-a-beefy-KVM
<kirkland> smoser: published fine (thanks)
<kirkland> smoser: running the instance, it goes to the "running" state no problem
<kirkland> smoser: note that i'm using --addressing private
<cloakable> kirkland: Use that as your MX record then *shrug*
<kirkland> smoser: it stays in the running state (ie, it doesn't commit suicide)
<smoser> do you get console output ?
<kirkland> smoser: negative
<smoser> then you need to fix that first.
<kirkland> smoser: been running for ~350 seconds
<kirkland> smoser: all i get is the timestamp back
<kirkland> smoser: when i euca-get-console-output
<smoser> you're running into bug 619843
<kirkland> smoser: well, instance id, then timestamp
<cloakable> Erk
<uvirtbot> Launchpad bug 619843 in eucalyptus "euca-get-console-output returns one single line" [High,Fix committed] https://launchpad.net/bugs/619843
<Krazyderek> pmatulis: i'm not sure if the "forward copy" option in google will work for sent mail, so you'd have to BCC mail to yourself and setup some kind of rule to put it in the sent items....errr..
<kirkland> smoser: ah
 * kirkland goes tackle that nasty bugger
<pmatulis> Krazyderek: huh?
<smoser> kirkland, that is probably supposed to be fixed
<smoser> Daviey, its not fix released ?
<Krazyderek> pmatulis: when you were saying setup a regular mail server and have google send a copy
<pmatulis> Krazyderek: ok, it won't work?
<Krazyderek> pmatulis: not for sent mail i don' think
<pmatulis> Krazyderek: not sure why
<Krazyderek> pmatulis: i thought the forwarding only applied to incoming mail
<RoyK> http://karlsbakk.net/fun/sinking.mpg
<Krazyderek> pmatulis: i'll try it now
<kirkland> Daviey: what change did you make?  did you just add eucalyptus to the kvm group?
<Krazyderek> pmatulis: ya incoming only
<Krazyderek> i want to have a record of all sent mail backed up as well, so hopefully fetchmail is working better with google these days :S
<Krazyderek> RoyK: i don't see a listing for postfix under the fetchmail website, do you have some recommended settings or a reliable tutorial?
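A minimal ~/.fetchmailrc sketch for the fetchmail → postfix → dovecot chain RoyK describes; the account name, password, and local user are placeholders, and IMAP access has to be enabled on the Gmail side. fetchmail's default delivery is SMTP to localhost, which is where postfix listens.

```
# ~/.fetchmailrc -- pull one Gmail account over IMAP/SSL per run
poll imap.gmail.com proto IMAP
    user "alice@example.com" password "secret" is alice here
    ssl keep          # keep: leave the original copy on the Gmail side
```

For the 3am schedule, a cron entry such as `0 3 * * * fetchmail -f ~/.fetchmailrc` would do one run per night; one stanza per user covers all 12 accounts.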
<kirkland> smoser: sweet!
<smoser> what'd you find ?
<kirkland> smoser: i've got console output, and i can see my failure :-)
<kirkland> smoser: you da man
<smoser> what is the failure?
<kirkland> smoser: well, i have a mishmash of network hackery i need to sort out
<kirkland> smoser: basically, i need to fix my networking setup in eucalyptus.local.conf
<kirkland> smoser: but the bugger is most definitely running!
<JasonMSP> okay let's ask that question again in a different way. Can anyone point me to a primer on creating secure connections with ubuntu? like the ultimate basics so that i understand how the process works and then can apply it to VSFTP?
<JasonMSP> (im looking at TLS in wiki now)
<qman__> secure and FTP are mutually exclusive items
<qman__> you can sandbox it off in its own restricted corner, but you cannot secure FTP
<JasonMSP> qman: vsftp from what I understand can be configured with TLS/SSL.  But again I really don't have any idea what im doing.
<qman__> if it can, it's a new feature that I am unfamiliar with
<qman__> I use the sftp built into openssh
<qman__> as far as FTP daemons go, VSFTP is the best I have used, but it's still FTP
<JasonMSP> Im working on the ability for clients to FTP into their sites.  Each has a user id with which they can FTP, SSH is disabled so they can't get in to the server, and they are chrooted to their web.com folder.
<JasonMSP> Im still weighing if it is necessary to secure the connection, but i don't like the idea of passwords being sent in the clear.
<JasonMSP> http://wiki.vpslink.com/Configuring_vsftpd_for_secure_connections_%28TLS/SSL/SFTP%29
<qman__> centos, yuck
<qman__> good information though
<qman__> should be the same on ubuntu swapping yum for apt-get
<qman__> and /etc/rc.d/init.d for the service command
<JasonMSP> yes..  I tried this last week and didn't get it working.  It may be the clients im working with to get in, which is why I need the bare basics to understand the process.  I know how to create the certificates, but then the client connecting doesn't work with what I've tried so far.
<JasonMSP> ive been using sudo service vsftpd restart
<qman__> well, that's what's known as FTP/S, not SFTP
<qman__> so make sure your clients are in the right mode
<JasonMSP> ok then where can I get basics on the difference between all those.
<qman__> with TLS it should still use port 21, too
<qman__> SFTP is FTP over SSH
<qman__> FTP/S is FTP with TLS encryption
<JasonMSP> ok.  so then im looking for FTP/S...  Thanks!
<qman__> a number of other protocols work like that too
<qman__> TLS uses the standard, clear port and then starts encryption
<qman__> where SSL uses a separate port, like how HTTPS uses 443
<JasonMSP> does that mean password will still be sent in the clear?
<qman__> no
<qman__> the initial connection handshake is, then the STARTTLS command is sent
<qman__> and the rest continues like SSL
<JasonMSP> ah
<JasonMSP> (i hear the angels singing in the background)
<qman__> now, I don't know if FTP/S transfers the files over the encrypted channel, or what other ports, if any, it uses, you'll have to look that up
<qman__> but I do know that the authentication and control are handled over the secured connection
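For the Ubuntu side of that CentOS walkthrough, the vsftpd.conf options below (taken from vsftpd's own option set) sketch explicit FTPS; the certificate path is a placeholder for one generated with openssl.

```
# /etc/vsftpd.conf -- explicit FTPS (TLS on the normal port 21)
ssl_enable=YES
rsa_cert_file=/etc/ssl/private/vsftpd.pem
force_local_logins_ssl=YES    # refuse passwords sent in the clear
force_local_data_ssl=YES      # encrypt file transfers too, not only the control channel
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
```

`force_local_data_ssl` is the knob that decides whether transfers themselves use the encrypted channel; restart afterwards with `sudo service vsftpd restart` (the Ubuntu equivalent of the /etc/rc.d/init.d step on that page).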
<Krazyderek> RoyK: i sink i'm going to tackle this one tomorow, thanks for the tip i'll see how it goes on a small account first
<Krazyderek> exit
<JasonMSP> Does SFTP require users to have shell access?
<qman__> no
<qman__> it uses sshd, but the users can be easily denied shell access
<qman__> however, the users must have a valid shell defined
<JasonMSP> ok so it would be just as easy for them to SFTP.  ive read that an option is /usr/sbin/nologin   ??
<JasonMSP> (easy because the client software is easily come by)
<qman__> lots of clients support SFTP, on many platforms
<qman__> however, I don't think nologin works
<qman__> I can test and verify
<qman__>  /bin/false does not work
<JasonMSP> i've got my ssh config file setup to allow only those in the group sshlogin
<JasonMSP> so that should secure anyone else from shell access.
<qman__> nologin does work
<qman__> a bit surprising, guess I'm making some changes
<JasonMSP> so as long as their shell field is set to nologin, and they don't have ssh access (because they are not in the group), they should only be able to access their folder.  What about CHROOTing them to their web folder, this should still be accomplished by VSFTPD
<JasonMSP> (their folders via FTP that is)
<qman__> VSFTPD has nothing to do with SFTP using sshd
<qman__> they are completely separate
<qman__> whichever one you are using, you need to configure chroot with
<qman__> or both, if using both
<JasonMSP> VSFTPD is not needed if you are doing SFTP.  ok.  Is this because the client software acts as if it is FTPing, but in reality it is being done through SSH?  As you can see im getting lost on the fundamentals which is what has been making this difficult.
<qman__> SFTP is an internal function of sshd
<qman__> it works like FTP but is built in
<JasonMSP> ie its built in.
<qman__> you can use other SFTP subsystems with sshd, but the internal one is the one I know and use with sftp-only users and chroots
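The sftp-only-plus-chroot setup qman__ mentions is normally a Match block in /etc/ssh/sshd_config; a sketch, with the group name as a placeholder:

```
# /etc/ssh/sshd_config -- members of "sftponly" get chrooted sftp, no shell
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory %h            # the chroot dir and every parent must be root-owned
    ForceCommand internal-sftp    # overrides the login shell entirely
    AllowTcpForwarding no
    X11Forwarding no
```

Because ForceCommand internal-sftp runs in-process and never invokes the login shell, it also sidesteps the nologin-vs-/bin/false question for these users; the usual stumbling block is the ChrootDirectory ownership rule noted above.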
<JasonMSP> im on my third generation of working this solution. the first was straight VSFTP, then I was recommended SFTP, and when I couldn't get either of those working I went back to VSFTPD.  I have a working in-the-clear setup with VSFTPD now.  I never was able to get a working SFTP solution (except for myself via key).  I've got passwords turned off in SSHD config.
<qman__> vsftpd cannot be used as an SFTP subsystem, because it's not SFTP, it's FTP/S
<JasonMSP> i understand.  they are separate (daemons?)
<qman__> well, if you have password authentication disabled, SFTP users will need keys
<qman__> a very secure configuration, but you may have trouble instructing users to use it
<JasonMSP> EXACTLY!
<JasonMSP> which is why i thought I would be able to use VSFTPD.
<qman__> you can, but it will be FTP/S, not SFTP, and will require a different client mode
<qman__> and I'm not sure on the specifics of how that one works
<JasonMSP> client mode meaning protocol?
<qman__> yes
<qman__> also, FTP/S is not really standardized; some software implements it differently
<JasonMSP> ok.  I just used WinSCP to connect in the clear with a password which means it is using the VSFTPD daemon
<JasonMSP> that means that it is possible then to secure this connection with TLS if that link I sent earlier configures it correctly.
<qman__> that page states that winSCP doesn't do FTP/S
<qman__> I can neither confirm nor deny
<WinstonSmith> try filezilla
<WinstonSmith> does FTP/S i think
<JasonMSP> you can select the protocol (FTP) and then it gives you a dropdown for encryption
<JasonMSP> then you can choose none, SSL/TLS implicit, SSL explicit, or TLS explicit
<JasonMSP> I used TLS explicit and it connected with the right certificate but they did not end up in the correct directory.  so im thinking its a config setting
<Zikey> I have a file named "core" at the root (/), it's probably a coredump, is there any tool to know more about it ?
<Patrickdk> only if you care to debug it
<Zikey> I just would like to know which binary crashed
<Zikey> it's probably written in the core file
<Patrickdk> gdb I believe
<Patrickdk> been a few years since I worked on one
<Zikey> strings <core worked :)
<Zikey> damn asterisk...
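Besides `strings`, two other quick ways to see which binary produced a core dump; the asterisk path below is just an example of the crashed program.

```shell
# "file" names the program the dump came from in its output
file core

# gdb can load the dump against the binary and print a backtrace
gdb -batch -ex bt /usr/sbin/asterisk core
```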
<Zikey> last question, how do you force a fsck on next reboot ?
<soren> Zikey: touch /forcefsck
<soren> iirc.
<soren> Yup.
<uvirtbot> New bug: #627676 in mysql-dfsg-5.1 (main) "package mysql-client-core-5.1 (not installed) failed to install/upgrade: trying to overwrite '/usr/bin/mysql', which is also in package mysql-client 0:5.1.50-2" [Undecided,New] https://launchpad.net/bugs/627676
<Zikey> thx soren !
<Kaelten> anyone have an example multipath.conf file they could point me at?
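A minimal /etc/multipath.conf sketch of the kind Kaelten asks for (device-mapper multipath syntax; the blacklisted device is an example of a local disk you would exclude):

```
# /etc/multipath.conf -- minimal starting point
defaults {
    user_friendly_names yes    # name maps mpath0, mpath1, ... instead of WWIDs
}
blacklist {
    devnode "^sda"             # keep the local boot disk out of multipath
}
```

Vendor-specific device sections are usually only needed when the built-in defaults for the storage array are wrong.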
<tucemiux> anyone knows how to configure a printer server manually?
<DigitalDeviant> Would anyone here be willing to help me set up a bind9 dns server on my ubuntu server? I have tried to follow multiple howtos and even webmin but i just can't seem to get it working correctly. If someone could do a remote session with me and show/explain to me how it works I would be very grateful
<tomsdale> DigitalDeviant: sry - no experience here either but did you try howtoforge?
<ScottK> DigitalDeviant: Look in the Ubuntu server guide on help.ubuntu.com.
<ScottK> It has specifics on how to do it.
<DigitalDeviant> yah, i have been looking at docs all day. They are confusing as hell to me. I have no problem setting dns up in windows, lol but since i am new to linux I keep hitting walls
<ScottK> Did you try the Ubuntu server guide?
<DigitalDeviant> yah, Ive spent a good 6 hours trying to get it to work
<tomsdale> Does 10.04, apart from apparmor, have a second firewall? I changed my ssh port to 31337 but the connection times out - on 22 it works. I already uninstalled apparmor
<ScottK> tomsdale: There is one installed, but not active by default.  I'd recommend putting apparmor back as it won't interfere with that.
<tomsdale> what could it be then that won't allow me to connect via ssh? netstat -tap shows it listening on 31337
<DigitalDeviant> you behind a hardware firewall tomsdale ?
<tomsdale> nop - all in the internal network
<DigitalDeviant> try using a port scanner to the ip with the SSH server. see if the port is open from site to site first
<tomsdale> already set loglevel to debug but I don't see any request being made in the auth.log. And apparmor is uninstalled as I mentioned.
<tomsdale> good idea
<DigitalDeviant> are you using fail2ban as well ?
<tomsdale> yes, nmap says filtered, ELITE
<DigitalDeviant> so you are using fail2ban
<tomsdale> yes, but according to the log I'm not jailed
<DigitalDeviant> have you added your ip to the allow list in fail2ban
<tomsdale> eh - banned.
<tomsdale> I don't think it's  my IP the problem - on port 22 all works fine. As soon as I change it to 31337 it stops working
<DigitalDeviant> once you change the port, you have restarted the service right? i know stupid question, but sometimes people forget
<tomsdale> I even did a good old fix all win problems reboot :-)
<DigitalDeviant> lol
<DigitalDeviant> ok
<tomsdale> I have the feeling there's somehow another firewall active.
<DigitalDeviant> doesnt pure-ftp use port 31337. are you running that?
<tomsdale> ah - there is bastille-firewall which is active.
<DigitalDeviant> when i run into issues like that, i install webmin so i can look at everything all at once, it helps, then i purge it
<tomsdale> that's it. it appears ubuntu server 10.04 uses by default apparmor and bastille-firewall. Two more secure than one ?
<DigitalDeviant> i use ddwrt firewall and fail2ban. so im not to familair with software firewalls in ubuntu
<sbeattie> um, what? bastille is not included by default.
<tomsdale> hm - did a clean install from a howtoforge tut - checking whether it slipped in there somewhere.
<DigitalDeviant> do i need multiple IPs to setup a dns server / lamp server?
<DigitalDeviant> i have two public ip's one pointing to dns port and the other to the lamp server. is the correct setup so far?
<DigitalDeviant> or do i just use the one ip address for the dns and the lamp server
<DigitalDeviant> im only hosting like 4 domain names
<DigitalDeviant> neither will have email
<fluvvell> I've got 10 domain names, dns and lamp all on the same ip. It really doesn't matter if you handle ports and everything correctly
<DigitalDeviant> oh
<DigitalDeviant> good then, that gives me an extra ip :)
<fluvvell> but I use fail2ban and shorewall (firewall)
<DigitalDeviant> i use fail2ban and a hardware firewall with iptables
<kirkland> smoser: still around?
<kirkland> smoser: ideas?  http://pastebin.com/Vx24Xn7H
<fluvvell> DigitalDeviant: are you wanting bind for external dns or internal network ?
<kirkland> smoser: metadata failage
<DigitalDeviant> external. My server is behind verizon fios and I need to setup fwd and reverse zones for them to setup the ptr records or something
<SpamapS> DigitalDeviant: you want to have your IP resolve to your server's hostname?
<SpamapS> DigitalDeviant: you need at least two servers to host your own DNS. Do you have two?
<DigitalDeviant> yah, the other one is sitting here right now getting server loaded on it as we speak, but right now on the fios i only have one
<DigitalDeviant> brb guys
<kirkland> smoser: nevermind, i think i have it!
<DigitalDeviant> right now, my domains are using the ns1.verizon.net and ns2.verizon.net per the tech support. So i need to setup the dns server to resolve the domains or something... ive never done dns on ubuntu so im lost like a 9 year old in a candy store
<DigitalDeviant> and the howto's are not helping me a bit
<DigitalDeviant> im more of a watch and learn type person, ya know
<DigitalDeviant> I installed webmin to help because a friend of mine said it would make it easier....pfft
<DigitalDeviant> ive been at this all day with no progress and im about to throw my server out the window
<tomsdale> DigitalDeviant: Don't know if it helps your cause but I'm atm installing ispconfig 3 in a 3 server setup. Haven't gotten as far as DNS yet but it seems you can create zones in a web interface.
<DigitalDeviant> yah, you can do that with webmin as well
<DigitalDeviant> its just too confusing for me
<tomsdale> you're using bind?
<DigitalDeviant> yah
<tomsdale> on howtoforge some people use mydns - have heard it's smaller and easier but the hosters tend to prefer bind.
<DigitalDeviant> yah, i was told to use bind
<tomsdale> I mean a software with Version Nr. 9 - It gotta be good. Even windows only got to 7 so far :-)
<tomsdale> sry - not really helping your problem :-(
<tomsdale> http://oreilly.com/catalog/9780596100575  ??
<DigitalDeviant> its all good, been going through this crap all day :(
<ath88> Hello, anyone who's nifty with ssh tunnels got the time and temper to help me? I need to setup a tunnel from a netbook to my server, so i can access that netbook no matter what network it is logged onto. Is that possible?
<DigitalDeviant> well the good news i guess is if im going to setup a second dns server, I might as well host mysql on it as well and get some load of the web server
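For reference, the smallest useful BIND 9 setup for one domain of the kind DigitalDeviant is after is two files; every name and address below is a placeholder to adapt:

```
// /etc/bind/named.conf.local -- declare the zone
zone "example.com" {
    type master;
    file "/etc/bind/db.example.com";
};
```

```
; /etc/bind/db.example.com -- forward zone data
$TTL 86400
@   IN  SOA ns1.example.com. admin.example.com. (
        2010090101 ; serial (bump on every change)
        3600       ; refresh
        900        ; retry
        604800     ; expire
        86400 )    ; negative-cache TTL
@   IN  NS  ns1.example.com.
ns1 IN  A   203.0.113.10
@   IN  A   203.0.113.10
www IN  A   203.0.113.10
```

After editing, `sudo named-checkconf` and `sudo named-checkzone example.com /etc/bind/db.example.com` catch most syntax mistakes before restarting bind9.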
#ubuntu-server 2010-09-01
<ath88> Or maybe guide me to a more fitting channel for such a question?
<tomsdale> ath88: security or #security ?
<ath88> Ill try, thanks. :)
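ath88's netbook question is the classic reverse-tunnel case; a sketch, with the user names, hostname, and port as placeholders:

```shell
# On the netbook (from any network): hold open a reverse tunnel to the server.
# -R 2222:localhost:22 makes the server's local port 2222 lead back to the
# netbook's sshd; -f -N backgrounds the session without running a command.
ssh -f -N -R 2222:localhost:22 user@server.example.com

# Later, on the server: hop back into the netbook through that tunnel.
ssh -p 2222 netbookuser@localhost
```

autossh (packaged in Ubuntu) can re-establish the tunnel automatically whenever the netbook changes networks.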
<fluvvell> DigitalDeviant: sorry, got called out. bind9 - its a fairly involved task, sometimes I wonder if I have a handle on it at all. We have dnsmasq for internal dns and bind for our internet records. I have 2 other bind servers on the internet that act as backup dns and are on other sites, I have only remote access to them and use command line and text editing to change records. Webmin looks good, but in the end I trust things work properly when I see the config files looking the way they should.
<uvirtbot> New bug: #627723 in mysql-dfsg-5.1 (main) "package mysql-server-5.1 5.1.41-3ubuntu12.3 failed to install/upgrade: subprocess new pre-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/627723
<thesheff17> has anyone seen "Failed to find device to remove that has udev name '/sys/devices/virtual/net/vnet0'" with virsh migrate --live?
<Hypnoz> I realize this is a weird question for this chan, but anyone know the name of the item that screws a 4 post rack to the floor?
<WinstonSmith> Hypnoz, /ironic on screwdriver ?
<Patrickdk> hypnoz, a wrench?
<Hypnoz> its called a bolt-down kit
<Patrickdk> basically what I use
<Patrickdk> http://www.altex.com/Concrete-Floor-Anchor-Kit-Secures-Relay-Racks-To-Floor-RIK01-P139683.aspx
<Hypnoz> was having trouble finding the product
<Patrickdk> concrete floor ancors :)
<Patrickdk> have a box of 50 from lowes
<Patrickdk> though mine are 5/8"
<trimeta> I've checked USN, I've checked "aptitude changelog linux-image-2.6.32-24-server", what are the changes to the kernel that warranted a version bump and may warrant a reboot?
<fluvvell> trimeta, have you got proposed updates turned on ?
<trimeta> I don't think so...this is a relatively standard Lucid install.
<fluvvell> trimeta, server install or desktop?
<trimeta> Server.
<trimeta> Looks like I have the lucid and lucid-updates repositories in sources.list
<fluvvell> it may be the 10.04.1 update that came through for the desktop not long ago.
<trimeta> Perhaps...what did that bring with it? Anything I'd want to sacrifice uptime for?
<pmatulis> trimeta: usn-966-1 and usn-974-1 are 2 that i know of
<trimeta> pmatulis: I don't think it's actually security-related, it seemed to have been pulled from the lucid update server and not lucid-security.
<trimeta> I just mentioned that since it's the first place I check for kernel updates.
<pmatulis> trimeta: lucid-security dumps into lucid-updates after a very short while
<trimeta> Still, this just bumped tonight, and since I don't run Hardy and have updated between now and August 4, I'm pretty sure it's neither of those.
<pmatulis> trimeta: well, when did you last update your package index?
<trimeta> I just ran aptitude update && aptitude safe-upgrade earlier tonight....I don't remember when I did it last, but certainly within the past week.
<trimeta> (the one earlier tonight is what pulled in a new version of the kernel.)
<pmatulis> trimeta: maybe you're using a crummy mirror
<trimeta> I don't think it's that bad...besides, I vaguely remember having installed the usn-966-1 update previously.
<trimeta> To be honest, there's no real problem here; I could just reboot and get it done with. I'm just curious what changed.
<pmatulis> trimeta: what mirror are you using?
<trimeta> http://us.archive.ubuntu.com/ubuntu/
<pmatulis> trimeta: dunno.  check your apt logs for more info...
<trimeta> Where do I find those?
<pmatulis> trimeta: under /var/log/apt
<trimeta> I suppose I should have checked there first...
<trimeta> The upgrade appears to have been from 2.6.32-24.41 to 2.6.32-24.42 (both of these being versions of linux-image-2.6.32-24-server).
<trimeta> Hmm...maybe I'm just not reading the changelog correctly, and I've been stupid this whole time?
<trimeta> It mentions changes pertaining to this specific version, but they were checked in over a week ago...I know I've updated more recently than that.
<trimeta> Perhaps I just don't understand how the dates in the changelog work?
<pmatulis> trimeta: changelogs show committed times, not released times
<trimeta> Hmm...is there a way to see released times?
<pmatulis> trimeta: i'm not sure.  launchpad maybe?
<trimeta> I'll look around...and in the future, try to keep in mind that even if the update in the changelog seems to have happened ages ago (in Linux time), it might still be the update which just installed. Thanks for listening to me be an idiot.
<pmatulis> trimeta: do you have -proposed enabled?
<trimeta> No.
<holmser> does anyone here use ISP config?
<holmser> I was just gifted a new server, and I would like to administer both of them from ISP config, but I am having trouble figuring out how to do it.
<Andre_Gondim> Hi, I want to map home and temporary directories in samba, but I didn't find a logon file example, can anyone help me?
<JasonMSP1> im using VSFTPD and I have multiple log files..  standard, .1 .2 .3
<JasonMSP1> how can I force a new log to start?
<jmarsden> JasonMSP1: Probably using the logrotate command.  man logrotate.  Not sure about vsftpd specifically, but many subsystems use this.
<JasonMSP1> thx
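The numbered vsftpd logs (.1, .2, .3) come from logrotate; a sketch of a /etc/logrotate.d/vsftpd stanza (the packaged file may differ), plus the command that forces a fresh log immediately:

```
# /etc/logrotate.d/vsftpd (illustrative)
/var/log/vsftpd.log {
    weekly
    rotate 4          # keep four old logs: .1 through .4
    missingok
    notifempty
    compress
}
```

`sudo logrotate --force /etc/logrotate.d/vsftpd` rotates right away, starting a new empty log without waiting for the weekly schedule.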
<seezed> anyone here have any experience with group-office
<twb> seezed: the wine thing?
<seezed> nope its a web based project management/groupware app
<joschi> twb: I think you mean crossover office ;)
<seezed> im looking for a groupware app to run on a fresh 10.04 install
<seezed> but im having dramas getting things running, so I was wondering if there was a pre-packaged app anyone had tried
<twb> Oh, right, groupoffice is the OWA clone
<twb> I thought it was packaged
<seezed> OWA?
<seezed> i cant find it in apt, and there is no deb available that I've been able to find
<twb> seezed: outlook web access
<biodegabriel> saucy
<twb> Basically a rewrite of outlook to run on iexplore/activex
<seezed> its a bit more then that
<twb> Fair enough
<seezed> im looking for a web app to host docs, manage project time lines etc
<twb> IME when people say "groupware" they really just mean calendaring stuff
<twb> seezed: maybe something like trac?  That's project-oriented
<biodegabriel> seezed basecamphq.com ?
<seezed> yeah i've looked at trac, but it doesnt seem to do the doc hosting stuff
<seezed> biodegabriel: i'm looking for something more along the lines of a self hosted app
<biodegabriel> seezed: http://github.com/dannyweb/BaseApp2
<seezed> just to make things hard
<seezed> biodegabriel: are you suggesting i make my own? because that's not really practicable
<OWDH> ikonia bruder olaaaaa
<OWDH> bazhang bro ahoy
<OWDH> wtfm where is da fat and ugly holy shit linuks penguin, the called TUKS? TUKS must Die! I am da TUKSKILLA!
<soren> OWDH: Please take this nonsense elsewhere.
<twb> How peculiar
<soren> Kids. What can I say?
 * soren reboots (into maverick)
<jpds> soren: Hi.
<huats> morning
<ttx> huats: yo!
<huats> hello ttx !
<soren> jpds: o/
<uvirtbot> New bug: #627922 in openldap (main) "package slapd 2.4.21-0ubuntu5.3 failed to install/upgrade: le sous-processus script post-installation installé a retourné une erreur de sortie d'état 1" [Undecided,New] https://launchpad.net/bugs/627922
<siretart> does the ubuntu server team look after bugs in virt-manager? or do you focus on "infrastructure" packages like 'libvirt'?
<twb> I imagine virt-manager has a specific Maintainer:
 * lifeless bets on dustin
<twb> Grmph
<twb> Maintainer: Ubuntu Core Developers <ubuntu-devel-discuss@lists.ubuntu.com>
<twb> IMO packages that do that should indicate guilty individuals via Uploaders
<theyranos> can anyone tell me what "monitor socket did not show up" actually means? Google has not yielded much of anything helpful.
<twb> theyranos: what issues that warning?
<theyranos> sudo virsh create /path/to/xml
<twb> Sorry, I don't do libvirt
<twb> Wait patiently for someone else and/or try #libvirt (#virt?)
<theyranos> will do. thanks
<theyranos> dang. #libvirt got created when i tried to join it just now. #virt wants a password... but i'll go looking for a libvirt community
<soren> theyranos: #virt is on oftc.
<theyranos> soren: thanks
<soren> theyranos: "monitor socket did not show up" usually means the domain took too long to boot (or didn't boot at all)
<soren> theyranos: Look in /var/log/libvirt/qemu/<name of domain>.log for more info.
<theyranos> you have no idea how grateful I am just for telling me where the heck the log files for virsh are.
<jussi> #ubuntu-virt was redirected to here, so the correct people _should_ be here now...
<theyranos> oh lord... there was an extra space in my xml file. I've been trying to fix this for three hours and that was the entire problem. soren, thank you so very much for the log path.
<uvirtbot> New bug: #627961 in mysql-dfsg-5.1 (main) "package mysql-server 5.1.41-3ubuntu12.6 failed to install/upgrade: dependency problems - leaving unconfigured" [Undecided,New] https://launchpad.net/bugs/627961
<ttx> jiboumans, Daviey, kirkland, hggdh: doing UEC ISO testing, I can't get it to work :/ See https://bugs.launchpad.net/ubuntu/+source/eucalyptus/+bug/627963
<uvirtbot> Launchpad bug 627963 in eucalyptus "CC doesn't start correctly, CLC struggles with certificates" [Critical,New]
 * Daviey looks
<ttx> Daviey: i'd welcome some outside reproduction
<Daviey> ttx: Okay.. i'm planning to reproduce it shortly..
<Daviey> I'm expecting it to be a depends that broke it TBH
<Daviey> odd that i hadn't seen this before.
<Daviey> .. and you have seen it twice!
<Daviey> in fact, even Carlos hasn't seen it!
<ttx> I did two amd64 installs in a row
<Daviey> Which to me, indicates something underlying busted
<ttx> Daviey: I'm lacking some reference point, but I suspect cloud-cluster.log should not be empty, that's why I say "CC is not started correctly"
<ttx> and then those received/expected certificates dumps every 20s look suspicious
<ttx> I found them while investigating bug 585108
<uvirtbot> Launchpad bug 585108 in eucalyptus "euca_conf --register-nodes returns 0 but the node is not registered" [High,Confirmed] https://launchpad.net/bugs/585108
<ttx> looks like that one would be a consequence rather than a cause.
 * ttx lunches
<Daviey> agreed
<uvirtbot> New bug: #627973 in openvpn (main) "Assertion failed at socket.c:629" [Undecided,New] https://launchpad.net/bugs/627973
<ttx> Daviey: i386 test in progress
<ttx> Daviey: re: dependency, maybe, the certificate mismatch seems to affect only the C components (CC and NC)
<Daviey> ttx, I'm halfway through amd64
<ttx> Daviey: I don't run into it with i386. Might be arch-related, or pure luck
<ttx> ...
<ttx> the i386 setup ends up being so slow it even gets the avahi broadcasts wrong
<ttx> eucalyptus-cloud maintains high load
<Daviey> ttx, Hmm - i just got a registration issue on the single machine :S
<ttx> Daviey: describe
<Daviey> ttx, rebooting, will pastebin reg'.log
<Daviey> BTW, shutdown speed is laaaaaging
<ttx> the reg log shows duplicate tries, but with SUCCESS
<Daviey> as in frozen
<ttx> (in my tests)
<Daviey> ttx, Have you been able to reboot?
<ttx> Daviey: why would I ?
<Daviey> mine has been jammed for 3 mins on "Deconfiguring network interfaces"
<ttx> right, I got that
 * Daviey hard powers off.
 * ttx is focused on the critical issues, but has a backlog of small issues he also needs to file
<ttx> Daviey: when was the last time you had a working setup ?
<Daviey> ttx, Friday was the last time i did a fresh install.
 * Daviey winces at ipv6 error
<ttx> Daviey: filed bug 628025
<uvirtbot> Launchpad bug 628025 in eucalyptus "Duplicate registration tries show up in registration.log" [Low,New] https://launchpad.net/bugs/628025
<ttx> is it the same one ?
<Daviey> http://eucalyptus.daviey.com/logs/
<Daviey> ttx, same error
<ttx> right, it's not the blocking factor
<ttx> something makes duplicate announcements
<ttx> hmm
<ttx> that might explain the expected/received cert issue though
<ttx> Daviey: I'm having this bug also on i386 where everything registered properly
<ttx> so I discounted it as yab (yet another bug)
<Daviey> ttx, http://community.eucalyptus.com/forum/error-errorc94-oxs-error-x509c287-opensslx509ge
 * Daviey screams
<ttx> Daviey: this one was always present
<ttx> Daviey: https://bugs.launchpad.net/ubuntu/+source/axis2c/+bug/456099
<uvirtbot> Launchpad bug 456099 in rampart "axis2c.log is full of spurious (and full of typos) errors" [Low,Triaged]
<ttx> so, that one is clearly unrelated (yab)
<zul> Daviey: serenity now!
<Daviey> zul, o rly :)
<ttx> Daviey: I get everything registered on i386
<Daviey> ttx, I'm going to spin a new install, amd64....
<ttx> Daviey: (with duplicate messages in registration.log, one success 1 fail for each)
<ttx> trying to run an instance for kicks
<zul> Daviey: serenity now....insanity later
<Daviey> ttx, I'm wondering if a depends has changed recently, and only amd64 has been accepted into the archive; or it FTBFS
<Daviey> guessing WHICH depends is fun.
<ttx> Daviey: comparing ISO contents since Friday will help
<ttx> if only we kept that ISO on cdimage
 * Daviey raises again that snapshot archive would really help in this instance.
<Daviey> ttx, I can look through ubuntu-changes
<ttx> Was your Friday test an amd64 one ?
<Daviey> yes.. i generally hammer amd64 more.
<ttx> ok. so it would truly be a recent regression, or just a flaky thing we were all lucky with
 * ttx looks up ubuntu-changes as well
<Daviey> nah, we've ruled out luck i feel
<Daviey> Carlos or I didn't discover this sooner
<Daviey> brb
<ttx> nothing jumps out in maverick-changes
<ttx> hggdh: ping us when around, we could use another pair of eyes reproducing those
<PrestonConnors> Hello, I am trying to get unattended-upgrades to NOT install and only notify me of packages that are available for upgrade via E-Mail. I noticed the option APT::Periodic::Unattended-Upgrade "1";  in /etc/apt/apt.conf.d/10periodic via Google searching but this is not an option in my installations of Ubuntu 10.04; what does APT::Periodic::Unattended-Upgrade "1"; do and if I set it to 0 will it NOT install updates and still notify me?
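For what it's worth: APT::Periodic::Unattended-Upgrade "1" tells the daily apt cron job to run unattended-upgrade, and "0" disables that automatic install step; it does not by itself add mail notification (a separate tool such as apticron usually handles that). A sketch of the relevant stanza, written to a local example file here rather than the real /etc/apt/apt.conf.d/10periodic:

```shell
# A 10periodic that refreshes package lists daily but never auto-installs.
cat > 10periodic.example <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::Unattended-Upgrade "0";
EOF
grep 'Unattended-Upgrade' 10periodic.example
```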
<hggdh> ttx: ping
<hggdh> ttx: I have an idea re bug 627963
<uvirtbot> Launchpad bug 627963 in eucalyptus "CC doesn't start correctly, CLC struggles with certificates" [Critical,New] https://launchpad.net/bugs/627963
<ttx> hggdh: cool
<hggdh> ttx: for a topo1, under /var/lib/eucalyptus/keys, you will find *both* the certs *and* a directory named after your CC
<hggdh> ttx: try replacing either one of the set of certificates. I did it the hard way, and it worked
<hggdh> ttx: somebody seemingly changed the places for (some of) the certificates
 * ttx will try again
<hggdh> ttx: hard way == replace the /var/lib/eucalyptus/<Cluster Name> certs on all machines
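hggdh's "hard way" workaround, sketched against a scratch tree so it can run anywhere; the cluster name `cluster1` and the single-file fix are assumptions, and on a real CLC the tree lives under /var/lib/eucalyptus/keys with a eucalyptus restart needed afterwards:

```shell
# Scratch tree mirroring /var/lib/eucalyptus/keys (hypothetical cluster name).
mkdir -p keys/cluster1
echo top-level-cert > keys/cluster-cert.pem           # cert the CLC generated
echo stale-cert     > keys/cluster1/cluster-cert.pem  # mismatched per-cluster copy
# The workaround: make the per-cluster copy match the top-level cert ...
cp keys/cluster-cert.pem keys/cluster1/cluster-cert.pem
cmp -s keys/cluster-cert.pem keys/cluster1/cluster-cert.pem && echo certs-in-sync
# ... then, on the real system: sudo service eucalyptus restart (on all machines)
```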
<ttx> hggdh: is this a random error ? Or you always run into it ?
<ttx> just wondering why I'm the first one to see it, on D-1 for beta release
<ttx> reinstalling amd64 UEC rig
<hggdh> ttx: this is new on ubuntu2, I think, did not run on it before
<ttx> huh
 * hggdh is also starting from scratch
<hggdh> I will try it, but without dancing around like the first time
<marek_> hi, i have a problem with proxy, i need to do the same thing as mentioned here http://serverfault.com/questions/140125/port-to-subdomain - so i added apache modules and addedd vhosts file, but it is still not working, i even tried to edit proxy.conf to enable "Allow from all" for proxies, but still no luck, how can i debug this problem?
<ttx> hggdh: I see nothing in -ubuntu2 diff that would be relevant
<hggdh> ttx: I am not saying it was a change with 0ubuntu2, I said this was new on 0ubuntu2... I sure as hell did not have it before, and just got it now
<hggdh> ttx: but I am still about 20 minutes from finishing the CLC/CC/SC/Walrus reinstall
<ttx> hggdh: if you can document a workaround on the bug, that would be great (and lower it to High from Critical)
<hggdh> ttx: I will confirm the workaround and lower the importance -- if it actually works, of course --
 * hggdh wonders how much was luck
<ttx> I might get there before you do
<hggdh> heh. The race is on ;-)
<hggdh> ttx: one thing I *should* have done was to openssl x509 -text the certs (I am not sure the certs are in x509)
<hggdh> so replace x509 by whatever format needed
<ttx> hggdh: copied keys to <clustername>, restarted eucalyptus
<ttx> and now I have a cc.log
<hggdh> ttx: I can confirm that /var/lib/eucalyptus/keys/cluster-cert.pem and /var/lib/eucalyptus/keys/<clustername>/cluster-cert.pem are different
<ttx> hggdh: amd64/all-in-one unblocked, installing NC now
<ttx> let's see if I hit bug 628055 now
<uvirtbot> Launchpad bug 628055 in eucalyptus "Instances don't go to "running" state: Security Labeling error running aa_change_profile()" [Critical,New] https://launchpad.net/bugs/628055
<ttx> hggdh: does that one ring a bell ? ^
<\sh> maswan, ping re your problem with the new hp blade series G7 and the new broadcom chipsets... I'll visit the testlab of HP Germany tomorrow... I want to test jaunty, lucid and maverick... anything else I should pay attention to when testing the setups?
<hggdh> ttx: no, it does not. I never saw this error...
<ttx> hggdh: was on my i386 test, which for some reason avoided that certificate thing
<hggdh> ttx: weird is that I installed, and I *do* have a cc.log
<hggdh> ttx: but the registration.log shows the error, OK
<hggdh> brb
<ttx> hggdh: no
<ttx> hggdh: the registration.log shows a different error
<ttx> http://launchpad.net/bugs/628025
<uvirtbot> Launchpad bug 628025 in eucalyptus "Duplicate registration tries show up in registration.log" [Low,Confirmed]
<ttx> that one is unrelated imo
<maswan> \sh: Hm. Not really, that I can think of. Hm. It is sometime non-trivial to get bios and grub to agree which boot device is which if you have a storage blade, but you might not have that configuration.
<ttx> hggdh: so if you have a cc.log, it's probably a lucky strike
<\sh> maswan, no simple 465G7
<hggdh> ttx: so... it might be with us for some time now
<hggdh> ttx: I still do not understand why the two different sets of cluster-cert.pem. Ah well, I will go ahead and install the NCs, and see what happens
<uvirtbot> New bug: #628109 in krb5 (main) "package libgssapi-krb5-2 1.8.1+dfsg-2ubuntu0.2 failed to install/upgrade: dpkg-deb --control subprocess returned error exit status 2" [Undecided,New] https://launchpad.net/bugs/628109
<maswan> \sh: I've found that on just normal boots, adding crc32c to /etc/modules brings up the network interfaces fast, as a workaround.
<maswan> \sh: but that doesn't help the installer
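maswan's workaround is a one-line append; sketched against a scratch file here since the real target is the root-owned /etc/modules:

```shell
: > modules.example                 # stand-in for /etc/modules
echo crc32c >> modules.example      # load crc32c at boot so bnx2x comes up promptly
grep -x crc32c modules.example      # prints: crc32c
```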
<ttx> hggdh: I could run an instance on amd64, once I applied your workaround
<\sh> maswan, hmm..what das crc32c do? ;)
<\sh> s/das/does/
<ttx> so I didn't hit the instance running bug
<maswan> \sh: It's a dependency of the bnx2x module
<ttx> lowering prio since it may be a one-off or a i386-specific bug
<maswan> \sh: but somehow indirect, for some reason
<\sh> maswan, hmm..kernel module bug ?
<maswan> \sh: Well, it does get loaded eventually (at first it didn't, that's a separate bug that got fixed)
<maswan> or if it wasn't included in the install initrd or something. I forget, this was karmic alpha time, so it's a bit hazy now. :)
<ttx> hggdh: so let me get this straight. You started from scratch and ended up with an install that had a cc.log after registration, and no certificate errors in cloud-output.log ?
<ttx> (on amd64 ?)
<\sh> maswan, well, I will see tomorrow :)
<\sh> maswan, thx for the infos
<maswan> \sh: It's quite easy to see if it hangs for several minutes when detecting network hardware, at least. :)
<hggdh> ttx: correct, on amd64. Previous install (and first on 0ubuntu2) failed with the cert errors
<ttx> ok, so it's probably not arch-related
<\sh> maswan, hehe...yeah :)
<ttx> I suspect some kind of race
<hggdh> yes. The certs for the cluster seem to be created dynamically
<ttx> like if cluster shows up before the CLC created the certs
<hggdh> ttx: er
<ttx> i386 being utterly slower avoids that issue more often
<hggdh> ttx: I see this http://pastebin.ubuntu.com/486780/ at about the time the cluster certs are being/have been created
<ttx> not sure I'm any wiser now
 * ttx will try his 5th (and hopefully last) UEC install of the day, i386 to confirm/deny bug 628055
<uvirtbot> Launchpad bug 628055 in eucalyptus "Instances don't go to "running" state: Security Labeling error running aa_change_profile()" [High,New] https://launchpad.net/bugs/628055
<ttx> everyone: don't forget to do ISO testing today, see http://iso.qa.ubuntu.com/qatracker/build/ubuntuserver/all
<hggdh> ttx: should we open a bug for this http://pastebin.ubuntu.com/486808/
<hggdh> ttx: I do not see it really as a problem, seems to be just lack of support for IPv6
<ttx> hggdh: yes, it's slightly different from bug 628025
<uvirtbot> Launchpad bug 628025 in eucalyptus "Duplicate registration tries show up in registration.log" [Low,Confirmed] https://launchpad.net/bugs/628025
<ttx> hggdh: please do
<ttx> (Low, confirmed, since I hit it too)
<hggdh> ttx: ack
<hggdh> Daviey: 09:52:23 ERROR [SystemUtil:Thread-33] com.eucalyptus.util.ExecutionException: ///usr/lib/eucalyptus/euca_rootwrap vgscan  error: ERROR: Disallowed command vgscan
<soren> https://launchpad.net/ubuntu/maverick/+queue?queue_state=0&queue_text=swift  \o/
<ttx> smoser: are you going to run the ec2 instances autotest ?
<smoser> yes.
<smoser> i'm running.
<smoser> they're mostly finished, just waiting on availability in ap-southeast
<smoser> for amd64
<ttx> smoser: also if you have a working UEC[tm], could you cover the cloud images ?
<smoser> i have a working-ish uec.
<smoser> (one with local hacks, ie to fix the console output bug)
<ttx> that will do :)
<smoser> but i can try to run them here.
<hggdh> oh. yes, the console-output bug... let me see how the current euca is faring
<ttx> mathiaz, SpamapS, hallyn, kirkland, zul: I trust you're going to cover the remaining cases
<zul> ttx: yep
<ttx> From where I stand, I think we are good for server ISO and cloud images. For UEC we need to investigate the issues to come up with good release notes
<ttx> that's what hggdh, Daviey and me are trying to cover
<kickingintender> i wanna know is it easy to install xserver for ubuntu server
<hggdh> ttx: heh. I restarted Eucalyptus, and now I *do* get the certificate thingy
<ttx> ew
<ttx> you mean it's not even a one-time fix :)
<hggdh> so... I am betting there is some sort of fall-back/timing where euca relies on the ./keys/*.pem until ./keys/<clustername> is created
<hggdh> no, I had not fixed it yet
<hggdh> there was *no* problem... until I restarted
 * ttx wonders if upstream tests the topology 1
<hggdh> :-)
<ttx> hggdh: maybe they don't, and maybe we shouldn't
<hggdh> I will fix it, and then restart the beast again
<ttx> hggdh: did you have the chance to run topology2 ?
<hggdh> ttx: no, this is a good fit for small deployments
<ttx> hggdh: it's important for demo/eval purposes, yes
<ttx> hggdh: but if they don't support it, we should not either
<hggdh> ttx: question for them on next go-around
<uvirtbot> New bug: #628145 in eucalyptus (main) "Eucalyptus does not understand IPv6" [Low,Confirmed] https://launchpad.net/bugs/628145
<hggdh> ttx: but yes, I agree now
<kickingintender> how to install x11 for ubuntu server from command line
<kickingintender> i mean gui
<ttx> kickingintender: sudo apt-get install ubuntu-desktop ?
<hggdh> kickingintender: it is a standard X, so you just apt-get install <whatever X you want>
<hggdh> like apt-get install ubuntu-desktop
<hggdh> or apt-get install xubuntu-desktop
<hallyn> ttx: 'upgrade test' i assume means install lucid, and then upgrade to maverick?
<kickingintender> so really, is X equal to ubuntu-desktop?
<hggdh> hallyn: yes
<hggdh> kickingintender: no X != ubuntu-desktop
<hallyn> still trying to find the 'non-uec beta test' on the tracker...
<ttx> hallyn: http://testcases.qa.ubuntu.com/Testing/Cases/ServerUpgrade
<hggdh> ubuntu-desktop carries X
<ttx> hallyn: referenced from http://iso.qa.ubuntu.com/qatracker/result/4484/192
<hallyn> ttx: yes, it doesn't say "start with lucid" though...
<hallyn> wanted to be sure :)
<ttx> hallyn: that should be it :)
<ttx> Daviey: when you come back: please confirm/reproduce the issues we already found... forward upstream... and maybe propose a releasenote text on the bugs
<Daviey> ttx, ack
<Daviey> ttx, oddly, on install #3 - reboot worked :S
<ttx> Daviey: ...
<Daviey> (on both boxes)
<Daviey> ttx,  eucalyptus-cloud is still hammering
<uvirtbot> New bug: #628174 in nis (universe) "ypbind man page does not document -p switch" [Undecided,New] https://launchpad.net/bugs/628174
<sla> can someone help me with dbconfig-common and aptitude? trying to install phpmyadmin/mantis/mediawiki etc.. all failing to create a database on installation. although i'm sure the mysql root password i provide works
<sla> been trying it for 3 hours now, driving me crazy :S
<hallyn> ok whoever decided that it might be ok for an update to reset the 'don't suspend on lid close' setting should die a slow painful death
<sla> all i get from /var/log/dbconfig-common/dbc.log is: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES).
<sla> unable to connect to mysql server.
 * hggdh tries to repeat the bypass for the changed certificates, and instead opens the door to the Labyrinth :-(
<Daviey> hggdh, https://bugs.edge.launchpad.net/eucalyptus/+bug/627963/comments/11
<uvirtbot> Launchpad bug 627963 in eucalyptus "[maverick] Inconsistent certificates prevent CC to start correctly (no cc.log)" [High,Confirmed]
<Daviey> :S
<kpettit> Can anybody recommend a desktop search that doesn't suck up to many resources.
<kpettit> Been trying tracker, but it sucks.  Even when it's not indexing the tracker-store sucks up 1/2 my memory.
<kpettit> I've got 500k - 750k files to go though and search on.
<\sh> ./dev/brain and a good directory structure ;)
<hggdh> Daviey: yes... and I installed topo1, everything fine. restarted eucalyptus, bang!
<Daviey> hggdh, *sigh*... things were looking so good.
<hggdh> Daviey: yeah :-(
<kpettit>  \sh, be easier if I had that choice.  I have a mess of files I'm cleaning up.
<hggdh> Daviey: did you see my ping about vgscan?
<Daviey> hggdh, no ;/
<hggdh> wrappers.conf lost the vgscan entry
<padhu> ! $sudo apt-get install lamp-server^ ; in this command, why '^' is needed? what it say?
<ubottu> Error: I am only a bot, please don't think I'm intelligent :)
<kpettit> I'm trying to get to the point where it will be organized, but I'm dealing with what I have.  A search would make things easier rather than manually sifting through things.  Meta data helps that as well
<hggdh> Daviey: wrappers.conf lost the vgscan entry
<Daviey> hggdh, interesting... when did that happen!?
<hggdh> Daviey: I do not know, just saw the error on 0ubuntu2, did not have time to look at the bzr
<hggdh> Daviey: and, since on 2.0 we default to iSCSI...
<Daviey> hggdh, thanks for raising this.. i'll fix it in bzr
<hggdh> Daviey: my pleasure
<hggdh> oh, this does not really sound correct
<Daviey> huh?
<hggdh> there is no pleasure in finding yet another issue
<Daviey> :)
<Daviey> hggdh, Hmm.. are you sure vgscan was ever in wrappers.conf?
<hggdh> Daviey: at least up to 2.0, yes -- it was required on 1.6
<uvirtbot> New bug: #625882 in libdbi (main) "libdbi0: ABI breakage without package name change" [High,In progress] https://launchpad.net/bugs/625882
<Daviey> hggdh, hmm... vgscan isn't in current lucid
<hggdh> Daviey: weird
<hggdh> then it must have been added for 2.0 (I remember Neil stating the code was re-written/factored), and I got confused
<hggdh> dammit
<hggdh> Daviey: I indeed think I was wrong, and vgscan is new on 2.0
<hggdh> Daviey: want a new bug for it?
<uvirtbot> New bug: #628220 in clamav (main) "package clamav-base 0.96.1+dfsg-0ubuntu0.10.04.1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/628220
<the_file> hi im trying to boot 15 computers into a simple dos program thats actually under 1 megabyte in size, and I want this to be done thru network boot, so I want to set up a machine with linux on it, but don't know a thing about network booting.
<the_file> was wondering how to set up network boot and utilize ubuntu as the server to boot the machines into a dos program
<hggdh> ttx, Daviey: can you search your euca logs for "auth.login.AuthenticationException"?
<the_file> does ubuntu server come with x?.
<the_file> want to use kde or gnome
<the_file> with it
<ScottK> It's not in the default install, but you can add it easiily enough.
<hallyn> zul: feh!  i'm trying again cause i may have messed it up, but lucid->mav upgrade test seemed unhappy with mysql
<zul> unhappy how?
<hallyn> zul: near the end of the dist-upgrade, it hung (until i ctrl-c'd) on replacing mysql configuration files
<zul> hmmm...ok ill have to look
<the_file> is there a package that I can download for the ubuntu server that will automatically configure network booting?.
<zul> the_file: no there isn't, you might want to check google
<Daviey> hggdh, not seeing it :/
<hggdh> hell
<hallyn> zul: i shouldn't have to configure mysql before doing dist-upgrade right?
<zul> nope
<uvirtbot> New bug: #572635 in postfix (main) "package postfix 2.7.0-1 failed to install/upgrade:" [Undecided,New] https://launchpad.net/bugs/572635
<kirkland> smoser: howdy!
<smoser> hey
<kirkland> smoser: can you take a look at https://wiki.ubuntu.com/Euca2oolsTestCoverage
<kirkland> smoser: any of those gaps on the right you can help test easily against your UEC?
<smoser> well, not *right now*.
<smoser> give me maybe an hour
<kirkland> smoser: sure, that's fine
<hggdh> Daviey, ttx: the certificate issue happens on every reboot
<Daviey> hggdh, interesting
 * Daviey suspects code left somewhere it shouldn't have been during refactoring
<hggdh> Daviey: I also opened bug 628291 for vgscan
<uvirtbot> Launchpad bug 628291 in eucalyptus "wrappers.conf is missing vgscan, seemingly a new addition in 2.0" [Undecided,New] https://launchpad.net/bugs/628291
<hallyn> zul: well it's doing better so far - last time the first hint of trouble was at the libc upgrade, when it restarted mysql for the nss update, i think it had said it failed to restart mysql
<zul> hallyn: gah...ill an update for myself tomorrow
<Overand> Is there a "right" way to add a new software-RAID device to ubuntu server 10.04 - not a fresh install.
<Overand> i.e. I read that the installer uses udev rather than mdadm - and i know *nothing* about this, and wasn't able to really track it down via docs.  I've always used an mdadm.conf type setup
<kirkland> ttx: around?
<elv2> hi
<elv2> I have a problem with SVN and apache, and the workaround is to delete or rename libc6-i686. Is this safe as my servers shows uname -m i686?
<smoser> kirkland, i can make a pass at that stuff tomorrow.
<smoser> been bogged today with ec2 testing
<kirkland> smoser: okay
<smoser> have to run now.
<kirkland> smoser: yeah, i'm iso testing now too
<ikar> Hey, whats the command for "sftp" upload a directory?
<ikar> nevermind, i found out
<qman__> Overand, the installer uses udev for hardware, but mdadm for software RAID
<Takyoji> Anyone have specific webserver hardening suggestions, such as Single Packet Authentication, basic blacklisting scripts, security modules, chroot for Apache, etc? Anyone used grsecurity before?
<stlsaint> Takyoji: iptables, fail2ban and deny.hosts :D
<stlsaint> Takyoji: grsecurity is nice but is mainly used at a larger scale than home use (if you're thinking of making a home server)
<stlsaint> Takyoji: also, for remote management, go with openssh with key authentication
<Takyoji> It's intended for a public webserver hosted on Linode
<Takyoji> because I'm intending on making a public service, rather than using it for personal use.
<kees> Takyoji: I recommend using AppArmor to isolate each of your virtual hosts.
<kees> Takyoji: and to use 64bit
<Takyoji> Yes, I have a 64-bit installation
<hallyn> SpamapS: regarding bug 625882, can i set the one against rrdtool to same status as the libdbi one?
<uvirtbot> Launchpad bug 625882 in libdbi "libdbi0: ABI breakage without package name change" [High,In progress] https://launchpad.net/bugs/625882
<SpamapS> hallyn: I'll be moving them to In Progress as I work on them. I'd set them to Confirmed myself, but its probably best if others take a look at them and agree that they're Confirmed/Triaged...
<hallyn> SpamapS: i'm just sayin',
<hallyn> you set it to the one affected proj, but not the other,
<hallyn> so I wasn't sure if the other was invalid
<SpamapS> right because I haven't started on the fix in rrdtool yet
<hallyn> ok
<hallyn> thx
<SpamapS> it has to be fixed in all the rdepends on libdbi
<SpamapS> So probably "Triaged" if you agree that it does need to be fixed. ;)
<hallyn> SpamapS: maybe i'll just leave it alone then :)
<SpamapS> These are pretty low hanging, easy bugs so it would be cool if some aspiring community members picked them up and fixed them. :)
<hallyn> SpamapS: maybe we need some blogging to encourage community to pick up bugs
<hallyn> SpamapS: was just doing my triaging and wondering why that one was hanging
<SpamapS> hallyn: I think its worthwhile for you to set them to Triaged. The process of reviewing the bug report is important to make sure I'm not just cowboying it in. ;)
<hallyn> set both to triaged?
<SpamapS> no, the New one
<hallyn> ok
<SpamapS> Thats the only reason its showing in the untriaged list
<SpamapS> Whoever gets to triage tomorrow will probably see it again with a few more packages added. :)
<SpamapS> so maybe leave a good comment explaining our conversation. :)
<hallyn> \o/ :)
<SpamapS> jus thro ya slashes in tha ayre .. 'n wave 'em like ya jus don care
<hallyn> yup, it's EOD and getting a little silly :)
<hallyn> SpamapS: zul: you know what'd be useful?  a locale hook that auto-creates an english version of all logs
<hallyn> cause there are two bugs i can't make heads or tails of :)
<SpamapS> translate.google.com :)
<hallyn> google doesn't like me ever since i turned on noscript
<SpamapS> hallyn: still chugging away with the transmeta eh? ;)
<hallyn> SpamapS: lol, no, i couldn't run firefox at all on that
<SpamapS> hallyn: why turn off scripts? its pretty much the only sane way to make web apps.
<hallyn> bc i dont' trust your webapps
<SpamapS> wuss
<SpamapS> ;)
<hallyn> i turn them on when i actually want them
<hallyn> (noscript is plenty flexible to make that feasible - else i wouldn't do it)
<SpamapS> The backend stuff is way more scary than the frontend stuff these days.
<SpamapS> So many SQL injection attacks and just plain bad admin passwords.. I think XSS hacks get all the press, but breaches on the backend are the ones that will get your data.
#ubuntu-server 2010-09-02
<shauno> I read a forum recently, I don't recall where; someone pointed out a google query that'd bring up page after page of mysqladmin pages that were left completely open to the web
<shauno> that's when these convenient applications become terrifying
<SpamapS> shauno: yeah, backend web apps.. the scourge of the internet. ;)
<shauno> http://www.forum.psoft.net/showthread.php?t=25834   that was the example I found.  someone dropped a table off a customer's db, and the autopsy revealed they came in just via google
<uvirtbot> New bug: #628546 in mysql-dfsg-5.1 (main) "package mysql-server-5.1 5.1.41-3ubuntu12.3 failed to install/upgrade: Package is in a very bad inconsistent state - you should  reinstall it before attempting a removal." [Undecided,New] https://launchpad.net/bugs/628546
<shauno> not sure I'm understanding ufw correctly.  it seems if I enable it (the only rule I've added so far is allow ssh), I drop off IRC.  The connection appears active (if you can read this, I'm still connected ;), but ctcp ping doesn't seem to work
<Italian_Plumber1> say I have two identical files in different directories.  Can I use md5sum to verify they are identical, and do so in one line without writing the sums to a file?
<Takyoji> Anyone know the practicality of applying grsecurity to an Ubuntu 10.04 (64-bit) installation?
<erichammond> Italian_Plumber1: Why not use "diff -q" to see if the files are identical?  md5sum must read both files entirely, while diff may have the chance to stop at the point where they differ.
<lifeless> erichammond: Italian_Plumber1 'cmp' is your friend.
<twb> erichammond: why not use cmp, then?
 * lifeless high fives twb
<Italian_Plumber1> they're binary files, I only care whether they differ
<lifeless> yes
<lifeless> cmp
<twb> $ cmp <(head -1 /dev/urandom) <(head -1 /dev/urandom)
<twb>  /dev/fd/63 /dev/fd/62 differ: byte 1, line 1
<Italian_Plumber1> :) ok... cmp is cool..  now how do I compare the contents of two directories
<erichammond> Italian_Plumber1: diff :)
<Italian_Plumber1> hmmm.... all of these commands are producing no output. :)
<erichammond> twb, lifeless: In my tests diff -q and cmp -s are performing equally well.
<Italian_Plumber1> scratch that I'm comparing the same directory
<Italian_Plumber1> ok cool.  thanks guys
<Italian_Plumber1> and gals, if applicable.
<erichammond> Italian_Plumber1: Send the output to /dev/null and use the exit value of the command to make decisions.  True means the files are the same.
<Italian_Plumber1> I had copied a torrent from one place to another and I suspected there were some errors during the transfer.  Looks like there actually weren't any.
<Italian_Plumber1> Thanks again...
<lifeless> erichammond: diff -q yes, true.
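The cmp/diff exit-status idiom from this thread, as a runnable sketch (all file and directory names are made up):

```shell
# Two identical files and one that differs.
printf 'hello\n' > a.bin
printf 'hello\n' > b.bin
printf 'world\n' > c.bin

# cmp -s prints nothing; its exit status says whether the files match.
cmp -s a.bin b.bin && echo "a and b are identical"
cmp -s a.bin c.bin || echo "a and c differ"

# For whole directory trees, diff -rq reports only which files differ.
mkdir -p dir1 dir2
cp a.bin dir1/
cp c.bin dir2/a.bin
diff -rq dir1 dir2 || true   # prints: Files dir1/a.bin and dir2/a.bin differ
```

As erichammond notes, sending the output to /dev/null and branching on the exit value (zero means identical) is the scriptable form.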
<uvirtbot> New bug: #628577 in openssh (main) "connection fails: ssh_exchange_identification" [Undecided,New] https://launchpad.net/bugs/628577
<Delemas> I'm trying to repackage my code for ubuntu. In the rpm world %config(noreplace) /etc/someconfigfile is frequently used. Is there an equivalent in Ubuntu?
<shauno> ufw appears to be inhibiting outbound ipv6 connects as default?
<twb> erichammond: what data are you testing on?
<uvirtbot> New bug: #628584 in mysql-dfsg-5.1 (main) "package mysql-server-5.1 5.1.37-1ubuntu5.1 failed to install/upgrade: sub-process new script pre-removal gives exit code 1" [Undecided,New] https://launchpad.net/bugs/628584
<jdstrand> shauno: IPv6 is disabled by default and only packets on lo are allowed
<jdstrand> shauno: to enable ipv6, you need to adjust /etc/default/ufw then disable and enable the firewall. see 'man ufw' for details.
<twb> Interestingly, comparing a 1GB and a 2GB empty, sparse file is 7s for cmp and 0.005s for diff
<twb> Presumably diff first checks the length, whereas cmp supports streams
<twb> diff -q is equally fast for two 1GB sparse files, where one contains a byte at the 512M'th place.
<twb> Perhaps diff -q looks at the same thing as du --apparent-size when comparing sizes.
<shauno> jdstrand: thanks
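jdstrand's steps, sketched against a scratch copy so this runs unprivileged; on a real system the file is /etc/default/ufw, edited as root:

```shell
printf 'IPV6=no\n' > default-ufw.example        # stand-in for /etc/default/ufw
sed -i 's/^IPV6=no$/IPV6=yes/' default-ufw.example
grep '^IPV6=' default-ufw.example               # prints: IPV6=yes
# then cycle the firewall so the v6 rules actually load:
# sudo ufw disable && sudo ufw enable
```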
<Jordan_U> twb: I expect that it's just that reading a very sparse sparse file requires almost no IO, which is the main bottleneck.
<Jordan_U> twb: Nvm, looking back I see you were talking about relative speed (not just that diff was fast).
<twb> Probably someone suggested optimizing cmp in the same way and was told "you can't, it breaks <obscure posix compatibility requirement>"
<AlexMax> Hi there, is there any way for me to take a look at the changelog for packages that apt wants to update before I update them?  I tried installing apt-listchanges which is supposed to run when running 'apt' but I never see it appear.
<AlexMax> Does it not work with aptitude or something?
<twb> !apt-listchanges
<twb> Grr.
<twb> The ops should just seed ubottu with the dpkg bot's database
<twb> Oh, he left anyway
<Jordan_U> twb: !info <package> works
<twb> The answers he wanted were: use the C key in aptitude (installing libparse-debianchangelog-perl recommended), and apt-listchanges only runs AFTER apt (but before dpkg), and it only lists NEWS by default, not changelog -- most packages don't have NEWS files.
<twb> Jordan_U: not quite the same thing though
<twb> dpkg> well, apt-listchanges is a package that shows you the NEWS.Debian and/or the Debian changelogs for packages before upgrading them.  It is well worth it for stable users, to see what is happening to their machines as they apply security fixes.  It should be made _mandatory_ for users of testing and unstable.  Also ask me about <d-d-a>, <apt-listbugs>.
<twb> versus
<twb> dpkg> apt-listchanges: (package change history notification tool), section utils, is standard. Version: 2.85.3 (sid), Packaged size: 82 kB, Installed size: 280 kB
<twb> (If you don't already know, you can /msg dpkg if you first /join #debian-bots.)
<twb> So I have a prospective customer that needs some handholding for DNS for his cpanel-based VPS
<twb> I haven't used cpanel before, but I expect it's basically like webmin, and that I shouldn't go behind its back
<fluvvell> Any diff gurus who could tell me how to compare two lists (dpkg --get-selections output) from two machines to see which packages aren't in both lists?
<twb> Never mind, he ran away
<twb> fluvvell: just diff them
<twb> fluvvell: you might need to sort them first
<twb> You could also use comm(1) to emit the left-only, right-only or shared lines
<SpamapS> twb: cpanel is WAY more controlling than webmin
<fluvvell> twd, diff gives me a double list, I want a list of packages I can remove from the machine with more
<fluvvell> shhesh, sorry i'm dislexic twb!
<SpamapS> fluvvell: sort them, diff is great for sorted lists
<twb> SpamapS: "webmin done by professional crack monkeys rather than amateur crack monkeys" :P
<fluvvell> SpamapS, is there a setting that can pretty up the output without a whole lot of < > or hex numbers ?
<fluvvell> I've sorted them
<twb> comm -13 <(ssh foo dpkg --get-selections | sort) <(ssh bar dpkg --get-selections | sort)
<twb> fluvvell: you should use diff -u unless you have a really good reason not to
<twb> The default diff format is daft
<SpamapS> twb: it's just that cpanel dictates a lot of stuff, like there's only one mta, and one httpd ..
<SpamapS> fluvvell: I like -u
<fluvvell> Oh, comm !
<fluvvell> so diff -u, are all the - signs the packages that don't exist in both ?
<twb> SpamapS: so it's more like the bastard offspring of webmin and, say, zimbra
<SpamapS> fluvvell: right, - will be everything that is in box1, but not in box2. + will be things in box2, but not box1
<twb> fluvvell: -u lists deletions (-), insertions (+) and, by default, two lines of context either side
<fluvvell> yeah, comm -3 might do me
<SpamapS> rright, comm .. comm is good for this too, I forgot
<SpamapS> I always forget comm
<fluvvell> cheers guys, thats been mega helpful
<twb> fluvvell: #bash is good for generic scripting questions
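twb's comm one-liner, with local files standing in for the `ssh boxN dpkg --get-selections | sort` calls (file names and package lists are made up):

```shell
# Sorted selections lists from two machines.
printf 'bash\ncoreutils\nvim\n' > box1.list
printf 'bash\ncoreutils\n'      > box2.list

# comm needs sorted input. -23 suppresses columns 2 (only in file2) and
# 3 (in both), leaving packages installed on box1 but not box2.
comm -23 box1.list box2.list        # prints: vim
rm -f box1.list box2.list
```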
<intelliant> Hi!
<intelliant> I have a few ubuntu VMs installed on KVM and they otherwise seem to work okay but randomly give this error - http://picpaste.com/ubuntu_vm-eGalwjCq.png
<intelliant> How serious could this be and what could be the cause of the same?
<intelliant> At times when I boot these VMs, the filesystem seems to be read-only. This is not necessarily after an improper shutdown.
<intelliant> A clean reboot may also result in this.
<intelliant> The host OS is openSUSE 11.3.
<qman__> on real hardware, that could indicate a failing disk or disk controller
<qman__> but with a VM, lots of other factors come into play
<intelliant> qman__: yes i agree
<qman__> could just be a bug, could be failing hardware, could be a misconfiguration
<twb> Where's TTL in dig's output?
<intelliant> there are other openSUSE and CentOS VMs as well but this problem only shows up in the ubuntu ones
<intelliant> qman__: so i think it could not be a hardware issue
<qman__> well, it's clearly failing with a DMA write to the VM's primary disk
<qman__> does it happen periodically or only during boot?
<intelliant> randomly
<qman__> disabling DMA may eliminate the error, but may also impact performance
<intelliant> the read-only problem only happens during boot, and a reboot fixes it after pressing F for 'fix errors' prior to boot.
<qman__> I don't know enough about how DMA performs in VMs
<intelliant> the screen dump about DMA is random
<intelliant> at times it has gone days without any such error, at times within a few hours
<qman__> my best guess would be an issue between your virtual disk availability/performance and ubuntu's disk access performance tweaks
<qman__> in that ubuntu is trying to write to the disk without waiting for it to become available
<qman__> but that's only a guess
<intelliant> qman__: so is there some other channel you suggest I post to ?
<qman__> probably to do with upstart, which boots significantly differently from sysv-style rc
<qman__> or a kernel setting that ubuntu has and the other VMs don't
<qman__> it would take some more in depth investigation
<intelliant> okay upstart is a quick-boot thingie in ubuntu, right?
<intelliant> I am new to the ubuntu world
<qman__> upstart replaces sysv-init
<intelliant> right, thanks.
<qman__> the way it works is, scripts have certain event triggers to start up
<qman__> instead of loading in a pre-set order for given runlevels
<intelliant> where do you suggest I continue digging
<qman__> well, I'd first try disabling DMA as a kernel parameter, because it's easy to do and test
<intelliant> but this problem doesn't seem to happen at boot only
<intelliant> how do we do that?
<qman__> hold shift while booting to get to grub, edit the boot line, and add the right parameter
<qman__> I think it's "nodma" but I don't recall
<qman__> looks like ide=nodma
<qman__> this is a one-time option, just for testing
<intelliant> qman__: http://ubuntuforums.org/showthread.php?t=1014723
<qman__> other than that, I'd try to figure out what's writing to the disk and when, when that error comes up
<intelliant> libata.dma=0 is the option to use as per this http://ubuntuforums.org/showthread.php?t=1396670
<intelliant> that is very tough as it is very very random
<intelliant> and the only way to figure that out is by having a constant watch at the virt-viewer
<qman__> also, keep an eye on your disk files
<qman__> if they're on a network share, see if they're disconnecting, or if they're local, see if something else is trying to lock them
<qman__> check your disk performance, too
<qman__> the fact that it's specific to ubuntu VMs in an environment where others work fine means that the VM side is more likely
<qman__> but it's still possible
<intelliant> yes the home is nfs mounted
<intelliant> there are 4 test users but none of them have ever complained
<qman__> also, check the dmesg on your other machines
<qman__> they might be having the same problems but not telling you
<intelliant> actually there seems to be no performance loss or disruption
<intelliant> it is only the dump
<intelliant> no, they are well aware about the test env and report even the smallest niggles
<intelliant> they are trained for that ;)
<qman__> I mean the other OSes might not be reporting them to the console
<qman__> depends on the defaults
<intelliant> that could be
<qman__> but the error would always show up in dmesg
<qman__> regardless of console reporting
<intelliant> but coincidentally i have tried this on 2 different hardware platforms (Intel and AMD) with the same set of VMs and the results for the Ubuntu VMs remain consistent
<qman__> if there is no data loss and no performance hit, you may just leave it, but failure to write to disk could be something more serious, it just requires more investigation
<intelliant> hence I feel it is something to do with Ubuntu 10.04 installed as VMs only
<qman__> well, you need to verify that it is just the ubuntu VMs, that would eliminate the host software
<qman__> check the dmesg on the other VMs
<intelliant> there is at times loss or corruption of some files such as package manager config or ldap.conf
<intelliant> the available file, etc.
<intelliant> but i have seen that is a common issue in ubuntu and deb package mgmt
<intelliant> but ldap.conf being lost is something serious
<qman__> I wouldn't call it a common issue
<qman__> I've only run into it on failing hardware
<intelliant> i simply keep a backup and restore it from there and all works again
<intelliant> but deb pkg related issues are all over google
<qman__> well, if it keeps happening post-boot, it's probably the kernel
<intelliant> i have just updated the kernel this morning
<qman__> try the no DMA setting, try playing with other performance settings relating to disks
<qman__> maybe compare the ubuntu kernel with the kernels from your other OSes
<intelliant> 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:21:58 UTC 2010 x86_64 GNU/Linux
<qman__> see what's set differently
<intelliant> 2.6.34-12-desktop #1 SMP PREEMPT 2010-06-29 02:39:08 +0200 x86_64 x86_64 x86_64 GNU/Linux
<qman__> not the versions, the settings
<intelliant> okay
<qman__> compile options, kopts
<qman__> should be in the documentation or source packages for each distro
<ttx> kirkland: I'm now
<qman__> focus on disks
<qman__> try to find something that sticks out
<intelliant> you are right
<qman__> if you can eliminate your environment as the problem, file a bug report too
<intelliant> other OSes throw this up in dmesg
<qman__> in that case, it's likely an issue with the NFS share
<intelliant> http://pastebin.com/V2ZdBmxS
<qman__> network congestion, hardware failure, host software issues, you name it
<qman__> even NFS mount options
<intelliant> but this VM has nothing to do with NFS
<intelliant> oh it does
<qman__> well, it depends on where the VM's disk is stored
<intelliant> i will first work with the NFS options
<qman__> that's what I'm getting at
<qman__> the problem is between the running VM and its disk
<intelliant> all are sparse
<qman__> could be a number of things, the intermittency points at NFS if the disks are over NFS
<intelliant> some are running from the local system and some from NFS shares
<qman__> could be host machine getting bogged down, too
<intelliant> but the 2 i am analysing now are from the same host system
<qman__> too many iops
<intelliant> so now it seems i need to look into mount options only
<qman__> mount options, and start taking performance logs on the system where the disk images are stored
<intelliant> or analyse a VM without any NFS mount
<intelliant> performance logs - how?
<qman__> if it has a failing disk or is struggling to keep up, that could be it
<qman__> you probably want iostat
<intelliant> qman__: will do some homework and revert, thanks for your prompt help
<qman__> no problem
<intelliant> should not be failing hw as it would be too much of a coincidence that both my servers are failing - one 3 yrs old and one a few weeks old
<uvirtbot> New bug: #628646 in mysql-dfsg-5.1 (main) "package mysql-server-5.1 5.1.41-3ubuntu12.3 failed to install/upgrade: subprocess new pre-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/628646
<ttx> smoser: in case you're insomniac, ping
<intelliant> qman__: exactly same problem - http://ubuntuforums.org/showthread.php?t=1450439
<intelliant> similar issues - http://ubuntuforums.org/showthread.php?t=1489063
<intelliant> http://tinyurl.com/37p7jj6
<huats> morning
<intelliant> qman__: looks like the actual demon lies here - https://bugs.launchpad.net/opensuse/+bug/584048
<uvirtbot> Launchpad bug 584048 in opensuse "kvm images losing connectivity w/bridged network" [Undecided,New]
<intelliant> this is getting very hot
<zenmower> how do i write an iso to an empty partition
<daxroc> zenmower: If you just want to write it out as-is, use dd if='my.iso' of='/path/to/partition' (without the quotes)
<zenmower> yeah
<zenmower> thanks
<daxroc> actually not sure if that works
<zenmower> well we'll see
<daxroc> zenmower: any joy?
<zenmower> no i gotta repartition
<zenmower> so i havent tried yet
<uvirtbot> New bug: #628700 in openssh (main) "package openssh-server 1:5.3p1-3ubuntu4 failed to install/upgrade: the new pre-installation script subprocess returned error exit code 1" [Undecided,New] https://launchpad.net/bugs/628700
<zenmower> ok it's doing something
<zenmower> we shall see
<zenmower> i think this happened to me
<zenmower> http://linux.slashdot.org/story/10/08/28/2112208/Some-Windows-Apps-Make-GRUB-2-Unbootable
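For the record, daxroc's raw dd approach does work; a hedged sketch with a verification step, using a scratch file in place of the real partition (with an actual device such as /dev/sda3 the dd and cmp would run under sudo, and a wrong device name destroys data):

```shell
# part.img stands in for the target partition; my.iso is a fake ISO.
dd if=/dev/urandom of=my.iso bs=1K count=64 status=none
truncate -s 1M part.img

dd if=my.iso of=part.img conv=notrunc,fsync status=none
# Verify: compare only the first ISO-sized span of the target.
cmp -n "$(stat -c %s my.iso)" my.iso part.img && echo "write verified"
rm -f my.iso part.img
```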
<uvirtbot> New bug: #628764 in samba (main) "The Samba 'panic action' script, /usr/share/samba/panic-action, was called for PID 18453 (/usr/sbin/smbd)" [Undecided,New] https://launchpad.net/bugs/628764
<bt36> hello
<bt36> Could somebody point me to a good step-by-step for setting up a samba share for a windows network on a non-graphical ubuntu server (8.04)
<bt36> ??
<smoser> ttx, here now.
<squidly> bt36: there are a couple of good ways to do that. 1. Read the examples. 2. use swat.
<ttx> smoser: Do you think we should mention the results of sanscloud in the technicaloverview ?
<squidly> also http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/install.html#id2551954
<smoser> ttx, you have an example of past technical overview ?
<zenmower> check ubuntuforums
<ttx> smoser: if yes, please edit https://wiki.ubuntu.com/MaverickMeerkat/TechnicalOverview
<ttx> smoser: see what's already there for "Ubuntu Server Cloud images"
<ttx> smoser: I already mentioned cloud-init and kernel upgrades (for alpha2 and alpha3)
<smoser> the kernel is the big thing.
<smoser> i wouldn't mind putting the sanscloud info there.
<ttx> smoser: please do !
<smoser> as it is something people have asked about: "Can I run this image on my local system?"
<smoser> ok.
<smoser> should I have a link to more info there also?
<ttx> sure
<Daviey> SpamapS, hey.. can you grab me when you are about please :)
<ttx> smoser: also do you think any of the bugs reported against the EC2 images should be targeted to Maverick ?
<smoser> ttx, http://paste.ubuntu.com/487194/
<ttx> (the "ISO" testing bugs)
<smoser> ttx, yes.
<ttx> about your blurb: could use an external link for more info
<ttx> about the bugs: which ones are "solvable" for release ?
<smoser> hit reload on page and read.
<smoser> ttx, let me put together a list of what is what.
<smoser> a bug list.
<ttx> smoser: hit reload; doesn't seem to include a link
<smoser> There is more information available on the Ubuntu wiki.
<smoser> "wiki" is a link
<ttx> ah ,ok
<ttx> smoser: works for me, thanks
<mealstrom> hi
<mealstrom> is it possible to separate ubuntu updates and ubuntu security-updates ?
<ttx> mealstrom: yes
<ttx> mealstrom: just enable the -security apt repository (and not the -updates one)
<mealstrom> ttx: ahhh.. tnx a lot.
<Daviey> Although.... I believe -security updates are tested against what is in -updates... Not a big deal, unless there is an ABI bump - which we have seen before.
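For reference, ttx's suggestion amounts to a sources.list along these lines (release name, mirror, and components are lucid-era examples):

```
deb http://archive.ubuntu.com/ubuntu lucid main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
# lucid-updates deliberately not enabled
```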
<matti> Hey DavidLevin
<matti> Hey Daviey
<Daviey> hey matti
<DavidLevin> matti, Hi
<smoser> ttx, http://paste.ubuntu.com/487207/
<smoser> ttx, ping
<ttx> smoser: yep
<ttx> reading
<\sh> maswan, great news...I tested lucid and maverick today on those 465g7 blade servers...and only the installer kernel-modules udeb package is broken, meaning the HP be2{net,scsi} drivers are not in those udebs...booting into the installed system, everything works fine and doesn't give any headaches because of timeouts
<\sh> maswan, for reference bug #628776
<uvirtbot> Launchpad bug 628776 in linux "HP NC511i Driver (be2net and be2scsi) is missing in kernel module udebs" [Undecided,New] https://launchpad.net/bugs/628776
<jdstrand> Daviey: just a very minor clarification-- we build security updates without -updates (so they are guaranteed to be installable for systems without -updates), but we do typically test with -security and -updates installed
<jdstrand> Daviey: we also pull from -updates to create a -security update
<jdstrand> Daviey: see https://wiki.ubuntu.com/SecurityTeam/UpdatePreparation#Packaging for details
<Daviey> jdstrand, Interesting... so your work is based on that of -updates, but your main test base is the main release pocket?
<jdstrand> Daviey: that isn't what I meant to say
<Daviey> oh
<jdstrand> Daviey: we use the latest source package (excepting -proposed) and build with only release and -security
<jdstrand> Daviey: this is to support all Ubuntu users-- ones that only have -security enabled and ones who also have -updates enabled
<jdstrand> Daviey: we test with -updates enabled
<jdstrand> Daviey: since most people run with -updates
<Daviey> jdstrand, Oh great.. How will that scale with the perception I have that we are producing more SRUs.. ie the delta between -updates and release is getting larger?
<Daviey> jdstrand, I always knew you *could* run without -updates, but i just assumed it wasn't a supported model... :)
<Daviey> jdstrand, So... If -security creates an ABI bump, meaning something statically built against its rdepends; should the no-change rebuild be put in -security or -updates?
<jdstrand> Daviey: running with or without updates is supported
<jdstrand> Daviey: we pull from -updates to reduce version skew within the archive
<jdstrand> Daviey: if something from -updates needs a security update but also a bunch of other 'stuff', we will pull that other 'stuff' into the -security update
<Daviey> jdstrand, thanks :)
<jdstrand> Daviey: it happens less frequently than one might think, but we have most definitely been known to do 'no change rebuilds for security' for things that a security update pulled from -updates depends on
<hallyn> that sounds painful
<jdstrand> Daviey: and on a personal note, I always run -updates on desktops, and on servers it is more of a choice-- I've definitely been known to only run with -security on certain systems
<Daviey> jdstrand, I remember discovering one.  doing the dance of raising a bug, creating a package, getting it into -proposed... only to see one uploaded into -security a few days later :)
 * Daviey cursed :D
<jdstrand> hallyn: it can be, but it is less painful than having to create and test up to 2 security updates per release
<jdstrand> hallyn: ie, one for -updates and one for -security
<jdstrand> Daviey: yes, we try to notice those and will mention in the bug that this will happen/has happened
<Daviey> jdstrand, It was a universe one, that one of your chappies noticed later.
<jdstrand> Daviey: but that goes both ways-- sometimes something pops up in -updates while we are preparing a -security update and on the day of publication we have to reroll and retest :)
<Daviey> lovely :)
<jdstrand> again, still less pain overall than having to test twice and potentially having significantly different versions of software to update
<jdstrand> (and in the archive)
<jdstrand> eg, gnome and kde sometimes get largish changes into -updates
<jdstrand> (whole version bumps)
<jdstrand> it is admittedly imperfect, but the least evil
<jdstrand> we like to turn it around and say it is a feature for people running with only -security-- they get the benefit of getting particularly well tested bug fixes when running with only -security (ie, something goes to -proposed, goes to -updates and sits there until a -security update happens-- very low chance of regression typically)
<jdstrand> the spin doctors that we are :)
<maswan> \sh: excellent
<benedikt> How can I add a route without being root? Since network-manager-openvpn is able to add the VPN routes without NM being root, I figure this is possible?
<\sh> benedikt, I think NM is setting the routes via dbus magic which somehow has root privileges afaik
<benedikt> \sh: ah. Thanks, now I'll figure out how to make dbus set routes
<\sh> benedikt, na..openvpn sets the routes, and that openvpn process is started by nm-openvpn magic via dbus as root. I'm not a specialist for NM...ask someone from plumbers frontier
<benedikt> \sh: hm.. i was going to do a quick hack, but since dbus has been added to the equation this wouldn't be a quick hack any more.
<benedikt> I think the quickest solution would be to allow the user to run sudo ip route add without asking for a password
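benedikt's quick hack could look like the following sudoers fragment (user name and file name are illustrative; the iproute binary lives at /sbin/ip on lucid). Note that wildcard arguments in sudoers rules are easy to abuse, which is one reason to be wary of handing out route privileges at all:

```
# /etc/sudoers.d/vpn-routes (hypothetical) -- edit with visudo -f
benedikt ALL=(root) NOPASSWD: /sbin/ip route add *, /sbin/ip route del *
```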
<ttx> mathiaz: do you plan to exercise part of your magic scripts on the beta ISo testing ? I did not see any results from you yet
<mathiaz> ttx: working on it
<mathiaz> ttx: any tests are missing for now
<\sh> I really would like to read something about udev's magic for numbering the different pci devices..is it from highest to lowest IRQ? so the SmartArray on IRQ11 will become /dev/cciss/c0d0 and the SmartArray on IRQ10 will become /dev/cciss/c1d0?
<ttx> mathiaz: is that a question ?
<mathiaz> ttx: I'm integrating my iso testing scripts with hudson
<mathiaz> ttx: any tests are missing for now ?
<ttx> i386 / Install (default + RAID1)
<ttx> the ESX tests but we can't really do those
<ttx> + some optional testcases, including the "UEC install (separate networks topology)"
<mathiaz> ttx: ESX -> usualy done  by someone from QA
<mathiaz> ttx: ok - I'll do the raid1
<ttx> we may have a couple of hours left, fyi
<ttx> but not much more.
<ttx> Daviey, hggdh: did anyone test the separate networks topology (aka Topo3) ?
<\sh> benedikt, I wouldn't allow any user to set ip routes at all...they could break things
<Daviey> ttx, I have tested a 3 box topology a week ago.. Carlos has been mainly focusing on multiple
<ttx> Daviey: would be good to register a result with the beta candidate on Topo3 in the tracker
<Daviey> ttx, agreed
<ttx> since complex topos tend to work better than simple ones these days
<benedikt> \sh: this is just a quick script for me to route certain networks without going through the vpn router
<Daviey> ttx, Would a preseeded test from hggdh suffice?
<hggdh> ttx: I am going to test it now (topo3)
<hggdh> Daviey: yes
<Daviey> rockin'
<SpamapS> Daviey: here
<SpamapS> Daviey: though I'll need to step away briefly when the baby wakes. ;)
<nxvl> zul: btw, i uploaded augeas to debian last week
<Daviey> SpamapS, heh, ok ;)
<zul> nxvl: is it worth asking for a FFE for it
<ttx> SpamapS, zul: which one of you two plans to push the Lucid SRU on bug 564920 ?
<uvirtbot> Launchpad bug 564920 in php5 "PHP5 under Apache2 on 64 bit system is not completely 64 bit " [Undecided,Confirmed] https://launchpad.net/bugs/564920
<nxvl> zul: let me check
<zul> ttx: i can do it this afternoon
<SpamapS> ttx: I submitted it already as a merge proposal, I can't push anymore than that. ;)
<ttx> hmmm
<nxvl> zul: seems so
<zul> nxvl: k...remind me on monday
<ttx> SpamapS: and i commented on your proposal
<nxvl> zul: a couple of performance stuff, some bug fixing
<ttx> SpamapS: I haven't seen those remarks taken into account yet :P
<ttx> SpamapS: see my comment on https://code.launchpad.net/~clint-fewbar/ubuntu/lucid/php5/lucid-sru-lp564920/+merge/32803
<nxvl> zul: http://paste.ubuntu.com/487274/
<zul> nxvl: cool
<ttx> SpamapS: maybe you can write the SRU report and zul can sponsor your branch ?
<zul> ttx: that would be ideal ;)
<SpamapS> ttx: oh I may have missed that.
<X-Sleepy-X> i've plugged a hdd into new hardware (motherboard etc), and it has ubuntu server on it, i think it's 8.04. it doesn't find the network and i wonder if there's a neat little command to solve this problem?
<patdk-wk> do-dist-upgrade? :)
<X-Sleepy-X> patdk-wk: as i said, i dont have network connection...
<patdk-wk> so?
<patdk-wk> I can do it without a network connection
<patdk-wk> :)
<patdk-wk> what network chipset?
<X-Sleepy-X> realtek semiconductor rtl-8139/8139c
<patdk-wk> that is supported
<X-Sleepy-X> and VIA Technologies vt8233
<patdk-wk> dunno about that one
<patdk-wk> are you sure it's just not showing up as eth1? eth2? eth3? ....
<X-Sleepy-X> yes, but the installation used another motherboard so its like it hasnt detected the change in hardware
<patdk-wk> ubuntu tracks network interfaces
<X-Sleepy-X> yeah i see eth1 and 2 on iwconfig
<patdk-wk> the id of the old motherboard eth0, will be eth0 forever
<patdk-wk> new motherboard would use next available ethx
<X-Sleepy-X> ok but it finds them as wireless cards
<patdk-wk> unless you wipe out the persistent file that tracks it
<X-Sleepy-X> ifconfig finds nothing and iwconfig finds them
<ivoks> try not to use ifconfig, rather iproute
<ivoks> (ip a would be the command)
<X-Sleepy-X> k
<ivoks> that's not the solution for the problem
<ivoks> it's just general advice
<X-Sleepy-X> hmm
<ivoks> if you replaced your network cards with new ones
<SpamapS> ttx: SRU info added
<ivoks> delete /etc/udev/rules.d/70-persistent-net.rules file and reboot
<X-Sleepy-X> ivoks: thanks, will try that if changing the settings in /etc/network/interfaces doesn't work on this reboot
<ivoks> ubuntu ties interface to hardware
<ivoks> if you add/change new hardware, it will get new interface name
<ivoks> those links are stored in /etc/udev/rules.d/70-persistent-net.rules
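An entry in that file looks roughly like this (the MAC address is a placeholder); deleting the file lets udev regenerate it on the next boot, renumbering the interfaces:

```
# /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```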
<X-Sleepy-X> ivoks: didnt work
<X-Sleepy-X> well
<X-Sleepy-X> imean
<X-Sleepy-X> it now detects the eth0 and 1
<X-Sleepy-X> but still no connection to the internet
<X-Sleepy-X> this is driving me nuts
<X-Sleepy-X> shouldnt be that hard to fix
<siretart> show the output of `ip a ls` (in a pastebot)
<ttx> mathiaz: on http://iso.qa.ubuntu.com/qatracker/result/4442/287 you didn't encounter the same bugs as kirkland did in http://iso.qa.ubuntu.com/qatracker/result/4441/286 ?
<ttx> mathiaz: ignore me, kirkland clarified on PM
<zul> ttx: when i was testing samba yesterday i had both smbd and nmbd fyi
<kirkland> mathiaz: however, i am curious
<kirkland> mathiaz: can you look at the 3 bugs I reported against that test case (RAID) and tell me if any of those look familiar?  or how you worked around them?
<Maletor> Why can I not grow my RAID array through disk utility. More information can be found here. http://ubuntuforums.org/showthread.php?p=9797001
<mathiaz> kirkland: bug 628374
<uvirtbot> Launchpad bug 628374 in debian-installer "partman does not detect swap on a raid device" [Undecided,New] https://launchpad.net/bugs/628374
<mathiaz> kirkland: ^^ didn't see it
<mathiaz> kirkland: bug 628400
<uvirtbot> Launchpad bug 628400 in plymouth "[regression] boot degraded question never posed" [High,New] https://launchpad.net/bugs/628400
<mathiaz> kirkland: didn't see it either
<mathiaz> kirkland: I was prompted when the system was booting from a degraded array
<kirkland> mathiaz: that's a regression since alpha3
<kirkland> mathiaz: then
<kirkland> ttx: I believe that one should be a regression and release blocker, and we need foundations to fix it by RC
<mathiaz> kirkland: bug 628418
<uvirtbot> Launchpad bug 628418 in grub2 "default timeout is too low, impossible to escape in a VM" [Undecided,New] https://launchpad.net/bugs/628418
<mathiaz> kirkland: ^^ haven't needed that
<ttx> kirkland: I'm surprised it doesn't affect Lucid, after all plymouth didn't change that much in Maverick
<garymc> Hi Guys, I setup my website off my office server sometime last year.... I have lost the chat files and documents I kept for adding websites to my server. I wonder if anyone can be of guidance?
<SpamapS> garymc: thats a very open ended vague question. What exactly do you need help with?
<garymc> OK I have a website hosted on my office server. mywebsite.com
<garymc> I want to host mywebsite.com with a hosting company to free up my office internet connection
<garymc> But I need to keep the Applyform.php on my office server. So I want to point mywebsite.co.uk at the office server instead. So when people are on my hosted site mywebsite.com and they click on Apply.php it links to the page on our office server
<garymc> under mywebsite.co.uk
<garymc> ok?
<garymc> So right now I have pointed the .co.uk to my server ip address
<SpamapS> garymc: sounds overly complex. Why can't you put Applyform.php on your hosted server too?
<kirkland> mathiaz: ttx: can you boot maverick in a KVM and get to the grub menu?
<kirkland> mathiaz: ttx: by pressing <shift>?  I can't at all
<cyphermox> mathiaz, I'm testing JeOS on ESX for the beta and there's a number of things that don't match up with the test case... specifically, now uname -r shows "-virtual" for i386 and amd64 (like for KVM), and modules/full install take more space than they should
<kirkland> mathiaz: ttx: this really sucks
<garymc> Because it is using a mySQL server that is on our office machine. And the Terminals link to that machine. Thus when someone applies..
<ttx> kirkland: i admit doing my ISO testing on real HW [tm]
<patdk-wk> garymc heh, you are going have to use like a proxy for that file, to do that
<mathiaz> cyphermox: uname -r is normal
<mathiaz> cyphermox: it's has been corrected in the JeOS on KVM test case
<mathiaz> cyphermox: install space is a known bug
<cyphermox> mathiaz, cool, that I expected. bug #?
<mathiaz> cyphermox: bug 621175
<uvirtbot> Launchpad bug 621175 in linux "-virtual kernel contains too many modules" [Medium,Triaged] https://launchpad.net/bugs/621175
<cyphermox> mathiaz, I'll fix the test case for uname -r
<mathiaz> cyphermox: cool - thanks
<SpamapS> garymc: that all sort of makes sense. So whats the question again?
<garymc> brb
<jdstrand> kirkland: if using libvirt, try something like this:
<jdstrand> virsh start sec-maverick-i386 ; sleep 0.5 ; virt-viewer -c qemu:///system --wait sec-maverick-i386
<jdstrand> kirkland: I have good success seeing the grub menu with that
<kirkland> jdstrand: hmm
<kirkland> jdstrand: okay
<jdstrand> in fact, you could probably omit the sleep entirely
<TeTeT> kirkland: unfortunately I could not test the virt PPA for bug 590929 today, as the firewall does not give me access to it. I will see if #IS opens it overnight and get to it first thing next morning; if need be, I'll d/l the files by hand
<uvirtbot> Launchpad bug 590929 in eucalyptus/1.6.2 "eucalyptus create and delete volumes sometimes fail on lvm commands (POC + ENT configs)" [Undecided,Fix committed] https://launchpad.net/bugs/590929
<kirkland> TeTeT: okay, thanks
<vmlintu> How does one set the limit for the maximum number of open files in 10.04? In 8.04 one would edit /etc/security/limits.conf and add a nofile setting there, but so far I'm not having any luck. slapd doesn't like the default 1024, so it needs to be made higher..
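The limits.conf mechanism still exists in 10.04, but pam_limits only applies to PAM sessions; a daemon like slapd started from an init script may need its limit raised where the init script can see it. A sketch (user name and values are examples; the /etc/default/slapd approach assumes lucid's slapd init script sources that file, which the Debian-derived script does):

```
# /etc/security/limits.conf -- applies to PAM login sessions
openldap  soft  nofile  8192
openldap  hard  nofile  8192

# /etc/default/slapd -- picked up by the init script at daemon start
ulimit -n 8192
```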
<garymc> SpamapS : Ok was on important call where was i .... ?
<garymc> I need Apache to recognize mysite.co.uk
<garymc> what files do I need to alter
<vmlintu> garymc: virtual host configurations are usually under /etc/apache2/sites-available/
<garymc> ok and I just add mysite.co.uk in there like the others?
<vmlintu> garymc: do you want to add a new site or new alias for an existing site?
<garymc> Can I make the .co.uk use the same folders as the.com site?
<garymc> new alias
<vmlintu> do you have a file there for your site?
<garymc> ?
<vmlintu> under /etc/apache2/sites-available
<garymc> I have a current file for 2 sites I have on our server
<garymc> yes
<vmlintu> Do you have a ServerName directive there?
<garymc> ?
<vmlintu> ServerName yoursite.com
<vmlintu> in the configuration file under /etc/apache2/sites-available
<garymc> I have file "hosted-site"
<garymc> with details in there
<vmlintu> Inside that file, do you have a line that has ServerName in it?
<garymc> yes
<vmlintu> After that line add a new line:
<garymc> ok
<vmlintu> ServerAlias yoursite.co.uk
<vmlintu> That adds an alias for the existing site
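Put together, the virtual host file under /etc/apache2/sites-available ends up looking roughly like this (all names and paths here are placeholders, not garymc's actual config):

```
<VirtualHost *:80>
    ServerName  yoursite.com
    # ServerAlias takes one or more extra names for the same site:
    ServerAlias yoursite.co.uk www.yoursite.co.uk
    DocumentRoot /var/www/yoursite
</VirtualHost>
```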
<garymc> ok
<garymc> do i put the www
<garymc> ok do i need to restart apache
<vmlintu> apache recognises exactly the name you put in there, so if you want it to be www.yoursite.co.uk, you have to add www
<vmlintu> Yes, apache needs to be restarted after the change
<garymc> sorry how do i restart apache again. Long time since I last did it
<garymc> apached restart?
<vmlintu> /etc/init.d/apache2 restart
<garymc> ok done
<SpamapS> err
<garymc> gonna test now
<SpamapS> better to suggest people use 'service apache2 restart'
<garymc> ok well my .co.uk is still not pointing at my server ?
<vmlintu> SpamapS: true.. I'm just too used to using /etc/init.d/*.. old habits die slow..
<garymc> or my server isnt allowing it
<SpamapS> vmlintu: me too ;)
<SpamapS> garymc: DNS will need to be setup first.
<vmlintu> garymc: did you set up DNS or /etc/hosts with the name?
<garymc> I have it pointed at my office server
<garymc> but I only did that about 1 hour ago
<garymc> maybe it takes a little time to go through
<vmlintu> garymc: you can check with dig if dns gives you the right ip
<vmlintu> dig www.yoursite.co.uk
<vmlintu> And on the server you can check the logfiles to see if the request arrives there. The logfile is usually defined in the configuration file you just modified.
<garymc> ok thanks
<garymc> whats dig?
<vmlintu> dig is a DNS lookup utility - I think it's in dnsutils package
<vmlintu> nslookup works also
<SpamapS> nslookup is oooll school
<SpamapS> host ftw ;)
<vmlintu> I feel really old..
<SpamapS> garymc: your domain most likely has a TTL set very high. in the dig response, you should see the "TTL" of the response as the number right after the hostname
<SpamapS> ;; ANSWER SECTION:
<SpamapS> spamaps.org.		1800	IN	A	65.98.207.160
<SpamapS> as an example
<SpamapS> garymc: TTL is the number of seconds caches are allowed to hang on to the response.
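The answer line SpamapS pasted can be picked apart mechanically. A minimal sketch (the function name is made up; it just splits dig's whitespace-separated answer fields):

```python
def parse_dig_answer(line):
    """Split one line of dig's ANSWER SECTION into its fields:
    name, TTL (seconds a cache may keep the record), class, type, data."""
    name, ttl, rclass, rtype, data = line.split()
    return {"name": name, "ttl": int(ttl), "class": rclass,
            "type": rtype, "data": data}

rec = parse_dig_answer("spamaps.org.\t1800\tIN\tA\t65.98.207.160")
print(rec["ttl"])  # 1800 -> caches may serve this answer for up to 30 minutes
```

So with a TTL of 1800, a DNS change can take up to half an hour to be seen everywhere.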
<ttx> yay complete ISO testing coverage for server !
<ttx> Kudos everyone
<ttx> -smoser who failed to validate some asia-pacific AMI, of course :P
<SpamapS> ttx: ugh, sorry I wasn't able to contribute to the iso testing. Been chasing this libdbi stuff. :-P
<garymc> ok my .co.uk is now pointing at my server, but if I type into the browser mywebsite.co.uk it displays my error 404 page
<garymc> instead of the homepage
<gholms> smoser: ping
<garymc> what could cause this SpamapS ^^
<garymc> ?
<smoser> gholms, here.
<SpamapS> garymc: enter the hostname, without any path, does it show you the "IT worked!" page?
<gholms> smoser: I see version numbers on the cloud-init packages, but no tarballs on the LP site for it.  Is that by mistake, or do I need to be using checkouts to build tarballs all the time?
<smoser> :)
<smoser> i just haven't really had a reason to build them.
<smoser> gholms, i usually do build the tarball and put it at http://smoser.brickies.net/cloud-init-dist/
<gholms> Ah, that would work.
<gholms> smoser: I'm looking into using most of those scripts for Fedora, so a more-or-less official spot to download release tarballs from would be most appreciated.
<smoser> cloud-init is really almost a native-package.
<smoser> gholms, i can start posting to launchpad
<gholms> That would be really helpful.
<smoser> and i would absolutely do that if you're willing to help make it more portable :)
<gholms> Obviously calls to apt and things wouldn't work, but I would hate to have to fork everything just because it doesn't quite work on RH-type distros.
<smoser> yeah.
<smoser> i'm definitely open to having distro specific stuff
<smoser> and being more generic across the board.
<smoser> ie, "apt_update" => "software_update"
<smoser> or such
<gholms> Sure
<smoser> i'll post the most recent tarball on launchpad
<smoser> and try to continue doing that.
<gholms> That would be awesome.  I'll let you know how it seems to work out from the Fedora side.
<garymc> why is my website mywebsite.co.uk working but if I do www.mywebsite.co.uk I get my error404 page?
<garymc> fixed it
<smoser> gholms, fwiw, you can get to the tarballs from launchpad, via the ubuntu builds.
<smoser> ie: https://launchpad.net/ubuntu/maverick/+source/cloud-init
<smoser> has cloud-init_0.5.14.orig.tar.gz
<gholms> Yeah, that's what I initially grabbed.
<smoser> i'm trying to figure out how to put a tarfile up there, though.
<smoser> anyone know how i would do that?
<gholms> It didn't seem "upstream" enough to satisfy the packaging committee.  :P
<patdk-wk> heh, talk to salmon, they go upstream all the time
<bjaanes> I
<smoser> i'm seriously baffled.
<smoser> i have no idea how to put a file up for download on launchpad
<smoser> https://launchpad.net/cloud-init
<bjaanes> I've just set up my DNS & DHCP server. Everything is pretty basic (from the Ubuntu Server book and "official" documentation and such), but the server keeps getting a DHCP address from itself. I have configured the interfaces file with a static IP - but it doesn't seem to bother with it. So now I have to restart the networking service every time I want to have the right IP. Anyone got a clue about this?
<patdk-wk> heh?
<patdk-wk> what is in /etc/network/interfaces?
<bjaanes> one moment
<bjaanes> auto eth0
<bjaanes> iface eth0 inet static
<bjaanes>         address 192.168.192.11
<bjaanes>         network 192.168.192.0
<bjaanes>         netmask 255.255.255.0
<bjaanes>         broadcast 192.168.192.255
<bjaanes>         gateway 192.168.192.1
<bjaanes> and it works when I go "sudo /etc/init.d/networking restart"
<bjaanes> but within a minute or two it gets an IP like 192.168.192.52
<bjaanes> Could even my dhcp server override static stuff like that?
<patdk-wk> dhcp servers don't override
<bjaanes> what i thought
<patdk-wk> but if you are running a dhcp client, it would
<bjaanes> okey?
<bjaanes> Why would a dhcp client be running?
<patdk-wk> dunno :)
<patdk-wk> ps ax | grep dhc
<patdk-wk> might show if one is
<bjaanes>   816 ?        Ss     0:00 dhclient3 -e IF_METRIC=100 -pf /var/run/dhclient.eth0.pid -lf /var/lib/dhcp3/dhclient.eth0.leases eth0
<bjaanes>  2902 ?        Ss     0:00 /usr/sbin/dhcpd3 -q -pf /var/run/dhcp3-server/dhcpd.pid -cf /etc/dhcp3/dhcpd.conf eth0
<bjaanes>  3781 pts/0    S+     0:00 grep --color=auto dhcp
<patdk-wk> yep, dhclient is running
<bjaanes> Awe. Why does it do that? :( My last server didn
<RoAkSoAx> kirkland: when's next ISO testing?
<patdk-wk> I dunno :)
<patdk-wk> unless it was set to, iface eth0 inet dhcp
<patdk-wk> and you just changed it to static
<patdk-wk> did you try rebooting?
<patdk-wk> and see if dhclient is still running then?
<patdk-wk> cause it shouldn't
<patdk-wk> and I dunno what you did to make it do it
<bjaanes> It helped to reboot. I can
<bjaanes> Sorry to have bothered you with something I should have thought of trying =/
<bjaanes> But very much thanks :D
<smoser> gholms, whoowhoo. https://launchpad.net/cloud-init/trunk now has a download.
<smoser> that was a lot of work
<RoAkSoAx> smoser: when's the next ISO testing?
<gholms> smoser: Awesome
<smoser> RoAkSoAx, ?
<smoser> iso testing occurs ~ 3 days before release.
<smoser> RC release info at https://wiki.ubuntu.com/MaverickReleaseSchedule
<RoAkSoAx> smoser: Ok, I just read in the meeting minutes that ISO testing will be soon. Anyways, how soon do you need the 'testdrive -u' thing for UEC images?
<gholms> smoser: The first thing I think would help is factoring out package management code.  I can definitely try to help figure that out.
<smoser> RoAkSoAx, what is 'testdrive -u' ? what is working now ?
<smoser> at this point its probably very difficult to get any new features into ubuntu.
<smoser> gholms, yeah, that'd be great. its probably a good time for it too, as the maverick code is roughly frozen (other than some bug fixes).
<RoAkSoAx> smoser: 'testdrive -u' support to test drive local uec images
<gholms> smoser: If all else fails, 10.10 can release without the generic code while Fedora picks up the newer stuff in the mean time.
<smoser> so right now there is generally functional use of uec images in the archive, RoAkSoAx  ?
<smoser> gholms, yeah, 10.10 is releasing with what is there. which is fine.
<RoAkSoAx> smoser: currently it only supports testing daily images from uec-images.u.c
<smoser> RoAkSoAx, thats fine then.
<smoser> so, i dont' see a real reason to push to get '-u' into the archive.
<smoser> for maverick.
<RoAkSoAx> none pf this is on maverick
<smoser> so, i'd say, just leave that on your own schedule, and lets make sure we have it in natty
<ssureshot> I can't seem to get ubuntu to boot using a software raid on my server... Are there any workarounds for this? If I boot to the cd and choose to boot from hard disk it works fine... Drive is set with the boot flag...
<smoser> (Narwhals Narwhals swimming in the ocean)
<smoser> RoAkSoAx, "none pf" ? i'm confused now.
<RoAkSoAx> smoser: i'm gonna backport when the changes are ready, but first I want to add suport for testing local UEC images, given that you were requesting that :)
<RoAkSoAx> smoser: "none of"*
<smoser> oh. i thought that you had gotten part of it into maverick.
<RoAkSoAx> smoser: nope, got busy with coming back to the US so I couldn't push it to maverick, and I've been sick the past few days
<smoser> oh well.
<hggdh> ScottK: do you have a bit of time for a chat?
<ScottK> hggdh: My latency might be a bit high, but sure.
<RoAkSoAx> smoser: I just wanna know for when would you like the functionality of testdriving local UEC images.
<smoser> RoAkSoAx, well, at this point i'm not in a big hurry.
<smoser> so a couple weeks is fine. it missed maverick, so i'd like it in natty.
<RoAkSoAx> smoser: I thought you just wanted the functionality, and not specifically in maverick :). That's why I actually only added functionality for uec-images.u.c but didn't upload. Anyways, testing local uec images will be ready soon, push to PPA, upload to Natty eventually, and backport to Maverick
<smoser> i opened a feature freeze bug
<smoser> and i thought i said that we'd need it uploaded before beta freeze.
<smoser> sorry i wasn't more clear.
<RoAkSoAx> smoser: It's my bad :(, since I thought you only wanted support for uec-images.u.c, but when you told me local uec image too, I wanted to have both before uploading, but anyways, we can always backport :)
<uvirtbot> New bug: #629005 in mysql-dfsg-5.1 (main) "package libmysqlclient16 5.1.41-3ubuntu12.6 failed to install/upgrade: trying to overwrite '/usr/lib/libmysqlclient.so.16.0.0', which is also in package mysql-cluster-client-5.1 0:7.0.9-1ubuntu7" [Undecided,New] https://launchpad.net/bugs/629005
<pfifo> hi
<pfifo> can just anyone use the public cloud that is shown on the site, and does it cost anything?
<patdk-wk> heh?
<patdk-wk> shown on what site?
<patdk-wk> only public cloud I know of is ec2
<patdk-wk> I believe rackspaces cloud is internal only
<pfifo> yes amazon ec2
<patdk-wk> ya, ec2 costs
<patdk-wk> you're basically renting a server from them
<pfifo> i figured it would
<patdk-wk> now that you can rent it for just a few hours vs. per month, it's cheaper overall
<pfifo> here's the sitrep, i just lost my hard drive last night, i had some really important data on it that is now lost in the void. I'm searching around for something that will allow me to store files remotely. Looked into gmailfs but that appears to be slow and broken, i looked at ubuntu one and came across the ec2 thing. Any other places I can snag several gigs of data storage from?
<pfifo> it doesn't necessarily have to be reliable
<gholms> If storage is what you're after what you really want is S3 as far as Amazon goes.
<gholms> There isn't really much for free.  JungleDisk, which runs on top of S3, is designed for exactly what you want, but you have to pay for that as well.
<_ruben> pfifo: a couple of usb flash disks containing multiple copies would probably be cheapest and also fastest
<gholms> ^ this
<uvirtbot> gholms: Error: "this" is not a valid command.
<gholms> ...
<pfifo> if i was going to pay, i would do 10 a month to ubuntu
<smoser> pfifo, so you're looking for a place to stash lots of data temporarily.
<smoser> realistically, i say you either pay ubuntu $10 (thats nice)
<smoser> or use s3.
<gholms> Doesn't Ubuntu use S3 anyway?
<smoser> if you put up 100G into s3, the storage would only cost you $10 for the first month.
<pfifo> smoser: longterm, but i don't care if it gets deleted due to something like a gmail tos violation, cause if it's free i can obviously make multiple copies
<smoser> you only need this for short time ?
<pfifo> smoser: LONG term
<smoser> oh.
<smoser> sorry.
<smoser> i thought you were just trying to migrate data.
<smoser> actually, google offers a file storage option too
<pfifo> smoser: no what i lost was my cross-tool chain and my code. (and some other crap)
<smoser> http://picasa.google.com/support/bin/answer.py?hl=en&answer=39567
<pfifo> the trick is though, it needs to be mountable or somehow synced (rsync) on my computer, none of this web interface crap; rapidshare would work for that
<smoser> yeah.
<smoser> maybe picasa interface
<smoser> you just have to call them "pictures"
<smoser> :)
<pfifo> ever tried to upload an exe to docs? google throws a fit
<smoser> http://code.google.com/apis/storage/docs/faq.html#pricing
<gholms> smoser: Thanks for getting that posted.
<smoser> there is that. its free !
<pfifo> which they should; how many users really make their own exes?
<smoser> for a while.
<smoser> that is google's "cloud storage" (their S3 equivalent). it's beta and developer-only stuff right now, so probably not a lot of tools available at the moment.
<pfifo> google is cheap though
<pfifo> is google's extra storage pricing compatible with google code too?
<pfifo> cause that would be perfect, git would sync everything perfectly
<gholms> smoser: I'm looking at DataSourceEc2.py here; are things like us-west-1b.ec2.archive.ubuntu.com supposed to be cnames that resolve to mirrors in the right region, or...?
<smoser> pfifo, i dont know. somehow i doubt it.
<smoser> yeah. it tries to lookup mirror based on availability zone
<gholms> Is it only resolvable within EC2 or something?
<harpette> hi there. Anyone has an idea why a "vmbuilder --mac=00:16:3e:10:34:31" builds a VM with <mac address='52:54:00:7f:93:28'/> ?
<smoser> gholms, availability_zone[:-1]
<smoser> us-west-1.ec2.archive.ubuntu.com , not us-west-1b.ec2.archive.ubuntu.com
<gholms> Doh!
<smoser> but they are only *reachable* inside ec2
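The availability_zone[:-1] trick smoser mentions can be sketched like this (the function name is made up for illustration; cloud-init's actual code is structured differently):

```python
def ec2_mirror_for_az(availability_zone):
    """Map an EC2 availability zone to its region-local Ubuntu mirror by
    dropping the trailing zone letter, e.g. 'us-west-1b' -> 'us-west-1'."""
    region = availability_zone[:-1]
    return "%s.ec2.archive.ubuntu.com" % region

print(ec2_mirror_for_az("us-west-1b"))  # us-west-1.ec2.archive.ubuntu.com
```

As noted in the discussion, the resulting hostnames only resolve to reachable mirrors from inside EC2.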
<gholms> Do they have a fixed set of IP blocks for each region?
<pfifo> i hope i didnt get you guys sidetracked lol
<gholms> Crap, looks like they don't.
<pfifo> im out, thanks for the info guys
<Datz> Hi, when I log on to the system, after recent updates, system statistics are not shown if load is over 1. How can I change this threshold?
<gholms> smoser: Instead of (or in addition to) adding "built-in" config data do you suppose cloud-init could read a distro-provided config file in /usr/share for defaults?
<gholms> Sorry if I'm missing something here; I'm more or less asking questions as I go along.
<smoser> "built in config data" ? youmean from /etc/cloud/cloud-init.cfg ?
<smoser> err..
<smoser> cloud.cfg
<gholms> __init__.py in the cloudinit directory
<gholms> That and DataSource.py have a bunch of default configuration info embedded into the source.
<gholms> cloud.cfg goes in /etc, so it might be a bad place to specify, say, mirror locations since people can overwrite it completely.
<smoser> i would not be opposed to it.
<smoser> one thing you may realize is that its kind of messy...
<gholms> ?
<smoser> cloud.cfg is semi "cloud-init config" and "default #cloud-config"
<smoser> that make sense ?
<gholms> I'm not sure what the second part means.  :-\
<smoser> i think there are one or two things that you can change in /etc/cloud/cloud.cfg that make no sense to "cloud config".
<smoser> cloud_type for example.
<smoser> so, cloud-config is the "easy config".
<smoser> but cloud-init supports other types of input.
<smoser> and largely, cloud.cfg just serves as the default "cloud-config"
<smoser> there isn't really a file to configure cloud-init
<smoser> make sense ? or did i make it worse.
<gholms> More or less, yeah.
<smoser> either way, i'm all for moving hard coded values out of __init__.py
<smoser> :)
<gholms> There needs to be a way to pick which modules should run on a given system.  cc_byobu.py makes no sense on Fedora, for instance.
<gholms> So that might be the purview of such a config file.
 * gholms hopes he's making sense
<gholms> Whoops, there's no regular init script.
<ideaman> Who can tell me whats the best way to figure out where my 100 megs of UDP usage a day is going on my server?
<ideaman> Ubuntu 10.04
<smoser> that does make sense, yes.
<smoser> cc_byobu would make sense on fedora (it is packaged there)
<smoser> gholms, yeah, its upstart only
<\sh> ideaman, tcpdump ?
 * gholms might need to get a packaging committee exception
<ideaman> will try that
<ideaman> was trying bandwidthd and doing w3m, but not very comprensive
<gholms> smoser: Would you be opposed to someone's submitting systemd configs to go along with the upstart ones?
<smoser> gholms, no.
<kirkland> Daviey: are you gone for the day?
<Daviey> kirkland, sadly not
<zul> kirkland: he is having fun with me
<gholms> smoser: How do you guys get packages from the master package mirror to the S3 mirrors?
<smoser> they're not s3
<gholms> Or for that matter, how does Ubuntu's mirror infrastructure work inside EC2?
<smoser> canonical runs ec2 instances in each region providing a apt mirror.
<smoser> so it basically runs as any other mirror would run.
<gholms> Oof, that means an awful lot of inter-zone transfer fees.
<gholms> I presume Canonical uses elastic addresses for those and just eats the data transfer fees?
<ahasenack> hey, do you guys know if in UEC one instance can connect to another one using the external ip?
<ahasenack> instead of the internal one, I mean
<gholms> ahasenack: They can, but then you get charged transfer fees.
<ahasenack> I managed to connect using the internal one and after having the security group authorizing the other group as the source, but not using the external ip
<ahasenack> gholms: it works in ec2, right
<gholms> Err, right.  I guess I shouldn't be speaking for UEC.
<ahasenack> gholms: I'm wondering if it should work in UEC too or if it's a network setup problem in this particular UEC I'm using
 * gholms shuts up and lets someone knowledgeable about it answer
<ahasenack> it would be much simpler for my setup if it worked
<ahasenack> I'm using SSL and I have the external hostname as the commonName of the cert...
<ahasenack> and gnutls didn't like me putting other names in subjectAltName
<patdk-wk> heh? if you use subjectaltname, commonname is ignored, so you have to duplicate the commonname in subjectaltname
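In openssl.cnf terms, that duplication looks roughly like this (hostnames are placeholders, and the exact section name depends on your config):

```
# openssl.cnf extension section -- once subjectAltName is present,
# clients ignore commonName, so the CN's hostname must be repeated here
[ v3_req ]
subjectAltName = DNS:mail.example.com, DNS:mail-internal.example.com
```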
<RoyK> Mein Führer - I can walk!
<_ng> :O)
<RoyK> Dr. Strangelove ftgw
<RoyK> Dr. Strangelove ftw
<Datz> Hi, when I log on to the system, after recent updates, system statistics are not shown if load is over 1. How can I change this threshold?
<ScottK> Datz: It's a recent change that was on purpose (if perhaps misguided).  See the recent history of the ubuntu-server mailing list for details.
<gholms> smoser: Why not import boto instead of using boto_utils.py?
<smoser> it should import boto now.
<smoser> the reason it has the staic copy is that lucid at one point was about to revert to boto 1.8
<smoser> which didn't have the function that i needed.
<gholms> Is that obsolete now, then?
<smoser> yes
<gholms> Cool
<smoser> well, we would need to remove it
<smoser> the boto_utils.py is used now.
<smoser> but there is no reason for that any more.
<gholms> Oof, there's nowhere to report bugs for that.
<gm1959> can anyone please tell me how to get 'upstart' to not run the X server / gnome stuff?  There's no inittab to edit it out of?
<|rt|> gm1959: upstart still reads the sysvinit rc runlevel files so you can still use the normal update-rc.d script to add and remove things from the various runlevels
<gholms> smoser: Where should bug reports and patches and things for cloud-init go?
<smoser> launchpad.
<smoser> open agains the cloud-init project.
 * gholms thinks https://launchpad.net/cloud-init needs to be set up the rest of the way since "Report a bug" doesn't work there
<smoser> well that stinks.
<smoser> let me look
<smoser> gholms, fixed. now you can "Report a bug"
<smoser> :)
<gholms> :D
<gholms> Does python << 2.7 mean that this stuff doesn't work with python 2.7?
<gholms> (Because that would be a problem for me)
<Datz> ScottK: thanks.. I'll see if I can hunt down that list
<ScottK> Datz: archives are on lists.ubuntu.com.
<Datz> great, thanks ScottK
<Datz> humm.. ScottK do you perhaps know a specific range of days that the change might have taken place?
<ScottK> Datz: This week.
<ScottK> Today or yesterday.  They run together.
<Datz> thanks
 * Datz doesn't know what it would solve even if he could find it. :P
<Madwill>  hey what do you guys use for os level virtualisation ?
<Madwill> my clients use OpenVZ and i would like them to go for something more recent than ubuntu 8.04
<Madwill> what is used now as a lightweight container?
<SpamapS> LXC seems to be popular
<SpamapS> And I believe is going into, or has been accepted into, the upstream kernel
<Madwill> nice
<Madwill> starting to read about it
<Madwill> the poor dev's team made deprecated choices..
<Madwill> i mean the poor men' dev team
<soren> Yeah, LXC has been upstream for a while now.
#ubuntu-server 2010-09-03
<smoser> Madwill, lots of people here will just think I'm being a stick in the mud
<smoser> but lxc != openvz
<smoser> lxc is not as robust as openvz was (or as I understand it was). You cannot trust a UID 0 process in your lxc container.
<smoser> maybe robust wasn't the right word for that.
<smoser> it makes no promises that you could do such a thing.
<YankDownUnder> Anyone install/use Tryton?
<uvirtbot> New bug: #629234 in vsftpd (main) "package vsftpd 2.2.2-3ubuntu6 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/629234
<uvirtbot> New bug: #629236 in samba (main) "Can't install samba in 10.04.1 fresh install" [Undecided,New] https://launchpad.net/bugs/629236
<oracle> upon actually installing ubuntu server 10.04, will there be an option to encrypt everything before the install continues?
<wbiesing> Hi all. I'm having some issues with Dell's Hardware RAID controller, Ubuntu 10.04, Dell T3500. Is this the right place to ask?
<uvirtbot> New bug: #629247 in libvirt (main) "Failure to alter VM's MAC address with virsh" [Undecided,New] https://launchpad.net/bugs/629247
<MTecknology> A little off topic and random. I just looked at the lines of code in Nginx - 137,212 LoC
<danielhdiaz> I need to set up a server with Ubuntu, could you help me?
<twb> !es
<ubottu> In most Ubuntu channels only English is spoken. If you are looking for help in Spanish or want to chat, join the channel #ubuntu-es. Type "/join #ubuntu-es" (without quotes) and press enter.
<danielhdiaz> ok thank you.
<twb> De nada.
<q_a_z_steve> How do I check whether I've set an "internet host name" for my 10.04 server install? like "hostname" but for web server identity..
<twb> q_a_z_steve: where are you hearing about this "internet host name"?  From e.g. apache's logs?
<twb> It probably means telling the daemon in question what FQDN to use, and/or putting same in /etc/hosts.
<q_a_z_steve> twb: well I'm not even sure I've set one yet. I've killed my box and just ran through a million lines of aptitudes since
<twb> q_a_z_steve: check "hostname --fqdn" first, I guess
<q_a_z_steve> twb: well that shows .lan so I probably haven't set one at all yet.
<q_a_z_steve> right?
<twb> That is most likely provided by the OpenWRT or whatever that is running your network
<twb> Or *was* running your network at the time you installed ubuntu
<twb> Unless your host has a public IP on the internet, getting a correct FQDN probably isn't important.
<q_a_z_steve> well obviously I *want* a domain name eventually to be tied to this web server...
<q_a_z_steve> twb: but you don't think that's an issue yet, right?
<intelliant> qman__: hi!
<intelliant> noticed that all the VMs had a shared virtual optical drive. I removed that from all VMs and restarted them. Ever since then, the errors have not reappeared.
<intelliant> will be monitoring this further over the next 2 days.
<Jeeves_> TeTeT: You awake?
<sherr> intelliant: I saw your conversation yesterday - how was the CDROM shared between guests? I have 2 guests, each with a CDROM, and each KVM definition has :
<sherr> <address type='drive' controller='0' bus='1' unit='0'/>
<sherr> But I would think that is "private" to each guest (i.e. 2 separate cdroms)
<TeTeT> Jeeves_: barely ;) need to visit the dentist in the next 10 minutes, will be back in an hour
<Jeeves_> TeTeT: Ah. I'll give you something to think about at the dentist
<Jeeves_> dhcp4!~
<Jeeves_> (I need it to run ISC dhcpv6 :))
<PresuntoRJ> :-\
<Jeeves_> PresuntoRJ: ?
<TeTeT> Jeeves_: hmm, check out the PPA from Mathieu Trudel, he has a new network manager and probably also a dhcp4 client in it
<TeTeT> Jeeves_: or he had a patch to remove the need for it, don't remember it exactly
<Jeeves_> I need the server and relay, but I'll have a look at his ppa. Thanks!
<intelliant> sherr: http://pastebin.com/Fryb0esQ
<Jeeves_> Mathieu only has networkmanager
<intelliant> i agree to what you are saying but this is just one observation
<TeTeT> bummer
<intelliant> sherr: I still believe the problem may lie elsewhere
<TeTeT> maybe there are debian experimental packages for it?
<Jeeves_> Oh, that's a good one
<TeTeT> best to test those in a vm, things can go awfully wrong with the experimental packages in my experience
<intelliant> maybe over the weekend I will add the cdrom back and wait for the errors.
<Jeeves_> TeTeT: Yeah, I'll try that
<Jeeves_> Debian renamed it to isc-dhcp
<Jeeves_> which makes sense, given the fact that there are more dhcpd's around
<sherr> intelliant: thanks. I have a "block" device, you have a file. Similar otherwise. Well, let's wait and see - I hope it works for you.
<uvirtbot> New bug: #629304 in mysql-5.1 (main) "-DMYSQL_CLIENT_NO_THREADS and undefined reference to `my_pthread_fastmutex_init'" [Undecided,New] https://launchpad.net/bugs/629304
<TeTeT> Jeeves_: any success with the experimental packages?
<Jeeves_> TeTeT: Yes
<Jeeves_> I've got isc-dhcp-relay running
<Jeeves_> All I gotta do now is figure out how dhcpv6 works :)
<Jeeves_> But first a shower, to really wake up :)
<uvirtbot> New bug: #629318 in bacula (main) "package bacula-director-mysql 2.4.4-1ubuntu9 failed to install/upgrade: Unterprozess installiertes post-installation-Skript gab den Fehlerwert 1 zurück" [Undecided,New] https://launchpad.net/bugs/629318
<Jeeves_> grrr, people that post bugs in their own language--
<LordKitsuna> hey can anyone help me? my server totally just blew up in my face and i have no idea what to do
<Scorpion> hello
<Scorpion> who install oracle 11gR2 on ubuntu 10.04?
<LordKitsuna> i uninstalled a package then ran some updates and now when i try to boot i get "fsck from util-linux-ng 2.17.2  /dev/sda1: clean, 132765/4800512 files, 7904061/19182080 blocks (check in 2 mounts) init: udevtrigger main process (276) terminated with status 1 init: udevtrigger post-stop process (281) terminated with status 1
<PresuntoRJ> !ask | Scorpion
<ubottu> Scorpion: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-)
<PresuntoRJ> !details | Scorpion
<ubottu> Scorpion: Please give us full details. For example: "I have a problem with ..., I'm running Ubuntu version .... When I try to do ..., I get the following output: ..., but I expected it to do ..."
<X-2> PresuntoRJ: I know you.
<LordKitsuna> ;-; please can anyone help me, i have a lot of people who use this server and i'm still learning some of the more in-depth things and i just can't figure this problem out
<PresuntoRJ> X-2: of course :D
<PresuntoRJ> LordKitsuna: how come "blew up" ? electrical explosion? hdd failure? bad upgrade?
<LordKitsuna> well i say that when a big error that prevents me from doing what i want to do happens
<LordKitsuna> its just a figure of speech
<LordKitsuna> as i said i uninstalled a package (bandwidthd to be exact) then ran apt-get update followed by apt-get upgrade (couldn't tell you what it upgraded) then all that error happened
<PresuntoRJ> I have used bandwidthd before... maybe there is an HDD error that went unnoticed before you tried this upgrade, because there is nothing in the package that could cause a failure to boot
<Psi-Jack> Hmm
<PresuntoRJ> LordKitsuna: do you remember what is your filesystem? ext4? ext3? reiserfs?
<LordKitsuna> ext4 i think
<LordKitsuna> iirc
<Psi-Jack> For things that are managed by upstart, for example, avahi-daemon, what is "good practice" to do to not have that service started/stopped except manually, rather than automatically?
<PresuntoRJ> LordKitsuna: do you have a livecd with you? when was your last backup ?
<LordKitsuna> last backup...ummm ummm not tooo long ago also yes i have a live cd of ubuntu desktop with me
<LordKitsuna> my power is being flaky, it had better not go out, this would be the worst possible timing
<LordKitsuna> so what should i do with the live cd PresuntoRJ
<PresuntoRJ> could you try to power it on from the livecd? in there, try running fsck on your partitions (before you mount the HDD)? let it complete the process before you try to boot again
<sd-d_> how can i create localhost in ubuntu
<sd-d_> ???
<PresuntoRJ> LordKitsuna: tune2fs also gives you some feedback on the health of the harddrives
<PresuntoRJ> sd-d_: ?
<LordKitsuna> i'm not 100% sure how to run fsck (still learning ubuntu), do i just type it in a terminal?
<PresuntoRJ> sd-d_: 127.0.0.1 ?
<sd-d_> PresuntoRJ, like we open a directory in browser in asp.net
<PresuntoRJ> LordKitsuna: most linux are the same... fsck /dev/partition (with the correct partition name)
<PresuntoRJ> LordKitsuna: try $ man fsck to read the manual
<sd-d_> PresuntoRJ,  ur close ..
<PresuntoRJ> $ man also works for most commands
<PresuntoRJ> sd-d_: you want to access your filesystem from a web browser?
<sd-d_> yes ...
<twb> PresuntoRJ: that's called DAV
<twb> :P
<PresuntoRJ> I think there is a python trick for that
<PresuntoRJ> python -m SimpleHTTPServer
<twb> You can do that with any old browser
<PresuntoRJ> make sure you have the tcp port opened on your firewall
<PresuntoRJ> but thats pretty unsafe
<twb> I usually use "busybox httpd" or thttpd
<LordKitsuna> bleh my live cd takes forever to boot, ill let you know what happens once i actually get to it
<PresuntoRJ> twb: I think he mean from another computer
<sd-d_> PresuntoRJ, was that for me ??
<twb> Well, both of those export the working tree via HTTP as the current user on a given port -- just like Python's built in HTTPd implementation
<PresuntoRJ> sd-d_: yes... try from the folder you want to browse $ python -m SimpleHTTPServer
<PresuntoRJ> twb: never tried with busybox... I'll give it a try
<twb> PresuntoRJ: unfortunately you'd need the larger busybox in Ubuntu -- the one they give you to boot with doesn't have the httpd applet :-(
<PresuntoRJ> sd-d_: they are both incredibly unsafe... do it for LAN access only... set a proper firewall rule (with ufw) for that
<twb> But "busybox httpd" works on any Debian box you're likely to run into :)
<PresuntoRJ> twb: cool
<PresuntoRJ> sd-d_: do you know how to use ufw to set up a firewall rule ?
<twb> sd-d_: why do you want to access the directory tree in a web server?  Maybe there's a better way, like rsync.
<LordKitsuna> server....you could use apache couldnt you? then put the files you want under the /var/www
<twb> LordKitsuna: yes
<LordKitsuna> i hardly know what i'm talking about so just ignore me if that's a stupid idea lol
<twb> Or really, you should use /srv/www or so -- /var/www is vestigial
<LordKitsuna> all i know is i run a gazzel website (well kinda im more or less a chooser and my friends do the coding bit) and i can host shit in the /var/www so i just thought id throw it out as an idea
<PresuntoRJ> LordKitsuna: those are all pretty stupid ideas, even mine, if you don't know what you are sharing... at least with apache you could set up a password (htpasswd), and share only your home files, and choose not to follow symlinks... and NEVER FORGET to set a firewall to prevent access from where you do not trust in the first place
<PresuntoRJ> LordKitsuna: but anyway, if its only temporary...
<LordKitsuna> PresuntoRJ, the file system shows as EXT3/4 (what?) and i cant figure out how to use fsck right lol ill keep trying tho
<felixhummel> hi! i accidentially did a ``chown -R foo: /``. Is there some (semi-)automated way to restore permissions to system defaults?
<LordKitsuna> oh wait
<LordKitsuna> hells yea
<LordKitsuna> there is a gui for fsck
<PresuntoRJ> twb: if its for the access per-se, why not keep it to ssh/scp ?
<PresuntoRJ> LordKitsuna: a couple of questions... GUI for fsck? cool... and ? GUI? are you running your server with a GUI ? really?
<PresuntoRJ> LordKitsuna: not wrong, just unusual
<LordKitsuna> PresuntoRJ, the check finished like instantly and said it was clean. also no, as i said i have an ubuntu desktop live cd with me, not server, so naturally it had a gui
<PresuntoRJ> LordKitsuna: not judging here... I do this at home too... just said it was unusual
<PresuntoRJ> LordKitsuna: clean for every partition (if more than one, of course)
<twb> PresuntoRJ: that, too.
<LordKitsuna> actually i know why it was so fast now, i did it via command line just to be sure and got the following error "permission denied while trying to open /dev/sda you must have read/write to the file system or be root
<twb> PresuntoRJ: but you can hand out rsyncd access to anonymous users, and it supports DAV-style listings.
<twb> It really depends what he wants to achieve
<LordKitsuna> and no my server itself has no GUI but i figured it would be easier to just use my ubuntu desktop live cd to fix it
<PresuntoRJ> LordKitsuna: try sudoing it first... sudo fsck /dev/partition
<twb> LordKitsuna: what is this alleged "gui fsck" called?
<PresuntoRJ> LordKitsuna: or gksudo if its a GUI application
<twb> PresuntoRJ: you don't need to use gksudo to launch a GUI -- the difference between gksudo and sudo is how it asks for a password.
<PresuntoRJ> twb: its the system/admin disk utility
<PresuntoRJ> twb: or whatever the name in english... it has a little drive icon (in pt-BR: Utilitário de Unidades)
<PresuntoRJ> twb: you can mount, umount, check, format, etc
<PresuntoRJ> twb: I think even partition, though I prefer and trust gparted for that
<twb> If you open up a terminal and run xprop, then click on the window, what is the WM_CLASS ?
<twb> (That's usually the "real" app's name)
<PresuntoRJ> twb: I use gksudo for GUI basically cause I would tell him to run via Alt+F2 also... ;)
<LordKitsuna> PresuntoRJ, and when i use sudo fsck i get "device or resource busy while trying to open /dev/sda filesystem mounted or opened exclusively by another program?
<PresuntoRJ> LordKitsuna: umount it first, from the livecd
<LordKitsuna> i did its not mounted
<PresuntoRJ> LordKitsuna: never try to fix a drive/partition while in use (mounted)
<PresuntoRJ> LordKitsuna: weird
<PresuntoRJ> LordKitsuna: do you have it opened with gparted?
<PresuntoRJ> LordKitsuna: try $ sudo lsof -n | fgrep sda1
<LordKitsuna> PresuntoRJ, this might sound really stupid but how do you make the | ? (i copied that from your message) i cant find it on my keyboard
<LordKitsuna> PresuntoRJ, oh wait nvm i found it, it had a break in it on my keys so it looked like something else
<LordKitsuna> PresuntoRJ, lsof: WARNING: can't stat() tmpfs file system /cow, output information may be incomplete. lsof: WARNING: can't stat() fuse.gvfs-fuse-daemon file system /home/ubuntu/.gvfs, output information may be incomplete
<PresuntoRJ> LordKitsuna: no worry... the WARNINGs are for inaccessible folders (usually there)
<PresuntoRJ> LordKitsuna: if there is no hit for sda1 (guessing your partition is /dev/sda1 , of course) I am not sure what could be happening to prevent the fsck
<LordKitsuna> PresuntoRJ,  well i tried sda then sda1 and 2 just to be sure and those are the errors i got for all of them
<PresuntoRJ> LordKitsuna: are you sure that is the name of the HDD on your system?
<PresuntoRJ> LordKitsuna: it should have worked if it is correct and unmounted
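The sequence PresuntoRJ is walking LordKitsuna through, collected in one place; this assumes the partition really is /dev/sda1, and it has to run from the live CD since a mounted filesystem must never be checked:

```shell
# run from the live CD; never fsck a mounted filesystem
sudo umount /dev/sda1      # fails harmlessly if it is already unmounted
sudo lsof -n | grep sda1   # anything still holding the device shows up here
sudo fsck -f /dev/sda1     # -f forces a full check even if marked clean
```

If lsof prints nothing and fsck still reports the device busy, double-check the device name against `sudo fdisk -l`.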
<Psi-Jack> For things that are managed by upstart, for example, avahi-daemon, what is "good practice" to do to not have that service started/stopped except manually, rather than automatically?
<Psi-Jack> I mean, upstart's nice and all, but there's no actual management system to manage it like there are SysV/LSB style init scripts.
<PresuntoRJ> Psi-Jack: what do you mean?
<LordKitsuna> PresuntoRJ, yea its just shown as "/dev/sda" in the mounting manager and it shows as unmounted
<Psi-Jack> PresuntoRJ: like, to stop a LSB script from starting during init, you'd update-rc.d -f servicename remove
<Psi-Jack> I want to stop avahi-daemon from starting up on its own.
<PresuntoRJ> Psi-Jack: i think they have more or less the same goal... where upstart is an attempt to rewrite init.d in a kind of better way... not sure how... there is literature for that on the ubuntu site
<PresuntoRJ> !upstart
<ubottu> Upstart is meant to replace the old Sys V Init system with an event-driven init model.  For more information please see: http://upstart.ubuntu.com/
<Psi-Jack> Yeah, not really covering my topic, actually.
<Psi-Jack> Heh
<PresuntoRJ> LordKitsuna: but there are partitons under it? sda1 or sda2 ?
<Psi-Jack> The only way I can figure to stop a service from auto-starting is to remove its event states.
<PresuntoRJ> LordKitsuna: which is root? which is boot? which is home?
<Psi-Jack> As-in, editing /etc/init/avahi-daemon, and commenting out the start line.
<PresuntoRJ> Psi-Jack: you could try sudo chmod 644 the script file
<Psi-Jack> It's not a script file.
<Psi-Jack> it's a .conf file for upstart.
<PresuntoRJ> Psi-Jack: or update-rc.d --remove it
<Psi-Jack> Again
<Psi-Jack> Not an LSB script.
<PresuntoRJ> :) sorry
<Psi-Jack> upstart's not handled by rc.d's, it's handled by upstart.
<Psi-Jack> Which is why I'm asking the question I'm asking. :)
<LordKitsuna> PresuntoRJ, im not 100% sure, i thought i had it as default (it put /home and stuff in separate partitions) but it looks to all be one
<LordKitsuna> PresuntoRJ, i did a few installs of it so its hard to remember what it ended up with in the end
<PresuntoRJ> Psi-Jack: but upstart still run the init scripts... just under upstart now
<LordKitsuna> PresuntoRJ, i checked with the disk utility its all one partition
<LordKitsuna> PresuntoRJ, or at least thats what it says
<Psi-Jack> There's a big difference between /etc/init.d/ and /etc/init/
<PresuntoRJ> LordKitsuna: you could try a bit of a labor here
<PresuntoRJ> create a /mnt/sda1 folder
<PresuntoRJ> then a /mnt/sda2
<PresuntoRJ> and so on
<PresuntoRJ> mount them there , manually
<PresuntoRJ> browse for a while until you figure out who is what
<PresuntoRJ> you know what goes in your /home, dont you?
<PresuntoRJ> Psi-Jack: have you looked in /etc/default
<PresuntoRJ> Psi-Jack: http://upstart.ubuntu.com/misc/upstart.pdf
<Psi-Jack> PresuntoRJ: For? There's nothing in /etc/default/ for avahi-daemon
<LordKitsuna> ill need to have my friend do all this
<LordKitsuna> hes the one who knows what hes doing
<LordKitsuna> ill just end up breaking it more
<LordKitsuna> thanks for the help
<Psi-Jack> And that PDF is so old it's not even useful, nor are its contents. LOL
<Psi-Jack> upstart's design ideas started in 2006, when this document was made. heh
<soren> jdstrand: I know we've briefly chatted about it a number of times now, but we've never really gotten anywhere... ufw and libvirt's use of iptables... How can we make them better friends?
<PresuntoRJ> Psi-Jack: sorry, never read it... :D just found it
<Psi-Jack> heh
<soren> jdstrand: I don't know if we've discussed this particular approach before, but how about if ufw had a concept of a "transient rule", i.e. a rule that gets added through ufw, but doesn't persist across reboots.
<Psi-Jack> PresuntoRJ: Well, no offense, but it's obvious you know pretty much nothing about upstart, so you really can't help. ;)
<PresuntoRJ> Psi-Jack: no offense at all
<PresuntoRJ> Psi-Jack: we try to learn as well as we try to teach and help
<Psi-Jack> Like I said, it /seems/ the only way to get, for example, avahi-daemon to not run at startup is to comment out the start rule it has, so it has no start event to trigger its startup.
<PresuntoRJ> Psi-Jack: if upstart was not kind of obscure, it would be obvious for you too
<Psi-Jack> upstart is by far, an incomplete replacement to LSB init scripts.
<PresuntoRJ> Psi-Jack: worst case scenario, at /etc/rc.local you could call for a service avahi-daemon stop
<PresuntoRJ> ugly !
<Psi-Jack> I mean, it doesn't even have anything even close to error tracking/handling needed for things such as CRM management of services.
<PresuntoRJ> Psi-Jack: I believe that was not the intention... it does asynchronous and inter-dependent starts pretty well...
<Psi-Jack> Yeah. It works well, within its limitations.
<PresuntoRJ> Psi-Jack: and it might do exactly what you asked for, I just don't know how to help you my self
<Psi-Jack> But, its limitations outweigh its actual usefulness, which is its bad side. ;)
<Psi-Jack> PresuntoRJ: I really don't think it does, actually.
<Psi-Jack> No matter what, I have to hack up the .conf file itself to make it stop, there's obviously no other way.
<PresuntoRJ> Psi-Jack: the most obvious trick I could think of would be move the avahi-daemon.conf to avahi-daemon.conf.disabled ...
<Psi-Jack> I have basically two choices. Hack the avahi-daemon.conf and comment out the start rules. Alternatively, add a /etc/default/avahi-daemon file with START=false and hack the avahi-daemon.conf to check for it during the pre-start and make it exit if START != true
<Psi-Jack> PresuntoRJ: Which disables it completely from even manually starting if I wanted to.
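Psi-Jack's second option above can be sketched concretely; the file names come straight from the discussion, but the exact stanza wording here is illustrative, not the shipped Ubuntu configuration (upstart's documented idiom for aborting a job from pre-start is `stop; exit 0`):

```
# /etc/default/avahi-daemon  (new file)
START=false

# added to /etc/init/avahi-daemon.conf -- illustrative pre-start guard
pre-start script
    [ -f /etc/default/avahi-daemon ] && . /etc/default/avahi-daemon
    if [ "$START" != "true" ]; then
        stop
        exit 0
    fi
end script
```

Unlike renaming the .conf away, this leaves `start avahi-daemon` working for manual starts while skipping the boot-time event.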
<PresuntoRJ> Psi-Jack: and from what I have found now, update-rc.d also works to disable most upstart scripts
<Psi-Jack> No, it doesn't.
<Psi-Jack> It has absolutely no effect, actually, because upstart doesn't care a spit about what's in /etc/rc#.d
<PresuntoRJ> Psi-Jack: good to know
<Psi-Jack> All the /etc/init.d/ entries for upstart-controlled services are just symlinks to upstart's control interface.
<Psi-Jack> That's it. ;)
<Psi-Jack> Ubuntu 10.04 doesn't even use the old init system at all during boot up.
<Psi-Jack> It's actually upstart itself that handles starting the /etc/init.d LSB scripts, now.
<Psi-Jack> Heh
<PresuntoRJ> Psi-Jack: lol
<Psi-Jack> Even upstart's faq is out of date now, too, because it says to use initctl reload to reload the upstart configuration, but that fails because it's missing a job name.
<Psi-Jack> It's actually initctl reload-configuration now.
<Psi-Jack> I dunno..  At this point, I'm thinking upstart is a dead-end project of Canonical's. It had great ideas, just not implemented well or fully, after 4, gaining quickly on 5 years now.
<PresuntoRJ> Psi-Jack: have you read about initctl ?
<PresuntoRJ> initctl list
<PresuntoRJ> etc
<Psi-Jack> PresuntoRJ: Yep.
<Psi-Jack> Like I said, the actual docs on upstart.ubuntu.com are out of date, already.
<PresuntoRJ> Psi-Jack: it is sad
<Psi-Jack> I'm wondering how Fedora's systemd will end up being.
<Psi-Jack> They're moving to it next release I hear, since it's finally just about ready for actual production use. ;)
<Psi-Jack> Heck, it might even make it into RHEL6, but I'm not sure of that, yet.
<Psi-Jack> I remember Solaris's SMF init system. Amazing stuff, great ideas, well done, even.
<Psi-Jack> upstart was ideally going to be similar, but better, but... It fails. ;)
<PresuntoRJ> Psi-Jack: you should file a bug report on upstart... at least it would call their attention
<Psi-Jack> On what? The whole damned thing's still not even close to production quality. LOL
<Psi-Jack> hehehehehehe.
<Psi-Jack> There's even a brainstorm on how systemd is better than upstart and ubuntu should replace upstart with systemd. ;)
<Psi-Jack> And ironically, the votes are also in favor of it.
<twb> systemd *is* better than upstart
<twb> It's just not production-ready
<Psi-Jack> twb: Yeah. I've been noticing that.
<Psi-Jack> It /almost/ is.
<Psi-Jack> They were actually going to roll it out in f13, but it didn't quite make it.
<twb> personally of the three, I've been most impressed by Squeeze's startpar
<Psi-Jack> startpar? hmmm
<Psi-Jack> Never heard of startpar
<twb> Since 1) it works; and 2) it's backwards-compatible; and 3) the speed gain is on the same order as upstart or systemd
<twb> Psi-Jack: As of Squeeze, Debian defaults to reordering sysvinit jobs based on LSB headers, and running them in parallel.
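The LSB headers twb is referring to are comment blocks like the following at the top of each init script (the service name here is a placeholder); insserv/startpar read them to compute dependency ordering and run independent scripts in parallel:

```
### BEGIN INIT INFO
# Provides:          mydaemon
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example daemon started at boot
### END INIT INFO
```

A script with a wrong or missing header gets mis-ordered, which is why Psi-Jack's "broken LSB scripts" complaint matters more once parallel boot is the default.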
<PresuntoRJ> Psi-Jack: I have found a "start on never" condition you should try on the upstart avahi-daemon.conf file
<PresuntoRJ> Psi-Jack: it should not start up on boot, and still be available for manual instructions
<Psi-Jack> PresuntoRJ: Which would be the same as not giving it a start rule at all, simply commenting it out. LOL
<PresuntoRJ> Psi-Jack: :-p
<Psi-Jack> heh
<twb> I'm also *really* not sure about putting init in the hands of the kinds of people that get off on dbus.
<PresuntoRJ> Psi-Jack: I still do many init.d scripts my self...
<PresuntoRJ> Psi-Jack: that's why I never got to understand the upstart model
<Psi-Jack> I've done both, for many years.
<Psi-Jack> Well, upstart, only a few months.
<Psi-Jack> But, yeah, I go in constantly having to fix broken "LSB" scripts because they're definitely and obviously NOT LSB.
<Psi-Jack> Every time I see "set -e" in an init script, I cringe.
<twb> Psi-Jack: pretty much every LSB header in Debian should be fixed now
<twb> Since it's a requirement for that startpar stuff I was talking about
<Psi-Jack> That is absolutely, positively, ALWAYS the /worst/ thing you could possibly do in an LSB init script: set -e
<PresuntoRJ> twb: is this startpar also event driven ? does it respawn dead daemons?
<twb> PresuntoRJ: no
<Psi-Jack> Yeah, startpar fail then.
<twb> PresuntoRJ: it's a conservative/incremental improvement
<Psi-Jack> SMF, upstart, and systemd, all have the advantage of watchdogging each service so if it stops, it can respawn it.
<PresuntoRJ> inittab also did it quite well a looong time ago
<twb> Whereas upstart and systemd are more like "works as long as you don't do anything interesting"
<Psi-Jack> I will say this though.
<Psi-Jack> upstart IS still better than djb's daemontools. THAT was pure junk.
<PresuntoRJ> twb: pls, don't try to customize our scripts, they were meant for out of the box only !
<PresuntoRJ> :-/
<twb> PresuntoRJ: by "interesting" I mean things like booting / and /home off NFS
<Psi-Jack> Ahhhhh..
<twb> You know, like unix has been doing since the 80s
<PresuntoRJ> twb: ldap? anyone?
<Psi-Jack> systemd is targeting F14. Which should be ...
<PresuntoRJ> twb: I really miss CDE from time to time
<twb> It completely and totally fails to work in lucid due to cyclic dependencies and race conditions in upstart jobs
<twb> Look, I'll show you my workaround...
<Psi-Jack> Just around the corner now.
<PresuntoRJ> let's all go back to system 4.4 (pre BSD, pls)
<Psi-Jack> twb: heh, ouch! Yeah. I can see that.
<twb> printf %s\\n >/etc/init/mountall-net.conf 'description "Mount network filesystems"' "start on startup" "script" "sleep 2;while :;do pkill -USR1 mountall||:;sleep 0.1;done" "end script"
<Psi-Jack> Yuck!
<twb> Yes, that's right, I just ignore events and have it try to mount any not-yet-mounted network filesystems every tenth of a second, FOREVER
<Psi-Jack> Yikes.
<twb> mountall(8) is a half-assed kludge because upstart forgot to solve mounting filesystems
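twb's printf one-liner above writes out the following /etc/init/mountall-net.conf, shown here reformatted for readability (same content, one stanza per line):

```
description "Mount network filesystems"
start on startup
script
    sleep 2; while :; do pkill -USR1 mountall || :; sleep 0.1; done
end script
```

SIGUSR1 is mountall's "retry network mounts" signal, so this literally re-prods mountall every tenth of a second instead of waiting for the events that race.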
<PresuntoRJ> twb: let the io wars begin !
<Psi-Jack> Again, AND anew!
 * X-2 grabs his old Commodore 64
<Psi-Jack> yeah, it seems systemd took all the good ideas from SMF. ;)
<Psi-Jack> In fact, systemd uses ideas from both SMF and launchd, which is fantastic.
<Psi-Jack> That's /exactly/ what I've been waiting for for decades.
<Psi-Jack> heh
<Psi-Jack> yeah, I'm gonna try the alpha version of fedora 14, since it uses systemd. Wanna see it in action for myself.
<alien1> Hi there
<alien1> I need help with a bind server on ubuntu. I configured my DNS server and tested it from another server... when I use the command dig mail.domain it doesn't work, but dig domain works.
<alien1> I badly need mail.domain to be resolved
<jdstrand> soren: hey. I've not really had time to add ufw support to libvirt. I kinda figured that I needed to add FORWARD (and these days maybe ebtables too) support to ufw first. that said, all the necessary chains should already exist in ufw (ie, even though ufw doesn't manage the forward chains via the cli, the chains are there)
<jdstrand> soren: so transient rules would be quite easy
<jdstrand> soren: well, though I don't do anything with POSTROUTING-- strictly FORWARD
<soren> Right.
<soren> jdstrand: Have you looked at the nwfilter stuff at all?
<jdstrand> (currently)
<jdstrand> soren: no
<soren> jdstrand: Ok.
<jdstrand> soren: reading about it now, it seems their use of chains and subchains is quite compatible with ufw
<jdstrand> soren: in that libvirt and ufw should stay out of each other's way currently, and that adding support to ufw wouldn't be horribly difficult (though, we'd need some new infrastructure)
<soren> jdstrand: Yeah, not maverick material, clealy.
<soren> clearly, even.
<soren> jdstrand: It sounds really neat. I'm trying to get it working right now.
<jdstrand> cool
<soren> jdstrand: Oh, I never got around to asking you about this... Now that we run kvm guests as libvirt-qemu:kvm, what about disk image ownership? Does it get mangled or does libvirtd fiddle with ACL's or something?
<jdstrand> soren: it gets mangled. but it did before too-- just to root:root
<soren> ?!? What, really?
<soren> Why would it do that?
<jdstrand> soren: yes-- 0.8.3 does this sort of thing automatically
<soren> Oh, did it drop CAP_DAC_OVERRIDE ?
<jdstrand> I didn't look at the implementation, I saw the results
 * soren still finds that *incredibly* offensive.
<jdstrand> yeah, you are not the only one
<jdstrand> and there is no way to disable it that I know of, cause it happens in the DAC security driver, which the other security drivers stack on top of
<jdstrand> soren: under some circumstances, it will put the files back after it is done with them
<jdstrand> soren: that is not true of disk images, but is true of things like a usb key. unfortunately, having it use your actual cdrom device (eg /dev/sr0) will change that until your next reboot (when udev puts it back to what it is supposed to be)
<soren> jdstrand: dynamic_ownership in qemu.conf seems to be handy.
<soren> # Whether libvirt should dynamically change file ownership
<soren> # to match the configured user/group above. Defaults to 1.
<soren> # Set to 0 to disable file ownership changes.
<soren> #dynamic_ownership = 1
<jdstrand> soren: oh, I did not see that
<jdstrand> soren: nice. though, we can't turn that to '0' by default without breaking *a lot*
<soren> I only just found it now by tracing back through the DAC override code.
<jdstrand> soren: but it is good to know it is there
<soren> jdstrand: Yeah. Sadly.
<jdstrand> mdeslaur: ^
<soren> Do we have /any/ idea why it's not based on acl?
<jdstrand> soren: I would imagine for maximum portability
<jdstrand> one would think it could try acl first then fall back
<soren> It wouldn't be hard (nor unusual for libvirt) to attempt fancy new things, and if it fails, fall back to old, crappy things.
<jdstrand> but I've not looked at it
<jdstrand> yeah
<jdstrand> it's done things like that with qemu for forever
<mdeslaur> yeah, +1 for acls...the chowns it does sucks
<soren> :( someone dropped my favourite virt-viewer patch.
<mdeslaur> soren: what was it?
<soren> mdeslaur: The one that let me not have to put "-c qemu:///system" on its command line every single time.
<mdeslaur> soren: :(
 * soren headdesks
<soren> Oh, well.
 * soren fixes libvirt
<mdeslaur> soren: please add it back
<soren> mdeslaur: I'll fix it properly this time.
<soren> How do I shed one of group memberships?
<soren> I mean, just for a single process, until it terminates.
<soren> From the commandline.
<harrisonk> is there a log of everyone that logs in to a server?
<Pici> harrisonk: /var/log/auth.log
<harrisonk> thanks
<kaushal> hi
<kaushal> I have a weird issue of ubuntu server getting freezed, I do not see anything in dmesg, kern.log and syslog, daemon.log
<kaushal> is there a way to investigate it further ?
<kaushal> I have enabled mcelog too
<kaushal> Please suggest/guide
<kaushal> I am running Ubuntu Server Hardy 8.04
<harrisonk> help.ubuntu.com
<harrisonk> then click on the server guide at the bottom
<harrisonk> https://help.ubuntu.com/8.04/serverguide/C/index.html
<harrisonk> sorry I thought you needed a guide book
<kaushal> hi
<kaushal> can some one please guide me about setting up NameBased Virtual Host in Apache ?
<kaushal> I have three Doc Root how do i access it on the browser ?
<kaushal> so ServerName would be in this case would be ?
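For kaushal's three-DocumentRoot question, a name-based setup on Apache 2.2 looks roughly like this; every domain name and path below is a placeholder, and Apache picks the vhost by matching the browser's Host header against ServerName:

```
# /etc/apache2/sites-available/site1  (placeholders throughout)
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName site1.example.com
    DocumentRoot /var/www/site1
</VirtualHost>

<VirtualHost *:80>
    ServerName site2.example.com
    DocumentRoot /var/www/site2
</VirtualHost>
```

Enable with `a2ensite` and reload apache2; each ServerName must be a hostname that resolves to the server, which is what you then type in the browser.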
<MTecknology> What would be the best way to secure a USB drive but make it not dependent on that system?
<MTecknology> Normally I use truecrypt but since reading their license I'd very much prefer not doing that
<sherr> MTecknology: license issue with Truecrypt - what in particular?
<MTecknology> sherr: they call it 'open' but it's very very VERY restricted
 * patdk-wk wonders what the definition of, not dependent on that system, is
 * patdk-wk finds luks to not be dependent :)
<patdk-wk> did you mean cross os compatability?
<MTecknology> How hard is luks to use?
<patdk-wk> luks is easy
<MTecknology> any wiki page for that?
<patdk-wk> in the gui, ubuntu autodetects and uses them no issues
<MTecknology> I don't like gui so I'd get to use cli - but that's pretty cool
<patdk-wk> I do most of mine with cli, just takes 2 or 3 steps then
<patdk-wk> unless you use crypttab to make it easier
<patdk-wk> something like this: https://help.ubuntu.com/community/EncryptedFilesystemHowto3
<patdk-wk> cryptsetup luksOpen, mount; unmount, cryptsetup luksClose
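The "2 or 3 steps" patdk-wk mentions, spelled out as a sketch; the device name /dev/sdb1 and the mapping name `backup` are examples, and luksFormat destroys whatever is on the partition:

```shell
# one-time setup: encrypt the partition (DESTROYS existing data on it)
sudo cryptsetup luksFormat /dev/sdb1

# each use: unlock, mount ... work ... unmount, lock
sudo cryptsetup luksOpen /dev/sdb1 backup   # creates /dev/mapper/backup
sudo mkfs.ext3 /dev/mapper/backup           # first time only
sudo mount /dev/mapper/backup /mnt
sudo umount /mnt
sudo cryptsetup luksClose backup
```

An /etc/crypttab entry, as patdk-wk notes, automates the luksOpen step; the desktop GUI does the same prompt-and-mount dance automatically.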
<MTecknology> yuppers.. that's exactly what I was looking for
<patdk-wk> in the gui, it just shows the drive, click on it, it asks for password, it then opens it, and mounts it
<patdk-wk> I haven't played with using random offsets for the luks header yet
<MTecknology> I only ever used LUKS from the alternate installer and from gentoo - one easy and one sucked pretty hard :P
<MTecknology> This looks incredibly easy
<patdk-wk> my harddrive is full encrypted using luks
<patdk-wk> so is my home server
<patdk-wk> too many warranty drive replacements, keeping the data encrypted simplifies that
<xmaxmex> hey patdk-wk.... /boot too...encrypted ?
<MTecknology> I'll probably do about 300GB for backup and 200GB for NTFS
<patdk-wk> xmaxmex, nope
<MTecknology> afaik, you can't encrypt /boot - only keep it on something external
<MTecknology> has that changed?
<patdk-wk> if grub supported luks, it would work
<MTecknology> you mean grub on the mbr could load a luks volume?
<patdk-wk> that would be the idea
<MTecknology> I didn't know it could do that :P
<patdk-wk> it can't :)
 * patdk-wk notes the keyword, if
<MTecknology> oh..
<MTecknology> I'm actually testing the drive with badblocks
<MTecknology> not sure if I feel like letting it finish though
<patdk-wk> if it's an ssd, run it a few more times :)
<MTecknology> I'm not a big fan of SSD (yet)
<MTecknology> It's a 500GB external seagate
<patdk-wk> I'm loving my ssd
<patdk-wk> laptop slowing it down though
<patdk-wk> my tests where getting 265MB/s throughput
<patdk-wk> laptop only has sata1, so max 140MB/s
<patdk-wk> encryption penalty, and it gets about 80MB/s
<MTecknology> My only experience has been with the first netbooks
<MTecknology> Not sure if I want to finish with badblocks..
<patdk-wk> destructive test?
<MTecknology> badblocks -c 10240 -s -w -t random -v /dev/sdb
<soren> mdeslaur: \o/
<mdeslaur> soren: what's up? :)
<patdk-wk> heh, let it finish
<soren> mdeslaur: https://www.redhat.com/archives/libvir-list/2010-September/msg00043.html
<MTecknology> patdk-wk: I'll consider my drive super clean after this :D
<soren> mdeslaur: Didn't mean to leave you hanging there, I just had to wait for it to hit the ml archive.
<soren> mdeslaur: That fixes virsh, virt-viewer, and virt-manager in one go.
<mdeslaur> soren: oh, cool! :P
<soren> mdeslaur: In theory, at least. :)
<mdeslaur> hhe
<soren> I wonder why I didn't do it that way to begin with.
<soren> Oh right, because libvirt always went to xen by default.
<soren> meh. Brave new world.
<MTecknology> patdk-wk: yay! it passed 5% :P
<MTecknology> patdk-wk: 15%... this is getting old :P
<ScottK> SpamapS: I think you're premature to declare victory on gems.  I still object to the fact that gems can silently replace system binaries in your proposal.
<uvirtbot> New bug: #629524 in squid (main) "db_auth missing in ubuntu packages" [Undecided,New] https://launchpad.net/bugs/629524
<patdk-wk> MTecknology, yes, but it's a good thing :)
<MTecknology> patdk-wk: ya- but there was nothing private on it :P
<hallyn> you know, i'm trying out 'cache='none'' in virt-manager right now, and am pretty sure this is way slower than it was before
<patdk-wk> hmm, that isn't going destroy any data
<patdk-wk> it reads the drive, then writes random stuff, tests it, then writes the original stuff back
<patdk-wk> oh wait, heh never mind that
<hallyn> oh, hm, maybe it wasn't its fault :)
<zul> smoser: soren wants to know more about the grub boot floppies for UEC
<soren> smoser: So... zul says something about floppies and uec and kernels.. What's that all about?
<smoser> :)
<smoser> ok.
<smoser> so, eucalyptus and ec2 run things with a kernel
<soren> Right.
<smoser> in euca, and kvm, that means '-kernel <thing.here>'
<soren> yup
<smoser> i wanted to duplicate functionality of ec2's pv-grub solution (they use grub 0.97 to read /boot/menu.lst and register the pv-grub as a kernel, and it loads kernels and ramdisks)
<soren> Oh, I didn't know they added that.
<soren> Cool.
<smoser> at first i thought i could just give "kernels" that were grub multiboot images.
<smoser> as kvm can load a grub multiboot image
<soren> Nah, that'd be by luck, I think.
<smoser> well, that doesn't really work, as when kvm does load a multiboot image, that multiboot image doesn't see biosdisks of type scsi
<smoser> (it does work for virtio, but isn't promised to, and actually fails on reboot)
<smoser> anthony's suggestion was to create a boot floppy
<soren> With grub on it?
<smoser> so, what i do is let the user register a multiboot compliant image as a kernel
<soren> First-stage loader?
<soren> Oh, ok.
<smoser> and i just promise to load that
<Deep6> guys is there a way to get xen working on lucid without having to resort to hackish stuff?
<smoser> i do that now by creating a grub floppy that multiboot loads the thing that they gave me.
<soren> smoser: "they"?
<Deep6> my box doesn't have HW vm support
<smoser> they, as in registered kernel.
<soren> smoser: Oh, "the users".
<smoser> right
<soren> smoser: Gotcha.
<soren> smoser: Thought you meant eucalyptus or kvm or whatnot.
<smoser> right.
<smoser> (getting a link to patch for eucalyptus)
<soren> Ok, so how is this strung together? You ship the floppy image or do you generate it?
<Deep6> tap tap this thing on? anyone read me or am I still in some sort of limbo?
<smoser> its kind of hacky how i do it.
<soren> Deep6: We hear you.
<smoser> i generate it on the node.
<Deep6> soren ok :)
<Deep6> wasn't sure as I've not used this irc client before
<soren> smoser: Ok... and then how do you determine that this is what you want to use, rather than the regular kernel/ramdisk combo?
<smoser> if its a multiboot image it takes the floppy path
<soren> Oh, that's easily detectable?
<smoser> http://bazaar.launchpad.net/~ubuntu-virt/ubuntu/maverick/eucalyptus/2.0/annotate/head%3A/debian/patches/22-uec-multiboot-kvm.patch
<smoser> that shows how.
<smoser> basically 3 uint32 fields in the first 8192 bytes .
<smoser> one is a signature byte
<smoser> then the 3 sum to uint32 0
<smoser> so, there is obviously a chance for a false positive.
<smoser> but its the same logic that kvm uses
<smoser> so if i hit false positive, kvm would have anyway
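smoser's detection logic can be sketched in shell; the structure (three uint32 fields in the first 8192 bytes, one a signature, the three summing to 0 mod 2^32) is straight from his description, the magic value 0x1BADB002 is the multiboot spec's header magic, and the script itself is illustrative rather than the actual eucalyptus patch:

```shell
# is_multiboot IMAGE -- succeed if IMAGE carries a multiboot header.
# Scans the first 8192 bytes as unsigned 32-bit words for the magic
# (0x1BADB002) followed by flags and a checksum summing to 0 mod 2^32.
is_multiboot() {
    od -An -tu4 -N8192 -v "$1" | tr -s ' ' '\n' | awk '
        NF { w[n++] = $1 }
        END {
            magic = 464367618            # 0x1BADB002
            for (i = 0; i + 2 < n; i++)
                if (w[i] == magic && (w[i] + w[i+1] + w[i+2]) % 4294967296 == 0)
                    exit 0
            exit 1
        }'
}
```

Any random image whose aligned words happen to satisfy this is a false positive, which is exactly the risk smoser accepts since kvm applies the same check.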
<Deep6> can anyone recommend a page for getting xen to work on lucid?
<smoser> mk-mb-loader is what makes the floppy disk.  that gets called by gen_kvm_libvirt_xml
<Deep6> I've been googling about but nothing solid
<Deep6> seems kind of broken :(
<soren> Deep6: It very likely is.
<Deep6> soren...that's disappointing :(
<soren> Deep6: That's Xen for you.
<Deep6> well I'd have to point the fault at Ubuntu this time
<Deep6> broken packages
<soren> smoser: So... the goal of all of this is to make it so that people can put their own kernels in the filesystem.
<Deep6> looks to be missing a xen kernel image entirely
<soren> smoser: ...and then eucalyptus will use that.
<smoser> soren, well, yes.  putting kernels inside a filesystem is a pretty common practice :)
<Daviey> zul, Would you be the best person to chime in with Deep6 ?
<soren> Deep6: We don't support Xen dom0 and haven't for a long time
<soren> smoser: ...but having it work on EC2 is not :)
<soren> smoser: Well, now it's becoming so, but up until recently.
<smoser> right, and then genkvm_libvirt_xml writes xml that adds a floppy to 'loader' if it found a multiboot. otherwise, it writes 'kernel' to 'kernel'
<Daviey> Deep6, I haven't touched xen since hardy :(
<zul> Deep6: get the xen source from xenbits.xen.org and build it from source
<smoser> soren, yeah, so it is a new feature, but i personally think its a *huge* feature.
<soren> smoser: I'm probably being really dense here..
<smoser> dense on what ?
<soren> smoser: I'm getting to that :)
<patdk-wk> Deep6, xen was dropped after 8.04
<soren> smoser: So... On EC2, how does the user tell the system that he wants to use the kernel on the filesystem and not one provided by Amazon (or someone else).
<Daviey> Deep6, If it's not working for you with the packages, please do raise a bug
<patdk-wk> so unless you do it yourself, from scratch, it isn't going work
<SpamapS> ScottK: hmm, I hope it didn't come off as victory. Its just consensus and a move in the right direction by the maintainer.
<soren> smoser: Is there a magic AKI?
<Deep6> Daviey there appears to already be a bug opened
<smoser> yes, amazon has 2 magic aki's per region
<soren> smoser: Oh, and I do agree this is a huge feature, by the way. No doubt.
<soren> smoser: I figured as much.
<Deep6> zul: I'm not wanting to invest that much time to be candid :(
<smoser> http://bazaar.launchpad.net/~ubuntu-on-ec2/ubuntu-on-ec2/ec2-publishing-scripts/annotate/head%3A/kernels-pv-grub-hd0-V1.01.txt
<Deep6> I'll just likely grab debian instead as my dom0
<smoser> that is the akis
<soren> smoser: And for people to use this on UEC, they shove a special kernel into their installation, and use the AKI corresponding to that to make this work.
<SpamapS> ScottK: that said, only Daigo has stated in the bug report that he is concerned with binaries going into the path. An overwhelming group of people are quite vocal that they want those binaries in the path by default.
<soren> smoser: ...and then eucalyptus detects this special AKI and uses the boot floppy to boot.
<soren> smoser: Is that about right?
<smoser> no
<smoser> oh. well, maybe
<smoser> on eucalyptus, there is no special "AKI"
<smoser> (in my implementation)
<smoser> ie, not a special aki id
<smoser> if the kernel that is given to boot is a multiboot image, then it treats it specially.
<soren> I see, ok.
<smoser> i'm not sure whether or not amazon actually had to do something special with their akis or not.
<smoser> or, if anyone who could publish a kernel (ie me) could have just loaded a functional pv-grub loader as a kernel and magic would have happened
<Daviey> smoser, BTW... what are the steps to create a tarball of a lucid and maverick image with a ramdisk?
<smoser> yeah, silly openstack, and its insistence on ramdisk. what year is this ?
<soren> :)
<smoser> Daviey, extract tarball, mount image, copy /mnt/boot/initrd-*virtual* ./my-initrd
<soren> What happens if you just provide an empty ramdisk?
<soren> That should do the trick, really.
<smoser> then you can use publish-image for the kernel and ramdisk, or create a tarball with all those files in it.
<smoser> soren, yeah, i dont know if its possible. the kernel would just ignore it and try to go on with life.
 * Daviey waits for his eucalyptus cloud to finish installing before trying it.
<soren> Exactly.
<Deep6> Daviey: https://bugs.launchpad.net/ubuntu/+source/xen-meta/+bug/540110
<uvirtbot> Launchpad bug 540110 in xen-meta "ubuntu-xen-server has broken dependencies (dup-of: 538917)" [Undecided,New]
<uvirtbot> Launchpad bug 538917 in xen-tools "xen-tools is not available in lucid" [Undecided,New]
<ScottK> SpamapS: I just said it in the bug report too.
<smoser> soren, one other piece of info.
<MTecknology> patdk-wk: Can cryptsetup handle UUID?
<smoser> our uec-images come with a file named '-loader' which is a grub multiboot loader that basically just does "multiboot (hd0,1)/boot/grub/core.img"
<ScottK> SpamapS: I don't mind in the path, just not so it can replace system packages.
<MTecknology> This is the first time UUID would actually help me :P
<smoser> so then the guest just has to maintain /boot/grub/core.img and the loader will work.
<soren> smoser: Oh, ok, so it's not exactly like EC2.
<smoser> well its really close. we use grub2
<smoser> they use grub1
<smoser> i didn't want to maintain grub1 code
<smoser> more than i had to
<soren> smoser: i don't blame you :)
<smoser> http://bazaar.launchpad.net/~ubuntu-on-ec2/vmbuilder/automated-ec2-builds/annotate/head%3A/mk-uec-mb-loader
<smoser> that is what makes our loader file
<soren> I'm just curious, though...
<soren> If you don't support the exact same approach as..
<soren> Oh..
<soren> I know why :)
<soren> I was going to ask:
<soren> If you don't support the exact same approach as EC2 anyway, why not go all in and just skip the whole -kernel (and optionally -ramdisk) business and just boot directly (like most people use kvm).
<smoser> we have 2 loader management utilities in our images.  grub-pc (for uec) and grub-legacy-ec2 (which does not conflict with grub2)
<soren> ..but obviously, there's not mbr.
<soren> s/not/no/
<smoser> right. i'd have had to have eucalyptus write an mbr, and put a loader on it.
<smoser> and actually, my loader will do that.
<smoser> so, if you booted an instance, via core.img
<smoser> then removed core.img
<smoser> and installed any loader onto the guest's /dev/sda
<smoser> then rebooted
<smoser> the multiboot loader would not find the core.img and would chainload to the disk
<soren> Nova, by the way, is growing an option to let people upload raw disk images and specify a special AKI that will just boot the disk image directly.
<smoser> (this is not tested, though, but "should work")
<smoser> i did consider that path.
<smoser> and using something like 'aki-RAWDISK'
<smoser> but that was more intrusive.
<soren> It's very handy for people using hypervisors that don't let you pass kernels and ramdisks and such.
<soren> Like VirtualBox, for intsance.
<smoser> well, the floppy solution works there too
<soren> instance, even.
<soren> Maybe.
<soren> Well, probably.
<smoser> if you dont mind using hardware from 1990
<smoser> (the floppy)
<patdk-wk> MTecknology, I hope so
<soren> The rawdisk thing lets you boot other OS's, too, though.
<patdk-wk> crypt640b	UUID=9dffcad4-f051-4db9-9323-51cd74ba2681 /etc/keys/storage.key luks
<patdk-wk> crypt2a		UUID=6e5be471-b3e1-448f-8433-bb081cc4f7ef /etc/keys/storage.key luks
<patdk-wk> crypt400	UUID=4afe55a6-1610-4f07-b07e-8c73d700c1c1 /etc/keys/storage.key luks
<patdk-wk> crypt1b		UUID=e3fbc6b2-5877-4c97-846c-bd3532ec2c00 /etc/keys/storage2.key luks
<patdk-wk> crypt1a		UUID=a8fb5ac8-680a-4acb-8fd6-414cb871591e /etc/keys/storage2.key luks
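[For context: those pasted lines are /etc/crypttab entries, which is what answers MTecknology's UUID question. Each line is target name, source device (a UUID= spec works here), key file, then options — a minimal sketch reusing one of the pasted entries:]

```
# /etc/crypttab format: <target> <source device> <key file> <options>
# UUID= refers to the LUKS container's UUID (find it with blkid).
crypt640b  UUID=9dffcad4-f051-4db9-9323-51cd74ba2681  /etc/keys/storage.key  luks
```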
<soren> Who have never wanted to run OS/2 in the cloud? Come on?
<MTecknology> patdk-wk: I take that as a yes :P
<soren> patdk-wk: Please don't do that again.
<MTecknology> patdk-wk: thanks :D
<patdk-wk> heh, it's only 5 lines
<soren> patdk-wk: s/only //
<smoser> soren, so does the floppy
<smoser> (boot other oses)
<zul> soren: i still have that caldera bootdisk lying around
<smoser> you just provide a loader.
<soren> smoser: I clearly haven't grasped grub2. How'd you do that?
<smoser> uec images provide a linux specific loader. but anyone can provide a loader that loads windows.
<smoser> grub2 is crazy cool
<smoser> look at lines 53-77 or so at http://bazaar.launchpad.net/~ubuntu-on-ec2/vmbuilder/automated-ec2-builds/annotate/head%3A/mk-uec-mb-loader
<smoser> that is grub script
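[A hedged sketch of the kind of grub2 script being described — this is not the actual mk-uec-mb-loader contents, just the try-core.img-else-chainload behavior smoser outlines below:]

```
# grub2 script sketch: load the guest's core.img as a multiboot image
# if it exists, otherwise fall back to whatever loader is on the disk.
if [ -e (hd0,1)/boot/grub/core.img ]; then
    multiboot (hd0,1)/boot/grub/core.img
    boot
else
    # core.img removed: chainload the guest's own boot sector instead
    chainloader (hd0)+1
    boot
fi
```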
<soren> smoser: Ah, wicked.
<smoser> i'm guessing it would not be difficult to add identical functionality to openstack, to use a boot floppy if a multiboot image is found in the specified aki
<smoser> other than using a floppy, i think its really clean.
<smoser> the nice thing is, what i promise to the end user is that i can load a multiboot image.
<smoser> which is a documented standard, and one that grub isn't going to drop support for anytime soon.
<soren> Yeah. Very cool!
<soren> I notice there's code for virtio in one of the scripts you patch.
<smoser> yeah, euca 2.0 supports virtio root
<soren> How do you determine if the image in question will handle virtio disks nicely?
<smoser> you dont
<soren> It's globally configured?
<smoser> yeah
<soren> Thought so.
<smoser> yeah, its a mess of backwards compatibility
<soren> Yeah. There's a reason we didn't just tweak the libvirt xml ourselves back then.
<soren> But meh.
<smoser> they would have had to extend the ec2 api to allow per-image choice
<soren> Yup.
<smoser> well, its configurable.
<smoser> i had multiple conversations with multiple people on exactly what would break or was at least indeterminable when the admin turned on virtio root, virtio net, and/or virtio ebs volumes
<smoser> basically, you cannot keep backwards compat.
<smoser> but the scsi is a dead, unmaintained option.
<SpamapS> ScottK: the proposal has things going into /usr/local, where dpkg isn't allowed to put files.
<soren> smoser: Yup. It's teh suck.
<SpamapS> ScottK: so there is no replacement of packaged system files at all.
<soren> smoser: I'm looking at your code... Are the floppy and the loader tied to each other?
<smoser> no.
<smoser> well.
<soren> smoser: Then why must I pass the loader to the floppy generation thing?
<ScottK> SpamapS: Replacement not in the sense of replacing the file, but in the sense of superseding what gets run.
<smoser> because the floppy will load *any* multiboot compliant loader
<smoser> and the uec images come with *a* multiboot compliant loader
<smoser> (which is quite likely suitable for other linuxes with grub-pc)
<mathiaz> ScottK: how is that different from "make install", CPAN, python?
<smoser> i could have cut out one of the steps, and just had the user supply a floppy disk as an aki.
<ScottK> mathiaz: Python at least puts stuff in site/dist-packages where it's only in the path for Python.
<soren> smoser: Ok, it just seems odd that it needs it at build time, but I can use something completely different at runtime.
<ScottK> So it's radically different.
<mathiaz> ScottK: installing via python distutils could drop an new apt binary in /usr/local/bin
<soren> smoser: Possibly because I don't understand it very well yet.
<b0gatyr_> I need to install an MTA on my ubuntu box for only sending emails, what should I use?
<smoser> build time ?
<smoser> what build time
<ScottK> mathiaz: It could, but it's not the typical use case.
<smoser> i really should document this by the way.
<mathiaz> ScottK: so how is that different in the gem world?
<soren> smoser: mk-mb-loader
<soren> smoser: Builds the floppy image.
<smoser> i've wanted to, and this conversation is the best doc there is on it at the moment.
<smoser> right.
<smoser> oh
<ScottK> mathiaz: My understanding is in the gem world installing to /usr/local is the normal use case.
<smoser> wait
<soren> smoser: Oh, I thought that floppy image would be reused?
<smoser> no.
<smoser> the floppy that it outputs cannot be reused
<soren> smoser: Ok, then I get it. No worries.
<smoser> it is specific to that aki
<ScottK> And while gems are generally thought of as a developer tool, once you freeze the gems and distribute your app, then it's an end user problem too.
<mathiaz> ScottK: well - yes - user scripts go in /usr/local/bin/.
<mathiaz> ScottK: isn't that the same in the python world as well?
<soren> smoser: Gotcha. I'm with you now.
<smoser> i could have done a generic one, by using 2 floppies
<smoser> :)
<smoser> but c'mon, who ever had 2 floppy drives!
<mathiaz> ScottK: easy_install ends up in /usr/local/bin/.
<smoser> fwiw, the floppy could easily be made into a cdrom
<soren> smoser: I just somehow got the idea that it was only generated locally because it was simple to do so, people were likely to have the dependencies anyway, and it saved a bit of bandwidth.
<soren> smoser: Hey, I had two floppy drives for years.
<soren> smoser: And nothing else.
<soren> smoser: And one of them was 1.44MB!
<ScottK> mathiaz: But we patch easy_install to respect if the package is already installed via a Debian package and not replace it.
<ScottK> I'd be happy with that.
<smoser> i got to run. i will try to write this out somehow.
<smoser> but i think you get the general idea
<soren> smoser: Wicked. Thanks for clearing this up!
<smoser> and why the solution
<soren> smoser: Certainly.
<mathiaz> ScottK: right - that could be a useful improvement
<ScottK> mathiaz: Do that and I'm happy.
<mathiaz> ScottK: how does it relate to the /usr/local/bin issue?
<smoser> so far the only issue we've hit with it was that i added 'grub-pc' as a dependency to eucalyptus-nc.
<smoser> which forced grub-pc to be installed earlier in the install process, which caused installer issues.
<mathiaz> ScottK: what does easy_install actually do wrt debian packages?
<mathiaz> ScottK: does it check if there is already an executable of the same name provided by a debian package?
<mathiaz> ScottK: and if so easy_install refuses to proceed?
<ScottK> mathiaz: I don't recall the details and I'm in a meeting at the moment, but something like that.
<ScottK> Actually I think it considers itself to have successfully provided the requested package.
<SpamapS> I'm fairly certain the ruby devs of the world would *hate* that gems wouldn't let them get the newer ruby lib just because the debian version was installed.
<SpamapS> I've had many occasions with CPAN where I just want to replace one library in the chain of dependencies with the newest version.. but the others from deb/rpm/whatever are fine.
<SpamapS> But thats not really at issue. The current rubygems will happily let you replace an existing debian installed gem with a newer one.
<SpamapS> Its just that if it has binary scripts, they won't be in the path
<twb> Even if it lets you shoot yourself in the foot, it should still warn you
<SpamapS> which is, btw, awesome because the binary script that you have, may not be compatible with the one from the library.
<SpamapS> twb: ... ./configure doesn't warn you that you're putting stuff in /usr/local..why should gems?
<twb> You said "replace"
<twb> I assumed that meant clobbering files
<SpamapS> Nah
<twb> Anyway, it's nowhere near as bad as trying to mix cabal and debian Haskell packages
<SpamapS> just putting them in an earlier point in the ruby library path
<twb> Which is pretty much guaranteed to result in your compiles failing at link time due to it trying to statically link in multiple versions of a library
<SpamapS> doh
<twb> At least debian haskell packages are actually maintained now
<ScottK> SpamapS: I get that Ruby devs are insane, that doesn't mean we should be too.
<SpamapS> I have to agree with the poster who is concerned about how far Debian diverges from upstream.
<twb> upstream is always insane
<SpamapS> ScottK: whats next, patch autoconf to put things in /var/lib/C by default?
<ScottK> SpamapS: I'm not arguing the current situation is good.
<ScottK> SpamapS: I'd be OK if it would do something like fail and warn "gem X would supersede binaries provided by package Y.  If you want gem X, remove package Y or reinstall with -f."
<SpamapS> ScottK: protecting /usr/bin from /usr/local/bin overrides isn't really something the OS can or should do.
<ScottK> Not allowing third party non-native package managers to break the system, however, is.
<vmlintu> Speaking of ruby and gems - does anyone know how actively the gem debs are maintained? It seems like most ruby applications in the wild require something that is not packaged..
<SpamapS> Because right now, the bulk of the ruby world starts their system configuration on debian and ubuntu with  wget http://rubygems.org/rubygems.tar.gz && ... make make install and gets their gems *in /usr/bin*
<SpamapS> So by trying to protect these users, we've forced them into a much worse situation.
<SpamapS> vmlintu: Ruby is exploding way too fast for packagers to keep up.
<ScottK> OK.  So let's find a compromise that improves the situation.
<SpamapS> ScottK: If there was a hook in rubygems that could use update-alternatives, that might be better than just tossing things in /usr/local/bin from the debian package maintainer viewpoint. I am wary of the complexity of such a solution though.
<ScottK> SpamapS: That's was got reverted last time around.
<SpamapS> indeed
<vmlintu> Are there efforts underway to get any ruby applications and their dependencies packaged or are all the efforts doomed?
<ScottK> There are ruby packages in the archives, just not gem based ones.
<SpamapS> well at this point, gem install from ruby 1.9.2 will put things in /usr/local/bin directly
<SpamapS> I'd like to see a dh-make-gem created
<SpamapS> At least from that standpoint, it would lower the barrier to entry for making gems into debs
<ScottK> SpamapS: I don't understand why it is essential that gems have unfettered access to trample the namespace of every binary on a system?
<twb> vmlintu: there's at least one ruby app in ubuntu -- apt-listbugs :P
<SpamapS> ScottK: because thats what the authors intend it to have?
<ScottK> SpamapS: And the Debian package system intends it to have none.  So what's the middle ground.
<vmlintu> twb: puppet is also using ruby and rails
<SpamapS> ScottK: again, make and autoconf do the same thing... nobody's patching them to avoid /usr/local
<SpamapS> Quite a few users download unpackaged software and untar, ./configure, make && make install, and accept the problems with that.
<vmlintu> I wonder what happened to debgem.com as they managed to package a huge number of gems in some way..
<Madwill> is it possible that httpd.conf is an empty file ?
<twb> vmlintu: getting 90%-right packaging is something you can pretty much automate
<SpamapS> Its very similar to the common carrier problem ISP's have. They *could* stop child porn at the routers, but then they'd be *responsible* for how people use their network. We are not taking responsibility for users' actions at a level that IMO is inappropriate.
<twb> Madwill: check the .d directories
<SpamapS> s/are not/are now/
<SpamapS> huge DOH
<ScottK> SpamapS: If they replace our gems with theirs, then it's equally no longer our problem.
<Madwill> conf.d
<Madwill> only charset in there
<ScottK> So this kind of argument also works for keeping the status quo.
<Madwill> must be on the wrong folder
<Madwill> etc/apache2
<SpamapS> ScottK: right, so thats what most do, because they like Ubuntu enough to put up with that crap. But I have two personal friends who considered switching to CentOS because rubygems was so broken, they were tired of fixing it themselves.
<twb> Madwill: I don't run apache, so I can't help much more.  Have you checked what the Ubuntu Server Guide has to say about it?
<SpamapS> ScottK: and we were inundated with ruby sysadmins and devs at Velocity 2010 begging us to fix rubygems
<twb> SpamapS: have they filed a bug report in launchpad? ;-)
<SpamapS> twb: years ago
<Madwill> i think its in apache.conf and not httpd.conf anymore
<ScottK> SpamapS: That's fine, but the definition of "fix" is not necessarily follow upstream's insanity blindly.
<Madwill> then what do you run out of curiosity ?
<SpamapS> ScottK: agreed, which is why we change /usr/bin, to /usr/local/bin. :)
<twb> Madwill: busybox httpd
<SpamapS> ScottK: at least that way users can blow away /usr/local/bin/* and be "back to debian" ;)
<Madwill> interesting thx
<ScottK> SpamapS: I agree that's an improvement, but I'd like to find a reasonable way to protect the namespace of existing binaries.  I wouldn't even mind if it was limited to protecting non-gem binaries.
<SpamapS> It would be interesting to expose the security problems in CPAN/pypi/rubygems by creating a MITM DNS cache poisoning attack that replaces the word "the" in string literals of code with "pwn3d" or something like that.. :)
<SpamapS> ScottK: is there an existing list of all packaged binaries? Otherwise are you going to do a 'which' before install (with users', not root's path)?
<twb> SpamapS: apt-file ?
<SpamapS> That only knows about the currently installed binaries, right?
<twb> Emphatically, no
<twb> But even so, it wouldn't help, because I could upload a new .deb next week that provides a new binary
<ScottK> SpamapS: Maybe XB-Ruby-Gem and then if that's present, it's OK to supersede it.
<SpamapS> Ok, so that might work.
<twb> Also apt-file isn't installed on normal systems :-)
<SpamapS> twb: yeah, like I was thinking earlier.. I don't think its the OS's job to protect that namespace. If the sysadmin starts installing things with a different package manager, they have accepted responsibility... we should of course make efforts to avoid unrepairable breakage, but I don't think we should stop them.
<SpamapS> And somebody already pointed out that you only have to be a member of 'staff' to install in /usr/local, so you can even protect yourself by installing gems as a staff member, and not root.
<twb> IMO it's reasonable to expect gems to (mis)behave about the same as cpan(1) and python-setuptools.
<SpamapS> Yeah
<SpamapS> CPAN puts stuff in /usr/bin by default I think
<twb> And I'd say the best way to fix it is to write code that can automatically turn gems into local packages, and encourage people to use that instead of just writing files onto the filesystem
<SpamapS> twb: yeah like the sdist tool for python that allows very easy debianizing of pypi pakages
<twb> SpamapS: I was actually thinking even more one-shot, like "m-a a-i foo" is/was
<vmlintu> I'd love having something to easily turn gems to debs..
<twb> vmlintu: I'm not stopping you from implementing it :P
<SpamapS> vmlintu: file a wishlist bug. :)
<ScottK> SpamapS: Would you agree with the idea that as a design goal it would be reasonable for gems to not supersede binaries provided by non-gem packages?
<twb> ScottK: where "provided" means already installed on the filesystem and visible to dpkg -S ?
<ScottK> Yes
<ScottK> Can't expect it to know about packages not installed.
<twb> ScottK: it's TECHNICALLY possible (assuming your sources.list doesn't change), but I grant that it shouldn't be expected to know
<vmlintu> twb: I'll probably implement something as I have a project in development with huge pile of gem dependencies that would need to be distributed somehow..
<SpamapS> ScottK: by limiting gems to /usr/local, you are already guaranteeing you won't overwrite files. I think its reasonable to do a check before install that goes "WARNING: bin files in this gem overwrite files you already have in your path" .. but they should be able to continue anyway
<ScottK> twb: Agreed.  It's also very hard to not be slow even for ruby if you have to check stuff not installed.
<twb> Are gems always 100% architecture: all?
<SpamapS> ScottK: the problem with that is, root's path is different from bob's path
<SpamapS> twb: no
<vmlintu> twb: no.. some of them compile binaries from c or something else..
<SpamapS> looking at the format of a gem .. it should be trivial to generate a debian/control and a tool to add to debhelper
<twb> Make sure to write the tool in perl to spite the ruby users :P
<SpamapS> :)
<SpamapS> wasn't ruby's creation inspired by perl6?
<twb> I thought it was caused by greenspunning
<dominicdinada> how do i flush the arp tables ? some annoying entries in there that i cant figure out where the machine name is set to rujl1rb3tr13g90b lol
<twb> dominicdinada: ip neighbour flush ?
<SpamapS> dunno, the history on the website doesn't actually mention perl so maybe that was some crackpot idea from the nether regions of my head
<twb> SpamapS: it's more a mishmash of the more obvious features of smalltalk and lisp, iirc
<dominicdinada> twb: i would assume so... i dont see it set in any of my hostnames but the server keeps calling it ewrfhj;wgasg;
<twb> dominicdinada: uh, you're getting line noise at the end of your messages
<twb> SpamapS: http://en.wikipedia.org/wiki/Greenspun%27s_Tenth_Rule
<dominicdinada> twb: lol not noise the machine name is a bunch of random letters... so u can see how annoying that is
<SpamapS> twb: hah, what an interesting concept. :)
<twb> dominicdinada: is your host on a trusted network?
<dominicdinada> no
<twb> dominicdinada: well, you can't use arp on an untrusted network
<dominicdinada> i am pretty sure it is my doing when i was pissed long long ago i just cant track down which machine i set with the name lolk
<twb> I guess someone is arp poisoning you
<dominicdinada> twb: well it is not necessarily an arp entry, i just assumed it was in the arp cache and other OS
<dominicdinada> OS's store such information and such in the arp tables
<twb> SpamapS: it applies to more than just C and Fortran, of course -- it's just that's all that was around when Greenspun formulated the hypothesis.
<twb> dominicdinada: when you run "ip neigh show", is it there, or not?
<SpamapS> twb: Lisp and Haskell are on my todo list.. I suppose I should give them both a try. :-P
<kirkland> zul: hey, i have an apache2 change i'd like to run by you
<twb> See also jwz's "Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can."
<zul> kirkland: sure
<dominicdinada> twb: dang it i think i named the gateway the jibberish :/
<kirkland> -                       --with-suexec-docroot=/var/www \
<twb> He had a really funny comment in, lesse, xscreensaver code, too...
 * SpamapS rejoices as his U-verse connection returns, freeing him from the bonds of 3G
<kirkland> -                       --with-suexec-docroot=/home \
<kirkland> +                        --with-suexec-docroot=/home  \
<twb> dominicdinada: if it's an appliance, maybe it just got its knickers twisted -- bounce it
<zul> kirkland: why?
<kirkland> zul: basically, changing the suexec doc root default from /var/www (which doesn't work out of the box, as far as i can tell)
<kirkland> zul: to /home
<SpamapS> kirkland: +++
<kirkland> zul: so that it'll work with /home/*/public_html
<dominicdinada> twb: still learning the underlying linux commands as this information is stored elsewhere in other OS flavors
<SpamapS> suexec is only useful in per-user contexts
<kirkland> zul: as it stands, you'd have to move each user's public_html to /var/www/*/public_html for it to work
<kirkland> SpamapS: have you experienced this before
<kirkland> SpamapS: this is the first I've encountered it
<twb> Shouldn't stuff be pointing at /srv/www instead of /var/www by now?
<kirkland> SpamapS: but I found http://www.unixguru.biz/howto-apache2-suexec-php5-and-fastcgi-for-virtual-domains/
<twb> dominicdinada: no worries
<SpamapS> kirkland: I've configured suexec many times.. you never want that docroot to be the system wide docroot.
<zul> kirkland: its this for php/
<kirkland> SpamapS: okay ... how do I reconfigure it?  looks to me like the suexec binary has to be rebuilt from scratch
<kirkland> zul: php or python, yeah
<SpamapS> I haven't ever used suexec on a distro packaged apache though, because its usually pretty broken.
<zul> kirkland: im hesitant against this change right now because we are getting closer
<dominicdinada> twb: ok it is actually this computer that is named "blah-tvykimdu0" since i am streaming video to this machine iptraf shows the packets streaming BUT this machine hostname is set to diabolical-xx
<dominicdinada> where the heck else is the machine name set if not in hostname... but it could be old neighbor entries :/
<kirkland> SpamapS: would you mind filing a bug against apache2, complaining about suexec, and assign it to me?  :-)
<twb> dominicdinada: probably the hostname you have in /etc/hostname (and /etc/hosts) doesn't agree with the records in your router's DNS server
<progre55> hi guys, I need to gzip a huge file and pass it over to another server. But I dont have enough space on my current server for the gzipped file. How can I pipe the output of "tar -czf file.tar.gz file" into scp?
<twb> dominicdinada: are you using DHCP to get an IP?
<SpamapS> kirkland: hah, sure. :)
<patdk-wk> you can't pipe to scp
<dominicdinada> twb: in the router it is set correctly. because on the activity report it lists the machine name correctly there... yes dhcp
<kirkland> zul: bummer
<twb> patdk-wk: cat -zc foo/ | ssh 'cat >foo.tar.gz
<twb> Oops, bad completion
<kirkland> zul: i'll have to build a local copy then for my fedex project
<kirkland> zul: how long does apache2 take to build locally?
<patdk-wk> hehe :)
<zul> kirkland: couple of minutes
 * patdk-wk wonders what output 'tar -czf file.tar.gz file' makes anyways
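[patdk-wk's point: `tar -czf file.tar.gz file` writes the archive to the named file and sends nothing to stdout, so there is nothing to pipe. For progre55's case the fix is `-f -`. A runnable sketch — the ssh host and paths are placeholders, and a local `cat` stands in for the remote side here:]

```shell
# tar -f - writes the archive to stdout, so it can be streamed over ssh
# instead of landing on local disk first. The pipe through cat stands in
# for:  ssh user@remote 'cat > /backup/bigfile.tar.gz'
mkdir -p /tmp/demo && echo "hello" > /tmp/demo/bigfile
tar -C /tmp/demo -czf - bigfile | cat > /tmp/demo/bigfile.tar.gz
tar -tzf /tmp/demo/bigfile.tar.gz

# Real use, never materializing the archive on either side:
#   tar -czf - bigfile | ssh user@remote 'tar -xzf - -C /restore'
```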
<dominicdinada> twb: nano hostname shows correct computer name ugh
<twb> dominicdinada: then I don't know where the "bad" hostname is coming from.
<dominicdinada> twb: on the server i did ip neighbor flush all
<dominicdinada>  maybe it needs a reboot :/
<twb> dominicdinada: you COULD reboot the server.  Or you could isolate and fix the damn problem.
<twb> SpamapS: wouldn't something sudo- or polkit-flavoured be easier to lock down than suexec?
<dominicdinada> twb: well i am making the effort as i said before i cant track down any issue except old records it is very possible i named this machine blah-gkeghjeg before and changed it. or it could be a naming conflict but i dont think there is one
<twb> dominicdinada: OK, fair enough
<dominicdinada> twb: everywhere i know to check the machine names etc it is not showing as a funny name :(
<patdk-wk> dominicdinada, where do you see the incorrect name?
<dominicdinada> well in iptraf       but it is all over like in netstat etc
<patdk-wk> those all use whatever is set in nsswitch
<patdk-wk> hosts:          files dns
<dominicdinada> i will check on both machines
<SpamapS> twb: yes, suexec *sucks*
<twb> patdk-wk: run "getent hosts" on both the good and bad names
<twb> SpamapS: maybe it should just not be shipped by ubuntu, then
<dominicdinada> nsswitch.conf = db files :O
<patdk-wk> heh, I don't use getent much, but then I don't have issues like that :)
<patdk-wk> heh?
<twb> dominicdinada: run "getent hosts" on both the good and bad names
<dominicdinada> ok
<twb> patdk-wk: ignore that, I'm fat-fingering nicks
<SpamapS> twb: no, its an industry standard and you would alienate *thousands* of hosting providers by not shipping it
<twb> SpamapS: bummer
 * patdk-wk loves suexec
<dominicdinada> twb: patdk-wk both results look fine no bad names
<patdk-wk> well, when I must and forced to use it
<SpamapS> patdk-wk: as do many others. :)
<twb> I wish there wasn't such a disconnect between "industry standard" and (genuine) best practices
<patdk-wk> using suexec is just the slowest way to run a cgi ever
<twb> dominicdinada: I give up
<SpamapS> twb: the key is not to eliminate risk, but to expose it.
<SpamapS> the more that ubuntu can help people calculate and accept the risk they're taking, the more successful people who use ubuntu will be
<twb> I guess, but educating people who don't want to learn is HARD
<patdk-wk> people that don't want to learn, most likely will never bother to use suexec
<SpamapS> if you try to take all the risk away, you basically just get a mediocre system that never lets anybody do anything interesting
<twb> Just isolating their chunk of the net from everyone else is easy (e.g. RBLs) :-)
<patdk-wk> and will run everything as www-data :)
<MTecknology> twb: gimma gimma gimma - don't make me learn - just tell me how to do it the way i think it should work
<SpamapS> kirkland: done
<kirkland> SpamapS: bug #?
<twb> SpamapS: dude, I'm a security weenie.  Null utility means aleph security :-P
<SpamapS> bug 629633
<uvirtbot> Launchpad bug 629633 in apache2 "suexec should be configured to use /home as its docroot" [Wishlist,New] https://launchpad.net/bugs/629633
 * SpamapS wonders why it didn't pop up in channel yet
<twb> SpamapS: the 1s got stuck in the tubes
<SpamapS> twb: don't they just slide right through the holes in the 0's ?
<MTecknology> SpamapS: LP seems very slow right now
<twb> not if the cable's too bent
<SpamapS> s/right now//
 * SpamapS apologizes to lifeless for that cheap shot
<patdk-wk> isn't lp getting updates today?
<MTecknology> SpamapS: on a slow day it's still much faster than drupal.org
<SpamapS> hmm I just got the bug mail.. I bet the bot gets it shortly
<uvirtbot> New bug: #629633 in apache2 (main) "suexec should be configured to use /home as its docroot" [Wishlist,Triaged] https://launchpad.net/bugs/629633
<kirkland> SpamapS: ah, i stand corrected ... there is an apache2-suexec-custom package
<SpamapS> kirkland: that addresses this issue?
<kirkland> SpamapS: possibly, i'm testing now
<SpamapS> kirkland: is that mentioned in the README.Debian?
<kirkland> SpamapS: i've still not actually gotten suexec to work
<Met4physica> after following this guide: https://help.ubuntu.com/10.04/serverguide/C/mail-filtering.html i can no longer connect to my mail server from client
<Met4physica> any help?
<patdk-wk> Met4physica, and what is the issue?
<Met4physica> patdk-wk: can't send mail
<patdk-wk> that guide is fine
<Met4physica> patdk-wk: prior i could telnet localhost 587 and it would work. post guide i do that, and it can't connect. client can't connect to smtp server
<patdk-wk> define, send mail :)
<patdk-wk> so your client can't connect? or does it connect and doesn't authenicate? what error message?
<Met4physica> patdk-wk: thunderbird email client was set up to use my server with postfix, dovecot and was working prior to going through this guide. made changes in guide. now, it states "Sending of message failed.
<Met4physica> The message could not be sent because connecting to SMTP server "
<Met4physica> xxx.server.com failed"
<Met4physica> can't telnet to the port anymore, seems to be closed?
<batok> I need to add a second "nic" to lucid , but I don't remember how to do that with command tools.  Any hint about a link or documentation on the subject?
<patdk-wk> did you screw up postfix config and it is not starting?
<patdk-wk> batok, turn off lucid, install nic, turn on, continue life
<batok> in this case is a vNic ( vmware vsphere hypervisor ).
<batok> I added the vnic already
<patdk-wk> hmm, I normally just reboot the vm
<batok> I am going to restart , tks
<Met4physica> patdk-wk: postfix restarts just fine, without any errors (as far as I can see)
<kirkland> SpamapS: dang...  do you have any hints for me?
<kirkland> SpamapS: i just want to have /home/kirkland/~public_html/foo.php to run as kirkland
<patdk-wk> kirkland, did you make a virtual section in apache?
<kirkland> patdk-wk: hmm, not beyond the default
<patdk-wk> I believe suexec only works inside a virtualhost
<patdk-wk> and you need to use: SuexecUserGroup kirkland www-data
<patdk-wk> or something like that
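A sketch of the kind of virtual host patdk-wk is describing. suexec only honours SuexecUserGroup inside a <VirtualHost>; the server name, paths, and user/group here are placeholders, not settings taken from this log:

```apache
# Hypothetical /etc/apache2/sites-available/example -- minimal
# suexec-enabled vhost sketch; names and paths are illustrative only.
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /home/kirkland/public_html
    # Run suexec'd CGI under this vhost as kirkland:www-data
    SuexecUserGroup kirkland www-data
</VirtualHost>
```

mod_suexec also has to be enabled (`a2enmod suexec`), and the document root must sit under suexec's compiled-in docroot, which is the restriction the apache2-suexec-custom package mentioned earlier exists to relax.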
<Met4physica> patdk-wk: I can telnet 10024 and connect to Amavis - says the service is ready. However, now I can't connect on 25 or 587
<patdk-wk> Met4physica, postfix isn't working, or setup correctly
<patdk-wk> and those instructions don't cover that
<Met4physica> patdk-wk: i had postfix setup and working prior to this guide, so i would really wonder what changed? how would you suggest i go about troubleshooting?
<patdk-wk> first, netstat -atn | grep 587
<Met4physica> patdk-wk: no result
<Met4physica> Sep  3 16:57:12 aegir postfix/master[15108]: fatal: /etc/postfix/master.cf: line 26: bad transport type: content_filter=
<Met4physica> in the guide it said, "Also add the following two lines immediately below the "pickup" transport service:" - seems to be related?
<patdk-wk> Met4physica, did you indent them?
<Met4physica> patdk-wk: no
<patdk-wk> heh
<Met4physica> patdk-wk: ok i fixed that
<Met4physica> patdk-wk: now my ports are open
<Met4physica> patdk-wk: that guide should have a note about the importance of indentation for us noobfolk
<patdk-wk> heh
<patdk-wk> it might assume you know postfix, not sure
<patdk-wk> I mean, email servers themselves are not hard
<patdk-wk> but making them talk to other ones, are
<Met4physica> patdk-wk: is there an equivalent to postfix with a more ...accessible configuration file format?
<patdk-wk> sendmail?
<Met4physica> patdk-wk: tried that, wasn't my cup of tea either
<patdk-wk> I always thought postfix was pretty straight forward, only 2 files to worry about
<patdk-wk> though, I still do lots of sendmail work, editing cf files
<Met4physica> patdk-wk: i completely agree. but where would one read about how whitespace was important?
<Met4physica> patdk-wk: assuming they were self-taught linux hobbyist types
<patdk-wk> the postfix master file manual
<patdk-wk> http://www.postfix.org/master.5.html
<Met4physica> patdk-wk: fair enough. still have a bit of resistance to reading such things, but its important. will do
<patdk-wk> there is very little to learn about master.cf, unless you want to make all kinds of new things in it
<patdk-wk> generally each line is a service
<patdk-wk> if you need more lines to define a service, you need whitespace first, to tell it you are continuing the last line
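The continuation rule patdk-wk describes, applied to the error from the guide. This is a sketch of the relevant master.cf fragment (the two `-o` options are the ones the amavis-style guides commonly add under "pickup"), showing why the added lines must be indented:

```
# master.cf sketch: each unindented line starts a new service; a line
# beginning with whitespace continues the previous one.  Left-aligned,
# "content_filter=" is parsed as a new service entry -- hence the
# "bad transport type: content_filter=" fatal error above.
pickup    fifo  n  -  -  60  1  pickup
  -o content_filter=
  -o receive_override_options=no_header_body_checks
```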
<qman__> sendmail? accessible? not in this world
<Met4physica> patdk-wk: ah i see
<qman__> the other supported mail server is exim, which I have zero experience with
<jjk9> anyone confirm samba 3.4.7 on ubuntu 10.04 LTS was compiled without ldapsam support?
<patdk-wk> qman__, never had an issue, I always thought it was pretty straight forward :)
<qman__> postfix is probably the easiest I've ever used
<patdk-wk> I just haven't had time to look at exim
<patdk-wk> I mainly use postfix, and almost retired all sendmail
<patdk-wk> I banish qmail as soon as I find it
<qman__> there's one reduced feature set mail server basically just for forwarding local mail to a real mail server
<qman__> forget what it's called
<patdk-wk> ssmtp?
<patdk-wk> all the ones I noticed, won't queue
<twb> ssmtp screwed me
<patdk-wk> so if the connection, or real server is not working, your screwed
<twb> I use msmtp-mta and it has never screwed me
<Met4physica> patdk-wk: now my client can send mail, but it does not reach its destination : my gmail account, in inbox or spam
<patdk-wk> Met4physica, welcome to the world of, you must have everything in order for anyone to accept email from you :)
<twb> there's also the one that queues locally...
<patdk-wk> get your hostnames, dns, dkim, spf, .... all setup correctly
<qman__> yeah, it's a lot of effort
<EvilPhoenix> any of you able to check the output of chkrootkit and tell me if there's anything i should be supremely worried about?
<Met4physica> patdk-wk: it was working previously to this guide... :(
<twb> patdk-wk: if he's using an envelope FROM of the gmail account he's TLS'd into, it'll accept anything, because it's a submission not a relay
<twb> Oh sorry, I misread
<Met4physica> patdk-wk: i also can't send an email to myself and receive it either . this is odd
<qman__> EvilPhoenix, pastebin the output
<EvilPhoenix> one sec
<hggdh> Daviey, kirkland: there is a mismatch on expectation between /etc/init/eucalyptus.conf and /etc/eucalyptus/eucalyptus.conf
<hggdh> Daviey, kirkland: we do not use CLOUD_OPTS in /etc/init/eucalyptus.conf
<EvilPhoenix> bleh
<qman__> !pastebin
<ubottu> For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://tinyurl.com/imagebin | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<EvilPhoenix> evil connectivity
<EvilPhoenix> i know pastebin
<qman__> you were killed for excess flood
<EvilPhoenix> yah
<EvilPhoenix> znc and xchat dont like each other
<EvilPhoenix> qman__:  http://starfleet.pastebin.com/aMDXaHmw
<EvilPhoenix> for the chkrootkit output
<jjk9> which ubuntu chat room to ask about samba on 10.04 LTS server? tks
<qman__> well, unless bindshell is supposed to be running an IRC server, you're owned
<guntbert> jjk9: server related issues are handled here (if anyone knows :-))
 * patdk-wk knows (but isn't saying :)
<Met4physica> patdk-wk: is there a way to see what happens to my email after my client believes it is delivered, but it has not reached destination?
<patdk-wk> Met4physica, logs :) /etc/log/maillog
<qman__> default ports, these guys aren't even trying
<qman__> they probably didn't even delete the logs
<patdk-wk> qman__, for what?
<qman__> his chkrootkit, binshell is listening on 6667
<patdk-wk> ah
<jjk9> guntbert: tks yeah but is samba server-related? or more general
<patdk-wk> jjk9, depends, are you talking about samba *server*? or samba *client*?
<Met4physica> patdk-wk: my mail.log is empty, but my mail.info is FULL of goodies
<guntbert> jjk9: just ask your question, nobody will hurt you :-)
<EvilPhoenix> qman__:  there's an ircd on there
<EvilPhoenix> qman__:  it helps to highlight me x]
<patdk-wk> Met4physica, heh, I haven't used syslog for so long :)
<jjk9> is ldapsam compiled into ubuntu samba 3.4.7?
<qman__> EvilPhoenix, only one program can listen on a port, so if your IRCd is listening on 6667, then you're fine there
<EvilPhoenix> okay...
<qman__> the suspicious files could be legit, they're just suspicious
<Met4physica> patdk-wk: if not using syslog,what would i use?
<EvilPhoenix> mmm
<Met4physica> patdk-wk: would you mind taking a look at my log ?
<EvilPhoenix> so nothing outwardly suspicious then
<Met4physica> patdk-wk: (!)ClamAV-clamd: Can't connect to UNIX socket /var/run/clamav/clamd.ctl: 2, retrying (2)
<Met4physica> is one notable error
<qman__> EvilPhoenix, http://ubuntuforums.org/showpost.php?p=4176512&postcount=2
<SpamapS> lifeless: shaking over there?
<jo-erlend> why is #ubuntu-virt invite only? I was wondering how difficult it is to install and configure spice with kvm at this point. Do we have packages for it in lucid?
<EvilPhoenix> qman__:  thanks
<EvilPhoenix> where can I suggest a change to a doc?
<SpamapS> jo-erlend: its been folded into ubuntu-cloud / ubuntu-server
<qman__> file a bug against the appropriate -doc package, I think
<qman__> but I'm not sure
 * EvilPhoenix checks the list of installed packages on his system
<EvilPhoenix> ...
<EvilPhoenix> there is no -doc package with chkrootkit
<lifeless> a
<lifeless> SpamapS: it was
<lifeless> SpamapS: appears to be some significantly damaged bits
<lifeless> no stats on human impact yet
<lifeless> search for earthquake on twitter though
<lifeless> or the #earthquake tag
<SpamapS> lifeless: its weird, I woke up last night thinking there was a giant earthquake going on but there wasn't
<jjk9> lifeless: google earthquake new zealand will give top hit nz earthquake site, 7.4, 30 km from christchurch ~4:35am local
<lifeless> jjk9: thats right
<lifeless> http://www.geonet.org.nz/earthquake/quakes/3366146g-maps.html
<SpamapS> ouch thats really damn close to a big city
<lifeless> we're in rangiora on the left hand map
<jjk9> u would have felt that good then. I'm orig from wgton but now in San Diego
<SpamapS> lifeless: how common are quakes there?
<lifeless> SpamapS: of this magnitude? not very AIUI
<SpamapS> Having grown up living virtually on top of the san andreas fault .. I find quakes fascinating.
<jjk9> when I was in Wellington for 40+ years biggest was about 6.7 and very rare that size
<SpamapS> Well quakes over 6.5 are pretty rare period
<Met4physica> multiple amavisd and postfix errors: would someone mind looking at a log?
<SpamapS> just a few each year
<SpamapS> Met4physica: paste.ubuntu.com the relevant parts
<Met4physica> SpamapS: http://paste.ubuntu.com/487922/
<Met4physica> why is MTA blocked?message not being delivered
<SpamapS> Sep  3 17:31:52 aegir amavis[16333]: (16333-07) (!)FWD via SMTP: <transformationarts@openashland.com> -> <zkrebs@gmail.com>, 450 4.4.1 Can't connect to INET4 socket 127.0.0.1: Connection refused, MTA([127.0.0.1]:10025), id=16333-07
<SpamapS> connection refused.. are you sure there's an MTA running on port 10025 ?
<Met4physica> my client is connecting to 587
<Met4physica> so that should be the port defined?
<remix_tj> Met4physica: maybe amavisd is down?
<Met4physica> remix_tj: here's an updated log, tried changing a couple of things http://paste.ubuntu.com/487930/
<Met4physica> i get a return from sender email! it states, "Diagnostic-Code: smtp; 554 5.4.0 Error: too many hops
<Met4physica> "
<remix_tj> Met4physica: can you paste the output of the command postconf -n ?
<Met4physica> http://paste.ubuntu.com/487932/
<Daviey> hggdh, OK, great - can you raise a bug please? :)
<Met4physica> remix_tj: here is my postfix master file: http://paste.ubuntu.com/487934/
<veenenen> anyone know any good tutorials for setting up a kvm guest from the command line
<remix_tj> Met4physica: wait a bit
<veenenen> the default (https://help.ubuntu.com/community/KVM/CreateGuests) just isn't working
<kirkland> SpamapS: you're not kidding -- suexec is friggin hard to get right
<remix_tj> Met4physica: the content filter should use port 10024
<Met4physica> remix_tj: what file is that setting in?
<remix_tj> Met4physica: main.cf
<remix_tj> and in master.cf you should add this
<remix_tj> http://paste.ubuntu.com/487935/
<remix_tj> because you need to create a special instance of postfix that will receive the mails checked by amavis
<Met4physica> remix_tj: at the top of my master.cf is the second entry necessary? i was just fudging around
<remix_tj> Met4physica: dunno, buy you can leave it
<remix_tj> Met4physica: http://www.howtoforge.com/amavisd_postfix_debian_ubuntu this is a good tutorial
<remix_tj> :-)
<Met4physica> remix_tj: i left it, made your changes, and it works! AND additionally my DKIM seems to work now :) thanks a billion
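The two pieces remix_tj describes, collected into one hedged sketch following the howtoforge pattern linked above; the timeouts and restrictions are illustrative, not taken from the paste:

```
# main.cf: hand outgoing mail to amavis on 10024
#   content_filter = smtp-amavis:[127.0.0.1]:10024
#
# master.cf: the outbound path into amavis...
smtp-amavis  unix  -  -  n  -  2  smtp
  -o smtp_data_done_timeout=1200
  -o disable_dns_lookups=yes
# ...and a second smtpd that takes the checked mail back on 10025.
# Clearing content_filter here is what stops mail from looping back
# through amavis forever (the "too many hops" bounce seen above).
127.0.0.1:10025  inet  n  -  n  -  -  smtpd
  -o content_filter=
  -o mynetworks=127.0.0.0/8
  -o smtpd_recipient_restrictions=permit_mynetworks,reject
```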
<SpamapS> kirkland: Have never had it working in under a full day of work. :-/
<SpamapS> kirkland: there are just too many pieces that can and do go wrong
<baccenfutter> ez Barre
<baccenfutter> oops
<baccenfutter> sry
<Met4physica> Excuse me my DKIM does *not* work, gmail now just gets Mailed By
<ScottK> Met4physica: What are you using for dkim signing?
<remix_tj> Met4physica: i do not know how dkim works...
<Met4physica> ScottK: OpenDKIM Filter v2.1.3
<ScottK> Met4physica: On maverick or an earlier release with your own package?
<Met4physica> I initially installed it by source
<Met4physica> but then just installed the Ubuntu Package
<Met4physica> 10.04
<ScottK> Then you have 2.0.2 if you are using the Ubuntu package.
<ScottK> If 2.1.3 is running, it's still yours.
<Met4physica> a message I just sent claims it's 2.1.3
<RoAkSoAx> SEJeff_work: I entered the same race condition with only two servers with configs: Srv1: http://pastebin.ubuntu.com/487938/ Srv2: http://pastebin.ubuntu.com/487939/. What might be wrong on them? Srv1 log: http://pastebin.ubuntu.com/487941/ srv2 log: http://pastebin.ubuntu.com/487942/
<ScottK> Then it's still the one you installed from source.  I'd remove that and use the packaged one (I know it works)
<RoAkSoAx> ups
<uvirtbot> New bug: #629720 in apache2 (main) "package apache2.2-common 2.2.14-5ubuntu8 failed to install/upgrade: installed post-installation script subprocess returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/629720
<Met4physica> ScottK: I am having trouble with the DNS record too I Believe...
<Met4physica> smtp.mail=transformationarts@openashland.com; dkim=neutral (no signature) header.i=@openashland.com
<Met4physica> yet my email does contain a DKIM signature
<MTecknology> Any of you happen to have dig + pastebin + time available for me?
<ScottK> Met4physica: Paste the signature
<Met4physica> ScottK: here is the whole email (with the sig) http://paste.ubuntu.com/487947/
<uvirtbot> New bug: #629723 in chkrootkit (main) "False Positive: IRCDs running on port 6667" [Undecided,New] https://launchpad.net/bugs/629723
<ScottK> Met4physica: Where's your key record?
<Met4physica> /var/db/dkim
<Met4physica> my private key?
<ScottK> No, I mean the DNS record of the public key.
<ScottK> IIRC it should be located at dig txt mail._domainkeys.openashland.com and I don't find it.
<Met4physica> i am using Linode's DNS manager
<Met4physica> lets see..
<Met4physica> do you see it now?
<ScottK> Yes
<ScottK> So that was your first problem.  No DNS record published.
<Met4physica> ScottK: so the next problem ? :(
<ScottK> Met4physica: Dunno.  Does it work now?
<Met4physica> no
<Met4physica> "Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of transformationarts@openashland.com designates 173.230.154.165 as permitted sender) smtp.mail=transformationarts@openashland.com; dkim=neutral (bad format) header.i=@openashland.com"
<ScottK> They may have your lack of DNS record negatively cached.  What's the TTL on the DNS record?
<ScottK> You'll have to try again later.
<Met4physica> TTL = default
<ScottK> Then it's hard to tell.
<ScottK> For me to look, I ssh'ed into a machine on a different network to avoid the negative cache.
<ScottK> I'd try again tomorrow.
<Met4physica> ScottK: thanks for the tips. If I sent an email with 1000 recipients out without DKIM, will I be spam blocked?
<ScottK> Met4physica: Hard to say, but the absence or presence of DKIM is unlikely to have a major effect.
<Met4physica> ScottK: I am really attempting to understand what would have an effect...
<ScottK> That's a whole area of study that's not particularly on topic here.
<Met4physica> ScottK: you're right, thx for the technical help
<ScottK> Met4physica: You're welcome.
<twb> If you do Bad Things you get blocked
<twb> What constitutes a bad thing is defined on a per-site basis by the site admin
<twb> Occasionally I'll run into people who reject all mail from gmail, for example.
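For reference on the record ScottK was digging for above: DKIM (RFC 4871, later RFC 6376) publishes the public key at `<selector>._domainkey.<domain>` with a singular "_domainkey"; the plural "_domainkeys" label belongs to the older Yahoo DomainKeys scheme. A hypothetical zone fragment, with the selector "mail" and the TTL as assumptions:

```
; Hypothetical zone fragment -- selector and TTL are illustrative.
mail._domainkey.openashland.com. 3600 IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"
```

To verify publication, query from a host that has not negatively cached the name, e.g. `dig +short txt mail._domainkey.openashland.com`.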
<progre55> hey guys, when you run  "tar zcvf - SOURCEDIR | ssh user1@remotehost 'cd DESTDIR; tar zxvf - ' ", does it tar first on the memory and then send it, or does it tar and send it at the same time? I mean, I have a file about 19Gb, and only 8Gb ram and 4Gb free space on my local server hard-drive..
<twb> progre55: tar operates on streams
<progre55> twb: so it means I'm safe to run it, right?)
<twb> But you could just say tar -zxC DESTDIR
<twb> progre55: I believe so.
<progre55> twb: great, thanks man, appreciate )
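A minimal local demonstration of twb's point that tar operates on streams: the archive goes straight from one tar's stdout to the other's stdin, so nothing near the archive's size is staged on disk or held in RAM beyond the pipe buffer. Over the network the right-hand side would be `ssh user1@remotehost 'tar zxf - -C DESTDIR'` (hostname and paths are placeholders):

```shell
# Create a source tree, then stream it through a pipe into a second
# tar -- no intermediate archive file is ever written.
mkdir -p /tmp/tarstream/src /tmp/tarstream/dst
echo "payload" > /tmp/tarstream/src/big.file
tar zcf - -C /tmp/tarstream src | tar zxf - -C /tmp/tarstream/dst
cat /tmp/tarstream/dst/src/big.file   # -> payload
```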
<JasonMSP> i've got VSFTPD running great on my server, but I would like to secure the connection with ssl_enable=YES.  So far I haven't gotten the connection to work over TLS which is what I would like to do.  Does anyone have this working?  Im using WinSCP to connect.
<uvirtbot> New bug: #629685 in postfix (main) "package postfix-pcre 2.7.0-1 failed to install/upgrade: dependency problems - left unconfigured" [Undecided,New] https://launchpad.net/bugs/629685
<thafreak> Are there any docs on setting up kvm+libvirt/virt-manager on lucid to allow for live migration?
<b0gatyr_> guys how to I make postfix my default MTA for ubuntu server , I installed ssmtp but I want to go back to postfix
<thafreak> Where's the best place to go to talk about kvm/virsh/virt-manager?
<twb> b0gatyr_: you can only have one MTA installed at a time
<twb> b0gatyr_: so I guess "apt-get install postfix"
<b0gatyr_> twb: thanks, that did it.
<soren> iulian: Thanks for the nova ack.
<soren> thafreak: Right here.
<iulian> soren: Don't mention it.
<thafreak> soren: are there any docs on setting up kvm for HA?
<thafreak> Mainly I'm just wondering what the preferred way of sharing VM disk images is.
<thafreak> I saw alot of people talking about just using NFS, but wasn't sure if iscsi is better
<thafreak> And if iscsi is better, do you just use one lun per vm, or do you do something like CLVM on one big iscsi target?
<soren> thafreak: Not that I know of.
<thafreak> Are images directly on NFS generally ok to use, or is that not really recommended for production?
<SpamapS> thafreak: I would think NFS would be awful for performance vs. iscsi.
<thafreak> I would think so too
<thafreak> But most of what I read about live migration, most people refer to NFS
<giovani> SpamapS: it's not going to be a drastic change, NFS has some optimizations
<giovani> but they're not really comparable
<SpamapS> I suppose NFS is pretty good at "give me Z bytes at offset X of file Y"
<giovani> well, it's a filesystem, not a remote block device -- so it does a ton of fancy stuff
<SpamapS> like what, bake soufflés?
<SpamapS> hmm  wonder how many of these we'll see at UDS-N http://www.noisebot.com/narwhal_t-shirt.htm?cmp=elist20100903
<MTecknology> SpamapS: lol..
<Hypnoz> when doing a "find | xargs rm -f" anyone know how I can view the file names as it's running so I can track the progress?
<zash> Hypnoz: rm -fv
<Hypnoz> zash: interested i was looking for a find verbose didn't think to put it on the rm command
<zash> Hypnoz: :D  You could also do "find | tee /dev/stderr | xargs", but rm -v would probably be simpler
<Hypnoz> zash: -fv worked great thanks!
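The verbose-delete idea from the exchange above, in one runnable sketch; the directory and pattern are placeholders. `-v` makes rm report each file as it goes (the progress view Hypnoz wanted), and `-print0`/`-0` keeps filenames containing spaces or newlines intact:

```shell
# Make some throwaway files, then delete them with per-file output.
mkdir -p /tmp/rmdemo && touch /tmp/rmdemo/a.log /tmp/rmdemo/b.log
find /tmp/rmdemo -name '*.log' -print0 | xargs -0 rm -fv
```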
<progre55_> hey guys, how can I untar a file, but at the same time delete the archive, as I dont have enough space on my disc for both of them?
<Hypnoz> yikes that doesn't seem possible. could you put the archive on a flash drive or network drive?
<Hypnoz> how big of an archive are you talking?
<progre55_> the arch itself 9Gb
<progre55_> and the file is 19
<progre55_> but I have 25 only
<progre55_> and it's a remote server
<Hypnoz> hmm ....
<Hypnoz> 25gb is the largest/only partition avail?
<Hypnoz> does the system have access to any other servers or network drives?
<progre55_> well, there's only 1 partition, and it's 32 Gb
<Hypnoz> you could go on another server, export a nfs share, and move the archive there, then extract over the network maybe
<progre55_> hmm.. let me see
#ubuntu-server 2010-09-04
<progre55_> Hypnoz: is it possible to do this? "sudo tar zcvf - SOURCEDIR | ssh user1@remotehost 'cd DESTDIR; tar zxvf - '" but, preserving the file permissions and ownership?
 * SpamapS drums his fingers as he waits for mysql to build in a PPA.........
<progre55_> I was thinking about this approach, but couldnt preserve ownership and stuff, so I did "tar zcvf - SOURCEDIR | ssh user1@remotehost "cat > /DESTDIR/DESTFILE.tar.gz"" so now I have a huge tar file I cannot extract ))
 * SpamapS also drums fingers waiting for mongodb to build
<SpamapS> progre55_: that is possible, I've done it before.
<progre55_> SpamapS: which one?
<SpamapS> progre55_: precede your 'cat' with a touch/chmod/chown command
<SpamapS> progre55_: oh you want the contents to have the same uid's/gid's ?
<progre55_> yeah
<progre55_> cause it's a huge dir
<progre55_> with lots of files..
<SpamapS> progre55_: are the UID's/GID's the same on the destination box? tar usually uses names so they will map right on the dest host
<progre55_> yep
<SpamapS> progre55_: so whats wrong then? Not sure what you want to do that the command you gave won't do
<progre55_> SpamapS: the dir on server 1 is owned by root and mysql, but the user I'm ssh'ing it with is different.. so with the first approach all the files were under that user's ownership
<progre55_> but with the second approach, I dont have enough space on my second server to untar the dir.. cause the dir is 19.3 Gb, the tar is 8.5 and I only have 25Gb
<SpamapS> progre55_: well thats because you're not allowed to create files as another user unless you are root
<progre55_> well yeah I know that )
<SpamapS> so... be root
<progre55_> and that's why I was asking if it's possible to do it..
<SpamapS> progre55_: why not just use rsync?
<progre55_> hm.. that's an idea )
 * SpamapS realizes he is taking it for granted that you would know about rsync. :)
<progre55_> but dont really know the syntax, never used it )
<SpamapS> very similar to scp
<SpamapS> rsync localdir/ user@remotehost:remotedir/
<SpamapS> the trailing slash on the source is important
<SpamapS> if omitted, the destination will be where to put a dir called localdir; if it's there, then remotedir will be made to look like localdir. Make sense?
<progre55_> hmm.. yeah =)
<progre55_> but how does it connect?
<progre55_> I mean, does it ask for password or smth?
<zash> progre55_: it goes over ssh
<progre55_> oh )
<zash> progre55_: so it'll ask for ssh credentials
<progre55_> okay )
<progre55_> thanks man, I'll try it now..
<SpamapS> rsync is amazing
<SpamapS> you are a changed man now that you have discovered it
<progre55_> haha =)
<progre55_> let me experience it first ))
<SpamapS> rsync is like pizza and sex.. even when its bad, its good.
<progre55_> hmm.. rsync creates the dir itself, but doesnt copy the files, says "skipping directory ."
<progre55_> SpamapS: ^^
<SpamapS> progre55_: rsync -a
<Hypnoz> I usually do -av
<SpamapS> progre55_: or -r
<SpamapS> yeah, -av is good
<SpamapS> will show progress as it goes if you add -P
<SpamapS> (and will keep partial files which is good if you have giant files and low bandwidth)
<SpamapS> progre55_: -a means keep permissions and ownership the same too
<SpamapS> sort of mimics cp -a
 * SpamapS signs off
<progre55_> aha, cool, thanks
<progre55_> hm.. I like rsync a lot =)
<uvirtbot> New bug: #629929 in apache2 (main) "MM (10.10) does not connect to the internet" [Undecided,New] https://launchpad.net/bugs/629929
<_Techie_> is there any reason that root would have excessive amounts of files on the hard drive,   http://i161.photobucket.com/albums/t212/xperiment62/usage.jpg
<Hypnoz> _Techie_: is this only looking in /root/ or across the whole system?
<_Techie_> thats looking @ /
<Hypnoz> as root, try running   cd / && du -sh *
<Hypnoz> maybe keep digging down into the larger foldres to find where things are getting large
<_Techie_> ooh, i see where a ton of it is now
<_Techie_> website backups
<_Techie_> thanks heaps Hypnoz, i know exactly what  to do now
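Hypnoz's drill-down, as a one-liner sketch. `-x` stays on one filesystem and `sort -h` orders human-readable sizes, so the biggest offenders end up at the bottom; repeat on the largest directory to keep digging:

```shell
# Summarize each top-level entry on the root filesystem, largest last.
# Run as root to avoid permission noise in the output.
du -xsh /* 2>/dev/null | sort -h | tail -n 5
```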
<_Techie_> also is there a bug in 9.10 that causes it to cache a lot to RAM
<_Techie_> RAM was flushed of hard drive cache ~ 25 minutes ago, and now the RAM usage is at 1.08gigs, with 232.15 MB of that being processes
<EvilPhoenix> other than chkrootkit, does anyone know other root kit checkers or similar programs?
<JanC> EvilPhoenix: $ apt-cache search rootkit
<JanC> chkrootkit - rootkit detector
<JanC> rkhunter - rootkit, backdoor, sniffer and exploit scanner
<JanC> unhide - Forensic tool to find hidden processes and ports
<EvilPhoenix> JanC:  thanks
<JanC> (you could have done the same apt search, of course ;)
<EvilPhoenix> JanC:  apt-cache search is being evil on my system
<EvilPhoenix> cant even find already installed packages
<EvilPhoenix> i'm reinstalling it all tomorrow
<JanC> well, if you are infected by a rootkit, no program is guaranteed to find it, of course...
<dominicdinada> where besides the hostname file are hostnames stored  ?
<pmatulis> dominicdinada: in the kernel
<pmatulis> dominicdinada: or kernel-aware
<pmatulis> dominicdinada: man sysctl
<dominicdinada> pmatulis: ok i will check i
<wogamycrazy> hello
<MakX> quick newb question
<MakX> nothing needs to be restarted for php.ini changes to take effect, right?
<DigitalDeviant>  question: is there a way to upgrade a ubuntu server to centos remotely ?
<lifeless> "Upgrade"
<DigitalDeviant> is there anyone here that will help me purge out ispconfig and all the other crap that comes with it? I can setup a remote session so you can access my ssh
<Lovetrain> Hi alll
<oracle> hey, any service configuration utils--similar to the one on centos--on ubntu?
<uvirtbot> New bug: #630028 in mysql-dfsg-5.1 (main) "package mysql-server-5.1 5.1.37-1ubuntu5.4 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/630028
<q_a_z_steve_> I installed virtualbox on my 10.04 server, and now I am stuck wondering, can I avoid installing X11 and still use it?
<q_a_z_steve_> hey, anyone here?
<joschi> q_a_z_steve: VirtualBox itself runs well without an X server. but I think all existing packages depend on X.org, so you'd need to repackage or compile/install vbox yourself
<bjaanes> I have this DHCP/DNS server which i JUST got working properly with ddns-update and all (DHCP updating the DNS and such). What is now irritating me is that when I add a server to the dhcpd.conf file as reserved - the DNS does not get updated upon lease. Why? O_o
<LowValueTarget> 10.04.1 x64 server -- MySQL wont start.... i see this in 'messages' .... http://pastie.org/private/vdkr4casa2zah1gnxn8o8a
<LowValueTarget> any ideas?
<LowValueTarget> `status mysql` => mysql respawn/post-start, (post-start) process 2385
<LowValueTarget> the pid changes every few seconds
<LowValueTarget> i assume its in some sort of loop
<joschi> LowValueTarget: seems like AppArmor doesn't like your mysqld to access the new config file /usr/wp/conf/mysql/my-normal.cnf
<joschi> you'll either need to disable the apparmor profile for mysqld (bad) or modify the existing profile to allow access to that file
<LowValueTarget> joschi: ha! I just stopped apparmor and started mysql
<LowValueTarget> works like a charm
<LowValueTarget> i need to find where the mysql profile is for apparmor
<joschi> LowValueTarget: yes, but if it's a public accessible system, you should invest some time in reading about apparmor and how to use it
<kklimonda>  /etc/apparmor.d/usr.sbin.mysqld
<LowValueTarget> kklimonda, joschi I just add the path in there?
<joschi> LowValueTarget: you could do that
<LowValueTarget> is that a list of "allowed" paths
<joschi> LowValueTarget: it's a little bit more, but that's certainly part of it
<LowValueTarget> then permissions for each
<LowValueTarget> ill read up on it
<joschi> LowValueTarget: https://help.ubuntu.com/10.04/serverguide/C/apparmor.html
<joschi> LowValueTarget: for starters just duplicate the line for /etc/mysql/my.cnf in the apparmor profile and adjust the path of the duplicated line
<LowValueTarget> i did. Thanks joschi
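What joschi's "duplicate the line" advice looks like in practice; a sketch of the profile fragment, with the `/usr/wp` path taken from the log and the existing `/etc/mysql` line shown for context:

```
# /etc/apparmor.d/usr.sbin.mysqld (fragment) -- each rule is a path
# plus access mode; "r" grants read access to the config file.
  /etc/mysql/my.cnf r,
  /usr/wp/conf/mysql/my-normal.cnf r,
```

Reload the profile with `sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld` (or restart apparmor) before starting MySQL again, so the change takes effect without disabling confinement.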
<Psi-Jack> Bleh.
<Psi-Jack> Now I have to figure out how to convert this fricken upstart init for libvirt-bin into a proper LSB init script.
<Psi-Jack> Because upstart doesn't have ANY kind of proper LSB error codes at all for CRM management.
<wildem> Hello, anyone familiar with this problem?: mysql -u root gives ERROR 1045 (28000): Access denied for user 'root'@'localhost'. I have removed all mysql packages and reinstalled them making sure I do not set a root password. Still it does not work.
<TANATHOS> wildem : mysqladmin -u root ?
<wildem> Cannot find an answer on the web
<wildem> TANATHOS: mysqladmin -u root didnt work before but now it seems to work
<TANATHOS> mysqladmin --user=root password 'my_root_password'
<TANATHOS> actually that should be the sintax
<TANATHOS> *syntax
<wildem> Then i get the same error
<wildem> or if i try to set up a database
<TANATHOS> because
<TANATHOS> you should use
<TANATHOS> mysql -u root -p
<TANATHOS> and the input the password
<wildem> I have set password blank
<wildem> when i installed the packages
<TANATHOS> wildem: not a very good ideea but still
<wildem> yeah, just for ease of development atm
<TANATHOS> set a password and then 'mysql -u root -p'
<wildem> with mysqladmin --user=root etc?
<TANATHOS> yes
<wildem> mysqladmin --user=root password 12345'
<wildem> mysqladmin: connect to server at 'localhost' failed
<wildem> error: 'Access denied for user 'root'@'localhost' (using password: NO)'
<wildem> except with one more '
<TANATHOS> :)) there is no '
<wildem> before 12345, accidentally deleted it
<wildem> i removed the password before pasting ^^
<wildem> there was a ' before
<effj> wildem, add -p and see if it'll ask you for a password
<TANATHOS> I was thinking how to explain him that
<wildem> effj, TANATHOS: tried that
<wildem> still same problem except using password: yes
<TANATHOS> wildem: mysql -u root -p password
<effj> TANATHOS, oh sorry, you had already written it above, didn't see, kinda just switched to this window :/
<wildem> TANATHOS: but there is no passowrd set :/
<TANATHOS> effj : no problem
<simplexio> :) my usuall advice to mysql problems is "install postgresql" :) , yes i knoe, it isnt helpful
<wildem> Jesus
<TANATHOS> wildem: last time, set a password with mysqladmin then try to connect with -p argument
<wildem> I removed all mysql-related packages
<wildem> reinstalled, without supplying a passowrd
<wildem> and still it does not work :(
<wildem> i cannot set a password with mysqladmin TANATHOS
<wildem> Should I be able to do that?
<simplexio> wildem: it probably left old db files to some dir ( usually its in /var/something/something )
<TANATHOS> wildem: you should be able to do that if it's the first install
<wildem> its not the first install, it's a reinstall of mysql now
<wildem> simplexio: I'll have a look
<TANATHOS> the initial root user has no password in mysql and you should be able to set it with mysqladmin
<TANATHOS> there is a way of resetting the root passwd
<wildem> simplexio: Would it leaving old DB-files keep some old root password?
<wildem> TANATHOS: I take it that this should not happen? # mysqladmin --user=root password 'password'
<TANATHOS> wildem: /etc/init.d/mysqld stop
<wildem> mysqladmin: connect to server at 'localhost' failed
<wildem> error: 'Access denied for user 'root'@'localhost' (using password: NO)'
<wildem> stopped the service
<TANATHOS> wildem: sudo mysqld --skip-grant-tables &
<TANATHOS> wildem: mysql -u root mysql
<TANATHOS> wildem: UPDATE user SET Password=PASSWORD('YOURNEWPASSWORD') WHERE User='root'; FLUSH PRIVILEGES; exit;
<TANATHOS> that should reset your passwd
<wildem> TANATHOS: thanks
<wildem> hmm sudo mysqld --skip-grant-tables got me stuck in a process
<wildem> cant even ctrl + c out of it
<TANATHOS> it takes a while
<TANATHOS> not too much depending on your box
<wildem> ah
<wildem> now
<wildem> UPDATE user SET Password=PASSWORD('YOURNEWPASSWORD') WHERE User='root'; FLUSH PRIVILEGES; exit;
<wildem> TANATHOS: is PASSWORD the old password?
<TANATHOS> no
<TANATHOS> that's a field
<wildem> ah
<wildem> not to used to mysql queries
<wildem> im a front-end programmer
<TANATHOS> wildem: you need to change online what's between ''
<wildem> yeah
<TANATHOS> I'm not a programmer at all
<TANATHOS> :)
<wildem> TANATHOS: It seems to have worked!
<wildem> thanks a million :)
<TANATHOS> you are very welcome
<wildem> I'll make sure I set passwords in the future
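The recovery procedure TANATHOS walks wildem through above, collected into one place. This is a session sketch, not a tested script: it needs a live mysqld, and `--skip-grant-tables` leaves the server wide open while it runs, so keep that window short and local:

```
# 1. Stop the normal server, then start one that skips authentication:
sudo service mysql stop            # or /etc/init.d/mysql stop
sudo mysqld --skip-grant-tables &

# 2. Connect without a password and reset root:
mysql -u root mysql
    UPDATE user SET Password=PASSWORD('YOURNEWPASSWORD') WHERE User='root';
    FLUSH PRIVILEGES;
    exit;

# 3. Shut down the unprotected instance and restart normally:
sudo mysqladmin shutdown
sudo service mysql start
mysql -u root -p                   # log in with the new password
```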
<TANATHOS> !ubottu
<ubottu> Hi! I'm ubottu's favorite infobot, you can search my brain yourself at http://ubottu.com/factoids.cgi | Usage info: http://ubottu.com/devel/wiki/Plugins | Bot channels and general info: https://wiki.ubuntu.com/IRC/Bots
<TANATHOS> !ubottu mysql
<TANATHOS> I forgot how I should use her
<TANATHOS> :)
<wildem> hehe
<RoyK> anyone here using denyhosts?
<TANATHOS> RoyK: just ask
<RoyK> I just wonder if the distributed thing - how does that work? will any login attempt or block result in the IP being blocked by all denyhosts users? if so, it's quite hard to test this without hassle
<TANATHOS> RoyK: I'm sure you mean the package denyhosts which denies ssh access after some failed attempts
<RoyK> indeed
<TANATHOS> RoyK: to be honest I usually use fail2ban - easier to use and apt-get-able
<RoyK> denyhosts is apt-get-able as well
<RoyK> in lucid, at least
<TANATHOS> oops, sorry, my bad then (as I told you, I never used it). As far as I know you can set it to ban an IP after several failed attempts to login
<TANATHOS> and for more you can set sshd service not to allow root logins
<RoyK> well, I guess I'll be going back to fail2ban - that works well
<TANATHOS> use a test box for denyhosts - that's what I do most of the time
<RoyK> well, it's on a test box
<TANATHOS> so hand out the ip and we'll help ya
<TANATHOS> :))
<RoyK> I just worry if denyhosts will ban my IP if I test a login failure
<RoyK> 213.236.233.237
<TANATHOS> on my way
<RoyK> :)
<TANATHOS> not banned so far
<RoyK> Sep  4 18:08:53 droetker denyhosts: Added the following hosts to /etc/hosts.deny - 194.106.155.145 (unknown)
<TANATHOS> got a ban
<TANATHOS> :))
<RoyK> thanks :)
<TANATHOS> I am banned
<TANATHOS> :)
<TANATHOS> you are very welcome
<RoyK> now, do you have a lucid test box?
<TANATHOS> jesus it's been a while since I spent time in here
<TANATHOS> RoyK: I am in the middle of changing ISP's I don't have internet for the next two days :)
<TANATHOS> what do you need it for
<RoyK> just try to install denyhosts and see if 194.106.155.145 is in the hosts.deny file
<RoyK> s/see/check/
<TANATHOS> I don't think it works that way
<RoyK> it's supposed to be distributed - my hosts.deny file already contained 30 hosts at the time of install
<TANATHOS> if you have the synchronize data thing installed
<RoyK> I just apt-get install'ed it
<TANATHOS> but 30 hosts is way too few if all users would synchronize their files
<RoyK> sure, but can you still test?
<TANATHOS> check /var/log/denyhosts
<TANATHOS> to see if there is any log about synchronization
<RoyK> erm
<RoyK> 2010-09-04 17:32:45,672 - denyhosts   : INFO     denyhosts synchronization disabled
<RoyK> but then - why were there 30 IPs in hosts.deny?
<WinstonSmith> RoyK, if it were a distributed thing, in your local hosts.deny you would have entries popping up that got there from other users
<TANATHOS> RoyK: there is a setting saying synchronization on / off
<TANATHOS> if you want to use it
<TANATHOS> ok guys have a good day c ya
<RoyK> WinstonSmith: it _is_ distributed, but I didn't enable that
<WinstonSmith> RoyK, yes i saw that later. but where did the hosts.deny entries come from then?
<RoyK> no idea
<RoyK> WinstonSmith: perhaps the package installer added them - if so, that's not very nice
<uvirtbot> New bug: #625698 in php5 (main) "php5-fpm assert failure: *** glibc detected *** /usr/sbin/php5-fpm: double free or corruption (fasttop): 0x0a87a958 ***" [Undecided,New] https://launchpad.net/bugs/625698
<uvirtbot> New bug: #625695 in php5 (main) "php5-fpm crashed with SIGSEGV in php_log_err()" [Undecided,New] https://launchpad.net/bugs/625695
<RoyK> seems running alpha distros in production isn't a good idea ;)
<slestak> anyone run the coherence upnp server on their 10.04 lts server?
<slestak> python-coherence is the package.  it doesnt come with an init script, which i found odd
<uvirtbot> New bug: #630391 in unixodbc (main) "Missing libodbc.so symbolic link" [Undecided,New] https://launchpad.net/bugs/630391
<Datz> does a security update indicate a kernel update which will require a restart?
 * Datz has his answer
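For anyone with the same question: on Ubuntu, package hooks leave a marker file when an installed update needs a restart, so the check is just:

```shell
# present only when an installed update (kernel, libc, ...) needs a reboot
if [ -f /var/run/reboot-required ]; then
    echo "reboot needed"
    cat /var/run/reboot-required.pkgs   # the packages responsible
else
    echo "no reboot required"
fi
```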
<uvirtbot> New bug: #630396 in whois (main) "update .dz ccTLD whois server" [Undecided,New] https://launchpad.net/bugs/630396
<_Maximo_> hello, would it be possible to install ubuntu server over a slackware install via ssh remotely?
<lymaleal> hi
<lymaleal> say something
<lymaleal> hi
<lymaleal> hi
<GrUbEr111> hey, im putting ubuntu server on a laptop as a ssh server
<GrUbEr111> so like would i ssh to the server, then ssh from the server to the desktop I want to ssh to?
<JasonMSP> What is a good solution to give users upload/download access to files without them having shell access?
<merlin_nl> JasonMSP : using FTP ?
<JasonMSP> yes but from what I understand FTP isn't a great solution when it comes to security.
<merlin_nl> true
<merlin_nl> but easiest probably
<merlin_nl> echo "/sbin/nologin" >> /etc/shells
<merlin_nl> useradd -s /sbin/nologin $username
<merlin_nl> is simplest
<kklimonda> sftp should work with /sbin/nologin
<merlin_nl> also an option :)
<JasonMSP> thx
<JasonMSP> i've got VSFTPD working on my system, but haven't been able to get a secure connection to work.
<kklimonda> JasonMSP: sftp is for ssh ftp
<kklimonda> well, sftp I was thinking about :)
<JasonMSP> ssh ftp would require shell access though would it not?  I also have passwords disabled so that would require access via key which would be too much for customers.
<JanC> JasonMSP: sftp doesn't need shell access
<JasonMSP> hmmm... ok
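One common way to get what kklimonda and JanC describe - SFTP logins with no shell - is OpenSSH's in-process SFTP server (OpenSSH 4.9 and later). A sketch for `/etc/ssh/sshd_config`; the `sftponly` group name is made up for the example:

```
Subsystem sftp internal-sftp

Match Group sftponly
    ForceCommand internal-sftp
    # chroot target must be root-owned and not group/world-writable
    ChrootDirectory /home/%u
    AllowTcpForwarding no
    X11Forwarding no
```

Users in that group then authenticate normally but can only run the SFTP server, never a shell.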
<Razican> Hello, I have a problem with Wake On Lan in Ubuntu Server 10.04. When I shutdown the computer, it doesn't keep the interfaces on. In ethtool it says wake-on: g
<JanC> Razican: maybe BIOS issue?
<Razican> In the BIOS I have configured WOL to enabled
<Razican> I have in BIOS ACPI shutdown: S3 | S5: enabled | Wake on LAN: enabled
<intel352> hey guys, is it possible to create a rule within UFW to always trust eth0, and only apply firewall rules to eth1?
<JanC> intel352: see the manpage (maybe search for "interface")
<JanC> (so, yes, it's possible)
<intel352> thx
<intel352> JanC, just a note, when i was looking at the manual, all references to interface kept referencing an IP specifically
<intel352> I want to use the actual interface, not IP, so I can create a generic server image
<JanC> intel352: the example given is "ufw allow in on eth0 to any port 80 proto tcp"
<intel352> JanC, ah, nice, thx, i missed that
<intel352> much appreciation!
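Spelled out, the pattern JanC points at looks like this; interface names follow intel352's eth0/eth1 layout, and the port list is only an example:

```shell
# accept everything arriving on the trusted internal interface
sudo ufw allow in on eth0 from any to any

# on the exposed interface, open only what you need
sudo ufw allow in on eth1 to any port 22 proto tcp
sudo ufw allow in on eth1 to any port 80 proto tcp

sudo ufw enable
```

Because the rules key on interface names rather than addresses, the same ruleset works in a generic server image regardless of what IPs the machine gets.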
#ubuntu-server 2010-09-05
<Razican> Is there a solution?
<Razican> Hello, I have a problem with Wake On Lan in Ubuntu Server 10.04. When I shutdown the computer, it doesn't keep the interfaces on. In ethtool it says wake-on: g
<qman__> Razican, I don't know about the ubuntu side of it, but you have to enable said feature in the BIOS first
<qman__> if the BIOS does not support wake on LAN, you will not be able to wake from a full shutdown, only from sleep to RAM
<qman__> though pretty much anything 2002 or newer should support it
<Razican> I have in BIOS ACPI shutdown: S3 | S5: enabled | Wake on LAN: enabled
<LowValueTarget> is this proper cron syntax ( the minutes designation ).... i know it can be done easier, just trying to solve an argument
<LowValueTarget> 0,5,10,15,20,25,30,35,40,45,50,55 * * * *    ( cd /usr/wp ; ./admin wpcron ) 2>&1 > /dev/null
<JasonMSP1> as a sudoer how do I ssh-keygen and output that for a different user?
<LowValueTarget> JasonMSP1: you'll want to `su - username` then run it
<JasonMSP1> ah!  thx!
<LowValueTarget> then you can type exit to un-su
<JasonMSP1> wait what if the user doesn't have sudo?
<LowValueTarget> you are user1.... you are in the sudoers file..... you want to generate user2's ssh-keys
<LowValueTarget> su - && su - user2 && ssh-keygen && exit && exit
<JasonMSP1> ok thanks!
<LowValueTarget> That will sudo you to root
<LowValueTarget> then you sudo to the other user
<LowValueTarget> create their keys as if you were them
<LowValueTarget> you could use sudo too
<LowValueTarget> i just always work as root
<JanC> you could just sudo to the other user, of course...
<JanC> (and su doesn't work by default)
<LowValueTarget> JanC: knows more than me ;)
<JasonMSP1> how bout sudo -i?
<LowValueTarget> 0,5,10,15,20,25,30,35,40,45,50,55 * * * *    ( cd /usr/wp ; ./admin wpcron ) 2>&1 > /dev/null
<LowValueTarget> Is that minute designation correct?
<LowValueTarget> i know you can specify */5
<LowValueTarget> but otherwise, is that correct?
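For what it's worth, the two minute fields are equivalent, which is easy to check from the shell. (As an aside, the quoted crontab's `2>&1 > /dev/null` silences only stdout, because stderr is duplicated before stdout is redirected; `> /dev/null 2>&1` silences both.)

```shell
# Expand the explicit minute list, one value per line...
explicit=$(echo "0,5,10,15,20,25,30,35,40,45,50,55" | tr ',' '\n')

# ...and the */5 step syntax, which cron expands to 0-59 in steps of 5:
step=$(seq 0 5 55)

# Both produce the same firing minutes:
[ "$explicit" = "$step" ] && echo "equivalent"
```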
<JasonMSP1> su - user2 says, "this account is currently not available"  Is it because shell is set /usr/sbin/nologin???
<JanC> JasonMSP1: sudo -i runs the default shell for that user, so if there isn't a real shell available...  ;)
<JasonMSP1> sudo -i works to get me to root.  su - user2 is coming up account not available.
<JanC> JasonMSP1: try "sudo -i -u <user> -s /bin/bash" or "sudo -i -u <user> ssh-keygen" or some such (or just create a key and copy it over and set the ownership & permissions correctly)
 * JanC goes to sleep (have to promote Ubuntu tomorrow at a computer fair ;) )
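JanC's variants written out (assumes user2 exists; key type and paths are just examples). `sudo -u` runs the command directly, so user2's nologin shell is never involved:

```shell
# make sure the target user's .ssh directory exists, then generate as them
sudo -u user2 mkdir -p /home/user2/.ssh
sudo -u user2 ssh-keygen -t rsa -f /home/user2/.ssh/id_rsa

# or: generate as root and hand the key over with the right owner/mode
ssh-keygen -t rsa -f /tmp/user2_key
sudo install -o user2 -g user2 -m 600 /tmp/user2_key /home/user2/.ssh/id_rsa
```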
<RudyValencia> OK, I accidentally entered the wrong partition specifier to format when doing mke2fs, I canceled it during the first part of the process (creating tables). Is there a chance I can still recover data from the partition?
<JasonMSP1> im setting up sftp.  I have a group for ssh logins and that is the only group listed in AllowGroups in sshd_config.  For SFTP access does my FTP user group need to be added to that?  I do not want these users to have shell access.  What do I do?
<maddhat1> Hi all, i have 3 hard drives (shared through SMB) but id like to create just one share and have the files show up in one master list. is there a way to do this?  i was thinking symlinks but wasnt sure how to get it to update when files are added/deleted from one of the drives
<qman__> maddhat, symlinks are by their very definition dynamic
<qman__> and therefore always up to date
<qman__> you will, however, need to enable wide links in samba on that share
<qman__> which, if the share is writable, is a security risk
<qman__> if that's a problem, you can mount them within a shared directory instead
<maddhat> qman__: thanks for the additional info.!
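qman__'s safer alternative - mounting the drives inside one shared tree instead of symlinking plus wide links - only needs fstab entries and a single share. Device names and paths below are illustrative:

```
# /etc/fstab - mount each drive under the one exported tree
/dev/sdb1  /srv/share/disk1  ext3  defaults  0  2
/dev/sdc1  /srv/share/disk2  ext3  defaults  0  2

# smb.conf - one share, no "wide links = yes" needed
[share]
   path = /srv/share
   read only = no
```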
<wildem> Where does this command download the files to? scp -r user@host:/home/user/directory /
<wildem> Note the space before the slash
<ChmEarl> nowhere  since you don't use sudo
<intel352> anyone know of a good hosted service for load testing a server?
<intel352> distributed hosted service ideally, so multiple different points hitting the server
<joschi> intel352: amazon ec2 with VMs in different country zones would be practical for that purpose
<intel352> joschi, i'm looking for something hosted that will provide stats, graphs, etc, and allow to compare servers
<intel352> not manual :-)
<jjk9> looks quiet here but I'll ask anyway: for samba 3.4.7 on 10.04 LTS server, to do AD authentication using LDAP do I need LDAPSAM to get ACLs working from windows or is it enough to use another passwd db and install LDAP locally to store the data from the central LDAP?
<oracle> anyone here?
<jjk9> yep
<jjk9> oracle:  I think it's just u and me
<oracle> slow channel
<jjk9> oracle: do u know: for samba 3.4.7 on 10.04 LTS server, to do AD authentication using LDAP do I need LDAPSAM to get ACLs working from windows or is it enough to use another passwd db and install LDAP locally to store the data from the central LDAP?
<oracle> never messed with that
<oracle> ##linux guys may know
<jjk9> oracle:  ahhh maybe I'll try there, but it is really ubuntu specific as it seems ubuntu samba does not have ldapsam compiled in
<masshuu> I am missing the postgres script in /etc/init.d/   Is there a way to restore this?
<jjk9> masshuu:  I think I may have a copy on one of my servers if u want
<masshuu> why not
<masshuu> :p
<jjk9> masshuu:  wait one while I go copy and pastbin...
<masshuu> yay! i love copypasta
<jjk9> masshuu:  arrgh, mind muddle: I don't have postgres, sorry... :(
<WinstonSmith> masshuu, look here : http://pastebin.com/3bRGqY2d
<masshuu> ty
<masshuu> :D
<WinstonSmith> yw
<loganlhp> hello, does anyone know how to find my machine's INTERNET IP address under ubuntu server? the kind of result you would get going to www.whatismyip.com,
<loganlhp> anyone?
<joschi> loganlhp: wget -qO - http://icanhazip.com/
<joschi> loganlhp: or just visit www.whatsmyip.com with wget/curl/...
<loganlhp> hmm... it's showing the exact same address my macbook is showing when I do it there.. is that normal?
<joschi> loganlhp: or `dig +short @208.67.222.222 myip.opendns.com`
<joschi> loganlhp: if you have the same public ip address, sure
<joschi> loganlhp: i.e. you use the same dialup account through a router
<loganlhp> ok, so if i wanted to use that address for dyndns.com it would work?
<joschi> loganlhp: depends on what you want to achieve.
<joschi> loganlhp: that's your public ip address, so if you wanted to know that, it'll work
<loganlhp> web serving..from home without having to go through static ip
<joschi> but you could simply use a dyndns client which will automatically update your settings
<joschi> no need to do that manually
<loganlhp> im still required to signup through dyndns though, correct?
<joschi> yes
<loganlhp> ok i got my dyndns account and ddclient running, however when i access my dyndns url in a browser, it gets the default index.html page from my mac and not my ubuntu server..how do i get it to point only to the server and not any other device on using the same public ip address on my network?
<yaboo_> trying to rebuild my raid array getting the message no superblock, how do I overcome this???
<joschi> loganlhp: configure port forwarding on your router
<joschi> loganlhp: but that's not ubuntu related at all ;)
<loganlhp> haha
<loganlhp> port 80?
<loganlhp> is someone willing to test my web server setup for me really quick and just tell me if they see the index.html i put on there?
<loganlhp> please?
<alex_joni> without a link surely no-one will try
<loganlhp> http://lhpnetwork.redirectme.net/index.html
<loganlhp> tell me what the first line says
<yaboo> any raid gurus about
<loganlhp> what?
<yaboo> rebooted my machine and got inactive md0 now
<yaboo> get superblock errors
<yaboo> any reason why
<alex_joni> loganlhp: Uhhh hey baby huhuh
<loganlhp> haha good it works
<loganlhp> thanks
<scar__> my server rebooted a few hours ago, I did not tell it to do so, when I checked "last -d reboot" it shows "reboot   system boot  0.0.0.0          Sun Sep  5..." how do I find out what caused it to reboot?
<Pupeno> I can run the program bundler as pupeno, and as root, but when I do sudo bundler I get command not found. Any ideas why?
<joschi> Pupeno: wrong $PATH when running `sudo`. try using the complete path to `bundler`
<Pupeno> joschi: I cannot modify the command line.
<joschi> Pupeno: the modify $PATH for the user you try to run bundler with (probably root?)
<Pupeno> joschi: well, as I said, the path was correct for both my user and root, I can run bundler as both, just not using sudo. This works: sudo -i  and then bundler
<joschi> Pupeno: does the message say `bundler` not found or does bundler call an executable which can't be found.
<Pupeno> It says:  bundler: command not found
<joschi> Pupeno: what's the output of `which bundler` and `sudo which bundler`
<Pupeno> joschi: at the moment I'm reinstalling it... just a second.
<uvirtbot> New bug: #630747 in backuppc (main) "package backuppc 3.1.0-9ubuntu1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/630747
<RoyK> scar__: check the logs
<RoyK> scar__: /var/log/messages is a good start
<scar__> will do thanks
<scar__> looks like someone has been trying to hack me through ebox
<scar__> not entirely sure, could also be ebox itself going crazy. I'll firewall it for now and see how it goes
<RoyK> scar__: can you pastebin some of the logs?
<RoyK> if someone has tried to get in there - run chkrootkit on it to see if that finds anything - download it from http://www.chkrootkit.org/ directly instead of using apt-get in case that's been tampered with
<scar__> ran rkhunter right after reboot
<scar__> it didn't find anything :)
<RoyK> try chkrootkit, fresh install
<RoyK> rkhunter may have been tampered with....
 * RoyK is just paranoid, but paranoia can help sometimes
<scar__> yeah rather safe...
<scar__> at least the box is here at home, though it's only been running for a few days publicly and it's already being tested
<RoyK> this lists a few tools :) http://www.cyberciti.biz/faq/howto-check-linux-rootkist-with-detectors-software/
<scar__> sshd[833]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=184-106-198-140.static.cloud-ips.com  user=root
<scar__> got a few of those lines, seems it's time to disable text logins
<RoyK> just install fail2ban or denyhosts
<RoyK> both scan logs to stop intruders
<RoyK> fail2ban can use iptables or hosts.deny and can be configured to scan for intrusion on other services than ssh, whereas AFAIK denyhosts (currently) only works for ssh, but then, it can use a distributed list of denied hosts
<RoyK> denyhosts uses hosts.deny
<RoyK> sshd reads that
<scar__> I'll try denyhosts, I think that was the simple one
<RoyK> they're both quite simple to setup
<RoyK> just remember to enable synchronisation with denyhosts - it's disabled by default
<scar__> I'll also monitor the connections via the firewall, it should be really easy since there's only 2 people using this box
<RoyK> still, that's manual work - fail2ban/denyhosts will work automatically, and ssh attacks rarely succeed in the first few attempts unless you have bogus passwords
<scar__> awesome thanks RoyK, denyhosts is already doing its job :) by the way the reason why I'm putting rules in place on the firewall is to check if my server tries to send out spam or tries to attack other hosts
<RoyK> scar__: nice
<RoyK> scar__: you might want to change denyhosts' settings - it normally bans IPs for some days, whereas the defaults in fail2ban is an hour or so. A ban for days may be bad if something goes wrong somewhere
<scar__> I'll maybe set the ban longer, since there's always someone close to the box. If push comes to shove, I've got static ips at work that I can add to hosts.allow.
<scar__> made sure that allow has higher priority than deny, so it's all good :)
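The knobs RoyK mentions live in `/etc/denyhosts.conf`; a fragment with synchronisation enabled and a shorter ban window (values are examples, not recommendations):

```
# join the central denyhosts synchronisation service
SYNC_SERVER = http://xmlrpc.denyhosts.net:9911
SYNC_UPLOAD = yes        # contribute the hosts you block
SYNC_DOWNLOAD = yes      # pull in hosts blocked by others

# expire hosts.deny entries after a day instead of the multi-day default
PURGE_DENY = 1d
```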
<uvirtbot> New bug: #630810 in apache2 (main) "package apache2.2-common 2.2.14-5ubuntu8 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/630810
<touch-e> hi all
<touch-e> i need help about tomcat and ssl activation
<touch-e> please :)
<touch-e> (i have read the doc and followed the quick steps, but it seems not to be activated)
<incorrect> other than chkrootkit what else can i use to see if my system was compromised?
<joschi> incorrect: rkhunter for example
<joschi> incorrect: but if you need them to check your system, it's already too late actually.
<joschi> incorrect: a proper IDS and IPS from the start would have been useful then
<incorrect> joschi, i am not too sure how far they got
<joschi> incorrect: you should check your binaries with `debsums` and a clean package db
<incorrect> nothing adverse has happened, the account they got into didn't have sudo
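joschi's `debsums` suggestion in practice (standard flags). It compares installed files against the md5sums shipped in the package database, so a tampered package database defeats it:

```shell
sudo apt-get install debsums

# report only files whose checksums differ from the package database
sudo debsums -c

# include configuration files as well (noisier, but more thorough)
sudo debsums -ca
```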
<incorrect> weird, chkrootkit thinks my /sbin/init has been compromised
<yaboo> ok, how do I fix a raid5 array
<incorrect> yaboo, mdadm?
<yaboo> incorrect: done this to no avail, getting superblock errors
<incorrect> i just used sfdisk and dumped my partition table from another disk, then used --force to copy it onto the new disk
<incorrect> then added the new partitions back into the array
<yaboo> ah ok
<incorrect> wasn't a big deal
<incorrect> i guess your partitions are the wrong size or something
<yaboo> incorrect: power failure, rebooted, and array went from md0 to md_d0
<incorrect> errr
<incorrect> check your config
<yaboo> got a inactive sdd1[2](s) in proc/mdstat
<incorrect> maybe udev got screwed
<yaboo> ok
<yaboo> udev?
<incorrect> this is weird, /sbin/init still reports as infected even if i copy over one from a system that isn't infected
<yaboo> incorrect: can I just use fdisk instead of sfdisk
<incorrect> yaboo, depends if you want to spend 10 seconds copying over the partition table
<yaboo> incorrect: ok
<yaboo> incorrect: getting permission denied with sfdisk
<yaboo> sfdisk -d /dev/sdb | /dev/sdd
<incorrect> weird
<incorrect> i must admit i did put it to a file first
<yaboo> ok
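For the record, the step incorrect describes looks like this - yaboo's `sfdisk -d /dev/sdb | /dev/sdd` pipes into a device file, hence the permission denied. Device names are examples, and the second command overwrites the target's partition table:

```shell
# dump the healthy disk's partition table to a file...
sudo sfdisk -d /dev/sdb > sdb.table

# ...then replay it onto the replacement disk
sudo sfdisk /dev/sdd < sdb.table
```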
<incorrect> wow this guy really tried to get into my system
<yaboo> mdadm -A -f /dev/md0 /devsd[bcd]1
<yaboo> mdadm: cannot open device /devsd[bcd]1: No such file or directory
<yaboo> mdadm: /devsd[bcd]1 has no superblock - assembly aborted
<yaboo> incorrect: is there a fix to this
<incorrect> i've not had this problem
<yaboo> setup the correct partition in the sdd
<incorrect> hmm, i wonder why chkrootkit thinks /sbin/init is compromised
<incorrect> ah its a bug in chkrootkit, others have it too
<incorrect> maybe i should leave a VM running outside of my internal network
<incorrect> am i still here?
<yaboo> can't find a answer to fix my raid array
<yaboo> incorrect: you about
<SpaceBass> hey folks
<SpaceBass> rank amateur question - but how do I find out which /dev/ a pci card is assigned to?
<SpaceBass> I see it when I do an lspci -v
<kees> SpaceBass: I always look in /sys, but i bet there is a better way
<SpaceBass> kees, thats where I started too... in /sys
<oxi> hi
<oxi> I'm having huge trouble with my networking setup which I've been trying to solve for hours
<oxi> I would very much appreciate if anyone could help
<oxi> at the moment I have inbound traffic working but not outbound
<oxi> this is my routing setup: http://pastebin.com/faHK6Htq
<oxi> I can't ping or wget the outside from it
<SpaceBass> oxi openvpn?
<oxi_> SpaceBass: yes
<SpaceBass> oxi_, can you pastebin your ovpn config?
<oxi_> SpaceBass: thank you very much!
<SpaceBass> dont thank me yet - blind leading the blind here :D
<SpaceBass> oxi, 178.63.123.9 is your local IP?
<oxi> that's my vps
<oxi> SpaceBass: here: http://pastebin.com/99ZTMmV2
<oxi> the exact same "proplay.conf" config file is working on osx and windows for outbound traffic (ping google.com/traceroute google.com/wget google.com/...)
<savid> Anyone know how to get mod_vhost_alias installed on apache?  I can't find the package, and a2enmod says it's not found.
<SpaceBass> oxi, looking now...
<oxi> SpaceBass: I've been trying for hours to figure out what's wrong on my ubuntu server, why I can't achieve it there
<SpaceBass> oxi, can you pastebin the ovpn logs?
<oxi> SpaceBass: where do I find those? :-)
<oxi> SpaceBass: from the client or from the server?
<SpaceBass> oxi, not entirely sure... I use OpenVPN on BSD :D ...try /var/log/openvpn
<SpaceBass> oxi, both would be helpful, but lets start with the client
<SpaceBass> savid, apt-cache search <keyword>
<SpaceBass> savid, also: sudo apt-file search <filename>
<savid> SpaceBass, yeah, that's what I did first :-P
<SpaceBass> savid, then you are ahead of me :D
<savid> SpaceBass,  That's what I meant when I said I couldn't find the package
<coder> hi, is there any 10.04 sparc tftp image out there? I can't find it
<oxi> SpaceBass: client: http://pastebin.com/bgGvfm3E
<oxi> SpaceBass: server: http://pastebin.com/DfAKkSiJ
<SpaceBass> oxi, did you manually adjust the routing table?
<coder> no tftp image for sparc? really?
<oxi> SpaceBass: are you using OVPN on FreeBSD?
<SpaceBass> oxi, yeah, and osx
<oxi> SpaceBass: yes
<oxi> nice
<SpaceBass> oxi, mostly I use it b/t two BSD routers, for a site-to-site vpn
<oxi> very nice
<oxi> I'm trying to have an ubuntu box route to a vps
<oxi> SpaceBass: I can give you ssh access if that helps
<SpaceBass> oxi, your problem is clearly in the routing table - what happens if you don't adjust it yourself? clean reboot?
<oxi> SpaceBass: the routing table of my client or my server?
<SpaceBass> oxi, thanks for the vote of trust :D but I wouldn't go around offering SSH access to internet people ;)
<SpaceBass> I had a buddy do that in college, long story short, he still has an FBI record b/c of what the guy did once he was in the network
<oxi> SpaceBass: it's only a sandboxed VM :-)
<SpaceBass> ah :D
<SpaceBass> oxi, so the server is trying to push the route options to the client (which is standard ovpn) and the client is saying that the route command failed due to conflict
<oxi> SpaceBass: @buddy: hehe! ... :-)
<SpaceBass> my guess is that conflict is b/c you altered the routing table... which doesn't look right to begin with...gateways are all wrong
<oxi> SpaceBass: ah!!!!
<oxi> SpaceBass: that is extremely helpful
<oxi> now at least I realize what I'm facing
<SpaceBass> in the sever logs, see lines 870 and  871
<oxi> I can't believe I was stepping into the wrong direction for so long!
<oxi> I'll reboot my client
<SpaceBass> on the client logs look at 2414-2417
<oxi> SpaceBass: I did a fresh reboot. Does that seem wrong?: http://pastebin.com/FWxJJ0V3
<SpaceBass> oxi, can you ping the outside world now from that box?
<oxi> yes
<SpaceBass> oxi, if it is a VM, is the networking setup as nat or bridged?
<oxi> yes, bridged
<SpaceBass> whew! nat would be a nightmare
<oxi> but I have a windows box using the exact same bridged vmware adapter and using the exact same openvpn config in the same network ... which is working
<oxi> SpaceBass: yeah, NAT would be a real nightmare!
<SpaceBass> try to bring up the ovpn client now
<oxi> SpaceBass: the traceroute seems weird to me: http://pastebin.com/Me5tq8f6
<oxi> I'll fire up ovpn
<SpaceBass> what seems odd? that the traceroute failed?
<SpaceBass> whats the IP of the host that is running the VM?
<oxi> SpaceBass: http://pastebin.com/frq9SPuW
<oxi> SpaceBass: @traceroute: yeah, it seemed odd that it failed, but I think it doesn't have much to do with my trouble
<oxi> SpaceBass: the VM Host has ip 10.0.0.14
<SpaceBass>  ok, so the tunnel is up/open - can you ping across it?
<oxi> yes
<SpaceBass> so you can reach the server?
<oxi> yes
<oxi> but I can't ping the outside from it
<SpaceBass> ah
<oxi> SpaceBass: 178.63.123.25
<oxi> I can get in, but not out :-)
<SpaceBass> ok... one-way routing :D
<oxi> hehe, yep
<oxi> I trying to figure out how I can find out more about the iptables config push collision
<SpaceBass> oxi, can you pastebin the openvpn config file from the server?
<oxi> SpaceBass: sure, hold on :-)
<oxi> SpaceBass: there's not much kungfu in there: http://pastebin.com/kdRcBxFG
<SpaceBass> oxi, there are some clues, like "push redirect-gateway"
<SpaceBass> whats the lan IP of the server?
<oxi> SpaceBass: there's also some other iptables lines on the vps server:
<oxi> http://pastebin.com/NFa3Amu8
<oxi> the server has ip 178.63.123.9 and 10.8.0.1 and 10.8.0.21 for my vm which has 178.63.123.25 and 10.8.0.22
<SpaceBass> ok
<oxi> the config works for openvpn in windows and in osx
<SpaceBass> try this - comment out the line server 10.8.0.0 255.255.255.0
<oxi> I guess there's something special about my ubuntu box, even though I picked a default install from a fresh iso
<SpaceBass> Im not so sure it is with the ubuntu client ... windows openvpn does some stuff on the networking stack differently than linux
<SpaceBass> what client are you using on osx?
<oxi> Tunnelblick
<SpaceBass> great app!
<oxi> yeah, it's cool :-)
<SpaceBass> its interesting, the tun0 is getting the right info on the client
<SpaceBass> hang on, I'm setting up a new server and client similar to yours to test
<oxi> I took out that line
<oxi> it didn't work
<oxi> SpaceBass: I'm using http://www.turnkeylinux.org/blog/core-lucid-beta
<oxi> I wanted something very small and widespread supported :-)
<SpaceBass> sorry, that line was correct :D
<SpaceBass> brb
<oxi> the server is also ubuntu, but an openvz template running in a computational centre :-)
<oxi> nevermind, I'll kick that line in again :-)
<SpaceBass> hummm I'm afraid I'm running out of ideas :(
<SpaceBass> have you tried in #openvpn?
<oxi> SpaceBass: here's a successful connect from OS X, to the same server: http://pastebin.com/e1fcT4jb
<oxi> SpaceBass: yeah, also tried #openvpn
<oxi> :-)
<oxi> me too - I have no idea what's going on
<bjaanes> Hey, i was wondering: I got my DHCP/DNS server going now with ddns-update (updating the DNS every time i give out a lease). But now that I have reserved an IP in the DHCP - why won't it update the DNS with that info? Is that something i have to do manually, or can it be done with some configuration?
<oxi> SpaceBass: I rebooted the server and it worked!
<SpaceBass> oh wow!
<SpaceBass> thats great news oxi
<SpaceBass> wonder what changed?
<oxi> it seems that there was an old iptables rule present which kept it from working
<oxi> that rule was probably present from previous tries
<oxi> I probably had a typo and therefore routed outgoing traffic with the wrong rewrite
<oxi> I assumed that when I enter an iptables rule again it would override
<oxi> I guess it never overrode
<oxi> how do I list all present iptables rules?
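oxi's closing question has a short answer; both forms below are standard iptables (run as root, and remember the nat table is listed separately):

```shell
# every rule, printed in iptables-restore syntax
sudo iptables -S

# table view with packet/byte counters, numeric addresses
sudo iptables -L -n -v

# the NAT rules (where a stale MASQUERADE/rewrite rule would hide)
sudo iptables -t nat -L -n -v
```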
<oxi> SpaceBass: what's your paypal account? :-)
<SpaceBass> oxi, glad it is working!
<SpaceBass> I know how fustrating VPN issues can be!
<oxi> yeah, I was tapping in many wrong directions - it just seems that I was careless somewhere, but the configuration was fine
<oxi> it had nothing to do with the client :-)
<oxi> and I debugged the client for hours .-)
<MrUrbanity> disable X in 10.04 anyone?  Or converting a Desktop install to a Server install ?
<Pici> !nox | MrUrbanity
<ubottu> MrUrbanity: To start your system in text-only mode append 'text' (without the quotes) to the kernel line in the grub menu. You can access the grub menu by pressing Esc (Grub legacy) or Shift (Grub2) during boot. For more info see https://help.ubuntu.com/community/BootOptions#Text%20Mode
<MrUrbanity> ah ok.. well too late, i ripped gdm, xserver-xorg out via aptitude and rebooted, all works as I expected.
<Patrickdk> all you have to do is a apt-get remove ubuntu-desktop :)
<Patrickdk> or maybe even an: apt-get remove .*-desktop
<MrUrbanity> no that didnt do it all, i did that first.  Must have been a while since I messed with this because 10.04 has some weird stuff going on.  I did update-rc.d -f gdm remove and it was still booting.  had to try and read about the new startup stuff but not a lot on it around.
<MrUrbanity> eventually had to do ubuntu-desktop, gdm and xserver-xorg to get all the junk to stop from starting..
<Patrickdk> well, that marks them all as unneeded
<Patrickdk> then a apt-get autoremove will clean them up
<MrUrbanity> ah ok.
<Patrickdk> unless you marked something for forced install
<MrUrbanity> nah, pretty much a default fresh install.. just didnt have a server disk handy.
<Patrickdk> then you would have to do a forced uninstall
<Patrickdk> I install everything as a generic, barebone install
<Patrickdk> then install ubuntu-desktop, or server stuff
<MrUrbanity> yeah I should have too but it is a slow PC and it took about 15 minutes just to get to the install screen.
<MrUrbanity> slow.
<Patrickdk> that is cause you used desktop :)
<MrUrbanity> lol.. yep.
<Patrickdk> the netbook/minimalcd/servercd is much much faster
<Patrickdk> netbook/network :)
<MrUrbanity> yeah.. last time i did this (ie used the wrong disk) was a while back and i could just swap between them.. but that has been removed.
<MrUrbanity> i had a 8.10 server disk sitting there.. might have been faster to use that and upgrade. lol
<Pupeno> I have this content http://gist.github.com/566218 in /etc/cron.d/projectx and it seems it's not being run. Any ideas how to find out why?
<uvirtbot> New bug: #631040 in samba (main) "samba share on windows: ":" bug is back/still here" [Undecided,New] https://launchpad.net/bugs/631040
<silentwater77_> hi i ran into trouble upgrading mountall on lucid minimal. could someone pls be so kind and explain http://paste.ubuntu.com/488861/ to me
<uvirtbot> New bug: #631064 in apache2 (main) "connection "forbidden" on localhost after wireless connected" [Undecided,New] https://launchpad.net/bugs/631064
<Aison> hello, I have got 4 ubuntu servers, now I installed a 5th, older server that should run as a backup server. So all 4 servers should back up their system and defined user data to the backup server nightly
<Aison> is there some good backup software or do I have to create my own scripts with rsync or so?
<silentwater77_> Hello there, is anyone willing to help me with my problem with not loading daemons after reboot?
<RudyValencia> What's the easiest way to back up everything on an internal drive mounted at /srv to an external USB hard drive mounted at /media/backup ?
<stgraber> RudyValencia: "sudo rsync --delete -avP /srv/ /media/backup/" that's going to copy everything that's in /srv and not in /media/backup and remove anything that's in /media/backup and not /srv (so they are going to be exact copies with anything that was only in /media/backup being removed)
<RudyValencia> Ah.
<RudyValencia> Can I cron that?
<stgraber> sure
<RudyValencia> (add it to crontab as a root job done every... maybe 48 hours?)
<stgraber> yep, you should then drop the -v so it won't show progress information (not so useful when running from cron)
<RudyValencia> ...and also got to do something about automounting the USB drive.
<jjk9_> silentwater77_:  which daemons?
#ubuntu-server 2011-08-29
<ejat> hi .. i do the release upgrade on ec2 .. from maverick to natty then i get this
<ejat> http://paste.ubuntu.com/676897/
<ejat> is it ok for me to reboot or need to fix it 1st then reboot
<Doonz> hey guys anyone have experince with smal storage solutions?
<jcastro> jamespage: Does the etherpad-lite formula work? I just gave it a try.
<jcastro> it listens on port 9001 iirc?
<patdk-lap> smal?
<Doonz> small*
<jmarsden> Doonz: Please try to be much more specific about what you are asking.  A microSD card is a "small storage solution", and I have one of those -- but I don't think that is what you mean... is it?
<Doonz> well lets see. Ok Im trying to build a media storage setup for home around the 100tb mark. I currently run 25tb but what im finding is my server is getting too taxed and performance is suffering. So i have thought of breaking the one large system into multiple systems
<Doonz> i just dont understand a lot of the terms
<Doonz> and im not sure which way i should go without spending a billion bux
<trapmax> Any idea how to get rid of "File descriptor 15 (socket:[6111]) leaked on lvremove invocation." messages?
<van7hu> howdy
<van7hu> how could I know if my kernel was compiled with netfilter support?
<ayambit> van7hu: try to add iptables rule?
<van7hu> e.g. iptables ... etc?
<van7hu> you mean in ubuntu, is it default?
<ayambit> van7hu: yes, iptables. It is enabled by default (of course) in ubuntu.
<van7hu> but netfilter?
<ayambit> iptables needs netfilter, so if your iptables rule will work, netfilter is available.
<ayambit> And it is in ubuntu-server by default.
<van7hu> okay, thank you
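For the archives, a sketch of checking for netfilter support without adding any live firewall rules; both file locations are assumptions that vary by kernel and packaging:

```shell
# Check for netfilter support without touching iptables rules (sketch).
cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ] && grep -q '^CONFIG_NETFILTER=y' "$cfg"; then
    msg="netfilter: compiled in (per $cfg)"
elif [ -d /proc/net/netfilter ]; then
    msg="netfilter: active (per /proc/net/netfilter)"
else
    msg="netfilter: not detected by this sketch"
fi
echo "$msg"
```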
<uvirtbot`> New bug: #634102 in cloud-init (main) "t1.micro EC2 instances hang on reboot" [High,Fix released] https://launchpad.net/bugs/634102
<igcek> hello, what would be the best way to have multiple web servers on one static ip
<igcek> different machines with for. ex.
<igcek> ubuntu server installed on them
<ersi> igcek: Utilise different ports for the different daemons
<ersi> Or have some reverse proxy magic on :80 and the other web daemons on other ports and rewrite the requests
<ersi> igcek: Why several different web servers on the same IP though? Or do you just want to serve different domains/sites on the same web server?
<igcek> i have a production server and a production-to-be server... so they all have to be separated and on port 80.
<igcek> would it work if i just like redirected domain to ip with port and then do a port forwarding? ip:xx.xx.xx.xx:234
<ersi> igcek: You have one serving the 'production site' and one for testing?
<igcek> now i have one serving the production site, and it probably is not a good idea to put a testing web page next to it. but the overall idea is to have a physically different server for every company
<igcek> virtualisation
<ersi> igcek: Yeah, okay - I understand that. But why have it on the same IP?
<ersi> And the same port..
<igcek> they would all be web servers. (port 80) i only have one static ip. (worldwide)
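ersi's reverse-proxy suggestion, sketched for Apache: the one public IP answers on :80 and routes by Host header to the internal machines. All hostnames and RFC 1918 addresses below are invented examples, and mod_proxy plus mod_proxy_http must be enabled first:

```
<VirtualHost *:80>
    ServerName www.example.com           # production site
    ProxyPass        / http://10.0.0.2/
    ProxyPassReverse / http://10.0.0.2/
</VirtualHost>

<VirtualHost *:80>
    ServerName test.example.com          # production-to-be
    ProxyPass        / http://10.0.0.3/
    ProxyPassReverse / http://10.0.0.3/
</VirtualHost>
```

This keeps every site on port 80 for visitors; only the proxy machine needs the public IP.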
<linocisco> I want to make ubuntu mail server for windows clients. THat mail server would be for communicating one PC to another if internet is offline. I would like to know where I can find reference link.
<ersi> igcek: Why is changing IP not feasible? Like using :8080 for the test machine
<ersi> igcek: I meant port, not IP
<igcek> i would like to have transition as painless as can be
<linocisco> I keep seeking on youtube. all are with non-english version. and with GUI desktop. I want to see english version configuration on CLI only pure ubuntu server .
<ersi> linocisco: Usually you will need no further configuration than setting up the mail accounts on the same domain and serve that domain from that mail server
<ersi> linocisco: As in user@companyA.com will be able to mail colleague@companyA.com even if other networks are unreachable
<linocisco> ersi: Yes. I did choosing internet site.
<ersi> And?
<linocisco> ersi:  I would like to know how to create users and how to configure accounts on windows mail clients like outlook or thunderbird
<linocisco> ersi:  I tried telnet localhost 25 . it was fine
<ersi> Depends on configuration.. usually the default is to use whatever accounts are on the machine. ie local users (root, linocisco, etc in /etc/passwd)
<ersi> 25? Have you only configured SMTP?
<linocisco> ersi: as I have no extra computer. I am to setup my ubuntu text base server on VM and client is my windows 7 on celeron Toshiba Laptop
<ersi> Um >_> Hm
<linocisco> ersi: I have only one default account since creation. I have not setup DNS server or BIND config on that.
<ersi> Well, you don't really need that for a functional mail service.. What you need is something like postfix to handle incoming mail, and whatever you're using now for SMTP/shuffling outgoing mail
<ersi> I usually just use postfix and dovecot together
<ersi> There's like, endless amounts of documentation of both softwares (in English).. They can be a bit tricky to get to work if it's the first time though
<ersi> linocisco: Maybe this can be of service to you? https://help.ubuntu.com/10.04/serverguide/C/email-services.html
<uvirtbot> New bug: #836544 in mysql-dfsg-5.1 (main) "package libmysqlclient16 (not installed) failed to install/upgrade: trying to overwrite '/usr/lib/libmysqlclient.so.16.0.0', which is also in package mysql-cluster-client-5.1 0:7.0.9-1ubuntu7" [Undecided,New] https://launchpad.net/bugs/836544
<linocisco> ersi: The worst thing is I have no internet with my ubuntu server. When I installed the mail server, I did it using the CD installer; during installation I selected [Mail server]. I don't know how to install dovecot without internet, even from the CD-ROM after installation.
<Daviey> Who is in the mood for some bug triage?  Yeah!
<ersi> linocisco: Heh, oh yeah.. Hm, that always makes stuff a lot more... interesting.
<linocisco> ersi: does server installer include dovecot or Not ?
<ersi> linocisco: I don't know, I'm spoiled with always-internet :|
<linocisco> ersi: that sucks. our country has expensive internet
<linocisco> Does anybody know how to build email server for windows clients using ubuntu server installer CD only ?
<linocisco> is there any ubuntu server training video except the one by Michael Jang which is just in brief ?
<EriksLV> hi
<EriksLV> any recommendations for raid5 controller that works with ubuntu server?
<linocisco> EriksLV: hardware raid or software raid ? hardware RAID is better
<EriksLV> hardware raid
<linocisco> EriksLV: I would recommend ASUS motherboards
<EriksLV> asus
<EriksLV> for a server?
<EriksLV> a joke?
<EriksLV> I need PCIE RAID controller
<patdk-lap> what kind of drives are going be used?
<EriksLV> 4xWD RE4 1TB SATA2 3.5" 7200RPM 64MB
<EriksLV> part number: WD1003FBYX
<EriksLV> any recommendations patdk-lap?
 * soren doesn't care much for hardware raid
<Guybrush_T> hi guys! i want to use a mra to get all the mails from my different mail accs and forward them to a specific imap folder - i heard from fetchmail and getmail and also that they are not that geat pieces of software. anyone has a better idea?
<memoryleak> I'm trying to use a bash variable in a sed expression - how could I make this work:  sed 's/__USER__/${USERNAME}/g' template.conf
<EriksLV> maybe escape dollar sign...
<EriksLV> just a guess
<trapmax> or splice it in: '...'"$USER"'...'
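Both suggestions point the same way: the single quotes stop the shell from expanding `${USERNAME}`. The simplest fix is double quotes around the sed script. A minimal demo (the template line is invented):

```shell
USERNAME=alice
tmpl=$(mktemp)
printf '__USER__ home\n' > "$tmpl"    # stand-in for template.conf
# Double quotes: the shell expands ${USERNAME} before sed sees the script.
# In single quotes, sed would receive the literal text ${USERNAME}.
result=$(sed "s/__USER__/${USERNAME}/g" "$tmpl")
echo "$result"    # -> alice home
rm -f "$tmpl"
```

If the variable's value may contain `/` or `&`, either escape those characters first or pick a different sed delimiter, e.g. `sed "s|__USER__|${USERNAME}|g"`.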
<Ursinha> morning
<Daviey> Ursinha: o/
<zul> hi Ursinha
<Daviey> Today is a national holiday :/
<Daviey> zul: Stuff which concerns us on, http://people.ubuntu.com/~davewalker/component-mismatches-mir-track.html - can you check we need them, and raise a MIR if we do?
<Daviey> -carrot will be required for nova after all.
<Daviey> -dingus and -anyjson not looked at
<zul> kombu is not going to work?
<Daviey> -stompy needs body in the MIR
<Daviey> zul: nah, it turned out to be too heavy to replace this late in openstacks cycle.
<zul> Daviey: i was going to make a joke about it but good thing for the delete key
<Daviey> heh
<ersi> What's a "MIR"?
<jcastro> Main inclusion report: https://wiki.ubuntu.com/MainInclusionProcess
<Daviey> kees: Would you be able to look at the kombu MIR please? :)
<soren> Daviey: If we're stuck on -carrot, why do we need kombu?
<zul> soren: glance uses it
<linocisco> hi
<linocisco> how to setup ubuntu mail server to be used by microsoft outlook ?
<patdk-wk> what exactly is, ubuntu mail server?
<patdk-wk> a postfix/dovecot install?
<uvirtbot> New bug: #836728 in krb5 (main) "package libkadm5clnt-mit7 1.8.3 dfsg-5ubuntu2.1 failed to install/upgrade: error writing to '<standard output>': Success" [Undecided,New] https://launchpad.net/bugs/836728
<soren> zul: Ah.
<laserbled> Hi, I have logged into 10.10 64 bit through ssh - I installed xorg-xserver - but I cant startx - please tell me what to do to get to the x environment
<elz89> How can I configure slapd daemon to listen on only one of the network interfaces?
<bkerensa> elz89: Have you checked the slapd.conf ?
<elz89> bkerensa: no only /etc/ldap/ldap.conf I shall check slapd.conf now.
<bkerensa> elz89: Yeah I would check it and see if it allows you to define listening
<elz89> bkerensa: all I can find is /usr/share/slapd/slapd.conf and no mention in there of interface.
<bkerensa> odd
<bkerensa> elz89: Here http://www.openldap.org/doc/admin24/security.html
<elz89> bkerensa: I have that page already, and it does not mention how it is done on ubuntu server in a config file.
<elz89> Or does it mean if I start it once with specified interface it will remember that in config file for next start?
<bkerensa> perhaps
<bkerensa> let me dig a little more
<elz89> OK thank you :-)
<elz89> I'm happy to be given things to read as well.
<bkerensa> elz89: I looked pretty hard and it doesn't seem like there is any info in man or on google to do selective listening
<bkerensa> =o
<elz89> bkerensa: Yeah I had a good google :-P or so I thought.
<bkerensa> elz89: Have you considered asking a OpenLDAP dev?
<pmatulis> elz89: see /etc/default/slapd
<bkerensa> elz89: Did pmatulis help you out?
<kernelpanicker> feel free to suggest the right channel... but where does bind9 store it's DNS cache?
<bkerensa> kernelpanicker: I think its stored in memory unless you specific it to backup elsewhere
<bkerensa> kernelpanicker: Check /var/tmp
<uvirtbot> New bug: #836849 in samba (main) "Samba starts before static IPs set in network manager" [Undecided,New] https://launchpad.net/bugs/836849
<noecc> I've compiled git from source, aptitude install puppet wants to install git-core{a}.  How can I proceed without installing git-core{a}?
<bkerensa> noecc: Perhaps remove git then apt-get install git && puppet
<noecc> bkerensa: yes except I prefer the latest git from source.
<bkerensa> hmm ok does the latest git have a .deb?
<genii-around> noecc: Maybe set APT::Install-Recommends and APT::Install-Suggests both to zero
<bkerensa> genii-around: +1 I totally didnt think of that
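genii-around's suggestion as a persistent apt configuration file (the filename is an arbitrary example; files in apt.conf.d are read in lexical order):

```
// /etc/apt/apt.conf.d/99norecommends  (example filename)
APT::Install-Recommends "0";
APT::Install-Suggests "0";
```

The one-off, per-invocation form is `aptitude -o APT::Install-Recommends=0 install puppet`. Note this only helps if git-core is pulled in as a Recommends; whether it is a Recommends or a hard Depends of puppet can be checked with `apt-cache show puppet`.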
<uvirtbot> New bug: #836875 in bacula (main) "package bacula-director-mysql 5.0.3-1ubuntu5 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/836875
<elz89> pmatulis, bkerensa, : nothing in that file either :-(
<bkerensa> elz89: OpenLDAP devs maybe ask them thats what I would do :D
<bkerensa> elz89: If you find out how ping me and I will make a How-To for others
<pmatulis> elz89: this doesn't give you a hint:
<pmatulis> # Example usage:
<pmatulis> # SLAPD_SERVICES="ldap://127.0.0.1:389/ ldaps:/// ldapi:///"
<Daviey> soren: glance needs kombu
<Daviey> Ah, zul already answered you
<Daviey> soren: in fact, glance D4 is blocked in unapproved pending the kombu MIR.
<elz89> pmatulis: Thank You
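pmatulis's pointer spelled out: on Debian/Ubuntu the listening URLs for slapd live in /etc/default/slapd, so restricting it to a single interface looks like this (the address is an invented example):

```
# /etc/default/slapd -- listen on one interface plus the local IPC socket
SLAPD_SERVICES="ldap://192.168.1.10:389/ ldapi:///"
```

Restart slapd (e.g. `sudo /etc/init.d/slapd restart`) for the change to take effect; this is the same `-h` URL-list mechanism the OpenLDAP admin guide describes, just set via the init script's defaults file.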
<uvirtbot> New bug: #836889 in bacula (main) "package bacula-director-mysql (not installed) failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/836889
<kernelpanicker> I'm setting up postfix on ubuntu 10.04; how do I hold mail for users in such a way that they can't log into the system via, say, SSH?
<alamar> by not giving them a valid shell for example?
<alamar> or by using non-system user mail accounts
<TheEvilPhoenix> ^
<kernelpanicker> well... I'm creating a VPS to handle the mail and lists for an existing server.  I'll have some mail coming in for some regular users, and the rest will be list traffic.
<kernelpanicker> I'm unclear on where the mail will be 'held' for the list members, and how to deal with the 'regular' mail users...
<alamar> kernelpanicker: mail will not be held for mailinglist users
<kernelpanicker> I have to use postfix/dovecot/mailman, as well, as that's what I've been told to do...
<alamar> it will be delivered to the mailinglist's registered addresses
<kernelpanicker> and I'll be honest, I've been wrestling with this for almost 2 1/2 weeks now...
<kernelpanicker> something that would take one of you probably 3 hours...
<kernelpanicker> because I don't have a clue.
<alamar> there's like a million howtos that exactly describe how to set up this software constellation
<kernelpanicker> But onward ho, right?
<kernelpanicker> I have a bookmark for each how to...
<kernelpanicker> all of them imperative masterpieces, for particular distros...
<kernelpanicker> etc...
<alamar> well maybe you should pay someone who has more of a clue
<kernelpanicker> I would, if I had more than, what, .27 cents in my pocket.
<kernelpanicker> and also, I want to learn about this stuff.
<kernelpanicker> that was 27 cents...
<kernelpanicker> at any rate...
<kernelpanicker> it seems like the non list email recipients need something to hold their mail, right?
<kernelpanicker> and that not all of them should be able to log into the box...
<kernelpanicker> so I can't just make them into users, right?
<kernelpanicker> There should be some mechanism for holding mail for them, but not making them users, and I recall something like 'adduser -s /sbin/nologin emailusername; passwd emailusername' on other flavors of linux...
<alamar> you can do more or less anything. you can create individual system users, you can prevent them from logging in, you could create virtual users (meaning non-system, but in some other database like ldap or mysql)
<kernelpanicker> but that doesn't seem to work in Ubuntu.
<kernelpanicker> alamar: OK
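The remembered command is close; on Ubuntu the usual stumbling block is the path, since nologin ships as /usr/sbin/nologin rather than /sbin/nologin. A sketch, printed rather than executed because account changes need root, and with "mailuser" as a made-up account name:

```shell
# Ubuntu's nologin path (some other distros use /sbin/nologin, which is
# the likely reason the remembered command "didn't work" here).
NOLOGIN=/usr/sbin/nologin
create_cmd="adduser --shell $NOLOGIN --disabled-password mailuser"
modify_cmd="usermod -s $NOLOGIN mailuser"
echo "$create_cmd"   # create a mail-only account with no interactive shell
echo "$modify_cmd"   # or lock down an existing account
```

Mail delivery to such an account still works, because the MTA writes to the mailbox itself; the shell only matters for interactive logins like SSH.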
<alamar> kernelpanicker: it is not a good idea to try to administer a public running server if you do not have any idea what you are doing
<kernelpanicker> alamar: tell me about it
<kernelpanicker> doesn't life suck
<ersi> Unfortunately, with mail services - it's usually learning by doing
<alamar> ersi: you can learn and do a LOT by reading appropriate documentation and using a controlled environment
<kernelpanicker> it's also not a good idea to get married, have kids, or jump from hovering helicopters
<alamar> or it will be learning the hardware by ending up on nearly every blacklist
<alamar> and having a non functional mailserver anyway..
<glebaron>  I have a question about ubuntu 10.04 server with multipath drive arrays attached.  http://paste.ubuntu.com/677378/ is my multipath listing. http://paste.ubuntu.com/677380/ is my ls -al from /dev/mapper/
<alamar> s/hardware/hard way/
<kernelpanicker> what's fantastic is that if one does decide to learn, one can plunge in, read documentation, and get help from irc channels when you're stuck ;)
<ersi> alamar: Well well, you can prepare all you want.. when you go live, you can bet your horse on that something's wrong anyway :P
<alamar> ersi: but then I didn't rush into running live before even knowing what I'm doing
<glebaron> All is good until I reboot the server. When it comes back up, in /dev/mapper/ all of the mpathxp#'s are missing.
<glebaron> Then the only way I can get them back is to run partprobe /dev/mapper/mpathx.
<ersi> alamar: All sound advice, for sure.
<glebaron> I am thinking that a reboot should not cause it to loose the partition maps.
<glebaron> *lose
<kernelpanicker> alamar is 100% right
<alamar> ersi: it's servers like that which are sending millions of spam mails before their admins EVEN NOTICE it
<kernelpanicker> so, alamar, what are your top two ways of locking down your servers to prevent them from becoming spam farms, and I'll go google them to learn.
<ersi> alamar: I was not saying that one should rush into deploying, without looking at the documentation or taking the 'simple' steps to make sure you won't get instablacklisted
<kernelpanicker> or just one will get me going.
<ersi> Always always authenticate users of your outgoing services
 * kernelpanicker googles 'authenticating users of outgoing mail services'
<ersi> "Outgoing authentication" is perhaps a better search term
<kernelpanicker> ok
<tarvid> if I have a string of nsupdate commands that appear to do what I want to do, where is the best place on the client to hook them?
<alamar> set up dns properly (forward confirmed reverse dns), read up on spf (some servers will not accept mail without proper spf records for your domain), implement proper user authentication, provide ssl, use monitoring (munin/nagios), read the documentation for all used software components (postfix, dovecot, mailman and whatever your authentication backend will be)
<alamar> think about using dnsbls/rbls
<alamar> think about greylisting and/or whitelisting
<alamar> and run ntpd to keep your time in sync if you don't already do so
<alamar> don't run any other unnecessary services, keep up with the security updates for your distribution
<alamar> think about using ratelimiting rules in your firewall, maybe use some mechanism like fail2ban to monitor/ban password/account cracking attempts
<alamar> test your configuration (at least with http://www.abuse.net/relay.html ) if it can be abused as open relay
<Daviey> soren: Around?
<Daviey> I wanted to talk to you about greenlet.
<soren> Shoot
<Daviey> soren: So.. Firstly.. I had NFI it was monkey patching.. Really confused me why standalone code was working, but not inside openstack :)
<Daviey> soren: secondly.. Do you know how to NONBLOCK ?
<Daviey> This crappy snippet works with standard os, but not greenlet monkey patched: os.fdopen(os.open(fpath, os.O_RDONLY | os.O_NONBLOCK))
<Daviey> ie, it's blocking.
<soren> Yh.
<soren> Uh, even.
<soren> Sorry, what are you trying to do?
<koolhead17> RoAkSoAx: ping
<Daviey> soren: so.. I want to open a char device.. so "cat foo" would hang forever as there is no EOF.  It's like doing a tail -f.
<soren> Daviey: Ok.
<Daviey> I want to capture what is there, which is achieved with the fugly snippet up there using standard os.
<soren> Daviey: So you want it block.
<Daviey> but the monkey patched os ignores the os.O_NONBLOCK
<Daviey> no, i want it to NOT block
<soren> Daviey: You're confused.
<soren> Or you
<Daviey> ah.  I bet os.O_NONBLOCK = None in the monkey patched os.
<soren> 're making me confused.
<Daviey> hold fire.. lemme check my theory.
<soren> Daviey: You say: 'so "cat foo" would hang forever as there is no EOF.
<Daviey> Yes.
<soren> Daviey: That means it's blocking.
<soren> Non nonblocking.
<Daviey> yes, i want it to not block.
<soren> Ok... Let's pretend I didn't hear anything but that, shall we?
<soren> :)
<Daviey> ok :)
<soren> Specifically, I'll ignore all the confusing bits you said. :)
<Daviey> I'm not sure what part confused.. :/
<soren> WEll, if "cat foo" *hangs* forever... That means it's blocking.
<Daviey> correct
<soren> ...but you say you want non-blocking.
<soren> That's confusing.
<soren> Or a *REALLY* bad example of what you want to do.
<Daviey> I want to cat foo and for it to NOT block.
<soren> Aha!
<soren> Awesome.
<soren> Ok.
<soren> So..
<soren> This is why I hate eventlet.
<Daviey> my theory is that os.O_NONBLOCK is not implemented.
<soren> You don't get to play with non-blocking I/O.
<soren> It does it for you and you only get to pretend you're dealing with blocking I/O.
<SpamapS> eventlet means that every event "blocks" but returns control to the event loop, right?
<soren> So all your code ends up looking like it's using blocking I/O, but behind the scenes, eventlet does a bunch of magic to make it non-blocking.
<soren> Yes, this is easier for people who don't grok how to write proper async code, but it's really, really confusing if you do.
<soren> SpamapS: That's essentially what happens behind the scenes, yes. Hidden away. Unless you go and look, you haven't a clue there's such a thing as an event loop.
<soren> For better or worse.
<Daviey> soren: both values of os.O_NONBLOCK == 2048..  So it gets deeper.. *sigh*
<soren> Daviey: You need to stop making assumptions about anything.
<soren> Daviey: Because eventlet probably monkey patched those assumptions to no longer be true.
<Daviey> soren: Well i mean, 2048 is the value in both the patched and pure os.  However, the 'deeper' means it's not implemented within greenlet.
<soren> Daviey: Do you have some code that demonstrates your problem that I can help you debug?
<Daviey> soren: It would be easier for you to share my env tbh. Give me a few.
<soren> Daviey: ok, cool.
<Daviey> soren: ssh ubuntu@91.189.93.86 -t bash ; sudo -s ; screen -xr
<Daviey> soren: The only window running vi is the one to grok
<soren> Daviey: What are you trying to achieve again? Does the existing code not work?
<soren> Daviey: I wonder how long it's been since I wrote the Xen code. It's not entirely impossible it predates the eventletification.
<Daviey> soren: sniffing bug 832507, and using a ring buffer, rather than a standard file.
<uvirtbot> Launchpad bug 832507 in nova "console.log grows indefinitely" [Low,Confirmed] https://launchpad.net/bugs/832507
<RoAkSoAx> koolhead17: pong
<koolhead17> RoAkSoAx: do you have few minutes? i got few questions.
<RoAkSoAx> koolhead17: sure
<Daviey> soren: so get consolelog blocks using standard os read() as it doesn't EOF
<koolhead17> cool
<soren> Daviey: ...for Xen.
<Daviey> soren: I'm only testing against kvm/qemu atm
<soren> Daviey: That doesn't make any sense.
<soren> Daviey: kvm logs directly to a file.
<soren> Daviey: A plain file. It absolutely EOF's.
<Daviey> soren: which is the roblem.
<Daviey> problem.
<soren> I understand that that is the problem outlined in the bug.
<Daviey> 20:31 < Daviey> soren: sniffing bug 832507, and using a ring buffer, rather than a standard file.
<uvirtbot> Launchpad bug 832507 in nova "console.log grows indefinitely" [Low,Confirmed] https://launchpad.net/bugs/832507
<soren> Ok, you've lost me somewhere.
<soren> kvm logs to a file.
<soren> This file can grow indefinitely, which is really bad.
<soren> Now, where does the need for non-blocking I/O come into the picture?
<Daviey> soren: Yes, which is why i am sniffing using a ringbuffer char device, rather than a plain test file.
<Daviey> text file*
<soren> A.... ringbuffer char device?
<soren> Enlighten me please.
<Daviey> soren: emlog
<Daviey> soren: http://www.circlemud.org/jelson/software/emlog/
<koolhead17> RoAkSoAx: 1. i am using cobbler system for adding a specific system and i have defined it to acquire a static IP in the preseed like this http://pastebin.com/Mw7UiVNB but its failing.
<Daviey> soren: WIP - http://paste.ubuntu.com/677426/
<soren> Daviey: Oh, and it doesn't only support Linux 2.2. It supports Linux 2.4, too!
<soren> Oh, wait.
<RoAkSoAx> koolhead17: what's failing?
<Daviey> soren: I'm actually using a fork :)
<koolhead17> netcfg/choose_interface=eth1 as additional kopts
<koolhead17> RoAkSoAx: it fails to assign this static IP after reboot, rather i would say it just takes DHCP ip for installing the whole system :(
<koolhead17> am trying to provision Oneiric :D
<Daviey> soren: BTW, if you have a better idea.. i'm all ears :)
<RoAkSoAx> koolhead17: right, so on installation it doesn't use the values you passed on the preseed but rather it used DHCP?
<koolhead17> RoAkSoAx: yes
<RoAkSoAx> koolhead17: so when you reboot, it does not have the network values you have specified
<koolhead17> yes
<RoAkSoAx> koolhead17: that might be a problem of the installer, ping cjwatson
<RoAkSoAx> koolhead17: other thing you can do, is enable DHCP on cobbler, and set the values for the system, so you configure your network to obtain DHCP from the cobbler server
<koolhead17> RoAkSoAx: yeah i tried that as well, inside cobbler system options, even it fails
 * koolhead17 pokes cjwatson: 
<RoAkSoAx> koolhead17: it shouldn't fail, I have it working
<Daviey> soren: So that patch does work, when using the standard os for read()'ing the file.
<RoAkSoAx> koolhead17: I'd recommend you use dnsmasq
<RoAkSoAx> sudo vim /etc/cobbler/modules.conf and enable DHCP
<RoAkSoAx> for dnsmasq
<RoAkSoAx> and then in /etc/cobbler/settings
<RoAkSoAx> koolhead17: then on a system add an interface with MAC, IP address etc
<koolhead17> RoAkSoAx: i have currently dhcp server whose configuration file is fetched via cobbler only.
<koolhead17> also if i have a configuration in my profile and am using that with my system, will cobbler overwrite the parameters of that profile for my syste?
<koolhead17> *system
<koolhead17> RoAkSoAx: am going to remove this configuration from my preseed and add everything in systems file as you suggested.
<Daviey> soren: I don't think it can be piped to rotatelog.  The other option is logrotate, but that will react too slowly.  ISTM that this is a cleaner solution.
<RoAkSoAx> koolhead17: the way how we are doing it in orchestra is enable dhcp on the cobbler server and add systems with the information we want the dchp server to provide
<Daviey> (Being a non-standard kernel module, would need to be opt-in)
<soren> Daviey: I don't know.. A kernel space solution seems wonky to me.
<koolhead17> RoAkSoAx: ooh, you mean the hard-coding approach, everything handled by the DHCP server, as we often do in practice with a print server on the network?
<RoAkSoAx> koolhead17: right, but when in a cobbler system you add the network information for that particular system, then cobbler handles DHCP automatically
<Daviey> soren: Other ideas?
<zul> gah
<Daviey> soren: The only other thing i can think of is polling the serial port.. ?
<soren> Daviey: Why don't you think rotatelogs will work?
<Daviey> soren: Can kvm et al pipe its output?
<koolhead17> RoAkSoAx: yes i will add ip/subnet/gateway in my systems for particular provision and at same time write the same in dhcp.conf (populated by cobbler) right?
<soren> Daviey: named pipe?
<Daviey> soren: So i looked at a FIFO first, but you can't limit the size?
<Daviey> and FIFO would block, unless it has something cat'ing it
<soren> Daviey: If we were to poll the serial port.. How would that work? I hope kvm wouldn't block waiting for us to read from its serial port buffer.
<Daviey> soren: Honestly, i don't know.. But i did look at using a FIFO first.. but that seems to have the same problems.
<soren> Daviey: Yeah, that's a fair point. killing rotatelogs would hang kvm => suck.
<RoAkSoAx> koolhead17: yes
<RoAkSoAx> koolhead17: though, by default in orchestra we are using dnsmasq
<Daviey> soren: This emlog is essentially a fifo which is a ringbuffer.
<soren> erk.
<soren> rotatelogs isn't the way to go.
<soren> I forgot its semantics.
<uvirtbot> New bug: #832507 in nova "console.log grows indefinitely" [High,Confirmed] https://launchpad.net/bugs/832507
<Daviey> soren: There is a userspace (perl) fifo ringer implementation, but that seemed much dirtier.
<soren> Daviey: Have you heard of vbuf?
<soren> (I hadn't)
<soren> Uses libvrb (which I also hadn't heard of)
<Daviey> soren: isn't that a circular char device?
<Daviey> as in, when it reaches the bottom it writes to the top?
<soren> I'll know in a few minutes. :)
<Daviey> heh, i've not used vbuf TBH.. but that was my memory.
<soren> At any rate, this isn't hard to solve entirely in userspace with a separate helper.
<soren> If we just want to solve our own problem, that is.
<soren> If we want to write a general-purpose ring-buffer thingamajig I imagine it'll get hairier quickly.
<soren> but..
<soren> Hm.
<Daviey> well it's crazy we don't have something like this in stock kernel TBH.
<Daviey> soren: Looking at the man page for vbuf - looks like it would need a named pipe to use as input.. meaning you can still own the server.
<Daviey> p0wn? Wow, i struggle to keep up with the language.
<soren> Daviey: Let me see what libvirt can do..
<Daviey> soren: tcp could work, that would mean it would need a listener to suck it in.
<Daviey> Hmm, unix socket would be cleaner and easier to track i suppose.
<soren> Daviey: Trying to work out how kvm would respond to a named pipe whose consumer went away.
<Daviey> soren: normally the consumer going away kills the push?
<Daviey> as in, when i tried this with cat /dev/random > fifo & ; cat fifo ; sleep 10s ; kill second cat, kills the first cat.
<soren> Yeah.
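Daviey's cat experiment in runnable form, on a throwaway fifo - it shows the coupling both of them are worried about: the writer's open() blocks until a reader appears, and the reader only sees EOF when the writer closes:

```shell
d=$(mktemp -d)
mkfifo "$d/fifo"
( echo hello > "$d/fifo" ) &   # writer: open() blocks until a reader opens the fifo
got=$(cat "$d/fifo")           # reader: drains the fifo, gets EOF when writer closes
wait                           # reap the background writer
echo "$got"
rm -r "$d"
```

This is exactly why a plain fifo is awkward for kvm's console log: kill the reader and the producer stalls on its next write, which is the "killing rotatelogs would hang kvm" problem above.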
<RoAkSoAx> smoser: http://pastebin.ubuntu.com/677445/
<RoAkSoAx> smoser: is the PPA stuff fixed?
<Daviey> RoAkSoAx: is that because the node doesn't have access to the keyserver?
<smoser> well that is just an update there.
<smoser> try re-running that command on the instance and see what happens
<RoAkSoAx> Daviey: that, but there was a recent change with python-software-properties that broke cloud-init too
<RoAkSoAx> smoser: but the stuff of importing PPA's is fixed?
<smoser> RoAkSoAx, it should have been.
<RoAkSoAx> smoser: ok cool thanks
 * RoAkSoAx will be back in half an hour
<glebaron> I have a question about ubuntu 10.04 server with multipath drive arrays attached.  http://paste.ubuntu.com/677378/ is my multipath listing. http://paste.ubuntu.com/677380/ is my ls -al from /dev/mapper/
<glebaron> All is good until I reboot the server. When it comes back up, in /dev/mapper/ all of the mpathxp1's are missing.
<glebaron> Then the only way I can get them back is to run partprobe /dev/mapper/mpathx.
<glebaron> I am thinking that a reboot should not cause it to lose the partition maps.
<ppetraki> glebaron, so what's your /etc/multipath.conf look like?
<glebaron> ppetraki: http://paste.ubuntu.com/677457/
<ppetraki> glebaron, have you rebuilt your initramfs since creating this file?
<ppetraki> glebaron, is multipath-tools-boot also installed?
<elz89> Just wondered, is there something similar to "fpaste" in Fedora?
<Daviey> soren: cracked it!
<soren> Daviey: Cool!
<ppetraki> glebaron, two nits with your config file, 1) blacklisting sda isn't deterministic, you can blacklist by make/model instead
<Daviey> i'm not saying it is the best solution, but at least an option.
<ppetraki> glebaron, 2) change product "VTrak"  => product "VTrak.*"
<ppetraki> glebaron, don't want to be the victim of some regexp ambiguity, even though it's working now
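A sketch of how both of ppetraki's suggestions could look in /etc/multipath.conf. The "ATA" vendor string for the local disk is an assumption; check the real vendor/product strings with lsscsi before copying this:

```shell
blacklist {
    device {
        vendor  "ATA"       # assumed vendor string for the local disk;
        product ".*"        # blacklist by make/model, not by "sda"
    }
}
devices {
    device {
        vendor  "Promise"
        product "VTrak.*"   # anchored regexp, as suggested above
    }
}
```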
<glebaron> ppetraki: thanks for tips.
<glebaron> I am not the only one working on this server, so I will answer to the best of my knowledge.
<ppetraki> glebaron, if you have -boot installed, multipath is run in the initramfs, against the config file, which should create all your maps
<ppetraki> glebaron, np
<ppetraki> glebaron, if you're *not* booting from SAN *and* these maps aren't being created, that's an interesting problem
<glebaron> ppetraki: -boot is installed and initramfs has been updated recently.
<yakster> hello everybody! anyone here know how to make apache case insensitive...
<glebaron> ppetraki: but we are booting from local disk and not san.
<yakster> say www.GOOGLE.com or GoOGle.com will redirect to a local page like www.Google.com  btw, this is when listening for external web requests
<yakster> I hope that I explained that correctly..
<ppetraki> glebaron, so those udev rules should have been run when the block devices were added initially.
<ppetraki> glebaron, something went wrong
<ppetraki> glebaron, you should be able to run 'multipath -v0' to create new paths. For example, multipath -F, will clear all unused paths, and multipath -v0 should recreate them
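The recovery sequence described above, as commands (run as root; the mpatha device name is illustrative):

```shell
sudo multipath -F      # flush all unused multipath maps
sudo multipath -v0     # quietly recreate maps from multipath.conf
sudo partprobe /dev/mapper/mpatha   # re-read the partition table
```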
<ppetraki> glebaron, what version of ubuntu is this?
<glebaron> ppetraki: 10.04 Server 64-bit
<yakster> anyone know how to make apache case insensitive?
<TheEvilPhoenix> yakster:  it inherits the restrictions of Linux
<koolhead17> RoAkSoAx: not tried dnsmasq before, currently working on some automation. will read up on it.
<TheEvilPhoenix> Linux is case sensitive
<yakster> correct...
<TheEvilPhoenix> yakster:  therefore, Apache is case sensitive
<yakster> I understand that...
<TheEvilPhoenix> there's no real method to change that
<soren> Err..
<soren> there is.
<soren> http://httpd.apache.org/docs/current/mod/mod_speling.html
<glebaron> ppetraki: thanks so much for input. Things already looking a lot better.
<yakster> but say I have a webpage, and the way that it was posted is... http://MySite.dyndns.info/Welcome.html and my dad, being the super smart guy he is, just types in http://mysite.dyndns.info/WELCOME.HTML  he is going to get a 404 error, page not found...
<TheEvilPhoenix> soren:  orly? *checks*
<TheEvilPhoenix> soren:  is it in the repos?
<ppetraki> glebaron, here's an example of make/model blacklisting: http://lists.alioth.debian.org/pipermail/pkg-lvm-maintainers/2010-May/002910.html
<ppetraki> glebaron, lsscsi is your friend
<glebaron> ppetraki: Will have to wait for a day or so to reboot and see if it's fixed for good.
<ppetraki> glebaron, I understand, what interconnect are you using? SAS?
<yakster> ok, that is great if I have an illiterate person accessing my site, but not for one who likes to type in proper case text.
<koolhead17> soren: ping
<soren> TheEvilPhoenix: Sure.
<koolhead17> RoAkSoAx: thanks. i will update you about development tomorrow. :)
<soren> TheEvilPhoenix: in apache2.2-bin, even.
<smoser> RoAkSoAx, http://paste.ubuntu.com/677474/
<soren> koolhead17: What's up?
<smoser> that is output of an oneiric instance with cloud-config from http://paste.ubuntu.com/677475/
<smoser> so it seems it's working to me.
<glebaron> ppetraki, I do not know about interconnect. Promise unit is SATA disks direct connected via fiber. I inherited it already hooked up. Is there an easy way to find out?
<hallyn> jdstrand: hi - in the qrt, the libvirt testsuite help starts with
<hallyn> USAGE:  *** DEPRECATED ***
<ppetraki> glebaron, yeah, its SAS, it can support either, I looked it up
<hallyn> I don't understand.  what is deprecated?
<yakster> ok, how do i enable that mod spelling
<koolhead17> soren: doing great. need your mail id. working on jenkins/open stack automation might need your help on that.
<TheEvilPhoenix> soren:  how do i activate that module?
<jdstrand> hallyn: let me look
<soren> TheEvilPhoenix: sudo a2enmod speling, presumably.
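A hedged sketch of enabling mod_speling on an Ubuntu Apache 2.2 box; note it only corrects capitalization plus a single misspelling per URL, so badly mangled URLs will still 404:

```shell
sudo a2enmod speling
# then add to the server config or the relevant <VirtualHost>:
#   CheckSpelling On
sudo service apache2 restart
```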
<soren> koolhead17: soren@openstack.org
<koolhead17> soren: thanks :D
<jdstrand> hallyn: can you paste with the command?
<jdstrand> soren: dude!
<ppetraki> glebaron, make sure you update the initramfs to reflect your new changes before you reboot. If you continue to have problems we'll have to dig into debugging starting with why the udev rules didn't have the desired impact
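The initramfs rebuild ppetraki mentions might look like this (requires root, and multipath-tools-boot must be installed for the config to be copied in):

```shell
sudo update-initramfs -u -k all
# optionally confirm multipath.conf made it into the image:
lsinitramfs /boot/initrd.img-$(uname -r) | grep multipath
```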
 * koolhead17 needs to dig deep inside jenkins now :D
<hallyn> jdstrand: doh!  I was looking at the libvirt-aa-secdriver.sh
<hallyn> Reckon I don't need to be running that by hand :)
<jdstrand> hallyn: oh yes, don't use that :)
<soren> jdstrand: Dude, indeed!
<glebaron> ppetraki, thanks. I am already understanding much better. I will update initramfs and reboot soon. If that doesn't work, I will be back with more questions!
<ppetraki> glebaron, :)
<jdstrand> hallyn: that has all been converted over to test-libvirt.py anyway
<yakster> nope that didn't work...
<jdstrand> soren: :)
<yakster> just tried it, and it doesn't even remotely work..
<yakster> hello?
<yakster> quiet all the sudden
<Daviey> hallyn: nice fix on the qemu-kvm package.
<Daviey> hallyn: FYI qemu-kvm 0.15 final just hit experimental.
<Daviey> smoser: BTW, i had a kernel panic rebooting an instance earlier.
<Daviey> smoser: It *might* have been my fault, but mentioning it JIC http://pb.daviey.com/KKOF/
<hallyn> Daviey: on qemu-kvm 0.15...  anything more i can do to help that along?
<Daviey> hallyn: I think we really need to open that issue post b1.
<Daviey> I really don't want to screw b1 if it turns out bad.
<hallyn> Daviey: ok
<CluelessPerson> hey
<CluelessPerson> for some reason samba suddenly stopped working
<CluelessPerson> my server still shows up on the network
<CluelessPerson> but when I try to connect from my windows netbook, it tells me there's no answer/connection
<Daviey> Who wants to earn a gold star?
<Daviey> Someone fixing bug #837049 would make me very happy!
<uvirtbot> Launchpad bug 837049 in php5 "php5 FTBFS (amd64 only)" [High,Confirmed] https://launchpad.net/bugs/837049
<arrrghhh> hey all
<arrrghhh> i want to secure my server with ssh key auth instead of password auth
<uvirtbot> New bug: #837049 in php5 (main) "php5 FTBFS (amd64 only)" [High,Confirmed] https://launchpad.net/bugs/837049
<arrrghhh> my only issue is adding all of the machines i need before disabling password auth - should i just reuse the same key for all machines?  what do i do for mixed environments?  some of the machines accessing the server are linux, some are windows....
<glebaron> arrrghhh: it's a per-user thing. Each user has their own private key on their machine, and their public key is installed on their servers in user accounts that you want them to have access to.
<glebaron> *your servers*
<arrrghhh> yes
<arrrghhh> well this is one server
<arrrghhh> and i am always the client machine
<arrrghhh> but i have many clients
<arrrghhh> cell phone, work PC, home laptop (win&lin)
<arrrghhh> then i run into the issue of "other machines"
<glebaron> the private key has to be on each of those.
<arrrghhh> i rarely need to access my server from other machines, but what do i do when i run into a client that isn't setup?
<arrrghhh> ok so you'd recommend i get all the keys setup, then disable pass-based auth?
<glebaron> just copy the private key to that machine.
<glebaron> yes.
<arrrghhh> ok
<arrrghhh> i guess copying the private key seems difficult across platforms
<arrrghhh> private keys for putty don't seem compatible with linux and vice-versa... no?
<glebaron> they work.
<glebaron> It's not optimal.
<arrrghhh> i remember them not working.  or perhaps i didn't know how to get it to work?
<arrrghhh> ok
<arrrghhh> so i should have a ppk for putty for windows clients, and another key for linux perhaps?
<glebaron> yes.
<glebaron> we have windows users using putty ppk on their windows machines
<glebaron> and connecting to linux servers.
<glebaron> and we have linux/mac users using regular ssh keys.
<glebaron> but we don't normally put putty keys on any client other than windows.
<arrrghhh> ah
<arrrghhh> ok
<arrrghhh> i think that was my problem
<arrrghhh> last time i was trying to have one key for all clients
<arrrghhh> and i ran into issues trying to get putty to work with linux priv key, or vice-versa.
<arrrghhh> thanks
<glebaron> :)
<arrrghhh> crap glebaron left.
<arrrghhh> how do i transfer the client key to the host from Windows/putty...?
<arrrghhh> crap
<qman__> ssh-copy-id makes it easy on linux, but it's as simple as putting the public key into ~/.ssh/authorized_keys on the server
<qman__> just ssh in, edit that file, and copy/paste
<qman__> arrrghhh, ^
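A sketch of both routes qman__ describes (user@server is illustrative). From a Linux client ssh-copy-id does everything in one step; from Windows/PuTTY, paste the one-line OpenSSH-format public key that PuTTYgen shows in its "Public key for pasting into OpenSSH authorized_keys file" box:

```shell
# Linux client: copies the default public key and fixes permissions
ssh-copy-id user@server

# Windows/PuTTY: append the key by hand over a password-auth session
ssh user@server 'mkdir -p ~/.ssh && chmod 700 ~/.ssh && \
  cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
# ...paste the single-line "ssh-rsa AAAA..." key, then press Ctrl-D
```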
<arrrghhh> oh
<arrrghhh> ok
<arrrghhh> qman__, i don't have an authorized_keys file in .ssh...
<arrrghhh> would it be in /etc?
<qman__> no
<qman__> the file doesn't exist until at least one key is put there
<qman__> create it
<arrrghhh> heh
<arrrghhh> ok
<qman__> the permissions also have to be right
<qman__> 600 IIRC
<arrrghhh> 700 actually i think
<qman__> 600 on mine
<arrrghhh> hrm
<qman__> 700 would be executable
<qman__> no need to execute keys
<arrrghhh> yea that's true.
<arrrghhh> ubuntu guide said 700
<qman__> folder should be 700
<qman__> file should be 600
<arrrghhh> oic
<arrrghhh> makes sense
<arrrghhh> so i can just take the ppk file
<arrrghhh> and paste the "private-lines" into an authorized_keys file..?
<qman__> no
<qman__> public key
<qman__> not the private key
<arrrghhh> ah right
<arrrghhh> private key is client only
<qman__> private key goes to the connecting client
<arrrghhh> server refused our key
<arrrghhh> heh
<arrrghhh> all i did was paste the public key in
<arrrghhh> and took all the line endings out
<arrrghhh> so it's one long line...
<qman__> that's correct
<arrrghhh> hrm
<qman__> each public key you want to allow connections from, goes on one line in the file
<arrrghhh> ok
<arrrghhh> anything else on that line?
<qman__> nope
<arrrghhh> just the jibberish that involves the public key?
<arrrghhh> hum
<arrrghhh> why would the server refuse my key then?
<qman__> like this
<arrrghhh> oh key based auth might not be enabled.
<qman__> ssh-rsa AAAAB....8= ryan@amdk6
<arrrghhh> oh
<arrrghhh> i didn't have the ssh-rsa
<arrrghhh> or the machine @ the end
<qman__> ssh-rsa at the front, description at the back
<qman__> it can be anything, mine happens to be user@host
<arrrghhh> ah, and my authorized_keys is in /etc for some reason.
<qman__> yes, but that's machine wide
<arrrghhh> hrm
<arrrghhh> ok
<qman__> I assume you don't want to allow connections as any user on the machine from your key
<arrrghhh> no
<arrrghhh> i only have this one user, but still no :P
<qman__> a default install has over 20 users
<arrrghhh> yea
<arrrghhh> well
<arrrghhh> i've only created one user
<arrrghhh> so what do i put in for the machine @ the end
<arrrghhh> doesn't matter?
<arrrghhh> cuz it's still rejecting my key.
<qman__> can be anything, it's just a description field
<arrrghhh> so why else would the server refuse my key?
<qman__> plenty of possible reasons
<arrrghhh> heh
<qman__> I'd check /var/log/auth.log
<arrrghhh> k
<qman__> see if it says why
<arrrghhh> i see accepted passwords
<arrrghhh> but no 'rejections' or anything similar from sshd
<arrrghhh> how can i tell that it's pulling from this authorized_keys file...?
<arrrghhh> ah
<arrrghhh> it is pulling from /etc/.ssh/authorized_keys
<arrrghhh> so should i change that in the sshd_config...?
<qman__> looks like your home directory isn't set up right
<qman__> what does cd ~ do?
<arrrghhh> puts me at the user@nas:~$ prompt
<qman__> pwd
<qman__> there
<arrrghhh> /home/user
<qman__> did you set encrypted home directories?
<arrrghhh> nope
<qman__> and the file you created is /home/user/.ssh/authorized_keys
<arrrghhh> yes
<qman__> and /home/user/.ssh is chmod 700
<qman__> and /home/user/.ssh/authorized_keys is chmod 600
<arrrghhh> drwx------   2 user  user      4096 2011-08-29 17:42 .ssh
<arrrghhh> er i don't think auth_keys is 600 hold on
<arrrghhh> k it's 600 now
<arrrghhh> and i still get server refused our key
<arrrghhh> (i removed the /etc/.ssh directory)
<arrrghhh> so this line
<arrrghhh> AuthorizedKeysFile      /etc/.ssh/authorized_keys
<arrrghhh> in my sshd_config
<arrrghhh> doesn't matter..?
<arrrghhh> i shouldn't change that?
<qman__> mine doesn't have that line
<qman__> try commenting it out and restarting sshd
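qman__'s suggestion as commands (a sketch; back up sshd_config first — the sed pattern assumes the override is at the start of a line):

```shell
# comment out the non-default AuthorizedKeysFile override
sudo sed -i 's|^AuthorizedKeysFile|#AuthorizedKeysFile|' /etc/ssh/sshd_config
sudo service ssh restart   # or: sudo /etc/init.d/ssh restart on 10.04
```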
<arrrghhh> k
<arrrghhh> lol
<arrrghhh> i recall vaguely doing that.  i wish i knew why.
#ubuntu-server 2011-08-30
<arrrghhh> that fixed it tho, thanks.
<qman__> that's something you'd do to work around encrypted homes
<qman__> or other weird cases where the home directory isn't available
<arrrghhh> i think i was just being difficult
<arrrghhh> and trying to do things on my own
<arrrghhh> and i didn't understand the gravity of what i was doing ;)
<arrrghhh> i'm slowly starting to grasp this whole pub/priv key auth.
<arrrghhh> not sure why it's taking me so long to wrap my head around it, usually this stuff isn't so difficult for me.
<arrrghhh> but i do appreciate the help, i am now auth'ing at least from my work PC to my server.  now i just need to add the rest & disable password-based auth.  thanks again!
<uvirtbot> New bug: #837085 in nagios-plugins (main) "hardened build option lost in debian/control" [Undecided,New] https://launchpad.net/bugs/837085
<smoser> Daviey, bug 833783 is your boot failure
<uvirtbot> Launchpad bug 833783 in initramfs-tools "boot failure: can't open /root/dev/console: no such file" [Undecided,New] https://launchpad.net/bugs/833783
<Daviey> smoser: ta
<smoser> it seems like a legit race in initramfs to me.
<davros-_> http://www.linuxquestions.org/questions/ubuntu-63/setting-up-dhcp3-server-10-04-4-a-899460-new/
<davros-_> any suggestions?
<StevenR> davros-_: fix one problem at a time?
<davros-_> duh
<StevenR> well, start by fixing eth1, then fix dhcpd, then the other bits that don't work
<davros-_> eth1 isn't really an issue, that ifconfig is after a reboot
<davros-_> unless eth1 is supposed to hold its settings after reboots
<StevenR> if you want to run dhcpd and give out IP addresses from that interface, then it's going to need to
<davros-_> ahhh ok kinda thought that too
<StevenR> you can't have a dhcp server on a dynamic IP, it's just not going to work
<StevenR> or a "no-ip" interface
<davros-_> how would eth1 be setup to hold an ip
<StevenR> by editing the interfaces file appropriately
<StevenR> /etc/network/interfaces or similar
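For davros-_'s case, a static stanza in /etc/network/interfaces might look like this (the 192.168.10.x addressing is an assumption; use whatever subnet dhcpd will serve):

```shell
auto eth1
iface eth1 inet static
    address 192.168.10.1
    netmask 255.255.255.0
```

Bring it up with `sudo ifup eth1` (or reboot) before starting dhcpd.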
<davros-_> INTERFACES="eth1"
<StevenR> ?
<StevenR> where's that from?
<davros-_> oops that's from the dhcpd interfaces file I believe
<StevenR> well. Fix your interfaces file for starters :)
<davros-_> lol
 * StevenR goes to bed
<davros-_> you be around Wednesday during the day StevenR?
<CluelessPerson> hey all
<CluelessPerson> for some reason I can't connect to samba now
<Arrick> hey all... How easy is it to install lts 10.04 on an hp proliant ml350?
<JadedJacob> hi, has anyone ever used the ddrescue program to recover files from a failing hard drive.
<ryoohki> i notice that after i set a raw partition to be crw-rw-rw- it reverts to crw-rw---- .
<ryoohki> is there a separate process or file in /etc controlling this?
<ryoohki> actually, i meant brw-rw-...
 * RoyK += 0xc0ffee
<jamespage> jcastro: it did but it may be broken as I've not spent time updating to support service exposure - I'll look today
<trapmax> How do i fix these: "File descriptor 15 (socket:[6111]) leaked on lvremove invocation. Parent PID 20901: /bin/bash" notifications, when running a script from crontab? If i manually run it, there is no notification
<uvirtbot> New bug: #816972 in glance (universe) "Packaging needs to be updated to take glance-scrubber into account" [High,Fix released] https://launchpad.net/bugs/816972
<smoser> so how do we look for beta ?
<Daviey> smoser: 'OK' .. not perfect.. but 'OK'
<Daviey> :)
<smoser> is there any reason why we would not want to start ISO testing for ec2 images ?
<Daviey> I don't think there is anything blocking that.. Have you asked utlemming the same?
<jamespage> lynxman: around?
<lynxman> jamespage: indeed :)
<jamespage> lynxman: and how are you today?
<lynxman> jamespage: very good sir, enjoying my extra week in London due to natural disasters ;)
<lynxman> jamespage: and you?
<jamespage> lynxman: good thanks - refreshed after a week off
<jamespage> so I'm patch piloting today and came across bug 820936
<uvirtbot> Launchpad bug 820936 in apache2 "Virtual server setup breaks Rewrite Rules" [Low,Fix released] https://launchpad.net/bugs/820936
<lynxman> jamespage: oh goodie
<jamespage> lynxman: I was a little confused - the bug is marked fixed released - merge proposal still outstanding - is it still needed?
<jamespage> looks like it is to me - but wanted to check with you first
<lynxman> jamespage: oh yes, my mistake on that one, needs to be merged
<jamespage> lynxman: OK - leave it with me and I will review
<lynxman> jamespage: lovely, can't be in better hands
<jamespage> flattering but probably untrue :-)
<smoser> utlemming wont wake for a while, but last night things looked in order.
<lynxman> jamespage: It is true in my head ;)
<lynxman> smoser: congratulations on getting the +1 from cody, I'm officially impressed
<smoser> woowoo.
<CluelessPerson> hey guys
<CluelessPerson> what's the program used to remotely manage sql databases?
<smoser> i would think most of that is due to the vast bug list output which is a result of Daviey's script that scrapes archive update mailing lists.
<smoser> CluelessPerson, there are probably lots.  'mysql' is one.
<CluelessPerson> I have my server running mysql
<CluelessPerson> and I don't remember the name of the remote editor I had
<CluelessPerson> You could monitor, configure, alter and setup permissions and users for the databases
<smoser> phpMyAdmin perhaps
<Daviey> smoser: Are you core now? (minus the admin)
<smoser> well i have 5 "+1" and no -1. so i think i match quorum
<lynxman> smoser: Daviey: oh can I have a copy of the script?
<a_ok> how do I upgrade from 10.4 to 10.10?
<a_ok> distupgrade does not seem to be available anymore
<maxb> It ought to be
<maxb> what do you mean when you say "not available" ?
<Daviey> lynxman: script?
<lynxman> <smoser> i would think most of that is due to the vast bug list output which is a result of Daviey's script that scrapes archive update mailing lists. <-- that
<smoser> lynxman, i'll find it on 2 requests
<smoser> a.) you add crawling of the archive and storing in local database
<smoser> b.) you add a query mechanism for seeing a person's activity during an arbitrary time frame
<Daviey> lynxman: You really don't want it.
<Daviey> it's crap.
<smoser> its useful crap
<Daviey> I'm actually wondering if ultimate debian database exposes this now anyway
<smoser> and it will be more useful and less crap when he adds my two feature requests
<Daviey> smoser: I'm scared to touch it. RoAkSoAx touched-it-last, which means he is the new maintainer
<smoser> lynxman, your silence indicates you agree to my demands
<smoser> https://code.launchpad.net/~andreserl/+junk/search_uploads
<lynxman> smoser: I was having coffee, but I do agree :)
<smoser> http://paste.ubuntu.com/677822/
<smoser> that is my local diff, so i had nothing useful
<smoser> Daviey, do we have a UDD for ubuntu?
<Daviey> smoser: Some Ubuntu data is now posted there.
<Daviey> Thanks to the ever awesome Laney.
<Daviey> smoser: bah, who needs a shebang
<smoser> Daviey, yeah, i see that.
<smoser> http://udd.debian.org/schema/udd.html doesn't seem to have changelogs though.
<smoser> or did i just miss it.
<smoser> Daviey, if search_uploads could incrementally crawl archives, and stored data in a db, it would not be crap.
<lynxman> smoser: Daviey: you both were right, the script is crap, very useful crap though
<Daviey> smoser: bah, seems you are correct.
<smoser> lynxman just couldn't pass up a free pass at calling Daviey's work 'crap' (i wouldn't have been able to either)
<Daviey> smoser: Yeah, make a web frontend for it; and expose an API for people to query.
<Daviey> Could call it 'Launchpod' and have python bindings called launchpodlib
 * smoser reboots, hoping beyond hope, that this time when he logs in compiz wont leak memory like its going out of style.
<Ursinha> good morning folks
<lynxman> Ursinha: bom dia
<Ursinha> lynxman: bon dia :)
<linocisco> how to build mail server for windows clients? I have installed [mail server] option during fresh installation. then dpkg-reconfigure postfix. and then telnet localhost 25 fine. and then ??
<lynxman> Daviey: sorry for having a free go at your script, I like it though :)
<linocisco> mail clients will be outlook express or outlook
<Daviey> lynxman: no, it's dirty. It was under a non-free licence to start with.. :)
<lynxman> Daviey: I'm afraid to touch it now
<lynxman> Daviey: although it's never too late to create the GDaBPL
<lynxman> Daviey: (Give Daviey a Beer Public License)
<Daviey> crack on... the reason for it to be non-free was to allow me to not show it in public :)
<Daviey> Then RoAkSoAx moaned.. so i let him have it.
<Daviey> Thankfully smoser is the new maintainer, as he patched it last.
<lynxman> Daviey: oh boy, so if I patch it I'll be the new maintainer?
<Daviey> yus
<smoser> local diffs do not count.
<smoser> that just makes me a leech
<lynxman> 5lol
<lynxman> smoser: you cheat
<lynxman> smoser: or shall I say... core-cheat
<Daviey> lynxman: It does count, sorry - it's derived works. However, don't worry, as you will have the shortest tenure yet.
<a_ok> Ok there are too many conflicting guides out there
<a_ok> I did an edit of the sources file (from lucid to karmic) and apt-get update; apt-get upgrade fails
<jamespage> lynxman: not sure switching to _default_:80 actually resolves the bug report TBH - http://httpd.apache.org/docs/2.2/mod/mod_rewrite.html#vhosts
<Daviey> a_ok: That is generally a bad idea.
<a_ok> Daviey: yeah I noticed. luckily it failed on fetching so I should be able to revert it
<lynxman> jamespage: it actually does, I had this problem myself in the past
<jamespage> hmm
<a_ok> Daviey: how do I rebuild my repository cache?
<Daviey> a_ok: So Lucid is still supported, Karmic is End Of Life.
<a_ok> Daviey: sorry meant maverick
<a_ok> upgrading
<Daviey> a_ok: I'm confused.. you are/were(?) upgrading from what to what?
<jamespage> lynxman: I think that might be true for apache < 2.2
<a_ok> upgrading from lucid to maverick
<a_ok> and then to 11.04
<jamespage> but I can't get globally defined rewrite rules to apply unless I add the options to each virtualhost as described above
<a_ok> I got a better upgrade guide now but I need to restore my apt stuff
<lynxman> jamespage: option 2 is create a global VH and import, which I've done as well
<a_ok> apt-get update fails
<lynxman> jamespage: nonetheless it's good practice to have _default_:80 instead of *:80
<Daviey> a_ok: sudo apt-get update | pastebinit
<a_ok> Daviey: its because I edited the sources.list and did a successful apt-get update with the maverick repositories
<jamespage> lynxman: I agree that _default_ is good practice as it makes virtualhosts behave nicely with each other  - just not sure we are actually addressing the bug
<jamespage> or that we should be
<lynxman> jamespage: I see your point, I still think it should be merged and then a word of caution given towards that practice?
<Daviey> In general, increasing the delta with Debian is bad karma.
<jamespage> lynxman: assuming that Daviey's comment was directed at our conversation then I don't think we should merge this change.
<lynxman> :(
<jamespage> rewrites are notorious in my experience for creating unexplained server behaviour
<lynxman> alright, reject the merge then
<jamespage> and although I think that it makes sense to switch to _default_ we should try and push this to Debian first IMHO
<jamespage> it does not provide enough value to risk maintaining an Ubuntu delta
<lynxman> jamespage: I do agree
<jamespage> lynxman: coolio - lemme update bugs/merge proposals etc... in line with our conversation - would you like to submit a bug to Debian to request the change?
<lynxman> jamespage: never did :)
<trapmax> anyone have experience with confluence's scroll wiki pdf exporter -plugin?
<trapmax> and more specifically its problem with scandinavian alphabet
<linocisco> hi
<a_ok> Daviey: sorry got called away
<linocisco> I tried dpkg-reconfigure postfix. in one step, i am being asked that root and postmaster mail recipient entry
<linocisco> do I need to enter or leave that blank ?
<a_ok> Ill be back later
<linocisco> hi
<linocisco> how to copy data from server to flash drive ? rsync -avP server flashdrive ?
<ersi> linocisco: If you want something even simpler.. just mv or cp data there... it's a regular block device
<ersi> linocisco: but rsync is a good idea, it's made for robustness and error checking
<smoser> linocisco, most likely rsync -avP is going to fail for you
<smoser> unless you've formatted your flash drive with ext3 or some sane filesystem rather than FAT32
<linocisco> smoser: mine is with FAT32
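Since FAT32 cannot store Unix ownership or permissions, `rsync -avP` (which implies -p, -o and -g) will keep reporting errors there. A hedged workaround is to copy recursively with timestamps only (the paths are illustrative):

```shell
# -r recursive, -t keep mtimes, -v verbose, -P progress + resumable;
# --modify-window=1 tolerates FAT's 2-second timestamp resolution
rsync -rtvP --modify-window=1 /srv/data/ /media/flashdrive/
```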
<linocisco> hi. telnet example.org 25 is not working. it said could not resolve. name or service not known
<Daviey> smoser: You fixed sbuild to not need root, right?
<linocisco> hi
<linocisco> please let me know fix for my question
<ersi> linocisco: And if you'd do a "nslookup example.org" what is your output?
<linocisco> ersi:  I got dns server address of my computer.    x.x.x.x#53
<linocisco> ersi: *** server can't find example.org
<ersi> Yeah, not strange at all actually, since it's a system that doesn't have internet connectivity. You've got two options: either add 'example.org' with the machine's local IP or localhost to /etc/hosts or your local DNS server, or don't use example.org
<zul> morning
<smb> morning
<smb> (is over)
<smb> zul, Any progress on your xenification? :)
<zul> not yet..i been blocked by an ncurses issue when rebuilding xen
<hggdh> Daviey: good morning
<smb> zul, I guess I should not update my o-server then as it seems to still work... :-P But I guess all it would gain is some company while stepping from one pitfall into the next...
<Daviey> hggdh: Hello!
<zul> smb: it would be so much better if you could do it in a vm
<hggdh> Daviey: a very good afternoon for you, and we can chat whenever you want
<Daviey> hggdh: Okay.. give me 15 mins?
<Daviey> then lets cook!
<smb> zul, Well kvm->xen-dom0->xen-pvm sort of works but slooow... multi-stage hvm unfortunately not (without much cheating)
<zul> right
<smb> zul, And of course its HVM which is broken beyond belief...
<zul> smb:  blah
<hggdh> Daviey: certainly
<airtonix> yay for chrome crashing when receiving clipboard paste data from across synergy screens
<airtonix> i was about to paste a critical pastebin url , now my enthusiasm has died
<airtonix> http://dpaste.com/605165/ << ok so I'm on a terminal using ssh to a ubuntu 10.10 server and i start typing : cd sr <then tab>
<airtonix> suddenly, errors
<jamespage> jcastro: etherpad-lite formula now fixed - needed a few changes to support 1.0 which was released a few weeks ago + some open-port fu.
<airtonix> http://www.linuxquestions.org/questions/linux-newbie-8/bash-dev-null-permission-denied-and-growing-size-813604/
<Daviey> zul: Can you try to generate an RSA key pair on your pandaboard?
<Daviey> See if that works on it's own?
<Daviey> zul: It might be a good bug to throw over to racb :)
<zul> racb?
<Daviey> say hello racb. :)
<zul> http://pastebin.ubuntu.com/677961/
<racb> hello!
<zul> hey racb
<zul> you start today?
<racb> I do!
<zul> muahahhahahaaha
<Daviey> zul: Ah super, a bug against openssl with strace would probably help then :)
<Daviey> zul: you don't have a ~/.rnd do you?
<Daviey> and $HOME is valid?
<zul> nope
<zul> oh wait i do have a .rnd
<Daviey> zul: try rm'ing that
<zul> http://pastebin.ubuntu.com/677962/
<zul> still segfault
<Daviey> zul: and you have a valid $HOME ?
<Daviey> (env)
<Daviey> zul: Oh wait
<Daviey> that second pastebin worked?
<zul> it looks like it
<Daviey> Fix Released \o/
<zul> still segfaults though
<zul> lemme try something
<zul> Daviey: nope still busted
 * Daviey blames smoser.
<uvirtbot> New bug: #837385 in dhcp3 (main) "hostname from dhcp STILL not set, despite multiple ubuntu fixes and bug reports" [Undecided,New] https://launchpad.net/bugs/837385
<smoser> RoAkSoAx, awake?
<smoser> kirkland, ?
<smoser> maybe lynxman knows.
<smoser> if i use https://launchpad.net/~orchestra/+archive/ppa/+packages
<smoser> where am i supposed to get distro-info
<smoser> (on natty)
<lynxman> smoser: good question, having a look
<smoser>  ubuntu-orchestra-provisioning-server : Depends: distro-info but it is not installable
<lynxman> smoser: hah, is that only avail in Oneiric?
<smoser> yes
<lynxman> smoser: so looks like this is a bug
<lynxman> RoAkSoAx is carrying most of the Orchestra work now though
<lynxman> while he's not fighting hurricanes
<smoser> ppa:udt-developers/daily seems to be ok
<smoser> bummer. it looks like that ppa is busted.
<smoser> ubuntu orchestra is depending on some cobbler stuff too
<smoser> andreserl, you used https://launchpad.net/~orchestra/+archive/ppa/+packages lately?
<RoAkSoAx> smoser: yeah what's up
<RoAkSoAx> smoser: maverick/lucid should ftbfs in the ppa
<smoser> natty builds, but cobbler is expecting ubuntu-cobbler-import
<smoser> and orchestra wants distro-info
<smoser> which is not available in archive
<RoAkSoAx> smoser: ok, will look into that, thanks for pointing that out
<smoser> RoAkSoAx, and now, trying to import an oneiric, cobbler is moaning at me that os-version is not one of dapper -> natty
<RoAkSoAx> smoser: yeah I'm gonna upload cobbler's oneiric to that PPA for natty
<smoser> RoAkSoAx, there is broken (inverted) logic in cobbler-ubuntu-import on creation of ISO_DIR
<smoser> http://paste.ubuntu.com/677993/
<zul> Daviey: ok its fixed now
<zul> Daviey: python-greenlet was causing it to segfault on arm
<Daviey> zul: how/why?  Have a patch?
<Daviey> Was it the monkey patching?
<zul> Daviey: its arm specific...ill upload it after the beta
<zul> im up to 3 patches so far
<RoAkSoAx> smoser: k
<smoser> RoAkSoAx, even after i add 'oneiric' to 2 places so that it is a heard-of release, it complains to me
<smoser> http://paste.ubuntu.com/678005/
<RoAkSoAx> smoser: where are you adding it
<RoAkSoAx> smoser: did you start cobbler?
<smoser> why?
<RoAkSoAx> err re-start
<smoser> yes after adding oneiric
<RoAkSoAx> smoser: http://paste.ubuntu.com/678007/
<smoser> yeah, thats what i did
<smoser> but after restart it failed like i pasted
<smoser> arrgh
<smoser> anyway
<smoser> so that worked on the next run. i dont know why.  now, if all i did was install orchestra-provisioning-server, should i be able to tftp boot if my dhcp server has this server as 'next-server'?
<smoser> i answered 'no' to the dhcp/dns question
<smoser> Daviey, did you have a bug link for the possible loss of amis on canonistack?
<RoAkSoAx> smoser: if you answered no, you still need something that serves dhcp, and that dhcp server should point PXE at cobbler
<RoAkSoAx> smoser: so, yes
<smoser> ok.
<RoAkSoAx> smoser: it should still be able to pxe
<smoser> RoAkSoAx, do you know how i tell dd-wrt to do next-server equivalent?
<smoser> ah. never mind.
<Daviey> smoser, i do not remember deleting the one mentioned here: https://bugs.launchpad.net/nova/+bug/831704
<uvirtbot> Launchpad bug 831704 in nova "No validation of arch type on image upload" [Wishlist,Confirmed]
<smoser> dhcp-boot=pxelinux.0,tftpd-hostname,tftpd-ip
<zul> smoser: ping when you do cloud-publish-tarball is the architecture hardcoded?
<RoAkSoAx> smoser: yeah it is easy
<smoser> zul, no.
<zul> smoser: ok good
<smoser> you can pass it, or it guesses based on name
<smoser> but it might say GTH if you tell it to do arm
<RoAkSoAx> smoser: add to dnsmasq dhcp-boot=pxelinux.0,,<ip-of-pxe>
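On dd-wrt, RoAkSoAx's suggestion translates to one dnsmasq option (the server IP is illustrative; it typically goes in the router's "Additional DNSMasq Options" box):

```shell
# filename, optional tftp server name (empty), tftp server address
dhcp-boot=pxelinux.0,,192.168.1.10
```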
<smoser> yeah, thats what i did.
<robbiew> Daviey: any sign of Robie?
<smoser> i think that i'm missing a kickstart or something that is causing that pastebin that i showed, RoAkSoAx
<smoser> zul, it looks to me like it will just pass whatever you give it on through (after changing amd64 -> x86_64)
<Daviey> robbiew: Yes.. robbiew meat racb
<Daviey> meet*
<robbiew> ah ha
<racb> hello!
<RoAkSoAx> smoser: are you sure it didn't import the ISO? it might have imported it but still showing the error
<racb> Actually I'm trying to be rbasak
<Daviey> bah, racb was better :)
<rbasak> Only my launchpad account is stuck on racb
<rbasak> And shell accounts are rbasak now
<smoser> i think it did import but is showing the error, you're right
<smoser> RoAkSoAx, but in kickstarts in the web ui, i dont see a 'nqi'
<smoser> should i? or , how do i?
<smoser> as that is really why i'm doing this
<RoAkSoAx> smoser: there should be either a ubuntu-server.seed or a ensemble.preseed
<RoAkSoAx> smoser: both are no-questions-asked preseeds
<rbasak> My attempt at clarity in this matter might have just failed completely :-/
<RoAkSoAx> smoser: cobbler on orchestra's PPA should be published soon, it already finished building
<robbiew> rbasak: changing LP ID /should/ be easy, unless you have PPAs
<rbasak> robbiew: I have PPAs
<rbasak> (some of which are public)
 * rbasak thought everyone would have PPAs!
<kernelpanicker> Can anyone suggest something for spam to use on 10.05 lucid server with postfix/dovecot/mailman?
<robbiew> rbasak: ah...then we can still change it
<robbiew> just takes some LP magic ;)
<robbiew> once you get @canonical.com address and access to the internal irc/wiki, we can get that done
 * robbiew recently changed his from robbie.w to robbiew ;)
<rbasak> robbiew: I'm sorted with those, but won't that break published PPA URLs?
<robbiew> rbasak: nope...they do some sort of cut-and-paste magic
<robbiew> I have a public PPA
<robbiew> hmm
<rbasak> what with an HTTP 301 or something?
<robbiew> though I wonder if they do a redirect
<robbiew> I think you can keep the old id for that purpose
<robbiew> I dunno...we can just ask and see what happens
<robbiew> it's just software ;)
<rbasak> :)
<smoser> RoAkSoAx, so sorry to be annoying
<smoser> but how will this avoid the install loop ?
<smoser> i dont see a "turn off install late command"
<RoAkSoAx> smoser: no worries ;)
<RoAkSoAx> smoser: uhmmm what preseed are you using?
<RoAkSoAx> smoser: there's a snippet for that
<smoser> i was just looking at /etc/cobbler/ubuntu-server.preseed
<smoser> shouldn't that be the default behavior ? or are we expecting to primarily ship this to system builders that want to test burn-in
<smoser> :)
<RoAkSoAx> smoser: if that's so, we would have to ship that snippet in cobbler
<RoAkSoAx> smoser: but yes, I think it probably should. gonna add it
<zul> hallyn: ping when you are around
<uvirtbot> New bug: #837443 in samba (main) "package samba-common 2:3.4.7~dfsg-1ubuntu3.7 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/837443
<hallyn> zul: i'm around
<zul> hallyn: cool....are you still testing the libvirt FFE
<hallyn> zul: I'm still working on a setup where I can do so yes :)
<hallyn> I tried on an ec2 instance, but with the current libvirt half of the tests failed
<zul> hallyn: heh ok thats all i need to know.
<hallyn> why do you ask?
<hallyn> if you want to try running the qa-regression-tests on your end, and beat me to it, that'd be great
<zul> hallyn: ill have a debdiff i need included for arm
<hallyn> ok
 * hallyn should finally be able to get his pandaboards working, just needs to pick up some cables at frys
<FKman> hello all, i have problem with installing ubuntu 11.04 Server on Xen. Can anybody help me with ?
<zul> hallyn: ill grab it from your ppa and send you a debdiff is that ok?
<zul> hallyn: 0.9.4 ftbfs for me just as a heads up
<hallyn> zul: sounds great, thanks
<hallyn> was the ftbfs in configure with some libxml error that didn't make sense?
<hallyn> (I got that building git HEAD last week by hand)
<zul> hallyn: testsuite failure
<hallyn> feh
<CluelessPerson> how do you setup an easy and temporary port forward?
<CluelessPerson> as in  sudo ufw portforward 9001 192.168.1.1:80?
<Ursinha> CluelessPerson: it depends.. if your computer is accessed directly, you can use the old but gold iptables command line
<Ursinha> or that
 * Ursinha goes google ufw
<jdstrand> actually, ufw doesn't do port forwards via the cli yet
<CluelessPerson> sigh
<CluelessPerson> alright
<jdstrand> you can do them with the ufw-framework
<CluelessPerson> I'm at my brother's house
<jdstrand> see 'man ufw-framework' which has examples for port forwarding and where to do it
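A sketch of the port-forwarding pattern from `man ufw-framework` that jdstrand points at, using the 9001 -> 192.168.1.1:80 example asked about above (ports and address are CluelessPerson's example values, not a recommendation):

```
# /etc/ufw/before.rules -- add near the top, before the *filter section
*nat
:PREROUTING ACCEPT [0:0]
-A PREROUTING -p tcp --dport 9001 -j DNAT --to-destination 192.168.1.1:80
COMMIT
```

Forwarding also needs net/ipv4/ip_forward=1 enabled (see /etc/ufw/sysctl.conf) and a `sudo ufw reload` to take effect.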
<CluelessPerson> SSHing to my server, behind a router.
<Ursinha> hm
<CluelessPerson> I need to access the router firewall and change it
<Ursinha> CluelessPerson: you might be able to do that via web interface
<Ursinha> the router one
<Ursinha> redirecting router ports, not your computer
<Ursinha> CluelessPerson: have you tried that?
<CluelessPerson> no
<CluelessPerson> Ursinha  problem is the web interface isn't enabled for remote
<CluelessPerson> only lan
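One common workaround for a LAN-only router UI (not raised in the channel, and all names here are placeholders): tunnel to it through the SSH server that is already reachable inside that LAN.

```
# From the remote machine: forward local port 8080 to the router's
# LAN-only web UI (192.168.1.1:80) via the SSH server on that LAN.
ssh -L 8080:192.168.1.1:80 user@my-server.example.com
# Then browse http://localhost:8080 to reach the router's interface.
```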
<CluelessPerson> Ursinha  sorry for long replay
<CluelessPerson> reply
<CluelessPerson> anyway
<CluelessPerson> I have to sleep
<kernelpanicker> where does spamassassin log in ubuntu lucid?
<patdk-wk> no where
<patdk-wk> unless you're using spamd, then mail.log
<kernelpanicker> ok... that's why I couldn't find it... how does one monitor what it's doing?
<kernelpanicker> ok thks
<patdk-wk> monitor? it's a client program when you run it, it tells you what it did
<uvirtbot> New bug: #837482 in libaio (main) "package libaio-dev (not installed) failed to install/upgrade: tentative de remplacement de « /usr/include/libaio.h », qui appartient aussi au paquet libaio:i386 0.3.104-1" [Undecided,New] https://launchpad.net/bugs/837482
<Daviey> Server Team meeting starting in #ubuntu-meeting now
<airtonix> sweet! haven't restarted and /dev/null is yet again unwritable by anyone but root
<airtonix> awesome :>
<b0nghittr> would it be possible for the server team to contact Adobe and help them officially support Ubuntu server for their Adobe Flash Media Server? atm they only support RHEL and centos, and needs tweaking to work properly on ubuntu. it would be very helpful.
<RoyK> airtonix: pastebin `ls -l /dev/null`
<airtonix> RoyK: http://dpaste.com/605266/
<RoyK> airtonix: that's weird
<RoyK> airtonix: it should be 666
<RoyK> airtonix: which ubuntu release is this?
<airtonix> RoyK: 10.10
<RoyK> airtonix: check /etc/udev.d
<RoyK> erm
<RoyK> /etc/udev/rules.d
<airtonix> RoyK: i found that on a previous occasion this returned the /dev/null to a usable state for users : http://dpaste.com/605269/
<airtonix> checking udev now
<RoyK> airtonix: no need to recreate the device - a chmod should do well
<airtonix> RoyK: http://dpaste.com/605271/
<RoyK> grep for null in there
<airtonix> RoyK: http://dpaste.com/605272/
<airtonix> RoyK: http://dpaste.com/605274/
<RoyK> could it be this one? https://bugs.launchpad.net/ubuntu/+source/udev/+bug/63031
<uvirtbot> Launchpad bug 63031 in udev "/dev/null: Permission denied" [Undecided,Fix released]
<airtonix> RoyK: yes it seems to describe my situation as i read the comments
<RoyK> the quick-and-dirty fix is to add chmod 0666 /dev/null to /etc/rc.local
<airtonix> hoho
<RoyK> the _good_ fix is to find out whatever changed the /dev/null perms :P
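The two fixes RoyK describes, as sketched fragments:

```
# quick-and-dirty: in /etc/rc.local, before the "exit 0" line
chmod 0666 /dev/null

# or, if /dev/null was replaced by a regular file, recreate the node:
#   rm -f /dev/null && mknod -m 0666 /dev/null c 1 3
```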
<lynxman> Daviey: SpamapS: RoAkSoAx: smoser: Have you guys seen this? Trying to install Oneiric server daily (2011-08-26) into an HP DL360 http://lynxman.net/oneiric-ko.png
<smoser> no. but it obviously went read-only.
<smoser> theres probably more info further up
<smoser> if you're in a sane virtual machine, you can shift pageup
<lynxman> smoser: it's bare metal through iLO
<SpamapS> sounds like he's on a DL360 :)
<smoser> any way to get more console?
<SpamapS> boot to recovery shell.. investigate
<smoser> that bit only tells us what i told you.
<lynxman> SpamapS: k,will go that way :)
<lynxman> smoser: trying to get more console, although this environment is pretty limited
<SpamapS> the older iLO's actually had a better interface than newer ones
<SpamapS> they would scrape the VGA text mode perfectly into an SSH session
<SpamapS> newer ones just use serial
<lynxman> SpamapS: yeah, bit of a pain :/
<Ursinha> frak, forgot the mumble thing
<RoyK> the newer iLOs have crapware and require an additional license to do the job properly
<lynxman> frak, the message goes away too fast :(
<Daviey> lynxman: on a call :/
<hallyn> jdstrand: when I run qa-regression-test against regular libvirt on ec2, I get 8 failures.  expected?
<crass> anyone know of a simple C program to test for 64-bit registers (which I assume means I can run 64-bit programs)?
<jdstrand> hallyn: last upload I did there were no failures. I don't do it in ec2 though...
<jdstrand> hallyn: actually, that shoudl be the last version I tested in oneiric. I don't recall which that was
<jdstrand> hallyn: can you paste? I will try a run now
<SpamapS> crass: I doubt uname would lie to you.. 'man 2 uname'
<crass> ok, so I believe then that I'm running in 32-bit mode, can I just install a 64-bit kernel to be able to run in 64-bit mode (my processor supports it)
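Without writing a C program, the same check can be done from the shell (the "lm" long-mode flag in /proc/cpuinfo is x86-specific; SpamapS's uname suggestion covers the running kernel):

```shell
# Does the CPU support 64-bit? ("lm" = long mode flag on x86)
if grep -qw lm /proc/cpuinfo; then
    echo "CPU supports 64-bit (long mode)"
else
    echo "CPU is 32-bit only"
fi
# And what mode the *running* kernel is in:
uname -m   # x86_64 means a 64-bit kernel; i686 means 32-bit
```

If the CPU has "lm" but uname -m reports i686, installing a 64-bit kernel (or a full 64-bit install) is what enables running 64-bit programs.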
<hallyn> jdstrand: http://paste.ubuntu.com/678188/
<jdstrand> hallyn: I think you don't have everything installed
<jdstrand> hallyn: try: sudo ./test-libvirt.py setup-all
 * jdstrand is running it now
<jdstrand> (the whole thing, not just setup-all :)
<hallyn> jdstrand: d'oh, didn't know about that one :)  thanks, trying
<crass> hallyn: is there a reason not to run your libvirt packages for Oneiric on Natty?
<hallyn> crass: there may be, actually,
<hallyn> crass: not bc of my changes, but bc of the change to use libnl3
<crass> ok, glad I checked. What package is now using libnl3 (not familiar with that lib)
<jdstrand> hallyn: it's new (as of quite a few weeks ago). I got tired of all the setup
<crass> hallyn: also, would it be possible to get the ubuntu-virt libvirt package to build successfully?
<hallyn> crass: libvirt itself...
<hallyn> i didn't know it was failing
<crass> (I think it just needs libxen-dev on the build machine)
<crass> hallyn: do you know what version of libvirt that changed in? 0.9.4?
<hallyn> it's a change specific to the current oneiric version of libvirt's packaging.  (if you mean libnl3)
<hallyn> it's not in upstream git yet
<hallyn> libxen-dev is in build-depends, so it should be there for the ppa build...
<crass> hallyn: I don't know, that's what it says here: https://launchpad.net/~ubuntu-virt/+archive/ppa/+build/2651532 :(
<hallyn> crass: ok.  i hit 'retry build' as that may have been a transient error, but it's kind of an old version relative to oneiric so it might be worth trying again with ubuntu11
<hallyn> oh, no
<hallyn> crass: libxen-dev doesn't exist before oneiric :)
<hallyn> so the packaging has to be updated for the ppa
<crass> hallyn: I don't understand, if libxen-dev didn't exist before oneiric, then why would those packages complain about needing them? since they are before Oneiric
<crass> is it an easy fix?
<hallyn> i can't get to it today, but should be simple
 * crass is not familiar with the ppa build process
<hallyn> crass: the package says "I need libxen-dev".
<crass> oh, so those packages are updated to build for oneiric?
<crass> hallyn: I also wanted to get your opinion on the state of your spice packages, is it worth trying to use them at this point? ie are the better than the normal libvirt stuff from natty?
<hallyn> crass: hopefully the debian spice packages will get merged soon.  I"d just wait.
<hallyn> 'better' depends on what you mean
<crass> better performance and user experience
<hallyn> spice just provides better remote graphics, if you have high bandwidth to support it
<hallyn> if users want video, then yes.
<crass> ok, so its mainly for doing video and 3d-graphics, but otherwise you won't notice much difference?
<hallyn> right
<koolhead17> hi all
<koolhead17> RoAkSoAx: hey there
<crass> awesome, looks like I don't need to mess with it then, thanks
<crass> however, I would very much appreciate a repo with the latest (or as close to the current releases as I can) libvirt and qemu for natty
<koolhead17> RoAkSoAx: " d-i preseed/late_command string "  is  awesome, i did all the ip address and other done with it, instead doing anything with dhcp server natively
<crass> hallyn: would it be any easier and/or could you provide natty builds for your virt repo? (It has qemu 0.15 and libvirt 0.9.3)
<hallyn> no, it wouldn't
<crass> haha, ok, I should look into setting up my own ppa
<hallyn> here lemme just try s/libxen-dev/libxen3-dev/ and pushing to ubuntu-virt
<crass> hopefully its that simple
<hallyn> did qemu-kvm in that repo build?
<hallyn> well it's probably behind anyway, so i'll push that too
<hallyn> only for natty right now, if you need lucid pls ping me on that
<crass> naw, just natty for me
<adam_g> Daviey: i finished up that cobbler-enroll work @ lp:~gandelman-a/+junk/cobbler-enroll . it does everything the original did, and should be easily extended to meet orchestras needs.
<adam_g> hggdh: ^
<jdstrand> hallyn: ok, I had one failure due to a virtinst change
<jdstrand> hallyn: path was changed. I'll fix and commit
<hallyn> jdstrand: is that this one:
<hallyn> FAIL: test_virt_install_location (__main__.LibvirtTestVirtinst)
<jdstrand> yeah
<hallyn> cool
<hallyn> next to try the candidate :)
<hallyn> jdstrand: say, while i have your ear, can you think of any reason why sshd, during a connect attempt, would fail setgid(0) = -EPERM ?
<jdstrand> not otoh. is it confined in some way? (MAC, etc) it needs CAP_SETGID
<hallyn> no, capsh --print shows root has that...
<hallyn> ok, thx
<Daviey> adam_g: great stuff!  I'll grok that shortly.
<jdstrand> these things use to be able to fail under resource pressure iirc
<jdstrand> hallyn: ^
<hallyn> i was wondering that
<jdstrand> you'd know better than I, but I thought that got fixed lately
<jdstrand> I don't know off-hand if that is in oneiric. I wonder if the fix isn't as robust as it should be
<hallyn> doesn't ring a bell.  and no OOM messages in syslog.  But, bumped memory limit by a factor of 10, maybe it'll help
<hallyn> crass: both pushed.  will take awhile to build i assume
<philipballew> how would i stream video files on my desktop to my laptop.
<philipballew> to make my own youtube basically
<crass> hallyn: thanks a mil, hopefully that will fix the build
 * genii-around slides Tm_T a congratulatory beverage
<jdstrand> hallyn: fyi, this is what I was thinking of: http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commit;h=72fa59970f8698023045ab0713d66f3f4f96945c
 * Tm_T thanks, bows, and feel very honored and humble
<hallyn> jdstrand: well tbh i'm not sure what ssh is doing makes sense (pointing to a possible configuration problem)
<hallyn> http://paste.ubuntu.com/678233/
<jdstrand> hallyn: weird. nothing is jumping out at me. man setgid says: The calling process is not privileged (does not have the CAP_SETGID capability), and gid does not match the real group ID or saved set-group-ID of the calling process
<jdstrand> hallyn: fyi, I committed the qrt fix
<kirkland> today's Oneiric server has in the task selection, "Ubuntu desktop USB" ... wtf is that?
<jdstrand> a mistake? :P
<hallyn> jdstrand: hm, i get a lot of 'error: unknown OS type hvm' errors with the new libvirt (23 failures total)
<jdstrand> hallyn: can you paste?
<RoAkSoAx> koolhead17: lol cool ;)
 * RoAkSoAx will be back in an hour
<koolhead17> RoAkSoAx: 3 days ass kicking effort, finally success :D
<hallyn> jdstrand: http://paste.ubuntu.com/678252/
<jdstrand> hallyn: that doesn't seem so good. all it is doing is defining a VM and failing
<jdstrand> hallyn: and that is simply a slightly modified libvirt/qatest/qatest.xml
<Daviey> hallyn: How familar are you with virtio-serial ?
<m_tadeu> hi...my email is being considered as spam (in hotmail only). the first thing popping to my eye, reading the headers is the Message-ID, which is using xxxx@www.domain.com, instead of xxxx@domain.com. How can I change this?
<patdk-wk> what isn't the issue :)
<patdk-wk> or unlikely to be the issue
<m_tadeu> my SPF record seems to be set properly as  "v=spf1 a mx include:mydomain.com ~all"
<hallyn> Daviey: not at all familiar
<Daviey> hallyn: Hmm. I need to be able to get a qemu console into a size limited ring buffer on the host.. got any ideas? :)
<hallyn> Daviey: none without research
<Daviey> hallyn: bug 832507 is pretty serious.. I have commited /a/ fix, but uses a non-standard kernel module.
<uvirtbot> Launchpad bug 832507 in nova "console.log grows indefinitely" [High,Confirmed] https://launchpad.net/bugs/832507
<crass> hallyn: looks like the ubuntu-virt packages require the libnl3-dev as well. Would that be a lot of trouble to add those to natty?
<crass> err, I mean the libvirt package from ubuntu-virt
<hallyn> grr
<hallyn> should just pull that patch converting to use libnl3.
<hallyn> (for the ppa that is)
<hallyn> crass: i'll get that pushed in about 5 mins
<crass> thanks hallyn
<hallyn> np - pushed
<kernelpanicker> I changed the DNS for my domain but am having problems because the old IP is persisting somewhere in my BIND setup on ubuntu 10.04 here... any suggestions on how to purge all that?
<crass> kernelpanicker: are you sure that the dns changes have propagated to your server's DNS?
<kernelpanicker> well... not sure... I changed the DNS over 4 days ago, but browsing to the site on the LAN takes me to the old server... but nslookup etc are showing it as the new server...
<kernelpanicker> so I'm assuming it's in my setup here somewhere.
<crass> nslookup from the lan shows the new server ip?
<crass> using which dns server?
<crass> the bind server if its properly configured should update its cache
<crass> ie, it shouldn't be authoritative for that domain, just a forwarder
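If the old record is stuck in the local BIND cache, as crass suspects, the standard way to clear it is rndc (example.com stands in for the actual domain):

```
sudo rndc flushname example.com   # drop cached records for one name
sudo rndc flush                   # or empty the whole cache
# then verify against the local resolver directly:
dig @127.0.0.1 example.com A
```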
<hallyn> so libvirt-lxc appears to DTRT and symlink /dev/pts/ptmx to /dev/ptmx.  Then, it merrily mounts a new /dev over top of that which gets a simple /dev/ptmx device (which is broken)
<zul> hallyn: libvirt-lxc mounting has gotten an overhaul in 0.9.4
<hallyn> zul: yeah.  though does it do it the right way :)
<zul> such as life :)
<hallyn> zul: i can't get the 0.9.3-5 candidate to even recognize a 'hvm' type xml
<zul> seriously?
<zul> sheesh
<hallyn> zul: so i guess lxcguest for oneiric will have to patch around that.  You ok with that?
<zul> hallyn: yeah
<hallyn> it's ugly, but...
<zul> whatever works
<hallyn> and yeah, seriously.  have you tried the qa-regression-tests on the candidate?
<zul> nah ive had my plate full
<zul> ...my cup runneth over
<hallyn> zul: good times
<bkerensa> uh oh
<bkerensa> :D
<Technicus> Hello . . . is there a way to setup the server so that there is a mouse?  I have a server running in a virtual machine and would like to select and copy text, then paste it to the host operating system.
<MacRohard> maybe install gpm ?
#ubuntu-server 2011-08-31
<MacRohard> https://launchpad.net/gpm
<Technicus> I am investigating . . .
<davros-> evening
<Technicus> Thanks for the suggestion . . .
<Technicus> I have tried following this tutorial: < http://mydebian.blogdns.org/?p=148  >, but the machine will not start and VirtualBox provides me with this message: "Failed to open a session for the virtual machine Video Surveillance System. Configuration error: Failed to get the "MAC" value (VERR_CFGM_VALUE_NOT_FOUND). Unknown error creating VM (VERR_CFGM_VALUE_NOT_FOUND).".  I do not understand how to connect to the server with SSH.  How can I do this?
<uvirtbot> New bug: #710040 in lm-sensors (main) "does not support atom d525" [Undecided,New] https://launchpad.net/bugs/710040
<Technicus> I have been trying to connect a Ubuntu Desktop host and a Ubuntu Server VirtualBox guest on the same machine with no success.  I have been chatting on the #openssh channel and #vbox channel, I have tried the tutorial: < http://mydebian.blogdns.org/?p=148 >, and other suggestions . . . but I am obviously overlooking something very critical.  I don't know what to do at this point.
<Technicus> How can I get this to work?
<twb> Sorry, I don't support vbox.  Ubuntu recommends KVM and libvirt.
<nandemonai> Technicus: Can the machines ping each other?
<nandemonai> And openssh-server is installed on one or both?
<Technicus> OpenSSH-Server is installed on the server.
<twb> nandemonai: From the error he initially reported, it sounds like vbox is borking at the bridging level, irrespective of the guest OS configuration
<nandemonai> Just saw the above post.
<nandemonai> Technicus: That's a virtualbox issue for sure. How are the network interfaces configured in Virtualbox?
<Technicus> I am trying to figure that out.
<twb> There is probably a vbox channel
<nandemonai> Technicus: Bridged, NAT, host only etc. Something isn't set right. I'd check the virtualbox forums and google your specific error.
<Technicus> The host has an adapter called vboxnet0.
<twb> Unless it's vbox ose it's not really our problem
<twb> Oh, apparently there is no longer an "OSE"; there is just normal (open source) vbox, plus a proprietary Oracle "plugin"
<crass> hallyn: I just checked up on the libvirt build and it failed because of the libnl3-dev missing dependency :(
<hallyn> crass: gah. forgot the patch didn't touch debian/control.  new upload coming
 * hallyn out
<crass> hallyn: thanks, didn't expect you to be up :)
<twb> dch; debcommit; dch -r; debcommit -r
<uvirtbot> New bug: #837828 in exim4 (main) "package exim4-config 4.69-2ubuntu0.3 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/837828
<linocisco> hi all
<linocisco> how to setup/install ubuntu mail server for windows outlook clients ?
<linocisco> hi all
<twb> Well, you have three choices.  First, deploy a normal SMTP/IMAP setup and work out how to make outlook talk IMAP.  Second, deploy a propriatery MAPI server like Zimbra CS.  Third, deploy a webmail server and tell your users to suck it up and use a web browser instead of outlook.
<twb> Oh, and of course fourth, just get rid of Windows.
<linocisco> twb:  are you answering me ?
<twb> linocisco: yes
<jamespage> morning
<linocisco> twb: thanks. Let me tell you what i have done. I have installed postfix. I have done dpkg-reconfigure postfix and entered the settings. telnet myserver.org 25 and ehlo myserver.org is fine. what else do I do ?
<linocisco> twb: some said that I need to install dovecot and nmap to check ports.
<twb> 25 is SMTP
<twb> If you want them to be able to read mail (IMAP) as well as sending it (SMTP), you need an IMAP server -- dovecot is one
<twb> IMAP leaves mail on the server; you could use POP3 instead, which downloads it to the desktop, but that is usually not what you want
<twb> Rather: if you're an ISP you probably want POP3, if you're a corporation you probably want IMAP.
<linocisco> twb: Let me recall theory, If I use IMAP,  all mail clients must be always online with ubuntu mail server. Otherwise, they can't read email if LAN cable or WIFI link is disconnected. Right ?
<ersi> twb: He just wants a local server (He's been here for a while) and that server does not have internet connectivity.
<ersi> linocisco: Well, setup both POP3 and IMAP and switch if users are being silly
<twb> linocisco: with IMAP, the mail stays on the server.  Users can only access it when they have access to the server.
<linocisco> ersi: thanks eris. you always read my mind. knew half of my problems. thanks for coming
<twb> linocisco: with POP3, they download it to their desktop or laptop, which means if their laptop explodes -- no more mail.  If they change laptop -- no more mail.
<linocisco> twb:  can we have a settings to leave a copy of mails on server even though if we use POP3, right ?
<ersi> linocisco: It's one of the things I'm good at, I can't always help with particular stuff - but my memory works wonders sometimes :P
<ersi> linocisco: Yes, there is one such setting - but I wouldn't recommend using it. POP3 is a dumb protocol, and leaving copies on the server usually ends up in tears
<ersi> like.. some silly user whining about three thousand duplicates because Outlook is a piece of.. goo.
<twb> linocisco: you can, yes
<twb> linocisco: the other problem is that outlook is REALLY SHIT, and AIUI only works properly if you use MAPI, which is Microsoft's messy alternative to IMAP
<ersi> IMAP <3
<twb> Even if the users are on Windows, I would probably try to get them onto tbird
<linocisco> ersi: twb: I am not thinking only of outlook. Maybe thunderbird or any mail client that will work on windows, because most users are stuck to windows. But it would be nicer to work with Microsoft outlook because we already paid for the office suite.
<twb> tbird sucks pretty hard as well, but probably not as bad as outlook
<ersi> linocisco: You can cache mail locally with IMAP, and make both Outlook shitpress and Thunderbird save copies locally in them and keep it on the server
<ersi> Mail is hard, let's go shopping :-)
<twb> mail is only hard because users won't learn mutt
<twb> Sorry, let me rephrase
<twb> mail is only hard because users
<ersi> Amen.
<linocisco> ersi:  you are funny. btw, let's get down to the nitty-gritty. other than the postfix package, what else should I install ?
<twb> First thing we do, let's kill all the users
<ersi> linocisco: I usually use postfix with dovecot. Other than that, nothing more is needed - besides sasl-auth stuff
<twb> ersi: ldap would be nice
<ersi> yeah, but not absolutely necessary :)
<twb> I also rolled out prayer, mainly because it was not PHP.  It looks OK if you tell it to use the cambridge "theme"
<ersi> but yeah, indeed
<twb> It's no gmail, but OTOH it actually works in w3m and lynx
<ersi> I'm more of a roundcube kind of guy
<twb> roundcube is PHP, so automatic fail in my book
<twb> http://www-uxsup.csx.cam.ac.uk/~dpc22/prayer/
<linocisco> ersi: namely please. dovecot-postfix or dovecot-common or ?? what ? specific names please!!
<ersi> meh, it's fancy and users love it
<ersi> linocisco: Yes and yes
<twb> My job is to prevent users from having what they want.
<ersi> I'm not actually a sysadmin or have any kind of infra responsibility
<ersi> I'm hired to break stuff :)
<twb> I'm not allowed to set up nessusd yet :-/
<linocisco> ersi:  yes and yes ?? only two packages (dovecot-postfix and dovecot-common ) ??
<twb> linocisco: either just dovecot-postfix, which will try to set it up for you, or if that fails, then dovecot-imapd and postfix
<twb> For me, dovecot-postfix did completely the wrong thing, so I just threw it out and did it right.  dovecot-postfix is only about twenty lines of config files anyway
<linocisco> twb: so what is better? exim or courier or  .. ?
<twb> linocisco: "dovecot-postfix" is a wrapper package.
<ersi> linocisco: Nothing is better. It's a subject of taste.
<twb> dovecot is fine; postfix is fine.
<ersi> Pick one, stick with it until you're mad about it or got time to tinker
<twb> dovecot and postfix are the ubuntu recommended choices
<ersi> can't go terribly wrong with dovecot+postfix mix
<ersi> I like that combo
<linocisco> twb: thanks bro.  postifx = MTA , dovecot is for = IMAP+POP3 , right ?
<twb> linocisco: yes
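The combination agreed on above boils down to a couple of installs (package names as they existed on lucid-era Ubuntu; dovecot-postfix is the wrapper twb warns may need redoing by hand):

```
sudo apt-get install postfix dovecot-imapd dovecot-pop3d
# or the wrapper package that wires the two together:
sudo apt-get install dovecot-postfix
```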
<twb> I like postfix, especially compared to sendmail or exim.  Dovecot I pretty much just installed and then left alone, and it hasn't caused any trouble
<twb> I'm still using procmail instead of sieve, tho
<twb> Postfix is the LDA
<linocisco> twb: thanks bro. I will google how to edit in dovecot to work with windows mail clients . If I failed, I would come back
<linocisco> twb: that is another thing I dont know. what is procmail ?
<twb> linocisco: also try #dovecot and ##windows
<ersi> linocisco: It's a filtering software, sorta
<twb> procmail is a thing where you can say stuff like "all mail from jerry, put in the jerry folder"
<ersi> linocisco: Not necessary, you can look at it after setting up postfix+dovecot if you got the need :)
<twb> Sieve is basically the same except it's built into IMAP
<twb> Meaning that the MUA can do it, whereas ~/.procmailrc requires a shell account and knowledge of vi
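twb's "all mail from jerry, put in the jerry folder" example, as a minimal ~/.procmailrc sketch (the Maildir layout and folder name are illustrative assumptions):

```
# ~/.procmailrc -- file mail from jerry into a "jerry" folder
MAILDIR=$HOME/Maildir    # assumes Maildir delivery
:0
* ^From:.*jerry
.jerry/
```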
<linocisco> my second mail questions is concerning office deployment
<linocisco> linocisco: in my remote offices, they use a government ISP internet connection which allows only ports 8080 and 443. Our enterprise mail server uses another port, so their only option is webmail. I just want to setup POP3 mail clients for them. which support could ubuntu server perform to accomplish that ?
<twb> You can't do POP3 over webmail
<linocisco> I mean just to give them access to download emails and read offline like using pop3 clients. So they don't need to open email and wait for slow internet connection to check old mails
<twb> OTOH you can run anything over any port, so you could just serve IMAP on port 8080
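twb's suggestion of serving IMAP on port 8080 is a small Dovecot config change; a sketch in Dovecot 2.x syntax (Dovecot 1.x, as shipped on lucid, instead used a `listen = *:8080` line inside `protocol imap { }`):

```
# dovecot.conf -- serve IMAP on 8080 instead of the default 143
service imap-login {
  inet_listener imap {
    port = 8080
  }
}
```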
<twb> I hear china are putting black boxes in all the cafe wifi access, too
<lynxman> dannf: ping
<lynxman> er sorry
<lynxman> Daviey: ping
<linocisco> twb: we have very slow internet with expensive charges. we are not China, and it's worse than that.
<twb> linocisco: where are you?
<twb> myanmar?
<eagles0513875> hey guys i am upgrading my server and i was just wondering in regards to dovecot + postfix all i need to do is backup the configs for each as well as my Maildir and then copy it over correct?
<dogears> Hi!  Can I setup a DHCP server on eth0 on a laptop and have the wireless still function on dhcp
<twb> Sure
<twb> Unless you have a GUI and NM running, which will probably try to stop you because they're too damn clever for their own good...
<twb> I know a bunch of people that set up their old laptops as routers or wifi APs
<dogears> whats nm
<twb> network manager
<dogears> of course
<dogears> I am trying to setup a laptop to have dhcp server to prepare new printers but need to have the laptop wireless still work.
<dogears> I can start dhcp server but when I shut down the computer it fails to start on reboot
<dogears> Can't figure
<ichat> hi im looking for some 'guidelines'  to get the best performance out of  my  'server'  its running a 2.1ghz dualcore amd (64bit)  with  2x 1gb ddr2.   3x 1tb sata + 1x ide (20gb laptop drive 4200rpm so its slow).      its going to run  10.4  with  ebox installed...   my question is... how should i install...
<ichat> should i run the 3 sata in software raid...
<ichat> and if so.. is it wise to instal  the system /root   on it
<uvirtbot> New bug: #837991 in apache2 (main) "Update apache2 to 2.2.19-2 to fix CVE-2011-3192" [Undecided,New] https://launchpad.net/bugs/837991
<jetole> Hey guys. Does anyone know which software it is that, when I log into one of my hosts, I receive an email that has a subject "*** SECURITY information for host.domain.com ***" and the email is just the hostname, the date and the logged in user?
<jetole> Hey guys. Does anyone know which software it is that, when I log into one of my hosts, I receive an email that has a subject "*** SECURITY information for host.domain.com ***" and the email is just the hostname, the date and the logged in user?
<jetole> oops. Sorry about posting that twice
<trapmax> jetole: logcheck
<trapmax> possibly
<just-a-visitor> try man sudoers 5
<uvirtbot> New bug: #838010 in augeas (main) "/usr/share/augeas/lenses/dist/aliases.aug does not parse valid /etc/aliases file" [Undecided,New] https://launchpad.net/bugs/838010
<RoyK> just-a-visitor: or man even, man 5 sudoers
<RoyK> or just man sudoers :P
<just-a-visitor> :)
<diimdeep> hello, http://askubuntu.com/questions/59373/stream-a-random-audio-file-from-a-shared-folder-in-network
 * RoyK just ordered 150 Hitachi HDS723020BLA642 drives
 * jetole checks
<jetole> no logcheck
<jetole> default sudoers file
<jetole> I'd really like to know what's sending the email. I have about 30 servers and just noticed it on these two but these servers have been around forever and I have never seen it before
<jetole> though I may want to implement this on other systems once I know what it is
<ersi> I'd check out the mail headers and see if it's smart enough to set some headers telling you WTF it is
<jetole> I know on most of the machines which also use ldap and sudo through ldap, it sends an email every time sudo is run, but these machines are not using ldap and have a default sudoers file
<jetole> ersi: I did. the only header I see is 'Auto-Submitted: auto-generated'
<jetole> everything else is just standard headers
<zul> morning
<jetole> morning
<jetole> ah crap. It is sudo and I just figured out why
<jetole> the host is using an older domain which we just transferred to a third party server that no longer knows about a lot of these hosts so sudo says unable to resolve the host name
<jetole> It's supposed to be changed to a different domain and that's why
<jetole> thanks for the help guys. Now I gotta go hit the bank before I go into the office
<jetole> later
<ersi> jetole: Awesome that you found out. :-)
<jetole> thanks to trapmax, just-a-visitor and ersi
<jetole> +1 to just-a-visitor for giving me the right answer even when I didn't think that was it. lol
<ersi> ^_^
<uvirtbot> ersi: Error: "_^" is not a valid command.
 * jetole runs to make sure I'm at work on time
<just-a-visitor> :D
<Daviey> Who wants to do a dhpython2 transition, and enable a test suite during the python build? :)
<smoser> jamespage, or Ursinha can we get 20110831 cloud images on ec2 tests run please ?
<zul> Daviey: ill do it
<Daviey> smoser: Is that considered our candidate?
<jamespage> smoser: ack
<smoser> Daviey, unless you have a reason that it should not be
<Daviey> smoser:  no, just wanted to know :)
<Daviey> zul: bug #836677
<uvirtbot> Launchpad bug 836677 in python-dingus "[MIR] python-dingus" [Undecided,Incomplete] https://launchpad.net/bugs/836677
<zul> Daviey: funny i already knew which bug you were referring to
<Daviey> zul: good stuff!
<Daviey> zul: so ipy has been removed from nova in bzr?
<zul> yep
<rbasak> Is installing natty then do-release-upgrade the preferred way to install oneiric? I can't find any docs anywhere
<Daviey> rbasak: That is useful as testing, at this point in the cycle.
<zul> rbasak: easier to start off with oneiric
<Daviey> rbasak: But you can just install from the b1 candidate iso :)
<rbasak> OK so either :)
<Daviey> yup!
<Daviey> rbasak: fresh install, http://cdimage.ubuntu.com/daily-live/current/oneiric-desktop-amd64.iso
<rbasak> Thanks Daviey. This is for a virtual machine, so is there a server ISO too somewhere? I'm still backing up my main machine and I'll probably go maverick->natty->oneiric for that!
<Daviey> rbasak: sure.. just navigate the url above. :)
<rbasak> Daviey: No server iso in that directory :-/
<Daviey> rbasak: http://cdimage.ubuntu.com/ubuntu-server/daily/current/
<rbasak> ah, ok!
 * rbasak will hopefully stop getting lost soon :)
<Daviey> rbasak: If you find out how, please let smoser know how.
<zul> Daviey: stompy will need one as well
<ivoks> SpamapS: hi; can we use 'start on stopped rc-sysinit' for upstart jobs that depend on not-yet-upstartified sysv jobs?
<Daviey> zul: cool, can you update the status on the bug please?
<davros-> this look like it will work? .....  http://pastebin.com/n3y6EfA1
<zul> stompy has been taken care of as well
<Daviey> zul: \o/
<cr3> hi folks, anyone happen to know whether a patch for CVE-2011-3192 will be backported to Apache versions prior to 2.2.20? 2.2.19 is the latest in the repo.
<uvirtbot> cr3: The byterange filter in the Apache HTTP Server 1.3.x, 2.0.x through 2.0.64, and 2.2.x through 2.2.19 allows remote attackers to cause a denial of service (memory and CPU consumption) via a Range header that expresses multiple overlapping ranges, as exploited in the wild in August 2011, a different vulnerability than CVE-2007-0086. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-3192)
<_johnny> hi. anyone familiar with xinetd and stunnel? i've followed a few "how to"'s, but they all involve wrapping ssl around another connection, rather than binary execution - which i'm having a little trouble with. anyone tried something like this? (i have an xinetd service which runs a shell cmd, and want to convert this to a stunnel conf)
<cr3> advanced search for that CVE under the ubuntu apache project in launchpad bugs, even those fix released, doesn't return anything
<zul> cr3: ask the security folk
<zul> cr3: but afaik its being worked on
<jdstrand> cr3: sbeattie is working on updates for hardy - natty. I imagine once those are out he will either do or coordinate with the server team for oneiric
<jdstrand> s/ do / do the patch himself /
<lynxman> alright, who wants to have fun diagnosing a very nasty server bug with me? :D bug #818177
<uvirtbot> Launchpad bug 818177 in linux "HP DL380G5 root disk mounted read-only on boot and boot fails" [High,Confirmed] https://launchpad.net/bugs/818177
<hallyn> smoser: libvirt is just TOYING WITH ME.  I've got /bin/mount wrappered to spit out some info.  Up to the last time that /bin/mount is called during bootup, /dev/ptmx is a symlink.  But after i log in, it's not.
<cr3> jdstrand: thanks, where can I look to keep updated about the state of the patch? might there be a bug or something I could subscribe to?
<jdstrand> cr3: I imagine it is going out today, but sbeattie would be in a better position to say. so, either look at http://people.canonical.com/~ubuntu-security/cve/2011/CVE-2011-3192.html or www.ubuntu.com/usn (there is an feed on the latter)
<Daviey> RoAkSoAx: I think the whole preseeds need review for cobbler TBH.
<Daviey> (orchestra)
<robbiew> Daviey: hey....you still using that Toshiba netbook to keep your feet warm?
<Daviey> robbiew: Nah, i tried to see if it blended.
<Daviey> (it doesn't)
<robbiew> Daviey: can you send it to rbasak, then?
<Daviey> robbiew: Yep.. not sure how useful it will be long term.
<Daviey> rbasak: Can you let me know your address please?
<Daviey> I don't plan to post it until tomorrow now.
<airtonix> i think i've pretty much installed every package with the word xapian in it and yet haystack and xapian-haystack report that i don't have xapian installed.
<RoAkSoAx> Daviey: I don't think there's much review to be done
<RoAkSoAx> Daviey: it is just a simple non questions asked installation
<RoAkSoAx> Daviey: though, minor stuff such as hostname needs to be handled
<davros-> I hate dhcp3 so much right now
<hggdh> SpamapS: why is it that, on RAID1 recovery I get the MDs as a mix of both disks?
<hggdh> SpamapS: 3 mds; one has both disks active, one has disk1 active, and one has disk2 active
<lynxman> hggdh: that sounds like a very bad md crash
<lynxman> hggdh: it's normally either one disk or the other, mixup... eek
<hggdh> lynxman: not really a bad crash. Just the standard ISO test ;-)
<RoAkSoAx> lynxman: dude!!
<RoAkSoAx> lynxman: can we do the thing tomorrow morning? I'm still fighting here with my fixes
<lynxman> RoAkSoAx: don't worry, my machines are as dead as Zed
<lynxman> hggdh: it still terrifies me :)
<hggdh> lynxman: well, TBH, it also worries me... this is why I pinged Clint
<RoAkSoAx> lynxman: hehehe ok
<robbiew> zul: ping
<zul> robbiew: whats up?
<robbiew> zul: are u working on the cobbler/ARM bits?
<robbiew> was thinking rbasak might be able to help
<zul> robbiew: no RoAkSoAx got it working afaik
<robbiew> RoAkSoAx: ^^?
<smoser> hallyn, wow.
<hallyn> smoser: all right, so i've only had two working ways to work around this.
<hallyn> the racy one, is to have /etc/init/ssh.conf re-setup ptmx
<hallyn> the other is to have libvirt not make ptmx a symlink, but a mount
<smoser> ssh doing it is pointless
<hallyn> yes, and racy
<smoser> as its not only ssh that is affected
<hallyn> but the first one :)
<hallyn> ideally a systemtap taplet would show me where devpts is being mounted...
<smoser> libvirt or lxc tools doing *anything* is really garbage to me.
<hallyn> but i'd need linux-image-virtual-dbg which doesn't seem to exist
<hallyn> kees ^
<smoser> just like if my laptop bios decided it should run 'ifconfig eth0 up' for my OS.
<hallyn> smoser: i've tried having /sbin/MAKEDEV create ptmx the right way, but that wasn't enough
<hallyn> right, ideally we find what, in our boot sequence, is creating ptmx the wrong way
<hallyn> and fix that
<hallyn> but failing that, i'm going to push a libvirt patch to do /dev/ptmx as a bind mount instead of a symlink.
<uvirtbot> New bug: #836668 in pymongo (universe) "[FFE] python-mongodb" [Undecided,New] https://launchpad.net/bugs/836668
<hallyn> (it's already doing symlink, so it's not like it's adding to hackiness)
<smoser> hallyn, really, thats just going to break other things.
<smoser> you're changing an expected environment
<hallyn> what is going to break other things?
<smoser> changing libvirt
<smoser> you risk breaking other working containers
<smoser> just to fix something you dont understand.
<hallyn> I don't think so
<davros-> wtf, how can: address 192.168.1.10 not in subnet 192.168.1.0 netmask 255.255.255.255
<hallyn> smoser: there are two ways to provide a correct /dev/ptmx in a container.
<hallyn> of course i can just blame you for insisting on using libvirt-lxc
<hggdh> davros-: well, with the netmask being all ones, it makes sense
<smoser> you can blame ubuntu for choosing libvirt as the prefered hypervisor interface.
<smoser> but blaming me is not going to do much good.
<Daviey> zul: Was enabling the testsuite in python-dingus viable?
<smoser> although i do agree with the choice to use libvirt
<zul> Daviey: i totally missed that part
<hallyn> smoser: it helps me ignore your annoying stmts about 'things i don't understand' :)
<hallyn> anyway i'm off to try stap some more.  bbl
<Daviey> hallyn: stapping is bad
<smoser> hallyn, thats reasonable.
<smoser> i really just dont think that changing a platform to suit an occupant is a good idea.
<davros-> hggdh, what do u mean all 1's as in "1"92."1"68. etc
<Daviey> smoser: I hate to do this, but i really do agree with you.
<hggdh> davros-: netmask, not IP
<just-a-visitor> Because netmask 255.255.255.255 or /32 specifies only 1 IP. You need at least /28 or 255.255.255.240. (In general: 255.255.255.0 will do.)
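(The /32-vs-/24 point above can be checked directly; a small sketch using Python's stdlib `ipaddress` module, with the addresses from the error message:)

```python
import ipaddress

# A /32 mask describes a single host, so 192.168.1.10 falls outside
# 192.168.1.0/32; widening the mask to /24 covers all of 192.168.1.x.
addr = ipaddress.ip_address("192.168.1.10")

print(addr in ipaddress.ip_network("192.168.1.0/255.255.255.255"))  # False
print(addr in ipaddress.ip_network("192.168.1.0/255.255.255.0"))    # True
```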
<zul> Daviey: not really
<Daviey> zul: uh?
<davros-> hggdh, oops
<davros-> thx
<zul> Daviey: you asked me about the python-dingus testsuite being enabled and i just had a look at it and my response is: not really worth it
<jamespage> smoser, Daviey: ec2 testing completed - https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/
<zul> Daviey: since python-carrot is going to go away soon anyways
<Daviey> zul: Ah.. do you want to comment on the MIR bug why?
<zul> Daviey: sure
<davros-> hmmm too bad it still gives out IPs that don't connect to the internet
<zul> Daviey: commented
<smoser> jamespage, do you have thoughts on https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/ARCH=amd64,REGION=eu-west-1,STORAGE=ebs,TEST=multi-part-ud,label=ubuntu-server-ec2-testing/lastBuild/console
<smoser> no route to host ?
<smoser> er... i guess it is UserInitiatedShutdown
<smoser> ah. https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/ARCH=amd64,REGION=eu-west-1,STORAGE=ebs,TEST=multi-part-ud,label=ubuntu-server-ec2-testing/lastBuild/artifact/None/amd64/m1.large/ebs/i-ba545fcc/fd0b2556-95c8-4194-a41b-ecb77335c031-terminated.console.txt it seems to have not come back up after a stop and start.
<smoser> oh wait. it never booted, even better.
<jamespage> nope - it did boot
<jamespage> phase 5/6 is restart and check then shutdown to complete test
<smoser> you mean stop
<smoser> stop and start
<smoser> as opposed to reboot, right?
<jamespage> it booted up first time OK
<jamespage> tested OK
<jamespage> but did not manage the stop/start that well - needed a kick as it would not start
<jamespage> it got there in the end (I think)
<smoser> well, did it?
<smoser> unless initramfs has a timeout before just rebooting, i dont know what would have done it.
<jamespage> yeah it did - you can see the 'validated keys' message in the log - but it took a reboot
<smoser> ah. so your test suite kicked it.
<jamespage> and it took a long time to shutdown in phase 6
<jamespage> thats the one
<jamespage> thats the first time I've ever seen that error tho
<smoser> is the "i'm going to reboot that instance for you" logged anywhere ?
<smoser> yeah, its basically "can't find root". blkid /dev/xvda1 never came back.
<smoser> but then, according to https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/ARCH=amd64,REGION=eu-west-1,STORAGE=ebs,TEST=multi-part-ud,label=ubuntu-server-ec2-testing/lastBuild/artifact/None/amd64/m1.large/ebs/i-ba545fcc/fd0b2556-95c8-4194-a41b-ecb77335c031-terminated.console.txt it never came back up after that.
<jamespage> that might be a time issue on collecting console output
<jamespage> INFO:root:Instance i-ba545fcc not responding after 1200 seconds - rebooting
<jamespage> is the test framework kicking the instance
<jamespage> INFO:root:Host ssh-dss key validated for instance i-ba545fcc (ec2-46-137-133-140.eu-west-1.compute.amazonaws.com) INFO:root:Host ssh-rsa key validated for instance i-ba545fcc (ec2-46-137-133-140.eu-west-1.compute.amazonaws.com)
<jamespage> is it validating the signature of the server post stop/start
<smoser> jamespage, and then https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/ARCH=i386,REGION=ap-southeast-1,STORAGE=ebs,TEST=simple-user-data,label=ubuntu-server-ec2-testing/lastBuild/artifact/None/i386/m1.small/ebs/i-d5e38880/
<smoser> there is no terminated console ?
<jamespage> smoser: one sec - still catching up
<jamespage> looks like both eu-west-1 tests that failed failed in the same way?
<smoser> i think so, other than the i386 did not get -terminated console collected
<smoser> so we really dont know.
<smoser> https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/ARCH=i386,REGION=eu-west-1,STORAGE=ebs,TEST=multi-part-ud,label=ubuntu-server-ec2-testing/lastBuild/artifact/None/i386/m1.small/ebs/i-de545fa8/cbfa90d3-2ab8-4b99-aa5f-36d7cba1b072-terminated.console.txt is very interesting. its the first time we've seen the url error on metadata service after i put in the 'ci-info:' information.
<smoser> that ci-info information is printed on 'cloud-init start' job, which should only happen after eth0 is up.  but eth0 does not have a network address.
<smoser> but it clearly didn't time out, as it was only 2.8 seconds or some such in
<jamespage> did you see the kernel panic at the top of that log?
<smoser> where?
<jamespage> [7334662.954504] Kernel panic - not syncing: Attempted to kill init!
<smoser> oh, yeah.
<smoser> did the test suite kick that ?
<jamespage> yeah
<jamespage> the ap-southeast failure did not actually 'terminate' the machine per se
<smoser> hmm.
<jamespage> the entire test timed out and it scrubbed everything
<smoser> strange.
<Daviey> smoser: directly after the /root/dev/console bug
<smoser> right
<smoser> i just saw that.
<smoser> so that is the bug.
 * Daviey suspects race condition :/
<smoser> and i'm happy to see that
<smoser> as previously i'd only seen it on kexec'd kernels.
<Daviey> if only we could reliably trigger it :/
<smoser> so for that log, it looks like:
<smoser>  * bug 833783
<uvirtbot> Launchpad bug 833783 in initramfs-tools "boot failure: can't open /root/dev/console: no such file" [High,Confirmed] https://launchpad.net/bugs/833783
<smoser>  (which caused the kernel panic)
<smoser>  * harness reboot
<smoser>  * did not get networking, even though 'ifup eth0' finished.
<smoser> is that right, jamespage? i'm not sure about the harness kick.
<SpamapS> ivoks: re the upstart question.. I'd do stopped rc RUNLEVEL=[2345] so you can also start when you switch from runlevel 1 -> 2
<ivoks> SpamapS: ok, thanks
<SpamapS> ivoks: make sure you stop on starting rc RUNLEVEL=[016] so you stop before the depended upon service.
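(Put together, the two stanzas SpamapS suggests would look roughly like this in a job file; the job and daemon names below are placeholders, not from the log:)

```
# /etc/init/myservice.conf -- sketch of the suggested stanzas
description "job that depends on a not-yet-upstartified sysv service"

# start once the sysv rc scripts for a multi-user runlevel have finished
start on stopped rc RUNLEVEL=[2345]
# stop before rc begins tearing things down on halt/single-user/reboot
stop on starting rc RUNLEVEL=[016]

respawn
exec /usr/sbin/myservice-daemon
```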
<jamespage> smoser: yeah - that looks right to me
<smoser> then, *another* root/dev/console error
<smoser> and success.
<smoser> wow.
<smoser> if at first you dont succeed.
<jamespage> keep on rebooting!!!
<jamespage> it will get there in the end!
<SpamapS> hggdh: and re the RAID1 oddness.. that has not happened for me.. and I'm not sure how it would decide to choose the volumes that way ... at what step in the tests did you see this?
<hggdh> SpamapS: I installed, booted with both disks, shutdown, removed first disk, rebooted degraded, shutdown, removed the second disk and readded the first, booted degraded, shutdown, added back the second, got it
<hggdh> SpamapS: on AMD64 (KVM)
<smoser> jamespage, is it possible to get a permalink to https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/ chart ?
<smoser> ah. this looks like maybe https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/7/
<jamespage> https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/7/
<jamespage> snap
<jamespage> thats the specific build record so should not change
<SpamapS> hggdh: you broke your RAID badly
<SpamapS> hggdh: I updated the test procedures late in the 10.10 cycle because they instructed you to do that.. booting with one up, then the other, corrupts the array consistently
<SpamapS> hggdh: its an open bug that we don't have a good solution for, and can be avoided by not doing that. ;)
<hggdh> SpamapS: ah. Then we have to update the instructions :-)
<SpamapS> hggdh: I did!
<hggdh> SpamapS: do you have the bug #?
<SpamapS> bug 557429
<uvirtbot> Launchpad bug 557429 in ubuntu-release-notes "array with conflicting changes is assembled with data corruption/silent loss" [Undecided,Fix released] https://launchpad.net/bugs/557429
<SpamapS> hggdh: step 16 f could perhaps use some clearer wording.
<hggdh> SpamapS: yes, looking at it now
<hggdh> SpamapS: although I did not get the destructive effect on the bug
<SpamapS> hggdh: We may need to more strongly state that this is a known bug that has no easy fix, and so that procedure should be avoided..
<SpamapS> hggdh: you did, you just don't know it.
<SpamapS> hggdh: dig through log files, you'll find some nastiness.
<hggdh> will do :-)
<SpamapS> Its *possible* that the writes were all nearly identical in alignment and placement, but given the way ext4 works, thats highly unlikely.
<SpamapS> Have seen it corrupt stuff in /var/lib/dpkg .. /etc .. initramfs.. its just nasty
<smoser> reported bug 838199
<uvirtbot> Launchpad bug 838199 in linux "initramfs blkid /dev/xvda1 did not return, failed boot" [Undecided,New] https://launchpad.net/bugs/838199
<Daviey> smoser: ^^ might be a udev bug
<smoser> i dont think so.
<smoser> blkid blocked.
<smoser> died on IO
<smoser> kernel
<smoser> you disagree?
<smoser> if there wasnt the second failure, i would have really felt like attributing it to hardware failure on the host.
<SpamapS> Do we have the dmesg/console output for that one?
<hggdh> SpamapS: is http://testcases.qa.ubuntu.com/Install/ServerRAID1 better?
<Daviey> smoser: things like http://git.kernel.org/?p=linux%2Fhotplug%2Fudev.git;a=commitdiff_plain;h=2c64f5898ccf7c2dfc330c6b21540d6080d37659 come to mind
<smoser> Daviey, i'm confused.
<Daviey> smoser: noted.
<smoser> jamespage, so i think the analysis was wrong for bug 838199
<uvirtbot> Launchpad bug 838199 in linux "initramfs blkid /dev/xvda1 did not return, failed boot" [Undecided,New] https://launchpad.net/bugs/838199
<smoser> of the failures at https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/7/
<smoser> its the ap-southeast-1 and the top eu-west- that look alike.
<SpamapS> hggdh: yes! thank you!
<smoser> the bottom eu-west-1 is root/dev/console issue
<jamespage> right
<smoser> i see nothing at all in https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/7/ARCH=i386,REGION=ap-southeast-1,STORAGE=ebs,TEST=simple-user-data,label=ubuntu-server-ec2-testing/artifact/None/i386/m1.small/ebs/i-d5e38880/796746d6-f691-4b8c-b9c7-58c99b3605e4-stopped.console.txt
<smoser> other than "The disk drive for / is not ready yet or not present."
<smoser> and "The disk drive for /tmp is not ready yet or not present."
<smoser> i'm really not sure how to handle that failure.
<Daviey> lynxman: Have you fixed bug 818177 yet?
<uvirtbot> Launchpad bug 818177 in linux "HP DL380G5 root disk mounted read-only on boot and boot fails" [High,Confirmed] https://launchpad.net/bugs/818177
<lynxman> Daviey: nope, I'm still running through it
<lynxman> Daviey: thanks to apw and your suggestions I've reached a small impasse
<lynxman> Daviey: Looks like the culprit is mountall, it sits forever trying to remount the local filesystems, whereas remounting manually works
<lynxman> Daviey: now the question is, how do I know how init is handling the local fs, and in upstart systems how can I modify its parameters to be more verbose
<Daviey> lynxman: can you strace mountall ?
<lynxman> Daviey: doing that right now
<Daviey> lynxman: it might be an issue that foundations might be better to help with TBH.
<lynxman> Daviey: I'll move it to ubuntu-devel then
 * lynxman jumps again
<foods> has anyone on the server team pushed anything to openstack/glance via the new gerrit?
<foods> lol
<adam_g> has anyone on the server team pushed anything to openstack/glance via the new gerrit?
<RoAkSoAx> foooooooods..... I mean adam_g :) :P
<zul> adam_g: nop
<zul> adam_g: is it broken somehow?
<adam_g> zul: it seems to be having trouble with my launchpad acct / openid URL and not letting me get anything into gerrit. i was wondering if someone else might be willing to push this  patch to get bug #828719 moving
<uvirtbot> Launchpad bug 828719 in glance "glance/common/config.py find_config_file() loads config files from "."" [Undecided,Confirmed] https://launchpad.net/bugs/828719
<Daviey> adam_g: Ask Monty to delete your gerrit account?
<adam_g> Daviey: it seems lp is giving two different openid URLs for my account. im waiting to get a duplicate accounts merged into mine so the two urls hopefully do not exist
<bluethundr> hello I am trying to update php on a lucid lynx server, but I am getting a gpg failure http://pastie.org/2460990
<bluethundr> under centos I would just issue a yum --no-gpgcheck but is there a similar ubuntu oriented command? thanks
<Daviey> adam_g: that is ..odd..
<Daviey> two id's shouldn't cause an issue :/
<adam_g> Daviey: it might if gerrit is getting one by resolving your lp ID at login, and another by resolving your name at commit
<Daviey> adam_g: Ahhh!
<philsf> does anybody know if there's a webmail client that shows the threads in linear mode, a la gmail?
<Pici> 'webmail client'?
<philsf> Pici, webmail, imap client
<philsf> (mistyped)
<philsf> meaning, standalone imap client, NOT suid like openwebmail.
<Technicus> Hello . . . how do I configure the firewall of the server edition?
<Pici> philsf: I thought there was a way to do that it mutt, but I haven't used it enough lately to remember the keybindings/settings.
<Myrtti> philsf: I have a feeling I've read somewhere that roundcube might have had something similar, but I really am not sure...
<Myrtti> "Threaded message listing"
<Myrtti> don't know if it's suid tho, *shrug*
<Pici> oh. webmail.  I'm really not doing well answering questions today.
<bluethundr> nm.. got it! all hail google! http://knowledgebase.cc/software/linux/php-linux/php-version-unter-ubuntu-updaten/   :)
<philsf> Myrtti, roundcube is not suid, it's fine. It's actually my first choice for a webmail, for now.
<cloakable> gmail interface is mine <.<
<kirkland> adam_g: ping
<kirkland> adam_g: I hear from agy that he has an "nagios-openstack-plugins" package, that does some nagios based monitoring of openstack
<kirkland> adam_g: i thought this might be a cool thing to squeeze into oneiric
<Daviey> kirkland: he made a package of it?
<adam_g> kirkland: ya! i saw that too
<kirkland> Daviey: that's what he said
<adam_g> kirkland: they are simple checks that use existing plugins.
<adam_g> kirkland: dunno if that should be packaged with openstack, or added to ensemble formulas via a nagios relation on each
<kirkland> adam_g: let's package with openstack
<kirkland> adam_g: i see, i misread
<kirkland> adam_g: Daviey: yeah, i think we should jam that into our openstack packaging
<kirkland> adam_g: Daviey: it's small/trivial enough
<Daviey> Hmm.. i'm not sure TBH.
<adam_g> kirkland: theres some discussion around those packages atm, actually
<Daviey> Does that mean installing on all Orchestra clients?
<Daviey> openstack or not?
<hggdh> we need some help on the server ISO tests: http://iso.qa.ubuntu.com/qatracker/build/ubuntuserver/notcompleted
<kirkland> Daviey: adam_g: i'd see us just shipping the nagios config in /usr/share/openstack/nagios or something
<kirkland> Daviey: adam_g: and nagios would be a suggested package
<adam_g> kirkland: personally dont know if it should be handled in openstack packaging or elsewhere, as this model wont scale well when we start tacking on collectd configs, ganglia, rsyslog, etc
<kirkland> adam_g: fair enough;  openstack-monitoring-plugins
<patrickmw> jamespage, python-jenkins project is working great.  Thanks for pointing me to it
<kirkland> adam_g: as a new package/project
<kirkland> adam_g: and hang -collectd, -ganglia, -rsyslog off of that as binary packages
<RoAkSoAx> smoser: howdy!! does cloud-init always change the hostname to ubuntuhost if not specified in the cloud-init config?
<smoser> no
<smoser> 'ubuntu' is what is in /etc/hostname of a clean image
<smoser> so on first boot, hostname gets set to that, then cloud-init comes by and sets it to the hostname portion of data from the metadata service
<adam_g> kirkland: sounds better to me. it'll just dump some potentially useful stuff into a directory for users to cherry pick from?
<RoAkSoAx> smoser: well I'm installing on bare metal and on first boot, cloud-init is changing the hostname to ubuntuhost
<RoAkSoAx> smoser: 2011-08-31 13:22:53,906 - __init__.py[DEBUG]: handling set_hostname with freq=None and args=[]
<RoAkSoAx> 2011-08-31 13:22:53,918 - cc_set_hostname.py[DEBUG]: populated /etc/hostname with ubuntuhost on first boot
<RoAkSoAx> smoser: however, the hostname was passed to the kernel args via the pxe file
<smoser> hm.. it might do that i guess if there is nothing in metadata. i'd have to look.
<kirkland> adam_g: yeah
<RoAkSoAx> smoser: so in user-data file I should just add
<RoAkSoAx> manage_etc_hosts: True
<RoAkSoAx> hostname: my-hostname-here
<kirkland> adam_g: worst case, /usr/share/doc/examples
<RoAkSoAx> fwereade: ^^
<kirkland> adam_g: just somewhere discoverable in /usr/share
<kirkland> adam_g: we can haggle over the final placement
<adam_g> kirkland: sounds reasonable
<kirkland> adam_g: but the key would be to establish and distribute some best practices, that are immediately usable
<kirkland> adam_g: without being the be-all/end-all configuration of such tools
<kirkland> adam_g: at least for now, users would have to opt-in, and enable them
<adam_g> kirkland: i like that, then any formula can install that package and use the plugins without maintaining them within the formula.
<kirkland> adam_g: if we get to a point where we have consensus on the monitoring best practices, then we can talk about enabling some of them by default (if we ever get there)
<kirkland> adam_g: eggggzactly
<kirkland> adam_g: just putting them *somewhere* would give us a base to improve upon
<smoser> RoAkSoAx, in oneiric if you just set 'fqdn' to fqdn then you'll be set.
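(For reference, the seeded user-data being discussed would look something like this as #cloud-config; the hostname value is a placeholder:)

```
#cloud-config
# sketch of the user-data discussed above; values are placeholders
fqdn: my-hostname.example.com
manage_etc_hosts: true
```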
<kirkland> adam_g: without that, everyone will write and rewrite their own nagios/collectd/rsyslog/ganglia modules and reproduce lots of work
<kirkland> adam_g: with that shipped base, hopefully they'll start from there, improve upon it, and show us their diffs
<smoser> RoAkSoAx, but please open a bug.. if it is not set, cloud-init should not go messing things up.
<smoser> if not set and not available in metadata or cloud-config that is.
<adam_g> kirkland: yah, sounds good
<RoAkSoAx> smoser: will do
<kirkland> adam_g: cool
<RoAkSoAx> smoser: right but cloud-init should respect the parameter that is passed on the kernel to set the hostname either way, right?
<RoAkSoAx> smoser: i'm gonna file a bug anyways
<smoser> passed on the kernel?
<RoAkSoAx> smoser: bug #838280
<uvirtbot> Launchpad bug 838280 in cloud-init "Cloud-init overwriting hostname given to kernel on baremetal" [Undecided,New] https://launchpad.net/bugs/838280
<smoser> RoAkSoAx, it has nothing to do with bare metal
<smoser> what does the meta-data include that you've seeded ?
<RoAkSoAx> smoser: meta-data only has instance-id
<smoser> yeah, that is the thing. since it is not there, cloud-init is falling back to a default. and it should instead fall back to not setting up hostname.
<RoAkSoAx> smoser: right, but isn't it possible to check if the hostname has been set on a kernel parameter? If so, it should not try to set a hostname again?
<smoser> RoAkSoAx, it wont have been set via a kernel parameter.
<smoser> cloud-init will know nothing about that.
<RoAkSoAx> smoser: I guess that the only "easy" way to address this is to set the hostname in the cloud-init meta-data generated by ensemble
<smoser> as it runs on first boot after install
<smoser> and you wont have that kernel parameter then.
<uvirtbot> New bug: #838280 in cloud-init (main) "Cloud-init overwriting hostname given to kernel on baremetal" [Undecided,New] https://launchpad.net/bugs/838280
<smoser> RoAkSoAx, thats not so easy, and probably wrong, as why would ensemble know the hostname ?
<smoser> it should not have to.
<RoAkSoAx> smoser: in cobbler it kinda does
<smoser> i'd rather it not, and rather not solve it that way.
<smoser> let me look at fixing in cloud-init.
<RoAkSoAx> smoser: the thing here is that whenever you deploy with ensemble, the deployed machine ends up having its hostname as 'ubuntuhost' instead of the hostname that was given to the machine via the kernel parameter
<RoAkSoAx> smoser: which means that on first boot, cloud-init changes the hostname from whatever it was set to something that it shouldn't (ubuntuhost)
<smoser> RoAkSoAx, i understand that.
<smoser> but on first boot, you will not have that kernel parameter.
<smoser> so cloud-init could not even read it if it wanted.
<RoAkSoAx> smoser: right, but i *think* it is writing /var/lib/cloud/data/previous-hostname with the correct hostname set by the kernel parameter
<RoAkSoAx> smoser: i will have to double-check that
<smoser> RoAkSoAx, sure, thats fine, but thats only because it finds it there.
<RoAkSoAx> smoser: right, that's why I'm saying: during installation the hostname was set, but when cloud-init initializes post-installation on first boot, it changes the hostname when it shouldn't have
<smoser> i think we're in agreement.
 * RoAkSoAx sometimes wonders if he expresses himself correctly :)
<smoser> RoAkSoAx, can you confirm that it is setting it to the string 'ubuntuhost' ?
<RoAkSoAx> smoser: trying to but ubuntu installations are failing for some kind of archive error
<RoAkSoAx> smoser: give me a sec since I'm gonna try to do it again
<adam_g> lynxman: ping
<lynxman> adam_g: pongie
<adam_g> lynxman: regarding bug #818177, is LVM involved on that system at all?
<uvirtbot> Launchpad bug 818177 in linux "HP DL380G5 root disk mounted read-only on boot and boot fails" [High,Confirmed] https://launchpad.net/bugs/818177
<lynxman> adam_g: definitely
<RoAkSoAx> smoser: btw... cloud-init is still failing to import the PPA, but when I do it manually, it doesn't fail to import it
<lynxman> adam_g: so far what we've seen is that udev is not mounting properly
<lynxman> adam_g: does that ring a bell?
<adam_g> lynxman: im seeing some kind of race here, not sure if its related. but only happens when there are LVs present
<lynxman> adam_g: so looks like its the same one
<lynxman> adam_g: or could be
<lynxman> adam_g: do you have a bug for that race?
<adam_g> lynxman: no, i just hit it yesterday and worked around it but tried creating some LVs to see if it reproduces, and it does
<lynxman> adam_g: so looks like we might be getting somewhere
<adam_g> lynxman: my rootfs is on a raw partition /dev/sda5, i created a VG with /dev/sda7 and a LV.. now boot fails. from what i could tell just now, it looks like pvscan/vgchange was happening before the nodes were created for the PVs in /dev
<lynxman> adam_g: which fits with what I'm seeing as well because udev somehow didn't get mounted on time
<adam_g> lynxman: what do you mean by 'udev didnt get mounted on time'
<lynxman> adam_g: in my case doesn't get mounted at all, but I think this is clearly the race condition
<lynxman> adam_g: udev is needed before it kicks in
<Daviey> bug 838298 is up for grabs for anyone that wants it.. (MIR blocker)
<uvirtbot> Launchpad bug 838298 in python-novaclient "tests installed with package, and to /usr/share/pyshared/tests" [Low,Confirmed] https://launchpad.net/bugs/838298
<smoser> RoAkSoAx, i have no idea why cloud-init would fail import from the ppa.
<smoser> can you show me a box without touching it and let me look around?
<RoAkSoAx> smoser: i'm reinstalling on the test-rig
<RoAkSoAx> smoser: will let you know once it is done
<Daviey> hallyn: Can you create a ~ubuntu-virt vanilla PPA for qemu 0.15?
<Daviey> zul: can you push your libvirt new version to that PPA?
<Daviey> .. want to release note, suggesting people try them.
<zul> Daviey: which one?
<Daviey> zul: You re-merged?
<zul> Daviey: i never got it built properly it always ftbfs
<Daviey> ah.. not a good candidate then :)
<zul> Daviey: i didnt think so :)
<Daviey> zul: i thought it fixed issues for you?
<zul> Daviey: supposedly...but new issues crept up will building it
<zul> s/will/while/g
<Daviey> lovely
<zul> so ill leave that for hallyn in oneirc+1
<Daviey> so i think we can nack the idea of a new version. :)
<zul> i mentally already have
<Daviey> heh
<Daviey> zul: the issues you have.. do you have LP bugs tracking them, linking to upstream commits? :)
<zul> no since its not in the archive yet
<Daviey> uh?
<Daviey> I mean the reasons you wanted a new version?
<zul> Daviey: oh no i havent
<hallyn> zul: didn't you have an arm patch for the libvirt candidate though?
<uvirtbot> New bug: #838298 in python-novaclient (universe) "tests installed with package, and to /usr/share/pyshared/tests" [Low,Confirmed] https://launchpad.net/bugs/838298
<hallyn> Daviey: what precise ppa name should i use?
<zul> hallyn: yeah i do...we are talking about the merge i was working on (0.9.4) not yours
<Daviey> hallyn: I guess qemu015-testing ?
<hallyn> Daviey: but then libvirt can't go in there :)
<Daviey> well yeah... /me is thinking ON HIS FEET
<zul> or trying to :)
<Daviey> The updated information that we are dropping the new libvirt, means we are only testing qemu.. right?
<zul> no...we are dropping the libvirt i was working on...not hallyn's
<Daviey> hang on.. i thought hallyn was just doing qemu, and zul was doing libvirt.. am i wrong?
<RoyK> anyone tested XEN with ubuntu yet?
<zul> yes you are wrong hallyn was working on both qemu and libvirt
<Daviey> RoyK: yes, some
<Daviey> more testing welcome :)
<RoyK> does it require virt extensions in the cpu?
<Daviey> no
<Daviey> hmm, for hvm it does(?)
<RoyK> xen in mainline kernel is most appreciated ;)
<RoyK> I've been trying to use paravirt ubuntu on xencenter and that's not a stroll in the park
<RoyK> hopefully it'll be a bit easier with 12.04LTS when that gets around
<Daviey> RoyK: only if people like you test it during development :)
<RoyK> not that I want that old xencenter beast
<Daviey> RoyK: if you want to test it on oneiric, that would float my boat.
<RoyK> I guess we can release a blade from the xencenter and use that
<RoyK> some dual xeon thing with 16GB RAM
<RoyK> should be sufficient for testing :P
<RoyK> Daviey: any idea if the ubuntu xen will support any redundancy?
<RoyK> say, shared iSCSI storage for the VMs etc
<uvirtbot> New bug: #838309 in samba (main) "package samba 2:3.5.8~dfsg-1ubuntu2.3 failed to install/upgrade: sub-processo script post-installation instalado retornou estado de saída de erro 1" [Undecided,New] https://launchpad.net/bugs/838309
<Daviey> RoyK: No idea, sorry.  As this is the release we are re-introducing xen, it will have both bugs and feature issues.  Your input, right now, could make a difference :)
<smoser> RoAkSoAx, i'm not sure where 'ubuntuhost' is coming from
<RoAkSoAx> smoser: neither am I :S
<smoser> oh. wait. yes i am.
<RoAkSoAx> smoser: please enlighten me
<robbiew> RoAkSoAx: call time?
<smoser>  /usr/share/pyshared/cloudinit/DataSourceNoCloud.py
<RoAkSoAx> robbiew: yep
<RoAkSoAx> robbiew: ready when you are
<RoAkSoAx> smoser: ahh! I see... so we'd need a way to autodetermine the hostname given to the kernel
<smoser> RoAkSoAx, well...
<smoser> it will work if it didn't have that there.
<hggdh> server ISO tests still to be done: http://iso.qa.ubuntu.com/qatracker/build/ubuntuserver/notcompleted
<RoAkSoAx> smoser: right, because previous-hostname does not contain the hostname that was given to the kernel
<Daviey> hggdh: why does iscsi always get left out?
<hggdh> Daviey: IDK. Perhaps because not everybody has an iSCSI setup? But it is always iSCSI...
<smoser> that is not getting set right.
<smoser> let me look at this.
<Daviey> hggdh: iscsi setup is just using two kvm / virtualbox / etc vm's?
<hggdh> Daviey: I would guess it would work, yes
<hggdh> Daviey: problem is time, for me
<Daviey> hggdh: that is what the test case HOW TO suggests :)
<Daviey> hggdh: ack
<hggdh> Daviey: there is a finite number of VMs I can run at the same time ;-)
<hggdh> *very* finite
<Daviey> hggdh: I've had over 50 running on one machine before :P
<SpamapS> Daviey: I was wondering why you were walking funny that day..
<jetole> Hey guys. Does anyone know how to change the domain name on a system? I'm pretty sure it's supposed to be the first domain name listed in /etc/hosts but want to make sure
<Daviey> SpamapS: hah
<Daviey> jetole: /etc/hostname
<jetole> Daviey: no the domain name. not the host name
<jetole> according to the man page for hostname (man 1 hostname), which I am reading now, the dns domain part of a fqdn hostname is set by calling gethostbyname(3) on the set hostname, so if the /etc/hosts file is consulted before dns when resolving names then I can change it there
<jetole> so I guess I was right, it's /etc/hosts
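The convention jetole is describing can be sketched as follows. This is a hedged illustration, not anything pasted in the channel: the hostnames and the parsing are invented for the example; the real behavior comes from the resolver consulting `/etc/hosts` (when `files` precedes `dns` in `/etc/nsswitch.conf`).

```shell
#!/bin/sh
# Sketch of the /etc/hosts layout jetole describes: the first name after
# the address is the canonical FQDN, and the domain is that name with the
# leading (short hostname) label stripped. Names here are illustrative.
#
# A typical line would be:
#   127.0.1.1   myhost.example.com   myhost
hosts_line="127.0.1.1 myhost.example.com myhost"

# The canonical name is the second field; `hostname -d` effectively
# reports everything after the first dot of that name.
fqdn=$(echo "$hosts_line" | awk '{print $2}')
domain=${fqdn#*.}
echo "$domain"
```

With a line like that in place, `hostname --fqdn` should report the full name and `hostname -d` just the domain part.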
<smoser> hallyn, what would you expect 'reboot' to do within a libvirt container ?
<aubre> Hey, is there a patched apache coming out for lucid ?
<Pici> aubre: For what purpose?
<andol> Pici: https://issues.apache.org/bugzilla/show_bug.cgi?id=51714 I assume.
<uvirtbot> issues.apache.org bug 51714 in All "Byte Range Filter might consume huge amounts of memory combined with compressed streams" [Critical,New]
<Pici> andol, aubre: looks like there is a bug filed, but no one is assigned to it: https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/837991 :/
<uvirtbot> Launchpad bug 837991 in apache2 "Update apache2 to 2.2.19-2 to fix CVE-2011-3192" [Undecided,Confirmed]
<aubre> Pici: the apache team has fixed it, it just needs to be rolled out: 2.2.20 http://projects.apache.org/projects/http_server.html
<SJr> I keep getting this on my server e-mailed to root
<SJr> /usr/share/sendmail/sendmail: 1177: /usr/sbin/sendmail-msp: not found
<SJr> how do I fix it?
<hallyn> smoser: I'm not sure.  probably kill all tasks and then hang.  what does it actually do?
<smoser> kill all tasks and then hang. i think
<hallyn> smoser: there's currently no way, without introspection, to tell whether a container wanted to reboot or poweroff.  lxc looks a the container's utmp.
<hallyn> dlezcano is pushing a kernel patchset to fix that.
<smoser> RoAkSoAx, so the only thing I can come up with is that the network isn't really all the way up when this is initially happening
<RoAkSoAx> smoser: right, yeah I was thinking that might be related to a network problem
<smoser> yeah, thats it.
<smoser> completely
<smoser> /etc/init/cloud-init-nonet.conf is at fault
<RoAkSoAx> smoser: is there a way to make cloud init wait for network to be completely up?
<smoser> fix the bug i think
<smoser> :)
 * Daviey hands RoAkSoAx a GOTO 10.
<smoser> RoAkSoAx, http://paste.ubuntu.com/679198/
<smoser> that is what we need for cloud-init starting too early
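The actual change is whatever is in smoser's paste above. Purely as a hedged sketch of the general technique being discussed (making an upstart job wait for the network), a classic stanza looks like this; it is not necessarily what the committed cloud-init-nonet fix contains:

```
# Sketch only -- NOT smoser's actual change (that is in the paste above).
# A common upstart idiom for "don't start until the root fs is mounted
# and some non-loopback interface has come up":
start on (mounted MOUNTPOINT=/ and net-device-up IFACE!=lo)
```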
<smoser> and i think the hostname bug is as easy
<RoAkSoAx> smoser: cool
<RoAkSoAx> smoser: are you committing that fix?
<RoAkSoAx> to your upstart job?
<smoser> yeah. i committed to trunk and pushed.
<RoAkSoAx> smoser: cool
<RoAkSoAx> smoser: will this hit oneiric soon?
<smoser> well, not till after archive opens obviously. but we'll get one there as soon as we can
<RoAkSoAx> smoser: alright, cool,  as long as it is before the end of next month we are all good
<RoAkSoAx> hehe
<lynxman> Daviey: you're handing GOTOs now? wow
<davros> do eth0 and eth1's mask have to be the same? eth0 is 255.255.252.0 eth1 is 255.255.255.0
<lickalott> are you using them on different networks?
<lickalott> *for
<davros> no one is connected to the modem eth1 is the dhcp listen
<uvirtbot> New bug: #827429 in libcgroup (universe) "split into two packages" [Low,Fix released] https://launchpad.net/bugs/827429
<davros> and it gives out ip's that dont have connection to the internet
<lickalott> no router in between?
<davros> no its a rack server 2 nic's in one box
<lickalott> historically eth0 = your NIC and eth1 = secondary or wireless interface
<lickalott> ahhh
<lickalott> then i would say yes
<davros> hmmm ok
<uvirtbot> New bug: #838380 in libcgroup (universe) "non-root should not be able to create files in /sys/fs/cgroup" [High,In progress] https://launchpad.net/bugs/838380
<uvirtbot> New bug: #733553 in python-novaclient (universe) "pyc files in source tarball" [Low,New] https://launchpad.net/bugs/733553
<davros> wonder if that will fix the dead ip's lol
<frewsxcv> i have ubuntu-server running on an ec2 instance. if i stop it, will /mnt erase?
<SpamapS> frewsxcv: probably
<SpamapS> frewsxcv: unless you've moved it to be on top of an EBS mount.
<robos> hello: ubuntu in a production environment. Would you go with 11.04 or 10LTS?
<robos> I'm thinking 10LTS and it's not even close. Thoughts?
<Corey> robos: LTS.
<Corey> robos: Otherwise I'd call you a fool.
<robos> hah, okay. Double checking :-) thanks
<Corey> 10.04LTS.
<lenios> why do you think there are releases twice a year if you're a fool to go without lts?
<patdk-lap> many reasons
<patdk-lap> but unless there is something you specifically need in a newer version, using the lts is much better
<patdk-lap> as you're not forced to upgrade it every year
<SpamapS> Corey: why would he be a fool for running 11.04 ?
<medberry> support life on non-LTS is 18 mo.s IIRC.
<SpamapS> Nobody asked how long the server is expected to live.
<lenios> true medberry
<patdk-lap> don't most servers last till the hardware fails? :)
<SpamapS> Or whether he plans to utilize continuous integration...
<SpamapS> patdk-lap: not these days.. they have this thing called virtualization.. ;)
<patdk-lap> ya, I run the vm's the same way
<medberry> so virtualized servers can last longer than the hardware that's running them (via migration)
<patdk-lap> till they die a horrible death, it just comes less often :)
<SpamapS> There are many who simply deploy constantly into test VMs .. and this includes testing on newer versions of the OS.
<SpamapS> At my previous company it took 2 years of blood sweat and tears to get rid of RedHat 8.0 because people wanted to keep the same old servers alive until they died.
<patdk-lap> heh, I was stuck on fc1 :)
<patdk-lap> couldn't do much about it, new hardware got rhel5
<patdk-lap> been replacing that as I migrate services around
<SpamapS> The new model was simple.. maintain config management.. use it to actualize everything.. when a new OS release came out (CentOS 5 in this case) try out all known tests on it, and if they all pass, migrate by re-birthing all services.
<SpamapS> so I don't know if I'd call him a fool for considering 11.04
<SpamapS> Perhaps he was just inquiring about the quality of 11.04 vs. 10.04
<SpamapS> not the life cycle. :)
<hallyn> stgraber: if/when you get a chance, could you take a look at my proposed fix for 838380?  If there are no problems I'd like to see it pushed after beta release.
<hallyn> haha, just realized the problem with the sshd lxc template in ubuntu.  hillarious
<soren> hallyn: pray tell :)
<jj995> I was reading http://serverfault.com/questions/190/zfs-vs-xfs/72743#72743 and saw "using lvm snapshots and xfs on live filesystems is a recipe for disaster especially when using very large filesystems." -- have anybody had good/bad experiences taking snapshots with ~8TB on LVM2?
<hallyn> soren: it wants to bind-mount the template itself onto the containers /sbin/init.
<hallyn> the template is under /usr/lib/lxc/templates. The container's rootfs is getting set up under /usr/lib/lxc
<hallyn> so we've overmounted it with the root before we get around to bind-mounting it :)
<Corey> SpamapS: I'm a big fan of long term stability for servers.
<Corey> Particularly given some shops' reluctance to touch things in production.
<SpamapS> Corey: me too.. I think its the simpler approach for most cases.
<SpamapS> Corey: BUT it has massive draw backs for disaster recovery and scaling.
<SpamapS> Which is what CI does well
<Corey> SpamapS: Right, but that requires an ephemeral nature to many of your VMs.
<Corey> "This current one will be shut down and replaced in another two weeks so who cares about patching it" for instance.
<Corey> Requires a bit of a different development mindset. :-)
<hallyn> soren: (opened bug 838410 on it, wont' be fixing it today)
<uvirtbot> Launchpad bug 838410 in lxc "sshd template is broken" [Medium,Confirmed] https://launchpad.net/bugs/838410
<SpamapS> 2 weeks is a pretty crazy life cycle
<SpamapS> 6 months is quite doable
<SpamapS> Corey: the idea is that because you're duplicating the process with every change to the system, you don't care if it lives for 2 weeks or 2 years.
<SpamapS> Having effective, realistic staging capabilities is pretty tricky tho
<uvirtbot> New bug: #838410 in lxc (main) "sshd template is broken" [Medium,Confirmed] https://launchpad.net/bugs/838410
<Corey> SpamapS: Yeah, it's not trivial for sure.
<soren> SpamapS: It's well worth the effort IMO, though.
<Corey> I find that LTS gives added flexibility for "bad practices." :-) I say this while working on an EOL'd environment.
<soren> SpamapS: But yes, very much non-trivial.
<SpamapS> I kept 6 redhat 8.0 servers, behind two firewalls (border and backend), alive until .. well.. crap.. I left in 2010 and they were still running.
<alamar> :D
<SpamapS> Don't tell me about EOL environments. ;)
<TheEvilPhoenix> lol
<alamar> Linux chupachups 2.4.17 #3 Tue Jan 15 16:31:01 CET 2002 i686 unknown
<alamar> ;)
<alamar> it'S scary what's out there..
<SpamapS> alamar: dayum ;)
<stgraber> hallyn: looks good. I'll try to remember to sponsor that tomorrow. If I don't, just poke me.
<frewsxcv> does anyone know if the WSGI module is installed by default on ubuntu?
<hallyn> stgraber: great, thanks
<patdk-lap> heh, there are craploads of companies with nt 3.5 systems still up and running
<patdk-lap> just a few months ago, someone came out and said they had a stockpile of EOL sql servers, some bank
<elz89> frewsxcv: are you still around? I had to install WSGI module on Ubuntu Server 11.04 when I was setting up a firefox sync server a while back.
<frewsxcv> elz89: i learned that mod_wsgi is not what i want. uwsgi is the officially supported module and is included by default with versions >0.8.6
<frewsxcv> i grabbed nginx from the PPA
<lynxman> RoAkSoAx: hey, adam_g fixed the problem we were having, the machine should be ready for getting onwards tomorrow morning :)
<airtonix> i'm having some difficulty getting xapian installed and recognised by haystack. http://invisibleroads.com/tutorials/xapian-search-pylons.html << says install xapian-core xapian-bindings-python
#ubuntu-server 2011-09-01
<elz89> frewsxcv: I was not aware of uwsgi at the time of configuration. Maybe I will look into changing that. I also did not know of nginx, but I am going to give it a try on aspare
<airtonix> nice xapian packages weren't compiled with python
<airtonix> awesome
<elz89> *a spare arch server I have just set up.
<airtonix> i've got a ubuntu 10.10-server virtual machine running inside virtualbox 4.1 headless on a ubuntu 64bit 10.10-server host. it uses bridged networking. when i use ssh to do stuff on the 10.10 vm guest the terminal response to typing is __reallllllyy__ slow, it uses 2gb of ram and has two cores. the server is an i7 3ghz with 8gb of ram. any ideas?
<airtonix> http://ubuntuforums.org/showthread.php?t=1752906
<airtonix> gahhh that did nothing
<airtonix> still slow as a muckraker
<airtonix> type a letter, wait 2secs, letter appears
<twb> airtonix: what diagnostics have you already tried?
<airtonix> not even sure what to do
<airtonix> my other two vm guests ( also ubuntu-server 10.10) don't have this delay response with ssh sessions i have running. i have all three running at the same time and all have ssh sessions going
<twb> Well, for example, have you looked for suspicious resource usage in top?  Have you tcpdumped?  Have you compared the /etc/ trees for different settings?
<twb> Just to be clear, you're sshing *from* a normal machine *to* the VM, right?
<twb> Your terminal emulator isn't running inside an emulated system
<airtonix> <twb> Just to be clear, you're sshing *from* a normal machine *to* the VM, right? <<<< yes
<twb> The particular failure case you're getting sounds to me like swap-thrashing or similar
<twb> It's not likely to be something like vda being slow because it's a qcow2 when the others are raw, because that would only have an effect after you hit the enter key and sent the command, whereas you're saying this is happening on each keypress
<twb> It could also be that the ssh line is flooded, hence tcpdump.
<twb> Could be something obscure like ARP (ip neigh) caching isn't happening so every TCP packet needs an ARP handshake first.
<linocisco> hi
<airtonix> twb yeah i don't even know how to deal with any of that
<linocisco> how to download and burn all required packages on CD/DVD and reuse that time to time without internet
<linocisco> ?
<qman__> linocisco, every package you install from the internet gets downloaded to /var/cache/apt/archives/
<qman__> unless you manually delete them or run apt-get clean, they will still be there
<qman__> so if you have a system set up the way you want, just copy all the files from there
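What qman__ describes amounts to something like the following. A minimal sketch: `/var/cache/apt/archives` is the standard apt cache path, but the destination directory and the final `dpkg -i` path are illustrative only.

```shell
#!/bin/sh
# Sketch: collect the .debs apt has already downloaded so they can be
# burned to CD/DVD and reinstalled later without internet access.
# /var/cache/apt/archives is apt's standard download cache; the
# destination directory here is illustrative.
dest=/tmp/offline-debs
mkdir -p "$dest"
cp /var/cache/apt/archives/*.deb "$dest"/ 2>/dev/null || true

# Later, on the offline machine, install straight from the copied files:
#   sudo dpkg -i /media/cdrom/offline-debs/*.deb
```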
<DanaG>   create_connection_server_info failed: NT_STATUS_ACCESS_DENIED
<DanaG> I'm getting that when trying to point Windows Backup at my home share.
<DanaG> oops, wait, fixed that, but now:
<DanaG>   elitebook (192.168.1.143) connect to service dana initially as user dana (uid=1000, gid=1000) (pid 15214)
<DanaG> Nothing after that.
<lickalott> you have a samba share setup and have set everything appropriately?
<DanaG> I can write to files fine, but the "permissions" interface in Windows shows funky results (owner only has "special").
<DanaG> It may very well be Windows failing, but I can't tell that.  I can create files, but not "change permissions" via samba, as I assume the backups thing wants to do.
<lickalott> change the permissions on the samba share
<qman__> yeah
<qman__> samba creates the 'share' permissions that windows expects
<qman__> and approximates NTFS security
<qman__> I couldn't tell you off hand what option precisely you need
<qman__> but there's a lot more of them than the stock configuration lets on
<DanaG> I try to change the permissions, I get "access denied".
<DanaG> How do I get more, but not horrendously much more, debug info?
<DanaG> Hmm, "access mask" bits are all unset on the server's SMB response.
<DanaG> er, "read attributes" and "sync" are set.
<DanaG> So, looks like it's a permissions issue.
<DanaG> Not quite "access denied", but instead, "you didn't give me the permissions I asked for."
<DanaG> er, wait... they're unset on the REQUEST, as well.
<DanaG> Create options: .... .... .... .... .1.. .... .... .... = Backup Intent: This is a create with BACKUP INTENT
<airtonix> twb: through experimentation i think i reduced the problem but not eliminated it by using sudo service <something> stop on a bunch of stuff including byobu (which was spiking now and then)
<linocisco> how to burn files  in  /var/cache/apt/archives from server ?
<DanaG> okay, so it's samba being funky.
<DanaG> okay, so it seems things owned dana:staff are erroring, but things owned dana:dana are not.
<DanaG> So, my permissions are funky.
<DanaG> Should I try Samba 4?
<DanaG> oh, and it's on ZFS, by the way.  chacl not supported.
<DanaG> Homes are on zfs, that is.
<DanaG> Because btrfs is useless without a fsck that can actually fix.
<DanaG> To heck with it; I'm trying samba4.
<DanaG> GNUTLS failed to initialise - Error while reading file.
<DanaG> Unable to save certificate in /var/lib/samba/private/tls/cert.pem parent dir exists ?
<DanaG> That's a copy-and-paste.
<DanaG> okay, scratch samba4.
<DanaG> weird... I uninstalled 4 and reinstalled 3, and did smbpasswd, and now it works.  Go figure.
<twb> airtonix: byobu is just a wrapper around screen
<twb> airtonix: it shouldn't be doing anything interesting
<lambda_x> anybody knows why adding -net user,hostfwd directive makes vm use 100% cpu and make starts to choke whole server?
<lambda_x> (kvm, 11.04)
<twb> lambda_x: that's for punching holes throught the userspace NAT?
<twb> Yeah
<twb> What was the full command line?
<lambda_x> one sec
<lambda_x> twb: /usr/bin/kvm -monitor unix:/var/run/kvm/dummy.socket,server,nowait -vnc 127.0.0.1:5908 -name dummy,process=dummy -rtc base=localtime -M pc-0.12 -balloon virtio -m 512 -drive file=/dev/vol0/dummy-boot,if=virtio,index=0,media=disk,boot=on,cache=none -drive file=/dev/vol0/dummy,if=virtio,index=1,media=disk,boot=off,cache=none -vga cirrus -S -daemonize -pidfile /var/run/kvm/dummy.pid -boot c -net nic,vlan=0,model=virtio,macaddr=fe:16:3e:75:89:cd -net tap,vlan=0
<twb> Well, unless you are adding another -net nic, I guess it's confused as shit trying to both DNAT and TUN the one virtual NIC.
<twb> Er, s/TUN/TAP/
<lambda_x> well, so it is impossible?
<twb> You can't have both a userspace NAT and a tun on the same virtual NIC
<lambda_x> to have tapped interface and redirect port?
<twb> s/tun/tap again
<twb> If you want to do port redirection of a TAP you do it in the firewall of the host OS
<twb> Because that's where the NAT, if any, will be
<twb> userspace NAT is in kvm/qemu for when you are running it as a non-root user and don't have permission to TAP
<lambda_x> well, that is good idea... :)
<lambda_x> tyvm twb
<twb> You could've worked it out if you stared at the -net manpage entries for a few more hours
<twb> That's what I did
<twb> It is, sadly, not very obvious
<lambda_x> true
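twb's advice, sketched as the host-side rules it implies. Everything specific here is illustrative (guest address 192.168.122.10, host port 2222); the rules are built as strings and printed rather than applied, since applying them needs root on a real host.

```shell
#!/bin/sh
# Sketch of twb's point: with a tap NIC, port redirection belongs in the
# host OS firewall, not in kvm's userspace NAT (-net user,hostfwd).
# Guest address and ports are illustrative; run the printed commands as
# root on the host to forward host port 2222 to the guest's ssh port.
GUEST=192.168.122.10
HOSTPORT=2222

DNAT_RULE="iptables -t nat -A PREROUTING -p tcp --dport $HOSTPORT -j DNAT --to-destination $GUEST:22"
FWD_RULE="iptables -A FORWARD -p tcp -d $GUEST --dport 22 -j ACCEPT"

echo "$DNAT_RULE"
echo "$FWD_RULE"
# Forwarding between interfaces must also be enabled:
echo "sysctl -w net.ipv4.ip_forward=1"
```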
<Alan> I was hoping somebody might help me... I'm having issues with disk usage quotas....
<twb> Alan: you forgot to mount it -ousrquota
<Alan> no I didn't :P
<Alan> the quota is working, it's just not correctly
<Alan> so according to dumpe2fs, the block size is 4k
<twb> Wrong kind of block
<twb> (IIRC)
<Alan> ... seriously?
<Alan> what kind of block is a filesystem quota going to use if not a filesystem block?
<twb> http://paste.debian.net/128041/
<twb> That's a working hard quota of 1GiB per user on /home, on ext4 on lucid
<twb> (The inode limits are currently way too high; ignore them.)
<Alan> ok, well thanks for that
<Alan> but I totally want to slap whoever decided that "block" was the correct terminology there
<twb> It's block limit as in blocks vs. inodes
<jamespage> Daviey: want me to pickup the iscsi tests?  still have my test rig intact
<Alan> when you say "block limits" in relation to a filesystem, how is it supposed to mean anything other than filesystem blocks?
<Alan> twb: i understand the difference, but it's still ambiguous terminology
<Alan> i.e. it implies the wrong units
<twb> Alan: the way you should have worked this out, is to set it to a number like 1k or 1M, then dd /dev/zero into a file and see where it chokes
<Alan> especially since the inode limit is the number of inodes
<Alan> the block limit is intuitively the number of blocks...
<twb> FWIW not all filesystems are block-oriented in the first place
<Alan> twb: i thought i had done that, effectively...
<twb> Alan: maybe your test was bogus because quotaon wasn't on?
<Alan> it was on
<Alan> it was an unscientific test though
<twb> Now you know better
<Alan> I set the quota to 1024, and started SFTPing stuff until it errored
<Alan> which it did
<Alan> I didn't actually check exactly how much space i'd used though...
<Alan> so... are the M/G suffixes valid when specifying limits?
<twb> Probably not
<Alan> or just there for convenience in the output?
<Alan> damn :(
<twb> quotatools is from the dark ages
<Alan> so... those are 1k blocks then?
<twb> I think they are 1b blocks
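For the record, as far as I can verify the quota tools count block limits in 1 KiB units regardless of the filesystem's own (here 4 KiB) block size. A minimal sketch of twb's suggested dd experiment, with the arithmetic that follows from 1 KiB units; the user name and mount point are illustrative, and the setquota/dd lines need root plus an active quota setup, so they are shown as comments:

```shell
#!/bin/sh
# Sketch: quota block limits are in 1 KiB blocks, so a hard limit of 1024
# means 1 MiB of data, not 1024 filesystem (4 KiB) blocks.
#
# twb's empirical test would be, as root (user/mount illustrative):
#   setquota -u alice 1024 1024 0 0 /home
#   su alice -c 'dd if=/dev/zero of=/home/alice/fill bs=1k count=2048'
#   # dd should fail with "Disk quota exceeded" after roughly 1 MiB.
limit_blocks=1024
limit_bytes=$((limit_blocks * 1024))
echo "$limit_bytes"
```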
<linocisco> hi
<linocisco> i have only P3 with 2*128MB SDRAM, can I install ubuntu server on that Compaq Ex815 computer ?
<jmarsden> linocisco: I think 10.04 LTS should install on that.  I have installed it on machines about that age and capability.  But the best way to find out is to try it :)
<linocisco> jmarsden: thanks man
<jmarsden> linocisco: You're welcome.  have fun :)
<linocisco> jmarsden: that is obsolete in the office, so i got it as a present
<jmarsden> Yes, it's not much use in a normal office environment as a workstation any more :)
<jmarsden> 3am here, I need to sleep ... goodnight.
<linocisco> jmarsden: I am trying to make the most out of it. with pure CLI server. I want to install mail server on it
<linocisco> jmarsden: ok. good nite
<acalvo> how can I disable the xorg server in a livecd to not start at boot time?
<acalvo> trying to create a custom livecd
<jamespage> Daviey: just looking at the bug queue - does bug 835625 need to be noted for beta-1
<uvirtbot> Launchpad bug 835625 in cyrus-sasl2 "package libsasl2-2 2.1.24~rc1.dfsg1+cvs2011-05-23-4ubuntu2 failed to install/upgrade: libsasl2-2:i386 2.1.24~rc1.dfsg1+cvs2011-05-23-4ubuntu2 (Multi-Arch" [Undecided,New] https://launchpad.net/bugs/835625
<jamespage> ?
<blinkiz> I installed a virtual machine with ubuntu 11.04 server. I can not get the 70-persistent-net.rules to be generated. What am I doing wrong?
<xokvictor_> join #ubuntu-server-ru
<xokvictor> Hi people, i have some problem with UEC. How i can make failover cloud controller?
<just-a-visitor> blinkiz: Check http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=562211 if it applies.
<uvirtbot> Debian bug 562211 in udev "udev: 70-persistent-net.rules is not generated" [Important,Open]
<zul> smb: so those two patches from xen hg will fix your hypervisor bug?
<smb> zul, Yes, first one fixes the disk, second one the NIC
<zul> smb: cool ill get it in after the beta
<smb> zul, Excellent. With those we can at least boot kernels before Oneiric.
<smb> in hvm mode
<linocisco> hi
<linocisco> how to check if my desired domain name is taken or free?
<zul> morning btw
<linocisco> I am setting up ubunt mail server with dpkg-reconfigure postfix
<linocisco> please help in choosing some options
<linocisco> should I choose Procmail for local delievery ? Yes or No ???
<linocisco> hi
<linocisco> hi all
<linocisco> i got error in postfix
<linocisco> what do I do ?
<Pici> Posting the error would be a good start. Use a pastebin though.
<linocisco> telnet mail.example.org 25 = could not resolve .xxxxxx.org/25:Temporary failure in name resolution
<linocisco> what do I do ?
<Pici> thats not really a postfix error, it sounds like either your DNS is malfunctioning or there is an issue with your domain registration.
<linocisco> Pici: I have no config on network cards and no DNS server. that server is now standalone without internet
<linocisco> Pici: how can I make it work?
<linocisco> Pici: giving IP address can solve ?
<Pici> linocisco: is your server's hostname '.xxxxxx.org' or whatever?
<linocisco> Pici: yes. let say example.org
<linocisco> Pici: how can I check host name ?
<Pici> linocisco: the command is 'hostname'
<linocisco> yes.
<linocisco> Pici: yes. I see
<linocisco> Pici: it is hostname=ubuntuserver
<Pici> linocisco: so use that name instead of '.xxxxxx.org'
<jamespage> zul, Daviey, RoAkSoAx: any of you guys have a bit of time to sponsor a few jenkins related merge proposals for me?
<zul> how many beers are we talking about? ;)
<linocisco> Pici: I got it
<linocisco> Pici: i got telnet  = fine. ehlo xxxx.org = fine
<linocisco> Pici: btw, can I install dovecot from ubuntu server CD ?
<ersi> :|
<jamespage> zul: beer (>=4) | beer (< 5)
<jamespage> :-)
<ersi> In about two hours, we will be beering at the office \o/
<zul> jamespage: oh hell ok....give them here
<ersi> Thanks for reminding me :-)
<jamespage> zul: ta
<Daviey> jamespage: oh joy, multiarch fallout.
<jamespage> Daviey: well maybe
<jamespage> the last change was a multiarch change
<jamespage> but I can't reproduce ATM
<Daviey> :/
<linocisco> hi
<linocisco> how can I add apt proxy on 10.04 LTS server?
<linocisco> i found no apt.conf file under /etc/apt
<linocisco> hi
<uvirtbot> New bug: #838768 in nova (universe) "Should default to --use_deprecated_auth" [Undecided,New] https://launchpad.net/bugs/838768
<Daviey> ttx: ^^ seen that?
<linocisco> i found no apt.conf file under /etc/apt
<linocisco> i found no apt.conf file under /etc/apt
<ttx> Daviey: ack
<ersi> linocisco: Just add the CD into /etc/apt/sources.list
<linocisco> ersi: uncommenting ??
<ersi> Yes. And if it's wrong, correcting it.
<linocisco> ersi: why would I need CD while my server is connected to internet ?
<ersi> I'm sorry, I can't help you. I'm about to lose it when I'm trying
<Daviey> Hmm.. i had better get the TechnicalOverview for Beta 1 done, and known bugs.  Anything jumping out at people for stuff to include?
<linocisco> ersi: lose what?
<Daviey> RoAkSoAx: ensemble api work, all works now?  Is that something i can add?
<ersi> Control of my mind, and my ability to be helpful. I'm gonna go do something else now. Good luck.
<linocisco> ersi: ok
<linocisco> hi  all. I lost apt.conf file under /etc/apt
<linocisco> what do I do ??
<linocisco> hi  all. I lost apt.conf file under /etc/apt
<cloakable> shocking
<linocisco> hi all please help me
<linocisco> hi all
<soren> How can you lose a file?
<linocisco> soren: I dont know . I found apt.conf.d folder instead of apt.conf file
<linocisco> soren:  because I need to point correct apt proxy to download files with apt-get install command
<linocisco> soren: actually I didn't configure apt  and network during installation
<soren> There is no apt.conf by default.
<soren> Just create it.
<linocisco> soren:  that's all ?
<soren> Well...
<soren> You have to put your configuration in there.
<soren> Or put it in a file in apt.conf.d.
<linocisco> soren: Acquire::http::Proxy myproxy.net:8080 ??
<soren> Have you tried?
<soren> Try before you ask.
<linocisco> soren: thanks a lot. I put it outside the apt.conf.d folder. it is working now
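For reference, the directive linocisco ended up with is apt's standard proxy setting; written out with quotes and a URL scheme it would look like this, whether in /etc/apt/apt.conf or a fragment under /etc/apt/apt.conf.d/ (proxy host and port are hypothetical):

```
// /etc/apt/apt.conf.d/01proxy (hypothetical host and port)
Acquire::http::Proxy "http://myproxy.net:8080/";
```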
<linocisco> soren:  you are long term ubuntu server geek. I saw your name in mailing list for such a long time.
<soren> linocisco: Yes. I'm old.
<soren> It's true.
<hallyn> Daviey: we both forgot.  I can't create an ubuntu-virt ppa.
<soren> I can.
<zul> soren: you are decreped
<linocisco> soren: I am now updating with apt-get update. then I will try to install (which package) to work with microsoft outlook
<soren> zul: I'm almost sure that's not a word.
<linocisco> soren:  some said. I need to install dovecot-common
<soren> No idea. I live in the land of fairies and unicorns where there are no Microsoft clients.
<zul> soren: you are right its spelled decrepit :)
<soren> hallyn: Do you want me to?
<soren> hallyn: Wait, hang on.
 * hallyn hangs on
<soren> hallyn: There. You can now.
<soren> hallyn: Have fun.
<Daviey> soren: May i have that foo as well please ? :)
<linocisco> soren: do you know what packages we need to install to work with microsoft outlook ? I am setting up mail server to be used by microsoft outlook client
<soren> Daviey: I may be bribed into bestowing such foo upon you, yes.
<soren> linocisco: Nope, no idea.
<linocisco> soren: that is real difficult thing I even can' t find on youtube
<soren> On... youtube?
<Daviey> soren: I'll let you buy me a drink at ODS>
<soren> Daviey: You know how to make me feel special.
<nigelb> Reverse psychology at work right there.
<Daviey> soren: There is $50 in my drawer, feel free to take that on your way out.
<soren> Um.
<soren> No.
<soren> Just.. no :)
<hallyn> soren: thanks :)
<soren> Certainly.
<roasted> if I'm on an Ubuntu LTS running as a server, no network manager, should I have my DNS servers within the interface file AS WELL AS etc resolv? Or just etc resolv?
<roasted> Because I'm having some DNS issues on my ubuntu thin client LTSP server and I can't trace if it's network config or the server itself.
<hallyn> Daviey: so do you still think i should call it qemu-0.15, or should i make an 'upstream-candidate' ppa for more general use?
<hallyn> i fear having ubuntu-virt end up like my acct with too many undeletable ppas...
<soren> roasted: Using DHCP?
<zul> if you call it upstream-candidate then you can reuse it for other things later
<roasted> soren, DHCP is handled by windows. DHCP is disabled on the server. The server has a static IP for both interfaces.
<soren> roasted: Then put it in resolv.conf
<roasted> soren, it is... :(
<Daviey> hallyn: upstream-candidate sounds ok to me
<roasted> soren, I really can't figure this one out.
<Daviey> fishy-software-is-it-crack also sounds OK
<roasted> soren, now all of the sudden its working. what the...
<soren> roasted: Whatever is in resolv.conf is what counts.
<roasted> soren, It's strange. The clients themselves cannot ping by host name, however my laptop (plugged into a line that feeds into the Ubuntu server) can ping by hostname fine.
<soren> Er..
<soren> Back up a bit.
<soren> What are you trying to do, exactly? This doesn't sound like DNS at all.
<roasted> soren, basically I have a thin client server here. IT's 10.04 64 bit. It has 2 network ports. each one is subnetted into a different IP range from the main network.
<roasted> soren, both switches are Dell 3548's. One network port to one switch. Then the switch connects to 30 systems in the lab.
<roasted> NIC 1 - 3548 - 30 thin clients that PXE boot to it. 10.52.18.X network on triple 255 subnet.
<roasted> NIC 2 - 3548 - 30 thin clients that PXE boot to it. 10.52.17.X network on triple 255 subnet.
<roasted> The main network is 10.52.X.X (third octet being whatever building it is) on 255.255.252.0.
<roasted> I'm just kind of confused on where it could be.
<roasted> The LTSP guys are also telling me I could put a DNS entry in the LTS.conf file, which controls how LTSP operates.
<roasted> one of the LTSP devs told me to do that, and then said: I have found that in some of the versions, we don't have the initramfs code to update resolv.conf on the client from dhcp
<roasted> perhaps I should try it quick
<soren> You're confused where *what* could be?
<roasted> soren, the issue.
<roasted> soren, sec, Im trying the devs suggested fix.
<soren> And I'm asking what the issue is.
<roasted> yeah, no dice.
<roasted> on the clients, I cant ping by hostname.
<roasted> however, I can plug in my laptop to a port the client was plugged into, and I can ping by hostname then
<Pici> It sounds like there may be an issue with your dhcp server.
<soren> You can't ping what by hostname?
<soren> Can you ping google?
<Pici> nm, that line of questioning is probably better.
<roasted> oh wow
<roasted> I can ping google. I can't ping our file server.
<ersi> There's your problem
<RoAkSoAx> Daviey: ensemble api?
<soren> hallyn: Does this look familiar? error: Unable to create cgroup for instance-00000002: No such file or directory
<roasted> looks like this may be it
<roasted> http://ubuntuforums.org/showthread.php?t=1626167
<hallyn> soren: yes
<hallyn> soren: stop libvirt-bin; start libbvirt-bin
<soren> hallyn: What gives?
<hallyn> well lemme know if that fixes it first :)
<soren> It did.
<soren> Thanks :)
<roasted> bingo. fixed.
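The likely explanation for the laptop-vs-thin-client difference, consistent with the forum thread linked above: hosts that get their DNS settings from DHCP usually also receive a search domain, while a statically configured resolv.conf has to list one explicitly before short hostnames will resolve. A sketch with hypothetical values:

```
# /etc/resolv.conf (hypothetical nameserver and domain)
nameserver 10.52.1.10
search example.internal
```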
<RoAkSoAx> Daviey: all the orchestra stuff is in ensemble trunk now
<hallyn> soren: bug 828061
<uvirtbot> Launchpad bug 828061 in libcgroup "cgroup-bin prevents libvirt from starting" [Critical,In progress] https://launchpad.net/bugs/828061
<hallyn> soren: best workaround is swap cgroup-lite for cgroup-bin
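The swap hallyn recommends, as console commands (requires root; package names as given in the channel):

```
$ sudo apt-get install cgroup-lite   # mounts the cgroup hierarchy without the cgconfig daemon
$ sudo apt-get remove cgroup-bin     # removes the racy reclassification step at boot
```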
<soren> hallyn: Otherwise I'll have to restart libvirt-bin every time I.... what?
<hallyn> I need to insert a different barrier between cggconfig and libvirt upstart jobs
<hallyn> every time you boot
<soren> Oh.
<hallyn> libvirt-bin just needs to start a bit later
<soren> It's because cgroups-bin reclassifies the existing libvirtd or something?
<hallyn> right,
<soren> Hmm.
<hallyn> and i think most often, cgconfig gets the list of pids,
<hallyn> then libvirt spawns off 8 threads,
<hallyn> then cgconfig reclassifies the pids it has
<hallyn> which do not include the newly spawned threads
<hallyn> so really while cgconfig starts, the whole system should be frozen
<hallyn> the libcgroup stuff is going to cease to be developed, with functionality going into systemd,
<hallyn> and i think that's the right move.
<soren> Oh.
<hallyn> the run-time daemon thing is inherently racy.  there are unsolvable races requiring a different kernel-user api
<soren> Do you have a plan for fixing this?
<hallyn> cgroup-lite :)
<hallyn> but no, i could stand to have a chat for ideas if you're up for it
<hallyn> could help to pull in speakman
<hallyn> uh, SpamapS  i meant
<hallyn> soren: to be more precise, the libvirt one i can fix, but the general case is problematic
<hallyn> hm, maybe the libcgroup-libvirt-waiter.conf should turn into a more generic libcgroup-pause-all-daemons-using-libcgroup.conf
<speakman> :p
<hallyn> speakman: apologies :)
<speakman> hallyn: no worries :D
<soren> hallyn: That was what I was going to suggest :)
<afeijo> hi guys, I have a new acct that is not getting any emails. How to proper set that acct inbox? using postfix here
<rbasak> hi utlemming, Daviey has asked me to sort out the qemu arm image - I understand you built the binary image for it?
<utlemming> rbasak: yup
<rbasak> I can't get it to run :-)
<hallyn> soren: a more generic waiter job?  I'm not sure of the best way to get that done though
<utlemming> rbasak: well, you're my new best friend
<rbasak> Daviey wasn't sure if it had ever worked
<rbasak> lol
<rbasak> I'd like to start by reproducing your build
<utlemming> rbasak: the image works -- I used qemu-system-arm to get the image created
<utlemming> rbasak: the problem is the kernel
<rbasak> lol
<rbasak> well that's what I was trying to get to run :)
<rbasak> I'd said to Daviey before that the userspace will probably all work and the issue is the kernel :-P
<utlemming> rbasak: initially we were told to use the linaro-vexpress kernel, but that kernel is not compiled by Ubuntu
<utlemming> so we got told to use the OMAP kernel. The OMAP kernel requires a bootloader (u-boot) I believe to be installed on the image
<hallyn> soren: i'll send an email to ubuntu-server with the problem and what we're dong so far
<rbasak> I think we need a versatileab kernel
<rbasak> qemu-system-arm doesn't appear to support OMAP
<utlemming> rbasak: :)
<utlemming> rbasak: hang out in #ubuntu-arm and they'll tell you otherwise :)
<utlemming> what you're hitting is that the OMAP and Beagle emulation requires a special boot loader installed on the image
<rbasak> But qemu refuses to start without a kernel image
<rbasak> (in ARM mode)
<rbasak> Oh, hang on
<rbasak> Ah, I should be using -M beagle or something?
<utlemming> yes
<utlemming> then the images need to have u-boot or some sort of firmware installed
<rbasak> OK I follow
<rbasak> is there a spec for what the bootloader built into -M beagle is doing?
<utlemming> http://elinux.org/BeagleBoard
<Daviey> utlemming: Oh? I thought qemu-system-arm could exec the kernel directly?
<utlemming> Daviey: yeah, it can, but the OMAP kernel says, "no boot device found"
<robos> hello, am I seeing this right? Ubuntu 10LTS will install solr version 1.4?
<soren> robos:       solr | 1.4.0+ds1-1ubuntu1 | lucid/universe | source
<soren> robos: Yes.
<Pici> Seems like it.
<Pici> yep.
<robos> yikes. Even 11.04 is 1.4.1. The latest version out is 3.1
<rbasak> utlemming: I'm pretty sure that message is from the qemu's beagle bootloader emulator rather than the kernel
<utlemming> rbasak: interesting
<rbasak> utlemming: going the beagle emulation route seems a bit obtuse for what we need
<rbasak> utlemming: http://code.google.com/p/qemu-omap3/wiki/UserManual
<rbasak> (have to generate a separate nand image maybe, etc)
<rbasak> utlemming: what do you think of using a different machine type (and thus kernel) to save having to create an additional intermediate image?
<utlemming> rbasak: "bit obtuse" would be overly generous
<rbasak> utlemming: (given that qemu already supports not having to go down the bootloader route for arm in versatileab)
<rbasak> I mean we're not trying to emulate beagle here, qemu is our target
<utlemming> rbasak: I'm absolutely fine with that -- but AFAIK, we don't have a kernel with the meta package that will work on versatileab
<rbasak> OK, I don't really know what would be involved in adding a different kernel
<hallyn> stgraber: ping for pushing libcgroup 0.37.1-1ubuntu6 for bug 838380 ?
<uvirtbot> Launchpad bug 838380 in libcgroup "non-root should not be able to create files in /sys/fs/cgroup" [High,In progress] https://launchpad.net/bugs/838380
<stgraber> hallyn: archive is still frozen for beta1
<jamespage> robos: solr is well behind at the moment - however work is going on to package lucene3 + solr3 (now part of the same source package) in Debian which should land next release (12.04)
<hallyn> stgraber: oh, i thought i heard a rumor that it was unfreezing today :)
<hallyn> drat
<hallyn> stgraber: are you on ubuntu-server m-l?
<hallyn> zul: do i understand right that there is nothing you need from the candidate libvirt 0.9.3-5 merge?  you just need an arm patch that can just as well go into the current package?
<robos> jamespage, when do you think the next LTS version will be out?
<zul> hallyn: it can go in the current package i just have to rediff it
<robos> Well, you probably can't answer that. :-)
<robos> But... I wonder if compiling solr and making it an ubuntu package will do the trick
<hallyn> zul: ok, we have enough crap to fix without the merge, i'm thinking of marking the FFE bug invalid
<zul> hallyn: ok cool with me
<hallyn> for that matter, Daviey was right to ask if there was anything in qemu 0.15 that we needed.  I'm not convinced.
<zul> ill get on it after im done here
<hallyn> great, thx.  i'll roll it in with the other two fixes i'm waiting to push
<zul> which fixes?
<hallyn> Daviey: pls do put out the call for testing for qemu 0.15 like you were going to, and we'll see if anyone says "i need this"
<hallyn> zul: for bugs 828211  and 832123
<uvirtbot> Launchpad bug 828211 in libvirt "libvirt lxc should not rely on 'ip' and 'ifconfig' inside container" [Medium,In progress] https://launchpad.net/bugs/828211
<uvirtbot> Launchpad bug 832123 in lxc "when booting oneiric cloud-images under libvirt, no pty is available" [High,In progress] https://launchpad.net/bugs/832123
<zul> hallyn: cool beans
<stgraber> hallyn: yeah, it should unfreeze once beta1 is out a bit later today. I'm not subscribed to ubuntu-server
<utlemming> rbasak: hang on -- it looks like Daviey might have put you on a wild goose chase
<utlemming> rbasak: I'm now seeing a linux-image-versatile (https://launchpad.net/ubuntu/+source/linux-meta)
<hallyn> stgraber: ok, i wanted to have a conversation about fixing the race with libcgroup startup.  i'll cc: you?
<jamespage> robos: should be april 2012
<rbasak> utlemming: OK, so can we build the image using that?
<rbasak> In fact to test I can just grab the binary image
<stgraber> hallyn: yep, just CC me
<hallyn> great, thanks
<robos> april 2012 for LTS? sweet!
<jamespage> robos: the packages should land in Debian in the next month or so - its just not good timing for oneiric as that would be very late in the cycle
<hallyn> LOL! of course there *is* a simpler solution to the libcgroup startup.  Make '/' the default cgroup.
<rbasak> <rbasak> utlemming: OK, so can we build the image using that?
<rbasak> <rbasak> In fact to test I can just grab the binary image
<rbasak> It doesn't seem to be in the latest build :-/
<jamespage> robos: oh - and solr skipped version 2.x altogether so its not that out-of-date :-)
<eagles0513875> hey guys i am having issues after following the postfix and dovecot wiki's to setting up the servers. now my issue it seems like i am having issues authenticating with the imap server
<utlemming> rbasak: humm....where are you looking for the kernel?
<eagles0513875> can anyone help me with debugging the issue as I am getting increasingly frustrated
<rbasak> utlemming: same page, https://launchpad.net/ubuntu/+source/linux-meta
<utlemming> rbasak: you're right
<eagles0513875> any ideas as to my issue ?
<eagles0513875> any dovecot imap authentication experts in here cuz i am at a total lost
<skaet> anyone know if Daviey's around?
<hallyn> skaet: he was...
<skaet> hallyn, thanks.
<eagles0513875> hey guys question
<eagles0513875> what should my permissions be set to for squirrelmail
<patdk-wk> both dovecot's and postfix's websites explain how to do it correctly
<patdk-wk> permissions? doesn't matter at all
<eagles0513875> patdk-lap: im following ubuntu squirrelmail documentation
<eagles0513875> well im having issues with it loading right
<patdk-wk> oh, the website won't even load?
<eagles0513875> ERROR: Could not complete request. Query: SELECT "INBOX" Reason Given: [SERVERBUG] Internal error occurred. Refer to server log for more information. [2011-09-01 17:16:39]
<eagles0513875> and
<eagles0513875> ERROR: Could not complete request. Query: LSUB "" "*" Reason Given: [NOPERM] No permission to read subscriptions
<cloakable> That sounds like an IMAP error, not a squirrelmail error
<eagles0513875> cloakable:  O_o
<eagles0513875> how so
<cloakable> what?
<eagles0513875> how is that an imap error
<cloakable> dovecor isn't giving squirrelmail permission to check folders
<cloakable> *dovecot
<eagles0513875> and how do i fix that
<cloakable> Have a look at the permissions on your mailfolders.
<eagles0513875> the permission is fine
<cloakable> Check your dovecot configuration to see if it's looking in the right place. Check the error log
<cloakable> try logging in using telnet
<eagles0513875> logging in via telnet works
<cloakable> hmmm.
<stgraber> hallyn: archive unfrozen. Looking at that upload now.
<hallyn> stgraber: great, thx.
<stgraber> hallyn: uploaded
<eagles0513875> cloakable:
<eagles0513875> problem solved
<eagles0513875> need to update the dovecot wiki
<jmarsden> eagles0513875: Go ahead and update it, wiki are editable :)
<eagles0513875> ya i will
<jmarsden> Ok, good.
<eagles0513875> already updated the dovecot wiki with an issue i discovered with encrypted home directories
<eagles0513875> now the problem im having it doesnt seem like my emails are coming through to my server
<aleuck> hello, i  have 2 HDs (sda and sdb) and sda is not listed on blkid
<aleuck> how can i format it to ext4 and mount?
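A minimal sketch of the usual sequence for aleuck's question, assuming /dev/sda is genuinely blank (partitioning and mkfs are destructive) and /mnt/data is the desired mount point:

```
$ sudo parted -s /dev/sda mklabel msdos mkpart primary ext4 1MiB 100%
$ sudo mkfs.ext4 /dev/sda1
$ sudo mkdir -p /mnt/data && sudo mount /dev/sda1 /mnt/data
```

Adding a matching line to /etc/fstab makes the mount persistent across reboots.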
<jibel> smoser, Hi, why did you set bug 1 for test case http://iso.qa.ubuntu.com/qatracker/result/6432/519 ? Is there a real bug or an infrastructure failure or ?
<uvirtbot> Launchpad bug 1 in ubuntu "Microsoft has a majority market share" [Critical,In progress] https://launchpad.net/bugs/1
<smoser> jibel, i updated it. there were 2 boot failures seen there, i dont have a bug for the second, but i changed it to the first.
<jibel> smoser, cool. thanks
<smoser> but i do find it annoying that i can't log a failure without a bug
<Daviey> RoAkSoAx: Have you been able to document ensemble for bare metal?
<RoAkSoAx> Daviey: i've notes here and there but nothing definitive just yet
<RoAkSoAx> Daviey: wanted to actually have things working on the test rig though I'm just waiting for a fix from cloud-init to get that working again
<RoAkSoAx> Daviey: but i will start writing a wikipage soon
<RoAkSoAx> Daviey: this weekend/next week
<smoser> SpamapS, around ?
<uvirtbot> New bug: #825127 in python-novaclient (main) "[MIR] python-novaclient" [Undecided,Incomplete] https://launchpad.net/bugs/825127
<hggdh> RoAkSoAx: please tell me when you have it -- so that I can build a test env Following The Standard
<RoAkSoAx> hggdh: coolio, will do
<RoAkSoAx> hggdh: I promise it will be ready for next week ;)
<hggdh> RoAkSoAx: heh. No need to promise ;-)
<RoAkSoAx> hggdh: hehe, better to promise than not to :P
<DanaG> Say, my tests over gigabit ethernet have given me only 40 megabytes per second on file transfers... is that normal?
<Daviey> RoAkSoAx: Okay, thanks
<DanaG> One end: Ubuntu 11.04, on HP Microserver with 5GB RAM and two Samsung F4 drives with ZFS mirroring.
<DanaG> The other end: HP EliteBook 8530w -- Intel 82567 ethernet, ICH9M-E, and an OCZ Vertex.
<uvirtbot> New bug: #838931 in backuppc (main) "package backuppc 3.1.0-9ubuntu1 failed to install/upgrade: le sous-processus script post-installation installé a retourné une erreur de sortie d'état 1" [Undecided,New] https://launchpad.net/bugs/838931
<patdk-wk> danag, and what is the test?
<DanaG> ah, the test case can be either simple file copy (samba), or Windows 7 backup (also samba).
<patdk-wk> those are horrible tests
<patdk-wk> 40MB/sec sounds right then
<DanaG> ah.
<DanaG> That may just be the answer, then.
<patdk-wk> using like ftp, around 60MB/s
<DanaG> What's most likely the bottleneck: network bandwidth, samba, or drive?
<patdk-wk> who knows with the laptop
<DanaG> I've benchmarked file copy from SSD to SSD, through same switch, at like 80 megabytes per second.
<patdk-wk> but tuning tcp windows makes a huge difference at gigabit speeds
<patdk-wk> I've benchmarked 110MB/sec over gigabit on my systems
<DanaG> By TCP window do you mean jumbo frames?
<DanaG> The Broadcom 5723 in the microserver doesn't do jumbo frames.  :(
<maswan> DanaG: tcp window has nothing to do with jumbo frames
<DanaG> ah, receive window size.  As in the sliding-window algorithm?
<DanaG> I've taken networks class, but forgot the precise name of the algorithm.
<maswan> yeah, for tcp. doesn't matter much for low-latency transfers though
<maswan> I'd test pure network performance and see if it is there, using iperf
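maswan's suggestion, sketched as a console transcript; the server address is hypothetical. iperf takes the disks and samba out of the picture and measures raw TCP throughput between the two hosts:

```
server$ iperf -s                      # listen on the default port (5001)
client$ iperf -c 10.52.18.1 -t 30     # run a 30-second TCP test against the server
```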
<eagles0513875> DanaG: how much ram is in the laptop. as well  what kinda hdd is in said laptop
<patdk-wk> for low latency I find larger windows hurt performance
<eagles0513875> and jumbo frames will only cause more issues
<DanaG> The 80 test was from EliteBook 8530w to a Fujitsu Pentium M system.
<patdk-wk> why would jumbo frames cause more issues?
<patdk-wk> jumbo frames works extreemly well, and no issues
<eagles0513875> DanaG: i know what your talkin bout i just took a compression class and studied that this past year
<eagles0513875> patdk-lap: wont you have massive amounts of fragmentation
<patdk-wk> why?
<patdk-wk> 1500 -> 9000? why would increasing it cause fragmentation?
<DanaG> EliteBook had 4 gigs of RAM and an Intel 82567, with OCZ Vertex 2.
<patdk-wk> why do you have fragmentation at all in the first place?
<eagles0513875> patdk-lap: would depend on the route taken; not all routes to the destination support jumbo frames
<patdk-wk> eagles0513875, like we said *DIRECT*
<DanaG> Target was a Pentium M + ICH6 AHCI + OCZ Vertex 1, with Broadcom NIC of some sort.
<DanaG> Now the Vertex 1 is back in the EliteBook, since the Vertex 2 died.  Twice.
<patdk-wk> and you still wouldn't have fragmentation due to pmtu over a long multihop path
<eagles0513875> DanaG: i think the 2 biggest issues would be the speed of the nic's as well as the rpm's of the disk drives
<eagles0513875> if they are SSD then the bottleneck would be the NIC
<patdk-wk> not all ssd's can sustain >100mbit
<patdk-wk> and rpm's don't matter for the disk
<patdk-wk> doing sequential read of the drive will give you it's max
<patdk-wk> normally between 80-120MB
<DanaG> Direct-attached benchmark of the Samsung was at least 80, I believe.
<DanaG> I'll bet ZFS is part of the bottleneck.
<patdk-wk> so you will never go >80MB/s
<patdk-wk> zfs will cause some latency, slowing things down
<patdk-wk> also depends on how everything is attached
<patdk-wk> does the nic and drives have their own pcie lanes?
<patdk-wk> do they share a southbridge? ....
<patdk-wk> if they are all connected over pci, then max would be 50MB/sec
<patdk-wk> half for drives, half for nic
<patdk-wk> same goes for your laptop
<patdk-wk> you could always try doing a transfer from /dev/zero to /dev/null
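One way to run the /dev/zero-to-/dev/null transfer patdk-wk describes is with netcat (traditional-netcat flags; the server address is hypothetical). dd on the sending side reports its throughput when it finishes, with no disk I/O on either end:

```
server$ nc -l -p 5001 > /dev/null
client$ dd if=/dev/zero bs=1M count=1024 | nc -q 1 10.52.18.1 5001
```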
<patdk-wk> if you get gigabit
<patdk-wk> then it's a harddrive + nic sharing pci most likely
<DanaG> It's on PCIe.  Not even one PCI legacy slot in the thing.
<eagles0513875> hardly anything has pci any more
<rbasak> utlemming: I've got your image booting with a different kernel image now, but it seems to require an initrd?
<utlemming> which kernel image are you using?
<rbasak> utlemming: fedora's :-)
<rbasak> (for the moment)
<DanaG> The southbridge is an SB850, I think.
<utlemming> lol
<DanaG> Server kernel.
<DanaG> http://paste.flashrom.org/view.php?id=578
<DanaG> lspci.  somebody else's paste.
<rbasak> the initrd is going to be awkward if we're going to boot it like this
<rbasak> oh that's interesting, qemu does support an initrd
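The sort of invocation being converged on here, as a sketch only: the machine type, kernel/initrd filenames, and image name are all assumptions, and the versatile boards expose the disk over emulated SCSI, hence root=/dev/sda:

```
$ qemu-system-arm -M versatilepb -m 256 \
    -kernel vmlinuz-versatile -initrd initrd.img-versatile \
    -drive file=oneiric-server-arm.img,if=scsi \
    -append "root=/dev/sda console=ttyAMA0" -nographic
```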
<utlemming> rbasak: there is a linaro vexpress kernel that works, but networking is flaky
 * utlemming looks for URL for linaro kernel
<rbasak> utlemming: what are you doing with the initrd? I tried the one in your image but of course that's mismatched with the kernel now and it doesn't seem to work
<utlemming> rbasak: nothing, I would fully expect the in-image initrd to completely fail
<utlemming> it was generated in qemu-arm-static (i.e. chroot)
<uvirtbot> New bug: #838959 in rabbitmq-server (main) "rabbitmq-server startup_log should include a date/timestamp" [Undecided,New] https://launchpad.net/bugs/838959
<utlemming> rbasak: I just found something that might help -- from linaro https://wiki.linaro.org/Platform/DevPlatform/Ubuntu/ImageInstallation
<utlemming> rbasak: the problem with this is we _can't_ ship any of this in the default cloud images. We can only ship stuff from main or restricted
<DanaG> Once btrfs gets a fixing fsck, I'll probably switch.
<rbasak> utlemming: their linaro-media-create looks like it'll be able to help, then we could use our omap image
<utlemming> rbasak: that is the road that I just started walking down
<rbasak> utlemming: I was reluctant because it's ugly, more work and harder to debug, but if they've already done it...!
<utlemming> rbasak: agreed, its the reason why I didn't do it as well. I'm more likely to try and implement a toolchain that has sane instructions and good documentation
<rbasak> utlemming: I need to go. Do you want to see how far you get or would you like me to look at it further tomorrow morning?
<utlemming> rbasak: I'll beat my head against this for a bit. I generally am on around 14:00 UTC, we can sync up then if that works for you
<rbasak> utlemming: OK, speak to you tomorrow
<eagles0513875> any mail experts in here
<eagles0513875> need some urgent help
<axisys> mount.nfs keeps the uid/gid as it is shared by nfs server.. but mount.nfs4 adds some odd uid/gid.. like below
<axisys> http://pastebin.com/AZ4XGhqi
<axisys> line 14 (nfs) and line 29 (nfs4)
<axisys> any idea why?
<axisys> mount.nfs works.. but mount.nfs4 b0rkes it
<jj995> axisys: see https://help.ubuntu.com/community/SettingUpNFSHowTo, and the section on mapping ID names with /etc/idmapd.conf
<jj995> anybody know what version of ubuntu started defaulting NFS to version 4 instead of version 3?
<smoser> RoAkSoAx, adam_g bug 838968 is what is stopping cloud-init and apt from working sanely on mabolo
<uvirtbot> Launchpad bug 838968 in ifupdown "static-network-up event does not wait for interfaces to have an address" [Undecided,New] https://launchpad.net/bugs/838968
<axisys> jj995: looks like what you are showing is for nfs server..
<axisys> jj995: I am trying to do nfs client
<axisys> as you can see on line 14.. nfs keeps the uid/gid sane.. unlike in line 29 it does not with nfs4
<jj995> axisys: from the link I gave, if the client is mounting nfs4, "In order for the ID names to be automatically mapped, both the client and server require the /etc/idmapd.conf file to have the same contents with the correct domain names."
<axisys> jj995: oh ok
<elb0w`> How do I format a drive into XFS?
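elb0w`'s question in sketch form, assuming an existing partition /dev/sdb1 and mount point /mnt/data (both hypothetical; mkfs is destructive):

```
$ sudo apt-get install xfsprogs      # mkfs.xfs lives here, not in the base install
$ sudo mkfs.xfs /dev/sdb1
$ sudo mount /dev/sdb1 /mnt/data
```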
<GrueMaster> cjwatson: Ping.  Do you remember the name of the IPv6 test suite you mentioned at UDS?
<axisys> jj995: thanks, that was it!
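The fragment jj995 is pointing at; the Domain value is hypothetical and must be identical on client and server, with the idmapd service restarted on both afterwards:

```
# /etc/idmapd.conf (must match on both ends; hypothetical domain)
[General]
Domain = example.internal
```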
<Technicus> Hello . . . I have been trying for a few days to connect with a guest operating system ( Ubuntu Server Edition ) from the host operating system ( Ubuntu Desktop Edition ) via SSH.  I have tried port forwarding, changing the guest network device, and taking suggestions from people on the channels #VBox, and #OpenSSH.  Nothing I have done was successful. How do I establish this connection?
<lickalott> internal or external network?
<lickalott> have you edited the sshd files on the server at all?
<ScottK> Did you install openssh-server on the server?
<ppetraki> and of course, you've installed openssh-server on the guest, since it's not installed by default
<lickalott> https://help.ubuntu.com/10.04/serverguide/C/openssh-server.html
<smoser> well, ugly as all sin, but https://gist.github.com/1186970 would help us wait for IPV4 to come up (bug 838968)
<uvirtbot> Launchpad bug 838968 in ifupdown "static-network-up event does not wait for interfaces to have an address" [Undecided,New] https://launchpad.net/bugs/838968
<smoser> SpamapS, ^
<Technicus> lickalott, ScottK, ppetraki: I have tried Host-only adapter, Internal Network, Bridged Adapter, and have it currently set to NAT.  I have not edited the sshd files, openssh-server is installed.
<lickalott> can you ping the box?
<Technicus> How do I do that?
<Technicus> Ping the ip address or ping the name of the server?
<SpamapS> smoser: interesting
<SpamapS> smoser: I wonder if there isn't some netlink socket that we can listen to for these things, a-la network manager... IIRC, n-m doesn't poll..
<hallyn> stgraber: sorry to bother you again, but can't think of anyone else to ping on this - if you get a chance could you look at the proposed debdiff for bug 838410 ?
<uvirtbot> Launchpad bug 838410 in lxc "sshd template is broken" [Medium,Triaged] https://launchpad.net/bugs/838410
<smoser> yeah, there has to be some way. we're interested in network interface changes.
<hallyn> zul: will you be able to get that patch for libvirt to me today?
<zul> hallyn: ill do my best
<zul> do you have a bzr tree
<lickalott> either should suffice
<hallyn> zul:  i don't, but debdiff is at people.canonical.com/~serge/libvirt-0.9.2-4ubuntu12.debdiff
<hallyn> (i'd toss up a bzr tree, but since it's so out of sync i don't have the bandwidth right now, don't ask)
<lickalott> if you're Nat'd correctly you should "hop" all the way through to the server.  if you can't ping it successfully then i would look at your routes and/or NAT statements then re-engage
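With the guest on NAT, as Technicus has it set now, the host cannot reach guest port 22 directly; a VirtualBox port-forwarding rule is the usual bridge. The VM name and host port are hypothetical, and modifyvm wants the VM powered off:

```
$ VBoxManage modifyvm "ubuntu-server" --natpf1 "ssh,tcp,,2222,,22"
$ ssh -p 2222 user@127.0.0.1     # host port 2222 now forwards to guest port 22
```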
<stgraber> hallyn: looking
<hallyn> stgraber: thx.  no one really cares (obviously) about the sshd template, but it's something ppl often try for fun...
<stgraber> hallyn: looks good, uploaded
<stgraber> oh, one note actually :)
<stgraber> hallyn:  -- Ubuntu <ubuntu@server-639.localdomain>  Thu, 01 Sep 2011 16:01:31 +0000
<hallyn> doh
<hallyn> is that in quilt file?
<stgraber> nope, debian/changelog
<stgraber> replaced with your @ubuntu.com
<hallyn> stgraber: great, thanks
<hallyn> i'll send the template patch upstream of course
<stgraber> good thing lintian detected it ;) I didn't see it by reading through the diff
<hallyn> at some point...
<hallyn> hm, guess i only built it with fakeroot debian/rules binary
<hallyn> i need to get my sbuild box up
<zul> smb: xen fix has been uploaded
<zul> hallyn: people.canonical.com/~chucks/libvirt-0.9.2-4ubuntu12.debdiff
<hallyn> zul: thx, i'll test a bit and push
<zul> hallyn: ok ask if you need someone to upload
<zul> its already signed locally ;)
<elb0w`> Im trying to run mysql server using a different datadirectory
<elb0w`> and its not liking it
<elb0w`> when I try to start the service it just hangs
<elb0w`> fuckin apparmor isnt it
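elb0w`'s guess is a common one: the shipped AppArmor profile only allows mysqld to touch /var/lib/mysql. A sketch of the usual fix with a hypothetical new datadir of /srv/mysql, added to the local override (if the shipped profile includes one; otherwise edit the profile directly):

```
# /etc/apparmor.d/local/usr.sbin.mysqld (hypothetical datadir)
/srv/mysql/ r,
/srv/mysql/** rwk,
```

Then reload the profile with `sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld` and restart mysql.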
<hallyn> zul: gah, bad build.  a few more minutes, and i'll ask you to push your built version :)
<hallyn> (bad build = my fault, not the package)
<hallyn> zul: success - pls push :)
<hallyn> now on to this libcgroup crap.
<zul> hallyn: uploaded
<hallyn> zul: rockin'!  thx.
<ChmEarl> how to downgrade a minor version of python2.7?
<ChmEarl> nm - downgraded the 3 files from apt-cache
<lickalott> ChmEarl i would downgrade the file from the apt-cache dir
<lickalott> :p
<ChmEarl> lickalott, first time I ever did this
<astrostl> anyone know about an httpd update for https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-3192 ?
<uvirtbot> astrostl: The byterange filter in the Apache HTTP Server 1.3.x, 2.0.x through 2.0.64, and 2.2.x through 2.2.19 allows remote attackers to cause a denial of service (memory and CPU consumption) via a Range header that expresses multiple overlapping ranges, as exploited in the wild in August 2011, a different vulnerability than CVE-2007-0086. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-3192)
<astrostl> red hat has issued one
<jdstrand> astrostl: it was just published
<astrostl> for lucid?
<jdstrand> for hardy-natty
<astrostl> so, not for the *LONG-TERM SUPPORT SERVER DISTRIBUTION...*
<jdstrand> hardy through natty
<astrostl> i seriously don't get ubuntu sometimes
<jdstrand> hardy, lucid, maverick, natty
<Pici> astrostl: read again
<astrostl> ooops :)
<astrostl> not showing on an apt-get update here
<astrostl> i'm constantly screwed by bugfixes that are only applied to new releases, pardon the outburst!
<jdstrand> it will be available in approximately 12 minutes directly from LP, about 45 from security.ubuntu.com
<astrostl> tyvm
<mmncs_> Hi everybody and first thanks for your time. I have a VPS 512 MB RAM running Ubuntu 11.04, MySQL and NGINX standard conf, but after 24 hours it is slow as hell, almost unreachable which makes it worthless. Any help on this issues would be most appreciated, here is the output from top: http://pastebin.com/eziWYrK8
<Daviey> Who wants to have a go at testing ubuntu-server beta1 candidate, last minute respin? :)
<TheEvilPhoenix> Daviey:  sure, if it runs in a VM
<hggdh> done
<ignarps> I have an upstart process stuck in stop/waiting
<ignarps> When I try to run start <service> it tells me that.  How can I reset this stop/waiting
<Daviey> TheEvilPhoenix: sounds good to me!
<Daviey> TheEvilPhoenix: automated test cases have been achieved, but hand testing is helpful - http://cdimage.ubuntu.com/ubuntu-server/daily/current/oneiric-server-i386.iso
<hggdh> and http://cdimage.ubuntu.com/ubuntu-server/daily/current/oneiric-server-amd64.iso
<hggdh> plus http://iso.qa.ubuntu.com/qatracker/build/ubuntuserver/notcompleted with what is missing
#ubuntu-server 2011-09-02
<zul> SpamapS: Red Hat 8.0? talk about old school ;)
<uvirtbot> New bug: #834172 in orchestra (universe) "dnsmasq does not include domain name in hostnames" [Low,Fix committed] https://launchpad.net/bugs/834172
<TheEvilPhoenix> Daviey:  if you're still around, i initiated the download.  Had to change location, hence the delay.  The image is being downloaded as we speak, i'll load it up into VBox shortly.
<TheEvilPhoenix> Daviey:  want anything specific from the image tested?  ;P
<uvirtbot> New bug: #838968 in ifupdown (main) "static-network-up event does not wait for interfaces to have an address" [High,Triaged] https://launchpad.net/bugs/838968
<maxb> Can anyone recommend which software included in Ubuntu they prefer for managing IPsec VPNs? (racoon? isakmpd? openswan? strongswan?)
<twb> I'm still using openvpn (i.e. not ipsec) because I'm lazy.
<maxb> openvpn is very nice and manageable, but sometimes other people want you to talk IPsec to them :-/
<twb> Nod
<twb> For that I use vpnc ;-)
<twb> (That cisco junk is actually gre/ipsec or l2tp/ipsec or something, under the hood.)
<DanaG> oh yeah, no openvpn on iphone OS.
<DanaG> I've set up openswan... took a whole lot of fiddling.
<pleia2> I've used openswan but strongswan has much better documentation and I'd probably use that if I was going to do it again
<twb> DanaG: yeah, well, no freedom on the iphone either, so Captain Fail for that device
<DanaG> I'm looking for something to replace my iPod Touch with, that won't require a data plan.
<DanaG> I'm thinking unlocked Nexus Prime once it's out, but that's expensive.
<DanaG> Is Ubuntu Mobile even still relevant?
<DanaG> I haven't heard about it in a while.
<DanaG> Oh yeah, so now I have an ipmi card in my microserver, and got ipmi_si to load.  Now, what do I do with it?
<jmarsden> Hmmm, you should be able to do all sorts of remote monitoring and remote control stuff, but I don't know what the open source IPMI client tools are called, never mind whether they work :)   freeipmi-tools package might be worth a look?
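Besides freeipmi-tools, ipmitool (in the `ipmitool` package) is another common open-source client. A hypothetical first session against the local interface, assuming the ipmi_si and ipmi_devintf modules are loaded (all of these are real ipmitool subcommands, but the output obviously depends on the BMC):

```shell
sudo modprobe ipmi_devintf         # exposes /dev/ipmi0 for the userspace tools
sudo ipmitool sdr list             # sensor readings: temps, fans, voltages
sudo ipmitool sel list             # system event log
sudo ipmitool chassis status       # power state, last restart cause
sudo ipmitool lan print 1          # BMC network config, if a channel exists
```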
<DanaG> Too bad HP failed to hook up the UART in the thing.
<DanaG> That is, the IPMI chip supports SOL, but the motherboard doesn't offer it to the OS.
<DanaG> Oh, and the ACPI tables declare the kcs at the wrong base address.  Had to override that.
<twb> Isn't "ubuntu mobile" just the ubuntu normal desktop now?
<jmarsden> Oh, that's a bit sad... I have worked on some 1U servers with a recent Intel motherboard (1200BTS and E3-1230 CPU, or close to that) that comes with a little extra IPMI addon, separate management NIC, etc.  But in the Windows world, not the Linux one, unfortunately :)  They do what they say they do... remote power on/power off, access to BIOS setup screens, etc. etc.
<DanaG> Yeah, but vKVM is java only.
<DanaG> And it has a rather hideous filename that confuses many browsers.
<DanaG> kvmViewer.jnlp(blahblahthere'sadateinheresomewhere)
<DanaG> Or rather, there's even an apostrophe somewhere.
<jmarsden> Sounds like a nice way to test browsers :)
<DanaG> Do any of you guys have contacts with HP?
<DanaG> The thing also locks the non-HDD SATA ports to legacy IDE mode.  So, you can't use AHCI on them!
<DanaG> And they don't mark the ahci ports hot-swappable even though they basically are.
<DanaG> There's a modded bios that fixes that, at least.
<twb> I wish theme=dark was the default in d-i
<uvirtbot> New bug: #839263 in nagios3 (main) "package nagios3-common 3.2.3-1ubuntu1.2 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/839263
<twb> I have a remote host, and a monkey has plugged a USB HDD into it.
<twb> I suspect he's plugged it into a USB1 port -- how can I test this?
<lickalott> mount /dev/sdb (or the like) to a folder and see whats inside
<twb> I mean how can I test USB1 vs. USB2
<jmarsden> twb: doesn't    sudo lsusb    say whether a hub is a USB 1.1 or a USB 2 hub?
<lickalott> Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
<lickalott> Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
<lickalott> Bus 005 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser
<lickalott> Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
<lickalott> Bus 004 Device 002: ID 046d:c01b Logitech, Inc. MX310 Optical Mouse
<lickalott> Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
<lickalott> Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
<lickalott> Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
<lickalott> Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
<lickalott> output of lsusb
<lickalott> good call jmarsden.  i obviously didn't understand the question
<twb> Mine's dumber than that
<lickalott> ?
<jmarsden> twb: There should be some lsusb option that will convey the info to you... lsusb -t maybe?  Or even the verbosity of   sudo lsusb -v -v
<twb> Sorry, afk w/boss
<twb> http://paste.debian.net/128140/ and http://paste.debian.net/128141/
<twb> It's a hardy box apparently
<twb> That doesn't mention which HCI is used
<jmarsden> twb: Do a full lsusb -vv  and look for stuff about "power" -- USB 1.1 does not do power switching and 2.0 does, and in Lucid at least the full output tells you about that, per hub.
<jmarsden> But I wonder if the Hardy lsusb is less informative?
<twb> OK, 005:001 is EHCI
<twb> UHCI is the USB 1.x one; EHCI is the 2.0 one
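The root-hub IDs visible in lickalott's paste are enough to script this check. A small sketch: the Linux kernel's virtual root hubs report product ID 1d6b:0001 for UHCI/OHCI (USB 1.x) controllers and 1d6b:0002 for EHCI (USB 2.0), so grepping for those classifies each bus.

```shell
# Classify each USB bus from lsusb output by its root-hub product ID.
# usage: lsusb | classify_usb
classify_usb() {
    awk '/ID 1d6b:0001/ { print "Bus", $2, "is USB 1.x" }
         /ID 1d6b:0002/ { print "Bus", $2, "is USB 2.0" }'
}
```

A device plugged into a "USB 1.x" bus is stuck at 12 Mbit/s regardless of what the device itself supports.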
<philipballew> how would i make a vpn server
<twb> philipballew: 1) get a server; 2) set up VPN.
<jmarsden> twb: So... your monkey managed to find a USB 2.0 port, it would seem :)
<twb> jmarsden: yep, and I'm just impatient
<lickalott> damn monkey
<twb> (I'm copying an ubuntu mirror to it because the site is only connected to me via 56k, and not to the internet at all :-/)
<jmarsden> twb: Persuade the boss to replace it with hardware that does USB 3.0 and the impatience might go down slightly?
<philipballew> twb, do i need a special type of router?
<lickalott> no
<twb> philipballew: if the VPN terminats on the server, no
<jmarsden> philipballew: If you want the *router* to do the VPN work, it needs router software that knows about VPNs.  But you can just pass that through to the server and run VPN software on the server, which is usually more flexible if you have an Ubuntu server around :)
<philipballew> i have a ubuntu server chillin a few thousand miles from me. can i do all this from ssh? twb
<philipballew> jmarsden,
<twb> If you can get into it, sure
<twb> Of course if you fuck up, you won't be able to get in to fix it without a LOM
<philipballew> whats lom mean?
<philipballew> but even if i mess up i can still ssh into it probably
<jmarsden> LOM == Lights Out Management, a way good server hardware has of being controlled remotely even when bad things are done to it...
<jmarsden> Basically a little microcontroller that runs even when the main machine is powered down/hung/etc.
<jmarsden> The name comes from the idea that you no longer need lights on in the server room... you'll never need to be in there :)
<philipballew> yeah. I just want a vpn to avoid my college always watching everything i do
<philipballew> its annoying
<jmarsden> Just ssh to the remote server... they can't see what you type over SSH... so you can do whatever commands you want to do on the remote server, unobserved.
<jmarsden> No need for a VPN.
<philipballew> can I browse the web though jmarsden
<jmarsden> Well, sure.  You can run a text mode browser on the remote server :)  or, you can put a browser on the remote server and do ssh X forwarding to see the screen from it on your local workstation.
<philipballew> how fast would that be?
<jmarsden> It depends what you are doing.  What makes you think a VPN would be faster?
<jmarsden> If you are new to this stuff, X over SSH is a lot simpler to try out than setting up a reall VPN between the two sites, I would think.
<philipballew> jmarsden, yeah. im fairly good with networking. x over ssh might work, but i dont wanna have x on my server and have more things and processes on it that might have problems
<jmarsden> All you need on the server is the client parts of X, the X server is local to you on your workstation.  Unless you have a VERY weedy server, there shouldn't be an issue there.
<philipballew> hum. I might look at how hard a vpn is to set up first
<philipballew> !vpn
<ubottu> For more information on vpn please refer to https://wiki.ubuntu.com/VPN
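For the "browse the web without the college watching" use case, a SOCKS tunnel over the SSH access philipballew already has is usually simpler than either X forwarding or a full VPN. A sketch, with made-up host and user names:

```shell
# ~/.ssh/config (hypothetical names)
Host tunnel
    HostName server.example.com   # the remote Ubuntu server
    User philip
    DynamicForward 1080           # local SOCKS5 proxy on port 1080
```

Then `ssh -N tunnel` holds the tunnel open, and the browser is pointed at SOCKS5 proxy localhost:1080; web traffic leaves from the remote server, and the local network sees only encrypted SSH.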
<eagles0513875> hi guys i need some help from an email expert
<eagles0513875> i have no idea where to begin looking to debug an issue with my emails not being delivered to my maildir but to /var/mail which i dont want
<jmarsden> eagles0513875: That's just a matter of configuring the MTA to deliver it the way you *do* want :)
<eagles0513875> jmarsden: i have checked both postfix and dovecot configs 3x over and made sure they were set to the way i had them before
<eagles0513875> which they are
<eagles0513875> yet
<eagles0513875> even though they are exactly like the old configs it still not delivering to the right location
<jmarsden> You diffed the two sets of config files and they are 100% identical -- no output from diff at all?
<jmarsden> If so, and you restarted all the relevant daemons, whatever else changed in your machine between "old" and "new" is responsible...
<jmarsden> But I am being asked to go AFK for a while, bad timing...
<eagles0513875> jmarsden: well i backed up the old configs and printed them out and went line by line as i did an upgrade to my server so this is a clean install
<eagles0513875> any other email experts in the house here
<jmarsden> That's ... an unusual approach.  Restore the old files to some temporary location, diff -ur the two trees of config files!   No need to waste paper and then read line by line :)
<eagles0513875> good point will do that
<eagles0513875> :D
<greppy> eagles0513875: pastebin your /etc/postfix/main.cf and /etc/dovecot/dovecot.conf?
<eagles0513875> give me a sec greppy
<eagles0513875> http://pastebin.com/SnuqSBMV (postfix) http://pastebin.com/1gsjYQAH (dovecot)
<greppy> eagles0513875: line 230 of dovecot, uncomment mail_location
<eagles0513875> if you look a few lines above it i have line 224 uncommented in regards to mail_location
<greppy> ah, missed that :) you may need to remove the leading whitespace, I know dovecot has had issues with that in the past in various places
<eagles0513875> you mean move it to the beginning of the line
<eagles0513875> greppy: sent myself a test email and still nothing :(
<eagles0513875> at least nothing in the inbox in squirrelmail showing up there
<eagles0513875> should i do a diff on the old and new configs and see what that turns up
<greppy> yeah
<eagles0513875> ok
<eagles0513875> sigh this has been driving me crazy for over 24 hrs lol
<greppy> the other thing to look at is the mailbox_command in your postfix config
<eagles0513875> that is set for using spamassassin via procmail
<greppy> using dovecot deliver instead of procmail may be the missing piece
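The settings under discussion, sketched as what a Maildir setup typically needs (values are assumptions, not taken from the pastebins):

```shell
# /etc/postfix/main.cf -- deliver into ~/Maildir/ (trailing slash = maildir format)
home_mailbox = Maildir/
# NB: mailbox_command overrides home_mailbox. If procmail is set there,
# procmail's own rules pick the destination, so ~/.procmailrc must end in a
# maildir target, e.g.:  DEFAULT=$HOME/Maildir/   (trailing slash again)

# /etc/dovecot/dovecot.conf -- must point where the MTA actually delivers,
# and should start in column 0 (no leading whitespace)
mail_location = maildir:~/Maildir
```

If mailbox_command is set and ~/.procmailrc's default target still points at /var/mail-style mbox, mail lands exactly where eagles0513875 is seeing it.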
<eagles0513875> didnt have any issues with it before
<greppy> on a side note, you may want to look at using amavis for spam scanning since you can also tie clamav into it, instead of shoving it off using procmail.
<eagles0513875> greppy: oddly enough i have never gotten any spam
<greppy> heh
<eagles0513875> well let me get this issue solved first then i might migrate
 * greppy has had the same domain and email address for ~15 years.
<eagles0513875> do you have any good how to's on amavis
<greppy> http://www200.pair.com/mecham/spam/
<greppy> that's the way that I usually handle it, it's for debian, but with a minor amount of tweaking it can be done. :)
<eagles0513875> kool will have to give it a shot
<eagles0513875> greppy: i got something interesting to pick ur brain
<eagles0513875> regarding vhosts and vhost logging
<eagles0513875> if you want a nice puzzle to pick ur brain with this morning
<greppy> heh
<greppy> you can ask, I may or may not have an answer or time :)
<eagles0513875> for some reason 2 lines in a vhost cause apache to fail to start or restart
<eagles0513875> if i comment them out then things work fine
<eagles0513875> greppy: everything looks fine
<eagles0513875> woudl this be an issue with squirrelmail
<greppy> nope, squirrelmail talks to dovecot to find out where stuff is.
<greppy> it's agnostic
<greppy> what two lines?
<eagles0513875> greppy: im gonna try resetup squirrelmail cuz we changed the default way it was setup in apache
<eagles0513875> im wondering if that is the issue
<eagles0513875> emails are going through
<eagles0513875> to my server as nothing has bounced back
<eagles0513875> greppy: running a quick test with thunderbird to see that its not squirrelmail which isnt configured right
<eagles0513875> sigh
<eagles0513875> might have to purge dovecot and postfix and setup again from scratch
<greppy> what is procmail doing with the mail?
<greppy> I want to say procmail doesn't do Maildir, but could be wrong.
<eagles0513875> greppy: it was before i upgraded my server
<eagles0513875> all its doing is spam filtering
<eagles0513875> spamassassin seems to work with procmail
<uvirtbot> New bug: #839390 in apache2 (main) "Apache+Kerberos not working anymore since update today" [Undecided,New] https://launchpad.net/bugs/839390
<eagles0513875> greppy: gonna try copy my configs over the old ones
<iuytfr> hello
<iuytfr> when you create RAID 6 on 8 disks of 100 GB, what capacity do we get?
<twb> Wikipedia tells you how to calculate effective capacity
<twb> "The usable capacity of a RAID 6 array is (N - 2) × S_min, where N is the total number of drives in the array and S_min is the capacity of the smallest drive in the array."
<iuytfr> so the capacity in my case is 600 GB
<iuytfr> am i right ?
<twb> I think so
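The arithmetic checks out; a throwaway sketch of the formula quoted above:

```shell
# Usable RAID 6 capacity: (N - 2) * size of the smallest disk.
raid6_capacity() {   # usage: raid6_capacity <num_disks> <smallest_disk_size>
    echo $(( ($1 - 2) * $2 ))
}
raid6_capacity 8 100   # 8 disks of 100 GB -> 600
```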
<twb> Personally I would throw out that SCSI/SAS stuff and just get a two- or three-way RAID1 2TB SATA array
<eagles0513875> ugh this is frustrating
<eagles0513875> tbh twb wish my server was hot swappable
<eagles0513875> instead of simple swap
<twb> All SATA is hot-swappable :P
<eagles0513875> twb: the server has to have the sas backplane to support hot swapping
<twb> Pish tosh
<twb> Just open the case and unplug shit, I'm sure it'll be ok
<eagles0513875> lol
<eagles0513875> think i need to reformat my server again have a majorly screwedup dovecot+postfix setup
<eagles0513875> that i have no earthly idea where on earth i went wrong with
<iuytfr> well i have another question: when you create RAID 10 on 25 disks of 100 GB, what capacity do we get?
<twb> iuytfr: 10 would normally have an even number of disks
<_ruben> mdadm's "raid10" doesn't have that "limitation" tho
<_ruben> iirc
<twb> Yea, mdadm will let you do stupid things
<_ruben> yup :)
<twb> We have a bunch of -l1 -n3's for read-heavy workloads :P
<twb> Goddamn boss is not doing his job and convincing customers not to buy hardware raid
<twb> So I am also ending up having to babysit a few LSI junkpiles
<_ruben> nice
<twb> Most don't even have BBUs
<iuytfr> what is the capacity ? twb for raid10
<iuytfr> 25 disks of 100 GB
<twb> iuytfr: I'm not your mum.  Do your own homework.
<twb> iuytfr: https://secure.wikimedia.org/wikipedia/en/wiki/RAID
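For completeness, the RAID 10 case: assuming mdadm's default near-2 layout (two copies of every block, which is what lets mdadm accept an odd disk count), usable capacity is simply half the total:

```shell
# mdadm raid10 with 2 copies: usable capacity = N * S / 2,
# valid even for an odd number of disks under the near-2 layout.
raid10_capacity() {   # usage: raid10_capacity <num_disks> <disk_size>
    echo $(( $1 * $2 / 2 ))
}
raid10_capacity 25 100   # 25 disks of 100 GB -> 1250
```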
<iuytfr> catch your word
<iuytfr> be careful
<Daviey> jamespage: did you see we had a post-release panic last night?
<eagles0513875> twb: question
<eagles0513875> if i copy the configs over the default postfix and dovecot configurations
<jamespage> Daviey: nope - missed that one - what happened?
<eagles0513875> configs from my old setup which worked
<eagles0513875> i shoudl be able to have instant running email server right
<Daviey> jamespage: The 'released' and QA'd image was deleted :)
<Daviey> jamespage: So had to re-QA the next daily.
<jamespage> Daviey: doh! - how did that happen?
<Daviey> "oh joy"
<jamespage> yikes - I can imagine
<jamespage> I must have been in bed by then
<Daviey> jamespage: it was cleaned up by the build process cron.
<Daviey> jamespage: Yeah, thanks.. Glad to hear you got a good nights sleep. :)
<lynxman> soren: Congrats on your move to nebula ;)
<Daviey> jamespage: Hmm, that jenkins branch FTBFS for me :/
<jamespage> Daviey: grr - lemme check again
<jamespage> Daviey: what error do you get
<Daviey> jamespage: http://pb.daviey.com/EVG7/
<soren> lynxman: Wow. You're quick :)
<jamespage> Daviey: hrmm - that's the missing fonts issue
<Daviey> jamespage: let me update my chroot, that might be causing the issue.
<Daviey> :/
<nigelb> soren: Congrats!
<lynxman> soren: I live in the bleeding edge, it hurts sometimes :)
<soren> lynxman: Living under a rock isn't much better, let me tell you.
<nigelb> soren: You'll still continue to work on OpenStack at Nebula?
<soren> nigelb: Yup.
<nigelb> Excellent!
<soren> nigelb: Yup :)
<nigelb> And I assume we'll continue to see you at UDS?
<Daviey> ... then hangovers will continue for the rest of us.
<nigelb> I still giggle about soren and sbeattie and the lady with flowers
<jamespage> Daviey: I just tried another sbuild of a fresh copy of that branch and it builds OK for me....
<Daviey> nigelb: Ah, we told her they were together?
<soren> nigelb: Yeah, can't keep me away from UDS.
<soren> nigelb: Heh, yeah, I wish someone had filmed that. :)
<soren> Daviey: I was (for reasons that escape me) sitting on sbeattie's lap.
<nigelb> Daviey: soren was sitting in steve's lap (there was no space). The flower lady went behind me (I was right next to them) and circled around. Then she saw them.
<soren> Ah. No space. Good excuse. Thanks.
<nigelb> THe expression on her face made us all burst out laughing
<Daviey> nigelb: I was there.. i "helped" the confusion with her.
<nigelb> haha
<nigelb> Daviey: oh yeah
<nigelb> I have a photo of that
<nigelb> well, not of the lady's expression
<Daviey> jamespage: Odd, just made sure my chroot was up to date, and it still failed.
<jamespage> you are building -0ubuntu2?
<jamespage> just checking :-)
<Daviey> root@voodoo:~/jenkins-1.409.1# head -n1 debian/changelog
<Daviey> jenkins (1.409.1-0ubuntu2) oneiric; urgency=low
<Daviey> jamespage: I don't understand the error i am seeing, looks kinda odd that it would be resolved with a font inclusion.
<jamespage> Daviey: so testGraph generates graphics using Java - which include text which needs fonts to generate
<Daviey> jamespage: OK, i'll upload it if you are certain.. but you handle any breakage :).  I would like to understand why it is failing in pbuilder for me.
<jamespage> I would rather figure out why it builds for me and not for you first
<jamespage> Daviey: pls can you post a full build log - I'll see if there are any diffs
<Daviey> jamespage: Full failure - http://pb.daviey.com/RNTH/
<Daviey> hah
<jamespage> Daviey: is the pbuilder environment still available - the actual test log would be helpful too
<Daviey> jamespage: sure
<Daviey> jamespage: which file do you want?
<jamespage> it will sit in core/target/surefire-testreports/ - the file should reflect the name of the test
<eagles0513875> hey guys any squirrelmail experts in here
<Daviey> jamespage: http://bootie.daviey.com/~dave/temp-jenkins-fail-logs/
<Daviey> lynxman: urgh, that hd just failed again
<jamespage> Daviey: hmmm - http://bootie.daviey.com/~dave/temp-jenkins-fail-logs/hudson.model.LoadStatisticsTest.txt
<jamespage> different error - bah!
<lynxman> Daviey: oh oh :(
<lynxman> Daviey: happy Friday?
<Daviey> lynxman: something like that
<Daviey> jamespage: It was an X running?
<jamespage> Daviey: well maybe
<jamespage> when I use sbuild it running on my laptop - which does have a :0 display running
<jamespage> trying it in PPA now - might work
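One common workaround for test suites that render graphics during a headless package build (a guess at the failure mode here, not what the jenkins packaging actually did): run the build under a virtual framebuffer X server so AWT has a display and fonts to render with.

```shell
# xvfb-run (from the xvfb package) starts a throwaway X server,
# points DISPLAY at it, and runs the given command inside it:
xvfb-run -a dpkg-buildpackage -us -uc
# Where the code tolerates it, skipping X entirely is an alternative:
#   -Djava.awt.headless=true
```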
<Daviey> jamespage: cool
<Daviey> lynxman: I've added it back, once more - i am a bad man.
<lynxman> Daviey: you like to play RAID russian roulette :D
<Daviey> lynxman: I really don't want to visit the datacentre it is in. :/
<lynxman> Daviey: where it is? Docklands?
<jamespage> Daviey: will build in 2 hours - catchup later
<jamespage> https://launchpad.net/~james-page/+archive/junk/+build/2761003
<Daviey> jamespage: groovy
<uvirtbot> New bug: #839489 in mysql-5.1 (main) "package mysql-server-5.1 5.1.54-1ubuntu4 failed to install/upgrade: the script post-installation subprocess returned exit code 1" [Undecided,New] https://launchpad.net/bugs/839489
<uvirtbot> New bug: #839490 in nut (main) "NUT is not shutdown UPS" [Undecided,New] https://launchpad.net/bugs/839490
<eagles0513875> hey guys any email experts in here
<soren> Just ask your question.
<soren> a) No one will admit to being an expert in anything. They'll never be left alone.
<eagles0513875> for some reason squirrelmail isnt picking up my new emails in my Maildir
<soren> b) Chances are you don't need an expert.
<eagles0513875> yet i have been over the configs for dovecot and postfix time and time again and they are just as i had them setup before
<eagles0513875> i upgraded my server and it now has 10.04.3
<eagles0513875> prior i was on 10.04.2 and squirrelmail worked like a charm
<patdk-wk> royk?
<uvirtbot> New bug: #839509 in nut (main) "nut can not log to syslog when shutdown" [Undecided,New] https://launchpad.net/bugs/839509
<RoyK> patdk-wk: ?
<patdk-wk> attempting to try openindiana
<patdk-wk> your using yours strickly for backups?
<RoyK> patdk-wk: three machines are strictly for backup
<RoyK> one is a fileserver, mostly NFS, and another is yet a backup (zfs receive from the fileserver)
<patdk-wk> been attempting to get root over nfs, but it keeps being denied
<RoyK> yet another is a dedicated fileserver with striped mirrors (which is FAST!)
<patdk-wk> ya, I'm building a striped mirror zfs with 4 ssd's
<patdk-wk> should be fast
<RoyK> only SSDs?
<patdk-wk> na, 24 drives, 4 ssd
<RoyK> what sort of SSDs?
<patdk-wk> mlc
<RoyK>  /j #openindiana
<patdk-wk> I am
<Doonz> whats a recommended distro for a zfs based fileserver for the less experienced user
<Pici> Ubuntu!
<ppetraki> Doonz, sorry, no zfs support in Linux last time I checked, at least not kernel level
<Pici> Oh, well, nevermind then.  I just didn't know what else you'd expect to hear from an Ubuntu channel. Maybe ##linux would be a better place to ask.
<Pici> er.. no, that doesn't make sense either. nevermind me.
<ppetraki> Pici, you'll get the same answer, no distro supports it except for maybe through fuse, which sorta defeats the purpose
<Pici> ppetraki: Right, which is why I said that my suggestion didn't make sense ;)
<ppetraki> Pici, :
<ppetraki> Pici, :)
<ppetraki> Pici, do you know which features of ZFS are most important to you?
<Pici> ppetraki: you mean to ask Doonz
<ppetraki> Pici, hmm, need more coffee
<ppetraki> Doonz, do you know which features of ZFS are most important to you?
<ppetraki> Doonz, much of what it does is supported in a stacked manner e.g. MD, LVM + FS of choice
<Doonz> ppetraki: just looking for a simple fileserver os for 120Tb of redundant data
<Doonz> needs are iscsi nfs smb
<ppetraki> Doonz, oh, that's all :)
<Doonz> from what ive been reading seems like zfs does what i want natively
<Doonz> ppetraki: yeah this is a pure fileserver
<ppetraki> Doonz, you can use MD to create a software raid 6, and then serve that to LVM as a physical volume
<ppetraki> Doonz, then LVM can carve up volumes as you see fit to be served raw through iscsi or add an fs and serve them NFS or whatever else suits you
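ppetraki's stack, sketched end to end. Device names, sizes, and the volume-group name are all made up, and every command needs root:

```shell
# 1. software RAID 6 across eight data disks
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
# 2. hand the array to LVM as a physical volume
pvcreate /dev/md0
vgcreate storage /dev/md0
# 3. carve volumes: one with a filesystem for NFS/SMB exports...
lvcreate -L 2T -n exports storage
mkfs.ext4 /dev/storage/exports
# ...and one left raw, to be served as an iSCSI LUN (via tgt or similar)
lvcreate -L 1T -n lun0 storage
```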
<Doonz> hmm
<Doonz> now im conflicted
<Doonz> thanx ;)
<ppetraki> Doonz, ZFS's greatest feature is it basically squashes all these layers for you, and adds some data integrity smarts
<Doonz> yeah
<ppetraki> Doonz, but its nothing that hasn't been done before
<ppetraki> Doonz, and btrfs will soon be it's peer
<Doonz> ive been reading about that when is its supposed stable release
<ppetraki> Doonz, hey, options are good.
<ersi> Pici: There's only experimental FUSE support, and last time I checked it was slow as watching paint dry
<Doonz> too many too many
<Doonz> :)
<ersi> I meant Doonz
<Doonz> ok thanx im going back to the pdf land to keep reading
<ersi> Sorry, my eyes aren't with me this late in the afternoon
 * Pici makes note anyway
<ersi> Your best bet if someone tells you they WANT ZFS, is either FreeBSD or Solaris
 * ersi shrugs
<ppetraki> ersi, join the club :)
<ersi> The what club? ZFS club?
<ppetraki> ersi, the I need coffee because I'm chatting with the wrong person in irc club
<ersi> Haha
<Pici> The too-tired-to-use-irc properly club.
<ersi> Well, we've been changing a lot of stuff in our software and I've been trying to verify and bug test the changes
<ersi> and quite a SUPER IMPORTANT PART which.. we need ready by.. yesterday.. So, mind = slow ;D
<ersi> Ooh, time for release meeting \o/
<Doonz> ersi: yeah im installing opensolaris 11 rght now
<ppetraki> Doonz, if it works for you then more power to you. Just keep in mind that HW doesn't last forever and you might find when it's time to turn it over that the community supporting opensolaris is way behind in hardware enablement. Then you're invested in both a filesystem no GPL kernel will ever support *and* stuck in the past when it comes to acquiring new HW.
<Doonz> yeah
<Doonz> well the good part is that my old fs will still be the production one and the new fs will be in development, so im pretty much going to try everything out
<utlemming> ping rbasak
<rbasak> hello!
<rbasak> how'd it go?
<swiss_chris> hello - I have the desktop version of ubuntu but want to access my apache 2 web server remotely from another computer on the same home network (via ip address would be fine). Can anyone give me a hint as to where to start? Running nmap on my ip address indicates that port 80 is open. Typing <my_ip_address>:80 into the address bar of my browser gives me "Not Found...". What am I missing here...?
<smoser> SpamapS, when you arive https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/839595
<uvirtbot> Launchpad bug 839595 in upstart "failsafe.conf's 30 second time out is too low" [Undecided,New]
<utlemming> meh. I hit a bug with the Linaro toolchain that I have to work out -- it kept erroring out whilst copying files from the host into the chroot. The problem is definitely qemu, not the kernel or the image.
<smoser> utlemming, starting http://pad.ubuntu.com/server-p-cloud-brainstorm
<ncampion> smoser: is there a place where i can find the ami id's of the 'official' ubuntu amis, a tracker or some sort?
<smoser> ncampion, yes!
<smoser>  /query
<jamespage> Daviey: hmm - a long two hours - https://launchpad.net/~james-page/+archive/junk/+build/2761003
<smoser> http://ubuntu-smoser.blogspot.com/2011/07/how-to-find-right-ubuntu-ami-with-tools.html
<smoser> ncampion, ^
<linocisco> how to add normal user to give shutdown and install right ?
<smoser> and http://ubuntu-smoser.blogspot.com/2011/07/how-to-find-right-ubuntu-ami-on-ec2.html
<ncampion> smoser: thanks
<linocisco> smoser: that is not gonna help me
<SpamapS> smoser: ACK
<swiss_chris> if nmap says port 80 is open, could it be apache that is blocking remote access to my web server? if so, how to I allow it?
<Pici> swiss_chris: Are you nmapping localhost?  or from another machien?
<swiss_chris> i'm nmapping my ip address from my local machine.
<swiss_chris> Pici:i'm nmapping my ip address from my local machine.
<swiss_chris> I just discovered a little detail however. The site I'm trying to access remotely is currently being served through a virtualhost with a hostname different from "localhost".
<Daviey> jamespage: hmm, is that 2 hours metric time?
<jamespage> Daviey: 2 hours launchpad tim,e
<swiss_chris> So I guess I would need to set up some kind of similar virtualhost so that all external requests to my webserver are redirected to the desired directory?
<swiss_chris> Is this possible with virtualhost config?
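What swiss_chris describes is possible: Apache hands a request by bare IP (or any unmatched Host header) to the *first* VirtualHost it loaded, so either make the desired vhost first, or give it a wildcard alias. A hypothetical sketch:

```shell
# /etc/apache2/sites-available/mysite  (hypothetical path and names)
# ServerAlias * makes this vhost answer any hostname, including bare-IP requests.
<VirtualHost *:80>
    ServerName mysite.example
    ServerAlias *
    DocumentRoot /var/www/mysite
</VirtualHost>
```

Enable with `a2ensite mysite` and reload Apache; the "Not Found" in the question is consistent with the default vhost (not the intended one) answering the IP-based request.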
<Daviey> Ah, next Wednesday?
<Daviey> zul: Fancy talking rbasak through how to make a new upstream snapshot of python-novaclient (including the bug # it fixes)?
<Daviey> (which he fixes upstream incidently [kudos])
<zul> Daviey: sure....gimme a bit
<zul> rbasak: ping
<rbasak> zul: pong
<zul> rbasak: lemme know when you are free
<rbasak> zul: ready right now
<m4xx> has anyone installed ubuntu-server on a lenovo thinkserver using a raid 1
<lynxman> RoAkSoAx: ping
<SpamapS> m4xx: whats a thinkserver?
<SpamapS> sounds like a laptop with a cracked screen.. ;)
<zul> server that makes you think?
<m4xx> http://www.cdw.com/shop/products/Lenovo-ThinkServer-TD230-1029-Xeon-E5620-2.4-GHz/2155166.aspx
<uvirtbot> New bug: #839657 in rabbitmq-server (main) "Error message about parsing /etc/hosts with rabbitmqctl" [Undecided,New] https://launchpad.net/bugs/839657
<RoAkSoAx> lynxman pong dude where were u ;)
<lynxman> RoAkSoAx: I'm waiting for you in the car, with my jacket on :D
<lynxman> RoAkSoAx: check siteam ;)
<RoAkSoAx> lol
<RoAkSoAx> give me 15 mins
<rsr> hello, I am trying to create a nat with ubuntu server, I have added these rules to iptables
<rsr> iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
<rsr> iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
<rsr> and iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
<lynxman> RoAkSoAx: cool
<rsr> But nat refuses to work. I have tried pinging from the network interface ping -I eth1 www.google.com, it resolves the ip but gives me network unreachable. Can anyone help me diagnose the problem?
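The three rules quoted look right; the piece most often missing is IP forwarding, which is off by default, and `ping -I eth1` from the router itself mainly tests whether eth1 has an address and route rather than the NAT path. A sketch of the full setup, keeping the interface names from the question:

```shell
# Without forwarding enabled, the FORWARD rules never see any traffic.
sysctl -w net.ipv4.ip_forward=1     # persist via net.ipv4.ip_forward=1
                                    # in /etc/sysctl.conf
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
# Then test from a *client* on the eth1 network, with its default
# gateway pointed at this box, rather than from the router itself.
```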
<Ursinha> Daviey: hello?
<lynxman> smoser: congrats!
<hallyn> zul: uh, i guess we *do* have a reason for doing the libvirt merge - see last comment on https://bugs.launchpad.net/bugs/368962
<uvirtbot> Launchpad bug 368962 in libvirt "Can't reboot kvm virtual machines using virsh" [Wishlist,Triaged]
<zul> arrrrgh
<zul> hallyn: what was the problem with 0.9.3 it didnt recognize hvm or something?
<rbasak> utlemming: ping
<hallyn> zul: yeah, but that was on an ec2 instance, so may have been 'correct'
<hallyn> today, actually, finally, i may be able to test on actual kvm-capable hardware
<dxd828> Gday!
<Daviey> Hello Ursinha !
<uvirtbot> New bug: #839715 in rabbitmq-server (main) "rabbitmq-server segfaults on startup under lucid lxc container in oneiric" [Undecided,New] https://launchpad.net/bugs/839715
<hallyn> jdstrand: here i go, running test-libvirt.py on my main laptop.  hope it goes gently :)
<Daviey> sidnei: You managed to make nearly all of your kernel modules non-free! :)
<sidnei> Daviey, really? /me checks
<sidnei> i guess stallman would be proud of me
<Daviey> sidnei: bug 839715
<uvirtbot> Launchpad bug 839715 in rabbitmq-server "rabbitmq-server segfaults on startup under lucid lxc container in oneiric" [Undecided,New] https://launchpad.net/bugs/839715
<sidnei> how's that even possible? maybe a bug in apport?
<Daviey> sidnei: That is THE most non-free kernel i have ever seen.
<sidnei> Daviey, maybe open-vm-tools replaces all those modules?
<Daviey> sidnei: pass.
<Daviey> sidnei: Maybe you are just evil? :)
<sidnei> i could be :)
<sidnei> i mean, might as well be running windows once you're that far
<Daviey> sidnei: ask jamespage for tips on how to run wndows and make it look like linux.
<sidnei> hehe
<Daviey> rbasak: How did you get on with python-novalcient?
<jamespage> Daviey: ha ha ha
<zul> Daviey: its uploaded
<zul> well you start off with kde....and go from there
<Daviey> jamespage: sorry :)
<Daviey> zul: Great!
<Daviey> rbasak: process was ok?
<jdstrand> hallyn: that should in general be ok, but I run it in a vm
<jdstrand> and it does fine there (though obviously kvm isn't tested)
<zul> smoser: did you get that packaging change in?
<jdstrand> well, maybe not obviously, but I have intel here, so no nested kvm
<Daviey> rbasak: The email address you have as DEBEMAIL, might want to add that to your launchpad profile - so launchpad knows which packages you uploaded :)
<Daviey> zul: Are you doing a ova upload?
<Daviey> nova*
<hallyn> jdstrand: shouldn't we be close to having nested kvm on intel working?
<zul> Daviey:  yes
<smoser> i pushed merge proposal
<smoser> (untested)
<jdstrand> hallyn: I don't know-- that would be awfully cool though :)
<smoser> zul, https://code.launchpad.net/~smoser/nova/lp833530/+merge/73850
<Daviey> zul: Are you adding the 'don't use keystone' bug fix?
<zul> Daviey: yep
<Daviey> cool
<uvirtbot> New bug: #839764 in openssh (main) "package openssh-server 1:5.8p1-1ubuntu3 failed to install/upgrade: installed post-installation script subprocess returned error exit status 2" [Undecided,New] https://launchpad.net/bugs/839764
<koolhead17> hi all
<hallyn> zul: hey, would you care to sponsor the trivial fix for bug 827798 for me?
<uvirtbot> Launchpad bug 827798 in lxc "LXC works without warning regardless if cgroup namespaces are properly available" [Low,In progress] https://launchpad.net/bugs/827798
<uvirtbot> New bug: #839796 in nova (universe) "nova-manage complains about missing nova-compute.conf" [Undecided,New] https://launchpad.net/bugs/839796
<zul> hallyn: sure
<hallyn> great, thx
<zul> hallyn: done
<TheEvilPhoenix> Daviey:  just an FYI, tried that server install image you asked me to test... it decided to segfault for me whilst trying to get a list of packages to install.  not sure whether it's the image or my system... but still.
<smoser> Daviey, so where is the "Daviey cares about these bugs" list ?
<Daviey> smoser: the same list, shared at every meeting - http://status.qa.ubuntu.com/reports/ubuntu-server/release-bugs.html
<Daviey> TheEvilPhoenix: Oh golly!
<Daviey> TheEvilPhoenix: What sg faulted?
<Daviey> seg*
<smoser> Daviey, bug 827590, bug 827598,
<uvirtbot> Launchpad bug 827590 in nova "cloud-init does not mount ephemeral0 on /mnt in nova" [Medium,Confirmed] https://launchpad.net/bugs/827590
<uvirtbot> Launchpad bug 827598 in nova "ephemeral device does not have a filesystem" [Medium,In progress] https://launchpad.net/bugs/827598
<smoser> then, i consider bug 837102 and bug 836759 of moderate concern
<uvirtbot> Launchpad bug 837102 in nova "nova writes libvirt xml 'driver_type' based only on FLAGS.use_cow_images" [Low,Triaged] https://launchpad.net/bugs/837102
<uvirtbot> Launchpad bug 836759 in nova "nova does not grow qcow disk images correctly" [Medium,Triaged] https://launchpad.net/bugs/836759
<robbiew> adam_g: ping
<adam_g> robbiew: hey
<Daviey> smoser: Thanks!
<robbiew> adam_g: so have you registered for ODS?
<robbiew> http://summit.openstack.org
<adam_g> robbiew: the summit, no. saw it was full as of earlier this week, so figured i missed my chance. i can wait list, tho. i was talking about attending the conference
<Daviey> smoser: Are you likely going to drive 837102 ?
<TheEvilPhoenix> Daviey:  unsure, the segfault caused the installer to </disappear>, and "Segmentation Fault" occurred, prior to the VM halting
<robbiew> adam_g: ok
<TheEvilPhoenix> Daviey:  i had the same issue with a Debian unstable image a few days ago, not sure whether its the VM system or not
<smoser> bug 837102
<uvirtbot> Launchpad bug 837102 in nova "nova writes libvirt xml 'driver_type' based only on FLAGS.use_cow_images" [Low,Triaged] https://launchpad.net/bugs/837102
<robbiew> we have one free sponsor pass left, so should be good to go
<adam_g> robbiew: ok, so shall i put my name down? i assume that pass will get freed up if my talk is accepted?
<robbiew> adam_g: make sure to arrange travel and hotel with Grant
<robbiew> adam_g: that's a good question
<TheEvilPhoenix> Daviey:  for the next hour or so, or until i surface again, send msgs to me via privmsg
<TheEvilPhoenix> my ZNC will log em
<robbiew> med_out: zul: do speakers get free passes?
 * robbiew assumes so
<TheEvilPhoenix> </shifting location, out of contact with internet for brief period>
<zul> i think so
<med_out> robbiew, no idea.
<med_out> would make sense though
<med_out> I suspect most of the speakers are also at the Summit.
<adam_g> ugh. having 2 24" monitors doesn't make it easier to multitask. now im trying to do 16 things at once instead of 8
<med_out> heh.
<robbiew> adam_g: heh...I'll update the wiki...claim the free pass...I've locked it down anyway
<robbiew> no more additions
<adam_g> robbiew: 10-4, thanks
<Daviey> TheEvilPhoenix: Thanks!
<zul> but but my cat would like to come
<robbiew> ttx: any idea on when folks waitlisted will hear back?
<TheEvilPhoenix> Daviey:  i can't tell where it segfaulted at all, because i never got past the image... but the fact it also happened with an unrelated ISO as well... makes me think it was my VM
<leftyfb> Does anyone know if the apache in 8.04 will be updated with the latest patch for: http://mail-archives.apache.org/mod_mbox/httpd-announce/201108.mbox/%3C20110826103531.998348F82@minotaur.apache.org%3E
<smoser> Daviey, what do you think about bug 810019
<uvirtbot> Launchpad bug 810019 in distribute "UserWarning printed on import pkg_resources'" [Medium,Confirmed] https://launchpad.net/bugs/810019
<smoser> it is more than a little bit annoying
<ChmEarl> today I update onerirc 11.10 and pygrub (xen 4.1.1) is now broken
<ChmEarl> oneiric
<jmarsden> ChmEarl: ask in #ubuntu+1
<m4xx> i've got a new box with ich10r i'm trying to install with raid 1. i've set up the raid in the bios; when i get to "detect disks" on first boot it says it detects an ata raid and asks me if i wish to enable it. if i hit yes, nothing is listed in the partition manager; if i "go back" and re-try the detect disks section i get presented with a list of drivers. anyone care to point me in the right direction?
<uvirtbot> New bug: #828366 in virt-viewer (main) "Caps lock state of VM (Windows XP using KVM) gets reversed" [Low,New] https://launchpad.net/bugs/828366
<SpamapS> wtf, we just had asterisk added to our triage list?
<uvirtbot> New bug: #495174 in asterisk (universe) "pbx_lua unable to load modules" [Low,Incomplete] https://launchpad.net/bugs/495174
<uvirtbot> New bug: #517144 in asterisk (universe) "Asterisk 1.6.2.0~rc2-0ubuntu1.2  libpthread-2.10.1.so" [Medium,Confirmed] https://launchpad.net/bugs/517144
<uvirtbot> New bug: #567483 in asterisk (universe) "installing with "aptitude install asterisk" gives me a report about violated dependencies (libc6-i686)" [Undecided,Invalid] https://launchpad.net/bugs/567483
<uvirtbot> New bug: #578613 in asterisk "asterisk seg faults in masq_park_call" [High,Fix released] https://launchpad.net/bugs/578613
<uvirtbot> New bug: #665347 in asterisk (universe) "Asterisk crashes in res_odbc" [Medium,New] https://launchpad.net/bugs/665347
<uvirtbot> New bug: #685389 in asterisk (universe) "package asterisk 1:1.6.2.7-1ubuntu1 failed to install/upgrade: installed post-installation script subprocess returned error exit code 1" [Undecided,Invalid] https://launchpad.net/bugs/685389
<Doonz> So does zfs handle the "Green" drives ok or does it have the same problems as a hardware raid card
<blackowl> hey guys um i own a nas and a server i would like to help with testing releases and data storage
<uvirtbot> New bug: #839980 in mysql-5.1 (main) "package mysql-server 5.1.54-1ubuntu4 failed to install/upgrade: dependency problems - left unconfigured" [Undecided,New] https://launchpad.net/bugs/839980
#ubuntu-server 2011-09-03
<mrmadmoneyman> Can anyone answer a question for me about WAN ingress policing with TC?
<mrmadmoneyman> (I have been reading the advanced linux router guide)
<pmatulis> !ask | mrmadmoneyman
<ubottu> mrmadmoneyman: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-)
<uvirtbot> New bug: #840099 in backuppc (main) "package backuppc 3.2.0-3ubuntu4 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/840099
<uvirtbot> New bug: #840175 in apache2 (main) "apache2-mpm-prefork depend is wrong" [Undecided,New] https://launchpad.net/bugs/840175
<uvirtbot> New bug: #840182 in apache2 (main) "package apache2.2-common 2.2.17-1ubuntu1 failed to install/upgrade: corrupted filesystem tarfile - corrupted package archive" [Undecided,New] https://launchpad.net/bugs/840182
<uvirtbot> New bug: #839773 in libvirt (main) "virsh crashed with SIGABRT in raise()" [Medium,New] https://launchpad.net/bugs/839773
<pr0zoid> How can i remotely image my ubuntu-server?  I want to take a full backup of my remote server (all i have is command line interface).
<savoy9020> How do you make ubuntu-server reprobe hardware.. My server vm's won't see network nics when I move them to another host..
<uvirtbot> New bug: #840386 in munin (main) "Update munin to bugfix release 1.4.6" [Wishlist,In progress] https://launchpad.net/bugs/840386
<uvirtbot> New bug: #840449 in munin (main) "Munin expects current and minimum value for fan sensors, but only current is provided by lm-sensors" [Medium,Triaged] https://launchpad.net/bugs/840449
<uvirtbot> New bug: #840474 in samba (main) "Can't connect via samba from Vbox XP Guest when restricting IP with hosts allow in smb.conf" [Undecided,New] https://launchpad.net/bugs/840474
<philipballew> how would i run chrome or some browser through ssh from my server to me?
<jmarsden> philipballew: ssh -X you@remotemachine.example.com somebrowser
<philipballew> jmarsden, ok. so do i only need to install the chromium browser or anything else?
<philipballew> philip@philip-Studio-1558:~$ ssh -x philip@philipserver1.no-ip.org chromium-browser
<philipballew> philip@philipserver1.no-ip.org's password:
<philipballew> (chromium-browser:23738): Gtk-WARNING **: cannot open display:
<jmarsden> philipballew: -X and -x are not the same thing.
<jmarsden> In fact they are opposites :)
<philipballew> haha, thats true. :) I should have seen that
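For the record, the distinction jmarsden points out, shown as a session transcript (hostnames are the ones from this exchange; note the browser has to be installed on the server, since that is where it actually runs -- only its display is forwarded):

```
$ ssh -x philip@philipserver1.no-ip.org chromium-browser   # -x DISABLES X11 forwarding
(chromium-browser): Gtk-WARNING **: cannot open display:
$ ssh -X philip@philipserver1.no-ip.org chromium-browser   # -X enables it; DISPLAY is forwarded
```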
<uvirtbot> New bug: #840513 in openldap (main) "package slapd 2.4.23-6ubuntu6 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/840513
<spacin> firestarter? useful or totally not worth the time
<BrixSat> hey im having a problem
<BrixSat>  WARNING: Can't change user id unless you are root.
<BrixSat> on this init.d script http://pastebin.com/L2h5FPgw
<jmarsden> BrixSat: well, the obvious question is... *are* you running it with root privileges?
<BrixSat> yes :)
<BrixSat> im as user root
<BrixSat> but i was expecting that to auto-happen at boot time
<jmarsden> BrixSat: The error is coming from start-stop-daemon but I can't see why, if it is really being run as root.
<BrixSat> :S
<BrixSat> jmarsden:  if i put it in rc.local will it run as root?
<jmarsden> It should, yes.
<BrixSat> how do i disable the icecast from init.d?
<BrixSat> update-rc.d -f apache2 remove got it :D
<jmarsden> apache2 and icecast are not the same thing :)
<uvirtbot> New bug: #840188 in cobbler (universe) "cobbler crashed with IOError in check_setup(): [Errno 5] Input/output error" [Undecided,New] https://launchpad.net/bugs/840188
<uvirtbot> New bug: #699793 in dahdi-linux (universe) "package dahdi-dkms 1:2.2.1 dfsg-1ubuntu2 failed to install/upgrade: dahdi kernel module failed to build" [Low,Incomplete] https://launchpad.net/bugs/699793
<uvirtbot> New bug: #695556 in asterisk (universe) "Asterisk "locks up" the system when an external process is called from the 'h' extension with a lower priority than Asterisk" [Medium,Fix released] https://launchpad.net/bugs/695556
<Chris89x> hello, is there a similar command for "chkconfig service_name off" for 10.04 lts?
<uvirtbot> New bug: #840641 in nova (universe) "nova ftbfs (Sphinx? Segmentation fault)" [High,New] https://launchpad.net/bugs/840641
<mASOUD_42> hi, i typed "pkill -KILL -u root" in an ssh session, and now i can't access it anymore. what should i do?
<mASOUD_42> hello?
<jmarsden> mASOUD_42: I think you should (a) not type in commands as root that you do not fully understand, and (b) reboot the machine you killed.
<mASOUD_42> ty. the problem was that, let's say, i first logged in at 1:08, and then at 1:25 my internet connection disconnected. when i reconnected and typed "w", it showed 2 roots logged in, i mean the last one was still logged in
<mASOUD_42> how could i log that out?
<jmarsden> So you should have figured out which one was your current session and which was not, and killed the ONE process you wanted to kill, not all processes owned by root!!
<mASOUD_42> no, they both were named root
<mASOUD_42> i still have the terminal open
<mASOUD_42> http://paste.ubuntu.com/681455/
<jmarsden> You are confused.  root is a user name, not a process ID.
<jmarsden> You need to kill off the bash shell on pts/0 (look at the output you pasted).
<mASOUD_42> how?
<jmarsden> ps ax | grep bash |grep pts/0      or any similar way to look for the PID.
<jmarsden> Then kill that PID.
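jmarsden's two-step (locate the PID, then kill only that PID) can be rehearsed safely on a throwaway process first; this sketch uses a background `sleep` as a stand-in for the stale login shell (on Linux the terminal column from `w` normally reads `pts/0`):

```shell
# Stand-in for the stale login shell: any long-running background process
sleep 300 &
stale=$!

# Locate it the way you would locate the stale bash, e.g.
#   ps ax | grep bash | grep pts/0
# Here we just confirm the stand-in PID exists:
ps -p "$stale" -o pid=,comm=

# Kill that ONE pid -- never every process the user owns
kill "$stale"
```

Contrast with `pkill -KILL -u root`, which signals every root-owned process, sshd included -- which is exactly what locked mASOUD_42 out.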
<mASOUD_42> ty
<jmarsden> You're welcome.
<mASOUD_42> so now i need to contact the support and ask them to restart the server? it was a vps
<jmarsden> I would think so, as far as I can see your pkill killed off all root-owned processes, which would effectively kill the server completely.
<mASOUD_42> :\
<mASOUD_42> i read the thing in here http://www.cyberciti.biz/faq/linux-logout-user-howto/
<jmarsden> mASOUD_42: You did not read it very carefully.  It says that is how to "logout other users" --
<jmarsden> It does not say it is safe to use for logging out root.
<mASOUD_42> so if we kill all processes owned by a user, then we log him/her off?
<jmarsden> Yes.  But if you do that for root, you also killed off all the daemons running as root, such as sshd ...
<mASOUD_42> yes i think now i understand what i just did.
<kalkin-> hi
<kalkin-> i set up postfix on my server as a satellite host
<kalkin-> but how can i set it up so that mails sent by root or any other user contain no-reply@foo.bar and not root@myhost.bar
<kalkin-> ?
#ubuntu-server 2011-09-04
<pmatulis> kalkin-: investigate 'postfix rewrite sender'
<kalkin-> pmatulis: thanks, thats exactly what i needed
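The mechanism pmatulis is pointing at is postfix's sender canonical mapping; a minimal sketch using the addresses from the question (the map path is conventional, not mandated):

```
# /etc/postfix/main.cf
sender_canonical_maps = hash:/etc/postfix/sender_canonical

# /etc/postfix/sender_canonical -- rewrite outgoing sender addresses
root@myhost.bar    no-reply@foo.bar
@myhost.bar        no-reply@foo.bar
```

After editing, run `postmap /etc/postfix/sender_canonical` and reload postfix so the hashed map is rebuilt and picked up.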
<Technicus> I have Ubuntu Server Edition installed on VirtualBox.  Guest Additions is installed; how do I change the resolution so that when I go full screen it will fill the screen?
<jmarsden> Technicus: Why do you need to?  Just ssh to the server VM from a GUI terminal of your choice :)
<Technicus> Ok . . . that is a preferred option.  I am having trouble with ssh.
<Technicus> I set the network adapter to NAT and Host Only Adapter, but with Host-Only Adapter, I can connect with SSH but there is no connection with the internet.  Someone suggested that I make two adapters, but I do not yet understand how to do that.
<jmarsden> Technicus: Set networking to Bridged, and bridge to your eth0 (or whatever the main NIC on your host machine is)
<jmarsden> You will need to shut down the VM, change the setting, and restart the VM.
<Technicus> jmarsden: Ok so I just did that.
<jmarsden> OK.  So now your VM has an IP address on your local (real, physical) subnet, right?  So ssh from a terminal emulator on the host PC to the IP of the VM.
<Technicus> Please explain how to find the answer to your question . . .
<jmarsden> What IP address does your BM now have?
<jmarsden> *VM.
<Technicus> 192.168.56.10/24
<jmarsden> OK, so from a terminal on the host, do   ssh you@192.168.56.10
<Technicus> ssh: connect to host 192.168.56.10 port 22: Connection timed out
<jmarsden> Did you install openssh-server in the VM ?
<jmarsden> If not, do so :)
<Technicus> It is installed.
<Technicus> I was able to connect to it with the host-only network adapter.
<jmarsden> Then either it is not running, or that is not really the IP of the VM, or you have a firewall on the VM...
<jmarsden> Yes, but host only is relatively useless for a server... it can't serve anything to anyone except the local host if you do that :)
<jmarsden> OK, on the VM, what does     sudo netstat -ntlp |grep :22   output ?
<Technicus> Just a moment . . . there was an error and I have to restart it.  It is slow to restart.
<jmarsden> It shouldn't be slow to restart... unless you don't have enough RAM for it?
<Technicus> How much ram do you suggest?
<jmarsden> Well, for a bare server doing nothing, 256MB is enough... but you really need that physical RAM free for it, so it doesn't swap
<Technicus> LISTEN
<Technicus> So It appears to be functional now.
<Technicus> I can at least SSH from the host to the guest now, to see if the internet connection is available.
<jmarsden> Looks like you are all set. ping 8.8.8.8 from the VM to check internet connectivity
<Technicus> It is working!
<Technicus> Thanks a lot!
<Technicus> Would it be difficult to setup for a static IP address?
<jmarsden> No, see the relevant part of the Ubuntu Server Guide.
<jmarsden> https://help.ubuntu.com/10.04/serverguide/C/network-configuration.html   and look for "Static IP Address Assignment"
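For releases of that era, the "Static IP Address Assignment" section comes down to an /etc/network/interfaces stanza along these lines (the addresses are examples only; use values valid on your own LAN):

```
# /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
```

Then restart networking (`sudo /etc/init.d/networking restart`) or reboot the VM for the new address to take effect.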
<Technicus> This is great . . . thanks lots!
<jmarsden> You're welcome.
<philipballew> anyone know a cheap domain registry site
<lickalott> you want to create a domain or DNS it?
<lickalott> philipballew
<lickalott> if you come back while i'm gone:  1and1.com for creation, dot.tk for DNS
<philipballew> lickalott, i want to make my wordpress link to www.philipballew.com
<philipballew> and i want to own that
<lickalott> you need both then.
<philipballew> how should I go about that
<lickalott> 1and1.com for domain space/name then a port 80 redirect from your wordpress to www.philipballew.com
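One way to sketch the redirect lickalott describes is an Apache virtual host on the WordPress box (the domain is the one from the conversation; treat the rest as an illustrative assumption about that setup):

```
<VirtualHost *:80>
    ServerName philipballew.com
    # send visitors of the bare domain to the canonical www host
    Redirect permanent / http://www.philipballew.com/
</VirtualHost>
```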
<ubun> im trying to install ubuntu 10.04 server onto a HP Proliant g360 g3. when i get to the partitioning part it says the disk is too small??? (btw im a noob)
<airtonix> ubun: if you simply typed a number in the partition size assuming it would be gigabytes then that is your problem
<airtonix> ubun: it's megabytes by default when you don't add "GB" to the end of the number
<bdmc> Folks,  I have a problem with an upgrade that I am working on.  The machine was running 9.04 and when we rebooted it for maintenance, the machine owner saw the message that said to run do-upgrade ( the upgrade to Natty ) and asked that that be done.  Unfortunately, we are now in a state where none of the recent kernels 2.6.38-x seem to run at all, and the original kernel, 2.6.28-14-server will at least boot, but there are some
<maxb> Your message was truncated at "... there are some"
 * maxb spends a good few seconds doing the mental maths to establish that 9.04 is jaunty
<bdmc> sorry. it went on to say that there are serious errors ( can't read /proc, etc ) so the machine is basically dead.  Any suggestions?
<maxb> I feel you've not fully explained the situation, because the ubuntu upgrader would never have gone directly from jaunty to natty
<bdmc> and yet it did -- there was only the one command.  However, the machine has been given updates continually, but never rebooted, so you're right, things have been updated, just running on the old kernel until now.
<bdmc> maxb: yes, we have tried downloading various kernels, including "linux-image-server" which said that it was installing, but I can't find it.
<maxb> I'm sorry, but at the current level of detail of your explanation, I really have no insight into what state your machine is in
<maxb> I can definitely say, however, that the Ubuntu *supported* upgrade processes are only from one release to the next immediately following it, and from one LTS to the next LTS
<maxb> I am extremely sceptical that do-release-upgrade would have violated this
<bdmc> Sorry.  I have been up for more than 24 hours, and may not be thinking as well as I should.  Obviously, the do-release-upgrade was misled if the system was in any other state.  I can't get into it at the moment ( I am on the other side of the continent from the machine. )
<bdmc> Could it have had its sources list updated to maverick and updates done successfully, even without the kernel being updated ( rebooted )?
<maxb> It's possible
<maxb> Pretty hideously unwise to have performed an upgrade to a new release series and not have rebooted, though
<bdmc> Two questions.  One, if such a thing was done, and therefore most of the machine should have been in a "maverick" state, why would a Natty upgrade with a Natty kernel have failed so?  ( Yes, there was, shall we say, resistance to rebooting while the system was "running." )
<bdmc> Secondly, during the Natty upgrade, I did see a couple of error messages about improper version numbers.  One was libxen3, if I remember correctly, and the other was cnews.
<maxb> I'm afraid you're not really giving enough info about the failures to infer anything useful
<maxb> You've said there's some problem booting the natty kernel - but what?
<maxb> As for the error messages during the upgrade, I'd need to see the exact messages, or a closer approximation of the exact messages, to understand what they were complaining about
<maxb> I suggest sleeping, and coming back here once you have access to the server
<maxb> :-)
<bdmc> True.  I am going to call it a night for a while.  I am trying to see through someone else's eyes, too, which is a problem.  He has just told me that he was paying attention during the boot, and it looks as if it was stopping on a fsck error.
<maxb> a fsck error is a whole different issue entirely
<maxb> you really really need to get some exact error messages to better understand the problem
<bdmc> I agree.  I was fighting to see what was happening during the boot process, and finally got that message from him.  The system is back up and running now, so I am going to get some sleep and "finish" later.  Thanks for the encouragement.
<shadeslayer> hi, i need some help with lvm, i'm completely new to it and was wondering how to increase my swap size
<shadeslayer> well .. i think i need to shrink my root partition first to free some space ...
<uvirtbot> New bug: #841033 in cloud-init (main) "package grub-legacy-ec2 0.6.1-0ubuntu8 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/841033
<Dice-Man> hi
<Dice-Man> how set up quotas on my ssh server ?
<Dice-Man> *to
<Dice-Man> the server is running on a xubuntu desktop version
<Dice-Man> i've tried quota with sudo apt-get install quota but when running the cmd quotacheck -vagum it doesn't run
<Dice-Man> i got something about lstat
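quotacheck usually fails like that when the filesystem is not mounted with quota options; the usual sequence is roughly this (device, mount point, and filesystem type below are illustrative, not taken from Dice-Man's box):

```
# /etc/fstab -- add usrquota (and grpquota if wanted) to the mount options
/dev/sda1  /  ext4  errors=remount-ro,usrquota,grpquota  0  1
```

Then `sudo mount -o remount /`, `sudo quotacheck -cugm /` to create the quota files, and `sudo quotaon /`.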
<udienz> Is anybody merging tftp-hpa now?
<udienz> if no one is doing the merge i'll take it
<Technicus> Hello . . . I have a server running ( this is the first time I have ever managed a server ), I have configured it for ZoneMinder video surveillance following the tutorial: < http://www.zoneminder.com/wiki/index.php/Ubuntu_Server_10.10_32-bit_with_ZoneMinder_1.24.2_from_source,_FFmpeg,_libjpeg-turbo,_Webmin,_Cambozola >.  I am having troubles logging into the web admin interface.  How do I make a user, my Ubuntu username an
<dxd828> Hey guys, do you know if it would be possible to update cms systems like wp with puppet on multiple servers?
<Slyboots> Hi
<Slyboots> I'm having some serious issues with Ubuntu 11.04 on an AMD Fusion motherboard.. any amount of network traffic seems to cause the machine to kernel panic
<Slyboots> Get a few messages about "NOHZ: local_softirq_pending 08", followed by "BUG: unable to handle kernel paging request at ffff88022c5c3000"
<Slyboots> Then kernel panic
<multiHYP> hi guys, is there something like homebrew for mac, but for ubuntu?
<multiHYP> im running 11.04 server
<multiHYP> why does my server by default have no user directories? public_html or pictures or documents etc.?
<multiHYP> where to go if i have ubuntu server setup/config questions?
<Pici> multiHYP: Here is a good place, but I'm not sure why you would think that you should have 'user directories' by default.
<multiHYP> cause in the uni we used to have those by default, i guess it was made by the system admin
<Pici> You'd need to set that up yourself. I *think* apache's mod_public does that, but don't quote me on that.
<multiHYP> might actually try nginx
<multiHYP> someone suggested that's better for scala and lift development...
<lenios> apache mod_userdir does that
<lenios> or are we talking about /home dir for each user?
<multiHYP> lenios_: no just the ~/public_html/ or wherever else I shall put my .html .php .war pages and applications.
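If apache ends up being the server, the mod_userdir setup lenios mentions is roughly this (the fragment below follows the Debian/Ubuntu default shipped with the module, abridged; enable with `sudo a2enmod userdir` and reload apache):

```
# /etc/apache2/mods-available/userdir.conf (abridged)
<IfModule mod_userdir.c>
    UserDir public_html
    UserDir disabled root
</IfModule>
```

Each user then serves files from ~/public_html at http://server/~username/.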
<multiHYP> although somebody recommended me to use nginx
<multiHYP> there is no easy way of installing nginx alone without 100 different settings and adjustments on linux ubuntu 11.04.
<multiHYP> do i need to open the same udp ports as well for ssh or any other case?
<multiHYP> im using ufw
<qman__> SSH does not use UDP
<qman__> also, ubuntu is secure by default -- that means no extra functions are installed
<multiHYP> so 80 for webserver, 22 or any other custom for ssh, 443 for https
<qman__> if you want things like userdir, you have to set it up
<multiHYP> well i had to set up iptables, that's not exactly secure by default
<qman__> yes, it is
<qman__> even though iptables has no rules by default, there is also nothing listening by default
<multiHYP> maybe to outsiders, which i still don't think it is, but i want it sorted even for multiple users..
<multiHYP> i did setup iptables and now ufw, so hopefully iptables settings are gone
<multiHYP> sftp uses the same port as ssh, even if i customised ssh?
<qman__> yes
<qman__> sftp is a function of the SSH server
<multiHYP> also only tcp right?
<qman__> it is not separate
<multiHYP> ok, thanks qman__ :)
<qman__> no firewall is configured out of the box in order to make it easier to set things up, but there also is nothing running and listening out of the box
<qman__> so unless you make changes, it is, in fact, secure
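For the ports multiHYP listed earlier, the ufw side is just a few TCP rules (a sketch; substitute your custom ssh port for 22 if you changed it):

```
$ sudo ufw allow 22/tcp     # ssh -- sftp rides on the same port and daemon
$ sudo ufw allow 80/tcp     # http
$ sudo ufw allow 443/tcp    # https
$ sudo ufw enable
$ sudo ufw status verbose   # confirm what is actually open
```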
<multiHYP> yes obviously, but if you run something like webserver, ssh or sftp connections etc then things need to be secured.
<multiHYP> im in that phase now
<multiHYP> how can i remove the previous settings i specified for iptables? i want to stick to ufw since it's much easier.
<qman__> ufw is just a frontend to iptables
<multiHYP> i remember some /etc/iptables > and the iptables-save/-restore <
<qman__> it depends on how you configured it
<multiHYP> so how can i double check the state of iptables now?
<multiHYP> nano /etc/iptables...
<qman__> if you just created some rules via 'sudo iptables ...' then they will be overwritten by UFW
<multiHYP> no i overwrote the file and had .test.rules and a default preup and all that
<multiHYP> i followed a good article but forgot to bookmark it
<qman__> then you'll have to change the files back to default
<multiHYP> don't know what was the default :(
<qman__> though UFW may override it, I don't know enough about UFW and how specifically it interacts with iptables
<qman__> you can always check the current firewall with sudo iptables -L
<multiHYP> yes it overwrites i just want to double check just in case
#ubuntu-server 2012-08-27
<lickalott> anyone here running a znc/bnc?  I'm having oidentd issues.
<lickalott> is there a way to configure the oidentfile to auto choose based on the ip the connection is using?
<lickalott> right now it shows my actual user name from my server vs my irc nick.  and when one of my users is still connected and I do a /znc jump i get a module in use error
<riz0n> Hello guys, I am setting up an Ubuntu Server 12.04, and am going by the Server Guide. I am trying to set up SASL Authentication in Dovecot, on page 237, it refers to editing a file /etc/dovecot/dovecot.conf. It wants to uncomment lines auth default section, in socket listen{ }. The problem is, the dovecot.conf file does not contain the lines as the documentation includes. Can someone
<riz0n> tell me the file that I need to edit that contains this portion of the configuration? Thanks.
<kees> riz0n: in 12.04 the dovecot configure got spread out. look for auth.conf, maybe?
<kees> *configuration
<lickalott> gents, running into an odd issue.   I installed ubuntu to /dev/sda but added 3x 500GB drives after installation.  they showed up as sdb, sdc and sdd.  I rebooted and now /boot shows up against sdb and my mounts in fstab won't mount (likely because /dev/sdb1 is supposed to be mounted to /media/<FOLDERNAME>).  anyone know what's going on here?
<lickalott> SO...is there anyone even in this chan or is it just for show?
<Super_Dog> lickalott:  have you messed with the boot order in your BIOS?
<Super_Dog> the drives you added may have re-arranged the order depending on their precedence in your hardware...
<lickalott> no BIOS messing.  There are 3x IDE drives and 1x SATA.  2x IDE drives are cable select on EIDE 1 and the 3rd IDE drive is "slave" on EIDE 2 along with a DVD drive.
<lickalott> i even went as far as to move the jumper to master on SDA
<lickalott> but why would it move /boot to sdb (which is #2 IDE on EIDE 1 and marked as cable select)?
 * lickalott sratches head
<Super_Dog> IDE will often take precedence when these are added - especially to older systems...
<lickalott> should I place the jumpers in the proper config (i.e. master, slave, etc...) for them all?
<Super_Dog> Hold on...
<Super_Dog> was the SATA drive the original drive you installed to?
<lickalott> neg
<lickalott> master cable slot on cable select
<Super_Dog> sounds to me like your drives got re-assigned when you added all the drives afterwards...
<lickalott> agree
<Super_Dog> why else would your boot drive go from /sda to /sdb...?
<Super_Dog> Again - you may have to play with your boot order in your BIOS - problem with older machines is that they may not have very granular options there...
<lickalott> i had to rebuild the server because i was SURE that /boot was on sda.  After I rebooted (after installing the drives) i didn't check, assuming it was the same and formatted all the new drives.  POOF went my install
<Super_Dog> Did you image your hard drive?
 * lickalott looks down in shame
<Super_Dog> clonezilla / dd - both will work
<lickalott> i was in the middle of tar'ing up my ~ folder when i rebooted.  I had two ssh windows open and forgot I was tar'ing it up for a back up.
<Super_Dog> I like installing with all the drives in place - when adding storage I like to use LVM.
<Super_Dog> different strokes for different folks - sure you'll get some differences of opinion on this
<lickalott> if this continues to pain me i may rebuild again with all in place
<Super_Dog> is your bios crappy and doesn't allow you to play with it...?
<Super_Dog> boot order can often be re-prioritized in a lot of modern bios settings
<lickalott> it's an OLD machine.  I'd have to look and see if there is an update, but the bios that's on it now is limited.
<Super_Dog> i try to never mix older IDE with SATA.  boot order can be a royal nightmare with that setup
<Super_Dog> if you have to add drives - be careful and image everything.... but you should be able to go back to your single drive setup and boot just fine
<Super_Dog> don't know why that wouldn't fly for you if you are doing just a plain jane install of a debian server or what not.
<lickalott> it got stuck on boot tonight (init 6 via putty) and when it didn't come back I had to go to the back room and check on it.  That's when i saw that it wasn't mounting properly.  I commented out my entries in fstab and rebooted and it came up fine.
<Super_Dog> good... at least you're back to where you were...
<Super_Dog> nothing lost?
<lickalott> not this time, no
<lickalott> it's just odd that it swapped drive spots
<Super_Dog> you have some spare add-in cards?  Sometimes those are more reliable than the mo-bo adapters on the old mobo's...
<Super_Dog> does it support pci-express?
<lickalott> no...   This was a frankenstien job.  I had some drives in my winblows server that I upgraded and decided to throw the old ones in my ubuntu server
<lickalott> no (PICe)
<lickalott> it's OLD... like P4 old
<lickalott> lol
<lickalott> i'm going to change my fstab to sda1 (mounted to /media/<FOLDERNAME> instead of sdb1.  If he's happy with /boot on sdb, he can keep it....
<Super_Dog> good luck with that...
<Super_Dog> although I have had some luck with various PCI controllers.  JMicron gets a bad wrap on WinBloze platforms but it has been successful for me on Linux server installs on old Frankenstein units...
<lickalott> i'm too far into the rebuild now... took me 3 days to get ipv6 working properly with an HE tunnel.  worst case (for the time being is I take #2 on EIDE 2 out and operate without 500GB
<Super_Dog> You don't need any of the Promise or JMicron raid controller cards either...  Just get a standard run of the mill PCI controller did the trick for me on an old P4 system.
<Super_Dog> Those will usually default to a later drive precedent than any of the on-board mobo controllers.
<lickalott> i have a frys right up the street  :-D
<Super_Dog> $10-$20 no more... I ordered some no names with the same chipset on e-bay and they worked fine...
<Super_Dog> some of the old P4's have a bios setting for booting add-in cards first... be careful with those
<lickalott> Super_Dog, there was an issue with my fstab entries.  Doesn't solve the movement of /boot from sda to sdb but I think it would be fine now if I continued (keeping sdb as my primary)
<lickalott> Super_Dog do you have anything auto mounted in fstab besides the default stuff?
<lickalott> Super_Dog, i was able to use BIOS to force the main/primary drive.  works now.  thanks for the assist man!
<Super_Dog> You are welcome sir...
<Super_Dog> lickalott:  consider yourself lucky....  Those old mobo's can just kill your mind...
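The sda/sdb swap lickalott hit above is a consequence of probe-order device naming; keying /etc/fstab on filesystem UUIDs sidesteps it entirely (the stock Ubuntu installer already does this for /). A minimal sketch; the UUID and mount point below are made-up examples:

```shell
# blkid reports each filesystem's UUID, e.g.:
#   sudo blkid /dev/sdb1
# Key the fstab entry on that UUID instead of the device node.
# nofail keeps boot from hanging if the disk is ever absent
# (compare the stall described above).
entry='UUID=3e6be9de-8139-11d1-9106-a43f08d823a6 /media/backup ext4 defaults,nofail 0 2'

# field 2 of an fstab line is the mount point:
mountpt=$(echo "$entry" | awk '{print $2}')
echo "$mountpt"    # -> /media/backup
```

Running `sudo blkid` with no arguments lists every detected filesystem at once, which makes rewriting the whole fstab quick.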
<Kartagis> hi
<Kartagis> I'm in the process of purchasing a dedicated server, but I need some advice. can I ask here?
<Super_Dog> Yep...
<drag0nius> hello
<drag0nius> whats some nice web shell?
<Kartagis> we are 3 people purchasing it
<Kartagis> we want to virtualise it so no one interferes with the other
<Super_Dog> What you trying to do with it...
<Super_Dog> Three virtual machines running on the server?
<Kartagis> I am being asked what OS. do I choose an OS or do I choose a rescue system?
<Kartagis> yes, virtual machines
<Super_Dog> You can run Ubuntu Server if you wish...
<Super_Dog> You can use VM-Ware.  Virtual Box.  KEMU.  Others...
<Kartagis> one guy says we need rescue system if we need to virtualise
<Super_Dog> Is this small business or big enterprise node or what?
<Kartagis> hetzner esx4, if you are interested to know
<Kartagis> 2x3tb hdd
<Kartagis> 32gb ram
<Kartagis> 8 IP addresses
<Super_Dog> Rescue system - I would suggest using system cloning or imaging and redundant storage solution...
<Super_Dog> Lots and lots of ways to skin the cat there....
<Kartagis> all I need to know is if we really need rescue system to virtualise
<Kartagis> if we do, I'm going to change the order
<Kartagis> I've already placed it
<Super_Dog> Short answer is probably "No, you don't".  But depends on what "rescue system" is being defined as.  It may be a good idea...
<Super_Dog> One person's "rescue system" is another person's redundant storage solution with off-site backup provisions.
<Super_Dog> Can you define exactly what the "rescue system" is?
<Kartagis> my OS options are centos, debian, ubuntu server, opensuse. what should I choose? please reply regardless of the channel
<Kartagis> Super_Dog: if I only knew
<Super_Dog> :-)
<_ruben> you already mentioned esx, that'd be your os
<Super_Dog> Hate to recommend.  Of course I am running Ubuntu Server - ergo my presence on this channel.
<Super_Dog> Correct ESX4 is essentially an OS by VMWare that will let you run your virtualized machines on it...
<Super_Dog> All of those options are freely downloadable...
<Kartagis> _ruben: oh, I thought esx4 was the name of the plan
<_ruben> could be, but if that's the case, i'd assume the server would come installed with esx :)
<Super_Dog> Not a big deal.  Now if you are getting support for ubuntu - that's a different story...  There's free open source and then you can buy Ubuntu support packages from Canonical.
<Super_Dog> google esx4 and you will find out more about it...
<Super_Dog> It's VMWare server virtualization product...
<Super_Dog> Pretty slick from what I hear but I don't run it myself.
<_ruben> vmware makes pretty great products, yet the really cool stuff is rather pricey
<Super_Dog> _ruben speaketh the truth on price
<_ruben> but for the simple tasks, the free stuff suffices just fine
<Kartagis> I hear about Xen. can I use that on ESX4?
<_ruben> no
<Super_Dog> http://www.vmware.com/products/datacenter-virtualization/vsphere/pricing.html
<_ruben> nesting virtualization products hardly ever works
<_ruben> xen provides virtualization, as does esx
<_ruben> (tho one shouldn't be using esx nowaydays, but esxi instead)
<Super_Dog> Yeah, that gets a bit convoluted... I would virtualize off the ESX
<Super_Dog> Isn't it called VSphere now?
<_ruben> vSphere is the name of the whole productline
<_ruben> where esxi is the name of the hypervizor
<Super_Dog> OK...  I'm rusty on the new nomenclature as I've stuck with the free open source solutions...
<Super_Dog> Should be ESX5 it looks like...  http://www.vmware.com/products/vsphere/esxi-and-esx/overview.html
<Super_Dog> I mean if you are buying new and all...
<Super_Dog> or I should say "ESXi"
<_ruben> ESX5 doesn't exist, ESXi5 does
<_ruben> 4 was the latest with both ESX and ESXi
<_ruben> bah .. bug #1020914 is still present in 12.04.1 :/
<uvirtbot> Launchpad bug 1020914 in mdadm "The disk drive for /boot is not yet ready or not present" [Undecided,New] https://launchpad.net/bugs/1020914
<Super_Dog> Windows 8 Enterprise Link:  https://24.234.137.89:8100/ajaxplorer/data/public/d5c2b5ed9cc035118988891268f0cf9a.php?lang=en
<Lachezar> Hello all... How do I make a RAID1 with 2 GPT disks detected and mounted at boot time?
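Lachezar's RAID1-on-GPT question goes unanswered in the log. For the record, a rough sketch, assuming two blank GPT partitions on an installed Ubuntu system; device names are examples and everything must run as root:

```shell
# Create the mirror, put a filesystem on it, and make sure the array
# assembles before fstab is processed at boot.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array definition
update-initramfs -u                              # so the initramfs can assemble it
# finish with a UUID-keyed /etc/fstab line for the new filesystem
```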
<Kartagis> Super_Dog: hetzner.de says "Das Rescue-System ist ein minimales Linuxsystem mit den wichtigsten Tools"
<_ruben> Kartagis: so (as I expected) it's aimed at recovering an existing installation
<_ruben> ok, this is weird, regarding the failure to mount /boot at boot time. if i strace mountall in the iniramfs prompt, i get nothing relating to /boot
<ogra_> check your fstab :)
<_ruben> /dev/md0 /boot ext2 relatime 0 2
<_ruben> nothing wrong there i'd say
<ogra_> and /dev/md0 exists at that point ?
<blackshirt> your /boot reside on own partition?
<_ruben> mount -a -t ext2 does work for instance
<_ruben> but mountall just stalls
<blackshirt> ruben, are using grub?
<_ruben> yes, default 12.04 install
<blackshirt> are using multi disk array?
<_ruben> updated to 12.04.1 this morning (issue exists ever since installation)
<ogra_> did you file a bug ?
<_ruben> box has 2 disks, 2 partitions on each: small one for md0 for boot, and rest for md1 for lvm for rest
<blackshirt> i think you need separate partition maybe
<_ruben> https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1020914
<uvirtbot> Launchpad bug 1020914 in mdadm "The disk drive for /boot is not yet ready or not present" [Undecided,New]
<blackshirt> are you grun can boot md/lvm based devices?
<ogra_> well, grub seems to get along since he ends up in an initrd ;)
<ogra_> so kernel and initramfs are loaded and executed
<_ruben> /boot resides on a seperate mdadm volume formated as ext2
<_ruben> just seems mountall can't cope with it somehow
<_ruben> current workaround: add nobootwait to fstab and mount /boot to rc.local
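_ruben's workaround spelled out against his own fstab line above; this is a stopgap, not a fix for the underlying mountall race:

```shell
# /etc/fstab: nobootwait stops mountall from blocking the boot on /boot
#   /dev/md0  /boot  ext2  relatime,nobootwait  0  2
# /etc/rc.local: mount it late, once userspace is fully up (before "exit 0")
#   mount /boot
```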
<blackshirt> i thin you should treat your /boot not to use lvm/md
<ogra_> sounds like a race in the initramfs scripts
<_ruben> ogra_: how would that explain mount to work, but not mountall?
<_ruben> (which is something i discovered today)
<ogra_> no idea, but mountall runs scripted, if you run mount manually there is a lot time passed already
<_ruben> if i mount manually, and then invoke mountall, boot continues (but in a odd state, as mountall keeps lingering)
<ogra_> i would think assembling md0 takes to long here
<ogra_> or mountall is to fast trying to use it ... as you want to put it :)
<_ruben> i wonder why md0 would take longer than md1
<ogra_> no idea, just a theory
<_ruben> cuase when i end up in the recovery shell, the lvs are all mounted already :)
<ogra_> yeah but probably not while mountall runs
<_ruben> current wild-guess is some trigger mis-firing or something, as stracing mountall shows it ending up in a select() call
<_ruben> friggin' plymount and all that :P
<_ruben> plymouth
<ogra_> you could talk to xnox in #ubuntu-devel, he is our raid specialist
<ogra_> well, libplymouth is needed for communication purposes
<ogra_> poor mans dbus :)
<KAVI> This Configuration Possible or not ?   1St server have a 4 core , next server have a 4 core and these 2 combine to another server its also 8 core its possible in ubuntu Server
<KAVI> s
<_ruben> KAVI: not in a trivial way .. grid computing is one way, but you can't just run arbitrary stuff that way
<kavi_> This Configuration Possible or not ?   1St server have a 4 core , next server have a 4 core and these 2 combine to another server its also 8 core its possible in ubuntu Server
<kavi_> how to configure ?
<_ruben> KAVI: not in a trivial way .. grid computing is one way, but you can't just run arbitrary stuff that way
<kavi_> cloud computing is possible or not
<kavi_> I want to this configuration in cloud computing using ubuntu server its possible or not .that's why asking sir
<jgcampbell300> is this a good place to ask some newbie quesitons about ubuntu server
<_ruben> jgcampbell300: sure
<jgcampbell300> i need a sub network for an experament ... can i use 192.168.1.1 and 192.168.0.1 on the same network to seperate some computers ?
<_ruben> usualy, yes
<jgcampbell300> cool ... when setting up maas ... i would config a nic to my outside lan say 192.168.1.1 and maas to the 192.168.0.1 where all the clients will be when i get them set up ... is that correct
<_ruben> well, things (can) go wrong when you want to do dhcp for both for instance .. and no experience with maas here
<jgcampbell300> i plan on being static on the main lan ... i think i need to be dhcp on the cloud side dont i... for up/down scale ?
<_ruben> sounds reasonable
<jgcampbell300> thanks ... im gona have to go read up on iface eth1 inet6 ... never set that up befor
<jgcampbell300> so can i use the same netmask 255.255.255.0?
<Kartagis> I purchased an additional IP block
<Kartagis> I tried to add them with ip route add 5.9.222.96/28
<Kartagis> ifconfig shows me the same thing
<_ruben> jgcampbell300: yes
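_ruben's "yes" is easy to verify: under netmask 255.255.255.0 (a /24), the network is just the first three octets, so 192.168.1.x and 192.168.0.x can never collide. A throwaway check in plain sh:

```shell
# /24 only: zero out the host octet to get the network address.
network() { echo "${1%.*}.0"; }

network 192.168.1.37    # -> 192.168.1.0
network 192.168.0.99    # -> 192.168.0.0
# the two subnets map to different networks:
test "$(network 192.168.1.5)" != "$(network 192.168.0.5)" && echo separate
```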
<_ruben> Kartagis: same as what?
<zul> morning
<_ruben> g'day
<Kartagis> _ruben: what it was before
<jgcampbell300> ok got a small problem ... network restart ...Failed to bring up eth0.
<jgcampbell300> when i did /etc/init.d/networking restart
<jgcampbell300> wonder if i should reboot
<_ruben> Kartagis: ifconfig, and route as well, are braindead commands; use the ip command only
<Kartagis> _ruben: I use ip
<_ruben> Kartagis: no you dont: 13:54 < Kartagis> ifconfig shows me the same thing
<Kartagis> [14:54:14] <Kartagis> I tried to add them with ip route add 5.9.222.96/28
<Kartagis> as per the provider's faq pages
<_ruben> that part is good, the ifconfig part is wrong
<Kartagis> I use ifconfig to check if the IP addresses have been added
<_ruben> ip route doesn't add ip addresses
<_ruben> and for ip addresses, there's ip address, ifconfig is not to be used
<Kartagis> hrm
<Kartagis> what do I use?
<_ruben> use for what? showing ip addresses? ip address
<Kartagis> for adding IP addresses
<Kartagis> do I use ifconfig?
 * Kartagis hides
<_ruben> read the manpage of 'ip'
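The man page _ruben points at boils down to a few subcommands. Attaching an address from Kartagis's new block would look roughly like this (the specific host address and interface name are illustrative, and the add needs root):

```shell
ip address add 5.9.222.97/28 dev eth0   # attach one address from the block
ip address show dev eth0                # the modern replacement for ifconfig
ip route show                           # routes live in a separate table;
                                        # "ip route add" never adds addresses
```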
<Kartagis> hmm
<Kartagis> it seems they have been added
<Kartagis> I can see them in the admin page
<RoyK> !ip
<jgcampbell300> i need some help ... i installed maas-dhcp and it changed my dns up on the server i cant apt-get update now ... i can ping ip addresses on the net but cant ping www.whatever.com ... any ideas
<Kniggedigge> hi guys, can someone please help me? i dont know which packages to install, to get the required libraries for ./configure
<Kniggedigge> http://pastebin.com/mFYsUEZm
<RoyK> Kniggedigge: roy@smilla:~$ apt-cache search libavfor
<RoyK> libavformat-dev - Development files for libavformat
<RoyK> ...
<Kniggedigge> yes i know, but i have to install a newer version and i dont know how...
<Kniggedigge> configure: WARNING: Requested 'libavformat >= 52.110.0' but version of libavformat is 52.64.2
<RoyK> Kniggedigge: then I guess you should look for a ppa somewhere, and if you can't find that, maybe resort to compiling libav* from source
<Kniggedigge> hmm ok thanks RoyK ;)
<RoyK> Kniggedigge: lrwxrwxrwx 1 root root 22 Jun 12 18:53 /usr/lib/x86_64-linux-gnu/libavformat.so -> libavformat.so.53.21.0
<RoyK> that's on Precise
<RoyK> Kniggedigge: which distro version are you on?
<Kniggedigge> 11.4
<RoyK> better upgrade, then
<Kniggedigge> hmm so theres no other way to install bino?
<Kniggedigge> :(
<RoyK> run do-release-upgrade to take it to 11.10 and then another time to take it to 12.04
<yeats> !info libavformat-dev
<ubottu> libavformat-dev (source: libav): Development files for libavformat. In component main, is optional. Version 4:0.8.1-0ubuntu1 (precise), package size 651 kB, installed size 1818 kB
<RoyK> Kniggedigge: you can install packages from newer distros, but it's not very clean. also, 11.04 has less than two months left of its support. better upgrade...
<Kniggedigge> yeah well then I'm getting into bigger trouble ;)
<RoyK> Kniggedigge: 12.04 is LTS, so 5 years support
<jgcampbell300> ok another newb question comeing at ya ... i edited /etc/resolv.conf ... now how do i reload it
<RoyK> jgcampbell300: no need
<jgcampbell300> RoyK: ya just saw that on google ... thanks tho ... hmm do you know of any place i would be haveing issuses .. i installed maas-dhcp ... changed the resolv.conf back to my isps dns ... maybe there is another place to change it
<jgcampbell300> woo hoo ... nm .. got it
<jgcampbell300> was pinging a bad address .. thanks guys ... gota go to work
<RoyK> jgcampbell300: keep in mind that if dns servers are set in /etc/network/interfaces, /etc/resolv.conf may (read: will) be overwritten
<jgcampbell300> RoyK: ya i was looking for the interfaces but theres nothing in there about dns .. i need to resetup my eth0 for maas but will have to wait till i get back
<jgcampbell300> RoyK: at least i can update it now
<jgcampbell300> RoyK: thanks again ... gota go
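RoyK's point in concrete form: with ifupdown (plus the resolvconf package on 12.04), DNS servers declared next to the static address survive regeneration of /etc/resolv.conf. Addresses below are placeholders:

```shell
# /etc/network/interfaces (fragment)
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4
```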
<uvirtbot> New bug: #1042263 in mod-auth-mysql (main) "Error with special characters in users passwords" [Undecided,New] https://launchpad.net/bugs/1042263
<Daviey> adam_g: Not sure i am a fan of the extra digit added to nova's version.  If multiple snapshots are required the same day, isn't ".n" suitable?
<koolhead17> hi all
<jgcampbell300> i have installed maas and am looking for the proper place to add a upstream dns server ... can anyone help
<jgcampbell300> i have installed maas and am looking for the proper place to add a upstream dns server ... can anyone help
<jgcampbell300> whops ... sorry
<dormito> Im trying to setup an Ubuntu server as a samba/ldap domain controller (in a largely windows enviorment) Im trying to use the guide/documentation at https://help.ubuntu.com/12.04/serverguide/network-authentication.html however it seems that the guide is out dated/incomplete (or wrong, Im not really sure which).ATM Im having trouble getting settting up TLS with samba
<arrrghhh> hey all.  upgrading my server, and it seems like my session died
<arrrghhh> i had it in a screen window, and when i go back into the box it says the screen session is already attached
<arrrghhh> nothing there.  when i try to apt-get install, update, etc it says something has a lock on dpkg.  last i saw there was a question about samba on the updater.  what can i do?
<arrrghhh> hrm.  i guess this is why you don't upgrade over ssh :/
<Pici> arrrghhh: then re-attach the screen session, use -DR
<arrrghhh> Pici, the screen session is gone
<arrrghhh> yet dpkg and samba still seem to be waiting
<maxb> You just claimed it was already attached?
<arrrghhh> it was
<arrrghhh> i closed the session
<arrrghhh> relaunched another one
<arrrghhh> managed to get the samba screen re-attached
<arrrghhh> i was comparing the configs, and the screen went all wonky.  so i hit ctrl-c
<arrrghhh> i thought that would kill it, but it seems it's still running "somewhere"
<arrrghhh> /var/lib/dpkg/info/samba-common.postinst seems to still be running when i lsof |grep dpkg
<arrrghhh> maxb, Pici ?
<arrrghhh> should i just reboot the server at this point?
<arrrghhh> not sure where to go with it...
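For anyone hitting the same stranded-upgrade state, the usual recovery sequence, hedged because the right move depends on exactly where dpkg was interrupted:

```shell
screen -ls                    # list sessions: Attached / Detached / Dead
screen -DR                    # force-detach any remote attach and reattach
screen -wipe                  # clear out sessions that are truly dead
# if the session is unrecoverable and dpkg still holds its lock:
fuser -v /var/lib/dpkg/lock   # confirm nothing live still owns it
dpkg --configure -a           # resume the half-configured packages
```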
<jgcampbell300> can anyone here tell me where to add two dns addresses on ubuntu 12.04 with maas and mass-dhcp ... there is some dynamic resolv happening and i dont want to put it in the wrong place ... i read there is prepend domain-name-servers some where but cant find it
<Vampy> does any one have a guide/howto setup a MAAS server behind a DHCP enable router that still PXE boots the nodes?
<jgcampbell300> i just found this /etc/dhcp/dhclient.conf there is a commited out like #prepend domain-name-servers 127.0.0.1; would i be safe adding prepend domain-name-servers 8.8.8.8, 127.0.0.1, x.x.x.x; ?
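To jgcampbell300's question: that is valid dhclient.conf syntax. prepend puts the listed servers ahead of whatever the DHCP lease supplies; the servers below are examples, and the trailing semicolon is required:

```shell
# /etc/dhcp/dhclient.conf
#   prepend domain-name-servers 8.8.8.8, 127.0.0.1;
# takes effect on the next lease renewal:
dhclient -r eth0 && dhclient eth0
```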
<jgcampbell300> Vampy: im accualy working on that now ... with a few problems but seems to be working so far ... https://wiki.ubuntu.com/ServerTeam/MAAS/AddNodes
<Vampy> jgcampbell300, i'm on the step behind you..
<jgcampbell300> Vampy: mine seems to be working so far ... i just cant get my dns to stay put ... it keeps changing resolv.conf to nameserver 127.0.0.1 and i need it to go to an outside dns for things it dosnt know about
<jgcampbell300> Vampy: do you have two nic's in your controler and a switch between controler and nodes
<Vampy> currently, no but a second nic and switch would be easy to get to
<Vampy> assumin the box has room for another card.. i've looked inside for a few months now
<Vampy> jgcampbell300, you are suggestion that i use 2 nics and a switch, would you say that bridging the nic's in the controller is best?
<jgcampbell300> Vampy: im not totaly sure accualy ... from what i have read there are like 50 ways to set it up ... but i didnt want any of my cloud jacking with the rest of my network so i used the two nic's in my server to seperate everything
<jgcampbell300> Vampy: im just trying to learn all i can about these systems
<jgcampbell300> i have been haveing a heck of a time getting anyone on irc that can help me with this ... was thinking about looking for a maas channel
<Vampy> wait.. i've got a theory here.
<Vampy> according to your idea.... the chain would run as such.. router to eth0 (controller) eth1 to switch switch to nodes.. eth1 runs as a independent dhcp with new ip addresses to nodes only while eth0 prevides wan access to all ..
<Vampy> eth1 would have a custom pool of ips that differ from the router (ie 10.100.10.1-254)
<jgcampbell300> Vampy: from what i see yes ... im not sure yet but i think there may need to be a bridge at eth 0 1
<Vampy> agreed or else the nodes would not have inet/wan access.
<jgcampbell300> Vampy: ya ... my eth0 is set to x.x.1.x static ... while i have eth1 x.x.0.x with a dhcp server attached to eth1
<Vampy> so how would you set the dhcp to run totally on eth1?
<Vampy> dhcpd.conf?
<jgcampbell300> what has been driveing me nuts is figgureing out where ot pu new dns addresses since there is a dynamic resolv happening here some hwere ... every time i restart or anything with network it resets it back to local
<jgcampbell300> i am thinking so .. im reading it right now .. and thats the best i have found for it so far ... but just not sure at this point
<jgcampbell300> i installed maas-dhcp and thats when dynamic resolv started happening ... i think
<jgcampbell300> and i have googled a million sites trying to figgure out the propper place ot change it
<Vampy> are you in a productions enivor?
<Vampy> or are you like me , running it within your "home" to get an idea of what your doing before going mission crit/prod eniv with it?
<jgcampbell300> all at home for now ... i need ot know how it works befor
<jgcampbell300> just found a page that makes a bit more since
<Vampy> there is a maas chan #maas im in there as well
<Vampy> gettin a little info from dannf inthere
<jgcampbell300> nice ... gona join there ... dont think anyone is home here
<uvirtbot> New bug: #1042367 in samba (main) "package smbclient 2:3.6.3-2ubuntu2.3 failed to install/upgrade: lecture courte (short read) lors d'une copie de buffer pour backend dpkg-deb pendant « ./usr/bin/rpcclient »" [Undecided,New] https://launchpad.net/bugs/1042367
<capitaninsaneoh> Im setting up backup to an external USB drive.  Should i format to ext4 or fat32?
<capitaninsaneoh> Or Ntfs?
<arrrghhh> capitaninsaneoh, depends
<adam_g> Daviey: re: versioning. we need to be able to bump the version of the orig tarball in this case, not hte ubuntu versioning. im hesitant to add a point to the upstream version (2012.1.3.1), but maybe to the datestamp? ie, 2012.1.2+stable-20120824.1-$hash-0ubuntu1 , eh?
<arrrghhh> if you need it to work with Windows systems, NTFS.
<arrrghhh> if you are only using it on a linux box, ext4
<arrrghhh> don't do fat32...
<capitaninsaneoh> arrrghhh, I'm only using it for server backup
<capitaninsaneoh> arrrghhh, on Ubuntu server
<arrrghhh> capitaninsaneoh, if that drive is only going to be connected to that ubuntu server system, ext4 is fine.
<capitaninsaneoh> arrrghhh, Yes for backup on Ubuntu server
<capitaninsaneoh> Thanks!
<arrrghhh> np
<capitaninsaneoh> Have a great day!
<capitaninsaneoh> +)
<peeps[work]> where does root mail go by default?
<arrrghhh> you too
<arrrghhh> peeps[work], /var/mail
<peeps[work]> what sort of things send mail?  would a cron job with an error return code prompt an email?
<peeps[work]> i see no mail
<peeps[work]> arrrghhh, how can I set to send it to a real email address?
<arrrghhh> peeps[work], you'd need some sort of a relay server
<arrrghhh> smart host
<arrrghhh> smtp forward
<arrrghhh> etc
<arrrghhh> peeps[work], https://help.ubuntu.com/12.04/installation-guide/i386/mail-setup.html
<arrrghhh> that's if you want to setup the ubuntu server as the mail system
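Answering peeps[work]'s two questions in one place: cron mails a job's output (stdout and stderr) to the owning user, so a failing job generates mail only if it prints something; a non-zero exit with no output produces none. Redirecting root's local mail to a real address is an alias plus a rebuild, assuming a working MTA is installed:

```shell
# /etc/aliases (the address is an example):
#   root: admin@example.com
newaliases    # rebuild the alias database so the MTA sees the change
```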
<jkyle> I'm curious why installing the ruby interpretor pulls in x11
<jkyle> anyone know?
<arrrghhh> jkyle, is it a required dependency or optional?
<arrrghhh> you can install it with --no-depends or something like that.
<arrrghhh> as for answering your actual question, i don't know... sorry :?
<arrrghhh> :/
<jkyle> arrrghhh: drilling down to see where it's coming from
<arrrghhh> brb
<jkyle> looks like it's this dependency: libfontconfig1
<jkyle> --no-install-recommends cleans it up
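jkyle's findings as commands, for anyone retracing them (the apt.conf.d filename is an arbitrary choice):

```shell
apt-cache rdepends --installed libfontconfig1   # see who pulls it in
apt-get install --no-install-recommends ruby    # skip the X11 chain
# to make that the permanent default, create e.g.
# /etc/apt/apt.conf.d/99norecommends containing:
#   APT::Install-Recommends "false";
```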
<bitmonk> anyone have an issue with the installer popping up a dialog about grub at the end of successful, automated preseed installs?
<gyre007> anyone here running rabbitmq on Ubuntu server ?
<bitmonk> it happens for us on lucid, maverick, and precise (the only versions we use) and says something like "continue without installing grub?" pressing yes seems to work, but it would be ideal to not have to answer. ;)
<bitmonk> gyre007: i have, what's up?
<gyre007> bitmonk
<gyre007> im curious about one thing
<bitmonk> shoot :)
<gyre007> how do I print out the configuration
<gyre007> the reason I'm asking is that there is a rabbitmq process running on port 5672 but I've no idea what it is... im guessing ssl ?
<gyre007> as the other processes I can see what they are from /etc/rabbitmq/rabbitmq.conf.d
<gyre007> -rabbitmq_stomp tcp_listeners [{"0.0.0.0",61613},{"0.0.0.0",6163}]
<CharlieSu> Is it possible for me to install 2:3.5.4~dfsg-1ubuntu8.5 from Ubuntu 10.10 onto a 12.04 server?  I ask because there is a show-stopper bug that hasn't been fixed yet in 12.04  https://bugs.launchpad.net/ubuntu/+source/samba/+bug/967410
<uvirtbot> Launchpad bug 967410 in samba "Windows clients cannot connect to shared printers " [High,Confirmed]
<bitmonk> what's its' parent process?
<gyre007> sh -c /usr/lib/rabbitmq/bin/rabbitmq-server
<bitmonk> and that process' parent is init?
<gyre007> nope
<gyre007> parent of that process is su rabbitmq -s /bin/sh -c /usr/lib/rabbitmq/bin/rabbitmq-server
<gyre007> rabbitmq  6660  6659  0 21:33 ?        00:00:00 sh -c /usr/lib/rabbitmq/bin/rabbitmq-server
<gyre007> second
<gyre007> parent is: su rabbitmq -s /bin/sh -c /usr/lib/rabbitmq/bin/rabbitmq-server
<gyre007> damn copy paste
<gyre007> strange... it looks like rabbitmq-server is running in 2 instances ?
<gyre007> I have set up ssl config on port 5671 by creating /etc/rabbitmq/rabbit.config and exported a path to it in /etc/rabbitmq/rabbitmq.conf.d50_stomp (export RABBITMQ_CONFIG_FILE="/etc/rabbitmq/rabbit") and on restart I can see 2 services listening now..one strange one on 5672 and second on 5671 (my ssl config) - both are listening in the same proces PID
<gyre007> so Im guessin the 5672 is probably some default SSL rabbitmq stuff bitmonk... but im not sure... and even if it is how do I get rid of it ?
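For the record: 5672 is RabbitMQ's default plain AMQP listener, not SSL; it appears unless explicitly disabled. A hedged rabbit.config sketch that keeps only the 5671 SSL endpoint (the SSL certificate options are omitted; check the docs for the installed version):

```erlang
%% /etc/rabbitmq/rabbit.config
[{rabbit, [
    {tcp_listeners, []},       %% drop the default plain listener on 5672
    {ssl_listeners, [5671]}    %% keep the SSL endpoint gyre007 configured
]}].
```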
<spartan2276> Can anyone help me to configure a DNS server using bind?
<spartan2276> I have tried using the Ubuntu Guide here: https://help.ubuntu.com/community/BIND9ServerHowto
<spartan2276> but I have had zero luck in authenticating to the domain all I keep getting is Error code: DNS_ERROR_BAD_PACKET (0x0000251e)
<skrite> hey all, if i have a mysql-cluster set up with two computers, and i want to add a third, how big a deal is that?
<adam_g> SpamapS: ping
<spartan2276> Can anyone help me to configure a DNS server using bind?
<spartan2276> I have tried using the Ubuntu Guide here: https://help.ubuntu.com/community/BIND9ServerHowto
<spartan2276> but I have had zero luck in authenticating to the domain all I keep getting is Error code: DNS_ERROR_BAD_PACKET (0x0000251e)
<SpamapS> adam_g: pong, sup?
<adam_g> SpamapS: wondering best way to fix the nova ftbfs in proposed. the fix requires a new tarball. can we reject the upload to proposed and start over again, or do i expand upon that changelog?
<adam_g> there were two fixes that landed in stable branch upstream on friday, so it would make sense to try to include those as well
<uvirtbot> New bug: #1042431 in lxc (universe) "lxc-start-ephemeral needs tweak to handle centos and fedora hostname setting" [Undecided,New] https://launchpad.net/bugs/1042431
<SpamapS> adam_g: if another upload is what fixes it.. thats fine.. just use -v properly (the last version in release/-updates/-security) so the changelog reflects the whole change.
<SpamapS> adam_g: upstream or in debian dir, doesn't really matter how you fix it :)
<adam_g> SpamapS: 10-4, thanks
<smw_> Anyone here know vmbuilder? I ran sudo vmbuilder kvm ubuntu -v --arch=amd64 --suite=precise --mem=1024 but now I don't know what to do
<smw_> how do I find the image?
<smw_> I want to install and start it
<smw_> ah, found the image
<uvirtbot> New bug: #1042468 in python-quantumclient (universe) "python-quantumclient is uninstallable on Quantal." [Undecided,New] https://launchpad.net/bugs/1042468
#ubuntu-server 2012-08-28
<SpamapS> collectd->
<SpamapS> doh
<SpinningWheels> i was attempting to install a deb that requires "sun-java5-jre | sun-java6-jre | default-jre-headless", i have installed openjdk-7-jre-headless. i noticed that apt-get install default-jre-headless installs 6. is there a way to use openjdk-7 to satisfy "default-jre-headless"?
<arrrghhh> hey all.  having a case of the dumb.  i upgraded my server, and dnsmasq broke.  i put the old config file back in, and dhcp works - but dns does not.  i assume this relates to my resolv.conf on the server?
<blackshirt> what you mean with broke, not fully correctly installed?
<arrrghhh> sorry, config file was put to default.  i saved the old one.  copied everything from it, pasted it into the new one (and made sure everything else was commented out) and restarted dnsmasq
<arrrghhh> earlier i had to fix a dns issue by removing some lines from /etc/resolv.conf.  but it seems something else is auto-generating that file, because those lines are back.
<blackshirt> was dnsmasq correctly started?
<arrrghhh> dnsmasq did start, it's handing out IP's
<blackshirt> why you said dns doesn't work?
<arrrghhh> the server can resolve hostnames
<arrrghhh> but clients cannot
<arrrghhh> i have to statically configure dns on the clients to point to opendns for example
<arrrghhh> otherwise, no name resolution on the client ^^
<arrrghhh> hrm i think i ran into this before... let me see
<blackshirt> are you firewall permitted this?
<blackshirt> sure
<blackshirt> I mean are you sure firewall permitted this?
<arrrghhh> my firewall allows access to 192.168.0.0/24
<arrrghhh> on any port
<blackshirt> some options on dhcp-server to pass your dns server to client..have you check it?
<SpinningWheels> i was attempting to install a deb that requires "sun-java5-jre | sun-java6-jre | default-jre-headless", i have installed openjdk-7-jre-headless. i noticed that apt-get install default-jre-headless installs 6. is there a way to use openjdk-7 to satisfy "default-jre-headless"?
<arrrghhh> http://www.stgraber.org/2012/02/24/dns-in-ubuntu-12-04/
<arrrghhh> interesting read, but i'm not sure i'm closer to fixing the issue.
<blackshirt> have you analyze your log to watch them?
<arrrghhh> blackshirt, no, i shall do so now.
<arrrghhh> does dnsmasq make a log, or just syslog?
<blackshirt> you shoul do them...
<arrrghhh>  nas dnsmasq-dhcp[14012]: DHCPACK(eth0) 192.168.0.236
<arrrghhh> i see dhcp working.  i don't see anything about DNS... where should i look?  thank you for helping :)
<blackshirt> I don't know exactly..but you check it basically
<blackshirt> Usually log reside on syslog
<arrrghhh> sooo... syslog?  i'm sorry i don't know where to look for this issue.
<arrrghhh> ok
<arrrghhh> i don't see anything DNS related unfortunately... just DHCP.  :(
<blackshirt> just tail your syslog ... And see if your dns works normally
<blackshirt> Look at /var/log syslog
<arrrghhh> should i restart dnsmasq or something?  that's what i was doing, and i only see stuff about dhcp.
<blackshirt> do some dns related test, with lookup,dig,hostname from the client...
<blackshirt> Or other tools
<blackshirt> Sorry guys, i have going to work now..
<arrrghhh> ok thanks for helping
<blackshirt> sorry can't help more
<arrrghhh> no worries, you've pointed me in the right direction
<arrrghhh> :)
<arrrghhh> hrm.  definitely entires for DHCP requests.  nothing for DNS lookups...
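One likely reason nothing shows up: dnsmasq logs DHCP activity by default but logs DNS lookups only when asked. A small sketch:

```shell
# /etc/dnsmasq.conf
#   log-queries        # log every DNS lookup to syslog
# then:
service dnsmasq restart
tail -f /var/log/syslog | grep dnsmasq
```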
<arrrghhh> SpinningWheels, sorry you haven't gotten any response.  did you try doing the install with --no-depends or whatever and seeing if you can force it after the fact?
<arrrghhh> my server is listening on port 53
<arrrghhh> nslookup fails on the server, but ping is successful?  odd
<arrrghhh> anyone else around can help with dnsmasq/DNS in general?
<arrrghhh> so from the clients, dns does seem to work locally
<arrrghhh> but dns still doesn't work to the 'net... wth?
<arrrghhh> OK i've narrowed it down
<arrrghhh> if nameserver 127.0.0.1 is in my resolv.conf, i can't nslookup on the server.
<arrrghhh> so the server isn't forwarding DNS.  i saw this in syslog
<xnox> we run dns on 127.0.0.1 by default now, for the sake of network manager integration.
<xnox> check dnsmasq configuration and fix that.
<arrrghhh> dnsmasq[26090]: using local addresses only for domain localnet
<arrrghhh> i guess that changed?  i'll re-read it
<xnox> also the resolvconf package "manages" the resolv.conf file in a dynamic way
<arrrghhh> yea, i've noticed that part.  i read a blog
<arrrghhh> http://www.stgraber.org/2012/02/24/dns-in-ubuntu-12-04/
<arrrghhh> so i understand why you do those things now (which on the desktop for VPN users, that's awesome)
 * xnox ponders if stgraber enjoys nick highlights from his blog URLs =)
<arrrghhh> i'm just reusing my dnsmasq config from 10.04, and perhaps there's something else i need to define now.
<arrrghhh> lol
<arrrghhh> i've highlighted him 2x now then... ;)
<arrrghhh> it's a great read
<xnox> .... yes you need "migrate" =)))))
<arrrghhh> hah.  ok, i'll go rtfm.
<xnox> good luck =)
<arrrghhh> hah thx
<stgraber> xnox: hehe, I'm used to it ;) the dns link is posted around quite often ;)
<arrrghhh> stgraber, thx for writing it.  great article.
<roasted> hello!
<roasted> dumb question - I know you can install the GUI on the server variant, but when I start it up, would it go to a regular CLI login first?
<arrrghhh> depends on what runlevel the GUI is set to i suppose...
<arrrghhh> but i am not the person to ask about putting a GUI on a server.  i would do my best to steer you away
<roasted> I was just thinking how little I use the GUI on my powerbook. I thought it'd be nice to have the server version running since it's such a low powered machine, and only start the GUI if absolutely needed.
<roasted> well it's also not for a true server :P
<arrrghhh> hah
<roasted> really just meant for as much resource saving as possible
<roasted> I literally only use it for SSH work. It has nothing on it whatsoever.
<xnox> roasted: $ sudo stop lightdm
<xnox> bang, you have a server =)
<roasted> xnox, but, the GUI would still start up by default each time, right?
<arrrghhh> well, if you install the gui stuffs - can't you just change the runlevel the GUI runs at?
<roasted> I'd rather have it start in CLI mode, and only start the GUI manually if I need.
<roasted> I have no idea - I never tinkered with runlevels, so I'm drawing a bit of a blank there.
<arrrghhh> i think that's the right term
<xnox> roasted: disable lightdm upstart job by renaming the conf file to .conf.disabled i think.
<roasted> think? :P
<arrrghhh> well that would work too
<arrrghhh> derp
<arrrghhh> then it won't run the upstart job
<arrrghhh> roasted, after you rename that file, run update-rc.d
<arrrghhh> hell you could just disable the job with update-rc.d huh
<arrrghhh> i forget how handy that thing is
<arrrghhh> The disable|enable API is not stable and might change in the future.
<arrrghhh> lol
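On 12.04 there is also a way to disable an Upstart job without renaming its `.conf`: drop an `.override` file containing the `manual` stanza next to it. A minimal sketch, run against a scratch directory instead of the real /etc/init so nothing on the system is touched:

```shell
# Upstart treats a job as manually-started when an .override file
# beside its .conf contains the "manual" stanza.
JOBS=/tmp/upstart-demo            # stand-in for /etc/init
mkdir -p "$JOBS"
touch "$JOBS/lightdm.conf"        # pretend job definition
echo manual > "$JOBS/lightdm.override"
cat "$JOBS/lightdm.override"      # prints: manual
```

On a real system the equivalent is `echo manual | sudo tee /etc/init/lightdm.override`; deleting the override re-enables the job.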
<arrrghhh> xnox, ok so is there some delta page on dnsmasq or dns in general?  i'm confused as to what i need to do here.  i was reading the ubuntu community doc on dnsmasq, and i don't see anything wrong with the config
<arrrghhh> https://help.ubuntu.com/community/Dnsmasq
<xnox> not sure. last time I did networking "by-hand" was in 8.04 =_
<xnox> =)
<arrrghhh> poo
<xnox> so i rather know this in theory, than practice =)
<arrrghhh> well i'm combing the new dnsmasq.conf file nao
<arrrghhh> maybe something will pop out @ me
<arrrghhh> my config right now is stupid simple, so it's possible there's some new option i need to enable.
<arrrghhh> ** server can't find google.com: REFUSED
<arrrghhh> :(
<arrrghhh> i don't get what i'm missing
<arrrghhh> would someone mind at least looking at my config?
<arrrghhh> http://pastebin.com/FMWW5fH3
<arrrghhh> this is the same 'working' config from 10.04.  why is it not working now?
<arrrghhh> stgraber, would you mind helping me?  i'm really not sure what would've changed so drastically that broke this...
<blackshirt> hello
<blackshirt> can someone guide me to setup postfix with ldap backend for virtual user and domains? or direct me to right guide :D
<arrrghhh> blackshirt, http://www.postfix.org/LDAP_README.html ?
<arrrghhh> hrm
<arrrghhh> it seems a lot of people are disabling dnsmasq on the desktop because it fails in the same way my server is failing.... :/
<arrrghhh> http://mark.orbum.net/2012/05/14/disabling-dnsmasq-as-your-local-dns-server-in-ubuntu/
<arrrghhh> http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=6&cad=rja&ved=0CE0QFjAF&url=http%3A%2F%2Faskubuntu.com%2Fquestions%2F131342%2Fproblems-with-dnsmasq-in-ubuntu-12-04&ei=6jc8UM-eLsaayQHx0oGQDw&usg=AFQjCNGcSsI_hs94hHc8KTXMvWe9Og4NAg&sig2=1I0CwEjCj_mafvKS38cJEQ
<arrrghhh> derp
<arrrghhh> http://askubuntu.com/questions/131342/problems-with-dnsmasq-in-ubuntu-12-04
<arrrghhh> wow.  well, that was it i guess.
<arrrghhh> i needed to add some 'server' lines...
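For reference, the fix from that askubuntu thread boils down to giving dnsmasq explicit upstream resolvers in dnsmasq.conf; a sketch (the upstream addresses here are examples, not taken from this conversation):

```
# forward non-local queries to these upstream resolvers
server=8.8.8.8
server=8.8.4.4
# answer the LAN domain from local data (DHCP leases, /etc/hosts) only
local=/localnet/
```

With resolvconf pointing resolv.conf at 127.0.0.1, dnsmasq can no longer learn its upstreams from resolv.conf the way it effectively did on 10.04, so they have to be given explicitly.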
<tgm4883> Sf82Tb77
<arrrghhh> whoops
<tgm4883> fun
 * arrrghhh has done it
<tgm4883> good thing I don't use the same password in multiple places
<arrrghhh> lol
<Vampy> does any one know if the Horde4 Guide on Ubuntu Wiki works for 12.04LTS as written?
<lickalott> gents, trying auto mount partitions for NFS shares via fstab.  If I leave "default" it won't let me write to the folder.  if I use (rw,user,no_subtree_check,async) the network connection times out.   Can anyone help?
<lickalott> anyone know of a way to see/monitor if someone is accessing your NFS shares.  *they are hosted over apache
<arrrghhh> lickalott, try Server:/share  /media/nfs  nfs  rsize=8192,wsize=8192,noexec,nosuid
<arrrghhh> 'Server' can be IP or hostname
<arrrghhh> /share is the location
<arrrghhh> on the server *
<osmosis> how can I get libboost1.37-dev  for precise?
<arrrghhh> /media/nas is the location on the client
<osmosis> this is empty,  https://launchpad.net/ubuntu/precise/amd64/libboost1.37-dev
<arrrghhh> osmosis, https://launchpad.net/ubuntu/+source/boost1.37/1.37.0-3ubuntu3 ?
<osmosis> cool, howd you find it?
<arrrghhh> er is that jaunty?
 * arrrghhh is confused
<osmosis> 12.04
<osmosis> that link only has a tar.gz....not a deb
<arrrghhh> no i know.  that link i sent tho
<osmosis> uhhh
<arrrghhh> yea that's the source.
<osmosis> yah, says jaunty
<arrrghhh> ze tarball
<arrrghhh> that's the newest i could find...
<arrrghhh> what do you need it for?
<lickalott> exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/media/files".
<lickalott>   Assuming default behaviour ('no_subtree_check').
<lickalott>   NOTE: this default has changed since nfs-utils version 1.0.x
<lickalott> i get that when I restart the service arrrghhh
<arrrghhh> lickalott, please ask a question, and pastebin multiple lines.
<osmosis> arrrghhh, its a compile dependency for nightly build of an app
<arrrghhh> i don't see any issues?
<lickalott> rog
<lickalott> sorry
<osmosis> arrrghhh, so is there a way to install the src as a deb?
<osmosis> or can I just rip a libboost 1.37 deb from an older release
<arrrghhh> osmosis, you can compile it
<arrrghhh> and that might or might not work... ha
<arrrghhh> lickalott, so are we talkin fstab or exports?
<lickalott> fstab
<lickalott> would exports be better?
<arrrghhh> you get that output from mount -a after modifying fstab?!?
<osmosis> so im just supposed to make install a tar.gz on my system?  feels archaic
<arrrghhh> well wait we need to establish what you want lickalott
<arrrghhh> osmosis, you're building for nightly stuff... might want to stick to something that's already working.
<osmosis> arrrghhh, only the nightly has the feature I need
<arrrghhh> otherwise, you're on bleeding edge stuff - so you've gotta do some leg work, if you want to be on bleeding edge.
<arrrghhh> exactly
<osmosis> maybe ill just go sleep in a capsule and wake up in 6 months
<arrrghhh> gotta be on bleeding edge?  gotta do some work.
<arrrghhh> mmmmk
<lickalott> you know what.... i'm an idiot
<lickalott> i've been messing with the wrong file
<arrrghhh> hah, it happens
<lickalott> here's my entries in exports -  *(rw,sync,no_root_squash)
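For the record, a complete /etc/exports entry names the exported path and an explicit subtree option (so exportfs stops warning); values here are illustrative, based on the paths discussed above:

```
# /etc/exports on the server: path, allowed clients, options
/media/files  *(rw,sync,no_root_squash,no_subtree_check)

# matching client-side /etc/fstab line
server:/media/files  /media/nfs  nfs  rw,rsize=8192,wsize=8192,noexec,nosuid  0  0
```

After editing /etc/exports, `sudo exportfs -ra` re-reads it without restarting the whole NFS service.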
<lickalott> just found out another tidbit.  I can copy from the laptop that i'm on now, but I get denied when i use my windows server.  Both mounted as "mapped network drives" the same way on each machien
<lickalott> *machine
<arrrghhh> windoze can't do nfs...
<lickalott> sure she can
<lickalott> windows 7 comes with an NFS client install.
<lickalott> you have to actually install it, it's not default
<arrrghhh> oh
<arrrghhh> ew.
<arrrghhh> i guess i've always stuck with samba on windoze.
<arrrghhh> can't help ya there... assuming you give the same permissions in the share on the server, it has to be how it's setup on the client.
<lickalott> i did too, but recently I've been messing around with a raspberry pi and media streams better to the pi over NFS.  Then i realized that NFS has faster transfer rates so i went with it.
<arrrghhh> NFS is a much better protocol, yes.
<lickalott> only problem is hanewin is only good for 30 days then it's 29.95
<lickalott> but i will be purchasing shortly.
<lickalott> i'll mess around more.
<lickalott> thanks for the assist arrrghhh.  appreciate your time!
<arrrghhh> np
<Kartagis> with xen, can I create the vm on a different partition?
<bencer> jdstrand: hi, we have reported LP: #1042260 did u have the chance to have a look at it?
<SpamapS> Kartagis: typically you attach some kind of block device to a VM
<SpamapS> Kartagis: that can be a physical device, an LVM volume.. or even a file on disk.
<Kartagis> thanks
<lynxman> morning o/
<jamespage> morning lynxman
<lynxman> jamespage: morning sir :)
<nandersson> Hi, I read on Phoronix that there are plans to drop the alternate CD. How would that affect unattended installations with debian-installer in future LTS? Like Ubuntu 14.04?
<xnox> nandersson: ubiquity supports unattended installations using debian-installer preseed files for a long time.
<xnox> nandersson: https://wiki.ubuntu.com/UbiquityAutomation
<xnox> nandersson: and since you are on the server channel: the server continues to use debian-installer as it always has.
<nandersson> xnox, Ok, thanks for the link!
<chmac> I've resized an LVM physical volume and the underlying mdX raid array. I now want to resize the two disk partitions. I tried deleting them and creating new partitions of a smaller size with parted, but the system wouldn't boot. How do I resize the partitions?
<xnox> chmac: did you restore partition table back? did you back it up?
<xnox> chmac: what did you use to resize?
<chmac> xnox: It's a brand new machine, so I just re-installed fresh.
<xnox> chmac: so no need to resize anymore?
<xnox> =)
<chmac> xnox: I used parted rm / mkpart to delete and create new partitions, but it warned that the partition was active, although the system was in rescue mode.
<chmac> xnox: Trouble is, the installer partitions the whole disk, even though I'm only using 80%, and I want the extra space unpartitioned because it's an SSD
<xnox> unmount filesystem, vgchange -a n $vgname, mdadm /dev/md$n --stop
<xnox> then resize
<chmac> The process was pvresize, mdadm --grow /dev/md5 --size 80G
<chmac> xnox: How do I achieve the resize step?
<xnox> oh ok.
<chmac> Ok, I figured the mdadm part wouldn't be running in rescue mode, but maybe I missed the --stop command
<xnox> chmac: surely you grow mdadm first, then grow the pv which is on top of mdadm
<AlphaWolf> I've been trying to remove some old partitions from a Windows installation to free up some space but when I've rebooted it will only see the original boot partition, no others (e.g. what "/" and "/swap" was on). I have gone into recovery mode from the install CD but could not mount a partition, and I'm now in the built-in shell. Is there much I can do, or should I just re-install at this point?
<chmac> xnox: I'm actually shrinking, so it's the other way round :-)
<xnox> ok.
<chmac> AlphaWolf: Do you have backups of all the data?
<chmac> AlphaWolf: There is something about Windows only seeing partitions which come *before* linux partitions on the disk. I've run across it on usb thumb drives where I created a fat partition and a luks encrypted partition.
<AlphaWolf> Any media I have is on another drive, but I do have data that I would rather not lose on the primary drive
<chmac> AlphaWolf: Then I'd suggest trying to at least recover before you re-install. Do you want the fastest solution or the greatest data retention?
<AlphaWolf> Data retention for now please
<chmac> xnox: After I've stopped the md5 array, should I be able to simply delete and recreate the partitions with `parted rm && parted mkpart` ?
<chmac> AlphaWolf: Are you still running windows on this drive?
<AlphaWolf> I believe it would be possible
<AlphaWolf> chmac: I will give it a try now and see
<chmac> AlphaWolf: I'm not sure what that means. Do you *want* to be able to run Windows?
<chmac> AlphaWolf: I'm trying to understand whether or not you need to keep Windows happy, or whether you're only thinking about Ubuntu (much simpler)
<AlphaWolf> chmac: I have moved from Windows to Ubuntu, but kept Windows just in case, but I'm now looking at removing it
<AlphaWolf> chmac: (it being Windows)
<AlphaWolf> I currently get "error: no such partition. grub rescue". I also have the install CD for Ubuntu and Windows available
<chmac> AlphaWolf: Ok, gotcha. So when you say that you can only access the boot partition, you mean from the ubuntu rescue cd?
<chmac> AlphaWolf: Can you pastebin the ouput of `fdisk -l` for that drive?
<chmac> AlphaWolf: I'm struggling to understand the specific details of what you've done, what you're trying to do, and what's not working.
<AlphaWolf> chmac: Give me a moment, I will give a little more information
<AlphaWolf> chmac: I started by using "sudo parted -l", which showed 6 partitions on that drive. 1 was marked as boot (and was the old Windows partition), 2 and 3 were created/used by Windows. 4 was empty space, 5 was (I think) Ubuntu's "/" and 6 was (again, I think) Ubuntu's "/swap". I tried removing 2, 3 and 4, but all I can "see" is 1 right now.
<chmac> AlphaWolf: Ok, so you deleted those partitions along the lines of `parted rm 2` and `parted rm 3` etc, is that correct?
<chmac> AlphaWolf: Do you have a record of where each partition started and stopped?
<AlphaWolf> chmac: No, I guess that would have been a good idea though
<chmac> AlphaWolf: Yeah, for future reference, it's helpful to run `sudo parted /dev/sda unit s print`
<chmac> That prints your partition table by sector, then copy / paste into a file or something, even a photo will do :-)
<chmac> AlphaWolf: Ok, so now you're in the rescue CD terminal, is that correct?
<chmac> AlphaWolf: What do you get now, if you run that command?
<AlphaWolf> chmac: When entering rescue mode I can see the internal drive and another external drive. The internal drive only shows "/dev/sda1", which was my boot drive, but cannot be mounted
<AlphaWolf> I can choose "Do not use a root file system"?
<chmac> AlphaWolf: Sounds good
<AlphaWolf> chmac: Partition Table: msdos
<AlphaWolf> chmac: Number: 1, Start: 1049kB, End: 106MB, Size: 105MB, Type: Primary, File System: ntfs, Flags: boot
<chmac> AlphaWolf: You can only see 1 partition?
<AlphaWolf> chmac: Previously there were 6 there, but now just the 1
<chmac> Ok
<chmac> AlphaWolf: When you issued the rm commands, did you repeat the print command after each one?
<chmac> AlphaWolf: So did you go `sudo parted print` then `sudo parted rm 2` then `sudo parted print` then `sudo parted rm 3` or just print, rm, rm, rm, rm?
<chmac> AlphaWolf: I'm guessing that your actual data is fine, the problem is, you deleted the partition by mistake.
<chmac> So to recover the data, you only need to find the start and end of the partition, recreate it, and you should be golden.
<AlphaWolf> chmac: I didn't use rm, rm, I used something else, but I did not use print, no. I checked it at the end but it only showed 1 partition
<AlphaWolf> chmac: Ok, hopefully I'll be able to do that :)
<chmac> AlphaWolf: What did you use to remove the partitions?
<chmac> AlphaWolf: Ok, so maybe the partition numbers changed as you deleted the intermediary partitions, and then you accidentally deleted the one you wanted to keep, because it came after the others.
<chmac> After you delete 2, 3 might become 2, depending on the circumstances I think, I'm not certain.
<AlphaWolf> chmac: That would make sense, although I did only delete 3 of the 6 partitions, but I see how it could have gone wrong
<chmac> AlphaWolf: I'm guessing that's what the issue is. On the plus side, Ubuntu will most likely boot right backup if you can figure out the partition layout that was in place.
<chmac> AlphaWolf: There are tools which will scan your disk and try to recreate your partition table. I don't remember the name off the top of my head, but I recommend some searching, I'm sure you'll find something.
<AlphaWolf> chmac: I'm trying to find the command I used. I do remember I got a list of commands, using "d [partition number]" to delete and then "w" to write the changes
<chmac> AlphaWolf: Were you using fdisk maybe?
<chmac> AlphaWolf: To be honest, it's not so important, the partitions are now gone!
<AlphaWolf> chmac: Fair enough, but yes, fdisk sounds right!
<chmac> AlphaWolf: Ok, so here's what it could be. If the first 1 or 2 or 3 were primary partitions, and then the rest were within an extended partition, you might have deleted the extended partition and therefore deleted all the contained logical partitions.
<AlphaWolf> chmac: I'll take the disk out and pop it in another PC and try and recover the partitions, thanks.
<AlphaWolf> chmac: Ok, that's exactly what I did now you say it :P
<chmac> AlphaWolf: You should be able to do that from the rescue system.
<chmac> AlphaWolf: https://duckduckgo.com/?q=ubuntu+recover+partition+table
<chmac> AlphaWolf: http://www.mohdshakir.net/2008/01/03/recover-lost-partition-table-using-ubuntu-live-cd-gpart
<AlphaWolf> chmac: I'll need a Live CD for that, right? That's not going to work in the built-in shell?
<chmac> AlphaWolf: Ahh, sorry, I thought you were running the live CD, but you're not, my bad.
<chmac> AlphaWolf: Are you running a shell from just the /boot partition?
<uvirtbot> New bug: #1042711 in php5 (main) "php: throw and catch within a destructor causes exception on-the-fly to be lost" [Undecided,New] https://launchpad.net/bugs/1042711
<chmac> AlphaWolf: I'm not familiar with that, but I'm guessing that yes, you'll need a rescue CD / USB device, or an alternate machine.
<AlphaWolf> chmac: When I enter rescue mode I cannot mount a disk, so I will create a Live CD and have a go at that tutorial, thanks
<chmac> AlphaWolf: Good luck, and when you get your data back, use this as a poignant reminder to improve your backups ;-)
<AlphaWolf> I'm moving my drives around so I can make backups, but it went wrong somewhere :P
<chmac> :-)
<chmac> Question: Is there an equivalent of /etc/init.d/lvm stop these days / on ubuntu?
<xnox> chmac: not really. What do you want to achieve?
<chmac> `mdadm --stop /dev/md5` says "Cannot get exclusive access to /dev/md5:Perhaps a running process, mounted filesystem or active volume group?"
<xnox> chmac: $ sudo vgchange -a n $vgroup
<xnox> no stop volume group
<chmac> 0 logical volume(s) in volume group "vg" now active
<xnox> good
<xnox> chmac: try mdadm again =)
<chmac> Ooh, now it works!
<chmac> xnox: I could have sworn I did that already, DOH!
<chmac> xnox: Thanks
<xnox> np
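Collecting the order xnox describes into one place (the VG and array names are the ones from this conversation; this is a sketch of the sequence, not something to paste onto a machine with a different layout):

```
# tear down the stack from the top: filesystem -> LVM -> mdadm
sudo umount /srv                  # 1. unmount any filesystems on the LVs (mountpoint invented)
sudo vgchange -a n vg             # 2. deactivate the volume group
sudo mdadm --stop /dev/md5        # 3. stop the RAID array
# the underlying partitions can now be changed with parted;
# afterwards reassemble in the reverse order
```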
<chmac> Do I need to worry about "Warning: The resulting partition is not properly aligned for best performance." on an SSD?
<patdk-wk> you should
<chmac> patdk-wk: I get the same error if I recreate the partitions that I had before...
<patdk-wk> how are you making them?
<chmac> patdk-wk: I'm guessing I want to align to the 512K block size of the drive, does that sound right?
<chmac> patdk-wk: `parted /dev/sdb mkpart 1234s 2123s`
<patdk-wk> 1234?
<chmac> patdk-wk: Just made those up
<patdk-wk> I'll make up some answers then :)
<chmac> actual is 21527100s 210270780s
<chmac> :-)
<patdk-wk> I believe parted uses 1mb alignment
<patdk-wk> so try making it divisible by 2048
<chmac> patdk-wk: I'm resizing partitions on a live system, but the root partition is probably misaligned, is it worth trying to fix that as well do you think?
<patdk-wk> hmm, you're going to need to actually move the physical data first
<chmac> patdk-wk: Right, which could be a PITA, although it's a fresh install, so no actual data
<chmac> As in, nothing I'd need to save, it can all be recreated.
<chmac> patdk-wk: I've read about how SSDs need to erase in blocks, usually 512K I think was the number, so I should probably align to that, does that sound correct?
<patdk-wk> yes and no
<patdk-wk> there is no filesystem that works on blocks that large
<patdk-wk> so there is no point in attempting to make sure it fits that
<patdk-wk> but, since ext* works on a 4k block, normally, it would be helpful to make sure none of the 4k blocks are split between two 512k blocks
<chmac> patdk-wk: Ahh yes, ok, that makes sense. So I want my 4k blocks to start on a 512k block, but it won't make a huge impact
<chmac> I'll probably leave our current data where it is, and try to get this partition laid out properly, it's lvm on mdadm anyway, so there's all kinds of alignment going on!
<chmac> patdk-wk: Thanks
<chmac> patdk-wk: I'm creating an extended partition and a logical partition within it, would you recommend pushing the logical partition down to 4k in from the beginning of the extended partition?
<chmac> Never mind, there's already data on the disk, I forgot about that, so I can't move the start of the partitions, I'll just leave them where they were.
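patdk-wk's divisible-by-2048 rule (1 MiB alignment in 512-byte sectors) is easy to check with shell arithmetic; using the actual start sector chmac quoted earlier (21527100s):

```shell
# a partition start is 1 MiB-aligned when it divides evenly by 2048
# 512-byte sectors; 21527100s is the start sector quoted above
start=21527100
echo $(( start % 2048 ))                  # prints 572, i.e. misaligned
# round up to the next 2048-sector boundary (assumes start is not
# already aligned, otherwise this overshoots by one boundary)
echo $(( (start / 2048 + 1) * 2048 ))     # prints 21528576
```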
<raub> I posted a question in launchpad and now realized it is attached to cups instead of libapache2-mod-auth-kerb. How do I move it to the proper location?
<soren> zul: Thanks for noting in the changelogs of the cloud archive backports why they're needed. That's very helpful.
<zul> soren:  no problme
<zul> soren: patches accepted for anything you seen
<soren> I have a few lined up.
<zul> soren: sweet
<AlphaWolf> chmac: I have run a GParted live CD but didn't find anything. I'm running it using the command line, but I doubt it'll find anything. I also tried a Windows program, EaseUS Partition Recovery (http://www.easeus.com/partition-recovery/download.htm) and that found the first couple of NTFS partitions. Would it help if I recover them and then try GParted again? Thanks for all your help!
<AlphaWolf> chmac: I'm just not sure if recovering the NTFS partitions could somehow make it harder to find the ext4 ones?
<chmac> AlphaWolf: As far as I'm aware, the type of partition isn't relevant.
<chmac> AlphaWolf: GParted won't recover partitions I don't think, I think you need a specific partition recovery tool for that.
<chmac> Something to scan the disk and try to find partitions
<chmac> There might be something funky about recovering logical partitions inside an extended partition, I'm not certain how that works.
<xnox> AlphaWolf: http://www.cgsecurity.org/wiki/TestDisk
<jibel> smoser, when do you plan an upload of cloud-init with the fix for bug 1042459 ?
<uvirtbot> Launchpad bug 1042459 in cloud-init "user creation happens after ssh configuration" [High,Fix committed] https://launchpad.net/bugs/1042459
<smoser> jibel, its up there. https://launchpad.net/ubuntu/+source/cloud-init
<jibel> smoser, ah thanks, I missed it.
<smoser> it didn't get a ubuntu bug task
<smoser> so it didn't get auto-closed on that non-existent task
<AlphaWolf>  xnox: Thanks, I'll give that a shot!
<AlphaWolf> xnox: Thank you so much, it seems to have found them instantly!
<AlphaWolf> So, I guess the next question is how can I remove some partitions (NTFS) and make my ext4 partitions larger?
<xnox> AlphaWolf: restore & backup your data first.
<xnox> AlphaWolf: then you can, almost do anything.
<xnox> AlphaWolf: extending partitions to the right is easy. moving the beginning of the partition is harder. You can do it with dd, but it's also easy to screw up and wipe your data
<AlphaWolf> xnox: Which is why we do backups before we move any data. How can I back up an ext4 partition? I'm guessing GParted will do that?
<xnox> AlphaWolf: no Gparted will not do that. Gparted modifies your HDD without promising anything. If Gparted fails to do something, you get to keep both pieces.
<xnox> AlphaWolf: https://help.ubuntu.com/community/BackupYourSystem
<xnox> AlphaWolf: use dd, rsync, tar, rsnapshot, whatever, to back up your data. Set up a similar test to what you are trying to do, with fake disks in a VM, and try to complete it there first.
<xnox> better safe than sorry.
<AlphaWolf> xnox: Can I boot into the drive from a VM on another computer? The computer this is running off does not have any HDs to store the backup on
<xnox> AlphaWolf: NFS or SSH is a better way to remotely access a data for backup purposes.
<xnox> AlphaWolf: read available options on the page I linked. It covers backups over the network.
<AlphaWolf> Ah, I didn't see the "remote" column, so I'll give that a crack. Thank you very much!
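Of the options on that BackupYourSystem page, tar is the most self-contained; a minimal sketch (paths invented, and run locally here rather than over the network):

```shell
# back up a directory into a compressed tarball, then list the
# archive to verify its contents; over the network the same tarball
# would be streamed through ssh to the remote host
mkdir -p /tmp/bk-demo/data
echo "important" > /tmp/bk-demo/data/notes.txt
tar czf /tmp/bk-demo/data.tar.gz -C /tmp/bk-demo data
tar tzf /tmp/bk-demo/data.tar.gz
```

The `-C` flag makes the archive paths relative (`data/notes.txt`) instead of absolute, which makes restoring elsewhere less error-prone.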
<uvirtbot> New bug: #1042856 in libapache-mod-auth-kerb (universe) "libapache2-mod-auth-kerb using krb5passwd and keepalive and credential delegation loses delegation after first request on connection" [Undecided,New] https://launchpad.net/bugs/1042856
<utlemming> smoser: looks like the quantal build that just finished didn't get the new cloud-init
<smoser> amd64 did
<utlemming> ami-198e3970
<utlemming> ?
<utlemming> I just launched that one and its showing the old code
<utlemming> wait...nevermind
<smoser> http://paste.ubuntu.com/1172266/
<utlemming> smoser: something's up with your MP changes: http://paste.ubuntu.com/1172268/
<smoser> suck. i
<smoser> i actually know that i fixed that last night.
<smoser> carp
<smoser> i did fix it
<smoser> and didnot commit
<smoser> http://paste.ubuntu.com/1172275/
<smoser> utlemming, so its less severe than before, buecase right now only ssh_import-id is busted
<smoser> but.. i'll fix and upload
<utlemming> right...I'll build a new one when it lands and try again
<smoser> uploaded
<Kartagis> my route table says static.97.44.9. under gateway. what is the IP here?
<chmac> Kartagis: I think you'll get your ip from `sudo ifconfig` rather than from `sudo route`
<guntbert> Kartagis: chmac: for seeing your ip config type     ip ad
<guntbert> chmac: no sudo needed here :)
<Psi-Jack> Hmm. Anyone here by chance ever used Samba4's new fangled ADS server? ;)
<chmac> I have an SSD partition which begins at sector 20482875s, does the 5 mean that the 4k filesystem blocks are misaligned with the SSD 4k pages?
<soren> adam_g: Do you guys just push to the OpenStack packaging branches directly or do you use Launchpad reviews?
<uvirtbot> New bug: #1042968 in asterisk (universe) "Russian core sounds must be packaged" [Undecided,New] https://launchpad.net/bugs/1042968
<adam_g> soren: which ones?
<Vampy> anyone here used turnkey linux?
<soren> adam_g: E.g. lp:~ubuntu-server-dev/python-swiftclient/quantal/
<adam_g> soren: yes, but those are mostly used for tracking work elsewhere these days
<TheLordOfTime> Vampy:  probably not the right place to ask about it, this is an Ubuntu channel
<adam_g> soren: the majority of the packaging work takes place in branches off lp:~openstack-ubuntu-testing, where changes go and get added to automated builds, then packaging gets pulled from there and uploaded into ubuntu, and (ideally) synced to ubuntu-server-dev
<Vampy> TheLordOfTime, wasnt sure as turnkey is ubuntu based
<adam_g> soren: changes usually sit as UNRELEASED in the -proposed packaging branches for a while between uploads
<TheLordOfTime> Vampy:  so's Mint.  but its not an official derivative, so its not supported in the Ubuntu channels
<TheLordOfTime> last i heard, at least...
<TheLordOfTime> so i assume the same would apply (the same applies to backtrack linux too)
<soren> adam_g: What purpose does it serve to sync things over to ubuntu-server-dev?
<Vampy> understood..thre was only a few (less then 5) in the turnkey chan so i thought maybe someone here might have soem input
<TheLordOfTime> Vampy:  you could try in #ubuntu-offtopic, or in ##linux
<adam_g> soren: thats a good question.
<soren> adam_g: Heh
<soren> adam_g: Ok, so ~ubuntu-openestack-testing is where all the action is?
<adam_g> soren: IIRC, originally it was meant to separate our testing efforts from the ubuntu branches. new packaging changes could be added to the testing branches, tested, merged into ubuntu-server-dev and new releases cut from there. never really ended up that way.
<soren> adam_g: I see mostly automated commits there... What are they based on?
<adam_g> soren: ideally merged into lp:~ubuntu-server-dev via review
<adam_g> soren: the flow for, say, folsom nova is like this:  i commit packaging fixes to lp:~openstack-ubuntu-testing/nova/quantal-folsom-proposed. during an automated build, the bot pulls lp:~openstack-ubuntu-testing/nova/quantal-folsom-proposed , merges it with lp:~openstack-ubuntu-testing/nova/quantal-folsom, adds a changelog entry with git commits in new package, builds and uploads to PPA
<adam_g> ugh lagged hotel wifi
<soren> adam_g: Hm.. It's not really sinking in here, I'm afraid :) Let's say you wanted to make a change to the packaging of e.g. python-swiftlient... What would you do?
<soren> adam_g: Oh, ok.
<adam_g> soren: lp:~openstack-ubuntu-testing/python-swiftclient/quantal-folsom-proposed
<soren> adam_g: Gotcha
<adam_g> soren: those changes also get synced back to the precise folsom branches, manually ATM
<adam_g> soren: some point soon, we're gonna reconfigure all of this so that there is one place to propose folsom nova changes, and they sync back to quantal and precise branches automatically
<soren> adam_g: Ah, I see you already fixed the first bug I was battling-
<soren> I was looking at the branches under ubuntu-server-dev.
<adam_g> soren: yippie. which one?
<soren> http://bazaar.launchpad.net/~openstack-ubuntu-testing/python-swiftclient/quantal-folsom-proposed/revision/12
<adam_g> ppa:openstack-ubuntu-testing/folsom-trunk-testing packaging should all have the most recent packaging updates from those branches, assuming they're still building okay
<adam_g> Provies, nice.
<soren> Yeah, I didn't want to say anything :)
<adam_g> soren: i startd this a few weeks ago and would like to expand it as we figure out how this is going to scale into the future: https://wiki.ubuntu.com/OpenStack/Packaging
<soren> adam_g: Cool.
<soren> adam_g: And in the openstack-ubuntu-testing team, do you just push changes to the -proposed branches, or do you you impose a formal review process?
<soren> I'm guessing the former.
<zul> for the cloud-archive stuff? its a formal review process
<adam_g> soren: yea. directly atm. hasn't really been much interest outside of me/chuck. often times a jenkins build will block on some packaging change, i commit and retrigger
<soren> adam_g, zul: You guys crack me up.
<adam_g> zul: i'm not talking about the cloud archive, just our general packaging work
<zul> adam_g: ah....
 * zul disapears for a while
<ChrisNZ> Wow, lots of linux people
<ChrisNZ> Would i be in the right place to ask a SAMBA question, sharing across subnets?
<ChrisNZ> hello?
<TheLordOfTime> patience
<ChrisNZ> ok lol
<ChrisNZ> while im being patient we could talk about life
<uvirtbot> New bug: #1043004 in lxc (universe) "--bindhome option should be on lxc-create, not on lxc-ubuntu" [Undecided,New] https://launchpad.net/bugs/1043004
<lifeless> ChrisNZ: this is a fine place to ask. in fact...
<lifeless> !ask | ChrisNZ
<ubottu> ChrisNZ: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<uvirtbot> New bug: #1043010 in samba (main) "Update to 3.6.7" [Wishlist,Triaged] https://launchpad.net/bugs/1043010
<lickalott> guys...i'm 10 google pages into vsftpd and cannot find out how to change the location of the default ftp path.  can someone point me in the right direction?
#ubuntu-server 2012-08-29
<uvirtbot> New bug: #1043052 in lxc (universe) "[FFE] add pre-mount container startup hook" [Medium,New] https://launchpad.net/bugs/1043052
<uvirtbot> New bug: #1042864 in maas "Failure to enlist due to preseed errors" [Critical,In progress] https://launchpad.net/bugs/1042864
<babyface> utlemming, jobs precise-server-ec2-daily and quantal-server-ec2-daily are failing
<uvirtbot> New bug: #1042868 in maas "MAAS doesn't have permissions to write in /etc/bind/maas" [Undecided,New] https://launchpad.net/bugs/1042868
<uvirtbot> New bug: #1043121 in maas (main) "deployed node cannot be looked up with dnsmasq on MAAS" [Undecided,New] https://launchpad.net/bugs/1043121
<jamespage> babyface, would you be interested in learning how to diagnose failures with the ec2 tests?
<babyface> jamespage, I'm afraid I don't have much time for that, since I have to handle all the failures in the iso and upgrade tests on quantal and precise. but anyway, if you have any material (wikis, docs...) on the ec2 tests, you can share it with me. I want to learn more when I'm not so busy
<jamespage> babyface, hmm - well I took a look at the test failures you reported above
<jamespage> its only around 2 failures of the 30 or so test cases run
<jamespage> some are related to not being able to access launchpad for some reason
<jamespage> and the others are ec2 related - i.e. something odd going on the in the ec2 infrastructure itself
<babyface> jamespage: you mean this one? bzr: ERROR: Invalid http response for https://xmlrpc.launchpad.net/bazaar/: Unable to handle http code 502: Bad Gateway. OK, that's a test environment issue; next time I meet this kind of error, I will wait for the result of the next run
<jamespage> yes - we get that sort of transient issue from time to time
<jamespage> generally if the majority of tests are passing and you see one or two failures its one of these two issues
<jamespage> trends are more interesting - i.e. a single arch failing across the board, a single instance type failing across the board
<Kartagis> do I have to reboot my computer to be able to install a vm? I've installed xen
<babyface> jamespage, ack. I will ignore test environment issues from now on and report only the real failures
<soren> zul, adam_g, Daviey: If you're interested, I have branches of your Quantal packaging branches here: lp:~cisco-openstack/$project/folsom  I'll be pushing my changes there, and probably pulling from your Quantal branches regularly. If you want to pull from my branches too that'd be cool. I don't see much reason for divergence.
<Hexch> Hi I have two ubuntu x64 10.04 server where I have attached an qnap - freenas 8.2 as nas.
<Hexch> On the one server I see lots of nfs not responding
<Hexch> The other one works great
<Hexch> what could be wrong?
<MatBoy> what shall I do these days for Ubuntu... install open-vm-tools, which includes the packages for the guest, or just add the vmware repo ?
<AlphaWolf> I'm trying to extend the partition my install is on, and I messed it up last time. I've tried GParted but I get the error "Cannot have a partition outside the disk!". Here's my current output: http://pastebin.com/A95tkvzw I only need the data on the last 2 partitions (5th is / and 6th is /swap). Also, I have full backups now :)
 * xnox ponders to suggest wipe them all and restore from backup.
<xnox> AlphaWolf: disks are linear. You now have "small partition"-> loads of empty space -> "partition" -> "linux" -> "swap"
<xnox> AlphaWolf: what you want in gparted is to Move the partition from current location to the "loads of empty space"
<xnox> it will only be possible to extend it up to the "partition"
<xnox> unless you create LVM out of "Loads of empty space"
<xnox> move your install there
<xnox> and then add the space where "linux" is currently on to the LVM.
<xnox> to use both for the LVM
<xnox> please note your install will not boot from LVM, unless you chroot into it and modify it to support LVM.
<AlphaWolf> xnox: If I were to just wipe it, is it best to do a re-install (re-formatting during install) and then restore, or wipe and restore? It sounds like that's harder to mess up!
<xnox> AlphaWolf: no, because wiping everything will remove the other W95 partition you preserved.
<xnox> AlphaWolf: a single partition cannot "jump" a partition in the middle of the disk.
<xnox> And /dev/sda4       251138097   312590879    30726391+   f  W95 Ext'd (LBA)
<xnox> is in the middle of nowhere.
<xnox> What is /dev/sda4 ? is it actually real
<xnox> because it overlaps with /dev/sda5
<xnox> I'd delete /dev/sda4 and then move the linux partition with Gparted or simply dd to the beginning of the free space
<xnox> and extended up-to swap.
<xnox> then resize filesystem and you are done.
<AlphaWolf> I'm not too sure, it was created when I installed Ubuntu Server.
<AlphaWolf> I know it's an "extended" partition
<AlphaWolf> GParted does not show my partition table though, which I guess is to do with it being incorrect
<soren> xnox: sda4 is an extended partition. DOS partition tables only allow 4 partitions. To overcome this limitation, a primary partition can be designated an "extended" partition which in turn can be split into a bunch of "logical" partitions.
<soren> Logical partitions are the ones numbered 5 and up on Linux.
<soren> fdisk always shows it this way.
<soren> (i.e. the extended partition spans the logical ones)
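soren's point can be demonstrated without touching a real disk. A minimal sketch using util-linux's `sfdisk` (2.26 or later) on a throwaway image file; the sizes and layout here are invented for illustration:

```shell
# Build a DOS partition table on a sparse image: one primary partition,
# then an extended partition (type 5) holding two logical partitions,
# which get numbered 5 and up, exactly as soren describes.
truncate -s 100M disk.img
sfdisk disk.img <<'EOF'
label: dos
,32768,83
,,5
,16384,83
,,83
EOF
# The listing shows the extended partition spanning the logicals.
sfdisk -l disk.img
```

In the listing, partition 2 (type Extended) covers the same sector range that partitions 5 and 6 sit inside, which is exactly what AlphaWolf's `sda4` was doing around `sda5`.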
<xnox> soren: aha. /me uses GPT for the past 6 years.
<xnox> soren: AlphaWolf: but you have deleted two primary partitions.
<xnox> soren: can you convert extended into primary?
<soren> xnox: You trend setter, you.
<soren> xnox: I think you can make an extended partition primary instead, but it's a wee bit delicate.
<soren> (like everything else that involves fiddling with your partition table)
<xnox> soren: it's fine AlphaWolf has backups =)
<soren> Just delete the logical partitions and the extended one and make a new primary one that has the same start and end as the logical one it's meant to replace had.
<soren> I'm not sure why, though?
<AlphaWolf> So basically extend partition 1 and then move the data to it?
<sazawal> Can I make bottom panel in gnome 3 shell to unhide when a notification is received? Say a notification for message received in pidgin or xchat?
<_ruben> err, how's that related to ubuntu-server?
<zul> soren: cool ill have a look today
<jibel> how can I update grub configuration when provisioning local cloud images with cloud-init ? I want to add an argument to the default kernel command line
<Daviey> soren: How do you feel about pushing back to the core branches?
<soren> Daviey: As long as everyone's happy with the fact that I'm not testing this at all on Quantal, sure.
<soren> Daviey: I even applied for membership yesterday.
<Daviey> soren: hah.. well.. the ~ubuntu-server-dev branches should be the source of precise and quantal.
<soren> Daviey: They're not.
<soren> Daviey: Talked to adam_g last night.
<soren> See scrollback.
<Daviey> soren: hmm, i thought that was the direction we are going in?
<Daviey> .. reading
<Daviey> soren: gah, yes.. sorry i mixed ~ubuntu-server-dev with ~openstack-ubuntu-testing,
<soren> Whatever direction you're going, the current state of affairs is that the -proposed under the testing team is where stuff happens.
<soren> If you want to keep that team tightly controlled, that's fine. You can pull from my branches at your own pace.
<soren> I have a shell script that does the same in the opposite direction.
<soren> It's a 15 second task each day. Meh.
<Daviey> soren: I'm quite keen on one place being the source of goodness, with specific changes not suitable for Ubuntu being separate.
<Daviey> Ie, your tree only has the minimal magic you need.
<Daviey> (Which IMO, should be less work for you long term anyway)
<soren> ATM there's *no* diff. I don't really expect there to be.
<Daviey> (and it's good for the soul)
<Daviey> rocking
<soren> I just need to be on top of the flow of updates, that's all.
<soren> So I'll be keeping my own branches, but I'm fine keeping them in sync.
<Daviey> ok, sounds good.
<uvirtbot> New bug: #1043292 in puppet (main) "Hung agent processes" [Undecided,New] https://launchpad.net/bugs/1043292
<Daviey> jamespage: hey.. can you comment on what it is asserted against? https://jenkins.qa.ubuntu.com/view/Quantal/view/ISO%20Testing%20Dashboard/job/quantal-server-amd64_minimal-virtual/111/testReport/test/MinimalVirtualTest/testInstallSize/ kkthnx
<Daviey> max_install_size == ?
<jamespage> Daviey: https://jenkins.qa.ubuntu.com/view/Quantal/view/ISO%20Testing%20Dashboard/job/quantal-server-amd64_minimal-virtual/111/artifact/tests/test_cases/minimal-virtual/test/*view*/
<jamespage> tells me that max_install_size on i386 is 668000 with some extra headroom for amd64 (due to multiarch)
<jamespage> Daviey, units are 1k blocks
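For readers following along, jamespage's threshold in more familiar units (simple arithmetic, nothing from the log beyond the 668000 figure):

```shell
# 668000 blocks of 1 KiB, converted to MiB (integer division)
echo "$(( 668000 / 1024 )) MiB"
# → 652 MiB
```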
<Daviey> jamespage: mind if i add a logging output saying what the tested size is? :)
<jamespage> Daviey, feel free
<jamespage> are we actually going to fix that or are we dropping minimal-virtual
<jamespage> ?
<Daviey> jamespage: well, smoser seems to have stolen the task of fixing it.. and it seems it can be fixed
<smoser> i have stolen no such tasks
<smoser> if i'm going to steal something its going to be something i *want*
<Daviey> rocking.
<smoser> so it seems that this is definitely fallout of the kernel removal of -virtual
<smoser> but it really seems to me that it is test case failure
<smoser> as the kernel packages seem to me to be correct
<smoser> linux-virtual gets you a small /lib/modules (22M)
<smoser> the cloud-images get this right.
<smoser> linux-generic gets you a big /lib/modules.
<smoser> i'm not sure how we're selecting the kernel in that test, but that would seem to me to need updating.
<AlphaWolf> Is it a good idea to store temp downloads in the /tmp folder? I assume there's some sort of garbage collection so I might lose files?
<sagaci> AlphaWolf, as long as you don't need the files after reboot
<soren> zul, adam_g: Just a heads up: I've pushed a bunch of changes to all of your -proposed branches. None of them should affect the built packages, but just fixes divergence in the packaging, enabling test suites, etc.
<zul> soren: coolio
<soren> I hate version control. It'll forever remember my poor grammar. Gah.
<ogra_> ahah, same here
<ogra_> though i hate *all changelogs* not vcs ones specifically :)
<mojolive> hi, I'm having a serious issue. We were having some performance issues with our RAID-5 set, rebooted the server and now all disks are spares (minus 1 that apparently died on the reboot). Any ideas?
<g0bl1n> hi, what's the channel for ubuntu in the cloud ? AWS
<g0bl1n> I believe there's one...
<g0bl1n> join ##aws
<g0bl1n> there ;)
<zul> hallyn: ping
<hallyn> zul: .
<zul> hallyn: so 0.10.0 is out but im going to wait until 0.10.1 is out to ask for a FFE
<hallyn> zul: ok
<hallyn> zul: note, the 0.9.13 is serving us well - it fixed hggdh's bug of multiple VMstarts in parallel failing
<TheLordOfTime> anyone know where i can find pending SRU stuff for a given bug (and the progress on said SRU(s) )
<zul> hallyn: sweet...yeah i think 0.10 will be easier to maintain in the long run..
<mojolive> g0bl1n: #aws is generic amazon web services, but if it works for you ;)
<g0bl1n> mojolive: it did :-)
<mojolive> I have been having some issues with iscsi on the 3.2 kernel so I went to 3.5 (from mainline repository)
<mojolive> 3.5.0 was fine, 3.5.2 was fine, now on 3.5.3 and it doesn't seem to create the configfs for target. any ideas? Most scripts that restore the config bork out on /sys/kernel/config/target (folder) missing
<uvirtbot> New bug: #1043437 in nova (main) "multiple login forms / distored web-page when trying to switch projects" [Undecided,New] https://launchpad.net/bugs/1043437
<Psi-Jack> Hmmm. So I have this server that is throwing me a lot of "Resource temporarily unavailable" messages from a daemon process manager called daemon. Seems that this issue only occurs on the actual user running the process, so I'm starting to think it may be some kind of resource limits.
<chmac> How do I undo the effects of `sudo passwd www-data` ?
<Psi-Jack> vipw -s, and set the password to *
<chmac> Psi-Jack: Ok, thanks, will check that out.
<chmac> RTFM, man passwd reveals `sudo passwd -d www-data` :-)
<Psi-Jack> Not quite the same thing.
<chmac> Psi-Jack: Ahh, you're quite right, it sets a blank password in /etc/shadow, not *
<chmac> Psi-Jack: What's the difference in practice?
<Psi-Jack> Precisely.
<Psi-Jack> Blank password = no password. * == disabled, no password will ever work, period.
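Psi-Jack's distinction maps directly onto the second field of shadow(5). A sketch using a throwaway file (usernames invented); on a real system you would edit /etc/shadow via `vipw -s` as he says, or run something like `sudo usermod -p '*' www-data`:

```shell
# Second field of a shadow entry:
#   empty -> no password required at all (what `passwd -d` leaves behind)
#   '*'   -> no hash will ever match, so password auth is disabled
printf 'blankpw::15000:0:99999:7:::\n'   > shadow.demo
printf 'lockedpw:*:15000:0:99999:7:::\n' >> shadow.demo
awk -F: '{ print $1, ($2 == "" ? "NO PASSWORD" : ($2 == "*" ? "DISABLED" : "hashed")) }' shadow.demo
# → blankpw NO PASSWORD
# → lockedpw DISABLED
```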
<chmac> Psi-Jack: Well I'm glad I asked, not sure I'd ever have found vipw otherwise :-)
<chmac> `ssh www-data@host` works if I set a password, but it's not accepting the key in /var/www/.ssh/authorized_keys
<chmac> Permissions are 700 on /var/www/.ssh/ and 600 on /var/www/.ssh/authorized_keys
<chmac> Got it, syslog, chmod g-w /var/www/... :-(
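What chmac hit is sshd's StrictModes check: a group-writable home directory makes sshd ignore authorized_keys, and syslog is indeed where it complains. A sketch on a throwaway directory standing in for /var/www:

```shell
mkdir -p wwwhome/.ssh
touch wwwhome/.ssh/authorized_keys
chmod g+w wwwhome                       # the state that broke key auth
chmod g-w wwwhome                       # the fix from the log
chmod 700 wwwhome/.ssh                  # sshd expects these to be private
chmod 600 wwwhome/.ssh/authorized_keys
stat -c '%a %n' wwwhome/.ssh wwwhome/.ssh/authorized_keys
# → 700 wwwhome/.ssh
# → 600 wwwhome/.ssh/authorized_keys
```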
<chmac> Psi-Jack: Thanks for your help
<uvirtbot> New bug: #1043448 in amavisd-new (main) "incorrect NEWS file packaged" [Undecided,New] https://launchpad.net/bugs/1043448
<jkyle> is it possible to tell dpkg to look for dependencies in the current directory and install them if needed?
<jpds> jkyle: No, you'll need to just add the packages to the dpkg -i command.
<jgcampbell300> !juju
<ubottu> Juju is a open source devops platform created to allow rapid deployment of applications in the cloud
<jgcampbell300> !juju channel
<Psi-Jack> Heh
<Psi-Jack> Oh god.
 * Psi-Jack wonders why he's talking to himself.
<jgcampbell300> hello
<Psi-Jack> I effing HATE that term that's so meaningless, "in the cloud"
<jgcampbell300> you know i hate it to ... but im trying to learn
<jgcampbell300> and its kicking my ass
<Daviey> zul: hey, any progress on essex->folsom on precise upgrade testing ?
<zul> Daviey: just debugging my script
<Daviey> coolio
<jgcampbell300> does anyone here know where the configs for MAAS are handeled
<cr3> has anyone tried deploying the current quantal server image with MaaS? if so, I suspect it should be using the new filesystem.squashfs for super fast deployment, right?
<jgcampbell300> cr3: i am having a hard time just getting my system built lol ... do you know where maas leaves its configs for dhcp and dns ?
<cr3> jgcampbell300: I believe MaaS provisions those with dnsmasq, so probably its config is where you want to look
<jgcampbell300> cr3: ok thank you
<jgcampbell300> wow i need to find a good uptodate guide to install all the latest packages to build a web hosting cloud
<jgcampbell300> ok ... i checked /etc/dnsmasq.conf and it says addn-hosts = /var/lib/cobbler/cobbler_hosts ... so i checked that file and it's empty .... i wonder if i have finally found the place i can add in dns name servers
<smoser> hallyn, ping
 * Daviey eww's at SpamapS.. help2man in packaging.
<smoser> hallyn, http://paste.ubuntu.com/1174511/
<smoser> maybe stgraber would know.
<smoser> what am i doing there that is stupid
<smoser> system (remote system) goes to lunch when i do that.
<smoser> by goes to lunch, i mean network connection dies
<smoser> comon... someone help me
<soren> Daviey: The idea is fine... I just wish help2man's output was nicer.
<smoser> there, that guy can help.
<smoser> soren, what am i doing wrong.
<soren> Man, where to start?
<soren> :p
<smoser> thank you. thank you very much.
<soren> Does it disappear immediately?
<soren> As soon as kvm starts?
<soren> Or does it happen after a little bit (i.e. likely when the guest starts booting and fiddling with networking)
<smoser> the last thing that gets printed is the brctl line
<smoser> i dont even see the kvm echo
<smoser> but if i comment out 'kvm' everything is fine.
<Daviey> soren: No, if i wanted the same content in --help as i did man.. i'd just --help.
<Daviey> soren: using it as a base, sure.
<soren> smoser: Interesting.
<soren> smoser: Where is this server?
<Troy^> does anyone have any experience with streaming 1080p files from a weird file server to a wireless comp/laptop on 802.11n?
<Troy^> wired*
<soren> smoser: In Canonical's data centre?
<smoser> soren, on a canonistack instance
<soren> smoser: Oh.
<soren> That's why.
<soren> The bridge probably grabs the tap device's mac.
<soren> Try hardcoding the bridge's mac to match eth0's.
<soren> smoser: I think adding "bridge_hw 00:11:22:33:44" or whatever to /e/n/interfaces should do the trick.
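A sketch of the /etc/network/interfaces stanza soren is suggesting (bridge name, addresses and the DHCP choice are all illustrative; `bridge_hw` is a bridge-utils ifupdown option and takes a full six-octet MAC):

```
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_hw 00:11:22:33:44:55    # pin br0's MAC to eth0's real address
```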
<smoser> http://paste.ubuntu.com/1174543/
<smoser> ?
<stgraber> smoser: it's likely that your bridge will change mac address when doing that
<stgraber> smoser: which may explain what you're seeing
<smoser> right
<smoser> i think i might be just doing osmething stupid
<smoser> it might be better to just put an lxc instance on the lxcbr0 and then try to kvm off that.
<smoser> rather than having the "real eth0" on a bridge.
<smoser> hard coding the MAC didn't seem to have an effect
<hallyn> smoser: eth0 is on br0?  and that's done right?
<smoser> it was.
<hallyn> heh just noticed the comment.
<hallyn> that green is too 'ignore me'-ish
<hallyn> smoser: wondering whether your instance is behind some 'smart' switch which says "you're trying dhcp with >1 mac, I'm shutting your port down"
<soren> smoser: Don't you have VNC access?
<hallyn> have you tried it on a local kvm instance?
<smoser> hallyn, after lxc-create -t ... and lxc-start, how do i get the IP address of the system ?
<smoser> and what is the escape key after i did 'lxc-console'
<hallyn> smoser: after lxc-console, escape with 'ctrl-a q'
<hallyn> (ctrl-a ctrl-a q if in screen)
<hallyn> smoser: if you're on quantal, you can get the ip address using:
<hallyn> sudo lxc-attach -s NETWORK -n q1 -- /sbin/ifconfig
<hallyn> but, i thought lxc-ip was supposed to be shipped with lxc now
<hallyn> stgraber: ^ ?
<stgraber> hallyn: nope, lxc-net was removed and replace by "ip netns"
<stgraber> *replaced
<hallyn> smoser: so i did:
<hallyn> lxc-ps -n q1 (to get the pid of the container's init)
<hallyn> mkdir -p /var/run/netns
<hallyn> ln -s /proc/$containerinit/ns/net /var/run/netns/q1
<hallyn> sudo ip netns exec q1 ifconfig -a
<hallyn> $profit
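The moving part in hallyn's recipe is that /proc/&lt;pid&gt;/ns/net names a process's network namespace, and `ip netns` simply looks namespaces up by the symlinks under /var/run/netns. A sketch of that mechanism using the current shell as a stand-in for the container's init (no root or container needed; the name `q1` matches the log):

```shell
pid=$$                                    # in the log: the container init's pid from lxc-ps
mkdir -p ns-demo                          # in the log: /var/run/netns
ln -sf "/proc/$pid/ns/net" ns-demo/q1     # register the namespace under a name
readlink ns-demo/q1                       # `ip netns exec q1 ...` would now enter it
```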
<Daviey> cr3: hey
<Daviey> roaksoax: can you confirm bug 1042906 for cr3 please?
<uvirtbot> Launchpad bug 1042906 in live-installer "live-installer/net-image cannot fetch-url to unexisting /tmp/live-installer directory" [Undecided,New] https://launchpad.net/bugs/1042906
<cr3> Daviey, roaksoax: hi folks, you guys might like to know about bug #1042906. the workaround for now is simply to create /tmp/live-installer in the early_command of the preseed
<cr3> roaksoax: if you can confirm you've observed the same problem, that would be a good start but there's more
<uvirtbot> New bug: #917615 in nova "Inappropriate exception handling on kvm live/block migration" [Undecided,Confirmed] https://launchpad.net/bugs/917615
<uvirtbot> New bug: #1038266 in nova "Deleted floating ips can cause instance delete to fail" [Undecided,Confirmed] https://launchpad.net/bugs/1038266
<riz0n> Hey guys I have a question about Ubuntu Server and Postfix. In my main.cf file, I have home_mailbox = Maildir/ but when messages come in, they go to mbox unless I comment the mailbox_command = procmail -a "$EXTENSION" line out. Do I need the mailbox_command line in this file? If so, what should I change about this to make it go to Maildir/ instead of mbox? Thanks.
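For riz0n's question: in Postfix, `mailbox_command` takes precedence over `home_mailbox`, and procmail falls back to mbox unless told otherwise (a trailing slash on the delivery target means Maildir). A hedged sketch of one common arrangement, not taken from the log:

```
# /etc/postfix/main.cf
home_mailbox = Maildir/
# Keep procmail only if you need its filtering; otherwise comment this out
# and home_mailbox takes effect:
mailbox_command = procmail -a "$EXTENSION"

# /etc/procmailrc -- make procmail deliver to Maildir too
DEFAULT=$HOME/Maildir/
```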
<Daviey> cr3: If there is no response, i will validate.. and work on it tomorrow.
<cr3> roaksoax: once you actually work around to downloading the filesystem.squashfs, the installation proceeds nicely until the end. but, when the system reboots, there's no kernel in /boot/vmlinuz-* so the system just boots into memtest86 because grub finds no other sensible boot option
<cr3> Daviey: the installation goes blazingly fast though, so it's a well worth investment of time
<roaksoax> Daviey: confirm
<cr3> roaksoax: yay! what have you done so far in an attempt to get support for the filesystem.squashfs file?
<roaksoax> cr3: today was just starting to look at it, 4 months ago i got it working with cjwatson's help
<cr3> roaksoax: so there's hope :) any hints I might try to get this working? for example, did you encounter the problem with booting into memtest86 when you last worked on the problem?
<roaksoax> cr3: nope, it was pretty simple but tricky, and unfortunately i lost all the changes i had made, so I'm trying to figure it out all over again
<cr3> roaksoax: I'm working in the same direction then, so maybe we can keep each other posted on our progress as cjwatson is currently on holidays :)
<roaksoax> cr3: alright cool :)
<adam_g> roaksoax: where in cobbler or preseed can i set the kernel parameter that will be used on first boot after installation?
<roaksoax> adam_g: let me check
<roaksoax> adam_g: i believe it is this: d-i     debian-installer/add-kernel-opts        string $kernel_options_post
<adam_g> roaksoax: sweet, thanks
<cr3> roaksoax: I need to jet but I haven't found anything significant yet. by the way, when I said that there was no kernel installed under /boot/vmlinuz* I didn't mention that I noticed the linux-image-*-generic package installed: chroot /target dpkg -l 'linux-image-*' returned something starting with 'ii '
<arrrghhh> hey all.  anyone use PS3MediaServer?  I'm having an issue with it, after i upgraded my server and subsequently PS3MediaServer, and now it won't start with the upstart job.  It seems eventually it starts OK if I  open it manually in a screen session or something
<arrrghhh> bbl....
#ubuntu-server 2012-08-30
<drag0nius> hello
<drag0nius> anyone knows some good ajax irc?
<drag0nius> *client
<uvirtbot> New bug: #967984 in iscsitarget (universe) "iSCSI target administration utility (ietadm) not detecting inactive sessions" [Undecided,Confirmed] https://launchpad.net/bugs/967984
<uvirtbot> New bug: #1010854 in linux (main) "linux-crashdump not producing coredump nor rebooting" [Medium,Confirmed] https://launchpad.net/bugs/1010854
<uvirtbot> New bug: #1043582 in lxc (universe) "lxc-create -t ubuntu-cloud --userdata ignores userdata" [Medium,Triaged] https://launchpad.net/bugs/1043582
<arrrghhh> back.  anyone use PS3MediaServer?  trying to get the upstart script to work, and I don't quite understand it.
<smoser> hallyn around?
<smoser> maybe stgraber is, he probably knows.
<arrrghhh> smoser, are you talking to me...?
<smoser> either one. how would i start dnsmasq that lxc runs to specify 'dhcp-boot=pxelinux.0,hostname,ip.address'
<smoser> arrrghhh, no, was talking to stgraber but yeah, did typo
<luigy> anyone can help w/ iptables and openvpn.  I can successfully connect, but need further configurations to be able to access my NFSshare which is hosted on the same machine as my openvpn server
<smoser> it looks like the answer is 'edit /etc/init/lxc.conf'
<arrrghhh> how about help troubleshooting an upstart script in general?  i can start the program by hand, but the upstart script just says "ps3mediaserver stop/pre-start, process 14131" - and there is no pid 14313
<arrrghhh> er
<arrrghhh> 14131.   either way.
<uvirtbot> New bug: #1043588 in lxc (universe) "sudo restart lxc-net does not kill/restart dnsmasq" [Undecided,New] https://launchpad.net/bugs/1043588
<hallyn> lastlog hallyn 3
<drag0nius> hmm
<drag0nius> everything on my server is running extremely slow
<drag0nius> got 2gb ram c2d 2.4 ghz
<drag0nius> usage is ~5% cpu 400 mb ram, but everything is like running on some overloaded machine
<drag0nius> any ideas?
<lickalott> run top and see whats eating resources
<drag0nius> the thing is
<drag0nius> there isnt anything
<drag0nius> htop tells me usages is ~5% on cpu and ~300 mb on ram
<lickalott> there isn't any processes running when you do top
<drag0nius> i mean nothing is eating much resources
<lickalott> ahh
<lickalott> maybe it's network related?  are you running any network intensive apps
<drag0nius> dont think so
<drag0nius> and im also running it on lan
<lickalott> no znc/bncs running
<drag0nius> what?
<lickalott> k
<lickalott> nvm then
<lickalott> whats your uptime?
<drag0nius> i restarted like hour ago
<lickalott> hrmmm
<lickalott> netstat -a | grep 80  or any other ports that you have open to apps (ftp, telnet, etc...)
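A narrower check than grepping `netstat -a` (where "80" also matches addresses and unrelated ports) is to list only listening sockets; `ss` ships with iproute on Ubuntu:

```shell
# TCP sockets in LISTEN state, numeric ports, no DNS lookups
ss -tln
# netstat's equivalent flags: netstat -tln
```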
<drag0nius> also when i log with putty
<drag0nius> theres like 5 sec delay after entering username and receiving password propmpt
<lickalott> is that what you're basing your opinion off of?  or is it slow on the actual machine too?
<lickalott> mine slows down also, at times, when using putty
<drag0nius> i recently installed ajax app
<drag0nius> and its extremely slow
<drag0nius> like responding to action after few sec
<lickalott> are you scripting or using java intensive apps?
<drag0nius> i wrote java app using vaadin
<drag0nius> and another one is just some irc client for web
<drag0nius> sec
<lickalott> java and ajax are resource hogs (usually)
<drag0nius> try http://d3calc.drag0nius.pl/
<drag0nius> friend told me its extremely slow through internet, locally its simply not as fast as it could be
<drag0nius> first load takes lots of time afaik
<drag0nius> im just out of ideas
<drag0nius> if you entered it nothing unusual happened
<lickalott> yeah that took 4-EVA to load
<drag0nius> just had around 1 mb uploaded
<drag0nius> min ago
<drag0nius> so out of ideas?
<drag0nius> seems like it tries to use least resources or something
<drag0nius> its lazy xD
<lickalott> lol
<lickalott> have any ram laying around?  maybe try to feed the beast a little more
<drag0nius> it has 2gb of some very fast ram
<drag0nius> uses just 320 mb
<drag0nius> according to htop
<drag0nius> im using server also as a router directly connected to modem
<drag0nius> pings are fine and everything
<drag0nius> just stuff running directly on it
<lickalott> it is recent?
<drag0nius> what you mean?
<drag0nius> parts, applications?
<drag0nius> maybe you know some reasonable web irc client with logging in?
<lickalott> i mean is it recent that it started running slow.
<lickalott> what do you mean "web irc client with logging in?"
<drag0nius> irc client running on ajax preferably
<drag0nius> with user accounts, so it would remember history etc
<lickalott> i don't.  only 3rd party irc client I know of that's not mainstream is written in python
<drag0nius> well i want something that does not require anything but browser
<lickalott> mibbitt
<drag0nius> does it allow any kind of irc server?
<drag0nius> from first glance looked like connecting to just theirs
<lickalott> never used it.  it's the only one I know of that's web based
<lickalott> see if they have an api and reverse engineer that shit for java...LOL
<lickalott> wondering if anyone has seen this issue before:  I have 3 drives mounted through fstab and shared out via NFS through exports.  Im also sharing one of the drive out via apache.  After the apache installation and configuration, I can't "map network drive"(nfs) the one drive that is shared out on apache
<drag0nius> yeah, mibbit is banned from both servers i use
<Rask> If this is the wrong channel in which to ask this, feel free to redirect me... My experience level is "enough to be dangerous".  :)  I'm trying to recover some data off one drive out of a RAID 1 array using a USB-SATA adapter.  Ubuntu sees it and fdisk -l shows that it has three "Linux raid autodetect" partitions on it.  How would I go about mounting this-- and indeed, is it doable?
<lickalott> cd /dev
<lickalott> see if you have sdb1 (2, 3) or sdc1 (2, 3) available
<lickalott> then just try mounting them to a folder
<lickalott> sudo mount /dev/sdb1 ~/test
<Rask> mount: unknown filesystem type 'linux_raid_member'
<lickalott> hrmmm
<lickalott> wait 1
<Rask> Sure.  (Thanks!)
<lickalott> mount -t ext2 maybe?
<lickalott> http://www.linuxquestions.org/questions/linux-software-2/recovering-data-from-remaining-raid-1-disk-723225/
<lickalott> time to go wash my arse.  good luck man
<Rask> Thanks, I'll give that a read.
<stgraber> smoser: yeah, overriding the upstart job (lxc-net.conf) would do the trick, alternatively, set one of the variables in /etc/default to contain the extra options you want (hackish, but will avoid changing the upstart job)
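A sketch of the two routes stgraber mentions (the variable name is from the precise-era lxc package; smuggling an extra flag through it relies on the upstart job expanding the variable unquoted, so this is clearly a hack, and overriding the job is the cleaner option):

```
# Option 1 (hackish): /etc/default/lxc
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254 --dhcp-boot=pxelinux.0,hostname,ip.address"

# Option 2 (cleaner): edit /etc/init/lxc-net.conf and append to the
# dnsmasq invocation:
#   --dhcp-boot=pxelinux.0,hostname,ip.address
```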
<Rask> lickalott: Just got it mounted.  Thanks for that link.  :D
<ogra_> rbasak, seeing your mail to the cross-distro list ... shouldnt you first discuss with dhcp upstream ? they need to make a protocol change for what you ask for ... (or if you want to work around changing the protocol you should explain how you plan that in the bug)
<eagles0513875_> hey guys I am looking at the https://help.ubuntu.com/community/XenProposed and im a bit lost as to the networking portion a bit are there any xen specialists in here?
<rbasak> ogra_: DHCP?
<rbasak> ogra_: the point is to not use DHCP
<ogra_> rbasak, well, DHCP is doing the negotiation of what gets sent on the server side if you use PXE
<rbasak> ogra_: no, the pxelinux emulation does
<ogra_> you only describe changes to the client side and an option that the server needs to understand
<rbasak> The change I've proposed is to U-Boot's pxe emulation only. Nothing else needs to change.
<rbasak> Definitely not DHCP
<ogra_> but you dont describe how the server side is supposed to learn about it
<rbasak> It will learn it when U-Boot fetches default.<arch>-<subarch> via TFTP
<rbasak> Which I propose it does at the pxelinux emulation stage, before falling back to "default" as normal
<ogra_> so how does pxe learn about <arch>-<subarch> *without* making changes to the protocol ?
<rbasak> pxe?
<rbasak> I don't understand your question
<ogra_> well, on the server side *something* needs to decide if it sends an arch specific kernel or the default
<rbasak> Forgetting MAAS for a moment to keep this simple, if you're doing things manually you just dump a pxelinux.cfg/default.arm-highbank in your tftp server which refers to a highbank kernel
<rbasak> And leave pxelinux.cfg/default for i386/amd64
<rbasak> If you want to use a pxelinux.cfg/01-ab-cd-ef-12-34-56-78 to be specific to the MAC, then you need to know the architecture (or pick it up via the existing DHCP mechanism)
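A sketch of the tftp tree rbasak's proposal implies (filenames illustrative; the per-MAC form is standard pxelinux naming, `01-` for the Ethernet ARP hardware type followed by the six MAC octets):

```
tftpboot/
└── pxelinux.cfg/
    ├── default.arm-highbank     # tried first by the U-Boot pxelinux emulator
    ├── default                  # normal fallback, still serves i386/amd64
    └── 01-88-99-aa-bb-cc-dd     # optional per-MAC config (existing mechanism)
```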
<ogra_> still, thats done using a defined protocol ... you are proposing changes to that protocol that also should go into the upstream definition of that protocol
 * eagles0513875_ wonders what is being discussed here
<rbasak> No, it's not done using a defined protocol
<rbasak> It's a U-Boot invented emulation
<rbasak> pxelinux only exists for Intel and so this is not needed there
<ogra_> eagles0513875_, bug 1041092 (and a mail to the cross-distro list about it)
<uvirtbot> Launchpad bug 1041092 in u-boot-linaro "Add netboot architecture detection without using DHCP" [Undecided,New] https://launchpad.net/bugs/1041092
<rbasak> I admit that other theoretical pxelinux emulators that work cross-arch will ideally do the same thing
<rbasak> This is what I meant by "<arch> and <subarch> must be defined in a new pxelinux emulator namespace"
<rbasak> But this change *only* applies to pxelinux *emulators* on non-Intel architectures
<rbasak> Which, AFAIK, is only U-Boot currently
<ogra_> right, i just think you should make the PXE side "more official" by having such a change made rather globally ....
<rbasak> Please, stop calling it PXE. It's pxelinux (if upstream) or pxelinux emulation (if U-Boot)
<rbasak> There is no official definition
<rbasak> I'd happily try and make it more official, but it doesn't apply to pxelinux upstream. Only to pxelinux emulators
 * ogra_ remembers how ultra painful it was to make PPC macs work as thin clients when he did LTSP ... it would have massively helped back then to have such a feature just in the PXE setup 
<rbasak> (since PXE doesn't exist for ARM)
<ogra_> well, PXE is the protocol ... pxelinux uses that protocol
<rbasak> No, it does not
<rbasak> Well OK, it does
<ogra_> or a superset of it :)
<rbasak> But I am not proposing any change to the PXE protocol
<rbasak> U-Boot does not use any PXE protocol
<ogra_> well, it pretends to :)
<rbasak> No, it does not
<rbasak> It pretends to do what pxelinux does
<rbasak> But that part of what pxelinux does is not part of PXE
<rbasak> That part is invented by pxelinux as an extension to the syslinux configuration format
<rbasak> The only relevant group in its definition is pxelinux/syslinux upstream
<rbasak> It is very important here to not conflate the two parts.
<ogra_> well, it's your project ... i would just think having a properly defined RFC or some such that defines arch specific support in netboot protocols which then gets implemented in the bootloader might be better than hacking up one bootloader specifically ... at least for the long term
<rbasak> OK, so you're proposing that I take my proposal and write it up into an RFC?
<soren> I'm with rbasak on this. pxelinux.0 is loaded using PXE, but its protocol for finding its boot configuration is a pxelinux thing. I very much see the value in having upstream pxelinux be part of the process of adding this extension, though.
<ogra_> i'm just thinking we have an opportunity here to take the lead as MaaS upstream to do it properly
<rbasak> I can write to the pxelinux guys and explain what U-Boot is doing, and ask them to hold the master namespace definition.
<rbasak> in case there are any other people who want to emulate them
<rbasak> on different architectures
<rbasak> But I don't expect them to adopt this at all, since it only applies to non-Intel, which they don't do
<ogra_> (i dont think what you ask is wrong, i'm fully with you on it ... i just think we should think more globally and fix it for everyone if we start such an initiative)
<rbasak> Who is everyone?
<rbasak> I'm only aware of U-Boot
<ogra_> well, there are hundreds of arm bootloaders :)
<ogra_> we have like 5 or so in the archive even
<rbasak> Do any of them emulate pxelinux?
<soren> a) It's their configuration protocol we're extending. b) What if they ever decide to expand into the non-Intel space?   I think it's important to have them be part of the process.
<ogra_> most of them can do netbooting one way or the other
<rbasak> soren: sure. I have no objection to getting in touch with them.
<ogra_> some of them can be scripted to fake PXE like u-boot does
<rbasak> Apart from some upstream pxelinux mailing list (which I doubt pxelinux emulator maintainers read!), where else is an appropriate forum, apart from the cross-distro list?
<ogra_> (oh, and there is always PPC and MIPS too that have their own bootloaders ;) )
<ogra_> rbasak, probably start with syslinux upstream and walk up the hierarchy from there, not sure
<Kingsy> can someone in here talk to me about atopsar? I have no experience with analyzing load averages and stuff, so it would be cool if someone could talk through some stuff?
<Kingsy> I have been googling but there doesn't seem to be any good tutorials on it, and the manual page is a lot to take in
<soren> rbasak: hpa seems to be the primary contributor to pxelinux. You could poke him and ask for the best forum for this discussion.
<th0mz> does any1 have an idea how to get support for SVG in php5-imagick ? please
<th0mz> (Ubuntu 10.04.4 LTS")
<th0mz> ok found.
<jamespage> xnox, the issue you are seeing with duplicate login forms in the dashboard - is that for 12.04/essex or 12.10/folsom
<jamespage> ?
<jamespage> openstack dashboard that is
<xnox> jamespage: the one on canonistack =)
<jamespage> xnox, right-oh - essex/12.04 then
<eagles0513875_> !openstack
<Daviey> xnox / jamespage: when switching to a different region?
<jamespage> Daviey, yeah
<xnox> Daviey: yes.
<Daviey> yeah, i'm getting that as well.
<xnox> Daviey: i filed a bug on launchpad. no screenshots, but you can access it yourself =) ah...
<jamespage> Daviey, can you see instances running in the second region?
<jamespage> Daviey, bug 1043437 for reference
<uvirtbot> Launchpad bug 1043437 in horizon "multiple login forms / distored web-page when trying to switch projects" [Medium,New] https://launchpad.net/bugs/1043437
<jamespage> xnox, Daviey: dupe of bug 1033934
<uvirtbot> Launchpad bug 1033934 in horizon "Attempting to change regions in the dashboard does not display correctly" [Undecided,Confirmed] https://launchpad.net/bugs/1033934
<uvirtbot> New bug: #1042274 in mysql-5.5 (main) "package mysql-client-5.5 (not installed) failed to install/upgrade: trying to overwrite /usr/bin/innochecksum, which is also in package mysql-server 5.5.27-2" [Undecided,Invalid] https://launchpad.net/bugs/1042274
<Daviey> jamespage: have you fixed it yet?
<jamespage> Daviey, hey - I'm just the triage monkey for today! :-)
<uvirtbot> New bug: #1042901 in mysql-5.5 (main) "package mysql-server-5.5 5.5.24-0ubuntu0.12.04.1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,Invalid] https://launchpad.net/bugs/1042901
<Daviey> jamespage: Oh well :)
<cefege123> how can I configure my vps and namecheap domain, so it works on the internet ( I have 5 domains, and 5 websites, 1 vps)
<jamespage> bah - I really hate that bacula/mysql-server all-in-one ordering issue
<jamespage> it bites upgrades as well as fresh installs...
<jamespage> anyone know what the default network bonding mode is off the top of their heads?
<soren> jamespage: none?
<jamespage> soren, probably - I missed the "bond-mode 802.3ad" in the provided configuration
<jamespage> oops
<soren> I mean.. If you don't configure an interface for bonding, it won't have a bonding network mode.
<soren> If you do configure it for bonding (i.e. enslave it under a bond device), the default mode is 0 (balance-rr).
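soren's answer in /etc/network/interfaces form, as a sketch (interface names are assumptions; `bond-mode 802.3ad` is the value jamespage mentioned, and omitting bond-mode entirely leaves the mode 0 / balance-rr default):

```
auto bond0
iface bond0 inet dhcp
    # eth0 and eth1 are enslaved under the bond device
    bond-slaves eth0 eth1
    # without this line the kernel defaults to mode 0 (balance-rr)
    bond-mode 802.3ad
    bond-miimon 100
```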
<patdk-lap> hmm, default bond mode of rr is kind of evil
<thierry__> hi everyone, i'm working on an ubuntu-server image on my pandaboard. i just found out that usb keys are not mounted automatically, and fstab is not updated since the distro is minimal. i was wondering if there is a package that solves this problem?
<uvirtbot> New bug: #1035594 in nova "Floating IP addresses leak if you delete an instance with one assigned" [Undecided,New] https://launchpad.net/bugs/1035594
<cr3> Daviey: thanks for fixing bug #1042906 so quickly!
<uvirtbot> Launchpad bug 1042906 in live-installer "live-installer/net-image cannot fetch-url to unexisting /tmp/live-installer directory" [High,Fix released] https://launchpad.net/bugs/1042906
<Daviey> cr3: it was an accident.. i put in a "sleep 7d ; dput *.changes" ... but i mistyped d and put m :(
<Daviey> cr3: we have a reputation to keep up.
<streulma> hello
<Daviey> hello hello, i don't know why you say hello, say good.
<streulma> can I install Ubuntu Server with a proxy server ?
<Daviey> You say goodbye and I say hello Hello hello.
<Daviey> streulma: Have you tried?
<streulma> yes
<streulma> but not with the standard installer
<Daviey> what installer did you use?
<streulma> the standard ubuntu server installer 12.04
<streulma> but
<streulma> with setting Expert mode it is possible
<streulma> but my install fails when it gets to installing the server
<cr3> roaksoax: hi there, any progress on getting the quantal server image to netinstall?
<cr3> roaksoax: are you also at the point where the system installs but boots into memtest86? :)
<Daviey> cr3: so, i used mini.iso.. with a preseed.. and the kernel installed as expected
<cr3> Daviey: mini.iso also has a filesystem.squashfs? does it have a netboot initrd.gz separate from the cdrom install one?
<Daviey> cr3: No, i added to the preseed to use a filesystem.squashfs
<Daviey> cr3: it's basically the same thing.. minimal image, 20-30MB, and pulls down everything it needs from the interwebs
<roaksoax> cr3: not yet, will work on it later today
<cr3> Daviey: ah, this thing: live-installer/net-image. that's really good to know, where can I find this mini.iso?
<cr3> Daviey: could I have a look at your pxelinux.cfg and preseed?
<cr3> roaksoax: looks like Daviey made all the progress, curse him and his timezone!
<Daviey> cr3: I repacked with, http://pb.daviey.com/pVfb/
<roaksoax> Daviey could you pastebin your preseed?
<Daviey> http://pb.daviey.com/OLbz/
<cr3> Daviey: wait, what? modules=live-installer, I wonder if that was my problem all along :(
<Daviey> cr3: the mini.iso doesn't include it by default.
<hallyn> utlemming: are you interested in fixing bug 1043582 ?
<uvirtbot> Launchpad bug 1043582 in lxc "lxc-create -t ubuntu-cloud --userdata ignores userdata" [Medium,Triaged] https://launchpad.net/bugs/1043582
<hex__> hi to all
<cr3> roaksoax: when you install the quantal server image, you use initrd.gz, linux and pxelinux.0 from this directory in the image, right? install/netboot/ubuntu-installer/amd64/
<roaksoax> cr3: yes and no, but cobbler imports that and uses it
<hex__> Aug 30 16:28:25 hex-SATELLITE-C660 kernel: [68029.901192] [UFW BLOCK] IN=eth0 OUT= MAC=b8:70:f4:5f:04:10:00:e0:4c:eb:a8:e1:08:00 SRC=177.9.119.106 DST=10.10.10.249 LEN=40 TOS=0x00 PREC=0x00 TTL=107 ID=23340 DF PROTO=TCP SPT=60913 DPT=47248 WINDOW=0 RES=0x00 ACK URGP=0
<uvirtbot> New bug: #1043864 in nova (main) "nova-compute does not start at boot due to unsatisfied libvirt-bin dependency" [Undecided,New] https://launchpad.net/bugs/1043864
<hex__> this is what I get on ufw
<hex__> can anyone help
<hex__> ??
<cr3> roaksoax: do you happen to know if /boot/vmlinux-* is supposed to be installed come from the filesystem.squashfs or from linux-image-*-generic package in the pool?
<cr3> s/installed come from/installed from/ :)
<roaksoax> cr3: the linux image should be copied over to the filesystem by live-installer
 * roaksoax bbl
<hallyn> stgraber: you have fixes queued up for SRU, but do you have any for quantal?  If not i'll push fix for bug 1043582
<uvirtbot> Launchpad bug 1043582 in lxc "lxc-create -t ubuntu-cloud --userdata ignores userdata" [Medium,Triaged] https://launchpad.net/bugs/1043582
<stgraber> hallyn: I don't have anything queued for quantal. I'd like to drop the lxc-wait from debian/local, but the upstream lxc-wait is kinda broken upstream (not actually doing any waiting) so don't do that until it's fixed upstream ;)
<hallyn> stgraber: ubuntu:lxc updated.  I'll test and push in a bit
<hallyn> stgraber: two of those should be SRUd to precise
<stgraber> hallyn: when you have a sec, can you look at that lxc-wait bug, my current guess is that I messed up the merge of lxc_wait
<hallyn> stgraber: oh, ok.  will do
<cr3> Daviey: odd, I installed the contents of daily/current/quantal-server-amd64.iso over the network using your pxelinux.cfg and preseed, but I still get the same outcome: no kernel installed so boots into memtest86
<cr3> Daviey: where can I download this mino.iso of yours?
<cr3> mini.iso even, mino is probably for fish
<hallyn> utlemming: (in case you read yoru backlog - pls ignore my previous comment about userdata;  i've taken the bug)
<utlemming> hallyn: okay....the first comment was before I got in. but yes, I'm more than happy to take that bug if you're not attached. I need bugs for fixing.
<Daviey> cr3: http://cdimage.ubuntu.com/netboot/
<cr3> roaksoax: I just had a look at the mini.iso that Daviey seemed to have tested, probably the one from archive, and I really don't see how that could work more than the server image. have you tried both the server iso and the mini one?
<hallyn> cr3: you mean the mini iso at http://archive.ubuntu.com/ubuntu/dists/${release}/main/installer-amd64/current/images/netboot/mini.iso ?
<hallyn> i use those frequently to install generic images, and bootstrap a desktop (apt-get install ubuntu-desktop)
<hallyn> have had no problems, but it's been a few weeks
<hallyn> but maybe that's not what you're talking about :)
<cr3> hallyn: I just discovered it, seems neat! the problem I'm referring to is netinstalling recent quantal server images because of the filesystem.squashfs that was introduced. it looks like my system boots into memtest86 because there's no kernel under /boot/vmlinuz-*
<hallyn> ah i see
<hallyn> so initramfs on the image needs a squashfs module perhaps?
<smoser> cr3, you're reporting that the server netboot.iso is hosed?
<smoser> that's bad.
<hallyn> stgraber: tests pass, pushing to quantal
<hallyn> smoser: wasn't something about squashfs and server image installs discussed in team mtg a few weeks ago?
<cr3> smoser: I don't know what's the problem yet, I just hear that roaksoax is experiencing something similar but Daviey isn't
<hallyn> utlemming: well hm.  my proposed fix doesn't seem to be fixing it
<hallyn> d'oh, wrong package being tested.  nm
<cr3> hallyn: I'm not sure whether the problem is with the initramfs on the image, because the installation seems to complete just fine. I would imagine that there would be a failure mounting the filesystem.squashfs that would interrupt the installation if it were a problem with the missing squashfs module
<hallyn> true
<cr3> hallyn: emphasis on "seems", of course, because the installation is obviously not fine if it reboots into memtest86 by lack of any installed kernels :)
<hallyn> cr3: can you see update-grub running during install?  no errors?
<cr3> hallyn: yeah, I even ran it myself during the late_command, and it runs just fine. the problem is that there's no /target/boot/vmlinuz-* file, so update-grub does the right thing and only adds memtest86 to its options
<cr3> hallyn: however, I also ran chroot /target dpkg -l 'linux-image-*' at the same moment and the linux-image-*-generic package was installed (ii in the output of dpkg)
<hallyn> jinkeys
<hallyn> smoser: ^ whose domain would this be, cjwatson?
<cr3> hallyn: I'm guessing, and I could be very wrong, that the vmlinuz-* file should be copied from the filesystem.squashfs file but it isn't. I'm also guessing that the dpkg database is also copied from the filesystem.squashfs which is why it's showing that the package is installed even though it's not
<hallyn> cr3: can you paste the precise url for the iso you are using?
<cr3> hallyn: I'm rsync'ing from rsync://rsync.cdimage.ubuntu.com/cdimage/ubuntu-server/daily/current/quantal-server-amd64.iso, and the .disk/info says: Ubuntu-Server 12.10 "Quantal Quetzal" - Alpha amd64 (20120830)
<Daviey> cr3: Can you clarify your exact test.. I am confused.. you are using the large iso, but also preseeding?
<Daviey> cr3: If you are preseeding, why are you using a remote squashfs?
<TheLordOfTime> (repost) is there a way to easily find the name of the source package of a given installer package within apt-cache, without me going to packages.u.c to find the source package?
<cr3> Daviey: I'm just using the large iso to get a few files from: 1. the initrd.gz, which is the same as on the mini.iso; 2. the linux, also the same; 3. pxelinux.0, which is not on the mini.iso; 4. and the packages that I make available by http
<cr3> Daviey: I'm preseeding with the same preseed as you provided, where I only added a line for my workaround: d-i preseed/early_command string mkdir /tmp/live-installer
<hallyn> cr3: but to be clear, your installed rootfs has no vmlinuz in /boot right?
<hallyn> cr3: bc otherwise i'd assume you're missing one of the files pxelinux needs (i've had trouble with that before)
<cr3> hallyn: right, I even checked from grub and no vmlinuz in /boot
<hallyn> stgraber: yeah, upstream lxc-wait doesn't have that problem.  i'll dive into the code now :)
<Daviey> cr3: i'm doing a respin right now.. which should remove the need for the early_command
<cr3> Daviey: for completeness, this is my preseed: http://pastebin.ubuntu.com/1176188/
<cr3> Daviey: thanks, but I doubt this is causing the problem with the missing vmlinuz under /boot. I'm really surprised it works for you, do you have a syslog from the installation that I can try to compare with mine?
<cr3> Daviey: since I'm also using the same pxelinux.cfg as you, the debug level should make it possible to more or less diff side by side
<cr3> ... after removing the timestamps, of course
<Daviey> cr3: i did have one..
<Daviey> i can make another
<cr3> Daviey: cheers, I'm making another one too so I'm not quite ready yet either. can you email me the file when it's ready, that'd be really appreciated
<Daviey> cr3: no, i'm not going to mail you an iso. :)
<SpamapS> Daviey: don't judge me (re help2man)
<hallyn> stgraber: phew :)  just needed to initialize my_args.timeout to -1.
<hallyn> feh
<rbasak> TheLordOfTime: by installer package, do you mean udeb or something else? Your system's apt-cache won't have the udeb package list I don't think. (or will it?)
<cr3> Daviey: the installation syslog!
<hallyn> i just deleted bzr/lxc.  but that's the shared repo holding all my various lxc trees.
<TheLordOfTime> rbasak:  i mean by the binary .deb that is built from the source
<TheLordOfTime> for example:
<TheLordOfTime> the 'nginx' source package is the source package that produces nginx-full
<rbasak> OK
<TheLordOfTime> if i didn't know that, is there a way outside of going to packages.u.c to find nginx-full's "source" package
<rbasak> So "Source" field of "apt-cache show" will give you that
<Daviey> cr3: I'll pastebin that.
<hallyn> stgraber: i'll stage the fix for upstream lxc in ubuntu:lxc.  If you want to drop debian/local/lxc-wait, i'm good with that
<TheLordOfTime> rbasak:  and if a package doesn't have a Source: field?
<TheLordOfTime> then should i assume its a metapackage?
<rbasak> I'm not sure, but I've always assumed that without a Source: field the source has the same name
<rbasak> eg. the hello package doesn't have a Source: field
<rbasak> I haven't found a counterexample but I don't know that it is defined this way either
<rbasak> People in #ubuntu-devel may know more.
<TheLordOfTime> rbasak:  i just needed to know if it'd show in apt-cache
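The lookup rbasak describes can be sketched in shell; `src_of` is a hypothetical helper name, and the inline records stand in for real `apt-cache show` output:

```shell
# Map a binary package to its source package: take the "Source:" field
# if present (stripping any "(version)" suffix), else fall back to the
# binary package's own name, as rbasak suggests.
src_of() {
    # $1 = binary package name; stdin = `apt-cache show $1` output
    src=$(sed -n 's/^Source: \([^ (]*\).*/\1/p' | head -n1)
    echo "${src:-$1}"
}

# Real use would be: apt-cache show nginx-full | src_of nginx-full
printf 'Package: nginx-full\nSource: nginx\n' | src_of nginx-full   # -> nginx
printf 'Package: hello\nVersion: 2.7\n'       | src_of hello        # -> hello
```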
<hallyn> stgraber: pushed.  (let me know if you want me to just drop our lxc-wait and push;  else i'l lleave it to you)
<cr3> Daviey: is that why you have your own pastebin, so that you can upload files that you want to pastebin instead of actually copying and pasting which might be problematic for over 10K lines?
<TheLordOfTime> because i'm writing a utility script to generate a link to the security team's CVE trackers for source packages, i'm going to tweak that script to grep through the actual package's apt-cache info and find the "Source: " field, if the package itself (when tested in the URL search) isn't a source (it'll 404 if that's the case)
<TheLordOfTime> rbasak:  i wasn't sure where to ask, but i know apt-cache is CLI, so... :P
<TheLordOfTime> rbasak:  thanks though :)
<Daviey> cr3: I pastebin'd /dev/random before :)
<cr3> Daviey: I should introduce you to someone who listened to /dev/random
<Daviey> cr3: well, it
<Daviey> cr3: well, it's not finished.. but i think there is enough for you there.. http://pb.daviey.com/1HTR/
<cr3> Daviey: thanks! by the way, I noticed a step near the end called something like "remove-live-packages" that sounded suspicious, so I would like the whole thing when you got it
<rbasak> TheLordOfTime: you could use grep-dctrl against http://www.archive.ubuntu.com/ubuntu/dists/precise/{main,universe,...}/source/Sources.gz/bz2 and look for a Binary entry that matches the package you're looking for
<rbasak> That would be cleaner I think - rather than relying on your system's apt cache
<rbasak> There may be a library that already does this that I'm not aware of though
<TheLordOfTime> rbasak:  true, but this is a Bash script, trying not to have to delve into actual programming :P
<TheLordOfTime> trying to make it very simplistic
<Daviey> cr3: it is odd that it pulls the kernel down twice.  I suspect it installs the one shipped, then does a dist-upgrade
<TheLordOfTime> but if i need to redo it with some other program, i can do so, it'll just take longer :P
<TheLordOfTime> hggdh:  FINALLY YOU APPEAR
<TheLordOfTime> you've been disappeared for a WHILE
<cr3> Daviey: where do you see it being pulled down? searching your pastebin for linux-image doesn't return anything :(
<hggdh> hi TheLordOfTime, I am on holiday :-)
<TheLordOfTime> hggdh:  ah.  so... not a good time to ask you to check an email string and give your opinions (for bugsquad stuff)
<TheLordOfTime> :P
<Daviey> cr3: updated, http://pb.daviey.com/Z608/
<hggdh> TheLordOfTime: well, I am actually not here, but I might be able to look at an email sent to me ;-)
<cr3> Daviey: sweet! I haven't noticed that in some of my syslogs but I'll have another look at my other syslog once I can get at it, shouldn't be too long now
<Daviey> i'm using a us mirror for some daft reason
<TheLordOfTime> hggdh:  heh.  i sent it to the bugsquad mailing list, its just finalizing that whole core vs. noncore stuff i brought up a couple of months ago.  just getting opinions on wording it for the footnotes on the Importance docs, then adding that, and that's the end of it :)
<hggdh> heh
<hggdh> will look at it, if I don't have to get to a party somewhere and drink a few more beers
<TheLordOfTime> :P
<cr3> Daviey: us > uk, of course
<TheLordOfTime> hggdh:  have a beer on me :P
<hggdh> will do
<cr3> even Python thinks so: "us" > "uk" == True
<Daviey> cr3: I disagree, http://pb.daviey.com/gaVw/
<cr3> Daviey: "win" should be in caps to emphasise how awesome you are, then I'd agree with your code
<Daviey> cr3: ok
<cr3> Daviey: I did some side by side diff of our syslog and I wonder if you would have the same problem as me if your kernel was not updated: http://paste.ubuntu.com/1176273/
<cr3> bbl
<cr3> Daviey: oh, one more thing, you might like to make sure you're using the latest filesystem.squashfs to prevent the upgrade. mine is from today's image, md5sum: 0027b200eb2c3c24218d252bcdad8345
<Daviey> cr3: mine was from today
<Daviey> df423b2e84af20365dc123645b1b4deb  filesystem.squashfs
<Daviey> cr3: a newer iso has just been published btw http://cdimage.ubuntu.com/ubuntu-server/daily/20120830.2/
<roaksoax> cr3: so what i did, IIRC, was only to download the filesystem.squashfs and place it under /casper, and the live-installer took care of everything
<Daviey> roaksoax: you shouldn't need to do that..
<roaksoax> Daviey: right, so I'm presuming that cjwatson added support to live-installer to use the squashfs image from /tmp
<Daviey> roaksoax: no, wget's it
<Daviey> roaksoax: d-i live-installer/net-image string http://10.10.0.33/~dave/filesystem.squashfs
<roaksoax> Daviey: yeah I have that, but my point being is that it stores it under /tmp/filesystem.squashfs
<Daviey> roaksoax: live-installer respects /install and /casper, but also supports net-image preseed
<Daviey> roaksoax: well the fact it is in /tmp is irrelevant
<roaksoax> Daviey: right! but I guess i didn't express myself correctly, as what I did was during the product sprint and that's what I was talking about :)
<roaksoax> sorry for the confusion
<Daviey> ahh
<Daviey> i understand now :)
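The two preseed lines the whole live-installer/net-image discussion revolves around, collected as a sketch (the URL is the one from Daviey's paste; the early_command is cr3's workaround from earlier in the log):

```
# Tell live-installer to fetch the root filesystem over HTTP instead of
# looking for /install or /casper on local media.
d-i live-installer/net-image string http://10.10.0.33/~dave/filesystem.squashfs
# Workaround for the missing /tmp/live-installer directory (bug 1042906).
d-i preseed/early_command string mkdir /tmp/live-installer
```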
<sbeattie> adam_g: do your keystone/precise-proposed packages include the fix for LP: #998185 ? (I kind of assume so, since you got the other two token expiry issues incorporated)
<TheLordOfTime> LP Bug 998185
<uvirtbot> Launchpad bug 998185 in keystone "Once a token is created/distributed its expiry date can be circumvented" [Medium,Fix released] https://launchpad.net/bugs/998185
<TheLordOfTime> (so you can see the subject ;) )
<zul> smoser: http://pastebin.ubuntu.com/1176358/ thats the error i get
<smoser> zul, sorry for ignorance
<smoser> but what is iscsiadm doing here?
<zul> smoser: its doing the libvirt attach-device call basically
<smoser> what does that have to do with the instance?
<smoser> its just setting up device on the host that is connected to the scsi target?
<zul> smoser: i think so.... i'm still trying to figure stuff out as well
<zul> smoser: its blowing my mind man :)
<zul> smoser: here is a bit more info in the traceback http://pastebin.ubuntu.com/1176374/
<dassouki> is there a way to list the 10 largest folders by total size on the system
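dassouki's question went unanswered in the channel; one common sketch, assuming GNU du and using / as an example path:

```shell
# Ten largest directories directly under / by total size.
# -k: sizes in KiB so sort -n can compare them numerically
# -x: stay on one filesystem; --max-depth=1: don't list every subdir
# 2>/dev/null: hide "Permission denied" noise when not run as root
du -xk --max-depth=1 / 2>/dev/null | sort -rn | head -n 10
```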
<cr3> roaksoax: first, I think you meant that you placed the filesystem.squashfs under /install rather than /casper. second, are you saying that netinstalling the current quantal server image now works for you?
<roaksoax> cr3: i meant when I first did it, i put it in /casper IIRC
<roaksoax> cr3: now, we should only need to specify the URL for it
<roaksoax> cr3: and that should be it
<cr3> roaksoax: ok, I'm glad to hear that works for you. I'll keep troubleshooting my side on the assumption that my configuration is wrong
<noslin005> hi people, does someone know a good site that talks about mounting server hardware?
<patdk-wk> heh?
<patdk-wk> normally follow the instructions that came with the mounting hardware
<Troy^> hello, when setting up a raid 1 with ide drives, do they both have to be on the same channel, for instance both on the primary chain, or can one be on each of primary and secondary? what will perform best?
<patdk-wk> Troy^, makes 0 difference at all
<patdk-wk> so make them primary on their own channels, if you can
<Troy^> hmm so no diff even if i put them prim and secondary on the first channel? it would make for better airflow? unless you think i would get better performance
<drag0nius> is there something like webadmin for Squid?
<drag0nius> with gui?
<RoyK> !webmin
<ubottu> webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<RoyK> drag0nius: squid is rather easy to configure if you read the manual and the config file
<drag0nius> the config file has couple thousand lines ;/
<RoyK> yep
<patdk-wk> hmm, I have many config files that are only 20 lines, but end up at a few thousand when I'm done configuring it
<patdk-wk> like, apache :)
<drag0nius> i dont need that much for now at least ;d
<RoyK> drag0nius: just ask what you want to do
<RoyK> !ask
<ubottu> Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<drag0nius> i've tomcat/glassfish/apache2 running on server
<drag0nius> wanna make access to them with urls not ports
<drag0nius> i've domain already
<drag0nius> right now using dns redirect, but its not sufficient
<drag0nius> like i want drag0nius.pl/apps/ direct to drag0nius.pl:8080
<drag0nius> without changing url base
<drag0nius> probably later would get to manually configuring, but need something quick now
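What drag0nius describes is a stock Apache reverse proxy; a minimal vhost sketch using his example hostname and port (enable the standard modules first with `a2enmod proxy proxy_http`):

```
<VirtualHost *:80>
    ServerName drag0nius.pl
    # Serve drag0nius.pl/apps/ from the Tomcat listener on :8080
    # without the browser ever seeing the port.
    ProxyPass        /apps/ http://localhost:8080/
    ProxyPassReverse /apps/ http://localhost:8080/
</VirtualHost>
```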
<drag0nius> hmm
<drag0nius> found artica
<drag0nius> looks good
<cr3> Daviey: I got it to work... by using the old filesystem.squashfs from 20120829 which resulted in upgrading the same 14 packages as you, including the linux-image-*-generic package.
<cr3> Daviey: I'm still of the opinion that using the latest filesystem.squashfs which doesn't upgrade the linux-image-*-generic package will result in a broken installation.
<cr3> Daviey: and, I would suggest that someone confirm this because I don't think QA tests network installs yet
<Daviey> cr3: Ah, balls.. I am testing yesterday's iso
<cr3> Daviey: no worries, as long as we're now comparing oranges and oranges, I'm reassured to keep plowing ahead. will keep you posted
<Daviey> cr3: so, the other thing that differs.. mini.iso is created from debian-installer uploads, and only then.
<Daviey> cr3: https://jenkins.qa.ubuntu.com/view/Quantal/view/ISO%20Testing%20Dashboard/job/quantal-server-amd64_default/141/console .. latest iso .*30.2 seems to be using live-installer ok
<cr3> Daviey: cdrom install != network install != usb install
<cr3> Daviey: you're welcome to compare other fruits to a network install of the current image, but that doesn't fix the current image unfortunately :(
<cr3> Daviey: what files other than initrd.gz and vmlinuz are used from the mini.iso for a network install using MaaS?
<Daviey> cr3: no shit :)
<cr3> Daviey: I found the problem that I'll be documenting in a bug shortly, it's all in: live-installer/usr/lib/live-installer.d/ubuntu-kernel
<Daviey> cr3: debian-installer, which produces the network installer, was uploaded 20 hours ago
<cr3> Daviey: in short, the filesystem.squashfs does not contain /boot/vmlinuz-* as can be seen from mounting it. the kernel is actually copied from the cdrom which doesn't freaking exist in a network install
<Daviey> cr3: well, this was actually by design.
<cr3> Daviey: if it seems to have worked in a network install, it's only a fluke that you've had a more recent kernel in the archive otherwise it cannot work
<cr3> Daviey: so, by design, how should the network install get its kernel under /boot?
<Daviey> cr3: good question.
<Daviey> cr3: wing and a prayer ?
<cr3> Daviey: it's not important to know the design to report a bug though, so I'll continue gathering the necessary information for a good bug report. I'd rather leave the design to the experts :)
<Daviey> cr3: Colin is not around this week. He'd probably be the best person to take this, if not.. roaksoax or myself will.
<cr3> Daviey: I'll try to find a way to work around that problem too so that certification can continue to test server images, but at least I'm confident about the problem now. I hope this will be useful for your team as well
<Troy^> i didn't choose degraded array for my raid 1 setup what is the downfall?
<uvirtbot> New bug: #1044044 in cloud-init "ubuntu user not in correct groups" [High,Triaged] https://launchpad.net/bugs/1044044
<cr3> Daviey, roaksoax: I reported bug #1044049 describing the problem installing the latest quantal server image over the network, I hope this will help make maas even better!
<uvirtbot> Launchpad bug 1044049 in live-installer "live-installer/ubuntu-kernel cannot work on a network install" [Undecided,New] https://launchpad.net/bugs/1044049
<Troy^> omg, on the package install selection i hit enter trying to select the first package and it goes auto to continue without selecting the package... spacebar is to select packages, so now i have the server installing with no packages lol
<Daviey> cr3: no doubt it will, thanks for your detailed digging
<cr3> Daviey: I'll have a workaround soon, so you can either use sleep 7d or wait for cjwatson, I won't be blocked on this
<Daviey> cr3: well, i'm worried that i set unreasonably fast turnaround expectations.. so this one might need to be a sleep 14d.
<cr3> Daviey: as long as it makes it before the quantal release, I'll be happy for the sake of MaaS
<Daviey> cr3: super.
<Troy^> does openssh server auto start on boot?
<wedgie> by default, yes
<fij0> hello
<fij0> is there a way to put ascii characters in the console without using unicode with the combination of ctrl + shift + u?
<epifanio> Hi All
<epifanio> I'm running a virtual server on a machine of my institute, uname -a gives me: Linux ecoop.aquarius.tw.rpi.edu 2.6.32-28-vserver #55~ppa1-Ubuntu SMP Fri Feb 4 21:25:09 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
<epifanio> i have sudo rights on this machine .. and i was trying to install some packages using apt-get ... unfortunately it gave me "unmet dependencies", so i tried to do an: apt-get update & apt-get upgrade
<epifanio> but now .. the apt-get instruction is stuck on the cron packages ... the ssh shell prompt show me : http://paste.debian.net/186732/
<epifanio> it's been in this state for half an hour .. i'm worried
<epifanio> have you any clue on what's going wrong ? or how to try to debug the problem ?
<epifanio> to stop the process .. i opened an other ssh connection and i run :  sudo fuser -cuk /var/lib/dpkg/lock; sudo rm -f /var/lib/dpkg/lock
<epifanio> then i used : dpkg --configure -a
<epifanio> the log in the shell is :  http://paste.debian.net/186736/
<epifanio> thanks for any help!
<epifanio> i'm trying again with: sudo aptitude safe-upgrade
<epifanio> after a long list of unpacking / preparing replacement .. it is still stuck on cron : http://paste.debian.net/186737/
<epifanio> :'(
<ariel__> i wanna ask a question how do people find your server online
<epifanio> ?
<SpamapS> ariel__: are you asking how you might advertise the existence of your server that you build on Ubuntu?
<Troy^> question: openssh only seems to start on my linux box after a reboot once i log in as a user. how do i get it to start automatically at boot?
<Troy^> in /etc/ssh/sshd_config file do i need to uncomment "#" the line with port 22 if i want to change it to Port 2277 for example?
<Troy^> and sudo /etc/init.d/ssh restart does not work i even tried service ssh restart
<Troy^> :S
<Troy^> lol
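Troy^'s port question above can be sketched in shell (a hypothetical helper, not a verbatim answer from the channel; on a stock 12.04 install the `Port 22` line in /etc/ssh/sshd_config ships uncommented, so it is edited rather than uncommented):

```shell
# set_ssh_port FILE PORT - rewrite the Port line in an sshd_config-style
# file. Handles both an active "Port 22" and a commented-out "#Port 22".
set_ssh_port() {
    local file=$1 port=$2
    sed -i "s/^#\{0,1\}Port .*/Port $port/" "$file"
}
```

Against the real config (needs root): run `set_ssh_port /etc/ssh/sshd_config 2277`, then `sudo service ssh restart`. On 12.04 ssh is an upstart job started at boot regardless of logins, so `/etc/init.d/ssh` may be absent, and `service ssh restart` typically fails with a permission error when run without sudo, which may explain the failures above.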
#ubuntu-server 2012-08-31
<ariel__> no i mean once the server is all installed, how do people find your server, like what address do they use
<jeeves_moss> I just upgraded my server and now Dovecot won't start.  What is the troubleshooting process to find out why?  I've looked in the logs, and there is nothing other than Postfix
<uvirtbot> New bug: #1016428 in juju "Juju deployments allow remote SSH access with hard-coded password" [High,Fix released] https://launchpad.net/bugs/1016428
<drag0nius> is there some simple web manager tool for Squid?
<drag0nius> or some tool like "give me squid config and i can edit it in nice way"
<drag0nius> then you load it manually
<drag0nius> i've tried Artica and it's quite too heavy weight ;d
<chmac> I'd like to run a second monitor "virtually" with vnc. I saw on askubuntu.com about xdmx, might be the solution. Is it possible? Easily achieved? Any howtos? :-)
<rbasak> chmac: you're better off asking in #ubuntu. This is the server channel. No X here!
<chmac> rbasak: Yeah, I did ask in there, but I typically find it's mostly beginners questions over there, and there's an awful lot of chatter so much appears to fall through the cracks... :-)
<drag0nius> how to get webmin into apt?
<drag0nius> i see theres version for 12.04
<drag0nius> but doesnt find package
<rbasak> !webmin | drag0nius
<ubottu> drag0nius: webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<Perry_> hi everyone!  Init.d script works when run manually, but not on reboot with ubuntu 12.04 server.
<drag0nius> it says 1.590 supports ubuntu
<Perry_> same scripts, configuration, etc works correctly with ubuntu 9 server
<Perry_> any ides?
<Perry_> ideas*
<drag0nius> update-rc.d
<drag0nius> Perry_ afaik update-rc.d makes init.d scripts run on startup
<Perry_> drag0nius,  i know the way to run a script on startup. The same script works properly on another machine with an older version of ubuntu server.
<Perry_> i'm trying to run on startup the TRACD daemon.
<Perry_> i have no errors in the logs, simply it doesn't start. If i run the same script manually (in the shell) it starts correctly.
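A hedged sketch of one likely cause of Perry_'s symptom: init runs scripts with a minimal environment (short PATH, no HOME), so a daemon that starts fine from an interactive shell can fail at boot. `resolve_in_path` below is a hypothetical helper for making that difference visible:

```shell
# resolve_in_path CMD PATHVALUE - report where CMD resolves under a given
# PATH, so the interactive-shell and boot-time environments can be compared.
resolve_in_path() {
    local cmd=$1 path=$2
    PATH=$path command -v "$cmd"
}

# e.g. (the second PATH is a typical minimal boot-time value, assumed):
#   resolve_in_path tracd "$PATH"
#   resolve_in_path tracd "/sbin:/usr/sbin:/bin:/usr/bin"
# If the second lookup fails, use absolute paths inside the init.d script,
# then make sure the rc*.d symlinks exist:
#   sudo update-rc.d tracd defaults
```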
<drag0nius> webmin does not find my squid3
<drag0nius> neither can it reinstall it
<rbasak> !webmin | drag0nius
<ubottu> drag0nius: webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<koolhead17> http://paste.ubuntu.com/1177475/ what am i missing here :(
<melmoth> koolhead17, hmmm, this file (/etc/ssl/certs/ca-certificates.crt) does not belong to any package, i guess it's made on the fly.
<melmoth> i have no idea which package install script made it (may be openssl ?)
<koolhead17> melmoth: whats the way out to get it working
<koolhead17> u want me to install that pkg
<koolhead17> b4 trying that command?
<melmoth> give it a try yep
<Hayate> hello
<koolhead17> melmoth: no luck :(
<maxb> The package is ca-certificates
<jdrab> koolhead17: what ubuntu version gives you that error? just out of curiosity
<koolhead17> jdstrand: precise
<roaksoax> Daviey: howdy!
<roaksoax> Daviey: this time you haven't filed for a standing FFe for maas right?
<uvirtbot> New bug: #1044351 in asterisk (universe) "upgrade from lucid to precise removes most of my gsm sound files" [Undecided,New] https://launchpad.net/bugs/1044351
<Daviey> roaksoax: I haven't.. can you.. i will review.
<roaksoax> Daviey: will do
<roaksoax> Daviey: bug #1044367
<uvirtbot> Launchpad bug 1044367 in maas "[FFe] Standing feature freeze exception for maas in Quantal" [Undecided,New] https://launchpad.net/bugs/1044367
<Daviey> roaksoax: I don't really want to ack a 'Standing' exception this moment, but do you have a major disruptive change ready to upload?
<Daviey> roaksoax: note that B1 comes out next Thursday.
<roaksoax> Daviey: the disruptive change is all of what is detailed on the FFe, but the quantal support
<Daviey> roaksoax: fwiw, i think i'd rather have a less polished Beta 1.. than a totally wrong version one.
<roaksoax> Daviey: i'm currently in the process of testing things to make sure everything is working just right
<Daviey> roaksoax: right, but a massive disruptive change, as a one off, doesn't need a standing exception
<roaksoax> Daviey: right, the standing is for upcoming uploads, as I'm sure there will be upstream fixes
<roaksoax> Daviey: i can file again i'll change it
<Daviey> roaksoax: Okay, when do you think the massive change will be ready for Quantal uploading?
<roaksoax> Daviey: hoping that today
<ogra_> fixes dont need FFes ;)
<roaksoax> ogra_: i know right :). But in this case, fixes means features hehe
<ogra_> yeah, that would need one indeed ...
<uvirtbot> New bug: #1044367 in maas (main) "[FFe] Standing feature freeze exception for maas in Quantal" [Undecided,New] https://launchpad.net/bugs/1044367
<Daviey> roaksoax: If the disruptive change is uploading by your Monday lunchtime, then consider that a one off ack.  For a standing exception, lets dig deeper. deal?
<roaksoax> Daviey: deak
<roaksoax> deal*
<Daviey> roaksoax: As soon as the upload is built and published, can you ping me.. or someone else on the release team, and request a respin.  Would like to get images with it on, asap for testing.
<roaksoax> Daviey: will do
<hallyn> SpamapS: (and Daviey ) an SRU big-deal-coming-up warning:  bug 997978 suggests I'll want to ask for the full quantal kvm, seabios, and vgabios packages to be SRUd to precise.
<uvirtbot> Launchpad bug 997978 in qemu-kvm "KVM images lose connectivity with bridged network" [High,Confirmed] https://launchpad.net/bugs/997978
<soren> hallyn: Oh, is that finally fixed?
<hallyn> soren: it seems to be fixed with backported packages from quantal, yes
<soren> hallyn: What was the problem?
<hallyn> soren: hahahaha
<hallyn> i'll spend an hour or two trying to find relevant-looking commits, but ...
 * soren blinks
<hallyn> full backport from quantal fixes it.  that's 1.1.0 to 1.2.0 qemu-kvm version difference
<hallyn> iow i have no idea
<Daviey> soren: You might remember this... around 2007-2010.. can't remember exactly, there was discussion of doing a major qemu-kvm SRU to bump versions.. I can't remember if it actually happened.. can you?
<soren> Daviey: I have no recollection of that.
<soren> A complete backport sounds very risky to me.
 * soren needs to run
<SpamapS> Seems like 1.1 users would be quite appreciative if we found those commits and helped make a 1.1.1 with them. :)
<kermit> how do i light up a drive for identification?
<rbasak> smoser: re: bug 1028501 - any thoughts on what we can do for precise ephemeral images using cloud-init? I've just hit this. Do we need an SRU? Can we even do one?
<uvirtbot> Launchpad bug 1028501 in cloud-init "cloud-init selects wrong mirrors for arm" [Medium,Fix released] https://launchpad.net/bugs/1028501
<rbasak> I didn't realise that the ephemeral images needed apt-get to work :-/
<rbasak> Or can we get MAAS to supply the user data workaround perhaps?
<rbasak> Thoughts?
<utlemming> rbasak: I think a user data workaround is a hack
<rbasak> Yes, it is
<rbasak> utlemming: but the alternative is to SRU the arm mirror handling to precise, which I'm not sure is SRU-able. Or is there another way?
<smoser> rbasak, we can sru the same fix back to the older cloud-init.
<rbasak> smoser: ok, thanks
<smoser> actually, for this case, rbasak user-data work around is not a hack.
<rbasak> Which method would you prefer?
<smoser> as we also have to support the case of an internal (offline) mirror.
<smoser> so cloud-init needs to be told what the right mirrors are (and maas needs to know this)
<arrrghhh> hey all.  i am having issues with the ps3mediaserver application - i can run it manually, but the upstart job fails.  how can i troubleshoot the upstart job failing?
<rbasak> arrrghhh: lots of help at http://upstart.ubuntu.com/cookbook/ if you're not aware of that?
<arrrghhh> rbasak, never have seen that, thanks.  i didn't create this upstart job, just not sure why it's failing - and i don't know how to start troubleshooting the 'why', because the app runs fine manually.
<roaksoax> Daviey: ok so I think I have decided not to upload MAAS to Quantal yet because MAAS no longer works with an external DNS/DHCP
<smoser> rbasak, so i'd like to sru the cloud-init fix
<Daviey> roaksoax: That hasn't been fixed yet?!
<smoser> as it has another benefit
<Daviey> roaksoax: That is a reasonable beta 1 release note IMO.. Don't block on that
<smoser> utlemming, are you able to work on bug 1041384 ?
<uvirtbot> Launchpad bug 1041384 in cloud-init "userless conf fails assign user[0] as the default user" [High,Confirmed] https://launchpad.net/bugs/1041384
<roaksoax> Daviey: yeah it seems to be minimal, but looking more into it atm
<rbasak> smoser: ack. Thanks!
<arrrghhh> rbasak, so I'm stuck here.  "If the pre-start process fails, the goal is changed from start to stop, and the stopping(7) and stopped(7) events are emitted with appropriate variables set denoting the error." - where's the error?
<arrrghhh> /var/log/upstart/<upstartjob>.log doesn't contain really anything.
<utlemming> smoser: yeah, I can hash that out today, I think
<hallyn> soren: sigh, the only thing i can find in git log from v1.0..v1.1.0 is some small fd and memory leaks.  Noone has mentioned OOMing when they had net trouble under heavy load...
<zul> Daviey:  the tgtadm stuff got merged into cinder, im just backporting the fix for nova-volume now
<Daviey> w00t
<uvirtbot> New bug: #1044430 in php5 (main) "Php ftp_* eats memory" [Undecided,New] https://launchpad.net/bugs/1044430
<donspaulding> I upgraded a server from Jaunty -> Karmic -> Lucid using old-releases.ubuntu.com and apt-get upgrade && dist-upgrade today and then tried to go from Lucid -> Precise using do-release-upgrade.  The do-release-upgrade told me the upgrade completed, but with errors, and now I've got circular dependency problems in apt
<donspaulding> http://pastie.org/private/nax1kzzd5khmnzfvcevnq
<donspaulding> I restarted in between the J -> K and K -> L upgrades, but I haven't restarted since attempting the L -> P upgrade.
 * donspaulding is afraid the server might not come back up.
<donspaulding> can I safely install the .debs for the unmet dependencies from packages.ubuntu.com?  Manually installing util-linux seems like the sort of thing I shouldn't be doing.
<cwillu_at_work> donspaulding, you're already in an undefined state, so...
<cwillu_at_work> donspaulding, so long as you've got some way of mounting the system to do recovery, it's hard to screw things up too badly
<ona_matt> Anyone know why the installer is not adding the dns domain to resolv.conf that is provided by DHCP (this is in 12.04)
<uvirtbot> New bug: #1044516 in quota "repquota not working correctly with xfs" [Medium,In progress] https://launchpad.net/bugs/1044516
<Troy^> ok so i'm trying to add an external 3tb drive to my linux server. i plugged it in and it seems to have labeled it as /dev/sdc and /dev/sdc1
<Troy^> how is it mounted. how do i get it to auto mount on boot
<Troy^> this is a huge learning curve setting up a media server from command line lol. then having it all integrated with sickbeard, couch potato and sabnzbd
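A sketch of the auto-mount Troy^ is asking about. The mount point name and the ntfs-3g type are assumptions (the drive is later described as NTFS); check the real filesystem first with `sudo blkid /dev/sdc1`. `fstab_line` is a hypothetical helper:

```shell
# fstab_line UUID MOUNTPOINT FSTYPE - build an /etc/fstab entry for mounting
# at boot. Referring to the partition by UUID (from `blkid`) keeps the entry
# valid even if the device gets renamed (sdc -> sdb etc.).
fstab_line() {
    printf 'UUID=%s %s %s defaults 0 0\n' "$1" "$2" "$3"
}

# Assumed usage against the real system (needs root):
#   sudo mkdir -p /media/ext3tb
#   fstab_line "$(blkid -o value -s UUID /dev/sdc1)" /media/ext3tb ntfs-3g \
#       | sudo tee -a /etc/fstab
#   sudo mount -a    # mounts everything in fstab, including the new entry
```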
<quest> hey, does anyone knows how can i get the DVD version of ubuntu server?
<Troy^> quest ? like have it sent in mail?
<Troy^> quest the .iso can be burnt to both cd and dvds
<quest> Troy^: i meant a big ISO file that contains a repository in it
<quest> i found it here: http://releases.ubuntu.com/precise/?C=S;O=A
<Troy^> never heard of it.. but honestly that is what internet is for
<Troy^> there all cd versions there
<quest> yeah but i want to be able to work offline too
<quest> debian has DVD versions
<Bitz> hello there
<Troy^> from that list quest there doesn't appear to be dvd version of server
<Bitz> My question: I have an ubuntu server version and I have some problem with it. I was watching movie on my samsung tv using dlna and suddenly it just snapped and everything went out then i reset and the the load gets stuck at some point but most of the functions work. I am not sure what happened... log files are not very helpful ... I am using DLNA (serviio)... So my question is...can
<Bitz> anyone give me an idea why does it get stuck during startup...and can dlna cause this?
<Troy^> Bitz: i have no clue what you are saying
<Bitz> haha
<Bitz> do u have a question?
<Bitz> so the main point is
<Troy^> just poorly worded i guess?
<Bitz> i was watching movie through dlna and suddenly lost connection
<Bitz> and the server stuck
<Bitz> i reset and then it wont load fully however it does load all the essential programs
<Bitz> is it clear<
<Bitz> ?
<Troy^> you mean your ubuntu server gets stuck?
<Troy^> at boot?
<Bitz> zep
<Bitz> yep
<uvirtbot> New bug: #1040033 in qemu-kvm (main) "Fresh VM installs via preseeded oneiric isos sometimes fail with filesystem issues" [Critical,Triaged] https://launchpad.net/bugs/1040033
<Bitz> after successfully starting rsync
<Bitz> wont reach login section
<Troy^> that is a little more clear Bitz
<hallyn> jdstrand: still playing, but i'm thinking the bug is in mkisofs (bug 1040033)
<uvirtbot> Launchpad bug 1040033 in qemu-kvm "Fresh VM installs via preseeded oneiric isos sometimes fail with filesystem issues" [Critical,Triaged] https://launchpad.net/bugs/1040033
<hallyn> jdstrand: if i manually do virt-install - with either qcow2 or raw - i always get the same error if i use the same preseeded file
<jdstrand> hallyn: why would downgrading qemu-kvm help me then?
<jdstrand> (that is certainly interesting)
<hallyn> jdstrand: that's a good point
<hallyn> it wouldn't.  back to the drawing board for me
<Troy^> can samba be setup to share more then one path?
<patdk-wk> troy, only if you *really really* want too
<Troy^> is that not advised?
<Troy^> like can i create a specific point and then link all the different folders on different drives i have? etc?
<patdk-wk> no, you just have to have the *drive* to do it :)
<patdk-wk> what? what does that mean?
<patdk-wk> create a specific point?
<Troy^> like /srv/samba/share
<hallyn> jdstrand: no wait, maybe i'm having a different local problem
<patdk-wk> I *guess* you could do that, kind of backwards
<Troy^> then link all my stuff like ln -s /srv/samba/share /var/run/usbmount/ and ln -s /home/shared/movies /var/run/usbmount/WDC_WD10_EADS-00L5B1_1/movies/
<Troy^> well then how is it suppose to be done? that is why i am asking.
<patdk-wk> normally by editing smb.conf
<arrrghhh> Troy^, why not bind mount?  or just directly linking to it?
<Troy^> doh!
<arrrghhh> yea, just edit smb.conf and directly link to the folders you want to share
<Troy^> ok so there will be multiple paths
<arrrghhh> hehe ;)  you can have as many shares as you desire
<Troy^> alright that is what i wanted to know thanks
<patdk-wk> all of those options are possible
<soren> hallyn: Oh, does it OOM? I thought it just silently stopped working.
<patdk-wk> you just have to pick the *best* for you :)
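For reference, "more than one path" in Samba is simply more than one share section in smb.conf. A small illustrative helper (hypothetical function; the share names and paths are taken from the conversation):

```shell
# smb_share NAME PATH - emit one smb.conf share section. Samba accepts any
# number of these, one [section] per shared path.
smb_share() {
    printf '[%s]\n   path = %s\n   read only = no\n' "$1" "$2"
}

# Assumed usage (needs root to touch the real config):
#   { smb_share movies /home/shared/movies
#     smb_share MediaShare /var/run/usbmount/WDC_WD10_EADS-00L5B1_1
#   } | sudo tee -a /etc/samba/smb.conf
#   testparm -s                # syntax-check the result
#   sudo service smbd restart  # reload the new shares
```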
<Troy^> what is with your * *
<arrrghhh> it's like bolding a word
<arrrghhh> without being able to [b] [/b]
<patdk-wk> oh? I should use colors instead? :)
<arrrghhh> lol
<Troy^> uhh no you just use it in every sentence lol
<Troy^> :P
<patdk-wk> well, my * key keeps getting in the way
<patdk-wk> hmm, no irc color support here
<Troy^> good lol
<Troy^> does a samba server have to be part of a workgroup or should be? never used it before just editing the smb.conf now
<cwillu_at_work> can't not be
<arrrghhh> Troy^, you have to define a workgroup
<arrrghhh> but it can be anything
<Troy^> ok cool thanks. not familiar with how windows network/workgroups work
<arrrghhh> Troy^, https://help.ubuntu.com/community/Samba
<arrrghhh> don't need to be ;)
<Troy^> smb.conf is huge and the share paths aren't located there
<Troy^> not even sure what i should touch tbh
<arrrghhh> share paths are in smb.conf
<arrrghhh> Troy^, please read the link i sent
<arrrghhh> lots of good info there
<donspaulding> OK, I think I've managed to get my upgrade to work its way back to 12.04.  Is there a tool that can tell me how far off my currently installed packages are from a stock precise install?
<Troy^> i am reading some of it and some of it is way too much
<hallyn> jdstrand: the error i get every time with this preseeded .iso created by uvt is "The installer encountered an error copying files to the hard disk"
<hallyn> soren: sorry, what are you referring to?
<jdstrand> that's not what I get
<jdstrand> it's usually somewhere else. I can try again
<arrrghhh> Troy^, one step at a time
<uvirtbot> New bug: #1039166 in qemu-kvm (main) "high use of resource" [High,Incomplete] https://launchpad.net/bugs/1039166
<Jay_Levitt> I'm using Ubuntu 12.04 cloud images to spin up instances. On a regular EC2 instance, I end up with "hostname --fqdn" showing the ip-*.internal hostname. But on a VPC instance, I get "hostname: Name or service not known". Looks like on the VPC, /etc/resolv.conf expects to find a nameserver at 10.0.0.2, and there's no such thing.. what would set that config file?
<jdstrand> hallyn: k, here it failed, but during hardware detection
<hallyn> jdstrand: of the cdrom, by chance?
<soren> hallyn: 15:25 < hallyn> soren: sigh, the only thing i can find in git log from v1.0..v1.1.0 is some small fd and memory leaks.  Noone has mentioned OOMing when they had net trouble under heavy load...
<soren> hallyn: ...which I completely misunderstood :)
<hallyn> soren: oh.  right.  ok :)
<soren> hallyn: I thought you were saying that none of the commits mentioned OOMing under load.
<jdstrand> hallyn: no, "Compilation failed in require at /usr/lib/perl/5.12/Hash/Util.pm line 32."
<jdstrand> et al
<jdstrand> it tried to run debconf-communicate and died
<soren> hallyn: Yikes, btw:
<Jay_Levitt> Oh, duh.. instances get their IP address from DHCP, and Amazon offers AmazonProvidedDNS at 10.0.0.2.  But it only resolves public DNS; it doesn't add entries for your VPC hosts.
<soren> $ git log -p qemu-kvm-1.0..qemu-kvm-1.1-rc4 | grep ^commit | wc -l
<soren> 2880
<hallyn> soren: yeah
<jdstrand> 1.1~rc+dfsg-1ubuntu10
<hallyn> jdstrand: ISTM the iso is bad, or qemu-kvm's cdrom drive is broken in q?
<hallyn> lemme try with the plain desktop iso again
<jdstrand> hallyn: I used oneiric desktop amd64
<jdstrand> hallyn: original iso has md5 of: 62fb5d750c30a27a26d01c5f3d8df459
<hallyn> jdstrand: oh, this was not using uvt?
<jdstrand> hallyn: but, I don't think it is the iso. if I downgrade qemu-kvm to hardy's, I installed like 30 times with no error
<jdstrand> hallyn: yes it was
<jdstrand> uvt takes the original iso and creates a new one with the preseeds
<hallyn> right, so q is why does virt-install using the original iso not do that?  (testing 2 more times to confirm)
<jdstrand> I was giving you the original iso since our install isos would be different
<soren> hallyn: a281ebc11a6917fbc27e1a93bb5772cd14e241fc perhaps?
<hallyn> jdstrand: might be worth trying with 0.11 machine type (or something)
<hallyn> soren: would that be for virtio block, net, or both?
<hallyn> might be worth a test
<jdstrand> I do have pc-1.1, though if your virt-install uses the same one that shouldn't be it
<jdstrand> hallyn: do I use --machine=pc=1.0?
<jdstrand> err pc=1.0
<jdstrand> meh
<jdstrand> pc-1.0
<hallyn> jdstrand: maybe.
<hallyn> (mine is also using 1.1)
<hallyn> soren: i assume you're thinking commit a281ebc11a6917fbc27e1a93bb5772cd14e241fc for fixing the bridge-network bugs?
<soren> hallyn: Anything that uses a virtio ring buffer, afaict.
<soren> hallyn: Yeah, sorry I'm a little terse. :/
<jdstrand> hallyn: I doubt it is the cdrom though-- I was having problems with existing machines immediately after upgrading to quantal, but it wasn't until later that I could find something reproducible
<Troy^> here is my minimal smb.conf http://paste.ubuntu.com/1178566/
<arrrghhh> Troy^, what's wrong?
<soren> hallyn: yeah, I think that's a pretty good candidate.
 * soren disappears
<hallyn> soren: thanks
<Troy^> alright now going to check and see if i can connect to the workgroup
<arrrghhh> Troy^, restart smbd (sudo service smbd restart) after smb.conf changes.
<hallyn> jdstrand: ok, scratch that.  error with stock oneiric installer too.  i guess i may have to grab another host where i can start bisecting
 * jdstrand nods
<jdstrand> it's an annoying bug. thanks for looking into it :)
<jdstrand> hallyn: it always seems to be me, doesn't it? :P
<jdstrand> that's actually not fair-- last cycle I don't think I had any weird problems like this
<Troy^> arrrghhh: did that can't seem to find the workgroup
<Troy^> i don't even know where i search for a workgroup in windows 7
<arrrghhh> Troy^, i just do a win+r, \\ip
<arrrghhh> then it shows the shares.  if you have dns, you can do \\hostname
<hallyn> jdstrand: oh, actually it's not always you - but it's always someone with a thinkpad!  :)
<jdstrand> heh
<hallyn> on tuesday i'll bisect with a vostro (needs some setup).  we'll see if it's immune
<hallyn> have a good holiday :)
<jdstrand> you too!
<Troy^> arrrghhh: tried that. it says connecting to the IP of the linux machine brings up the workgroup thing.. but it shows the host as \\HTPC\ and asks for a username. now that HTPC host is the actual windows machine itself which is going to retrieve the stuff from samba so it's really weird
<Troy^> asks for username and pass
<arrrghhh> yea.  put in the user/pass on the ubuntu box
<Troy^> tried
<Troy^> didn't work
<arrrghhh> erm
<arrrghhh> does for me...
<Troy^> maybe your smb is different?
<Troy^> it keeps saying connecting to domain \\HTPC
<Troy^> arg lol
<Troy^> which isn't right i even did \\192.168.2.6\MediaShare and the login dialog comes up but my ubuntu user credentials dont work
<arrrghhh> Troy^, could be my smb.conf.   what is your 'security' setting?  in smb.conf, should be either security = user or security = share IIRC
<Troy^> i pasted my smb.conf it is minimal
<arrrghhh> yea, put the in the [global] section
<arrrghhh> security = user first
<arrrghhh> security = share i think if you want no psw
<Troy^> ok i'll try now
<arrrghhh> tryin to think what else is required.... try that, restart smbd again.
<Troy^> ok done i connected, it showed me my 3 shares but i can't access any of them, says something wrong with network blah blah
<arrrghhh> so it shows the shares
<arrrghhh> but when you click on 'em you get said error?
<Troy^> yes
<arrrghhh> dafaq
<arrrghhh> i wonder if the workgroup name needs to match the clients
<Troy^> oh wait
<arrrghhh> i didn't think that had to
<Troy^> hold on
<arrrghhh> ?
<Troy^> ok should now work
<arrrghhh> Troy^, you got it?
<Troy^> yes tyvm arrrghhh
<Troy^> anyway to get windows to auto mount that?
<Troy^> streaming 1080p videos via 802.11n perfectly
<arrrghhh> Troy^, windows help... sorry.  i would guess mapped drive, but not the place :P
<Troy^> oh and via usb2.0 ext
<Troy^> i was skeptical
<arrrghhh> ?
<arrrghhh> oh, performance-wise.  yea, 1080p isn't so heavy on b/w - so long as you're not transcoding.
<Troy^> yea ext usb2.0 drive to wired linux server box and then the htpc is wifi 802.11n now everytime when HTPC is not in use it can be put asleep
<Troy^> as htpc won't be the load processing sabnzbd, sickbeard and couchpotato
<arrrghhh> yea
<arrrghhh> that's cool :)
<Troy^> now for the final part of this i need to install an internal 2tb sata drive.. NTFS as well get it to auto mount and set it up for samba
<arrrghhh> why NTFS?
<arrrghhh> if it's in the linux server
<Troy^> well it has all my stored movies and videos
<arrrghhh> (you can share any drive to windows)
<arrrghhh> i have ext4 drives in my linux box shared over samba to windows clients
<Troy^> wouldn't want to reformat it
<arrrghhh> oh, as in there's already data on it
<arrrghhh> bummer
<Troy^> is ntfs a performance issue?
<arrrghhh> Troy^, yes there is overhead
<arrrghhh> and other issues (permissions mainly)
<arrrghhh> i was touchy about overhead, and noticed a lot back when you had to use ntfs-3g
<arrrghhh> i think it's better now tho
<arrrghhh> i haven't used fuse/ntfs-3g or whatever it's called now in years.
<Troy^> yea :S
<arrrghhh> well stream something
<arrrghhh> and get htop or watch it in top whatever
<Troy^> i am now, the ext is ntfs that im streaming from now.. thing is the linux server is going to be doing alot of extracting and that will probably take a while on usb2.0
<Troy^> i will have it autosetup to download from newsgroups and auto extract
<Troy^> actually well it can all be done by the internal 2tb then transferred to the ext 3tb
<uvirtbot> New bug: #1044553 in cloud-init (main) "'password' setting broken by usergroup changes" [Undecided,New] https://launchpad.net/bugs/1044553
<Troy^> main reason i got the external: the htpc was the one doing all the processing, always on, and it has usb3.0 plus limited drive space
<uvirtbot> New bug: #1044559 in maas (main) "apt-get remove maas --purge while maas is running prevents full database purge" [Undecided,New] https://launchpad.net/bugs/1044559
<ruben23> hi guys, can anyone help me resize my server? i bought a hosted server with 1 terabyte of storage, but somehow they set it up like this--------------->http://pastebin.com/NP9iL3DC, and they told me i need to resize it on my end, which gives me this---------------->http://pastebin.com/g4a7JNRB---can anyone help me resize this to fit my 1 terabyte of storage? please
<Troy^> alright just installed 2tb internal into my linux machine
<Troy^> i see the drive there now mounting it?
<Troy^> can someone help me
<Troy^> http://pastebin.com/Kyx6kLGe
<Troy^> i created /media/2tbint/ and changed its permissions to 777
<Troy^> actually i figure it out i was trying to mount the wrong drive
#ubuntu-server 2012-09-01
<uvirtbot> New bug: #1044318 in quantum "pre-1.5 OVS has trouble with floating ips when pinging from the same box" [High,Confirmed] https://launchpad.net/bugs/1044318
<Troy^> hello, where do i find the motd login file for the server? i like some of ubuntu's default message, like the info it provides, and i would like to add to it, but /etc/motd shows exactly what's printed at login, not the scripts
<donspaulding> hey peeps, upgraded my server today from jaunty -> karmic -> lucid -> precise but I'm sure I've borked something along the way.  I got a kernel panic on reboot after the lucid -> precise upgrade and it looks like it's caused by the default kernel in /boot/grub/menu.lst not having any initrd.img file associated with it.
<donspaulding> I've got a mediocre KVM over IP console up, so I can't even pastebin the menu.lst, but I'm wondering if I need the default entry that's there.  Right below the "Ubuntu 12.04.1 LTS, kernel Default" (and its associated Recovery entry), I've got what looks like a legit 3.2.0-29-generic kernel entry.
<patdk-lap> they are exactly the same
<patdk-lap> you can always try it
<patdk-lap> but it should be more of the same error
<donspaulding> patdk-lap: well, the one that specifies itself as 3.2.0 also has an 'initrd' line below the 'kernel' line in the menu.lst, which makes me think it's not quite the same as the one above it.
<patdk-lap> oh, so just the line is missing, not the file
<patdk-lap> it will boot on that fine
<patdk-lap> the recovery basically means, not to *hide* stuff by default
<donspaulding> patdk-lap: yeah, I'm saying I've got 4 boot entries total, one labelled "Ubuntu 12.04.1 LTS, kernel Default", one labelled "Ubuntu 12.04.1 LTS, kernel Default (recovery mode)", one labelled "Ubuntu 12.04.1 LTS, kernel 3.2.0-29-generic", and one labelled "Ubuntu 12.04.1 LTS, kernel 3.2.0-29-generic (recovery mode)"
<donspaulding> patdk-lap: The ones labelled with the actual 3.2.0 version number have initrd lines, the ones labelled "Default" don't have initrd lines, but they're the default boot target.
<patdk-lap> I wonder if those are some xen boot thing, or if it's really messed up
<donspaulding> I want to get rid of the ones labelled "Default", my question is:  what's the best way to do that?
<patdk-lap> ya, I'm not sure, cause I'm not sure how they got in there
<donspaulding> patdk-lap: even kernels booted under xen need an initrd, no?
<patdk-lap> well, kernels don't need initrd's
<patdk-lap> but the way ubuntu does it, it's basically required, yes
<donspaulding> patdk-lap: it's not outside the realm of possibility, but this is not a dom0 or domU machine.  Bare metal all the way.
<donspaulding> can I just delete entries straight out of /boot/grub/menu.lst?  Or do I have to do it through some other method and then run update-grub again?
<donspaulding> Also, on precise, should I be running update-grub2?  Because I think I somehow don't have grub2 installed.
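One way to settle donspaulding's update-grub vs update-grub2 question: grub legacy uses /boot/grub/menu.lst, grub2 uses /boot/grub/grub.cfg, and regenerating the menu is safer than hand-deleting entries. `which_grub` is a hypothetical helper:

```shell
# which_grub BOOTDIR - rough check of which bootloader owns a /boot tree:
# grub2 writes grub.cfg, grub legacy writes menu.lst.
which_grub() {
    if [ -f "$1/grub/grub.cfg" ]; then echo grub2
    elif [ -f "$1/grub/menu.lst" ]; then echo "grub legacy"
    else echo unknown
    fi
}

# On the live system:
#   which_grub /boot
#   dpkg -l 'grub*'   # the "grub" package is legacy, "grub-pc" is grub2
#   sudo update-grub  # regenerate the menu after removing/changing kernels
```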
<Nephelo> I've got a problem with my Ubuntu 10.04 Server. I'm using smbd and mdadm (raid 5). Now I've got the problem that the server freezes after a time, accessing via smb. Nothing in syslog and RAM seems ok.
<rtcg72a> Hi there, I am trying to use apt-get install to install a few dependencies for a package named webmin. However i get a message saying that the package is not available, but is referred to by another package. What can I do?
<Troy^> where do i find the default motd script? i like how the default has some info about the system, i would just like to modify that further and add to it
<patdk-lap> in /etc/update-motd.d
<nandersson> Hi, I would like to change GRUB_DEFAULT in /etc/default/grub, but how on earth can I find out just what integer to put when /etc/grub.d/* shows a mess like several rows looking like http://stevenf.com/notes/index.php/?Create+bootable+ISO+disk+images+in+Linux
<nandersson> is there  a way to do a text-render of the content in /etc/grub.d/* so I know whether to put GRUB_DEFAULT=3,4,5, or whatever?
<nandersson> Now it feels like fumbling in the dark.
<nandersson> I want to boot a row in a submenu, and just how do I do that?
<nandersson> ok....did a little googling and, surprise, surprise. It is IMPOSSIBLE :D http://ubuntuforums.org/showthread.php?t=1195275
<Troy^> can anyone tell me where the login motd is? so i can edit the default one
<marc_12314> is that what you're looking for  https://wiki.ubuntu.com/UpdateMotd  ?
<sw> Troy^: /etc/motd
<Troy^> sw: looked in there but i don't see any of the script portions; i like the fact the default has ram and hard drive info etc.
<sw> Troy^: scripts update /etc/motd with that. it reads from /etc/motd though. and that's what you asked for ;o
<sw> ^ link is prolly what you want then
<uvirtbot> sw: Error: "link" is not a valid command.
<Troy^> sw: just didn't know how to explain it :P
 * sw gags uvirtbot with ubottu 
<Troy^> hmm /etc/update-motd.d seems to have some of the scripts but still seems like something is missing
<Troy^> ahh here we go /usr/bin/landscape-sysinfo
<Troy^> ok so if i wanted to add a program such as hddtemp to the motd. could i just add in the motd.tail /usr/sbin/hddtemp /dev/sda
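What Troy^ is after, sketched: on 12.04 the dynamic MOTD is assembled from the numbered scripts in /etc/update-motd.d/ (plus landscape-sysinfo), so extra output like hddtemp belongs in a new script there rather than in /etc/motd, which is regenerated. `add_motd_script` and the fragment name are hypothetical:

```shell
# add_motd_script DIR NAME CMD - drop an executable fragment into an
# update-motd.d-style directory; each fragment's stdout becomes part of
# the login MOTD.
add_motd_script() {
    local dir=$1 name=$2 cmd=$3
    printf '#!/bin/sh\n%s\n' "$cmd" > "$dir/$name"
    chmod +x "$dir/$name"
}

# Troy^'s hddtemp example against the real directory (needs root):
#   add_motd_script /etc/update-motd.d 60-hddtemp "/usr/sbin/hddtemp /dev/sda"
```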
<Troy^> is there a way to perform a internet speed test on my ubuntu server?
<xpistos> Hello. Is there a channel for Landscape?
<Kartagis> Troy^: speedtest.net
<Troy^> ? command line only?
<lickalott> working something out right now Troy^
<lickalott> gimmie a min
<lickalott> http://stackoverflow.com/questions/426272/how-to-test-internet-connection-speed-from-command-line
<lickalott> was going to try first to see if it worked (command line only) but the download for the .zip is 15 mins
#ubuntu-server 2012-09-02
<uvirtbot> New bug: #1044594 in cloud-init "Cloud-archive not loading types correctly" [High,In progress] https://launchpad.net/bugs/1044594
<methods> anyone know how to get snmpwalk to show names instead of numbers ?
<jetole> Hey guys. Does anyone know a command that will show me which license a debian package has?
<jetole> for example if I have apache installed then I could run `apt-some-cmd --option1 apache2` and it would say apache2 has the apache license or something?
<anti-freeze> jetole, try using aptitude to find the package and get some info
<jetole> yeah I'm looking into this now and it looks like licensing info isn't a standard part of a .deb file (surprised) but for the most part, for most applications, it looks like /usr/share/doc/<pkg>/copyright is the file to look at
<jetole> This is going to make compiling a collection of licenses we use much more difficult
<Zaehlas> I critically need assistance troubleshooting dhcp.   clean server install, being forced to use the new isc dhcp.  I have checked config and log files best I can.  The server gets requests, and answers them, but my systems wont connect.  I have a paste with a few of the config files, log of the requests, and the running process..  http://pastebin.com/ZC3pHGhA
<Zaehlas> Please help...  I've been fighting this issue for a few days now.  Canonical support won't be available for days, and I have to pay for an entire year it seems, not one issue.  Prior to this clean install, the server and dhcp WORKED
<randomDude> so i've logged into my ubuntu 12.04 server via ssh, and the greeting message reveals it needs a restart. is there a way to reveal why it needs a restart?
<andol> randomDude: Apart from parsing logs I am not sure if there is a trivial/obvious way to find out why, but it is most likely due to a new kernel-upgrade having been installed.
<andol> randomDude: That login message is display when the file /var/run/reboot-required is detected, and I assume that file is created by package upgrade which requires a reboot.
<RoyK> Zaehlas: check the logs first
<andol> randomDude: Hmm, did a bit of reading, and the package(s) in question might be listed in /var/run/reboot-required.pkgs
<randomDude> andol: contains :  linux-image-3.2.0-29-generic linux-base
<randomDude> andol: cheers
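andol's description can be wrapped up in a small check. A sketch, exercised here against a throwaway marker file instead of the real /var/run/reboot-required so it runs anywhere; package maintainer scripts (kernel upgrades, libc, etc.) create the marker and append to the .pkgs companion:

```shell
# check_reboot FLAGFILE: report a pending reboot and which packages
# requested it, based on the marker files andol describes.
check_reboot() {
    if [ -e "$1" ]; then
        echo "reboot required"
        [ -r "$1.pkgs" ] && sort -u "$1.pkgs"
    else
        echo "no reboot required"
    fi
}

# Demonstrate on a throwaway marker rather than the real one:
tmp=$(mktemp -d)
touch "$tmp/reboot-required"
printf 'linux-image-3.2.0-29-generic\nlinux-base\n' > "$tmp/reboot-required.pkgs"
result=$(check_reboot "$tmp/reboot-required")
echo "$result"
rm -r "$tmp"
```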
<Zaehlas> RoyK, hmm..  problems reading?  check the paste above, it has my logs in it.
<Zaehlas> The switch is working, I've swapped cables, definitely having some sort of an issue with pinging on my internal network, but I have less than 3 hours to get to the airport, and this all worked 2 days ago, I'm sure it's not a hardware issue
<uvirtbot> New bug: #1044964 in samba (main) "File Server could did not crash, but causes errors often (dup-of: 913809)" [Undecided,New] https://launchpad.net/bugs/1044964
<mwcampbell> libvirt newbie question: If I want to delete the entire configuration for a VM, do I just delete the config file in /etc/libvirt/qemu, or is there a virsh command I can use to do that?
<mwcampbell> ah, found the undefine command
<Nephelo> Hello, I've got a problem with my Ubuntu 10.04 server system. I'm using mdadm (raid 5) and smbd for network shares. After a time the server freezes; only a reset helps. RAM is ok, and the FS seems to be as well. I tried to create an archive and this works. Nothing in syslog. Can someone help me?
<Nephelo> No one :(
<uvirtbot> New bug: #1045027 in qemu-kvm (main) "iPXE kills kvm with KVM: entry failed, hardware error 0x80000021" [Undecided,New] https://launchpad.net/bugs/1045027
<Troy^> Nephelo: hmm that is weird im using raid 1 and samba but not sharing the Raid 1 drives. I do sometimes get very laggy input/output for some reason.. I don't know where to start
<Patrickdk> for nephelo, dunno
<Patrickdk> troy, using mdadm or dmraid?
<Nephelo> I think the problem is not the Filesystem itself, because creating a big archive works.
<maxb>  /goe
<maxb> oops
<Jeffsi> Hello, I have a ubuntu server and recently whenever I login it gives me this[ http://pastebin.com/PzUftHQQ ] I have no idea how to fix the fifth line so it displays correctly. In addition to this I am having difficulty using commands like ifconfig which give a similar error. Anyone know what may be causing this?
<TheLordOfTime> Jeffsi:  unknown paste ID
<TheLordOfTime> your pastebin is invalid,.
<Troy^> he deleted it, because it did work
<TheLordOfTime> ah
<Jeffsi> TheLordOfTime: i still have the issue but i set the timer too short
<Jeffsi> http://pastebin.com/bTRePbrd
<TheLordOfTime> Jeffsi:  can you show me a pastebin of the output of `ls -alshF /bin/` | grep ls  ?
<TheLordOfTime> if 'ls' is missing on your system, something is wrong :P
<Jeffsi> no such file or directory
<Patrickdk> or path is messed up
<Patrickdk> or your running a funky shell :)
<TheLordOfTime> oh right, forgot ls is gone
<TheLordOfTime> damn it xD
<TheLordOfTime> WOAH HOLY CRAP
<TheLordOfTime> whoops
<Jeffsi> i have no idea how this could have happened
 * TheLordOfTime just segfaulted PHP on a production server
<Patrickdk> TheLordOfTime, that can't be *too hard* to do :)
<TheLordOfTime> well... memory level segfaults are kind of hard to do
<TheLordOfTime> and this one's a damn oddball one
 * TheLordOfTime thinks he knows what happened and goes to fix
<TheLordOfTime> Jeffsi:  what's `echo $PATH` show?
<Jeffsi> TheLordOfTime: http://pastebin.com/Lhmdvn13
<Troy^> can you guys help me i have tried several different things to get hddtemp to run now root so i can add it to my login motd
<Troy^> fizz@server:~$ hddtemp /dev/sda
<Troy^>  /dev/sda: Permission denied
<Troy^>  
<TheLordOfTime> Troy^:  try with sudo
<TheLordOfTime> /dev/ is a protected folder, last i checked
<Troy^> well i know it works with sudo
<Troy^> i want to use hddtemp without it for a login motd
<Patrickdk> well, it's not going work without sudo
<Patrickdk> so you need to give permission to run that using sudo in sudoers
<Patrickdk> or, have the motd update script run it instead
<Troy^> hmm?
<Jeffsi> TheLordOfTime: did that help at all? I'm really stuck and tempted to clean install
<TheLordOfTime> Jeffsi:  that's what i'd advise, back up any critical data
<Jeffsi> looks like i will have something to do this evening then, but that is probably best. Any idea how this could have happened? Im guessing maybe an update broke it.
<Troy^> Patrickdk: if i use the motd update scripts won't it still need sudo?
<Patrickdk> not sure, if those scripts are run as root, no
<Troy^> ?
<Troy^> no meaning?
<TheLordOfTime> Troy^:  he's not sure dude
<TheLordOfTime> i'd ignore the "no" part, just assume "not sure"
<TheLordOfTime> but until you test it, well...
<Troy^> lol ok
<Troy^> -rwxr-xr-x   1 root root   29 Sep  2 15:51 51-hddtemp-info so this in /etc/update-motd.d should run right?
<Troy^> doesn't seem to be working
<Troy^> there we go got it to work without sudo
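The fragment Troy^ got working might look like the following; the filename 51-hddtemp-info is just the example from the log. pam_motd runs everything in /etc/update-motd.d as root at login, which is why hddtemp needs no sudo there. Written out to a temp file here so the sketch is self-contained:

```shell
# A sketch of an /etc/update-motd.d fragment for hddtemp.
frag=$(mktemp)
cat > "$frag" <<'EOF'
#!/bin/sh
# install as /etc/update-motd.d/51-hddtemp-info and chmod +x it;
# pam_motd executes this as root, so no sudo is needed.
command -v hddtemp >/dev/null 2>&1 && hddtemp /dev/sda 2>/dev/null
exit 0
EOF
chmod +x "$frag"
out=$("$frag")   # empty on machines without hddtemp installed
rc=$?
rm -f "$frag"
```

The `exit 0` matters: a fragment that fails can leave the motd half-updated, so it degrades to printing nothing when hddtemp is missing.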
<TheLordOfTime> anyone able to help me diagnose an issue with 'gssd' where it can't start?  A user needs it to mount their nfs share(s)
<TheLordOfTime> http://pastebin.com/E8YQ7UTJ  <-- relevant output and log files
<Patrickdk> TheLordOfTime, try doing what the logs tell you :)
<TheLordOfTime> Patrickdk:  right, not helpful, considering nothing's using it...
<TheLordOfTime> just looked too
<TheLordOfTime> lemme try a reboot
<Patrickdk> hmm?
<TheLordOfTime> see if that fixes the thing
<TheLordOfTime> Patrickdk:  nothing shows in *any* output anywhere
<TheLordOfTime> the last time i ran into this i had to reboot to get it to force unmount
<TheLordOfTime> and stop being busy
<Patrickdk> ya, I've had that issue with lsof
<Patrickdk> looks like nothing is, but something is, just using it under a different name
<TheLordOfTime> Patrickdk:  https://pastebin.com/jJbaYNp8
<TheLordOfTime> that's the lsof output, when grep'd
<Patrickdk> that is an issue
<TheLordOfTime> indeed.
<TheLordOfTime> now, how do we resolve it?
<TheLordOfTime> what is rpc.idmapd anyways?
<Patrickdk> the other part that is *required*
<Patrickdk> hmm, it shouldn't be erroring
<Patrickdk> what version ubuntu?
<TheLordOfTime> precise
<TheLordOfTime> 64bit
<Patrickdk> wonder why it's attempting to unmount
<TheLordOfTime> good question
<TheLordOfTime> but i cant start gssd
<TheLordOfTime> which means the system can't mount the NFS share
<TheLordOfTime> because gssd is *required* in order to use this NFS system
<Patrickdk> sure it can, it just can't do it using nfs4 :)
<TheLordOfTime> <TheLordOfTime> because gssd is *required* in order to use this NFS system  <--0
<TheLordOfTime> well nfs4 is the only way to access it, so...
<TheLordOfTime> i guess that there's no solution?
<Patrickdk> you updated?
<TheLordOfTime> updated about 6 hours ago
<Patrickdk>                 # ignore any failures caused by the filesystem still
<Patrickdk>                 # being in use
<Patrickdk>                 umount "$PIPEFS_MOUNTPOINT" || true
<TheLordOfTime> running 12.04.1, so...
<TheLordOfTime> unless something went fubar...
<Patrickdk> so those *errors* don't mean anything
<Patrickdk> it's something else
<TheLordOfTime> well then why isnt gssd starting...
<TheLordOfTime> that's all that shows in the upstart logs for it
<TheLordOfTime> and gssd doesn't have its own logs
<TheLordOfTime> would there be anything in dmesg?
<Patrickdk> doubt it
<Patrickdk> normally it would be in daemons or messages
<TheLordOfTime> should i file a bug on gssd then?
<TheLordOfTime> because this seriously fubar's systems
<Patrickdk> well, I have no issues running gss and idmap
<Patrickdk> but still, it should say somewhere why not
<TheLordOfTime> wait i think i fixed it
<TheLordOfTime> apparently a config option was skipped, lemme test this now...
<TheLordOfTime> ahhh
<TheLordOfTime> there we go
<TheLordOfTime> i missed one configuration instruction to edit the nfs_common defaults
<TheLordOfTime> adding that made it start and stay started
 * TheLordOfTime facepalms
<Patrickdk> :)
<TheLordOfTime> HOW THE HECK DID I MISS THAT
<Patrickdk> I guess you already setup kerberos?
 * TheLordOfTime has the urge to repeatedly facedesk, but resists that urge
<Patrickdk> really pointless to run gss without kerberos, cause gss depends on kerberos
<TheLordOfTime> already set that up
<TheLordOfTime> keytab, krb5.conf, etc. everything's configured for that
<TheLordOfTime> just forgot to change the nfs_common defaults in /etc/default/
<TheLordOfTime> whats the difference between krb5p and krb5i?
<TheLordOfTime> one's privacy, one's integrity, but what's that defined to be
 * TheLordOfTime kind of inherited this nfs share system, and didn't really want to inherit it
<TheLordOfTime> (work sucks)
<TheLordOfTime> (sometimes)
<RoyK> unemployment sucks a bit more ;)
<TheLordOfTime> good point
<MTecknology> How can I put something in /etc/fstab so that it doesn't try to mount until there's a network connection?
<Patrickdk> _netdev
<MTecknology> If I add _netdev I get "unknown option _netdev (ignored)"
<Patrickdk> when doing what?
<Troy^> may need to stick some more ram in my server
<MTecknology> In /etc/fstab I have this -  luxor.ngx.cc:ngx        /mnt/ngx        glusterfs       defaults                0 0
<Patrickdk> oh, netdev I think might be limited to nfs usage
<MTecknology> oh :(
<Patrickdk> but I dunno :)
<Patrickdk> never used glusterfs
<TheLordOfTime> Patrickdk:  ooh, where'd the syntax be for that, since that may make these nfs share thingy's easier :P
<Patrickdk> man mount?
<TheLordOfTime> i meant in the fstab entry
<TheLordOfTime> (as in where would it go in the fstab entry)
<Patrickdk> man fstab?
<Patrickdk> surprisingly, mount options go in the fstab options column :)
<TheLordOfTime> :P
 * TheLordOfTime tries not to mess with fstab, probably why he hasnt memorized the syntax
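Patrickdk's point in concrete form: options, including _netdev, go in the fourth of fstab's six columns. A sketch (server names and mount points illustrative; note MTecknology's result above — in this era glusterfs printed "unknown option _netdev (ignored)", so the option only reliably affects network filesystems like NFS):

```
# <device>           <mountpoint>  <type>     <options>            <dump> <pass>
server:/export/home  /mnt/home     nfs4       _netdev,sec=krb5     0      0
luxor.ngx.cc:ngx     /mnt/ngx      glusterfs  defaults,nobootwait  0      0
```

On Ubuntu of this vintage, nobootwait told mountall not to block boot on a mount that cannot come up yet, which is often the practical workaround when _netdev is ignored.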
<jgcampbell300> can anyone tell me the cobbler command to make cobbler use 8.8.8.8 for eth1 in dns ... i have cloud on eth0 and connection to internet on eth1 ... but it is resetting my dns to local so i cant resolve updates or anything else on the net
<jgcampbell300> is there a way to make cobbler use local dns and then search external dns if i cant find what it is looking for
<jgcampbell300> i need help please with external dns working with ubuntu 12.04, cobbler, dnsmasq, maas-dhcp, juju
#ubuntu-server 2013-08-26
<TJ-> tgm4883: see http://archive.ubuntu.com/ubuntu/pool/main/l/linux/  ... or for out-of-support packages, http://old-releases.ubuntu.com/ubuntu/pool/main/l/linux/
<tgm4883> TJ-, TheLordOfTime 12.04. It's not on either of those sites, although I found it on the kernel team PPA. I'm going to attempt using a newer version and seeing if that will resolve the issue I'm having
<tgm4883> if not, I'll attempt the one from the PPA
<TheLordOfTime> tgm4883, i was going to link you to the archive for it
<TheLordOfTime> because i believe i could get those packages
<TheLordOfTime> tgm4883, but why do you specifically need the linux-image-server package?
<TheLordOfTime> i don't think there's a server-specific image now, but I might be wrong...
<tgm4883> TheLordOfTime, not necessarily the server package, but that particular version number
<tgm4883> I've got a test server and a production server. The test server works fine, but the production server doesn't work quite right when it comes to backing up the virtual machine, so I'm trying to make it identical to the test server (the previous admin didn't do that)
<Patrickdk_> exactly, how do you backup the machine?
<tgm4883> Patrickdk_, using a windows program called vm explorer. It uses the vmware API to backup the machine
<TJ-> tgm4883: https://launchpad.net/~canonical-kernel-team/+archive/ppa/+sourcepub/3248693/+listing-archive-extra
<tgm4883> TJ-, awesome, I'll grab that. Thanks
<Patrickdk_> funny, vmware has no backup api, only a cbl
<tgm4883> Patrickdk_, we use this http://www.trilead.com/
<Patrickdk_> ya, looking, I use veeam
<dassouki> looking for a document collab and task management CMS of some sort for a group of 10 folks .. low budget so FOSS is ideal
<soy_el_pulpo> dassouki: we are using asana, I think is free for less than 10
<dassouki> ok
<dassouki> does it include document sharing? and task management?
<soy_el_pulpo> you can add documents to the projects...
<soy_el_pulpo> up to 15 users is free..
<soy_el_pulpo> the good thing is that it has iOS and android app to keep you working on the road, they work very nicely
<dassouki> does it work on the ipad you know?
<soy_el_pulpo> documents per project, photos, etc...
<soy_el_pulpo> we spent so much time looking for something...
<soy_el_pulpo> also we tried Social Office from Liferay that I also liked, but my team has old-fashioned people that cannot use "modern" tools very well... jeje
<soy_el_pulpo> dassouki: take a look at it...
<soy_el_pulpo> msg dassouki
<zul> is there a python way of determining number of cpu threads?
<brendan-> is there a way to find out when a specific package was added to the repos?
<brendan-> http://packages.ubuntu.com/precise/libssl1.0.0
<brendan-> im building a debian package for ruby & there's a dependency on libssl0.9.8 according to the script i came across, but im not sure if that dependency was a result of when it was created (about 1yr ago) or for another reason
<FunnyLookinHat> Hey guys - is there a reason that apache2-mpm-prefork is still the default?  I had thought that worker was the best means to take advantage of modern hardware ?
<TJ-> FunnyLookinHat: Good question; it's down to PHP and some of the libraries it depends on not being thread-safe, but I've not been able to find a list of the supposedly unsafe libraries
<FunnyLookinHat> Yeah - that's what I thought.
<FunnyLookinHat> I asked in #httpd and got the usual "use worker but don't use mod_php" - which is more or less a PITA
<TJ-> I installed php5 on a new 13.10 apache 2.4 install and it replaced worker with prefork. I quickly reversed that, switched to event, and installed php5-fpm
<patdk-wk> and?
<patdk-wk> you should have installed php5-fpm first, instead of plain php5
<patdk-wk> plain php5 defaults to mod_php
<ivoks> zul: there is
<ivoks> zul: i remember doing that for one project
<zul> ivoks:  cpu threads?
<ivoks> give me a second to check, it was long long time ago
<ivoks> multiprocessing.cpu_count()
<ivoks> ah, just the number of cpus
<ivoks> zul: ^
<zul> ivoks:  yeah thats what i used ;)
<TJ-> zul: ivoks: wouldn't a simple read of /proc/cpuinfo and then look at "siblings" and "cpu cores"?  see http://www.richweb.com/cpu_info
<zul> TJ-:  thanks
<TJ-> zul: also, look for the "ht" flag
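TJ-'s /proc/cpuinfo approach, as shell: "processor" entries count logical CPUs (threads), while comparing "cpu cores" with "siblings" reveals hyperthreading (siblings greater than cores means the "ht" flag is in use). A sketch:

```shell
# Count logical CPUs and cores-per-socket from /proc/cpuinfo (Linux only).
threads=$(grep -c '^processor' /proc/cpuinfo)
# "cpu cores" is absent on some architectures, hence the fallback:
cores=$(awk -F: '/^cpu cores/ {gsub(/ /,"",$2); print $2; exit}' /proc/cpuinfo)
echo "logical CPUs: $threads, cores per socket: ${cores:-unknown}"
```

The Python equivalent the channel settled on, multiprocessing.cpu_count(), likewise reports logical CPUs, not physical cores.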
<Serhad> hey volks
<Serhad> I got a question to commerce? How can I create a ney payment method like paying if the guy pick the product at my office
<Serhad> ney = new
<yolanda> jamespage, zuul https://code.launchpad.net/~yolanda.robla/horizon/autokpgtests/+merge/182168
<yolanda> sorry, zul.. . that's consequence of my CI work :)
<jamespage> yolanda, comments on MP
<yolanda> jamespage ok, i'll fix it
<Serhad> sorry wrong chat...
<yolanda> jamespage, updated mp
<kurt_> how does one install the software in a bzr branch once its downloaded?
<sarnold> kurt_: depends upon the software in question; bzr is just a source control system similar to git, mercurial, subversion, bitkeeper, perforce, etc.
<kurt_> sarnold: I'm trying to install juju 1.12
<kurt_> :)
<sarnold> kurt_: aha! :) can you use their ppa? that will provide you with precompiled packages for several distributions and architectures and save you the hassle of building it yourself
<kurt_> I could, sure… more googling ha ha
<sarnold> kurt_: that'd be the add-apt-repository and following command here: https://juju.ubuntu.com/docs/
<kurt_> sarnold: too easy thanks.  seems this version doesn't support --version flag or -version flg
<sarnold> kurt_: try just "juju version"?
<sarnold> I've got a vague memory that something there was a bit goofy
<kurt_> :) duh
<kurt_> 1.12.0-quantal-amd64
<kurt_> perfect!
<kurt_> thank you sir
<sarnold> woo, back to fighting your fight :) good luck! hehe
<kurt_> yes indeed.  I'm sure you've been reading on the juju list…. hee hee
<sarnold> kurt_: just irc, I try to not subscribe to too many lists, it's an easy way to lose hours every day :)
<kurt_> indeed and understood.  I appreciate your help. :)
<koolhead17> hi all
<arooni-mobile> trying to install nginx on ubuntu 12.04;  i have previously installed from source in /opt/nginx when running an older version of ubuntu.  how can i install correctly?  when i do sudo apt-get install nginx i see this https://gist.github.com/arooni/d27e1b6cc6443fcf4a4b
<sarnold> arooni-mobile: yikes you've got a good problem there to solve. echo $PATH as root will probably give you different results than the actual environment that apt-get was executed with. try this instead: sudo env
<arooni-mobile> sarnold, what about adding /sbin to roots path?
<sarnold> arooni-mobile: ooh, note this too: "...chasetoys/.bashrc:10:PATH=/bin:/usr/bin:..."
<arooni-mobile> sarnold, i updated root's path to add /sbin ... but doesnt seem to make a difference
<sarnold> arooni-mobile: whatever you've currently got is also scribbling over the path you've got
<arooni-mobile> sarnold, should i get rid of my users path additions?
<sarnold> arooni-mobile: you should at least inspect them to make sure they make sense
<arooni-mobile> havent messed with them for awhile
<sarnold> arooni-mobile: I personally would be terrified to have a user's home directory contents in root's PATH anywhere, but that might be difficult to fix.
<arooni-mobile> sarnold, i think i just copied thie bashrc file from my home laptop
<arooni-mobile> the only thing i need is the ruby gem stuff
<arooni-mobile> sarnold, i commented out all that stuff modfiying the $PATH
#ubuntu-server 2013-08-27
<dmbQs> Need some help reconfiguring Grub after restoring to a new VM. Can anyone help me with the instructions here: I restored a Tar backup to a new VM. Now I need to reconfigure Grub. Need some help with the commands found here: https://help.ubuntu.com/community/BackupYourSystem/TAR. Would anyone mind helping??
<delinquentme> sshfs is supposed to create the mount dir on execution right?
<delinquentme> im getting a complaint from bash: fuse: bad mount point `/home/thrive/www': No such file or directory
<sarnold> delinquentme: no, a directory needs to exist somewhere before you can mount a filesystem on it
<delinquentme> sarnold, much apreesh
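As sarnold says, fuse refuses a missing mount point with exactly the "bad mount point … No such file or directory" complaint, so the directory has to exist first. A sketch (the remote host and path are illustrative, and a temp dir stands in for /home/thrive/www):

```shell
# Create the mount point before calling sshfs; fuse will not make it.
mnt=$(mktemp -d)/www       # stand-in for /home/thrive/www
mkdir -p "$mnt"
# sshfs thrive@host:/var/www "$mnt"   # then mount (host/path illustrative)
```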
<babak_>  question mod_security comes on 12.04 as 2.6.3 and the current version is 2.7.5 is it ok to stay with 2.6.3 ? I though we where supposed to use LTS for things like this.
<rbasak> Daviey: I'm looking at http://reqorts.qa.ubuntu.com/reports/ubuntu-server/merges.html. It looked truncated; then I realised that it's being updated as I read it. What do you think about changing the cronjob to write to a temporary file and then renaming in, so we get an atomic update? Not worth it for the one time I hit the problem? :)
<Daviey> rbasak: Yeah, i can do that.
<Daviey> rbasak: if you saw this crontab... you'd be wishing we had wrappers :)
<Daviey> (busy)
<rbasak> :)
<Daviey> rbasak: should be good now
<rbasak> Daviey: thanks!
<Daviey> zul: hey, can you check if kombu can be sync'd? kombu 2.5.12-0ubuntu2 -> 2.5.12-1
<Daviey> jamespage: looks like gdisk can be sync'd?
<jamespage> Daviey, probably
<yolanda> zul, trying to build ceilometer for the tests, but having a lot of RuntimeError: No 'ceilometer.storage' driver found, looking for ''
<zul> yolanda:  yeah ill have a look at it today
<yolanda> i was trying to build ceilometer to run the tests, do you prefer that i wait for it?
<zul> yolanda:  yeah
<yolanda> ok
<yolanda> zul, please let me know when it's fixed so i can add my tests
<zul> jamespage/roaksoax: https://code.launchpad.net/~zulcss/python-novaclient/babel/+merge/182367
<wsk233> hello, i need to set up an ftp server for some internal use. the only thing i need to be able to configure is chroot. which ftp server is easiest to set this up with?
<greppy> wsk233: I've always liked proftpd if I *really* had to setup a plain ftp server, I much prefer using sftp.
<wsk233> of course i understand
<zul> jamespage: https://code.launchpad.net/~zulcss/python-neutronclient/deps-refresh/+merge/182372
<zul> Daviey:  should be ok to sync
<crass> I'd like to run the server iso from an iso on a filesystem on a usb stick. Is this possible?
<crass> this can be done using the loopback.cfg with the desktop iso, but server's loopback.cfg just has an entry for testing the iso, not actually running it
<Monotoko> what should I be allowing through the firewall to stop a DNS lookup failure when doing a proxypass?
<Monotoko> outgoing is allowed... I don't see the problem
<hallyn_> zul: if/when my laptop(s) overheat can you step in for a few mins in the virt stack talk (in 1.5 hrs)
<zul> hallyn_:  sure
<hallyn_> thx
<zul> hallyn_:  if my network connection doesnt crap out on me :)
<hallyn_> sbux?
<zul> hallyn_:  rural internet
<disposable> i have a vlan interface on top of bond(active-backup) interface on top of 2 physical interfaces. I recently upgraded from 10.04 to 12.04 and suddenly I see a lot of dropped packets on my bond0 and the inactive underlying eth interface. same problem on 3.2, 3.5, 3.8, 3.9 kernels. it seems about a 1/5 of received packets are dropped. my cards use igb module (intel 82576). has anybody come across this?
<disposable> i've tried juniper and procurve switches
<zul> roaksoax: mind +1ing https://code.launchpad.net/~zulcss/python-novaclient/babel/+merge/182367 and https://code.launchpad.net/~zulcss/python-neutronclient/deps-refresh/+merge/182372 please
<zul> yolanda:  ceilometer should be fine now
<roaksoax> zul: done
<yolanda> zul, thx
<zul> thanks
<patdk-wk> disposable, works great for me
<disposable> patdk-wk: if you type ifconfig bond0, do you not see any dropped packets?
<patdk-wk> oh, that, you're supposed to see that
<patdk-wk> every single multicast/broadcast packet it sends out one nic, will be dropped when it's received on the other nic
<disposable> patdk-wk: i'd understand seeing dropped packets on the ethX device that isn't the current slave, but not on bond0
<disposable> patdk-wk: on your bond0 interface is the number of dropped packets also around 1/5 of all received?
<patdk-wk> 1/230th
<disposable> patdk-wk: thanks.
<patdk-wk> it exactly matches the received and dropped count of my backup nic
<disposable> patdk-wk: yes, same here.
<patdk-wk> so that easily explains it
<patdk-wk> it's a backup, anything received isn't supposed to be, and is a dup
<patdk-wk> and is dropped
<patdk-wk> as I said
<disposable> patdk-wk: weird thing is that every couple of reboots i get 0 dropped packets.
<disposable> by couple i mean >20
<Shadowandlight> anyone have ideas on how to deploy a cloned copy of Ubuntu faster?  I need to change passwords (mysql and linux) and settings faster then doing everything by hand... if possible http://askubuntu.com/questions/337503/easiest-solution-to-modifying-linux-mysql-passwords-and-other-settings-after-c
<disposable> Shadowandlight: i'd suggest chef/puppet/cfengine but that may be too much overhead if you're just changing a few things. in your case i'd simply start clusterssh, log into all machines at once and just click into individual windows to enter different passwords.
<Shadowandlight> do the config tools run locally or do they need to be web based like webmin / cloudmin?
<thebwt> are there any recommendations for a php ppa? I'm trying to get newer version stuff for a 12.04 server
<arosales> jamespage, smoser is there going to be a Server IRC meeting today since there is 1308 vUDS?
<jamespage> arosales, nope
<jamespage> can't do sessions and irc meeting at the same time
<arosales> jamespage, ok
<arosales> jamespage, I thought so just wanted to confirm
<smoser> arosales, i say no
<arosales> smoser, ack thanks. I'll put a reminder in ubuntu-meeting since we didn't get a mail out to the list
<arosales> or I may have missed it if it did go out :-)
<disposable> Shadowandlight: unfortunately, chef/puppet require installation of ruby and lots of additional software. cfengine is smaller and faster but with the steepest learning curve. for clusterssh, you just need ssh.
<rbasak> disposable: is there a real world problem with pulling in all this "overhead"? apt-get does it automatically for you, and it doesn't use a significant amount of space on an installed server node. I also don't like the ruby dependency, but I don't think it'd affect my recommendation to look into chef/puppet over anything else.
<wedgwood> Is there a good way to set environment variables when calling lxc-attach? I need something like --clear-env plus a few extra values.
<wedgwood> hallyn_: If I might trouble you, any ideas? ^
<rbasak> wedgwood: I call env after lxc-attach. lxc-attach --clear-env ... -- env foo=bar my_command
<wedgwood> rbasak: ah, I'll give that a shot. there are ~a dozen things to set in some cases, but I still think that could be manageable
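rbasak's pattern generalises to wedgwood's "~a dozen things" case: --clear-env empties the container's environment and env(1) re-injects just the variables you list, which can live in a file. A sketch (container name and variable names are illustrative); the env(1) half is demonstrated without a container:

```shell
# Keep the variables to inject in a file, one VAR=value per line
# (no spaces in values for this simple word-splitting approach).
envfile=$(mktemp)
printf 'FOO=bar\nBAZ=qux\n' > "$envfile"

# Real invocation would be (container name hypothetical):
#   lxc-attach -n mycontainer --clear-env -- env $(cat "$envfile") my_command

# The env(1) half, demonstrated standalone with a scrubbed environment:
out=$(env -i $(cat "$envfile") /bin/sh -c 'echo "$FOO-$BAZ"')
echo "$out"
rm -f "$envfile"
```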
<smoser> ok. i have a stupid question.
<smoser> i wnat to make 'cloud-utils' package basically only depend on another package now.
<smoser> transitional if you will.
<hallyn_> wedgwood: had to look back at the changelog;  right now rbasak's is the best option.  There is code for 'extra_keep_env', but it's not yet hooked up to the cli
<hallyn_> smoser: what is your q?  (does it belong in -devel? :)
<smoser> i think its never mind.
<hallyn_> ok :)
<roaksoax> smoser: yeah that would work
<roaksoax> smoser: cloud-utils would become a transitional package indeed
<roaksoax> smoser: or you could make it a meta-package
<smoser> whats a meta-package specifically ? a task?
<roaksoax> smoser: no, a metapackage are packages that simply depend on others and do not install anything
<smoser> example?
<roaksoax> smoser: for example, 'maas' is a metapackage that installs 'maas-region-controller', 'maas-cluster-controller'
<smoser> perfect. thank you.
<roaksoax> smoser: now, transitional packages are usually needed for upgrades
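roaksoax's metapackage suggestion, written out as a debian/control stanza, might look like the following sketch; the depended-on package names are illustrative stand-ins for whatever cloud-utils gets split into:

```
Package: cloud-utils
Architecture: all
Depends: cloud-guest-utils, cloud-image-utils
Description: metapackage for the cloud image utilities
 Depends on the packages that now ship the tools formerly in
 cloud-utils; installs no files of its own.
```

A transitional package looks the same on disk; the difference is intent, as roaksoax notes: a transitional package exists so upgrades of the old name pull in the new one, and is typically dropped a release or two later.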
<gartral> hey all, I've been looking into Landscape to help manage a private server and I was wondering if I install the landscape-client package, and decide later that I don't want it, will it break the server if I remove it?
<sarnold> gartral: I had no trouble on my machine removing the landscape client after playing with it for a few weeks
<gartral> sarnold: and exactly how much IS the Landscape service, I'm quite dismayed and put off by Canonical's secrecy on that subject
<sarnold> gartral: heh, I'm sorry I can't actually fix that :) I had a free account on account of being an employee...
<sarnold> gartral: hey! this looks like it. :) http://www.canonical.com/enterprise-services/ubuntu-advantage/server
<gartral> sarnold: 320 USD *PER SERVER*?! holey crap! never mind that!
<sarnold> gartral: yes that seems a bit steep if landscape is the only part of the plan you want
<gartral> sarnold: yea.. I was thinking it would be like $10-20 US/year for a single server.. $320 is far, far *far* too rich for my blood
<sarnold> gartral: yes, it looks like it was priced to be competitive against rhel and sles: https://www.redhat.com/apps/store/server/ https://www.suse.com/products/server/how-to-buy/
<gartral> sarnold: yea.. I'd rather just run my single server through an SSH console..
<sarnold> gartral: and perhaps the "unattended-upgrades" package would provide a lot of what you'd want from landscape anyway
<mgw> Is btrfs recommended for general use on server machines? I'm interested to use in conjunction with lxc. As I understand it, this will help reduce disk usage from duplicated container file systems.
<RoyK> mgw: btrfs is not recommended for production systems
<mgw> RoyK, thanks
<xnox> mgw: you can equally use overlayfs or lvm snapshots for that.
<RoyK> mgw: afaik btrfs doesn't have any dedup stuff yet
<xnox> mgw: which are more stable than btrfs.
<RoyK> xnox: 'cept lvm snapshots are dead slow :P
<mgw> xnox: thanks, looking at overlayfs
<xnox> RoyK: well dedup is being packaged by me, soon ;-)
<RoyK> mgw: try zfs
<xnox> RoyK: not in my testing.
<sarnold> RoyK: is the zfs support entirely via fuse?
<RoyK> mgw: zfs has a decent dedup implementation if you have *lots* of memory
<RoyK> sarnold: there's zfsonlinux - separate ppa
 * xnox wouldn't recommend zfs on linux to anyone. Solaris or Freebsd maybe.
<RoyK> sarnold: works well
<xnox> sarnold: zfsonlinux is not legally clean.
<sarnold> RoyK: hrm. how is that legal?
<sarnold> aha :) hehe
<RoyK> xnox: heh - I've worked with zfs on rather large systems, as in a quarter of a petabyte, works well
<sarnold> RoyK: linux?
<RoyK> sarnold: it's legal, it's legal to download and install non-gpl software on your own
<RoyK> sarnold: zfsonlinux does just that
<mgw> ubuntu-zfs is zfsonlinux?
<RoyK> sarnold: it's like graphics drivers that compile on install
<RoyK> mgw: no, ubuntu-zfs is fuse-based, dead slow
<mgw> RoyK: ok… so I'll need to download the package from http://zfsonlinux.org/
<RoyK> mhm
<sarnold> RoyK: neat. thanks :D
<RoyK> mgw: keep in mind that zfs doesn't support things like expanding a VDEV (that is, a RAIDz1 or RAIDz2 aka RAID-5 or RAID-6)
<mgw> RoyK: I *think* that would be ok for us.
<keithzg> xonx: It's only not legally clean to distribute the code compiled into the kernel; but for example a binary module for the kernel is just fine, same as the proprietary NVIDIA blobs are.
<RoyK> mgw: takes a wee bit more of planning
<keithzg> ^xnox, I mean.
<uvirtbot> keithzg: Error: "xnox," is not a valid command.
<sarnold> I think the zfs modules work in part because the code clearly originated elsewhere, rather than works only on linux.
<sarnold> it's hard to claim it's a derived work when the primary platform was something else entirely :)
<mgw> RoyK, xnox, sarnold: thanks for the assistance
<sarnold> mgw: thanks for asking the question at the right time :) I'd not seen zfsonlinux before. it looks cool. :)
<mgw> RoyK: is it stable?
<RoyK> mgw: works for me (tm)
<RoyK> mgw: and it's used by some large storage providers
<mgw> ok, great
<RoyK> mgw: what sort of storage are you planning?
<mgw> RoyK: you mean the use case, or the type of hardware?
<RoyK> mgw: both
<RoyK> mgw: ping?
<mgw> RoyK: most of my machines have 2x1TB with HW RAID
<mgw> SATA
<RoyK> ok
<RoyK> for that, zfs should work well; even for larger stuff like multi-hundred terabytes, zfs should work well if properly planned
<mgw> I'm wanting to back lxc containers in such a way as to not have to replicate the 500MB ubuntu files for every container… does zfs have a versioning system that would allow that?
<lifeless> mgw: have a look at lxc-start-ephemeral
<lifeless> mgw: which does exactly that
<RoyK> mgw: somehow, yes, you can clone a filesystem which will make it dedup what's there, but not the new stuff
<RoyK> mgw: you can turn on dedup on the dataset, but it'll require large amounts of RAM to run efficiently
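RoyK's RAM warning can be roughed out with a back-of-the-envelope estimate. A commonly quoted rule of thumb (an assumption here, not an exact figure) is roughly 320 bytes of dedup-table entry per block, so the bill scales with pool size divided by record size:

```python
# Rough sketch of ZFS dedup table (DDT) RAM needs. The ~320 bytes-per-entry
# figure is a widely quoted rule of thumb, not an exact number; real usage
# depends on pool layout and how much data actually deduplicates.

DDT_ENTRY_BYTES = 320  # assumed rough per-block DDT entry size

def ddt_ram_estimate(pool_bytes: int, recordsize: int = 128 * 1024) -> int:
    """Estimate bytes of RAM the dedup table needs if every block is unique."""
    blocks = pool_bytes // recordsize
    return blocks * DDT_ENTRY_BYTES

one_tib = 1 << 40
estimate = ddt_ram_estimate(one_tib)  # 1 TiB of 128 KiB records
```

At the default 128 KiB recordsize this works out to about 2.5 GiB of table per TiB of unique data, which is why "lots of memory" is the operative phrase above.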
<mgw> lifeless: lxc-start-ephemeral… so the / filesystem would be read only/shared?
<lifeless> mgw: have a look at the script; it should answer your questions :)
<mgw> (looking at man now)
<stuxlof> I'm running ubuntu 11.10 x86_64 with a dell h700 raid controller containing megaraid sas firmware. I'm not sure whether I need to install megasasctl or megacli. I've got megacli64 running but it only gives me an 'Exit Code: 0x00'. Now megasasctl needs some 32 bit libraries (https://github.com/gnif/ARMT/tree/master/utils/megactl) to function, and this is where I'm stuck atm
<stuxlof> the megaraid_sas driver is loaded in the kernel
<mgw> lifeless: do I understand that lxc-start-ephemeral still needs overlayfs?
<lifeless> or aufs yeah
<lifeless> you could do a similar script though
<lifeless> or manually setup your lxc mount tables
<lifeless> so that you have a readonly fs and a /var/state/myproject that is readwrite
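The layout lifeless describes, one read-only base root plus a writable state directory, can be sketched in an LXC container config. The keys below follow the lxc 1.x-era `lxc.mount.entry` syntax and the paths are purely illustrative; a real setup would also need the base tree itself mounted read-only (via a ro bind mount or overlayfs, as discussed above):

```
# Hypothetical container config: share one base image between containers,
# then bind a per-container writable directory into the otherwise shared tree.
lxc.rootfs = /srv/base-rootfs
lxc.mount.entry = /srv/state/myproject var/state/myproject none bind,rw 0 0
```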
<mgw> yeah, i'm thinking i'd need to use that script as a ref
<mgw> but create my own
<delinquentme> I've got a file I've used for sshfs mounting remote files
<delinquentme> I want to delete the local version of this file
<delinquentme> sudo rm -rf localFile/
<delinquentme> will do that without removing the content of the remote dir
<delinquentme> correct?
<sarnold> delinquentme: does the output of mount show the mount currently active?
<delinquentme> mount: can't find dopamine_live/ in /etc/fstab or /etc/mtab  << looks like im good
<Darkstar1> I can't seem login via ssh. I'm on 12.04
<RoyK> Darkstar1: what error message?
<Darkstar1> None. I think the server was hacked
<Darkstar1> just looking through the  bash history
<RoyK> lots of nice stuff? ;)
<Darkstar1> it seems someone got in yesterday and unpacked something called portuser.
<Darkstar1> Just trying to get ssh back
<Darkstar1> ssh logs to system normally right?
<RoyK> Darkstar1: try rkhunter and chkrootkit
<Darkstar1> RoyK: It's a remote vm
<Darkstar1> but I'll look at those tools
<RoyK> they attempt to do the same job, but may overlap a bit
<RoyK> Darkstar1: you can run those on a remote machine
<Darkstar1> Think I need to boot into single user mode
<sarnold> Darkstar1: best would be taking the machine entirely offline and inspecting the filesystem from a known-good system. preferably a known-good system you can throw away when you're done.
<Darkstar1> sarnold: thing is I don't know what to look for tbh. I looked at the history of bash and managed to find the one thing that was installed. What that thing did I do not know, but I removed the compromised user and inspecting the bash history has allowed me to remove the unpacked stuff
<sarnold> Darkstar1: check especially for unexpected dot files in all directories, debsums mismatches, unexpected kernel modules, etc.
<Darkstar1> debsums?
<sarnold> Darkstar1: debsums checks the sizes / hashes of dpkg-managed files, it's a useful way to get a quick overview of what might have changed if an installed rootkit is pretty amateurish
<Darkstar1> ok.
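The check sarnold describes boils down to hashing installed files and comparing them against the sums dpkg recorded at install time in `/var/lib/dpkg/info/*.md5sums`. A minimal sketch of that idea (not the debsums tool itself), demonstrated against a throwaway empty file rather than a real package list:

```python
# Sketch of what a debsums-style check reduces to: hash a file on disk and
# compare it to the checksum dpkg recorded when the package was installed.
import hashlib
import tempfile

def file_matches(path: str, expected_md5: str) -> bool:
    """True if the file's MD5 matches the recorded sum."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_md5

# Demo against a throwaway file with known contents (an empty file):
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    demo_path = tmp.name
ok = file_matches(demo_path, "d41d8cd98f00b204e9800998ecf8427e")
```

As noted above, a rootkit that replaces dpkg's database or the hashing tools defeats this, which is why inspecting from a known-good system is still the recommendation.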
<crass> something has changed wrt to how ubuntu handles inactive lvm vgs within the last few releases
<crass> anyone have ideas on how to really release all resources using a vg so that I can luksClose the underlying device?
#ubuntu-server 2013-08-28
<pabelanger> hey
<pabelanger> just because I am curious, how come http://cloud-images.ubuntu.com/releases/12.04.2/release/ doesn't actually contain 12.04.2 images?
<pabelanger> looks like 12.04.3 was uploaded into it today
<sarnold> smoser: see pabelanger's question about http://cloud-images.ubuntu.com/releases/12.04.2/release/ containing 12.04.3 ...
<Darkstar1> is it advisable to change files like passwd and group to 600?
<sarnold> Darkstar1: no, that's what shadow is for
<Darkstar1> I don't understand what you mean
<sarnold> Darkstar1: passwd used to contain the hashed passwords in addition to all the other data currently in passwd; but as offline brute-force attacks against the hashed passwords became more commonplace, the 'shadow' system was created to keep the hashed passwords in mode 640 files, reducing the availability of the hashed passwords
<sarnold> but ls -l, ps aux, and other utilities all benefit from having user-readable passwd
<Darkstar1> Ok
<Darkstar1> Thanks
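The split sarnold describes is visible in the file format itself: the world-readable /etc/passwd line keeps everything except the hash, whose field is just the placeholder `x`, while the hash lives in the root-only /etc/shadow. A small sketch using an invented sample line:

```python
# /etc/passwd is world-readable; its second field is just the placeholder "x",
# and the real hash lives in mode-640 /etc/shadow. Sample line for illustration.
passwd_line = "alice:x:1000:1000:Alice:/home/alice:/bin/bash"

name, pw_field, uid, gid, gecos, home, shell = passwd_line.split(":")

# Tools like `ls -l` and `ps aux` only need these world-readable fields
# to map numeric IDs to names, which is why passwd must stay readable:
assert pw_field == "x"  # no hash here any more
uid, gid = int(uid), int(gid)
```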
<sarnold> I'm having trouble creating new libvirt kvm VMs, does this look familiar to anyone? :) http://paste.ubuntu.com/6034869/
<TheLordOfTime> 12.04 server is still supported right?
<TheLordOfTime> i'm getting issues on an OpenVZ VPS when installing iptables-persistent and i'm not getting any good output from the breaking apt-get installation
<TheLordOfTime> is there a way to get more than just this:
<TheLordOfTime> dpkg: error processing iptables-persistent (--configure): subprocess installed post-installation script returned error exit status 1
<sarnold> TheLordOfTime: I don't think openvz guests get to use iptables do they?
<TheLordOfTime> sarnold, afaict they should be allowed to.
<TheLordOfTime> i can check with the provider, but it's not my openvz setup
<TheLordOfTime> sarnold, i know that i've been able to use iptables on openvz guests before on other providers
<TheLordOfTime> so maybe it's different between providers
<TheLordOfTime> sarnold, the fact i was able to put a POSTROUTING entry into the nat table makes me think there *is* iptables access
<Darkstar1> just a qq: http://askubuntu.com/questions/223944/how-to-automatically-restart-tomcat7-on-system-reboots the script in the accepted answer will run tomcat as the user that calls the script right?
<thurstylark> Does anyone know how to get ubuntu (12.04+) to bind to an Open Directory hosted by a Mountain Lion server? Is it possible?
<sarnold> TheLordOfTime: did you figure it out?
<sarnold> Darkstar1: that'll start /usr/share/tomcat7/bin/startup.sh as root
<TheLordOfTime> sarnold, no, but I confirmed there is access to iptables on the box.
<TheLordOfTime> s/box/VPS.
<TheLordOfTime> s/box/VPS/
<sarnold> Darkstar1: chances are very good that the tomcat7 configuration specifies a user for tomcat to change to once it has bound sockets
<jasondotstar> evening.
<TheLordOfTime> by opening an HTTP listener on a given port, rejecting it, and then later ACCEPT-ing it in iptables
<sarnold> TheLordOfTime: thanks for the correction :)
<TheLordOfTime> and that did as expected: blocks the traffic, and accepts it
<Darkstar1> sarnold: Figured as much. I don't think so but I'll try the script now.
<jasondotstar> hey, I'm looking to contribute to the ubuntu effort. I'm a linux admin. is there a place I can help here?
<Darkstar1> sarnold: I mean I don't think tc7 changes to another user after binding *
<TheLordOfTime> sarnold, but i still am trying to get more verbose error output as to *why* postinstallation failed
<TheLordOfTime> sarnold, any idea how i can get more verbose output?
<sarnold> Darkstar1: eww. :)
<sarnold> TheLordOfTime: check /var/log/dpkg.log? I wish I had a better answer, launchpad is full of those bugs :(
<sarnold> jasondotstar: welcome aboard :) there's a lot to do, the hard part will probably be figuring out what you'd like to do. :)
<TheLordOfTime> sarnold, it'll probably get another one sooner or later from me depending on what I get from the dpkg logs
<sarnold> TheLordOfTime: hehe
<TheLordOfTime> sarnold, what do you think the priority is for a "Package Fails To Install" bug... just, you know, out of curiosity.
<jasondotstar> sarnold thanks. is there a place where I can do some configuration management stuff? I've heard cool stuff abt juju
<thurstylark> Does anyone know how to get ubuntu (12.04+) to bind to an Open Directory hosted by a Mountain Lion server? Is it possible?
<Darkstar1> sarnold: problem is if I try to run tomcat as a created user (with no home and no shell) I get prompted for password still.
<sarnold> jasondotstar: yeah, juju looks cool. I haven't used the newer go-based version yet, but the old stuff was cool and the new stuff has more features. :) check out #juju, there's lots of friendly folks there who can help you write charms, debug installations, etc
<jasondotstar> sarnold ack.
<jasondotstar> sarnold are you guys maintaining the ubuntu infrastructure here? or answering questions abt ubuntu server stuff here? or both?
<sarnold> jasondotstar: this channel is mostly for people developing server components, but we do help users with questions as well
<sarnold> Darkstar1: there's a -user command line option described here that looks related: http://tomcat.apache.org/tomcat-7.0-doc/setup.html#Unix_daemon
 * Patrickdk_ just attempts to abuse users in here :)
<jasondotstar> sarnold i c.
<Patrickdk_> atleast till ikonia beats me up over it
<TheLordOfTime> sarnold, nothing in dpkg.log :/
<TheLordOfTime> Patrickdk_, heh
<TheLordOfTime> or until someone tells ikonia about it, in either of those cases you get beat up :P
<Darkstar1> sarnold: thanks I already read that one. But want to use a normal startup script. Looks like I'll have to use it in the end :)
<sarnold> Darkstar1: I was thinking you'd modify the startup script to use the -user ..
<Darkstar1> sarnold: except I always get asked for a password. but the user can't login
<sarnold> Darkstar1: what prompts you for the password? o_O
<Darkstar1> sarnold: su -c "$TOMCAT_HOME/bin/startup.sh" noone <====== when I run the script I get prompted for password because of this line
<sarnold> Darkstar1: check out the startup.sh file -- does it use 'jsvc' to start tomcat?
<Darkstar1> nope
<sarnold> Darkstar1: can you pastebin that startup.sh?
<sarnold> Darkstar1: (the pastebinit package can make that easy :)
<Darkstar1> http://fpaste.org/35331/
<Darkstar1> that's from catalina.sh which the startup script calls
<sarnold> man that's gross. :)
<sarnold> Darkstar1: how about changing your su line to: su username --shell /bin/sh -c "/path/to/tomcat/home/bin/startup.sh"  ?
<sarnold> Darkstar1: perhaps $TOMCAT_HOME doesn't work in that location. and perhaps you need a shell for the command to be executed at all.
<sarnold> if your user doesn't have a shell, you may need to specify one here.
<Darkstar1> sarnold: going to try out the daemons.sh : http://fpaste.org/35332/ which apparently can use jsvc
<sarnold> Darkstar1: oh yes! that's better. then you can just use --tomcat-user right there :)
<Darkstar1> hope so
 * Darkstar1 crosses his fingers
<Darkstar1> sarnold: getting a Cannot set group id for user 'apache-tomcat' with that.
<Darkstar1> Error validating user 'apache-tomcat'
<Darkstar1> it would seem creating a user with no home and no shell is problematic
<sarnold> Darkstar1: yeah, I'm not -too- surprised. set the home to the data directory and shell to either /bin/false or /bin/sh ..
<sarnold> Darkstar1: did you provide a primary group id in the passwd file too?
<Darkstar1> sarnold: also su username --shell /bin/sh -c "/path/to/tomcat/home/bin/startup.sh" <== prompts for passwd
<Darkstar1> that was executed from command line
<Darkstar1> sarnold: no I didn't modify the passwd file when I created the user
<Darkstar1> but it created one
<sarnold> Darkstar1: just to check, you are running that 'su' command as root, right?
<Darkstar1> no
<sarnold> aha :)
<sarnold> run it as root :)
<Darkstar1> sarnold: thanks
<Darkstar1> I was able to run the daemons command which uses jsvc and runs tomcat as the other user
<sarnold> Darkstar1: nice.
<DWSR> Anyone know how I can run a command as a user AFTER startup home directory decryption has taken place?
<qman__> put it in that user's .bashrc (or .profile or equivalent)
<Raboo> i have some servers running ntpd and the time still drifts off for some strange reason
<Raboo> ntpq -p shows that there is no offset or jitter..
<Raboo> ntpdate -q shows a offset of 19 seconds
<sgran> if the offset and jitter are actuall 0, then you're not connected
<sgran> there is always some offset and jitter
<Raboo> ok
<Raboo> then why does it list some servers?
<Raboo> and why doesn't it reconnect?
<sgran> you're looking for a '*' at the start of the line in ntpq -p output - that will tell you you're actually syncing
<Raboo> ok
<sgran> it lists the servers it's trying to sync with
<sgran> usually the st column has a high number, like 16, if you're not syncing
<Raboo> yeah i got 16
<sgran> you want a low number (<=3) for it to be considered a good host for syncing
<sgran> this can be anything - a firewall might be blocking ntpd but not ntpdate (ntpd uses source port 123, ntpdate uses a high source port.  Some firewalls think this is bad and drop ntpd)
<sgran> or the server you're trying to sync from is itself out of sync
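The checks sgran describes, the `*` tally mark and the stratum (`st`) column, can be read mechanically out of `ntpq -p` output. A sketch against invented sample output (the peer names and numbers below are made up for illustration):

```python
# Parse ntpq -p style output: a leading "*" marks the peer we are actually
# synced to, and stratum 16 in the "st" column means the candidate is not a
# usable sync source. The sample text is invented for illustration.
sample = """\
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntp1.example.co .GPS.            1 u   33   64  377    1.234    0.567   0.089
 ntp2.example.co .INIT.          16 u    -   64    0    0.000    0.000   0.000
"""

def sync_status(ntpq_output: str):
    synced_peer, bad_peers = None, []
    for line in ntpq_output.splitlines()[2:]:   # skip the two header lines
        tally, fields = line[0], line[1:].split()
        peer, stratum = fields[0], int(fields[2])
        if tally == "*":
            synced_peer = peer                  # the peer we're synced with
        if stratum == 16:
            bad_peers.append(peer)              # unreachable / unsynced
    return synced_peer, bad_peers

peer, bad = sync_status(sample)
```

Raboo's case above, where every line shows stratum 16 and no `*` appears, is the "not syncing at all" signature that points at a network or firewall problem rather than a drifting clock.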
<Raboo> i use public
<sgran> firewall?
<Raboo> http://pastie.org/8276376
<Raboo> nope
<Raboo> if i stop ntpd
<Raboo> and run ntpdate against  0.se.pool.ntp.org it works
<Raboo> it fixes the time.. and I start ntpd again and after a while it drifts again
<Raboo> is there something wrong with my ntp.conf?
<sgran> run 'tcpdump port 123 -s0 -A -vvv -n' ; restart ntpd.  Pastebin 60 seconds of output
<sgran> I'm on the train, so I may drop off but I'll look when I get connectivity again
<Raboo> ok cool thanks
<Raboo> ohh crap
<Raboo> tcpdump listens on wrong interface
<Raboo> i have to specify bond0
<Raboo> sgran 0 packets captured
<Raboo> 5 packets received by filter
<Raboo> 0 packets dropped by kernel
<sgran> Raboo: what I suspect and am asking tcpdump to confirm, is that you are sending traffic out and not getting a reply because of some network device dropping traffic with a low source port
<sgran> since no traffic is leaving your machine, I now am considering your machine to be the thing stopping the traffic
<Raboo> why does ntpdate set the time correctly against same servers then?
<sgran> because it is a different program, and uses a different source port, and runs as a different user and ...
<sgran> there's lots of things that could cause the difference
<Raboo> ok
<sgran> do you use any RBAC like apparmor or selinux?
<sgran> Do you have host based firewall rules?
<sgran> if you strace ntpd, do you see it trying to make outbound connections?  Do you see it succeed or fail?
<sgran> these are the sorts of things I'd consider
<Raboo> ok
<Raboo> well there is a firewall since this is not a public ip.. But I didn't think it would block outgoing traffic..
<Raboo> and iptables is not running locally
<sgran> if I were you, I'd start by trying to get a capture of ntpd traffic.  This involves the tcpdump above, and looking for traffic going out but not coming back in
<sgran> that will tell you conclusively where to start
<sgran> if traffic goes out and comes in, you have a bad peer or a config problem.  If it goes out but not in, you have a firewall problem.  If it doesn't go out, you have something like an RBAC or local firewall problem
<Raboo> sgran ok i will start with strace
<Raboo> can I run ntpd as root?
<sgran> you don't want or need to do that
<Raboo> ok so i can run strace as ntp user
<Raboo> as ntp it says "must be run as root, not uid 107"
<Raboo> ah it's a option to specify user
<disposable> rbasak: the version in repositories isn't the newest (it's actually pretty old). my only experience is with chef and that installs around 40MB of stuff. it has a dependency on ohai, which is a daemon that runs every couple of minutes and scans system info (devices connected, IPs, etc). In my case, ohai was continuously spinning up disks that i wanted in standby so that bit was annoying but otherwise i would not go back to managing servers 
<rbasak> disposable: 40MB really is not significant for most users, or can you give me an example of where this is a problem for a significant number of users? I can't speak for chef, but I know that in puppet you can configure the automatic update interval, or turn it off entirely. Surely chef can do that too? puppet 3 is in Saucy, BTW. Not old!
<rbasak> I have no problem with anyone who disagrees. But I don't think it's reasonable to recommend against chef and puppet to others, when in fact most of the world uses these tools.
<disposable> rbasak: i did not recommend against anything. merely pointed out that for one-off configuration of multiple servers, they're overkill. especially if you have to learn them.
<disposable> rbasak: since you use puppet, i'd like to know a few things. does the free version come with any king of web interface?
<rbasak> No. You must learn the DSL.
<disposable> rbasak: s/king/kind
<rbasak> You can easily run it serverless, though.
<rbasak> puppet apply <file>
<rbasak> That will do a one off run, and "make it so".
<disposable> how does puppet set IPs? does it just use 'ip' or are there modules for persistent changes in /etc/network/interfaces?
<rbasak> http://docs.puppetlabs.com/references/latest/type.html lists the built in resource types available. I don't manage IP from puppet (I don't personally think that makes sense). But you could do it by just maintaining /etc/network/interfaces as a file managed by puppet, though, using the "file" resource type.
<rbasak> Oh. There is a suitable type available: http://docs.puppetlabs.com/references/latest/type.html#interface
<rbasak> Hmm. Only for Cisco switches though!
<disposable> rbasak: thanks.
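rbasak's suggestion of maintaining /etc/network/interfaces as a puppet-managed file might look like the hypothetical manifest below; the module path, mode, and ownership are assumptions, not anything from the discussion:

```
# Hypothetical sketch: ship the whole interfaces file from the module.
file { '/etc/network/interfaces':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  source => 'puppet:///modules/network/interfaces',
}
```

As rbasak notes above, this kind of manifest can be applied serverless with `puppet apply <file>`.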
<thurstylark> I'm running 12.10, and I can't seem to install the noip2 package. sudo apt-get install noip2 returns with unable to locate package. Any ideas?
<yolanda> jamespage, zul: https://code.launchpad.net/~yolanda.robla/ceilometer/autopkgtests/+merge/182583 and https://code.launchpad.net/~yolanda.robla/horizon/autokpgtests/+merge/182168
<stlu> Hello.
<stlu> I am looking for a tip on how to most efficiently get DNS servers to share information if they are in two different LANs.
<rbasak> stlu: details?
<Raboo> anyone good at reading strace?
<rbasak> !anyone
<ubottu> A high percentage of the first questions asked in this channel start with "Does anyone/anybody..." Why not ask your next question (the real one) and find out? See also !details, !gq, and !poll.
<Raboo> http://pastebin.com/yJHN1Zcf
<Raboo> my ntpd doesn't update time. It seems like it doesn't connect to the ntp servers
<Raboo> rbasak i was just about to :-)
<Raboo> but from what i can see in strace is that it tries to connect to the ntp servers?
<rbasak> Raboo: if you think it's a network issue, it's easier to isolate using tcpdump. Filter for the right protocol. Eg. "sudo tcpdump -nieth0 port 123 or icmp".
<stlu> rbasak: I'm doing this all under virtualbox, as a learning project.
<Daviey> stlu: If 'information' is zone files, and the DNS server is bind9 - it can do AXFR's itself.
<stlu> Daviey; rbasak:  I read about the option to set a secondary dns, but if there are several Lans (say, like 8 or 9) then that will be a lot of transfers, no?
<Daviey> stlu: Yes, but it's designed to do this.  Only transfers when a zone file is modified
<stlu> I was thinking there might be a proper way to get the DNS in LAN #5, for example, to just point to the other DNS server addresses instead of being a secondary DNS for ALL of them...
<Daviey> erm
<Daviey> i think there is some confusion here...  is this for domains under your control?
<stlu> okay then, I was just thinking I might have had a gaping hole in my understanding.
<stlu> It is true that there won't be many transfers after its up and running...
<stlu> Daviey: this is a learning project entirely under virtualbox, so yes, these LANs are all my creations.
<stlu> If you'd care to know, I will share the setup with you...
<Daviey> stlu: I mean, are you talking about stlu.com, for example - that you are trying to manage?
<Daviey> or is this to resolve google.com for example?
<stlu> Well, my bedroom is the domain, so I am managing "myroom.local"
<stlu> LAN #1 is vbox1.myroom.local, a network of virtual machines, with an accompanying DNS server for their addresses.
<Daviey> Ah.. erm. .local is usually not recommended.. that typically uses a different protocol called mDNS or Avahi
<stlu> its actually all going to fall into an address block 172.24.0.0, each DNS and so-called "lan" will have a block of 256 addresses, that I treat as a /24 network.
<stlu> 172.24.200.x is the LAN#1, and 172.24.201.x will be LAN#2, only special thing is seperate DNS servers.
<Daviey> right.. but i am suggesting you use something other than .local to avoid confusion
<stlu> Daviey: oh, well, I lied.  My home router provides the domain "gateway.2wire.net" so I am not actually using local (but I had other plans to)
<stlu> I actually got the .local idea from a D-Link router that was defaulting to that.
<stlu> I looked at the IANA list of suffixes, they don't have a suffix reserved for LANs like I am making.
<Daviey> stlu: Ubuntu should disable mDNS / Avahi, but not worth risking it whilst learning IMO
<stlu> Daviey: totally agree, if I throw in too much unpredictability I won't gain any knowledge, just a headache.
<rbasak> With DNS, you don't usually have a separate DNS server per network. You might have a separate resolver per network, but they don't need anything special apart from being able to route to DNS servers.
<rbasak> I'd put one DNS server on a single LAN, and if you want to experiment with secondaries, then put than on a LAN that can route to it and use AXFR. Then you'll be learning something closer to what might be used in the real world.
<rbasak> You can make DNS as obtuse as you want, but it'd just be prone to problems and I'm not sure that you'd learn anything useful out of it.
<rbasak> Just make sure that either the LANs can all route to your DNS server, or that they use resolvers that can route to your DNS server.
<stlu> rbasak: hmm
<smoser> utlemming, people (pabelanger and sarnold) mentioning that http://cloud-images.ubuntu.com/releases/12.04.2/release/ has a banner that says 12.04.3
<smoser> i would personally it rather not say '.X' at all.
<smoser> which seems new to me.
<smoser> (they complained of the title, i think. i dont like the path as it breaks anything and is inconsistent.
<rbasak> I'm pretty sure it's always done that. At least since 12.04.1 I think.
<stlu> I don't seem to know what a resolver would be... all I really have played with is /etc/resolv.conf to point to DNS.
<smoser> rbasak, it would seem that *something* changed in .2
<smoser> http://cloud-images.ubuntu.com/releases/
<stlu> smoser: yeah, the banner on the release webpage takes the name of the latest release.  even back to 8.04
<rbasak> Oh
<rbasak> I was talking of the motd banner. You aren't talking about that.
<stuxlof> is ia-32libs (32bit compat libs) able to conflict with current software? I think it can't but I've never used it so I'm not really sure
<stuxlof> it's ubuntu 11.10
<rbasak> YOu know that 11.10 is EOL now, right?
<rbasak> We don't use ia32-libs any more. I'm not sure whether the change happened before or after 11.10.
<stuxlof> yes, it happened after 11.10 iirc
<stuxlof> it's a production environment, sadly I'm not allowed to update anything
<stlu> rbasak: by "configuring a resolver", do you refer to the configuration of resolv.conf on a host?
<stlu> Or is this a software package that does something more comprehensive?
<jamespage> roaksoax, adam_g, yolanda, zul: updates for havana ca - http://people.canonical.com/~jamespage/ca/havana/
<stlu> Ok, so I can set up a separate network for each virtual LAN.  If the host OS for the virtual systems has been given a domain name of say "laptop2.gateway.2wire.net", would the household router direct *.laptop2.gateway.2wire.net, or fail?
<stlu> *direct all traffic to the *.laptop2...
<stlu> thats a definite fail on the subdomain idea...
<codepython777> Whats the best way to automate CLI tasks? Anything better than expect/pexpect ? (Things like entering passwords, keys, directories to command line programs that ask questions from standard input, to things like automating ssh tasks)
<greppy> codepython777: depending on what you need, something like ansible may work.
<stlu> codepython777: I was just thinking the same thing.  Except I didn't even know about 'expect'.
<codepython777> stlu: I dislike it very much.
<stlu> codepython: oh.
<stlu> codepython: well I'll try it anyway, and if I don't like it either, well I'll have a better idea of what I want.
<stlu> My friend had tried out 'Webmin' but I don't know if all it's features are really needed.
<codepython777> stlu: last update of expect webpage = 2009
<stlu> a bit stale.
<stlu> I suppose you'd be looking for something simpler then Webmin too?  And more easily automated I bet.
<greppy> just because it hasn't been updated doesn't mean it's not still current. :)
<Pici> !webmin | also
<ubottu> also: webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<stlu> thx Pici
<yolanda> zul, jamespage? can you take a look at my MP for horizon and ceilometer and horizon? https://code.launchpad.net/~yolanda.robla/horizon/autokpgtests/+merge/182168 , https://code.launchpad.net/~yolanda.robla/ceilometer/autopkgtests/+merge/182583
<jamespage> yolanda, I have been trying to test them but we have a broken cloud-utils in saucy right now - blocking testing
<jamespage> they look OK
<yolanda> ceilometer tests fail
<yolanda> at least in my local machine, but they are testing the right things
<yolanda> ceilometer-common isn't starting cleanly
<stuxlof> rbasak: do you have any words left about my question? I think I should be able to install those ia32 libs
<stuxlof> without running into conflicts
<geser> stuxlof: what error do you get it you try?
<zul> sarnold: ping https://bugs.launchpad.net/ubuntu/+source/msgpack-python/+bug/1207003
<uvirtbot> Launchpad bug 1207003 in msgpack-python "[MIR] msgpack-python" [Undecided,New]
<jamespage> xnox, just spotted your comment on the mongodb openssl bug
 * jamespage does a little dance
<jamespage> \o/
<jamespage> I can sleep again....
<xnox>  /o/ \o\
<xnox> jamespage: =)))) lol, ScottK was giving you nightmares =)
<jamespage> lol
<smb> zul, When will we see you?
<zul> smb:  never hopefully ;)
<smb> zul, Have people returned your walls :)
<zul> smb: @starbucks today
<smb> zul, Ah so better coffee and better internet. ;-P
<zul> in theory ;)
 * smb is reminded of those day working from the mall
<smb> actually days...
<stuxlof> geser: I'm not getting any errors, I just want to know if this can influence my currently installed stuff in a way
<stuxlof> sorry for the slow response
<stuxlof> I have a perc h700 raid controller on ubuntu 11.10 x64, and the software needs ia32 libraries to be compatible because it's 32bit software. But I've never used these, since this is a production environment I thought it wouldn't be wrong to ask for a 2nd opinion. I think it shouldn't matter
<stuxlof> It uses either megacli or megasasctl, I've got megacli running but it keeps returning with an exit code 0x00 when I try to use it, so I guess megasasctl is the proper one
<geser> it shouldn't matter as ia32-libs installs additional 32bit libraries which can only be used by 32bit apps
<jamespage> yolanda, horizon tests MP _ test are fine but needs a changelog entry!
<jamespage> yolanda, ceilometer merged - thanks!
<yolanda> jamespage, will take a look at horizon, I created it but it may have been lost on the way
<jamespage> zul, as we have 20 mins - http://people.canonical.com/~jamespage/ca/havana/
<stuxlof> I thought so thanks
<zul> jamespage:  doh...
<zul> jamespage:  i dont think we need to update pep8 anymore since its not being used anywhere
<jamespage> zul, oslo-config
<jamespage> python-quantumclient
<jamespage> oh - that can be dropped
<zul> jamespage:  fuuuuu oslo-config
<zul> jamespage:  you need python-itsdangerous for flask i believe
<zul> jamespage:  other than that +1
<jamespage> Daviey, around?
<adam_g> jamespage, how do i get a bug tracked on server section of http://reqorts.qa.ubuntu.com/reports/rls-mgr/rls-s-tracking-bug-tasks.html ?
<jamespage> adam_g, raise a release specific task
<jamespage> and hope the package is on the list...
<adam_g> jamespage, against saucy?
<jamespage> yes
<adam_g> thanks
<sarnold> zul: msgpack-python is not forgotten, but not yet started. :( this is a busy week. :(
<zul> sarnold: cool
<adam_g> zul, what version of swift we are shipping /w havana/saucy?
<zul> 1.9.1
<adam_g> zul, so the 1.9.3 release that is out is post-havana?
<zul> adam_g:  i think so
<adam_g> oh, 1.9.3 isn't out yet
<zul> unless there is a bug fix release before then
<adam_g> jamespage, those CA updates LGTM
<zul> hallyn_:  http://paste.ubuntu.com/6037443/
<jonascj> I would like to reinstall Ubuntu Server 12.04 and I have a mirror raid with two disks and two paritions on each with md0 being sda1 and sdb1, and md1 being sda2 and  sdb2. If I just make sure to untick "format" when I choose partition layout will I then keep my data on those mount points with "format" unticked?
<RoyK> jonascj: the installer won't reformat md1 unless you do something stupid
<RoyK> jonascj: make sure you have a backup, though, things sometimes don't go as planned
<hallyn_> zul: what was the exact command line?
<hallyn_> zul: you were trying ot exec the container name for some reason
<hallyn_> d'oh
<hallyn_> zul: lxc_start - ' ' is initialized
<hallyn_> i think you did something like "lxc-start -n " " containername
<zul> hallyn_: ?
<hallyn_> you passed a space as the container name
<zul> hallyn_:  oh doh...ill fix that
<hallyn_> zul: ok
<jonascj> RoyK: Thank you. I don't know how I forgot it, but I forgot to tell you that on top of the raid I have LVM. So i would like my vg0/root formatted but I would like to keep my vg0/home and vg0/var logical volumes. So during install should I choose to mount /var and /home to those location but untick "format" ?
<RoyK> jonascj: same rules apply
<xnox> jonascj: in installer, enter manual partitioning, activate all raids, activate VGS, then manipulate the volumes as needed.
<xnox> jonascj: which ones to keep, which ones to format.
<jonascj> xnox and RoyK: and we are in agreement on this: those mountpoints for which I uncheck "format" will retain their data?
<xnox> jonascj: and set to "do not use"
<RoyK> jonascj: yes
<RoyK> jonascj: but keep a backup nevertheless
<jonascj> RoyK: I will when it matters. I'm just fooling around.
<RoyK> jonascj: backups are usually most useful for people accidentally deleting their things, or some worm comes in and wreaks havoc
<RoyK> jonascj: ok, go on, then :)
<jonascj> RoyK: atm I'm actually such a big fool that I cannot remember why I have two raid devices. md0 is 2gb and md1 is 998gb and only md1 is part of my LVM. What the heck was I thinking to use those 2gb for :S Can I find out if something is on those 2gbs?
<sarnold> jonascj: you might have been trying to fit the /boot partition in the first N gigs of the disk for ease of use with lilo or similar..
<jonascj> sarnold: you have high thoughts of me :P /boot is however on the LVM root logical volume. Inside my system which resides on LVM on md1, can I peek at md0 and whatever is on there?
<RoyK> jonascj: I usually make a small mirror for boot, then a larger one for swap and then something bigger for root - if the system doesn't have more than those two drives, I'll make a larger one for /home
<RoyK> boot as in /boot
<jonascj> royk: sounds reasonable. What I have now is two physical disks and then two mirrors. One whose purpose I've forgotten (2gb) and then one large for LVM on which I have different logical volumes (root, home and var)
<RoyK> quite possibly the swap on those 2GB
<RoyK> the installer should autodetect that
<jonascj> RoyK: sorry, i forgot swap, I have root, swap, home, and var as logical volumes. I think you are right, I probably intended them for a non-LVM /boot
<jonascj> but somehow failed that during install
<RoyK> jonascj: with current systems, it's not really that important to have /boot on a separate partition
<RoyK> jonascj: guess it's just bad habit
<jonascj> royk: and /boot on non LVM is also not necessary, grub2 handles lvm just fine
<RoyK> indeed
<sarnold> RoyK: /boot on a different partition might -also- be with the idea that it needs mirroring but the data partitions might do fine with raid5 or raid6 or raid10..
<jonascj> sarnold: ah yes :) but not in my setup :P
<RoyK> sarnold: somewhat, yes, but I never mix data disks and OS disks anyway
<RoyK> sarnold: IMHO having data and OS on the same drives is bad, having it on the same partitions isn't good at all
<sarnold> RoyK: ah :D that's a nice rule of thumb. very nice.
<RoyK> sarnold: you'll have to reinstall one day or another, so better keep the data off the root
#ubuntu-server 2013-08-29
<test54321> has anyone tried to setup maas & juju with bonded nics?
<sarnold> test54321: you may wish to try asking in #maas -- I think I heard something recently about dual-nic setups on maas being out-of-the-usual, but I don't think there was any discussion of bonded...
<test54321> ok thanks!
<jonascj> any libvirt users? What are the newest version of libvirt you've got on precise (12.04)?
<crass> is there documentation on how to use fstab.d?
<jibel> rbasak, Hey, I'm reviewing the lxc driver for autopkgtest but cannot seem to make it work from the source tree. How do you invoke it usually?
<jibel> rbasak, hm, it's because I used an ubuntu template for the container not a cloud template
<jibel> rbasak, could we gracefully fail in that case ?
<rbasak> jibel: that sounds reasonable. I could dpkg-query for cloud-init at the start I guess.
<bariho> is there a way to prevent the display from switching off after some time when using the virtual consoles?
<xnox> rbasak: jamespage: mysql-5.5 adt tests were failing before the borked 5.5.32-0ubuntu2, and still fail after the revert in 5.5.32-0ubuntu3. https://jenkins.qa.ubuntu.com/view/Saucy/view/AutoPkgTest/job/saucy-adt-mysql-5.5/
<xnox> Completed: Failed 3/1713 tests, 99.82% were successful.
<xnox> Failing test(s): main.symlink sys_vars.secure_file_priv_basic main.mysqltest
<xnox> is anyone looking into fixing mysql adt test?
<jibel> rbasak, can you remind me what was the requirement to use a cloud template instead of a debootrap based container?
<jamespage> xnox: I think rbasak is
<xnox> ok.
<rbasak> xnox: do we know if the adt test has ever passed?
<xnox> rbasak: history only goes back to build nr 39, all of which are red. Not sure if there are further logs. jibel do we have more adt history beyond what's visible in jenkins?
<rbasak> Yeah that's what I was looking at.
<rbasak> There's no "last successful build" noted in the status though.
<rbasak> Looks like jenkins isn't blocking migration though.
<rbasak> And that we need a no change rebuild for pinba-engine-mysql
<jibel> xnox, no that's all we keep. I don't remember these tests ever passed
<rbasak> jibel: thanks for the review!
<jibel> xnox, I confirm that mysql tests never passed, neither on raring nor saucy
<rbasak> Thanks
<rbasak> We're running the same test suite in the build itself. So I'm not too worried about it.
<rbasak> It should be fixed though of course
<oz0ne> Hello, I wanted to add a notice under common errors and fixes for others that might have been fighting the same issue as me with Postfix\opendkim\amavis. https://help.ubuntu.com/community/Postfix/DKIM (the issue with enabling dkim signing like that guide, if you already have amavis doing spam checks, is that dkim will sign the header two times. one before amavis and then again one
<oz0ne> more time after amavis) i wanted to put the note and how to correct the behavior (eg. with adding -o no_milters when adding it back into the postfix queue) under common errors and fixes. Reason for putting it under this article: many newcomers believe it is an issue of postfix\opendkim settings and not the way you feed messages back to the queue from amavis.
<rbasak> oz0ne: thanks for your input. The server guide is a volunteer effort and we appreciate all the help that we can get. Could you please file a bug against the serverguide project? Use this link: https://code.launchpad.net/~ubuntu-core-doc/ubuntu-docs/help.ubuntu.com
<rbasak> If you are able, then a bzr merge proposal against the serverguide branch would be appreciated as well, though this is a bit more involved (editing the docbook xml source)
<rbasak> Sorry, wrong link.
<rbasak> The bug reporting link is https://bugs.launchpad.net/serverguide/+filebug
<oz0ne> i will file a bug and then i will check if i am capable of doing a merge proposal. =)
<rbasak> Thank you!
<MACscr> how can i rsync files from one box to the other box, while preserving all ownership/perms, etc? Im trying to move LXC containers to a new ubuntu box
<andol> MACscr: rsync --archive covers most of it. Depending on the situation you might also want --hard-links --xattrs and/or --acls
<andol> MACscr: Depending on the situation, you may or may not want --numeric-ids
<MACscr> andol: yeah, but since root is needed to create files other than itself and some owned by root, etc
<MACscr> i think i might have to adjust sudoers or something
<andol> MACscr: Or you could just allow yourself to use the root account for stuff like this? Otherwise you should be able to use the --rsync-path= option to include a sudo on the receiving end.
<hallyn_> is there a kernel pkg that has a vmlinux.S?
<hallyn_> gah
<hallyn_> i'm apparently not awake yet.  sorry.
<jamespage> rbasak, did you figure out the mysql test failure? I had a poke as well
<jamespage> rbasak, --vardir=$WORKDIR/var/ -> --vardir=$WORKDIR/var (debian/tests/upstream)
<jamespage> want me to upload?
<rbasak> jamespage: wow. That's all? Please do.
<rbasak> jamespage: I never looked. I was focused on getting it through proposed migration, and it wasn't blocked by that.
 * rbasak wonders if anyone tests their uploads nowadays
<jamespage> rbasak, OK _ I'll upload the fix
<rbasak> Thanks!
<rbasak> BTW, I uploaded a no change rebuild for pinba-engine-mysql which was blocking migration, and then it went through. So hopefully this upload will go straight through.
<jodh> rbasak/jamespage: hi guys - it appears mysql's failing ADT tests are blocking the latest upstart upload. Are you working on this failure?
<jamespage> jodh, just uploaded a fix ^^
<jodh> jamespage: any idea on turnaround?
<jamespage> jodh, it takes about ~3 hours to build normally
<jamespage> jodh, if its urgent please ask the release team to override - they can
<phunyguy> quick question... I have some LVM volumes that were temporarily in the system, but have since been removed without restart (USB), but ubuntu keeps thinking it is still there... aside from a reboot, how can I force-remove it?
<RoyK> philipballew: were the filesystem(s) mounted when you removed them?
<RoyK> phunyguy: that was for you...
<RoyK> phunyguy: echo "- - -" > /sys/class/scsi_host/host0/scan
<RoyK> just choose the right host
<phunyguy> RoyK: they were unmounted
<phunyguy> define "host" in this context?
<phunyguy> :P
<RoyK> host0 is the first
<phunyguy> right, but there are 8... how do I know which one is correct?
<RoyK> phunyguy: perhaps in /dev/disk/by-id ?
 * RoyK a bit unsure
<RoyK> anyway - it won't hurt to run a rescan on another scsi host
<phunyguy> so just hit em all
<phunyguy> roger.
<phunyguy> that worked!
<phunyguy> GENIUS!
<phunyguy> owe you beer.
<RoyK> :)
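The "hit em all" approach can be written as a loop; a sketch that takes the sysfs directory as an optional argument (defaulting to the real /sys/class/scsi_host) purely so it can be exercised safely:

```shell
# rescan_scsi [SYSFS_DIR] -- write "- - -" (wildcard channel/target/LUN)
# to every SCSI host's scan node, forcing a rescan. Needs root against
# the real /sys; harmless on hosts where nothing changed.
rescan_scsi() {
    dir="${1:-/sys/class/scsi_host}"
    for scan in "$dir"/host*/scan; do
        [ -e "$scan" ] && echo "- - -" > "$scan"
    done
}
```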
<diegonat> hi guys, i need a tip! Im going to develop a piece of software with a couple of friends so i was wondering what kind of resource i would need to have to keep in touch. Like a forum? or something else?
<jkitchen> diegonat: with each other? github and an irc channel?
<jkitchen> trello board?
<Katafalkas> trello
<diegonat> trello seems cool
<jkitchen> trello is awesome
<Katafalkas> its more then cool
<diegonat> can u chat on trello
<diegonat> ?
<Katafalkas> sort of
<Katafalkas> its real time
<Katafalkas> so yeah
<adam_g> jamespage, any idea what might have changed in the virt stack that would cause nova-compute to no longer pull in qemu-kvm?
<jamespage> adam_g, on saucy?
<adam_g> jamespage, about to test on saucy, but using havana-staging atm
<jamespage> adam_g, qemu-kvm is a transitional package for saucy
<jamespage> qemu-system-x86 is the one for saucy/havana-staging
<adam_g> jamespage, which is responsible for actually getting /dev/kvm setup now?
<jamespage> erm
<adam_g> at the end of the day 'apt-get install nova-compute-kvm' has no kvm module loaded
<jamespage> hallyn_, ^^
<smb> Its automatically created when the kernel modules load
<smb> and those load based on cpu type
<adam_g> smb, right, but what package is modprobe'ing ?
<smb> The kernel does
<jamespage> <adam_g> at the end of the day 'apt-get install nova-compute-kvm' has no kvm module loaded
<smb> well udev
<jamespage> where are you doing that?
<adam_g> jamespage, the jenkins lab. waiting for a qemu backport build
<jamespage> adam_g, unsure then
<jamespage> I'd defer to hallyn on that
<smb> adam_g, check modinfo kvm-intel for example. That has an alias section in S
<hallyn_> adam_g: yeah, qemu's postinst no longer modprobes kvm
<adam_g> jamespage, but ive tried elsewhere from the H ppa
<hallyn_> udev does it for us without qemu being installed
<jamespage> adam_g, actually that might be it
<hallyn_> adam_g: are you running linux-image-generic?
<jamespage> hallyn_, is that udev in saucy doing that? does precise run the same way?
<adam_g> hallyn_, yea.. the kvm_intel.ko is available
<hallyn_> jamespage: i think it does, but i don't recall for sure when that was introduced.  smb?
<hallyn_> adam_g: available, but not modprobed.  what does kvm-ok say?
<smb> hallyn_, I would need to check git but in Saucy at least
<smb> Maybe before
<hallyn_> adam_g: removing the 'modprobe kvm_intel' from qemu.postinst was deliberate...
<adam_g> hallyn_, it tells me that something is wrong. cpu supports it, but module is not loaded
<hallyn_> adam_g: ok but loading the module by hand works ok?
<adam_g> hallyn_, yup
<hallyn_> ok.  then i guess we have a packaging issue.
<adam_g> hallyn_, AFAICS everything is where it needs to be except the module being loaded
<hallyn_> adam_g: do you have
<hallyn_> /lib/udev/rules.d/40-qemu-system-common.rules:KERNEL=="kvm", GROUP="kvm", MODE="0660"
<adam_g> hallyn_, yeah
<hallyn_> huh.  then why didn't it run?
<hallyn_> oh i'm not thinking right
<adam_g> hallyn_, is that responsible for loading the module itself, or just setting the permission on /dev/kvm when it shows up?
<hallyn_> adam_g: yeah the latter
<smb> hallyn_, So that was introduced with the 3.5 kernel
<smb> Which means Q, R and S
<adam_g> ok, so the issue should be limited to running this on precise?  booting a saucy machine now
<hallyn_> adam_g: but shouldn't your cloud-whatever ppa be depending on a newer kernel than 3.2?
<adam_g> hallyn_, no, we haven't done any kernel backports
<hallyn_> bleh
<hallyn_> adam_g: how big a deal is it to add a delta to the backported qemu pkg?
<adam_g> hallyn_, its not that we've had to do before, but i always assumed we would need to eventually
<adam_g> *not something
<hallyn_> adam_g: ok, well we *could* put the modprobing logic back in at saucy as well.  the problem was that in situations like containers the modprobe could fail, and it all just got too ugly to bother since we expect it to be loaded already
<adam_g> what specifically is introduced in 3.5 that all of this depends on?
<hallyn_> smb: ^ how is the kvm autoloading done?
<smb> hallyn_, The modalias that triggers on cpu type and features
<smb> alias:          x86cpu:vendor:*:family:*:model:*:feature:*0085*
<smb> the amd module has a similar one for svx
<smb> hallyn_, When the cpu is registered it shows up on cpu bus and emits a device add uevent. from there normal magic matches the alias to a module to probe
<adam_g> smb, thanks
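The checks discussed here can be strung together into a quick diagnostic; a sketch (kvm-ok comes from the cpu-checker package; on AMD hardware substitute kvm_amd for kvm_intel):

```shell
# Is a kvm module already loaded? On pre-3.5 kernels (e.g. precise's
# 3.2) nothing autoloads it, so it may need a modprobe by hand.
kvm_state=$(grep -q '^kvm' /proc/modules && echo loaded || echo missing)
echo "kvm module: $kvm_state"
if [ "$kvm_state" = missing ]; then
    echo "try: sudo modprobe kvm_intel   # kvm_amd on AMD CPUs"
fi
# On 3.5+ kernels, 'modinfo kvm_intel | grep ^alias' shows the
# x86cpu: modalias that udev matches to load the module automatically.
```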
<hallyn_> adam_g: what's your plan stan?
<adam_g> hallyn_, trying to capture the issue in a bug
<hallyn_> ok
<adam_g> hallyn_, if its between backporting the kernel, or a small qemu delta, i vote for the latter
<adam_g> i assume it would be a small delta?
<hallyn_> adam_g: yup
<hallyn_> to qemu-system-common.qemu-kvm.upstart or somesuch
<adam_g> hallyn_, also, when checking saucy i noticed permissions: crw------- 1 root root 10, 232 Aug 29 15:09 /dev/kvm
<hallyn_> adam_g: that's without qemu-system-x86 installed?
<hallyn_> kvm group doesn't exist at that point, so that would be correct
<adam_g> hallyn_, oh duh. ya
<adam_g> without that missing udev rule
<prime> I have a storage array on which I am trying to install ubuntu 12.04.3 precise pangolin LTS. I've used the Universal-USB-Installer which seems slightly outdated. The boot menu had a hard time finding the installation files. Any tips?
<prime> I'm using a usb thumb drive. This time I'll format it prior to copying the slightly newer 12.04.3 precise pangolin lts as Unlisted rather than the known 12.04.2 server.
<oz0ne> rbasak: i registered the bug, but now i do see I can actually do the change directly on the wiki page. It just took some time after registering as a part of the team until i could actually edit the page. This is not directly related to the server guide but rather the wiki page. Tho if you do follow the instructions, install amavis opendkim and postfix, and you decide to enable signing you
<oz0ne> will most likely get this double sign behavior as this line will make the milters run two times. http://bazaar.launchpad.net/~ubuntu-core-doc/serverguide/raring/view/head:/serverguide/C/mail.xml#L1616
<rbasak> oz0ne: which wiki page?
<oz0ne> https://help.ubuntu.com/community/Postfix/DKIM
<rbasak> Ah, right.
<rbasak> oz0ne: sorry, I was mistaken. You're right - you can just edit that. Please do!
<oz0ne> rbasak: https://bugs.launchpad.net/serverguide/+bug/1218335 read my correction: it kinda also applies to the server guide; if you still agree I will make a patch\merge request.
<uvirtbot> Launchpad bug 1218335 in serverguide "opendkim double header signing following Postfix/DKIM guide if amavis is also in use" [Undecided,New]
<rbasak> oz0ne: I see. Yes, that makes sense. Please go ahead with the merge request.
<rbasak> (I'm not sure about the issue itself - just that you're filing the bug, editing the wiki, submitting the merge proposal in the right place, etc)
<koolhead17> adam_g: ping
<adam_g> koolhead17, hey
<koolhead17> adam_g: will you be in town on 6th?
<adam_g> koolhead17, should be
<koolhead17> cool. lets catch up for coffee/breakfast then
<koolhead17> DM me your number
<adam_g> jamespage,  thoughts on carrying a delta re:  https://bugs.launchpad.net/bugs/1218459 ?
<uvirtbot> Launchpad bug 1218459 in qemu "nova-compute-kvm does not function without manually loading kvm module" [Undecided,Invalid]
<jamespage> adam_g, +1
<adam_g> jamespage, where should we track it? should we keep stacked branches in lp:~ubuntu-cloud-archive repositories for these types of packages?
<arrrghhh> hello all.  I have mdadm running a raid1 array on two 1tb hdd's.  I kind of want more speed, as I have my source for Android on this array... I don't so much care if I lose the data.  Can I convert it to raid0 "in place" or do I have to migrate the data to another disk/array, and format/rebuild the array new as raid0?
<rbasak> I'm not aware of any way to convert md in place, but perhaps I'm missing latest developments there.
<arrrghhh> yea, based on my knowledge/what I've read I have to rebuild the array
<sarnold> throwing it on an ssd would probably give you better performance improvement for less effort.
<arrrghhh> I'm wondering if it'll help my build times that much even... hum.  Just annoying.  Rig used to build in about 20 minutes, and now it takes nearly 40
<arrrghhh> ssd is there, it is used for ccache and out/
<arrrghhh> I guess I could get another and put the source on it
<RoyK> arrrghhh: ccache?
<arrrghhh> RoyK, my rig builds Android, among other things.
<RoyK> arrrghhh: iirc you can convert it to raid0, but I haven't tried it
<arrrghhh> based on my readings, I will have to backup the data if I want to preserve it... and I'd rather not redownload the source if I can help it.
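For the record, the backup-and-rebuild path looks roughly like this. Device and partition names are hypothetical, and the create step destroys everything on the member disks, so back up anything worth keeping first:

```shell
# Tear down the raid1 mirror and recreate the same members as raid0.
sudo mdadm --stop /dev/md1
sudo mdadm --zero-superblock /dev/sda2 /dev/sdb2
sudo mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
sudo mkfs.ext4 /dev/md1     # brand-new filesystem; restore data afterwards
sudo mdadm --detail --scan  # use this output to update /etc/mdadm/mdadm.conf
```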
<RoyK> I have a vm with 13.04 (or was that 13.10)
<RoyK> my raidtest vm
<arrrghhh> nice
<RoyK> arrrghhh: ok, so two drives in raid1 and you want to split and merge to raid0?
<arrrghhh> yes, basically.  space is not a concern, I just want moar speed.
<arrrghhh> which another ssd would be better... but I am cheap :)
<RoyK> arrrghhh: hrmf - seems I bricked that vm somehow
<RoyK> arrrghhh: give me 10 minutes
<RoyK> arrrghhh: shouldn't take too long to setup a new vm :P
<RoyK> arrrghhh: there's something about zfs on a few drives and an ssd for caching ;)
<RoyK> arrrghhh: took a bit more time to reinstall that vm - 20 mins
<RoyK> arrrghhh: still waiting for resync on that mirror
<RoyK> arrrghhh: still here?
<cyphermox> utlemming: ping?
<utlemming> cybermox: pong
<utlemming> cyphermox: pong
<cyphermox> utlemming: hehe. this is about https://code.launchpad.net/~med/ubuntu-seeds/sosreport/+merge/179485
<utlemming> cyphermox: yeah, I have spoken to med about that
<utlemming> med: ^
<utlemming> cyphermox: I had suggested to med that it being in the ubuntu-server seed might be more appropriate, given that it is useful on servers, not just cloud instances
<cyphermox> ok to add to the cloud-image seed? I think that's all that's left... though I'd be more comfortable for one of you guys to handle it rather than I change the seed :)
<utlemming> smoser: ping
<cyphermox> sure
<cyphermox> server is a good idea as well
<utlemming> cyphermox: I have no objections, I just want to check with smoser, et al of the server team first
<cyphermox> np
<utlemming> smoser: sosreport to ubuntu-server or cloud-image seed?
<smoser> yeh
<smoser> what is the added delta ?
<smoser> i have only objected in the past based on python2.
<utlemming> smoser: sosreport does not include any new dependency in the images
<utlemming> smoser: in checking on a fresh saucy, its just an extra package
<smoser> is there an mp ?
<smoser> i'l just do it now if there is.
<utlemming> smoser: for cloud-images, https://code.launchpad.net/~med/ubuntu-seeds/sosreport/+merge/179485
<utlemming> smoser: I think there is one for ubuntu-server, but I need to check
<utlemming> smoser: I would rather see it in server, though
<utlemming> smoser: there isn't one for server
<clusty> hey
<smoser> there is a bug on this.
<smoser> https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1206106
<uvirtbot> Launchpad bug 1206106 in sosreport "[MIR] sosreport" [High,Fix released]
<clusty> last upgrade hung so i had to close the console. now every time i run "sudo dpkg --configure -a" as suggested it still hangs at "Setting up apache2-mpm-worker (2.2.22-6ubuntu5.1)"
<clusty> clues how I could flush all apaches so i can restart ?
<smoser> utlemming, i just pushed cloud-utils -> cloud-guest-utils in images
<smoser> in cloud seed
<utlemming> smoser: ack
<clusty> anybody ?
<arrrghhh> RoyK, sorry I was at work.. then lunch.
<arrrghhh> now back to work, but I'll be semi-here
<RoyK> arrrghhh: I quit testing - don't know
<arrrghhh> hah np
<med_> smoser, utlemming cyphermox : catching up on sosreport...
<med_> it's already in Main per that MIR smoser found
<cyphermox> yes
<med_> and it is still python 2.7 based though is scheduled to be reworked ahead of 14.04
<cyphermox> med_: I'm going through the sponsoring overview, the seed change was up there
<med_> nod.
<cyphermox> I just wanted to get an ack from people before changing seeds I don't normally touch
<med_> nod.
<smoser> cyphermox, you can have ack from me.
<smoser> i was about to do it.
<smoser> you can do i.
<smoser> it.
<cyphermox> can you?
<cyphermox> I'm beyond busy with Touch stuff :)
<LargePrime> I cant find a google answer to this.
<LargePrime> What does the server load number mean?
<genii> LargePrime: The wikipedia page covers it fairly well. http://en.wikipedia.org/wiki/Load_%28computing%29
<LargePrime> genii: Thanks so much
<LargePrime> i have a problem  did an apt-get upgrade and i am getting bitched at cause  /boot/initrd.img-3.2.0-40-generic updated?
<LargePrime> Configuration file `/etc/issue'
<LargePrime>  ==> Modified (by you or by a script) since installation.
<LargePrime>  ==> Package distributor has shipped an updated version.
<LargePrime> what should i choose?
<LargePrime> sorry for the noob question
<sarnold> LargePrime: probably doesn't matter -- it just shows a login banner to the login prompt on virtual consoles.
<sarnold> LargePrime: if you accept the package version, you probably won't be bothered in the future. :)
<mgw> so I have what may be an odd question. What would be the performance implications of an lvm over a loopback inside a qcow image?
<sarnold> it probably won't be fast, but you knew that right? :)
<MACscr> I want to run ubuntu server from a usb thumb drive, but more importantly, i want it to mostly be running in just a ram disk for speed and to minimize writes to the usb drive. I do though still want some persistency. Any suggestions on making that happen? Its going to be for a storage node.
<mgw> sarnold: I assumed that, yes… here's what I'm trying to accomplish: I want to run lxc containers inside a host which is a kvm guest.
<sarnold> mgw: should be alright, people use nested kvm for testing / development work all the time. I don't know if I'd want to rely on that for deployment, but lxc-in-kvm ought to be less painful than that :)
<sarnold> MACscr: look into "laptop mode"
<MACscr> sarnold: i think thats power related and since this is a server, its going to be running 24/7
<sarnold> MACscr: limiting the number of disk writes feels like a useful thing to aim for however
#ubuntu-server 2013-08-30
<codepython777> http://ppa.launchpad.net/yannubuntu/boot-repair/ubuntu/ - whats this ppa?
<MACscr> anyone having issues with the ubuntu repos tonight?
<MACscr> getting resolving issues
<bradm> MACscr: what are you seeing?
<MACscr> nvm, guess the system got some networking issue after upgrading from 12.04 to 12.10
<MACscr> still havent solved why the networking is no longer working though =(
<MACscr> doesnt appear to get an ip from dhcp anymore
<Rallias> So... every time in the last month that I've done init 6 on my newly installed ubuntu servers, it's just shut down the machines... is this intentionally diversionary behaviour or is it something I should be concerned with?
<rbasak> Rallias: "init 6" is an unusual way to do it. I think it should work, but it's an unusual path, given that we don't use Sys V init. Does "reboot" work?
<Rallias> No...
<Rallias> rbasak, The only reboot thing that works reliably is sysrq...
<rbasak> Rallias: sounds like you have an issue there then.
<laite> Hi, I have spent past hour trying to figure how to install Ubuntu server 12.04.3 from usb - no matter what I seem to do, I'm always stuck on "problem reading data from CD-ROM"
<rbasak> !details
<ubottu> Please give us full details. For example: "I have a problem with ..., I'm running Ubuntu version .... When I try to do ..., I get the following output: ..., but I expected it to do ..."
<laite> I have tried creating install usb with dd, unetbooting and startup disc creator, and everytime the result is same
<rbasak> So you can't boot the installer at all?
<rbasak> At what stage do you get the error?
<laite> installer boots, I can set locale and such, but it stops on error where it "can't find cdrom"
<rbasak> I see.
<rbasak> Have you checked the image you're writing?
<rbasak> md5 or sha256 or whatever.
<laite> ah, no, obviously I should've
<laite> I'll see to it now
<laite> yes, at least the downloaded file seems correct
<laite> how do I check if it's correct on my usb?
<rbasak> You can dd the other way
<laite> ah, thanks
<rbasak> dd if=/dev/... bs=128k|cmp - original.iso
<rbasak> Or something like that
<rbasak> (otherwise you'll go past the end and it'll come up wrong anyway)
<rbasak> Or check the size and calculate and use dd count=
<laite> rbasak:  I ran `dd if=/dev/sdd bs=128k|cmp - ubuntu-12.04.3-server-amd64.iso` and it gave just a line: "cmp: EOF on ubuntu-12.04.3-server-amd64.iso"
<laite> does this mean it's all right?
<rbasak> Yes
<laite> so the disk isn't faulty
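rbasak's dd|cmp check, with the count handled via head so the comparison stops exactly at the image's size (the device name in the usage line is a placeholder):

```shell
# verify_image DEVICE IMAGE -- read back exactly the image's size from
# the device and byte-compare it against the original file.
verify_image() {
    bytes=$(stat -c %s "$2")
    if head -c "$bytes" "$1" | cmp -s - "$2"; then
        echo "match"
    else
        echo "MISMATCH"
    fi
}
# Usage:  verify_image /dev/sdd ubuntu-12.04.3-server-amd64.iso
```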
<rbasak> Next, I'd try another machine and see if you can get past that stage. It's before anything actually gets done to the disk, right?
<laite> right, I'll try with my laptop
<rbasak> There might be some BIOS settings around USB boot emulation to tweak
<laite> still the same "can't find cd-rom" error on another machine
<kalle_> I have a problem with the package "ftp-proxy". I have set up a reverse proxy for my FTP on a firewall/NAT machine, but I get random "Connection reset by peer" when doing file listings in empty directories...
<rbasak> yolanda: regarding https://launchpadlibrarian.net/147274320/buildlog_ubuntu-saucy-i386.rubyluabridge_0.7.0-2ubuntu1_FAILEDTOBUILD.txt.gz, the buildds resolve build dependencies slightly differently. I reproduced your sbuild result OK, and compared the build dep resolution against the archive build failure. I get these differences: http://paste.ubuntu.com/6043311/. It looks to me that on the buildds things like "require rdoc/task" and "require mkmf" a
<rbasak> I never figured out if there's a way to get sbuild to do exactly what the buildds will do for build dependency resolution. It's annoying.
<yolanda> rbasak, what i saw is that my local sbuild uses ruby1.9 and builder there ruby1.8
<yolanda> that should be the key
<yolanda> i need to force ruby1.9
<rbasak> I get both ruby1.8 installed in both environments.
<rbasak> And ruby1.9.1-dev in both environments.
<rbasak> Some source package intentionally build for both ruby1.8 and ruby1.9.1, so dropping one and forcing another may cause a regression in that functionality. I'm not sure if that applies to this package, though.
<rbasak> I think that adding a build-dep on ruby and/or ruby1.9.1 may fix it.
<yolanda> rbasak, i was comparing my build log with that, mine is setting up update-alternatives: using /usr/bin/ruby1.9.1, and builder is setting up ruby1.8
<yolanda> so yes, trying to add ruby 1.9 as dep
<yolanda> i thought sbuilds for the same platform should work the same in all builders, so there is no way to be sure 100% that a package builds until it's uploaded?
<Daviey> yolanda: You can download the chroot used by launchpad, to better match the remote environment
<yolanda> Daviey, how can i do it, i'm interested on it
<Daviey> yolanda: wget http://launchpadlibrarian.net/147277667/chroot-ubuntu-saucy-i386.tar.bz2
<yolanda> thx
<RoyK> huh - http://paste.ubuntu.com/6043505/ - any idea? whoever came up with the idea of three kernel tracks for an LTS release?
<RoyK> damn - /boot is full -
 * RoyK sticks his head down back in the sand
<rbasak> RoyK: https://wiki.ubuntu.com/Kernel/LTSEnablementStack if you're not aware of it.
<rbasak> It's to give users of newer hardware who need new kernels the LTS with the LTS userspace and just the new kernel
 * xnox ponders if in "Investigate alternatives to mysql" session anyone mentioned PostgreSQL =)
<jamespage> xnox, no because its not an alternative to mysql as it does not implement 'mysql'
<jamespage> :-)
<xnox> =)))))) i know, but would be funny if somebody writes compat for postgresql =)
 * xnox says "what could possibly go wrong...."
<geser> xnox: like http://andreas.scherbaum.la/blog/archives/657-PostgreSQL-9.0-Includes-the-new-MySQL-Emulation-Layer.html (but look at the date)
<xnox> i love it!
<oz0ne> hi, tried to do a merge proposal from my branch. got an error "this branch is not mergeable into lp:serverguide." I did fill in Target branch (other) : ~ubuntu-core-doc/serverguide/saucy
<oz0ne> is it affected by the naming when i pushed it? "bzr push lp:~ablacksheep/ubuntu-docs/saucy-serverguide"? or am i just doing something else wrong? ty.
<oz0ne> i managed. : )
<roaksoax> jamespage: yeah exactly the same issue
<jamespage> yolanda, I think we need to disable the cinder-volume and cinder-backup DEP-8 tests - they appear to be racey
<jamespage> if LVM is not configured then they exit out at some point in time after starting...
<eSoul> Hey guys, I am running an Ubuntu 13.04 server box and I have a 2nd network card for my VirtualBox VMs.  Right now, I have to issue a "sudo ifconfig eth1 up" on boot to start the interface but I was wondering if there is something in /etc/network/interfaces I could put in to bring the interface up at boot.   It has no network configuration on it at all, just looking to "turn-on" the
<eSoul> interface
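One way to do that with ifupdown is a "manual" stanza; a sketch of what the /etc/network/interfaces entry might look like (eth1 as in the question; the up/down commands here are one common approach, not the only one):

```
# Bring eth1 up at boot with no address configured on it
auto eth1
iface eth1 inet manual
    up ip link set eth1 up
    down ip link set eth1 down
```

With that in place, `ifup eth1` at boot should do what the manual `sudo ifconfig eth1 up` did.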
<yolanda> jamespage, ok, i will do it
<roaksoax> jamespage: /win 15
<roaksoax> err
<roaksoax> sorry
<jamespage> lol
<roaksoax> jamespage: so destroying the environment would require to sync tools again?
<jamespage> roaksoax, yep - raised a bug about that as well
<jamespage> there is no 'public tools' in maas
<roaksoax> i see
<smoser> just fyi, jamespage utlemming anyone else interested
<smoser> https://bugs.launchpad.net/ubuntu/+source/ubuntu-cloudimage-keyring/+bug/1218963
<uvirtbot> Launchpad bug 1218963 in ubuntu-cloudimage-keyring "SRU ubuntu-cloudimage-keyring into archive" [Medium,In progress]
<roaksoax> /q/win 15
<banzounet> What port should I open for a mail server?
<banzounet> 25, ... ?
<banzounet> (Hi guys)
<e_t_> Yes, port 25. Perhaps other ports if you want to do more than receive mail.
<plokami666> Hello
<plokami666> Can somebody tell me if it's possible to make users able to write to a directory that is owned by a different user but have the owner user able to read/write to and from the files uploaded by the others?
<plokami666> Ubuntu server 12.04, going to use sftp
<plokami666> I need to do the above because the main user has a valid shell and I don't want the people I'll be giving sftp access to being able to ssh to the server.
<ikonia> plokami666: please don't cross post questions, you're already asking/getting help in #ubuntu
<plokami666> I'm not really getting any help and when I posted here nobody had replied to me.
<ikonia> ok, so please don't cross-post then
<ikonia> find the right channel for your question and post it
<plokami666> I wasn't aware of this channel before. I looked for it after getting silence at first in the #ubuntu one. :)
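One common answer to plokami666's question is a shared group plus a setgid directory; a sketch in which the group and directory names are made up, and the root-only steps (creating the group, locking the sftp users' shell) are shown as comments:

```shell
# setup_shared_dir DIR GROUP -- create a group-writable drop directory
# whose setgid bit makes new files inherit GROUP, so every member
# (including the owner account) can read what the others upload.
# GROUP must already exist.
setup_shared_dir() {
    mkdir -p "$1"
    chgrp "$2" "$1"
    chmod 2770 "$1"     # rwxrws--- : owner+group full access, setgid
}
# Root-only setup (hypothetical names):
#   groupadd uploads && usermod -aG uploads mainuser
#   usermod -s /usr/sbin/nologin sftpuser   # sftp-only, no ssh shell
#   setup_shared_dir /srv/uploads uploads
```

Whether uploaded files end up group-readable also depends on the sftp users' umask, so that may need forcing in the sshd sftp configuration.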
<vedic> Hello Friends. I am trying to set up a small team of 5 to 10 people to work on a project. I am looking to buy a VPS from a hosting provider and run a bug tracker and repository on it. How can I ensure that the team members can only access the bug tracker site and pull/push to the server only from particular systems?
<vedic> I use 12.04 LTS 64bit.
<sarnold> vedic: iptables; the ufw frontend is friendly and uncomplicated.
<vedic> I want to restrict that if they are not in office they should not be able to access the site
<rdw200169> vedic: why not use github organizations for private repos?
<vedic> rdw200169: We use bzr
<vedic> sarnold: ok, I take a public IP for office internet and only enable that ip on port 443 right?
<rdw200169> vedic: yup then ufw/iptables is the way to go; you can also make your vpc connect to your office (if you have  linux box there) thru openvpn to lock down access
<vedic> rdw200169: Yea, we use only Ubuntu 12.04
<rdw200169> sarnold: if your services run on port 443; i don't know what the bzr port is, but you'll need that one too
<vedic> rdw200169: oh yea
<rdw200169> vedic: really depends on your source ip on outbound traffic from the office; i assume the office will have a nat, and the external IP (or IPs) of that nat is what you'll want to use on the vps firewall
<rdw200169> vedic: if your office users are hidden behind a nat, the only way to classify traffic firewall rules per user at the vps would be with a vpn (since nat will hide the IP of your users)
<rdw200169> vedic: or you could use a captive portal or something at the vps to keep people out that don't have creds (the ways of doing this are outside the scope of this convo tho)
<vedic> rdw200169: Currently I don't have a static ip, but that is what's really required. I will get another net connection and the PCs in the office will be behind it, running on DHCP. Then on the remote server I can enable, via ufw (or iptables), only that static ip on specific ports. Hopefully that works out to access denied from anywhere else
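A minimal sketch of the scheme vedic describes, assuming ufw and a placeholder office IP of 203.0.113.10 (everything else stays blocked by the default deny):

```shell
sudo ufw default deny incoming
# allow the tracker's HTTPS and ssh/bzr-over-ssh only from the office's static IP
sudo ufw allow from 203.0.113.10 to any port 443 proto tcp
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp
sudo ufw enable
```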
<tonyyarusso> vedic: Why do you want to block access to working from home btw?
<vedic> tonyyarusso: Because as of now I don't want to allow work from home. The access to server involves access to user data which I don't want anybody to use outside the office space
<tonyyarusso> That didn't really answer why...
<tonyyarusso> Do you prohibit any electronic devices from entering or leaving the office space and do bag checks to enforce that?
<vedic> tonyyarusso: More restrictions elsewhere don't make an argument for allowing access from anywhere.
<vedic> tonyyarusso: There are levels of security which you can add. And crossing each level has different motives and efforts
<sarnold> it might be easier to just hire people you can trust and give them the access they want to do their job best.
<vedic> sarnold: Will do. But not at this stage
<sarnold> restricting access to a limited number of IPs makes a ton of sense, no reason why the application source has to be the only line of defense..
<sarnold> .. but letting employees work from home tends to improve morale and performance in my experience
<vedic> sarnold: sure. I agree.
<sarnold> (who wants to sit in the office at ten pm? no one.. but from the couch at home? sure..)
<sarnold> but it's your staff and your data :) hehe
<vedic> sarnold: yup :)
<hallyn_> sarnold: no sometimes i do like being in an office at 10pm :)
<vedic> :)
<sarnold> hallyn_: haha :D
<banzounet> Hey, is it possible to use apache to cover URL like domain.com:8000 and have thing like aaa.domain.com ?
<banzounet> and still have the port 8000 block from outside
<ikonia> mod_proxy
<ikonia> mod_rewrite
<banzounet> ikonia: thanks i'll google that
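The mod_proxy arrangement ikonia points at might look like this (a sketch; aaa.domain.com and port 8000 are banzounet's examples, and it assumes `a2enmod proxy proxy_http` has been run):

```apache
<VirtualHost *:80>
    ServerName aaa.domain.com
    # forward everything to the app listening on port 8000
    ProxyPass        / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>
```

Binding the app to 127.0.0.1, or firewalling 8000, keeps the port blocked from outside while apache can still reach it.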
<tonyyarusso> It doesn't make any sense to have "security" that doesn't actually accomplish anything except annoy people.
<tonyyarusso> That's like locking your front door while leaving the garage door open.
<thurstylark> I'm having trouble establishing a pptp tunnel. I'm completely new to vpn, and I want to learn, so any guides or help you can give me would be great
<Guest36475>  hey world
<garbagegod> Question regarding a high-volume web server setup
<garbagegod> Can anyone recommend a host / strategy for a web server which will need to serve several hundred thousand clients a minute
<garbagegod> I'm thinking an amazon ec2 with nginx serving flat html
<garbagegod> Do you guys think that'll cut it?
<garbagegod> Would you recommend using memcached?
<tonyyarusso> garbagegod: Key question:  several hundred thousand clients a minute EVERY minute, or sometimes, with several orders of magnitude less much of the time?
<garbagegod> most likely the latter
<tonyyarusso> Then Amazon's stuff may be worthwhile.
<garbagegod> I'm expecting no less than several thousand hits a minute every minute, and I'd like to be able to accommodate up to around 1 million hits/minute
<tonyyarusso> Some coworkers are trying to set things up on Amazon, and are frustrated by the added complexity.
<garbagegod> Do tell
<tonyyarusso> The alternative is just spin up normal VPS systems.
<garbagegod> I think it would be advantageous to implement a cloud based solution in this instance... a single VPS ain't gonna work
<tonyyarusso> Never said a single VPS.
<tonyyarusso> You can have a zillion of them if you want.
<garbagegod> yeah, so now it's not really that much added complexity in comparison, is it ;)
<tonyyarusso> It's whether it needs to be dynamically elastic that's complicated.
<tonyyarusso> If it really is just flat HTML and nothing else, your solution can be equivalent to a hostname with a thousand A records and a thousand Raspberry Pis.
<garbagegod> EC2 it is
<tonyyarusso> Meanwhile, you should balance the relative value of predictable costs for your business.  If for some reason there was a day when you got a TRILLION hits/minute, would you prefer to pay a million times more or have the site go down?
<tonyyarusso> I think Amazon might let you limit how big things can grow now though.
<garbagegod> Right... Obviously 1 million would be crazy high, but I would like the security because that's the high end I was given to support
<garbagegod> What kind of specs do you think I should get
<garbagegod> I assume I just need to get a small instance, do some benchmarking, and scale up as needed
<tonyyarusso> Not a damn clue :)
<garbagegod> Yeah. Just multiplying the avg RTTs and whatnot
<tonyyarusso> Yeah, if you have the ability to benchmark with some real-world data, that's obviously your best shot.
<garbagegod> Okay, thanks a lot
<tonyyarusso> Good luck - sounds like a fun project
<garbagegod> Indeed :)
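For the flat-HTML case discussed above, a minimal nginx server block might look like this (a sketch; the document root is a placeholder, and the real numbers would come from the benchmarking garbagegod mentions):

```nginx
server {
    listen 80 default_server;
    root /var/www/html;                        # flat HTML only, no app server behind it
    open_file_cache max=10000 inactive=60s;    # cache file descriptors for hot files
    expires 5m;                                # let clients and CDNs cache pages briefly
}
```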
<DammitJim> this is the weirdest thing
<DammitJim> I installed server 12.04 lts
<DammitJim> but when selecting the packages, I picked openssh and manual
<DammitJim> after that screen I got a black screen where I could install stuff or upgrade stuff. I quit out of that
<DammitJim> and finished the installation
<Lequtix> when i install it i only choose openssh
<Lequtix> then i install the rest once i reboot and setup IP address and hostname
<DammitJim> now I'm trying to install a window manager because my boss can't do terminal
<DammitJim> and I ended up adding a ubuntu-desktop ppa
<DammitJim> but during the install, it's giving me size mismatch
<Lequtix> email servers don't do well until u give the box a static ip and setup the hostname and FQDN
<DammitJim> I got an email address
<DammitJim> haven't gotten to the email thing :D
<Lequtix> DamnitJim ..  install ubuntu server .. only choose ssh server during setup
<Lequtix> once you reboot
<DammitJim> ok
<Lequtix> set static IP.. and set FQDN
<Lequtix> you know how?
<DammitJim> why do I need an FQDN?
<DammitJim> I have a static IP address
<Lequtix> email servers work off the domain names
<DammitJim> oh, I'll get to that
<DammitJim> the first part I need to do is the window manager piece
<Lequtix> well.. i recommend using webmin
<Lequtix> instead of gui
<DammitJim> LOL
<Lequtix> gui adds complexity in terms of device automounting etc.
<Lequtix> if my boss wanted a gui for his mail server i'd tell him to pony up for exchange server and server 2012
<Lequtix> hahaha
<Lequtix> that changes his mind
<Lequtix> whats the point of a gui when you have to still manage exim from terminal anyhow
<Lequtix> a terminal in a window is the same as an SSH window
<Lequtix> with a webmin interface he can manage the server through a browser from anywhere
<Lequtix> just my opinion
<TJ-> Always dangerous; allowing a non-technical manager to do such a thing. Make sure and deploy git to /etc/ and a cron job to commit changes every 5 minutes or so
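TJ-'s git-plus-cron idea can be sketched like this; the demo below runs against a temp directory standing in for /etc so it can be tried safely as a normal user. On a real box you would run the same commands in /etc as root, or just install the etckeeper package, which automates exactly this.

```shell
# crontab entry for the real thing (every 5 minutes):
#   */5 * * * * cd /etc && git add -A && git -c user.name=autosave -c user.email=root@localhost commit -q -m autosave || true
ETC=$(mktemp -d)            # stand-in for /etc
cd "$ETC"
git init -q .
echo "ServerName mail.example.com" > demo.conf
git add -A
git -c user.name=admin -c user.email=root@localhost commit -q -m "baseline"
echo "ServerAlias mail2.example.com" >> demo.conf   # the boss edits something...
git add -A
git -c user.name=admin -c user.email=root@localhost commit -q -m "autosave"
git log --oneline           # every change is now recorded and recoverable
```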
<mosh> ummm
<Lequtix> installing a gui on a server is just retarded in my opinion
<mosh> You can also use Redmail, for a /type/ of GUI
<Lequtix> exim4u.org also has a webmin thingie for exim only
<mosh> but I don't know if it supports ubuntu server, I know it works on debian.
<mosh> Which really it should support Ubuntu Server.
<mosh> If it works on Debian.
<DammitJim> ok, guys, I just started working here
<Lequtix> HAHAHAHA
<DammitJim> and I've got the ultimate decision
<Lequtix> i feel your pain man..
<DammitJim> LOL... I actually would have installed debian
<Lequtix> i have had bosses like that
<DammitJim> but he pushed for Ubuntu
<DammitJim> LOL
<Lequtix> ugh
<Lequtix> so what is the mandate for the project then?
<Lequtix> ubuntu linux, Gnome desktop, and exim4 mail server?
<TJ-> If you go with Ubuntu you have a faster upgrade cycle. For example, if you want to host HTTPS, then 13.10 has apache 2.4 which supports perfect forward secrecy via SSLCipherSuite, which is provided by OpenSSL, and can be implemented also in Postfix, Dovecot, Stunnel, and others that require TLS
<Lequtix> fast upgrade cycle sucks in business cus there's always better things to be doing with your time
<Lequtix> if it works don't fix it
<Lequtix> my opinion
<Lequtix> im sure i'll get yelled at
<Lequtix> :O
<TJ-> Not when it comes to security
<DammitJim> actually, he listened to some other dude that has been here for over a decade and he said lxde
<Lequtix> i keep everything behind a firewall.. only necessary ports are exposed
<DammitJim> no need to be on the fast lane for this project
<DammitJim> dammit... I have this size mismatch crap going on
<Lequtix> lxde is nice
<DammitJim> doing apt-get upgrade
<Lequtix> try apt-get -uy dist-upgrade
<DammitJim> why dist-upgrade?
<Lequtix> maybe  clean it first
<Lequtix> apt-get --purge clean-all
<Lequtix> apt-get --purge clean all
<Lequtix> i forget exactly what it is
<Lequtix> i find the dist-upgrade works better for me
<TJ-> "apt-get clean" will remove the packages in /var/cache/apt/archives/.
<Lequtix> yea.. he's having issues with the packages and the database i think
<Lequtix> clean it all out.. re-update
<Lequtix> then re-upgrade
<Lequtix> see if that solves the upgrade issues
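The command Lequtix is reaching for is plain `apt-get clean`; a typical recovery sequence for size-mismatch errors (a sketch, run with sudo) is:

```shell
sudo apt-get clean                    # empty /var/cache/apt/archives
sudo rm -rf /var/lib/apt/lists/*      # optionally also drop the downloaded package indexes
sudo apt-get update                   # refetch the indexes
sudo apt-get upgrade                  # retry the upgrade
```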
<Lequtix> doesn't it clean out the repository database too?
<DammitJim> the clean didn't work
<DammitJim> ugh, I'm out of time
<DammitJim> I'll have to drill this on Monday
<DammitJim> no, Tuesday
<Lequtix> i would do a fresh install then
<DammitJim> yup
<Lequtix> save urself some time
<DammitJim> I'm changing the settings in vmware to boot from iso
<TJ-> You should pastebin the error messages
<Lequtix> so did ur boss mandate how you are to partition it too?
<Lequtix> so this is a VM?????
<Lequtix> why the hell put a gui on a fricking VM
<Lequtix> thats even more stupid
<Lequtix> no offence
<Lequtix> lol
<TJ-> it's offensive to put a GUI in a VM! :D
<Lequtix> its completely pointless
<Lequtix> first you need a windows workstation to run the vsphere client
<Lequtix> secondly the performance sucks
<Lequtix> thirdly the vmware tools are a pain in the ass
<Lequtix> Quit that job Jim .. find another one
<Lequtix> lol
<Lequtix> just kidding..
<Lequtix> i'm sure there's a vmware appliance for a standard ubuntu 12.04 LTS install
<Lequtix> already packaged
<DammitJim> weird... what is that manual package selection option where one picks openssh ?
<Lequtix> just don't pick anything
<DammitJim> should I select that or install ubuntu-desktop later?
<DammitJim> when do I do the ubuntu-desktop piece?
<Lequtix> you want the desktop for sure?
<Lequtix> in a vm?
<DammitJim> yes
<DammitJim> it sucks
<Lequtix> ok..  install it then
<DammitJim> but I gotta do it
<DammitJim> but how is the question
<Lequtix> let the installer do it for u
<Lequtix> it's not in tasksel?
<DammitJim> tasksel
<DammitJim> where is that?
<Lequtix> maybe that's debian
<DammitJim> where you pick the packages like openssh mail samba, etc?
<Lequtix> ya
<DammitJim> on debian you at least see a window manager option LOL
<Lequtix> ok.. don't pick anything
<DammitJim> here it just says manual selection or something
<Lequtix> just install base ubuntu
<Lequtix> then when u get into the os..  configure all the networking and FQDN
<Lequtix> then update/upgrade
<Lequtix> then apt-get install tasksel
<Lequtix> then sudo tasksel
<Lequtix> it will give u the software selection menu that you are used to
<Lequtix> https://help.ubuntu.com/community/Tasksel
<Lequtix> you should probably install the vmware tools first tho
<Lequtix> before installing everything
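Lequtix's recipe, collected into one place (a sketch; the exact task names tasksel offers vary by release):

```shell
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install tasksel
sudo tasksel          # pick the desktop task you want, e.g. an LXDE/Lubuntu one if listed
```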
<DammitJim> thanks
<DammitJim> running
#ubuntu-server 2013-08-31
<mgw> How can I (externally) determine the interface MAC for an LXC container that was created from the default config?
<sarnold> mgw: does 'ip link' show them?
<mgw> sarnold: it doesn't show the interfaces inside the container's namespace
<mgw> I might be able to figure something out using ip netns
<sarnold> mgw: maybe tcpdump -i <bridge> -e ?
<mgw> I'm using macvlans
<mgw> sarnold: ^
<mgw> sarnold: I'm wanting a way to a) create a container with lxc-create; b) determine the hwaddr for the container's iface(s); c) do some stuff with those addrs before actually starting the container
<sarnold> ah, so way beyond my experience, not just a little bit. hehe. :)
<sarnold> mgw: hey, check this out: https://wiki.debian.org/LXC/SimpleBridge
<sarnold> mgw: lxc.network.hwaddr
<sarnold> mgw: there's plenty of mentions of that in the lxc package, I expect our packages honor it too
<mgw> sarnold: yeah, i may just generate hwaddrs and use that (that's what I was doing during testing). I was trying to avoid creating configs for every container.
<mgw> i can actually do some tricks to find the pid for the cgroup, and then do this:
<mgw> ip netns exec <pid> ip addr
<mgw> but i'm not sure it's worth it
<mgw> sarnold: for the record, I was able to do this:
<mgw> sudo lxc-execute -n container-name -f /etc/lxc/lxc.conf ip addr
<mgw> the trick was passing the -f arg rather than allowing it to use the auto-generated config associated with the container
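The alternative sarnold points at, pinning the MAC up front with lxc.network.hwaddr so it is known before the container ever starts, looks roughly like this in a per-container config (a sketch using mgw's macvlan setup; eth0 is a placeholder):

```
lxc.network.type = macvlan
lxc.network.link = eth0
# 00:16:3e is the conventional LXC/Xen prefix; literal x's are randomized per container
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
```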
<stlu> Bind9 - I'm trying to learn to configure dns - will it talk with a DSL Modem/router and become a secondary DNS for the router's hosts?
<subman> how do I install sendmail?  I've actually installed it but how do I set it up?  It does not seem to be sending email with the default settings.
<stlu> subman: there is a wiki page to configure sendmail.  I used another ubuntu wiki to learn to setup postfix.  It was very clear and thorough.
<stlu> brb
<subman> stlu address
<linuxtech> bind does not use a hosts file.
<linuxtech> stlu: why do you want to use sendmail, Ubuntu uses postfix by default and Debian uses exim4.  I use exim4.
<sarnold> man, there's an 800 page book for configuring sendmail. I'm not sure it's long enough.
<subman> sarnold, 800 pages????  The space shuttle manual was smaller than that!
<sarnold> subman: haha :)
<sarnold> wow. 4th edition has 1312 pages: http://shop.oreilly.com/product/9780596510299.do
<sarnold> ask yourself, if you had to write a mail server from scratch, how many printed pages do you think it would take?
<subman> Well I just need the page that helps me getting the basics setup.
<subman> I've run sendmailconfig and still not working
<sarnold> subman: getting mail working is a bit involved; there's remote -> local, local -> remote, forwarding... what doesn't work? are there any details in the log files?
<subman> What port is sending?
<subman> Ah, 25.  It is blocked I think.
<linuxtech> The sending port varies; the receiving port is 25 for SMTP.
<subman> So why else would sending not work?
<linuxtech> I haven't used sendmail in more than 15 years...  So it could be something with it, or some other networking issue.
<linuxtech> I suggest you try postfix or exim4.
<crass> ubuntu is phasing out the ia32-libs, so what's taking its place?
<subman> I've tried postfix but how to configure?
<linuxtech> Did you try https://help.ubuntu.com/lts/serverguide/email-services.html
<linuxtech> crass: http://summit.ubuntu.com/uds-1308/meeting/21877/client-s-32v64-bit/ perhaps that session will answer your question.
<subman> let me see
<subman> is there no bare bones, teach me like i'm 5 tutorial?
<subman> Anyone could help and it would be appreciated.
<linuxtech> https://help.ubuntu.com/lts/serverguide/exim4.html
<crass> linuxtech: that seems to be a discussion of 32 vs 64 bit, not running 32bit on 64 bit.  Or have I not watched long enough?
<sarnold> crass: welcome back, you didn't miss any responses..
<crass> thanks sarnold, that answers my other question (did my message get sent :)
<sarnold> hehe :)
<stlu> back
<stlu> so generally, do DNS servers work with home routers?  I have a Bell 2Wire modem/router which provides local DNS for DHCP clients with hostnames.
<sarnold> stlu: you'll probably want to configure the dns server to be a forwarder rather than a recursor
<stlu> hmmm, well the Bell router isn't open to being a secondary DNS to a LAN host, so...
<stlu> I would have to make my DNS a secondary to the home router/dhcp/dns, or just ignore the DNS service it provides for LAN hosts...
<stlu> (assuming the home router will actually forward the LAN dhcp client records)
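sarnold's forwarder suggestion would look something like this in /etc/bind/named.conf.options (a sketch; 192.168.1.254 is a placeholder for the 2Wire router's LAN address):

```
options {
    directory "/var/cache/bind";
    forwarders { 192.168.1.254; };   // hand every query to the router
    forward only;                    // never recurse to the root servers ourselves
};
```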
<MACscr> is there an ubuntu image out there for running ubuntu server from usb? I want to make sure the OS is run from ram mostly, but i still need persistent storage for updates/configs. Im wondering what would be the best way to do that part too.
<MACscr> though i probably wouldnt update it much since it would be a storage server
<mardraum> standard server image installs to a usb just fine
<MACscr> mardraum: right, but it then runs from usb as well instead of a ramdisk. Correct?
<mardraum> yes?
<MACscr> mardraum: so that wouldnt really answer my question unless you stopped at the first sentence.
<mardraum> "ran from ram mostly"
<mardraum> if you wanted a ramdisk, why didn't you say?
<MACscr> mardraum: my apologies for not making that more clear
<ice9> Are there other alternatives for ownCloud?
<Jeeves_> Morning
<Jeeves_> init: dbus main process (3468) terminated with status 1
<Jeeves_> init: dbus main process ended, respawning
<Jeeves_> init: plymouth main process (514) killed by SEGV signal
<Jeeves_> init: plymouth-splash main process (3389) terminated with status 2
<Jeeves_> init: plymouth-log main process (3447) terminated with status 2
<Jeeves_> Anyone with good ideas on how to debug that?
<Douug> hello all, anyone know of a good guide for ubuntu MS Exchange replacement? I have tried the Samba4 , Sogo and OpenChange solution from Google but Outlook hangs on loading profile and Samba4 crashes
<RoyK> Douug: zimbra?
<RoyK> Douug: samba doesn't have anything to do with email/calendar etc
<Douug> thank you, tried it but not happy with the community edition.
<RoyK> why not?
<Douug> let me find the guide i was using .. brbr
<RoyK> Douug: keep in mind that zimbra should be installed on a dedicated machine/vm
<Douug> http://iabsis.com/EN/article/35/Installation-de-Samba4-et-Openchange-sur-un-serveur-Debian-ou-Ubuntu
<Douug> also want an active directory replacement
<Douug> doesnt the outlook connector for Zimbra cost a little ?? want to try to put the free edition of everything in, or at least compile something to that extent
<Douug> RoyK, what do you suggest I do ?
<RoyK> Douug: here?
<ikonia> RoyK: did you find out what was going on with that raid box and core dumping samba ?
<RoyK> ikonia: somewhat - samba is still getting segfaults every now and then, but after a few fsck and removing lots of old backups, the root seems healthy
<ikonia> any idea why it was dropping out ?
<RoyK> ikonia: btw, it's just the root, the data raid is healthy
<ikonia> yes, but root was mirrored wasn't it ?
<RoyK> ikonia: no, 24 hours of memtest showed no errors
<RoyK> ikonia: it's mirrored, yes, two old 250GB SATA drives
<ikonia> so it just "fixed" itself after a few fsck/reboots
<ikonia> how good but frustrating
<RoyK> quite so
<RoyK> if the problem reappears, I guess we'll have to reinstall the thing
<RoyK> not much more than a fileserver anyway, so not a big issue
<ikonia> coupled with samba, it just seems such an annoyingly odd problem
<RoyK> it certainly is
<Douug> sorry ROyK, stepped out
<RoyK> Douug: ping
<Douug> hello
<RoyK> for AD replacement, I guess the nearest you get is samba4
<RoyK> perhaps openldap
<RoyK> both can be integrated with zimbra
<RoyK> if you want outlook clients getting native access to the machine, it gets trickier
<RoyK> commercial zimbra can do it, but then, it's not open
<slylock105> hello guys
<slylock105> i wanted some help in setting up virtual hosts
<slylock105> using apache
<slylock105> can anyone help
<RoyK> Douug: I was in a meeting some three years back with the norwegian meteorologic institute, met.no, and they chose zimbra over exchange, and only after a few months, 95% or so of the users used the web client over outlook
<RoyK> slylock105: it's quite easy to google that
<TheLordOfTime> RoyK:  i think he wants his hand held through the process
<TheLordOfTime> :/
 * TheLordOfTime runs and hides :P
<slylock105> i know been googling for last half an hour
<Douug> if I download the ZEG (Zero Effort Groupware) as a virtualbox machine, it has LDAP and Sogo and it works perfectly, unfortunately I can't introduce it to my clients as a production machine
<slylock105> it should be simple but cant figure out
<Douug> supporting Zimbra is MUCH cheaper than funding MS
<RoyK> slylock105: first hit on google http://httpd.apache.org/docs/2.2/vhosts/
<slylock105> RoyK: Well I am very confused i am trying to setup two virtual host
<virusuy> morning guys
<slylock105> i have a website already running under apache
<RoyK> !ask | slylock105
<ubottu> slylock105: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<virusuy> zimbra <3
<RoyK> slylock105: or what I'm trying to say is 'be precise'
<Douug> slylock105, have you created another vhost file in /etc/apache2/sites-available and ln -s'd it into /etc/apache2/sites-enabled?
<slylock105> okay, I am trying to set up mutiple virtual hosts under apache. Right now I just have a file named default in /etc/apache2/sites-available and directs me to a website when i type in  example.com
<RoyK> slylock105: really - take a look at /etc/apache2/sites-available/default, copy the relevant stuff to another file in that dir, symlink it into /etc/apache2/sites-enabled/ and reload apache
<slylock105> I wanted apache to redirect me to a different www folder, when i type mtn.example.com. I have added a new virtual host in /etc/apache2/sites-available/default and specified the root directory of the other website
<slylock105> thats is what i have done so far
<slylock105> but example.com works the other one dosent
<Douug> have you enabled mod_rewrite?
<slylock105> i added the other virtual host to the same file, /etc/apache2/sites-available/default
<slylock105> I copied the same Rewrite rules i had for the original virtual host
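For what slylock105 describes, separate vhost stanzas keyed on ServerName are usually enough, without any rewrite rules (a sketch; the domains and paths are examples, and mtn.example.com must also resolve in DNS):

```apache
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example
</VirtualHost>

<VirtualHost *:80>
    ServerName mtn.example.com
    DocumentRoot /var/www/mtn
</VirtualHost>
```

Each stanza would go in its own file under /etc/apache2/sites-available/, enabled with a2ensite and an apache reload.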
<nizar> hey guys, is it possible to limit SSH logins to only a specific user or goup in ubuntu server?
<nizar> im trying to run an ubuntu server from home that i want to be able to access from the internet so i'm thinking of limiting SSH logins to a single user
<Douug> why dont u use public key authentication
<Douug> for a specific user
<rdw200169> nizar: yeah... using keys is far and away preferred to password authentication.  Please consider this for your sanity!
<nizar> i care about my sanity!
<nizar> ill use that instead, thanks
<nizar> any other quick tips?
<rdw200169> nizar: if you go that route, just disable password auth in the sshd config
<rdw200169> nizar: (this is ridiculously easy to do, and is probably mentioned in comments in the sshd config)
<nizar> cool
<Douug> change the SSH port
<rdw200169> nizar: you can also add a Host config to your 'remote host' (i.e. laptop) that makes logging in easy
<nizar> yeah, ive already changed the SSH port on it
<rdw200169> nizar: when you're logging in remotely.  `ssh home` is much nicer than `ssh -i ~/my_id my_user@long_ip`
<nizar> i didnt know that
<rdw200169> nizar: yup, for setting it up, just read the man page for ssh_config
<rdw200169> nizar: the only file you should need to edit for your user is ~/.ssh/config
<rdw200169> nizar: likewise, just search for PasswordAuthentication in the manpage for sshd_config
<rdw200169> nizar: make your life even easier and install keychain (http://www.funtoo.org/Keychain)
<nizar> awesome, thanks
<rdw200169> nizar: and when you generate your key for your remote hosts, *put a password on your private key* and then use keychain to make it such that you only ever have to put in your key's password once in a while
<rdw200169> nizar: obviously, using keychain has some of those 'but what if... situations since ssh-agent is caching your password' but that is a decision for you to make ;)
<greppy> rdw200169: actually, it's caching the key data, not the password.
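The pieces rdw200169 walks nizar through, collected into one sketch (host names, the port, and the user are placeholders). AllowUsers, which answers nizar's original user/group question, and PasswordAuthentication are both standard sshd_config directives:

```shell
# on the laptop: generate a passphrase-protected key and install it on the server
ssh-keygen -t rsa -b 4096 -f ~/.ssh/home_id
ssh-copy-id -i ~/.ssh/home_id.pub my_user@my.home.example

# ~/.ssh/config on the laptop, so that "ssh home" just works:
#   Host home
#       HostName my.home.example
#       Port 2222
#       User my_user
#       IdentityFile ~/.ssh/home_id

# /etc/ssh/sshd_config on the server, once key login is confirmed working:
#   PasswordAuthentication no
#   AllowUsers my_user
```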
<Douug> guys, whats the best way to do the ubuntu server CD upgrade
<Douug> i.e alternative upgrade ?
<subman> I don't seem to be able to get my sendmail to work.  I get the following error email when I do try to send an email:  550 Access denied - Invalid HELO name (See RFC2821 4.1.1.1) (in reply to  MAIL FROM command)
<ikonia> subman: looks like you don't have a sane hostname
<subman> Isn't it just server.example.com?
<subman> Or whatever my server is?
<subman> Here is the full error:  <keithclark@waterloosubstop.com>: host waterloosubstop.com[198.91.89.227] said:
<subman>     550 Access denied - Invalid HELO name (See RFC2821 4.1.1.1) (in reply to
<subman>     MAIL FROM command)
<ikonia> ok, so you don't meet RFC 2821
<ikonia> check out what that is
<ikonia> I suspect it's a resolvable FQDN
<subman> I have no idea what that means
<ikonia> check the RFC
<ikonia> but I suspect it means you require a valid FQDN
<ikonia> (without me checking)
<subman> Sorry, I still have no idea here.
<subman> Ok, that part is solved.  I just put an extra '.' at the end of the server url.
<subman> Now I get: (reason: 554 No relaying allowed - psmtp)
<Douug> are you using a smart host?
<Douug> use your ISP's smtp server
<Douug> postconf -e relayhost=YOUR.ISP.FQDN
<subman> I think that is corrected but now:
<subman> <<< 550 Access denied - Invalid HELO name (See RFC2821 4.1.1.1)
<subman> 554 5.0.0 Service unavailable
<Douug> what is the banner in your config
<Douug> sorry myhostname ??
<subman> Ok, I'm new here.  What do you mean banner?
<subman> What config file?
<Douug> the postfix config file -> /etc/postfix/main.cf
<subman> Should $myhostname be my actual smtp hostname?
<Douug> no .. that is a variable picked up by the option lower in the file called "myhostname="
<subman> myhostname is set to my servers name
<Douug> the host hostname exist in DNS ?
<subman> ?
<subman> I don't know what you are asking me
<Patrickdk_> you shouldn't be running a server then, let along email
<Douug> does your server hostname resolve to an IP address on the internet?
<Patrickdk_> your hostname should resolve in dns, and your ip should resolve to that same hostname
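The two settings this thread keeps circling, as a /etc/postfix/main.cf fragment (a sketch; both names are placeholders):

```
myhostname = mail.example.com        # must be a FQDN that resolves in DNS
relayhost = smtp.isp.example         # optional: hand outbound mail to the ISP's smarthost
```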
<rdw200169> greppy: fair enough; i'm reasonably certain that ssh-agent is 'safe', but didn't feel like googling the specifics ;)
<greppy> rdw200169: it mostly is, but, as with anything that lives in memory, root COULD do something evil with it.  But if you don't trust root you are already boned :)
<Quest> just a simple question but I need to be sure. If a server has, say, a 1 Mbps internet connection, and it's pinged (the traditional way, like "ping (data size) google.com") by a total (or one thread of ping data) of 1 Mbps, will the normal traffic of that server (web / voip etc) be affected in any way?
<LeinardoSmith> hello does anyone have experience using samba and cups to serve windows printer drivers?
<LeinardoSmith> I am getting a WERR_ACCESS_DENIED error when I try to run "sudo cupsaddsmb -v -a"
#ubuntu-server 2013-09-01
<dragun0v> general question: IS the ssh -f switch (where ssh goes into background before cmd exe)... is that similar in concept to running processes in background using ctrl+z and bg or using screen?
<LeinardoSmith> anyone still on here?
<LeinardoSmith> I need help with cupsaddsmb
<LeinardoSmith> anyone?
<eagles0513875_> good morning everyone where does ubuntu install squirrelmail by default on one's system
<eagles0513875_> reason im asking is im wanting to install a plugin but cant seem to find where its installed on my system
<Antar> Hi
<Antar> guys anyway to display php errors on Lamp server
<Antar> please ......
<bekks> Enable php error logging.
<Antar> bekks
<Antar> where and how
<bekks> Which ubuntu do you use?
<Antar> 12.04.04
<Antar> sorry 12.04.03
<bekks> Look at the php.ini used by your apache webserver, look for display_errors and error_log.
<Antar> the php ini fie have been changed
<ice9> Is it safe to use the auto generated SSL certificates during the installation of packages like Dovecot or Postfix?
<andol> ice9: Save, from what?
<andol> s/save/safe/
<bekks> Antar: Then look at the changes.
<ice9> andol, I mean are those certs generated during the installation, or are they already created, shipped inside the installer, and then copied and used?
<Antar> bekks what should i change ?
<Antar> which param ?
<bekks> Antar: I just told you.
<Antar> display_errors to ..
<bekks> Antar: Please read the documentation of that paramater :)
<Antar> ; display_errors ;   Default Value: On ;   Development Value: On ;   Production Value: Off
<andol> ice9: A quick look suggests that the dovecot ones are generated during install, while the postfix (snakeoil) ones are copied
<bekks> Antar: So you know the possible values now.
<ice9> andol, where did you find this please?
<Antar> Exactly
<Antar> what should i do
<bekks> Antar: You have to tell us the current value...
<andol> ice9: The keyword being a quick look, still not entirely sure on the postfix ones. Checked it by downloading the source packages and looking inside (apt-get source dovecot, apt-get source postfix)
<Antar> current value is off
<bekks> Antar: Then read the meaning of the possible values again.
<Antar> bekks cant understand
<bekks> Antar: http://php.net/manual/en/errorfunc.configuration.php
<Antar> so instead of on ... 1
<Antar> ??
<bekks> Antar: No,
<Antar> ??
<Antar> can you give me a preconfigured php.ini file =
<Antar> '
<bekks> Antar: Boolean values are On/Off, true/false, 0/1 -- if the current value is Off and you want to _display_ the values... how about chaning it to "on"?
<bekks> *changing
<Antar> bekks Yeah exactly what i did
<bekks> Antar: Then check the location of the error log now and configure it to your needs.
<eagles0513875_> Antar: you probably want to restart the web server for your changes to take effect
<eagles0513875_> bekks: Antar probably isnt restarting their webserver
<bekks> eagles0513875_: I know - he isnt yet done with the config.
<eagles0513875_> oh ok
<eagles0513875_> my bad :(
<bekks> But he knows everything he needs now :)
<Antar> eagles0512875 doyou mean service apache2 restart ?
<eagles0513875_> i do have an interesting question if one has web sites or web apps what is the advantage of using /usr/share instead of /var/www
<bekks> Antar: First, check the location of the error log.
<eagles0513875_> Antar: yes sudo service apache2 restart unless you are ina  root prompt
<ikonia> eagles0513875_: there is no advantage - it's a file system, nothing more
<eagles0513875_> ikonia: ok, though i find it a bit strange that when installing from the package, ubuntu puts squirrelmail in /usr/share instead of /var/www
<Antar> Yeah
<Antar> it is still off
<ikonia> eagles0513875_: why is that strange ?
<eagles0513875_> seeing as squirrelmail is web based you would think it would be put by default in /var/www
<eagles0513875_> Antar: did you save any changes you made
<eagles0513875_> and what editor are you using
<Antar> Yeah of course
<ikonia> eagles0513875_: bravo on once again reading nothing on the package before installing and asking question
<Antar> nano
<ikonia> eagles0513875_: let bekks finish helping
<Antar> or gedit
<bekks> Antar: So which php.ini file did you edit?
<ikonia> eagles0513875_: the reason it puts things in /usr/share is so that other sites (as ubuntu uses virtual name-based hosts by default) can use it
<Antar> it is /etc/php5/apache2/php.ini
<eagles0513875_> ikonia: ok are you referring to vhost's like apache uses?
<eagles0513875_> or am i misunderstanding the concept here
<ikonia> eagles0513875_: if you look at the package it actually creates a redirect to use /usr/share/squirrelmail to reference those files
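The redirect ikonia mentions is an Apache alias shipped with the package; on Debian/Ubuntu it looks roughly like this (a sketch from memory, check the conf file the package actually installs):

```
# /etc/squirrelmail/apache.conf (sketch)
Alias /squirrelmail /usr/share/squirrelmail

<Directory /usr/share/squirrelmail>
    Options FollowSymLinks
</Directory>
```

Because it is an Alias rather than a DocumentRoot, any virtual host that includes this conf can serve squirrelmail from the shared location.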
<eagles0513875_> ahh ok :) got it now
<ikonia> eagles0513875_: like apache uses ??? it IS apache.....do you not know what webserver you are using ?
<bekks> Antar: And thats the php.ini file your apache is using?
<eagles0513875_> i do, you mentioned virtual name based hosts, i wasnt sure if that was something internal to ubuntu or to the web server
<Antar> i dont know but a lot said that
<Antar> is there another
 * eagles0513875_ goes into silent mode
<ikonia> virtual name based hosts, is an apache parameter
<bekks> Antar: Of course. Thats why I said: look at the php.ini file your apache webserver is using.
<Antar> ok
<eagles0513875_> ok thanks for clearing that up for me ikonia  ill be honest i havent gone into great detail in terms of the inner workings of apache hence why i wasnt sure what you meant
<bekks> Antar: Create a php file with phpinfo() in it to see where the configuration used by apache is located.
<eagles0513875_> bekks: seems like he quit or closed out of the web chat
<baswazz_> hdparm (spindown) does not work anymore. I installed ubuntu 12.04.3 lts edited the /etc/hdparm/conf removed the # from the line that say spindown_time = 120 (10 min) reboot and it did work fine. After installing some packages (tvheadend, oscam-svn, zfsonlinux, sabnzbdplus) i rebooted again. Only the drives do NOT spindown anymore. If i do 'sudo hdparm -y /dev/sdb for example the drive spins down.
<baswazz_> anyone who could help me out with hdparm
<ice9> How to determine the number of threads of a running process?
<patdk-lap> I normally just guess
<patdk-lap> you want to try 'man ps'
<patdk-lap> likely the T and m options
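Two concrete ways to answer ice9's question on Linux (using the current shell's PID as an example; substitute the PID of the process you care about):

```shell
# Count threads of a running process (Linux).
pid=$$                                  # example: the current shell
ps -o nlwp= -p "$pid"                   # NLWP = number of light-weight processes (threads)
grep '^Threads:' "/proc/$pid/status"    # same number, straight from /proc
```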
<Kaelten> Anyone know if Apache 2.4 got removed from the 12.04 repo?  IT looks like it has
<RoyK> didn't know it was there in the first place
<Kaelten> it was, had to adapt a bunch of scripts to it as a result
<Kaelten> and now it's not there which means it's poof
<RoyK> https://launchpad.net/ubuntu/+ppas?name_filter=apache2
<qman___> it most definitely was not, it is in the upcoming 13.10 though
<qman___> if you had it you got it from a third party source
<g0tcha> hey guys, im having some issues with a new install of an ubuntu server, i can ping an ipv4 internet address without any issues but i cannot resolve hostnames
<g0tcha> i have static ip setup and i tried with both my isp and google's dns servers and its still the same.. anyone can help me out with this maybe?
<g0tcha> server 12.04
<RoyK> g0tcha: possibly wrong dns resolver config?
<g0tcha> if i try 'dig @8.8.8.8 google.com'  it resolves fine
<RoyK> pastebin /etc/resolv.conf
<RoyK> and /etc/network/interfaces
<g0tcha> i thought in ubuntu server 12.04, i dont need to edit /etc/resolv.conf anymore?
<RoyK> g0tcha: no, it's generated by dhcpcd or /etc/network/interfaces contents
<g0tcha> this is my /etc/network/interfaces http://paste.ubuntu.com/6052744/
<RoyK> you probably won't reach that ipv6 dns server without an ipv6 address ;)
<Kaelten> qman__: hrm, it's possible we're using several, none targeted at apache2 but you know how that goes
<g0tcha> RoyK, i have ipv6 from my isp thats active, so i added google's ipv4 and ipv6 nameservers, is that wrong?
<RoyK> try to remove the ipv6 dns server and try with ipv4 only
<RoyK> then - if that works - you know where the error is
<g0tcha> RoyK, just tried, still does not resolve
<RoyK> what's in resolv.conf?
<qman__> cat /etc/resolv.conf, does it contain what you expect?
<g0tcha> no, its empty
<g0tcha> it says there in caps not to edit that file manually
<RoyK> then your system can't resolve anything
<qman__> sounds like resolvconf hasn't updated it
<qman__> not sure how to kick it manually, but a service networking restart will probably do it
<g0tcha> qman__, i restart the service each time i do a change to the network interface with no change
<g0tcha> i added it manually and it works fine now
<g0tcha> im gonna try to reboot and see if the changes stick
<g0tcha> nope.. cant resolve a hostname after a reboot :/
<qman__> oh, I just caught it
<qman__> you typoed nameservers in your interfaces
<qman__> missing the e
<qman__> g0tcha, dns-namservers -> dns-nameservers
<g0tcha> damn.. good eye dude
<g0tcha> that did it
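For anyone hitting the same symptom, the fixed stanza looks like this (addresses are examples; the fix is the correctly spelled `dns-nameservers`, which resolvconf reads to generate /etc/resolv.conf):

```
# /etc/network/interfaces (example addresses)
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4
```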
<mojtaba> Hi, Is there anyway to encrypt all the emails in the inbox? I know how can I send encrypted emails using public key with enigmail and Thunderbird. I am using IMAP and I would like to have all of my emails encrypted on the server.
<Patrickdk_> is there a point? the email just passed through the open internet, without encryption, everyone on the earth could have read it
<mojtaba> Patrickdk_: I did not use encryption before and now I just want to encrypt all of my emails in the server. (I think there could be a way to encrypt them locally and then synchronize it on the server.)
#ubuntu-server 2014-08-25
<qman__> I'm trying to add NIS extensions to my samba4 domain, following the instructions here: https://wiki.samba.org/index.php/Using_RFC2307_on_a_Samba_DC -- Checking my environment, I don't have them installed, and it says to use ypServ30.ldif to add them. However, this file does not exist anywhere on my system. Running Ubuntu 12.04 with Samba 4.0.0~alpha18.dfsg1-4ubuntu2
<fridaynext> here's my /etc/exports, but when I try to connect to my shares from OSX 10.9, it's slow to connect. Any idea why? http://pastebin.com/raw.php?i=9hPvZDgX
<fridaynext> I have "rw" in my nfs share, but I can't write to the share - how do I enable the ability to write to it?
<qman__> fridaynext: file permissions, most likely - that's a really bad configuration from a security standpoint
<fridaynext> qman__: do you have any links to tutorials about how to increase the security without spending four hours reading the man pages?
<qman__> it's very simple
<qman__> 192.168.1.0/255.255.255.0(ro,async,no_subtree_check)
<qman__> is what I have on mine
<pfsense_rookie> huh..
<fridaynext> qman__: and how is mine less secure than yours?
<qman__> to make it work, your UIDs have to match on all affected systems
<qman__> and the users have to have permissions to the files
<fridaynext> that's why i set the uid to 1000, since I can't change my UID on OSX.
<qman__> fridaynext: yours turns off all the security features that are built in, allows root and guests to write to anything, etc
<fridaynext> I want to be able to write to all the folders I connect to - but I can't seem to make that happen without adding all the extra variables.
<qman__> change your UID on the linux side then, or add a specific user for it
<qman__> then set the ownership and permission on the files
<fridaynext> I tried doing that with groups before, but I guess GID is not enough...
<qman__> NFS is designed to be used within a unified environment, where everyone authenticates to a central authority, UIDs are synced, etc
<qman__> without one, security is impossible
<qman__> but allowing root and guests is a particularly bad idea
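Side by side, the kind of export qman__ uses versus the permissive kind he is warning against (the path is hypothetical; options are standard exports(5) flags):

```
# /etc/exports -- restrictive: read-only, remote root squashed by default
/srv/share  192.168.1.0/255.255.255.0(ro,async,no_subtree_check)

# permissive -- avoid: no_root_squash lets remote root write as root,
# insecure accepts requests from unprivileged source ports
/srv/share  192.168.1.0/255.255.255.0(rw,no_root_squash,insecure,no_subtree_check)
```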
<fridaynext> gotcha - i'm making those changes now to see if it works.
<fridaynext> qman__: so since you have 'ro' in yours - is that read only for even your UID?
<qman__> yes, for that one, I have an rw one too
<qman__> it's the same except for that
<qman__> I also have a samba share on the same location as the rw nfs share, all the files have enforced group-ownership through several methods, and all the share users are a member of that group
<fridaynext> see with your settings, I can't connect to the NFS share. Even with creating a user on my *nix box with the same UID as my OSX user.
<qman__> the user has to have at minimum read permission on the directory you're sharing, write to write to it
<qman__> and execute on directories to list them
<fridaynext> the user does.
<fridaynext> the 'casey' folder is the one i'm trying to share: http://pastebin.com/raw.php?i=SJZwXUZa
<fridaynext> and my nfs export line is: /media/raid/tm_backup/casey	192.168.0.0/24(rw,async,no_subtree_check)
<qman__> ok
<fridaynext> I gave 'node304' the same UID as my OSX user I'm logged in as.
<qman__> trying to find where the log is stored
<fridaynext> I can't find it either...
<qman__> also, are you running nfs-kernel-server?
<fridaynext> qman__: ye
<fridaynext> s
<qman__> ok
<qman__> try changing to /255.255.255.0 instead of /24, just to test
<qman__> I'm not certain if it supports CIDR or not
<fridaynext> i can connect to another of my shares with /24, but I'll test this anyway.
<qman__> ok
<qman__> what error, if any, do you get trying to mount it?
<fridaynext> https://lh5.googleusercontent.com/--VrYk3lnOYg/U_qP9qN8hCI/AAAAAAAAHDk/UbT4XBCdjok/s800/Screen%2520Shot%25202014-08-24%2520at%25208.22.44%2520PM.png
<fridaynext> that error message doesn't help much
<qman__> try this on the server: sudo mkdir -p /tmp/nfsmount; sudo mount -t nfs 192.168.0.X:/media/raid/tm_backup/casey /tmp/nfsmount
<qman__> replacing X with your server's IP
<qman__> if it mounts that at least means the NFS configuration works, and points us to a network issue or a mac specific issue
<fridaynext> just issued that command - it's hanging.
<fridaynext> nope - did it. looks like it's mounted.
<fridaynext> 192.168.0.100:/media/raid/tm_backup/casey 5858043904 1247198208 4610845696  22% /tmp/nfsmount
<qman__> ok
<qman__> try the same thing on the mac
<qman__> from the terminal
<fridaynext> "Operation not permitted"
<fridaynext> http://pastebin.com/raw.php?i=yfZh4n9k
<qman__> try this: sudo mount -t nfs -o resvport,rw 192.168.0.X:/media/raid/tm_backup/casey /tmp/nfsmount
<fridaynext> okay, that worked nearly instantly.
<fridaynext> so now I need to know how to do that via the Finder.
<fridaynext> where'd you find that fix?
<qman__> ok, that tells us exactly what the issue is
<fridaynext> nice! what is it?
<qman__> that option tells the mac to use a reserved port (sub 1024)
<qman__> so we either need to tell finder to do that, or tell linux it's okay if you don't
<fridaynext> I wonder if I can add the port to my finder connect comand
<fridaynext> actually, I don't think I can do that in my finder command.
<fridaynext> so what's the option in /etc/exports to remove the port specificity?
<qman__> insecure
<qman__> add that to the list and it should work
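Combined with fridaynext's earlier export line, the result would be something like:

```
# /etc/exports -- insecure allows client source ports >= 1024 (what OSX Finder uses)
/media/raid/tm_backup/casey  192.168.0.0/24(rw,async,no_subtree_check,insecure)
```

followed by re-exporting with `sudo exportfs -ra`.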
<fridaynext> Ah, I did that once before, and it caused the Finder to freeze up.
<fridaynext> I just need some program that runs scripts on startup for OSX, then I can avoid this Finder nonsense.
<qman__> you can configure it to auto mount I think
<fridaynext> Yeah, finder has frozen again (once I've added 'insecure' to the list)
<qman__> http://apple.stackexchange.com/questions/102092/mount-linux-nfs-share-on-mountain-lion-10-8-5
<fridaynext> ah - it worked with the finder after about 40 seconds
<fridaynext> super slow.
<qman__> finder might be trying to do something silly
<fridaynext> i'm sure it is.
<fridaynext> you've been a great help, qman__!
<fridaynext> although, I don't have that same file that the poster is talking about on that page.
<fridaynext> no /etc/auto_nfs in my 10.9 build.
<qman__> this may or may not help: http://www.techrepublic.com/blog/apple-in-the-enterprise/mounting-nfs-volumes-in-os-x/
<qman__> unfortunately this is getting pretty deep into mac-specific territory I'm not familiar with
<fridaynext> yeah, that article was written back when you could mount NFS with disk utility.  Apple removed that option now.
<qman__> heh
<fridaynext> yup
<qman__> well, I think there's a feature of launchd to make stuff run at startup that could be scripted, but that's getting pretty deep for a filesystem mount
<fridaynext> oh - that did seem to work
<fridaynext> I just wasn't looking in the right place.
<qman__> for the performance problem, it could either be using poorly optimized defaults for the block sizes and such, or it could be trying to do something silly having to do with the apple-specific dotfiles
<fridaynext> oh i have great performance on NFS
<fridaynext> 100-110MB/s
<fridaynext> pretty much maxing out my RAID5 array
<fridaynext> Does that guy add too many attributes to the connect? the stackexchange post?
<fridaynext> or do those not affect the security?
<qman__> you don't need resvport since you changed the server side to be ok with it
<qman__> the rest are performance-related
<qman__> you can view what each one does in the nfs man page
<qman__> he is missing the 8 on the rsize
<qman__> they're supposed to be the same number in that example
<qman__> while I'm not sure how it behaves on a mac, using "soft" on a linux client is important especially if your network isn't perfectly reliable; the default "hard" will cause any process with a file open on the share, should the share drop off the network, to freeze until it comes back
<qman__> with soft, it just generates an error and allows execution to continue
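On a Linux client, the fstab entry qman__ is describing would look roughly like this (server path from the discussion; the timeo/retrans values are illustrative, see nfs(5)):

```
# /etc/fstab -- soft instead of the default hard, so a vanished server
# returns an I/O error rather than freezing processes with open files
192.168.0.100:/media/raid/tm_backup/casey  /mnt/casey  nfs  rw,soft,timeo=30,retrans=3  0  0
```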
<fridaynext> good god, now I can't get it to work.
<apb1963> I've NFS exported my root filesystem (from host/server blue to client orange), which of course includes /mnt/  which itself mounts a secondary drive; i.e. it's a different filesystem than /  So orange sees the mount as /mnt/blue/. Although I'm able to see the directory tree under /mnt/blue/mnt/ which has additional subdirs (i.e. /mnt/blue/mnt/a/b/c/d/e/f/g/h/) the files that should be under ../h/ are not there.  server is ubuntu 12.04, client is
<apb1963> ubuntu 14.04
<sarkis> any advice on whether to use ufw or just use iptables directly?
<qman__> sarkis: either is fine, it's up to your preference
<qman__> sarkis: ufw does a lot of things for you but can't handle every configuration possibility
<sarkis> ya, i lean toward using iptables so if i ever switch distros i don't rely on an ubuntu-only thing
<qman__> personally I prefer to write a shell script that does all the iptables stuff and put it in if-up.d
<qman__> for anything complicated, at least
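A minimal sketch of the script-in-if-up.d approach qman__ describes (the rules here are placeholders; a real script would carry your full ruleset, and `$IFACE` is set by ifupdown when the hook runs):

```
#!/bin/sh
# /etc/network/if-up.d/firewall -- re-applied each time the interface comes up
[ "$IFACE" = "eth0" ] || exit 0    # only fire for the external interface

iptables -F INPUT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -P INPUT DROP
```

Remember to make the hook executable (`chmod +x`), or ifupdown will silently skip it.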
<fridaynext> qman__: well, I can at least mount my NFS share - even though I can't get it to automount yet - but that's all I need for now, so I Can start taking backups of this machine.
<fridaynext> Thanks for all your help tonight!
<superspring> hey all, is there a way to get a list of which security updates are available for a server with aptitude or similar?
<pmatulis> superspring: i guess one cheesy way would be to enable only the security pocket and see what comes down
<superspring> pmatulis: i see, no one-liner commands to do it? :p
<pmatulis> superspring: there probably are
<pmatulis> superspring: hint, all security updates will have a '.#' tacked on to its version number
<pmatulis> superspring: the kernel is at least one exception to that rule
<pmatulis> as each kernel is its own package, not an upgrade
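One rough one-liner for superspring's question (this assumes the candidate versions come from the -security pocket, so the simulated upgrade output mentions it; it is a heuristic, not an official interface):

```
# dry-run the upgrade and keep only packages pulled from a -security source
apt-get -s dist-upgrade | grep "^Inst" | grep -i security
```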
<trijntje> Hi all, ubuntu server running on the local network keeps losing ethernet connection, I've put the output of ping on pastebin. http://pastebin.com/WGe0Eq0Q How can I fix this?
<lordievader> trijntje: Run a packet capture and see what happens?
<trijntje> lordievader: how would I do that? I dont have a lot of experience debugging network problems
<lordievader> trijntje: "sudo tcpdump -i eth0" Hopefully it doesn't grow too large before it happens. Wireshark doesn't like large files.
<trijntje> lordievader: it only happens when I'm using torrents, so it might grow pretty fast
<lordievader> trijntje: Ah, then the problem is likely that you open too many TCP connections and DOS your router.
<lordievader> trijntje: Tell your client not to open a million tcp connections ;)
<trijntje> lordievader: I thought that as well, but the router is fine and the other pc's attached to the router are also fine
<lordievader> Still, limit the number of connections and see if the problem persists.
<trijntje> lordievader: I thought I did, this is the relevant part of my rtorrent config http://pastebin.com/br8Fa08d
<trijntje> also, netstat shows that rtorrent only has 16 open connections
<lordievader> trijntje: Hmm, that should be fine indeed. Carry on with the packet capture ;)
<trijntje> ok, I added a torrent and its dropped of the network now
<trijntje> I can still connect to the router and other pc's on the network
<trijntje> lordievader: ok, its back online now and I have the tcpdump file, what should I do next?
<lordievader> trijntje: Analyse it with Wireshark. See if there is something fishy going on.
<trijntje> lordievader: "tcpdump.txt isn't a capture file in a format that wireshark understands"
<trijntje> hm, looks like I have to rerun it with the -w flag to make it readable for wireshark, i'll have to capture it again
<trijntje> thanks for your help so far, i'll be back when I got the data
<lordievader> trijntje: Oh, right. I always forget that -w flag.. Sorry 'bout that.
<trijntje> lordievader: no problem, it's not an acute problem, just annoying
<lordievader> trijntje: I get that ;)
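For the record, the capture-to-file workflow they settled on (interface name is an example; add a capture filter to keep torrent traffic from ballooning the file):

```
sudo tcpdump -i eth0 -w capture.pcap     # -w writes raw packets, not text
sudo tcpdump -r capture.pcap | head      # quick sanity check from the CLI
# then open capture.pcap in Wireshark for analysis
```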
<Vladimir_> are there any microsoft office irc channels?
<soren> Vladimir_: If there are, we'd be some of the least likely to know about them.
<Vladimir_> soren: thanks, no problem=) hehe
<SrRaven-work> so, lets say I have two servers running Ubuntu Server, can I simply copy all relevant postfix files to the new machine ?
<lamont> SrRaven-work: depends on what  your intent is.  in anycase, make sure that they don't ultimately have the same answer for $myhostname
<lamont> though it defaults to /etc/mailname, so you should be ok
<mdev> is enabling automatic update installs safe/best security practice
<mdev> am using 14.04.1 lts
<mdev> nevermind, bad packages can wreak havoc, so i'll do it manually
<Vladimir_> Is there anyone who use Rancid ?
<Vladimir_> Maybe can help me set it up, I'm having alot of errors
<Vladimir_> I get one problem with Rancid, when I run clogin to a switch it says that the .cloginrc file does not exist in the /home/rancid/.cloginrc directory. Why does it look in that directory since I created that file in /var/lib/rancid/.cloginrc ?? very strange, how do I change where clogin looks for that file?
<_ruben> wonder if i can somehow combine bonding and mpio on the same set of nics (as in: bonding to have 1 management ip reachable over all nics, mpio to provide extra bandwidth for iscsi)
<pgar23> Vladimir_: is there a GUI for rancid? If so, look to settings
<Vladimir_> pgar23: no I have to get it to work before I can get web gui to work :/
<_ruben> seems setting fail_over_mac to active might work for me
<smoser> hallyn, when we move to systemd does that magically solve the issue with upstart and network namespaces ?
<smoser> without adding other issues?
<hallyn> smoser: i think so.  don't know enough about systemd
<patdk-wk> systemd not add other issues? :)
<smoser> patdk-wk, well, those will be magically solved
<smoser> with pixie dust
<patdk-wk> ah, my daughter is good with that, I'll ship her over
<sarnold> apb1963: what was the missing piece?
<darrenr> Is it possible to create a deployment server on a minibook?
<apb1963> sarnold: firmware
<sarnold> apb1963: groan.
<sarnold> apb1963: nice debugging :)
<apb1963> well, I can't really take the credit....  TP-Link support suggested it.
<apb1963> (after suggesting some other somewhat dumb things that might have made sense for some people, but it was clearly a checklist of items the support girl didn't really understand herself)
<apb1963> I have two new issues though :/
<apb1963> It just never ends....
<sarnold> apb1963: oh, nice. I guess it makes sense for them to include firmware update in their list of suggestions
<apb1963> well, that was last on the list
<apb1963> sarnold: but hey, I'm just glad it's working... better ;)
<sarnold> :)
<apb1963> sarnold: how's your NFS?
<sarnold> apb1963: heh, I haven't used nfs in a decade.
<apb1963> sarnold: yeah, same here... and now I need to
<apb1963> sarnold: unless of course you can think of a better solution
<sarnold> apb1963: well, depends what you're trying to do. the new hotnesses are iscsi or ceph, but I like the simplicity of NFS.
<apb1963> sarnold: never heard of those (this is why I ask these things).  I just want to share my filesystem with my other computer, same LAN.
<sarnold> apb1963: NFS is probably best then
<apb1963> sarnold:  if only it was working :/
<apb1963> sarnold: I've got the strangest problem.
<sarnold> not surprised :D
<sarnold> apb1963: firewalls on either system?
<apb1963> sarnold: yes... but I don't think that's the issue
<apb1963> sarnold: 103  7988 ACCEPT     all  --  any    any     192.168.0.0/24       anywhere
<apb1963> I've NFS exported my root filesystem (from host/server blue to client orange), which of course includes /mnt/  which itself mounts a secondary drive; i.e. it's a different filesystem than /  So orange sees the mount as /mnt/blue/. Although I'm able to see the directory tree under /mnt/blue/mnt/ which has additional subdirs (i.e. /mnt/blue/mnt/a/b/c/d/e/f/g/h/) the files that should be under ../h/ are not there.  It DOES find /mnt/blue/etc/foo; just
<apb1963> not /mnt/blue/mnt/files - server is ubuntu 12.04, client is ubuntu 14.04
<apb1963> As if that wasn't enough, at the moment the client is giving me: clnt_create: RPC: Program not registered when I do a showmount -e
<sarnold> apb1963: nfs export will only export that one filesystem; if you want clients to have submounts in the proper location, you've got to export that filesystem too, and mount that filesystem too
<sarnold> apb1963: program not registered is probably missing the rpc stats or whatever arranges the dynamic ports that nfsv1 loved so much
<apb1963> portmapper?
<apb1963> Actually I tried mounting just /mnt
<patdk-wk> well, programs that use portmapper
<patdk-wk> like lockd, statd, mountd, ...
<sarnold> ah, thanks
<apb1963> patdk-wk: that still begs the question.... how do I fix it?
<apb1963> using nfsv4 actually
<patdk-wk> nfs4 has lots more things
<patdk-wk> like id mapping and stuff
<patdk-wk> nfs4 is a whole nother level of difficulty, over nfs3
<apb1963> actually...  I don't care what version I use, as long as it works
<patdk-wk> and 10.04 doesn't support nfs4, I don't believe
<patdk-wk> I would be surprised
<apb1963> 12.04 here
<patdk-wk> I thought that was 12.04
<patdk-wk> oh, client is 14.04
<patdk-wk> thought that said 10.04
<apb1963> server 12.04, client 14.04
<patdk-wk> did you edit /etc/defaults/nfs*
<patdk-wk> to enable nfs4 stuff?
<apb1963> I did not
<patdk-wk> configured idmapper
<patdk-wk> and all that other nfs4 config info
<apb1963> nfs-common    and    nfs-kernel-server
<patdk-wk> and /mnt/blue isn't valid for nfs4
<apb1963> I took a look at them last night... I didn't see anything that needed changing
<patdk-wk> all nfs4 exports must happen from the single nfs4 export mount
<patdk-wk> well, last I checked
<patdk-wk> idmapper and gss aren't enabled by default
<patdk-wk> and idmapper is required for nfs4
<patdk-wk> gss is for kerberos only I think
<patdk-wk> though if your doing nfs4, you should be using kerberos
<apb1963> yes, gss was causing a problem... I blacklisted the  module
<patdk-wk> otherwise, there is not much point of using nfs4 :)
<apb1963> ah
<apb1963> well, it's just on my local lan
<apb1963> so let me go back to nfs3 since I don't need the new features
<sarnold> oh I thought nfsv4 brought tcp support which was nice because tcp
<patdk-wk> no
<patdk-wk> that was nfs3
<patdk-wk> nfs4 brings user authenication and mapping
<patdk-wk> though, user authenication requires kerberos
<patdk-wk> and usermapping is still broken
<patdk-wk> so your usernames and user id's need to match
<apb1963> whoops... I already did go back to nfs3 earlier :)
<patdk-wk> sarnold I switched to tcp for netapp
<patdk-wk> now I'm not on netapp I switched back to udp, and got more benifits
<sarnold> patdk-wk: oh, what do you prefer about the udp service?
<patdk-wk> it doesn't have all the performance issues tcp has :)
<apb1963> my /etc/exports:     /         192.168.0.101(rw,sync,no_root_squash,no_subtree_check,fsid=0)
<patdk-wk> tcp is nice, if your not staying on a local network, or if you have dropped packets
<apb1963> I guess I should lose the fsid=0 if I'm going back to v3
<apb1963> patdk-wk: so there's really nothing to do on the client other than mount the fs, is there?
<patdk-wk> and install the nfs-common package
<patdk-wk> yep
<patdk-wk> specify vers=3 as a mount option
<apb1963> 192.168.0.12:/  /mnt/blue   nfs      auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0
<apb1963> root       407     2  0 00:37 ?        00:00:00 [rpciod]
<apb1963> root       415     1  0 00:37 ?        00:00:00 rpc.idmapd
<apb1963> root       635     1  0 00:37 ?        00:00:00 rpcbind
<apb1963> statd      709     1  0 00:37 ?        00:00:00 rpc.statd -L
<patdk-wk> seriously? your exporting root?
<apb1963> it's local LAN
<patdk-wk> still, it's odd :)
<apb1963> it's just (in theory) easiest
<apb1963> I didn't want to mess around with exporting one by one what I might need later.
<apb1963> one export... done.
<apb1963> again... in theory.
<apb1963> so do those look like the proper set of processes?
<patdk-wk> for what?
<patdk-wk> client or server?
<apb1963> client
<patdk-wk> same as mine
<apb1963> and the server...
<apb1963> root     14707     1  0 01:38 ?        00:00:00 /usr/sbin/rpc.mountd --manage-gids
<apb1963> root     14826     1  0 01:38 ?        00:00:00 rpcbind -w
<apb1963> statd    14840     1  0 01:38 ?        00:00:00 rpc.statd -L
<apb1963> root     16293     2  0 Aug24 ?        00:00:00 [rpciod]
<apb1963> root     16303     1  0 Aug24 ?        00:00:00 rpc.idmapd
<patdk-wk> well, I don't have a 12.04 nfs server anymore
<patdk-wk> but it seems right
<patdk-wk> no firewalls?
<apb1963> on the server yes
<patdk-wk> and everything is allowed for nfs?
<patdk-wk> or for the client?
<patdk-wk> you know what ports those rpc processes are using?
<apb1963> 53  4250 ACCEPT     all  --  any    any     192.168.0.0/24       anywhere
<patdk-wk> that line doesn't mean anything
<apb1963> it doesn't?
<patdk-wk> what table is it in?
<patdk-wk> without context, no
<apb1963> iptable
<patdk-wk> iptable is a program
<apb1963> INPUT table
<apb1963> well that's interesting.... showmount -e with a hostname works... but without gives me clnt_create: RPC: Program not registered
<apb1963> so I guess... it's not an issue to worry about maybe
<apb1963> that's from the client
<apb1963> the server just hangs with the same command.   I guess I'll chalk it up to different versions.  None of that is my real problem... just thought maybe it might provide a clue.
 * apb1963 puts away his red herring
<patdk-wk> you did do a, showmount -e 192.168.0.12
<patdk-wk> showmount -e works for me, on 14.04 and 12.04
<apb1963> patdk-wk: it's all very strange.  I can see /mnt/blue/mnt/swapfile but /mnt/blue/mnt/backintime/whatever doesn't exist
<apb1963> well I did once I realized it needed a hostname
<patdk-wk> did you cross filesystems?
<patdk-wk> nfs doesn't support crossing filesystems
<apb1963> the server has /mnt/backintime mounted
<apb1963> its a different disk
<patdk-wk> but you exported /
<patdk-wk> not /mnt/backintime
<apb1963> right
<patdk-wk> nfs depends on inodes to keep things straight
<patdk-wk> when you cross filesystems, inodes get all screwy
<patdk-wk> and things don't work
<patdk-wk> this is why nfs4 has the fsid option
<patdk-wk> and cause it only has 1 export
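Concretely, the fix patdk-wk is pointing at is to export the second filesystem in its own right and mount it separately on the client (paths from the discussion; `crossmnt` on the parent export is an alternative on servers that support it):

```
# /etc/exports on the server (blue)
/                192.168.0.101(rw,sync,no_root_squash,no_subtree_check)
/mnt/backintime  192.168.0.101(rw,sync,no_root_squash,no_subtree_check)

# on the client (orange)
# mount 192.168.0.12:/mnt/backintime /mnt/blue/mnt/backintime
```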
<xibalba> hey folks, i just upgraded to ubuntu 14 lts from 12 lts, i'm trying to start up one of my apps and its giving me the following. http://puu.sh/b7xda/2676f0628e.png
<xibalba> i suspect i need to downgrade perl from 5.18.2 to 5.14.0
<patdk-wk> xibalba, oviously you installed some non-ubuntu software on there
<patdk-wk> that is made for perl 5.14
<xibalba> correct, zimbra collaboration suite
<patdk-wk> so when you upgraded, and got perl 5.18, it's broken
<patdk-wk> can't help you there
<sarnold> why not just upgrade zimbra?
<xibalba> yup, can i downgrade perl?
<xibalba> i'm on the latest version...i think, double checking
<xibalba> yeah 8.0.7 is the version i'm on
<xibalba> and is the latest
<patdk-wk> never upgrade, till your software your using, supports it :)
<xibalba> well yeah i just made a stupid mistake
<xibalba> i wanted to try the do-release-upgrade, i hadn't tried it before and just jumped the gun
<xibalba> can i downgrade perl?
<patdk-wk> it's not supported
<patdk-wk> dunno what would happen
<patdk-wk> you could attempt it
<xibalba> i'd just have to compile and install
<patdk-wk> but most likely, the result is, your system will be unusable
<xibalba> fudge, i wish i had snapshotted this thing
 * xibalba kicks /me
<qman__> The takeaway: never upgrade without a rollback plan
<xibalba> yeah, i wouldn't in production
<xibalba> this is just my personal mail server
<dasjoe> You could upgrade to the Zimbra beta, it's available for 14.04
<xibalba> dasjoe, ty didn't see that
<apb1963> patdk-wk: sorry... phone rang
<apb1963> patdk-wk: well, I thought I had tried what you said.... but apparently I didn't do it right because I just tried it now and it's working.
<apb1963> patdk-wk: Thank you!!!
<xibalba> dasjoe, thank you! beta did the trick
<qman__> got lucky there, hopefully there's no show stopping bugs
<zzxc> Hey guys. I have kind of a weird issue. I created a AWS 14.04 instance. When I make a sudo call I get the message "sudo: unable to resolve host (IP Address)". Its only for sudo calls, and I haven't edited the hostfile. Anyone have any ideas?
<zzxc> hostname file*
<lordievader> zzxc: Is the hostname in /etc/hosts correct (the localhost entry I mean)
<lordievader> ?
<zzxc> Yep. 127.0.0.1 localhost then the subsequent ipv6 stuff as well
<zzxc> lordievader: Sorry for the delay. I had to kick off a build on another machine,
<lordievader> zzxc: Ok, what hostname is in /etc/hostname?
<zzxc> ip.170.30.xx.xx
<lordievader> zzxc: Hmm, strange hostname. Anyhow, add it after localhost in /etc/hosts (unless it is defined elsewhere in that file).
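The resulting top of /etc/hosts would look like this (hostname as zzxc reported it; the xx's are his redaction):

```
# /etc/hosts -- hostname appended to the localhost entry, per lordievader
127.0.0.1   localhost ip.170.30.xx.xx
```

An equally common Debian-style fix is a separate `127.0.1.1 <hostname>` line; either way, sudo can then resolve the machine's own name.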
<zzxc> lordievader: Its pretty common formatting for aws instances.
<zzxc> instance* One sec. I'm going to test if that worked.
<zzxc> Yeah. That worked.
<lordievader> zzxc: ;)
<zzxc> Funky. I've never had an instance where it didn't add that in the first place. Cool thank you lordievader.
<lordievader> zzxc: No problem ;)
<zzxc> Also the whole xx.xx at the end was just because I didn't want to post the machine's ip (though it is internal) on IRC.
<lordievader> zzxc: I know ;)
<zzxc> Figured you would. =]. Also on an unrelated note, I love vundle occasionally.
<delinquentme> so after I open up a few ports for some of my programs to use ... do I need to restart something to facilitate that?  $ sudo ufw allow 4506 ... then I check it with $nmap -v 127.0.0.1 -p 4505 ... and it says its closed
#ubuntu-server 2014-08-26
<sarnold> 4505 or 4506?
<sarnold> delinquentme: does netstat -lnp show something listening on 4505 or 4506?
<sarnold> delinquentme: .. if so, is it bound to localhost?
<delinquentme> sarnold, nope !
<sarnold> delinquentme: nmap can only tell if a port is open if a process is actively listening on that port
<delinquentme> actually i used $ nmap -v 104.130.135.79 -p 4505 to check it from a remote machine
<sarnold> delinquentme: shows closed for me too, but '22' for example is open because sshd is listening
<delinquentme> well there's a web server called zeroMQ which should be listening on ports 4505 and 4506
<delinquentme> and I ran $ sudo ufw allow 4505 && sudo ufw allow 4506
<sarnold> delinquentme: what does sudo netstat -lnp | grep :450  show?
<delinquentme> sarnold, nothing
<sarnold> delinquentme: okay, how about ps auxw | grep -i z  ? does it show zeromq running? that'll let you know if you need to figure out (a) why it isn't starting or (b) why it isn't listening on the ports you expect
<delinquentme> zeromq is not showing up
<delinquentme> damn.
<delinquentme> is there not a way to tell whether a port is open by itself?
<sarnold> delinquentme: nc -l 4505   on the host where you want to run zmq, then on another host, run "echo hi | nc <ip address> 4505"  -- if you see 'hi' in your first terminal, then the firewall lets the port through
<delinquentme> sarnold, check. I got "hi" on the first terminal
<delinquentme> so then it could be an issue that ZeroMQ is only running while it's needed (this is infrastructure provisioning software)
<sarnold> delinquentme: nice; if you want to make sure your firewall is okay, try again on another port, e.g. 4000 :)
<sarnold> delinquentme: no idea there. from what I know of zmq it seems very unlikely to me that it wouldn't be running all the time..
<delinquentme> sarnold, yeah cool trick !
<delinquentme> right now I'm trying to sort out why I'm getting so many hanging instance events on rackspace
<delinquentme> i.e. I ask for a node and it takes > 10 minutes for it to answer back as being up + provisioned
<sarnold> owwwwwww
<delinquentme> yeah I was thinking it was rackspace ... but it might have been that I failed to open up the right ports for the software source on the master
<delinquentme> ... but then I'm not sure why it ever even once provisioned
<rostam> Hi, I want to create a local repo of some Debian packages with their dependencies. What tool would be the best and easiest for this task? thanks
<sudormrf> hey guys, trying to figure out which video driver I need.  it is a bit of an odd question since I am running server, but I installed xbmc on my nuc and now xbmc is saying it needs a video driver.  was looking at this package, i965-va-driver.  that look right?
<sudormrf> the simple question, which driver do I need for this? 00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
<Patrickdk> intel :)
<sudormrf> Patrickdk: aye, but which?
<sudormrf> xorg isn't installed
<cfhowlett> sudormrf, this one I believe https://01.org/linuxgraphics/downloads
<Patrickdk> what does xorg have to do with graphics drivers?
<sudormrf> cfhowlett: is it in the repos?
<sudormrf> Patrickdk: nothing.  was just mentioning it in case it mattered :)
<Patrickdk> I never bothered
<Patrickdk> it *just works*
<sudormrf> Patrickdk: ?
<Patrickdk> sudormrf, /kernel/drivers/video/intelfb/intelfb.ko
<Patrickdk> that is the video driver for it
<Patrickdk> so if you see intelfb in lsmod, it's loaded
<sudormrf> Patrickdk: it is not loaded
<Patrickdk> well, what are you using?
<Patrickdk> is this a laptop?
<sudormrf> an intel NUC
<sudormrf> sorry, had to make some changes
<sudormrf> not sure where to go from here
<sudormrf> would prefer to load using apt
<lordievader> Good morning.
<abhishek> can you tell me why an ext4 partition displays 93GB when the actual size is 99GB
<abhishek> I can't understand the mathematics behind it
<sarnold> abhishek: echo "1024 * 1000 * 1000 * 99 / (1024 * 1024 * 1024)" | bc -l   :)
<abhishek> what is 1000 here
<abhishek> actually I've confirmed that it uses 1024 as the unit
<sarnold> that's what your drive maker calls a "megabyte" and "gigabyte" :)
<abhishek> I thought this was taken by some journaling features
<sarnold> overhead probably does account for some of that..
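The gap abhishek is asking about is mostly unit conversion, not journaling. A bash sketch of the arithmetic (decimal GB as drive makers count, binary GiB as the tools report; ext4's default 5% root-reserved blocks trim the usable figure a little further):

```shell
# 99 "GB" as sold = 99 * 10^9 bytes; df and friends count GiB (2^30 bytes).
bytes=$((99 * 1000 * 1000 * 1000))
gib=$((bytes / 1024 / 1024 / 1024))
echo "${gib} GiB"   # -> 92 GiB, before any filesystem overhead
```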
<kamilboratynski> Hello, I installed some containers.
<kamilboratynski> But I see some tun-tap cards that are not attached to containers.
<kamilboratynski> Suggestion...?
<pmatulis> morning
<Aison> does the order of the files in /etc/apache2/sites-enabled matter?
<Aison> somehow one of my virtual hosts is always ignored
<Aison> the others are working
<andol> Aison: The order matters in the regards that the first one will become the default vhost.
<andol> Assuming you are dealing with named vhosts.
<Aison> ok, this one starts with 000
<Aison> andol, yes, i'm dealing with named vhosts
<andol> Yepp, that 000 prefix is often used for the purpose of making sure the default really ends up being the default.
<Aison> looks like the hostname is not submitted to apache
<Aison> very strange
<Aison> 10.0.1.1 - - [26/Aug/2014:14:38:13 +0200] "GET /phpsysinfo.txt HTTP/1.1" 404 516 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36"
<Aison> "-" no hostname here
<RoyK> Aison: I've seen that happen, but somehow never understood how to fix it without disabling the "default" site. Try that first.
<patdk-wk> heh?
<RoyK> patdk-wk: heh.
<patdk-wk> hostname not being submitted when using http 1.1, is invalid, and will cause an error
<RoyK> Aison: can you do wireshark dump of that? hostname should be submitted, but apache may mess it up (or your config may)
<patdk-wk> and "-" != hostname, that is the referer, by default
<Aison> RoyK, right, when I disable the default, then everything works
<RoyK> Aison: bingo...
<patdk-wk> never had an issue with it myself
<RoyK> patdk-wk: I have
<patdk-wk> hmm, I need to figure out how to deal with the apache 2.2 -> 2.4 change
<Aison> RoyK, and how did you solve it finally?
<RoyK> Aison: removed default config :P
<patdk-wk> oh wait, you mean just remove 00-default right?
<patdk-wk> ya, I always do that
<patdk-wk> and make my own
<RoyK> yes
<RoyK> perhaps there's something in the catch-all logic there that catches too much?
<patdk-wk> think it depends on what you do
<patdk-wk> I found one server where I have the default enabled
<patdk-wk> on it, I use ip specific vhosts, virtualhost fullip:80 style
<LucidGuy> Ok, I have about 4-5 ubuntu servers mainly serving files/nfs.  I want to start monitoring their performance numbers, something straightforward.  Thinking of just installing Munin on each.  Recommendations?
<patdk-wk> so the ip overrides the default vhost
<patdk-wk> but if you make another *:80 vhost, you will have to specify servername and/or serveralias for it
<RoyK> patdk-wk: and when that doesn't work either, and it returns to the default, even if the new config is correct?
<patdk-wk> that, I haven't seen :)
<RoyK> that's what Aison is seeing ;)
<Aison> the funny thing is, I did not change anything in the configs
<Aison> this config has worked for over 2 years now
<Aison> suddenly it stopped working...
<RoyK> Aison: a glitch in the matrix? ;)
<Aison> all I did was reboot the server....
<Aison> so something happened
<Aison> does anybody know a good web file manager (maybe with ldap support)?
<Aison> damn, building my raid array takes hours.....
<Aison> already waiting 24h
<rostam> hi, are all the debian files located in the directory /var/cache/apt/archives installed on my system?
<lordievader> rostam: Not per se. It is a temporary place to store .deb files.
<lordievader> rostam: Apt places them there when they are downloaded.
<rostam> lordievader, oh I see thanks
<jamin> has anyone here successfully used the text debconf_ui frontend for oem-config on 14.04?
<jcastro> jamespage, I have an odd problem. The charm-tools in trusty is out of date
<jcastro> but the new one, in the PPA, depends on a new package, charmworldlib
<jcastro> which is not in trusty
<jcastro> is there any option at all for getting a new package in -updates, or are we pretty much doomed?
<thebwt1> Hey guys, what package provides ubuntu's sweet stats on login?
<lordievader> thebwt1: Seems to be 'update-notifier-common'.
<thebwt1> thanks lordievader
<YamakasY> anyone using preseed with the interfaces?
<YamakasY> It's overwriting my /etc/network/interfaces file every time
<thebwt1> lordievader: actually it's landscape-client
<shredding> can someone help me on how a curl command should look like if the client_id and api_key of oauth are weird?
<lordievader> thebwt1: The scripts that provide the data are all over the place though: apt-file search /etc/update-motd.d
<shredding> curl -X POST -d "grant_type=password&username=Christian&password=secret" 'http://WpDs.u9yrWD3js;fYq?cog;MvTiq0Bj02r8LTL_v:c1W2:PzVsccsIt_G_uxnwE_TC08z14IKqYYKy0DJAok;_B?RvuVashIsARqhGwF=ChoBJveA7LvB;C?IeXyp?0ZiyBtg9;tSwTjVdC.K_f@n=;@V;2:VoX@IhPyiHzC@localhost:8000/o/token/'
<shredding> ends with curl: (6) Couldn't resolve host 'WpDs.u9yrWD3js;fYq\'
<shredding> I understand why that is, but I don't know how to make it work.
<dav1dp0101> Does anyone have any expertise troubleshooting Upstart scripts?
<Aison> why does one of my ubuntu servers have p9p1 as the name for its network device
<Aison> and the other one eth0
<Aison> where are the settings to change that behaviour?
<henkjan> Aison: check https://bugs.launchpad.net/ubuntu/+source/biosdevname/+bug/1284043
<uvirtbot> Launchpad bug 1284043 in biosdevname "udev renaming the same hardware network i/f to different name, breaks networking and firewall (dup-of: 1324558)" [High,Confirmed]
<uvirtbot> Launchpad bug 1324558 in biosdevname "[SRU] biosdevname returns identical names for two different devices." [Undecided,Fix committed]
<bananapie> What is the "right" way to connect to generic ssl service ( https, imap, whatever ) and check that the certificate is signed by an authority in /etc/ssl/certs ?
<bananapie> I can't use my browser because it won't work for anything other than https
<bananapie> Sorry, I got disconnected.
<darksider> hi guys, just installed ubuntu server 14.04. On first boot, after installation, the booting process seems to hang, with the harddrive led blinking every second or so. Last line is Adding 4GB swap on /dev/sda2
 * genii makes more coffee and wanders over to #ubuntu-meeting
<smoser> utlemming, https://bugs.launchpad.net/ubuntu/+source/walinuxagent/+bug/1360471
<uvirtbot> Launchpad bug 1360471 in walinuxagent "update walinuxagent to 2.0.7" [Medium,Confirmed]
<smoser> and, fyi, ubuntu ServerTeam meeting going on right now in #ubuntu-meeting
<zul> hallyn: ill get to libvirt tomorrow
<hallyn> zul: ok
<zoidberg-> Hey does anyone run a speedtouch 330 modem on debian, or has done in the past ( i know its an old modem ), http://codepad.org/SFzY5kZL <-- anyone know how to get this working i can't see to get it to grab device like ttyS0?
<sarnold> zoidberg-: https://lkml.org/lkml/2002/11/7/186 ... is there still a 'speedtouch' kernel module to load? :)
<Sierra> For someone that has never worked with cronjobs before, is it possible to create one that'll run this every 30 or so minutes? sudo service apache2 restart & sudo service mysql restart
<Sierra> It's a temporary fix to an issue a server I'm working on is having, but it'll hopefully help a bit while I work on the main issue
<sarnold> Sierra: add "0,30 * * * * root /usr/sbin/service apache2 restart ; /usr/sbin/service mysql restart"  to /etc/crontab
<RoyK> sarnold, Sierra: is it really wise to restart apache and mysql to fix an issue?
<sarnold> RoyK: no :)
<RoyK> Sierra: check the logs - they should see what's wrong
<sarnold> RoyK: but Sierra promised it's a short-term workaround
<RoyK> s/see/show/
<RoyK> sarnold: obviously, yes
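sarnold's stopgap line, annotated; note that /etc/crontab (unlike per-user crontabs) takes a user field before the command. As RoyK says, this is a blunt workaround, not a fix:

```text
# /etc/crontab format: minute hour day-of-month month day-of-week user command
# Runs at :00 and :30 of every hour:
0,30 * * * *   root    /usr/sbin/service apache2 restart ; /usr/sbin/service mysql restart
```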
<Sierra> RoyK: There seems to be a memory leak somewhere that's causing the server to completely die after ~45 minutes, this is just temporary while I get this fixed
<RoyK> Sierra: what's leaking?
<Sierra> Not certain yet
<RoyK> Sierra: have top running or monitor the problematic processes with munin or something to show their memory use
<Sierra> All it's running right now is a set of wordpress sites, but I've watched the memory usage slowly climb until it hits a point where it stops responding, using htop
<Sierra> Process wise, it seems to be heavy CPU usage from apache, and growing memory usage from mysql
<RoyK> Sierra: the apache processes or mysql?
<RoyK> ah
<RoyK> that's rather bad...
<Sierra> Yeah.. Just suddenly started up yesterday, no warning. Nothing was updated or changed beforehand that I'm aware of
<RoyK> too bad wordpress can't use postgresql :P
<RoyK> Sierra: any wordpress or php upgrade happened recently?
<RoyK> erm... if mysql is eating the memory...
<RoyK> just forget that question
<Sierra> I do know there was a wordpress update released, but as far as I know it hasn't actually been updated yet due to the resource issues
<RoyK> Sierra: can you isolate the process(es) leaking?
<Sierra> I'm wondering if there's just a suddenly massive amount of people trying to access the websites or something (Which is odd.. It's nowhere near tax season)
<sarnold> http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-5265.html
<uvirtbot> sarnold: The Incutio XML-RPC (IXR) Library, as used in WordPress before 3.9.2 and Drupal 6.x before 6.33 and 7.x before 7.31, permits entity declarations without considering recursion during entity expansion, which allows remote attackers to cause a denial of service (memory and CPU consumption) via a crafted XML document containing a large number of nested entity references, a similar issue to CVE-2003-1564. (http://cve.mitre.org/cgi-bin/cv
<sarnold> http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-5266.html
<uvirtbot> sarnold: The Incutio XML-RPC (IXR) Library, as used in WordPress before 3.9.2 and Drupal 6.x before 6.33 and 7.x before 7.31, does not limit the number of elements in an XML document, which allows remote attackers to cause a denial of service (CPU consumption) via a large document, a different vulnerability than CVE-2014-5265. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-5266)
<Sierra> RoyK: By the looks of it, a bunch of mysql processes using ~2.5% of memory each, and the number of those processes is steadily growing
<RoyK> Sierra: how is the I/O load?
<RoyK> given by "wio" by "sar" (from sysstat) or "wi" from "top"
<RoyK> Sierra: install sysstat (or munin) if you want to monitor the system over time. using just top doesn't give you history
<Sierra> RoyK: htop, sorted by memory usage: http://i.imgur.com/JSGHNrL.png
<Sierra> It was just rebooted about 2, 3 minutes ago
<RoyK> Sierra: htop doesn't seem to say much about the I/O load
<Sierra> The IO monitors for htop seem to not work.. Everything's at 0
<RoyK> use top
<RoyK> or just install munin+sysstat
<RoyK> btw, there doesn't seem to be any global wio stats from htop
<Sierra> RoyK: Ok, I've got sysstat installed. Did a small google search, should I follow these instructions? http://www.leonardoborda.com/blog/how-to-configure-sysstatsar-on-ubuntudebian/
<RoyK> Sierra: looks ok, although I'm not sure if you need 2min samples - up to you
<Sierra> For now, might as well, in case the server decides to implode after 5 minutes of running instead of 30 xD
<RoyK> Sierra: also - try munin - it makes nice graphs to help visualising things
<RoyK> hehe
<Sierra> Ubuntu manual pages for munin are showing instructions for two different servers; Not sure what that's about >.<
<RoyK> Sierra: apt-get install munin munin-node # that should do for most
<RoyK> munin is pretty good at configuring plugins available for the system
<RoyK> Sierra: by default, it'll setup a /munin place on the webserver - see /etc/munin/apache.conf for the configuration of the web access
<Sierra> RoyK: Well, this just suddenly happened in a split second: http://i.imgur.com/S4fW2sP.png
<Sierra> i.imgur.com/8K5199J.png
<RoyK> Sierra: run top - check i/o statistics
<Sierra> So maybe it's not a memory leak, something's causing a ton of threads to suddenly start up from apache and mysql..
<Sierra> And the server's not responding again
<RoyK> Sierra: your load is very high, so the problem is probably due to I/O load
<RoyK> Sierra: use top next time - not htop
<Sierra> Is there a way to get the SSH session to have a higher priority or something so the server will still respond to ssh?
<Sierra> It seems that it is responding, just incredibly slowly..
<RoyK> usual I/O issue
<RoyK> probably swapping the best it can
<Sierra> Gonna have to call the guys at the datacenter and have em reboot the box again
<Sierra> Ok so it finally responded to a service apache2 stop after ~5, 6 minutes, and everything's running fine again
<Sierra> Although the webserver's offline now
<Sierra> RoyK: How do I have top display the I/O stats?
<RoyK> Sierra: echo b > /proc/sysrq-trigger
<RoyK> that'll reboot it
<RoyK> it won't sync the disks or anything, but hopefully, ext4 will sort that out
<Sierra> RoyK: Not sure if the reboot's necessary atm as its load dropped down after it finally stopped apache2
<RoyK> Sierra: echo s > /proc/sysrq-trigger; echo u > /proc/sysrq-trigger ; echo b > /proc/sysrq-trigger
<RoyK> that'll be safer
<RoyK> ok
<RoyK> Sierra: top will report i/o stats by default
<Sierra> Unless it's not labelled as IO, I'm not seeing it :/
<RoyK> wi
<RoyK> time spent waiting for i/o
<RoyK> erm
<RoyK> wa
<RoyK> %Cpu(s):  0,2 us,  0,2 sy,  0,0 ni, 99,7 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
<RoyK> the wa thing there
<Sierra> http://i.imgur.com/HwmcNCi.png
<Sierra> Oh there it is
<RoyK> press 1
<RoyK> so you get the output of all CPUs
<Sierra> I'm seeing it hitting 60-80%wa on both cores
<Sierra> And now 90%
<Sierra> http://i.imgur.com/2ERj1KB.png is what its showing right now
<RoyK> Sierra: lots and lots of swap used - that's the cause
<Sierra> Any idea what's causing the massive spike in resource usage that wasn't occurring before? It happened at the end of tax season (when lots of people were using the site) but nowhere near this bad
<Sierra> The server still responded back then, at least
<RoyK> Sierra: I'd try with reducing the maximum processes forked by apache
<RoyK> Sierra: 2GB isn't a lot if a webserver is under stress
<Sierra> RoyK: Not certain how to do that. Google search came up with things that seem irrelevant
<RoyK> Sierra: set MaxClients to 50 or something - perhaps less - under <IfModule mpm_prefork_module>
<RoyK> in /etc/apache2/apache2.conf
<RoyK> Sierra: it certainly looks like a truckload of connections to your webserver that's killing it
<RoyK> Sierra: change that - stop apache and start it again - don't just do a reload
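RoyK's suggestion as a config fragment (a sketch: only MaxClients comes from the discussion, the other values are illustrative; on Apache 2.4 the directive is named MaxRequestWorkers and usually lives in mods-available/mpm_prefork.conf):

```apache
# /etc/apache2/apache2.conf -- cap prefork workers so a 2 GB box
# can't fork itself into swap under load
<IfModule mpm_prefork_module>
    StartServers           5
    MinSpareServers        5
    MaxSpareServers       10
    MaxClients            50
    MaxRequestsPerChild  500
</IfModule>
```

MaxRequestsPerChild also recycles workers periodically, which papers over slow per-process leaks in PHP apps.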
<Sierra> Would help if the router wouldn't keep disconnecting my laptop
<sarnold> Sierra: are you under constant ddos attack or something?
<Sierra> I have no idea, sure as hell hope not >.<
<Sierra> Just randomly started up yesterday or the day before
<sarnold> Sierra: netstat -tun  might be useful to determine what's going on..
<Sierra> Well the server didn't even last 2 minutes before I lost access again.. -headdesks-
<RoyK> Sierra: after changing the apache conf?
<RoyK> Sierra: looks like a DDoS to me
<Sierra> I didn't even get so far as to restart apache to load the new config
<Sierra> So how do I explain what a DDoS attack is to a boss that knows nothing about networking and will probably think it's an excuse?
<Sierra> XD
<RoyK> Sierra: what sort of content is on this website? Have you offended some people? ;)
<kklimonda> contact your ISP asking them to route the ddos to /dev/null? ;)
<sarnold> Sierra: if it is a DDOS, your ISP can help immensely. give them a call.
<sarnold> Sierra: they'll probably be happy to help even if you haven't yet figured out what exactly is going on, they'll probably be able to tell in a few seconds if it is network related or not :)
<Sierra> RoyK: It's a series of front end wordpress websites for a united states tax efiling company
<sarnold> Sierra: .. in the middle of august. heh.
<Sierra> Yeah.
<Sierra> That's what's got me wondering. Plus, when tax season hit, it was nowhere near this bad
 * Sierra is just the programmer that writes the web code :(
<Sierra> And manages the server
<kklimonda> heh, that's a pretty painful position to be in.
<kklimonda> I'm now trying to train some of my webdevs at my old work to manage servers, and it's not going well :(
<Sierra> Welp, gonna have to see if the datacenter can help with this. That's the only contact I have (That will recognize me as authorized to discuss such issues)
<Sierra> My supervisor's on a week long vacation. So this is fun XD
<kklimonda> yeah, definitely give them a call if you think it's network related
<Sierra> Already done, they're going to have another guy call me back
<RoyK> Sierra: better give that VM another few gigs of memory, for a start, and move the database to a dedicated server
<Sierra> RoyK: It's not a VM, it's running on a physical server that my supervisor provided
<Sierra> Sitting on a rack at a datacenter
<RoyK> Sierra: a physical server with 2GB of memory?!?
<Sierra> Yeah.
<Sierra> I brought that up, was told to just do it
<Sierra> 'This server ran perfectly fine before'
<RoyK> Sierra: my home server has 16GB :P
<Sierra> My personal VM has 512MB XD
<Sierra> And runs better than that damn thing does.
<Sierra> Even when its not having this issue
<RoyK> Sierra: again - start out with decreasing the number of apache processes
<RoyK> Sierra: if you get access to that thing :P
<Sierra> I did manage to get the change made to the file
<Sierra> The server just kinda.. Died before I could get the apache2 restart out
<RoyK> Sierra: do you have some remote control?
<RoyK> Sierra: remote console?
<Sierra> SSH terminal is my normal method, but that immediately responds with 'Server closed the connection unexpectedly' now
<RoyK> Sierra: most servers have iDRAC or something like that in them, which works well if it's configured correctly
<Sierra> There was /supposed/ to be a newer server with better hardware setup, but that never actually happened
<RoyK> IMHO physical servers should only be used for special purposes :P
<RoyK> (such as VM hosts)
<RoyK> we even run our database servers virtualized - it works
<Sierra> The issue there is getting my boss (Not the same person as my supervisor) to actually pay for the hardware
<RoyK> heh
<Sierra> He's convinced that what we have has been working, so if it stops working I can fix it
<RoyK> then let the system die and tell your boss why it died
<qman__> Yeah, I've had servers lock up from improperly tuned apache maxclients, that's probably the situation
<qman__> Fortunately mine were all VMs, connect to hypervisor, hit ctrl+alt+del or reset
<RoyK> qman__: wrong answer - servers like these should be virtualised
<Sierra> RoyK: Sadly, that may result in loss of a job
<qman__> Retune apache, hope for the best
<RoyK> Sierra: not if you play your cards correctly - you can't maintain a server with too little resources
<qman__> And yes, they should all be virtualized, no reason not to
<Sierra> Tbh, if I was actually paid enough, I'd have probably just upgraded the hardware myself
<Sierra> But I'm not paid enough for that. :<
<qman__> Running hardware incapable of virt in prod is inexcusable today
<Sierra> To be fair... I don't think running everything on VMs is a good idea anyway. Head honchos get confused enough with this older stuff as it is
<Sierra> And they insist everything I do be something they can understand how to use
<Sierra> >.<
<Sierra> There's so many things that could be done better if I just had free rein to get it done right
<qman__> It's easier to ask forgiveness than permission
<qman__> If you do it right and prove it works, that normally solves the problem
<Sierra> Tried that once. They had me go and redo it the way they wanted
<qman__> Otherwise, it's time to job hunt
<Sierra> Ohhh I've been tempted
<Sierra> If my dad's livelihood didn't depend on this business staying alive, I'd probably be working toward IT at Macy's. Family friend works there, and has said I'd be great with them (I've even gone with him a couple times as an intern of sorts)
<RoyK> Sierra: I've talked to people running large-scale Oracle databases virtualised - they work
<RoyK> Sierra: running a webserver on physical stuff isn't good
<Sierra> What I really want to do is make a living building custom PCs. I enjoy that, it's like super fancy legos
<RoyK> Sierra: also - separate the webserver from the database server
<RoyK> Sierra: a database server has different needs than an apache server - apache+php may need some RAM, but rarely I/O, unless access takes off and it starts swapping
<qman__> Yep
<Sierra> Well, got this from the datacenter
<Sierra> I have verified with one of our network engineers and we cannot see any indication from your current network usage that would indicate a DDoS from a networking standpoint.  Network activity has spiked, but within norms of the historical data.
<qman__> Yeah, an improperly tuned server can choke on just normal heavy load, or even a security scanner or web crawler hitting everything
<Sierra> Hm... On a random note
<Sierra> The server's not getting massive resource hits
<Sierra> Someone recommended a command earlier to check network usages, but I've forgotten what it was.. netstats something or other?
<Sierra> Found it, netstat -tun
<Sierra> Is there a way to have a certain command run when memory usage hits a certain number, say, 1024MB?
<Sierra> Hm...
<Sierra> I'm seeing a lot of connections from a single IP address with random ports
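A quick way to see whether one remote address dominates, sketched with canned sample lines (the IPs are documentation addresses; on the server the pipe would start from `netstat -tun` instead of the here-variable):

```shell
# Count established connections per foreign IP (field 5 of netstat -tun output).
sample='tcp 0 0 10.0.0.5:80 198.51.100.7:51234 ESTABLISHED
tcp 0 0 10.0.0.5:80 198.51.100.7:51235 ESTABLISHED
tcp 0 0 10.0.0.5:80 203.0.113.9:40000 ESTABLISHED'
echo "$sample" | awk '{split($5, a, ":"); print a[1]}' | sort | uniq -c | sort -rn
```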
<sudormrf> Hey guys, I would like to install intel graphics drivers to my NUC running ubuntu server.  I am not sure which package in the repos I should use.  any suggestions?
<RoyK> Sierra: just block that - iptables -I INPUT -s <that ipaddress> -j DROP
<Sierra> RoyK: Not sure if that's normal or not for a webserver, though
<sudormrf> Sierra, set up fail2ban
<sudormrf> that way you don't need to do this manually
<Sierra> Ok looks like what I was seeing might be normal... Loaded up the website, clicked a couple pages, and now my ip address shows up with the same thing
<Sierra> Isn't fail2ban for failed logins?
<qman__> Yes
<RoyK> Sierra: you can use it for all sorts of things
<sudormrf> Sierra, I use it to block IPs all the time
<sudormrf> Sierra, what RoyK said
<sudormrf> I have it looking at the syslog for a certain output and then it blocks the IP it finds in that output
<qman__> The out of the box config only does SSH logins IIRC but it is very configurable
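What a custom jail along the lines sudormrf describes might look like in /etc/fail2ban/jail.local (a sketch: the jail name, thresholds, and logpath are hypothetical, and the matching filter regex would have to be written separately in /etc/fail2ban/filter.d/apache-flood.conf):

```ini
[apache-flood]
enabled  = true
port     = http,https
filter   = apache-flood
logpath  = /var/log/apache2/access.log
maxretry = 300
findtime = 60
bantime  = 3600
```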
<qman__> I have it set up on an asterisk box
 * RoyK uses asterisk or wind whenever someone points guns at him
 * RoyK uses asterisk or windows whenever someone points guns at him
<qman__> That's basically the situation here, inherited a mess, made it work
<sudormrf> qman__, I believe you are correct.  I have it set up to block a specific type of entry.  was noticing a bunch of eastern European IPs trying to do stuff
<sudormrf> so I stopped them
<sudormrf> first I blocked entire IP ranges for those regions...and then some sites stopped working lol
<sudormrf> had to scale back
<qman__> This asterisk server is completely integrated into the ticketing system and email system, so any change is going to be very involved
<qman__> Email system is an unmaintainable disaster, too
<RoyK> qman__: is asterisk so full of (some word not allowed on this channel) as earlier?
<qman__> Yeah, asterisk is...appropriately named
<RoyK> :P
<qman__> One of the email servers is running debian etch, and someone decided it'd be a good idea to install random lenny packages, so now it's totally non-upgradeable, can't install or remove software
<qman__> It sits in the middle of the relay path with a custom LDAP+postfix integration that doesn't work in newer versions
<genii> RoyK: I find that wind was more apt, since they are a fairly crappy cell provider company
<teward> is it possible to configure BIND9, when it acts as a caching/forwarder server, to NOT go hunting for authoritative nameservers to go get data from?  It appears to do this in all cases...
<teward> (this is all bind9 on Ubuntu Server boxes, hence me asking here)
<RoyK> teward: DNS is pretty straight-forward - you configure the SOA and it says how long things can be cached
<teward> RoyK, i have it only configured as a forwarder, i.e. it forwards all requests to 8.8.8.8 and 8.8.4.4 (google DNS).
<teward> BUT, it also is reaching to the root nameservers, then the ccTLD/gTLD nameservers, etc. until it hits one for an authoritative answer
<teward> trying to figure out if there's a way to disable that
<qman__> Remove the root hints
<qman__> It can't ask root servers if it doesn't know what they are
<sarnold> qman__: haha
<sarnold> teward: look around for a setting that might be "forward only" or "don't recurse" or similar
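The two suggestions combined, as a named.conf.options sketch (forwarder addresses are the Google ones from teward's setup; `forward only` is a real BIND option that stops named from falling back to iterative resolution, and qman__'s trick of removing the root hints closes the fallback path entirely):

```text
// /etc/bind/named.conf.options
options {
    forwarders {
        8.8.8.8;
        8.8.4.4;
    };
    forward only;   // never walk the roots; answer from forwarders or fail
    recursion yes;  // still accept recursive queries from clients
};
```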
<RoyK> I have finally discovered what's wrong with my brain; On the left side, there's nothing right, on the right side, there's nothing left!
<Sierra> Ok so ufw is proving to just not work. I've blocked an ip address, and netstat -tun is still showing that ip address all over the place... Is there a better option for blocking specific IP addresses?
<sarnold> Sierra: probably ufw block rules won't affect already established connections; you can either kill the process that has those sockets open (netstat -tunp can help) or you can nullroute the address, ip route add blackhole <ipaddr>
<Patrickdk> ufw uses conntrack
<Patrickdk> conntrack will keep existing open :)
<Patrickdk> use conntrack tool to fix it
<sarnold> "conntract tool"?
<sarnold> ah you can see how my fingers work..
<YamakasY> are there some good preseed guys in here?
#ubuntu-server 2014-08-27
<plk> Hi everyone... this is a long shot but does anyone know why postfix smtp would pass shadowauth and fail SASL auth?
<lordievader> Good morning.
<YamakasY> morning!
<pmatulis> morning
<smoser`> hallyn, is there any cost to subuid/subgid mappings being complex
<smoser> as in 'lxc-usernsexec' -m b:0:32000:1 -m b:1:33563:1 -m b:2:45191:1 ... ...
<smoser> as opposed to just: lxc-usernsexec -m b:0:65535:65534
<hallyn> i haven't measured it
<smoser> hallyn, does that end up being something in the kernel that is utilized real-time?
<smoser> or does -m b:0:65535:65534 just end up in the kernel as 65534 individual mappings.
<smoser> that would seem expensive
<hallyn> it's in mappings by ranges
<hallyn> it should be very fast
<smoser> so it's possible then that 65535 individual mappings would perform poorly compared to a range
<smoser> unless something went in and realized it could re-write that.
<dav1dp0101> Does anyone have experience or advice on how to change permissions on an apache2 web server? I put some files on my server at /var/www/html/archives and I can't access the files through a web browser (I get a 403 error). Checking my /var/log/apache2/error.log, I get: (13)Permission denied: AH00132: file permissions deny server access. I have changed the user and group owners to a default user, but still I don't have it working. Any thoughts?
<thegoat> i am trying to configure smtp auth on 12.04 server, but every time i try to pass my creds when sending an email it comes back with login failed, and i can't find any logging as to why
<patdk-wk> well, what *do* the logs say?
<thegoat> i can't find any logging as to why
<thegoat> nothing in mail.log or auth.log
<thegoat> or daemon.log
<thegoat> is there a way to run sendmail in the foreground in a non-daemon mode so i can get some verbose output?
<darkside_> hey guys
<darkside_> keep getting this message while accessing a NAS using CIFS: "CIFS VFS: bogus file nlink value 0", any ideas what it can be?
<pmatulis> thegoat: what smtp server are you using?
<thegoat> sendmail
<pmatulis> oof
<zul> hallyn:  libvirt 1.2.7 is building in  my libvirt testing ppa (ppa:zulcss/libvirt-testing)
<thegoat> pmatulis: yeah i know ;-)
<pmatulis> thegoat: have you considered postfix?  sendmail is so 1980's
<pmatulis> (good music in the 80's though)
<thegoat> the milter i use seems to require sendmail, so i am kind of stuck
<pmatulis> ah, milters, that brings me back
<pmatulis> thegoat: not many people here will have experience with sendmail as an MTA i'm afraid
<thegoat> i am using sendmail out of the box, trying to tweak things from what i get from searches, but i am at my wits' end.
<pmatulis> thegoat: tell your people to get into the 21st century?
<thegoat> yeah i know.....but it's an upstream provider so that might not work too well ;-)
<thegoat> no biggie, i'll keep digging
<streulma> finally running Ubuntu Server 14.04.1 fresh install from Virtualbox on Hosting server VPS in the cloud.
<streulma> booted in recovery, wiped hda1 and copied over with NC my Ubuntu Server 14.04.1 tar.gz
<patdk-wk> milters don't require sendmail
<patdk-wk> milters require a sendmail milter interface, that postfix supports
<patdk-wk> though personally, I dislike milters :)
<thegoat> pmatulis: got it working :-D
<irv> erm, i'm getting W: Failed to fetch http://ca.archive.ubuntu.com/ubuntu/dists/raring-backports/multiverse/binary-i386/Packages  404  Not Found [IP: 91.189.92.200 80]
<irv> when trying to sudo apt-get update
<cfhowlett> !raring | irv,
<ubottu> irv,: Ubuntu 13.04 (Raring Ringtail) was the 18th release of Ubuntu. Support ended on January 27, 2014. See !eol, !upgrade and http://ubottu.com/y/raring
<irv> oh kk
<irv> so i need to dist upgrade?
<irv> preferably to an LTS
<irv> as this is my production webserver :P
<irv> i was on the bleeding edge lol
<irv> not the best idea but whatever
<irv> so to upgrade to 14.04?
<irv> it's telling me i can do-release-upgrade to go to 13.10
<irv> do i have to do that before going to 14.04?
<cfhowlett> irv, 13.10 is also dead.  easier to download and clean install 14.04 but you can do an eolupgrade if you wish
<irv> yeah there's a decent amount of customization to the webserver so i don't wanna start from a fresh install this time
<irv> i'll look eolupgrade, thx :)
<irv> hmm, so seems the simplest way is something like: sudo apt-get update; sudo apt-get dist-upgrade; sudo update-manager -d; sudo do-release-upgrade
<irv> but i can't even run the update without errors since the old repos are down
<irv> hmmmmmm
<irv> lol
<genii> irv: Change the repo names from ca.archive.ubuntu.com  to old-releases.ubuntu.com
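[editor's note] genii's repo rename can be done in one pass with sed; a sketch on a scratch copy (the real file is /etc/apt/sources.list and editing it needs sudo):

```shell
# Scratch copy standing in for /etc/apt/sources.list.
cat > /tmp/sources.list <<'EOF'
deb http://ca.archive.ubuntu.com/ubuntu/ raring main restricted universe
deb http://security.ubuntu.com/ubuntu raring-security main restricted universe
EOF
# Point both the country mirror and the security host at old-releases:
sed -i -e 's|ca.archive.ubuntu.com/ubuntu|old-releases.ubuntu.com/ubuntu|' \
       -e 's|security.ubuntu.com/ubuntu|old-releases.ubuntu.com/ubuntu|' \
       /tmp/sources.list
grep -c 'old-releases' /tmp/sources.list          # → 2
```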
<irv> thx
 * RoyK sticks to LTS releases unless he's testing stuff
<irv> what about the security ones
<irv> same thing?
<genii> Yup
<pmatulis> irv: google for 'ubuntu old-releases'
<pmatulis> there is a wiki page
<irv> cool, thanks
<irv> it's going now
<irv> :)
<RoyK> :)
<irv> i'm just toying with awstats before i upgrade
<irv> taking a while to do its initial run
<cfhowlett> irv, that's why I suggested you get the 14.04.1 iso and just clean install.
<RoyK> cfhowlett: may be easier to upgrade if you have configured a lot of stuff, or if you're somewhat new to linux/ubuntu, or both...
<irv> yeah, i probably will stand up a new VM alongside
<irv> and migrate over
<irv> but because this is a production server
<irv> i can't afford the downtime to set up all that stuff from scratch heh
<irv> so i'll just throw it up to 14.04.1 upgrade then stand up a freshie and take my time moving stuff over
<irv> :)
<YamakasY> guys which keyring am I missing ? Ign http://nl.archive.ubuntu.com:80 trusty Release                             No keyring installed in /etc/apt/trusted.gpg.d/
<RoyK> irv: If I were you, I'd start with a new VM instead of upgrading something in production. Things sometimes break during upgrades, and it's not always easy to fix them without a reinstall
<irv> yeah i have a full copy of the VHDX
<irv> so if something goes wrong i'll simply flip over to that
<irv> but yeah i feel ya
<RoyK> yuch - hyper-v :P
<irv> ahah, yeahhh
<irv> i know i know :P
<irv> but it's 95% windows infra
<irv> just my webservers and a few other random servers are on linux :(
<irv> and ubuntu actually runs quite nice on hyperv nowadays
<irv> it's hyperv 2012 too heh
<RoyK> haven't used that - ubuntu didn't run too well on 2008r2's hyper-v - in fact - it just disconnected from the network when traffic was high, without anything in the logs on either ubuntu or windows
<irv> lol sounds like a fun issue to diagnose
<irv> :P
<irv> i've had 400 days of solid uptime on these boxes
<RoyK> irv: 400 days of uptime? although that may seem nice, it doesn't seem very good regarding security patching :P
<irv> lol yeah
<irv> hence why i need to upgrade to a modern LTS distro :D
<irv> most of those are just internal web servers tho for things like security cams etc :)
<irv> but i do see your point
<lordievader> Good evening.
<pmatulis> lordievader: evening
<lordievader> Hey pmatulis, how are you doing?
<pmatulis> flying high again
<webfox> Hello folks!
<webfox> I was trying to change the ssh port at my machine but it is not working as it should.
<RoyK> webfox: how did you change it? got a firewall?
<webfox> What I did is change Port 22 to 2222 at /etc/ssh/ssh_config.
<lordievader> webfox: Define 'not working as it should'
<webfox> Is that enough? I don't have an active firewall.
<sarnold> did you restart sshd?
<RoyK> webfox: ssgd_config would be better
<RoyK> sshd_config, even
<sarnold> sigh
<sarnold> RoyK++
<webfox> let me see it, just a sec.
<RoyK> ssh_config is the client config
<genii> webfox: I think you wanted sshd_config and not ssh_config
<webfox> Great, now it is working just fine! Thank you guys!
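[editor's note] The fix above, sketched on a scratch copy: the file that matters is /etc/ssh/sshd_config (the server config), not /etc/ssh/ssh_config (the client config), and sshd must be restarted afterwards.

```shell
# Scratch copy standing in for /etc/ssh/sshd_config.
printf 'Port 22\n' > /tmp/sshd_config
sed -i 's/^Port 22$/Port 2222/' /tmp/sshd_config
grep '^Port' /tmp/sshd_config                     # → Port 2222
# On the real server, then: sudo service ssh restart
```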
<webfox> how do I find squid version?
<sarnold> webfox: dpkg -l '*squid*'
<RoyK> webfox: squid -v
<webfox> squid proxy, path is not default installed here.
<webfox> sarnold: strangely it shows version 3.3.8
<webfox> I was expectins something as 7.04 to 7.10
<webfox> expecting
<webfox> weird
<sarnold> webfox: you can aim a browser right at the squid port, e.g. localhost:3128, mine reports "Generated Wed, 27 Aug 2014 19:43:01 GMT by hunt (squid/3.3.8)"
<webfox> sarnold: are you still using the default port?
<RoyK> webfox: latest version is 3.4.6
<sarnold> webfox: what 'squid' are you expecting? wikipedia reports their most recent version is 3.4.6, released in june 2014
<YamakasY> i'm trying to find the right debian-installer source for 14.04.1 LTS
<webfox> well, not sure now. I was reading  https://help.ubuntu.com/community/Squid
<sarnold> webfox: funny enough I am. no idea why, I thought the debian mirror setup I used had it on 8000, but I'm apparently listening on both.
<sarnold> "surprise!"
<RoyK> webfox: the current version probably works :P
<webfox> I am quite new to using it. And it seems to be blocking everything by default.
<sarnold> webfox: oh, that appears to be talking about ubuntu versions from seven years ago :)
<webfox> sarnold: :P
<RoyK> webfox: did you change the squid config? ;)
<webfox> RoyK: not sure how, yet ;)
<RoyK> webfox: vi /etc/squid3/squid.conf
<RoyK> webfox: or change 'vi' to 'nano' if you're a newbie
<webfox> No, I use vim
<sarnold> webfox: don't worry, the first time I set up squid was back in 96 or 97, it seems fairly similar.. :)
<RoyK> sarnold: more or less the same ;)
<webfox> I would like to allow a specific IP full access.
<webfox> Anything other than that completely blocked.
<RoyK> webfox: just read the config file - it's got lots of comments - look for ^acl
<webfox> Ok. let me try it.
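[editor's note] A sketch of the ACL webfox is after, for /etc/squid3/squid.conf. 192.0.2.10 is a placeholder IP; squid evaluates http_access rules top to bottom, so the allow must precede the deny.

```
acl trusted_host src 192.0.2.10       # placeholder: the one allowed IP
http_access allow trusted_host
http_access deny all                  # everything else blocked
```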
<YamakasY> anyone an idea about this when building an image ?
<YamakasY> cp /usr/lib/syslinux/pxelinux.0 ./tmp/netboot/dir_tree/ubuntu-installer/amd64
<YamakasY> cp: cannot stat '/usr/lib/syslinux/pxelinux.0': No such file or directory
<YamakasY> this is on the 14.04.1 source
<sarnold> YamakasY: install syslinux-common?
<YamakasY> sarnold: is installed
<YamakasY> it was all pain to get the right packages :)
<YamakasY> when I pull a recent one from git it goes well, but that one is too new for LTS
<YamakasY> so won't boot
<YamakasY> sarnold: so is that source missing parts ?
<sarnold> YamakasY: I'm not sure; is the 'cp' running in a chroot or lxc container or something similar?
<YamakasY> sarnold: I do a fakeroot
<sarnold> YamakasY: apt-file search pxelinux.0  reports that path is owned by syslinux-common, so if the package is installed, it should be there
<sarnold> YamakasY: hmmm. fakeroot LD_PRELOADs some libraries, which might not work depending upon the programs that are executed
<YamakasY> sarnold: yeah I had some trouble with packages so I downloaded the latest stable ones for syslinux and so on
<YamakasY> yeah running as root now
<YamakasY> kinda fun to do
<sarnold> oh okay
<sarnold> hunh
<YamakasY> there is a stupid preseed bug for netcf
<YamakasY> netcfg
<sarnold> YamakasY: well, when in doubt, add in strace :)
<YamakasY> damn again
<YamakasY> sarnold: should syslinux be too new ?
<sarnold> YamakasY: no idea there. All I know is you've got an error saying the file doesn't exist, but you're confident the file is there, right? so break out strace and find out what's going on to change the filesystem view..
<YamakasY> sarnold: nah I installed the package
<YamakasY> that is
<sarnold> lunchtime :) have fun YamakasY
<YamakasY> damn my mirror server is becoming a hoo!
<YamakasY> sarnold: have a nice lunch
<YamakasY> sarnold: fixed, the package was too new for the other location
<YamakasY> the maintainer likes to rename a lot btw
<webfox> I am reading a tutorial about squid and there is a line like: "Get ready with the configuration file located at /opt/squid/etc/squid.conf"
<webfox> I don't think it is for Ubuntu because I don't have any items inside /opt folder
<webfox> But I don't know if it uses a different .config file other than the main default one.
<qman__> On ubuntu, squid uses /etc/squid3/squid.conf
<webfox> yes, perhaps the book just uses the default .config file. I think I will make a backup of it. :D
<scoutmastershake> Has anyone had a problem with 14.04 and proftpd where user auth just stops working for one user? I'm having this problem and the only work around currently is to reset the password with passwd.
<scoutmastershake> ???
<scoutmastershake> Helllo!!!
<pmatulis> scoutmastershake: hello, how are you?
<zzxc> Alright I got a question. I have someone wanting to do an SFTP connection to my machine (chrooted with a rbash shell to prevent anything but SFTP). The guy keeps insisting that I need to send him a public key. Is it just me or does that make absolutely no sense?
<zzxc> as I understand it private keys should always be on the client side and public keys should always be on the server side. Otherwise you have a private key sitting on a shared resource.
<fridaynext> I'm having an email issue.
<fridaynext> My WordPress installs are supposed to send me emails to notify me of new sales.  They are, indeed sending out the emails,
<fridaynext> but those emails are going directly to the Postfix/dovecot install that I switched away from 2 weeks ago, instead of my Google Apps address.
<fridaynext> Do you folks know how php mail() works, to get it to route outside of my Ubuntu 12.04 (Linode) server?
<catbus1> Hi, I am getting the hash sum mismatch error from the apt-get update around package bzip2. I found an irclog about the same error message on this channel, and learned that this might be temporary, but I have encountered this issue several times. I am wondering if someone can point me in the right direction to get this solved.
<scoutmastershake> pmatulis: hellp
<scoutmastershake> pmatulis: hello
<pmatulis> scoutmastershake: yes?
<scoutmastershake> Has anyone had a problem with 14.04 and proftpd where user auth just stops working for one user? I'm having this problem and the only work around currently is to reset the password with passwd.
<pmatulis> zzxc: the person logging in needs to have his public key on the server he's logging in to
<zzxc> pmatulis: Yeah thats what I was thinking.
<pmatulis> zzxc: did you test logging in with keys to the chrooted environment?
<pmatulis> scoutmastershake: so after resetting the password everything works?  sounds like the user forgot the original password
<RoyK> scoutmastershake: I stopped using proftpd some years back - vsftpd is nice - and if you need it secure, use sftp
<RoyK> scoutmastershake: don't use cleartext login as with ftp, use sftp, it's supported on most platforms, windows included (filezilla is a good sftp client)
<catbus1> W: Failed to fetch bzip2:/var/lib/apt/lists/partial/us.archive.ubuntu.com_ubuntu_dists_precise_main_source_Sources  Hash Sum mismatch
<catbus1> this is on 12.04.5
<zzxc> pmatulis: I've set this up several times before. I have a few users that already have jails and I've had no issues. I've even had other companies have me walk them through how to set up their own.
<zzxc> pmatulis: Issue is he's not giving me a public key
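[editor's note] For reference, the direction of key material in pmatulis's answer: the client keeps the private key and only the public half ends up in the server account's authorized_keys. A sketch with a placeholder key line (the paths and the key text are stand-ins; in practice the client runs ssh-keygen and ssh-copy-id does the copy):

```shell
# Placeholder public-key line standing in for one generated client-side.
PUBKEY='ssh-rsa AAAAB3Nza...placeholder user@client'
mkdir -p /tmp/demo_home/.ssh                      # stands in for ~user/.ssh
printf '%s\n' "$PUBKEY" >> /tmp/demo_home/.ssh/authorized_keys
# sshd is strict about these permissions:
chmod 700 /tmp/demo_home/.ssh
chmod 600 /tmp/demo_home/.ssh/authorized_keys
```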
<sarnold> catbus1: is it still there after a fresh apt-get update?
<catbus1> sarnold: yes. and I just did the apt-get update again, it's still there.
<sarnold> catbus1: can you grab the IP address (netstat -antp | grep http) while downloading the package lists?
<catbus1> sarnold: 91.189.92.201 and 91.189.91.14.
<scoutmastershake> never used vsftp how does it compare to proftpd
<scoutmastershake> ?
<sarnold> catbus1: thanks!
<scoutmastershake> does it allow me to lock users in their home directories
<scoutmastershake> ?
<fridaynext> mail sent from my ubuntu server to an email address attached to a domain on the same server isn't making it out to my google apps account - any ideas how to fix this?
<fridaynext> I've already deleted references of the email address from virtual domains, forwardings, etc.
<sarnold> fridaynext: check server logs to see if the googleapps server reported any errors
<fridaynext> sarnold: mail works going to google apps, as long as it originates off of this server.
<fridaynext> sarnold: ah - found it.
<fridaynext> it's getting marked as spam now.
<fridaynext> thanks for that tip! made me check the logs, which proved to me that it was indeed getting delivered.
<sarnold> :)
<pmatulis> catbus1: dunno, try changing mirrors
#ubuntu-server 2014-08-28
<zartoosh> Hi during installing of ubuntu packages I sometimes get a stall for a long time and this is displayed on the monitor:   Connecting to security.ubuntu.com  (2001:67c:1562::15)]       any help greatly appreciated? thanks
<sarnold> zartoosh: do you have a partially-functioning ipv6 stack?
<zartoosh> sarnold, I am not sure if I have or not, is this part of my network configuration that causes this issue?
<sarnold> zartoosh: what kind of response times do you get from ping6 security.ubuntu.com?
<zartoosh> sarnold,  64 bytes from likho.canonical.com: icmp_seq=1 ttl=39 time=89.0 ms   this is what I get
<sarnold> zartoosh: 89 ms is pretty snappy..
<sarnold> zartoosh: the canonical servers aren't exactly -fast-, but if your pings get there in a reasonable time, then my idea's done :)
<zartoosh> sarnold, then you are suggesting perhaps something is wrong with my network configuration? or network environment? thanks
<sarnold> zartoosh: yeah, I thought perhaps your DNS servers were returning AAAA records when your network wouldn't actually route those packets
<zartoosh> sarnold, thank you so much.
<zartoosh> Hi I am getting installation issue: libc6-i386 : Depends: libc6 (= 2.19-0ubuntu6) but 2.19-0ubuntu6.1 is to be installed.  How could I resolve this please?
<Patrickdk> apt-get update
<zartoosh> Patrickdk,  still the same issue ??
<pmatulis> zartoosh: pastebin the output to the update command
<pmatulis> and why do you need the 32bit package?
<zartoosh> pmatulis, I think valgrind needs that, if I can get rid of that it would be great..
<zartoosh> http://paste.ubuntu.com/8164510/
<Matadoer> hey guys how can I create a MySQL cluster across a bunch of ubuntu 14.04 vps
<vedic> Hey guys, I am trying to run a service which doesn't seem to go well. It keeps giving me output: .: 11: Can't open /etc/init.d/functions
<sarnold> vedic: can you find what functions it expects to find declared in the /etc/init.d/functions file?
<sarnold> vedic: /lib/lsb/init-functions may be useful
<vedic> sarnold: Is there any walk through/tutorial to follow
<sarnold> vedic: no idea, I don't know what service you're trying to use
<lordievader> Good morning.
<Tazmain> Hi all, can someone maybe tell me why entries added to rc.local don't get started at boot, except for the curlfs bash script that I added. None of the other services I wanted to have started at boot started at all.
<andol> Tazmain: Any chance that you could share your /etc/rc.local, using a suitable pastein?
<Tazmain> andol, sure thing
<Tazmain> https://privatepaste.com/525d264e38
<Tazmain> The last script is the only one that seems to get executed. The other scripts all work on their own. That is how I start those 3 services mostly.
<pmatulis> morning
<lordievader> Hey pmatulis, how are you?
<pmatulis> lordievader: a little sleepy, you?
<lordievader> Doing good, refactoring old code.
<Tazmain> So do I need to write upstart jobs for those or not?
<Vladimir_> I setup tftp server but it doesn't work, I can't send files to it, but I can send to a windows machine
<ikonia> Vladimir_: please don't cross post
<Vladimir_> Sorry
<blaaa> What is the procedure to update a package in universe (e.g. related to https://bugs.launchpad.net/ubuntu/+source/drupal7/+bug/1262813 )?
<uvirtbot`> Launchpad bug 1262813 in drupal7 "multiple security issues in drupal7 package" [Undecided,Confirmed]
<ikonia> blaaa: what do you mean update it ? get a new package built you mean ?
<blaaa> ikonia: yes
<ikonia> blaaa: log a bug / feature reqeust for it
<ikonia> blaaa: it's unlikley to happen, and more likley to get the fix back ported
<blaaa> ikonia: ok, so what does that mean, will an updated package appear in the backports repository or will someone backport the security fixes in version 2.27-31 (an further) and apply them to the 7.26 package?
<ikonia> blaaa: I mean the fix is more likley to get applied to the existing package, rather than a new package upgrade
<blaaa> ikonia: in that case I suppoose it's rather unlikely fixes will be applied at all in a timely fashion
<ikonia> I agree
<blaaa> I believe it's unwise to leave remotely accessible packages without security fixes for a long time, I think it would make sense not to include these packages in any repository. If users choose to install them manually they will at least know they will have to watch for issues and fix them if needed
<ikonia> ok
<blaaa> ikonia: what is the right forum for a suggestion like that?
<ikonia> log it in the bug
<streulma> hello
<streulma> is here one?
<ikonia> yes
<streulma> oh ikonia :)
<streulma> well
<streulma> I have a vps server by a cloud provider, but every time I make a backup, /var/cache and /tmp are removed by a restore
<ikonia> streulma: you've just asked this in #ubuntu and been told to talk to your cloud server provider
<lynxman> hi guys, what would be the best guide to make a deb to build and package a jar built with maven?
<cfhowlett> !packaging | lynxman
<ubottu> lynxman: The packaging guide is at http://developer.ubuntu.com/packaging/html/  - See https://wiki.ubuntu.com/UbuntuDevelopment/NewPackages for information on getting a package integrated into Ubuntu - Other developer resources are at https://wiki.ubuntu.com/UbuntuDevelopment - See also !backports and !sponsoring
<lynxman> cfhowlett: I already know the packaging guide and use it, I just wanted to find a clear example with maven dependencies
<cfhowlett> lynxman, ah.  sorry.  clearly I'm not the one to ask.
<lynxman> cfhowlett: no worries :)
<lynxman> jamespage: you around mister? :)
<lynxman> ah found this, very helpful https://wiki.debian.org/Java/MavenBuilder
<roaksoax> .win 16
<zartoosh> HI I have installed stock python-twisted using apt-get install python-twisted.    I found out there is a newer deb version on their development site. If I install the newer version: dpkg -i  <newer version>  would it remove the older one automatically or do I have to manually remove the older version? thanks
<patdk-wk> that depends on how *they* created the deb package
<csmit> Hey guys. I am using Ubuntu Server 14.04.1. This is a fresh install. I am attempting to set a static IP on eth0, ifconfig only detects lo and does not show eth0 even though I have the static entries for eth0 entered in.
<hallyn> sarnold: ok, i'm dedicating my laptop to this m-f qemu qcow2 bug.  i'm upgrading a vm with a qcow2 with backing file from precise to trusty...  what can i do to increase the odds of it happening do you think?  more or less memory for the vm?  does >1 vms at a time help?
<hallyn> man but that was a fast upgrade
<zartoosh> HI how could I control which ports dhclient should use? thanks
<RoyK> zartoosh: dhclient only uses 68/udp AFAIK
<sarnold> hallyn: it triggered for me quite often with a cycle of {apt-get -u dist-upgrade, run some tests, forcibly power off and revert to snapshot}
<adam_g> smoser, hey. off-topic, but do you have access to logs for download.cirros-cloud.net?
<hallyn> sarnold: what is "forcibly power off" ?
<sarnold> hallyn: in marc's uvt, that's uvt stop -rf
<sarnold> hallyn: looks like that turns into virsh snapshot-revert
<hallyn> hm.  lemme try one more time another way then i guess i'll install uvt.  i'm getting desperate
<hallyn> and NOW ctrl-alt-t won't launch a terminal
<sarnold> hallyn: yeah it feels like the sort of problem that ought to be reproducible even without uvt
<hallyn> sarnold: i've never had a update/release-upgrade trigger this.  always many cycles of compile/test.  but that's not reliable.  only annoying
<sarnold> hallyn: the only surefire way to make it happen is to have pressing need for your VMs to work correctly
<hallyn> yup
<hallyn> and how is it that upstream hasn't seen this
<sarnold> my only theory is that we may rely upon revert far more than the average bear
<zartoosh> Hi I am using ubuntu 14.04 I keep getting this error and not sure how I can resolve it, please help:  http://paste.ubuntu.com/8172857/   thanks
<sarnold> zartoosh: maybe you've got a bad mirror. try adding more apt sources for more mirrors
<hallyn> sarnold: but i'm not using revert at all
<zartoosh> sarnold, sorry not sure what you mean: here is the complete log : http://paste.ubuntu.com/8172873/
<sarnold> hallyn: no?
<sarnold> zartoosh: add some lines to your apt sources, something like /etc/apt/sources.list.d/ugh.list:  deb http://security.ubuntu.com/ubuntu trusty-security main restricted universe multiverse   and  deb http://mirror.anl.gov/ubuntu// trusty-updates main restricted universe multiverse   and deb http://mirror.anl.gov/ubuntu// trusty main restricted universe multiverse
<sarnold> zartoosh: mirror.anl.gov is usually pretty good for me and when they have packages they are ten times faster than the archive.ubuntu.com archives..
<zartoosh> sarnold, I will do that but I would like to know the root cause of this issue. I started with a kickstart which was built for ubuntu 14.04. Is it possible that apt-get update causes some conflicts because a 14.04 update has since been released?  Thanks
<hallyn> sarnold: nope, just using rbasak's uvt-kvm which uses qcow2 with backing file
<zartoosh> sarnold, by the way that solved my problem thanks.
<sarnold> hallyn: and rbasak's stuff doesn't do revert? or you just don't use it?
<sarnold> zartoosh: your mirror didn't finish a full sync or something similar
<hallyn> i don't think it does it.  if it does, i don't use it or don't know that i am
<hallyn> but a bunch of extra work isn't working so lemme try reverts i guess
<hallyn> sarnold: and upgrading a single vm at a time can do this for you, or you need N in parallel?
<sarnold> hallyn: sorry, I can't recall there. I think one-at-a-time is sufficient but I can't promise that.
<hallyn> sarnold: ok, trying with jdstrand's published image.  never worked for me on the vostro, let's see if it works on the thinkpad
<sarnold> hallyn: happy smashing :)
<hallyn> :(
<hallyn> maybe we should push 1.5 to utopic
<Sierra> Running ubuntu 14.04.1 64 bit.. Why am I getting this error with the add-apt-repository command? I've never come across this with ubuntu before:
<Sierra> sierradmin@sierrabrown:~$ sudo add-apt-repository ppa:tuxpoldo/btsync
<Sierra> sudo: add-apt-repository: command not found
<sarnold> Sierra: first install the software-properties-common package
<Sierra> Ah.. Thanks, dont recall ever having to install another package to run that before
#ubuntu-server 2014-08-29
<jdstrand> hallyn: you shouldn't need uvt-- my instructions in the bug give the steps uvt does
<jdstrand> hallyn: uvt stop -rf will do a virsh destroy followed by a revert to pristine snapshot
<hallyn> jdstrand: sigh, no, your instructions still don't seem to reproduce for me.  GAH
<hallyn> just to be clear, 1.5 was good and 1.7 was bad right?  I'll downgrade to 1.7 and try a few more times
<hallyn> or, no, we never knew about 1.7 did we.
<hallyn> oh phew, we did
<thumper> anyone know who is the owner of gcc (or just gccgo) for ubuntu?
<lordievader> Good morning.
<Kunzem1984> good day people. I'm rather new on Ubuntu server. I have two ubuntu servers: one has backuppc and the other is the gateway, dhcp etc for my network. i've setup backuppc to run using rsync and have successfully backed up a text file from my gateway server to my backup server. I know i'm not supposed to back up the /proc dir as it has files in use ( or so ). Which dirs should i not back up on the live gateway server ? hope i explained correctly.
<Abhijit> Kunzem1984, as per him, http://ubuntuforums.org/showthread.php?t=35087 he includes mnt sys and proc
<Abhijit> and lost found
<Abhijit> and the target .tgz itself! <-- the most important!
<Kunzem1984> Thanks Abhijit :)
<zartoosh> hi I am trying to understand a package version conflict issue.  The command "apt-get install <pkgname>" fails. The failure is because one of the <pkgname> dependencies is at a higher version than <pkgname> requires.  So my question is where does "apt-get" get the dependency information?   thanks
<caribou> zartoosh: the information is built into each package and then this info is made available in the archives
<caribou> zartoosh: you may have archive inconsistencies in your /etc/apt/sources.list
<caribou> zartoosh: or from a package that has been installed manually directly from the .deb file and not from an archive
<zartoosh> caribou, The repo was made by me, I basically collected the required packages in a directory and ran : dpkg-scanpackages,
<WilliamDotAT> im using 14.04 and changed the network card in my server
<WilliamDotAT> so it has no network as the MAC changed
<WilliamDotAT> 14.04 has no more /etc/udev/rules.d/70-persistent-net-rules
<WilliamDotAT> so what do i need to change?
<caribou> zartoosh: sorry, I'm not familiar with dpkg-scanpackages
<zartoosh> caribou, thanks
<Vladimir_> I setup a tftp server; when I try to send a file from the same machine it works fine, but when trying to send from a switch I get an error on the switch (sending to a windows machine works fine)
<sfix> hi guys, is SELinux practical to use on Ubuntu 14.04 ?
<sfix> I heard Ubuntus support for SELinux was pretty poor
<Beliq_> How to choose good hostname for a internet enabled server ?
<Beliq_> (subdomain ? )
<pmatulis> Beliq: it doesn't matter
<croepha> Is there a way to have apt-get not restart database engines when you upgrade?
<cfhowlett> croepha, don't update --- it'll use the last recorded data
<cfhowlett> croepha, wait, you mean upgrade from distro or within distro , i.e 12.04 > 14.04 or 14.04 > 14.04.1
<croepha> What I mean, is when I do an apt-get upgrade postgresql and then it decides that that is the best time to restart the postgres service
<cfhowlett> croepha, if you have the service running prior to upgrade, after upgrade it should restart the service.  if the service is NOT running before upgrade ...
<croepha> ok, Iâll just save the apt-get upgrade for when I can schedule downtime
<cfhowlett> croepha, you might wish to do dist-upgrade
<croepha> ill check it out, thanks
<webgeek> Hi everyone
<webgeek> I need to learn ubuntu server fast
<webgeek> like really fast
<webgeek> for web development purposes
<patdk-wk> nothing to learn :)
<patdk-wk> you just need to learn what you are going to use
<patdk-wk> php/python/ruby/...
<patdk-wk> your sysadmin would be the one to learn ubuntu server
<zartoosh> HI I have created ubuntu mirror, In /etc/apt/sources.list I have created this entry: "deb http://eng-lab/ubuntu  UbuntuMirror"  When I ran apt-get update I get the following error: "E: Malformed line 2 in source list /etc/apt/sources.list (dist parse)"
<streulma> hello, I run a script at boot in rc2.d/S19boot but it says Unable to reopen stdin. Is it bad?
<teward> zartoosh, that's because it's a malformed sources.list entry.  A 'mirror' of the archives is always going to be accepting the same request(s) as the standard repositories, assuming it is indeed a mirror of the repositories
<teward> zartoosh, so you would use a line similar to the default ones but point it to your mirror instead of the main archives
<teward> assuming it's an archive mirror
<zartoosh> teward,  okay so please help me to understand this better. I can access the site: http://eng-lab/ubuntu   under this I can see dists, pool and project subdirectories.  This mirror is an archive of 14.04  main.
<teward> zartoosh, do you still have your original sources.list entries in there?
<zartoosh> yes I do
<zartoosh> teward, yes i do
<teward> zartoosh, use those as the template.  then edit the HTTP address
<teward> but leave *everything else* the same
<teward> then it shouldn't throw those errors.  (it will still error if, say, your mirror is improperly configured, but that's a different issue entirely)
<zartoosh> teward,  okay here is what the template is: deb http://us.archive.ubuntu.com/ubuntu/ trusty main      So I will change this to: deb http://eng-lab/ubuntu  trusty main
<teward> yep
<zartoosh> teward, one moment please
<teward> zartoosh, i would comment out the main line though
<teward> and then use the new one as a separate line
<teward> so you can always easily revert
<zartoosh> teward, thank you so much it worked as you suggested.
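[editor's note] teward's advice as a sketch, on a scratch file standing in for /etc/apt/sources.list (eng-lab is the user's internal mirror host from this exchange):

```shell
# Keep the default entry commented out so it's easy to revert,
# and add the mirror line using it as the template:
cat > /tmp/sources.list <<'EOF'
# deb http://us.archive.ubuntu.com/ubuntu/ trusty main
deb http://eng-lab/ubuntu trusty main
EOF
grep -c '^deb ' /tmp/sources.list                 # → 1
```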
<streulma> hello
<streulma> what is STDIN ?
<teward> zartoosh, you're welcome.  ALWAYS REMEMBER you can use the default entries as templates, just change the URL instead
<zartoosh> teward, will do thanks again.
<sarnold> streulma: man 3 stdin   for decent overview :)
<zul> hallyn: when you get a test can you run the libvirt tests on my libvirt-testing ppa
<hallyn> zul: is it ok if i fire those off on monday?
<zul> hallyn: yep
<hallyn> k, making a note to do so
<hallyn> (i suppose maybe it'd be better if i run them today in case there are blockers)
<hallyn> yeah i better fire them off now
<zul> ack
<zartoosh> Hi all I have created a private mirror from ubuntu 14.04  main.  That is about 11 GB.   Is there any way I can create a mirror with a subset of packages? thanks
<sarnold> zartoosh: squid-deb-proxy is a nice half-way point
<zartoosh> sarnold, great thanks I will give it a try.
<StolenToast> if I'm running 12.04 are there any good things I would get by upgrading to 14.04?
<dobi_> hello guys. I'm trying to connect my smarttv (lg) to a media server using a crossover cable but the wizard attempts to connect to the internet to verify that all is correct so the connection is not established. Is it possible to give the tv access to the internet using the wlan adapter on the media server?
<qman> dobi_: if your media server supports routing, yes
<dobi_> qman: how do I know that?
<qman> Well, if its ubuntu it does
<dobi_> qman: its ubuntu server, yep
<qman> You will need to enable ip forwarding and configure masquerafing most likely
<qman> Masquerading
<dobi_> qman: can you help me with the process? any help would be welcome
<qman> dobi_: edit /etc/sysctl.conf, uncomment the line for net.ipv4.ip_forward=1
<dobi_> qman: done
<qman> You will then need to configure a static ip network on the crossover segment, I assume you are already doing that
<qman> To apply the sysctl change: echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
<dobi_> qman: yup http://pastebin.com/9UpLUtp1
<qman> Ok
<dobi_> qman: is that correct?
<qman> Yes
<dobi_> qman: anything else?
<qman> Next you need to apply two iptables rules
<dobi_> qman: ok
<qman> iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
<dobi_> qman: done
<qman> iptables -A FORWARD -i eth0 -j ACCEPT
<dobi_> done
<qman> Then, configure a dns server on the tv, and it should work
<dobi_> are they permanent or do I have to do anything else?
<qman> They are not permanent
<dobi_> qman: and what should I do to make them permanent?
<qman> To make them permanent I suggest adding as a script in /etc/network/if-up.d
<qman> Also with these two lines above those
<qman> iptables -F
<qman> iptables -t nat -F
<dobi_> which file? a new one?
<qman> Yes, a new one
<dobi_> and do I have to put sudo in front?
<qman> What that does is clear out the rules and recreate them whenever an interface comes online
<qman> No, in the script, it already runs as root
<dobi_> done
<dobi_> is this file executed automatically?
<qman> Yes
<qman> Every time an interface is brought up
<dobi_> qman: ok, let me try in this computer. Ill disconnect for a moment
<qman> Forgot, you may have to chmod +x that script you created
<dobi__> qman: its working! thank you man
<qman> Glad it works, I'm on a phone so I was worried about typos
<dobi__> qman: no, all perfect. At least on the computer. Ill try later on the tv. Thanks!
<qman> Cool
<dobi__> qman: something is wrong with iptables. After restart I have no rules
<dobi__> qman: buuuuut, the connection on the tv is working perfectly!
<dobi__> qman: (after adding the rules)
<qman> dobi__: try sudo chmod +x /etc/network/if-up.d/yourscript
<dobi__> qman: ok
<dobi__> qman: nope, the same, no rules
<qman> dobi__: ok, if you manually run that script, does it work? I.e. sudo /etc/network/if-up.d/yourscript
<dobi__> qman: yep
<qman> And creates the rules?
<dobi__> qman: yeah, in iptables --list I get this: ACCEPT  all  --  anywhere  anywhere
<qman> Ok
<qman> Try replacing "iptables" with "/sbin/iptables"
<dobi__> qman: on the script?
<qman> Yes
<qman> It might not have adequate $PATH
<dobi__> qman: nothing, the same
<qman> Hmm
<dobi__> qman: maybe anything else is flushing the rules? maybe the file is not executed?
<qman> Could be something else interfering
<dobi__> qman: is a clean install if that helps
<qman> Try moving the script to if-post-up.d
<qman> So it runs later
<dobi__> qman: that dir doesn't exist
<dobi__> should I create it?
<teward> have any of you considered iptables-persistent?
 * teward runs
<qman> No
<qman> My mistake, going from memory
<teward> iptables-persistent is a package which actually makes the iptables rules persistent
<user_> Hi guys, I need help to get a font. I'm using Ubuntu 14.04 with xorg and i3 as window manager. I'm using Firefox as my default web-browser, which contains the font "sans-serif". This font is however not present elsewhere. I would like to use this font outside Firefox.
<teward> for both iptables and ip6tables
<teward> i have yet to do extensive MASQUERADE testing but in theory it should work SO LONG AS the rule is included in an iptables-save
<dobi__> teward: I prefer not to use that package but if it is needed...
<qman> dobi__: if you haven't already, try adding a shebang line at the top of the script: #!/bin/sh
<qman> Also which version of ubuntu?
<dobi__> qman: 14.04
<teward> dobi__, may I ask why you don't want to use that package?
<dobi__> teward: nothing in particular. Its only that if qman solution works without the need of any package I prefer that solution
<teward> dobi__, for what it's worth the least painful solution is to use iptables-persistent.  Of the three things listed here (https://help.ubuntu.com/community/IptablesHowTo#Configuration_on_startup) it was the only thing that would do what i needed it to do
<teward> on my VPSes and my actual Ubuntu machines (not servers)
<qman> dobi__: if its not the shebang line, it might be an issue with whatever is doing the wireless config
<dobi__> qman: was that. Its working now
<dobi__> qman: thanks!
<qman> dobi__: great
<qman> I much prefer this method to iptables-save because it is dynamic and easily changes
<dobi__> qman: exactly
<qman> You can add variables and do other scripting tricks if needed
<dobi__> qman: thanks again for your help ;)
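The steps qman walked through above can be collected into a single if-up.d script. A minimal sketch, assuming the interface names from the conversation (wlan0 uplink, eth0 crossover segment) and a hypothetical file name; it must be executable and runs as root:

```shell
#!/bin/sh
# /etc/network/if-up.d/nat-forward  (file name is an assumption)
# Make executable with: sudo chmod +x /etc/network/if-up.d/nat-forward

# Use an absolute path in case $PATH is minimal when ifupdown runs this.
IPT=/sbin/iptables

# Clear existing rules so they are not duplicated every time an
# interface comes up.
$IPT -F
$IPT -t nat -F

# NAT traffic leaving via the wireless uplink...
$IPT -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
# ...and forward anything arriving on the wired (crossover) segment.
$IPT -A FORWARD -i eth0 -j ACCEPT

# Ensure forwarding is on (persisted via net.ipv4.ip_forward=1 in
# /etc/sysctl.conf, as discussed above).
echo 1 > /proc/sys/net/ipv4/ip_forward
```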
<jdstrand> hallyn: ok, I'm grabbing the forhallyn image and going to verify for myself that distro qemu can still break it
<jdstrand> hallyn: once I do that, I'll try your build
<hallyn> jdstrand: thx.  pushing 27..29 now, so probably just look for and fetch the latest one;  if it doesn't reproduce then i can stop building :)
<jdstrand> hallyn: so, you want me to test 29?
<hallyn> jdstrand: how about 32
<jdstrand> ok
<hallyn> or actually, i'm pushing up to 37 right now, how about in 30 secs you download 34
<jdstrand> heh
<hallyn> (it's done so you can grab now)
<jdstrand> ok, so literally, the latest one that isn't 36 or higher
<hallyn> :)
<hallyn> there are 69 commits, so 34 should be perfect (we start at 0)
 * hallyn sits on pins and needles awaiting results
<hallyn> (i suppose i should move off of those)
<hallyn> jdstrand: well up to 59 pushed;  i guess i may finish the set before dinner, so you should be able to properly bisect
<jdstrand> hallyn: ack, I'm all set up now, but still working through my first (massive) dist-upgrade on the forhallyn image
<jdstrand> with 2.0
<hallyn> pessimist in me says it's going to work fine
<hallyn> sarnold: jdstrand: binaries.0..binaries.68 all pushed.
<sarnold> hallyn: nice :)
<jdstrand> hallyn: thanks
<jdstrand> hallyn: of course, the first try failed (meaning, it booted fine)
<jdstrand> so I need to script this
<jdstrand> hallyn: I'll try bisecting as much as I can over the weekend
<hallyn> jdstrand: awesome.  i'm guessing even after the first 4 steps rharper  will have a good idea of what it is
<hallyn> thanks for testing!  have a good weekend - i'm off to dinner, ttyl
#ubuntu-server 2014-08-30
<MavKen> where can I find best practices on adding webmaster users to system and giving them access to certain domain root directories?  I don't have experience with setting user permissions (will be my first time adding a user since I normally do everything as root)
<pmatulis> MavKen: commands would be 'adduser' ; 'chown' ; and 'chmod'
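A sketch of how those three commands fit together; the user name `webmaster` and the docroot path are made up for illustration:

```shell
# Create the user (interactive: prompts for a password and details).
sudo adduser webmaster

# Hand the domain's document root to that user (path is hypothetical).
sudo chown -R webmaster:webmaster /var/www/example.com

# Owner gets read/write; group and others get read-only.
# Capital X applies execute (traverse) only to directories.
sudo chmod -R u=rwX,g=rX,o=rX /var/www/example.com
```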
<blackyboy> how can i restart the network in ubuntu 14.04 LTS desktop or server edition? Before, i used the command `sudo service networking restart` but it's not working now. is there an alternative command to restart the network so my interface edits take effect?
<Caitanya> blackyboy: ifup and ifdown
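For example, to bounce a single interface after editing /etc/network/interfaces (the interface name eth0 is an assumption):

```shell
# Take the interface down and bring it back up so the edited
# /etc/network/interfaces stanza is re-read.
sudo ifdown eth0 && sudo ifup eth0
```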
<faryshta> hi. i am trying to configure my localhost to make dynamic virtualhosts. something like i type on my browser
<faryshta> hello.faryshta.local
<faryshta> and it will take me to /home/faryshta/sites/hello/public_html/
<Tzunamii> faryshta: Again, I will give you an overview of what you need to do. _HOW_ you do each step you need to read up on, as I'm post-OP and don't have the inclination nor energy to help you with the details.
<faryshta> Tzunamii, hi. i read it last time. i am stuck at the details
<Tzunamii> 1) Set up a namebased vhost that responds to hello.faryshta.local
<Tzunamii> 2) Set up a local DNS so your local DNS requests (hello.faryshta.local) are recognized. I recommend dnsmasq as the only thing you need to do is to edit /etc/hosts to add what you need.
<faryshta> Tzunamii, I have a vhost faryshta.local the idea is to make it dynamic so that it works with hello1.faryshta.local, hello2.faryshta.local, etc
<Tzunamii> 3) On the PCs/Macs/workstations you need to inform them about your local DNS so they use it first. Set it up as primary DNS.
<Tzunamii> 4) et voila!
<faryshta> Tzunamii, the idea is to make it dynamic. the step 1 lacks that detail.
<Tzunamii> You need a vhost for _each_ website you're trying to access
<faryshta> is there a way to use wildcards? both on the serveralias and the dns?
<Tzunamii> I never heard of a concept such 'dynamic vhosts', but someone else here might have a solution for it.
<faryshta> Tzunamii, or virtualhosts subdomains
<faryshta> Tzunamii, right now i am stuck on the dns part. ping faryshta.local works, ping hello.faryshta.local doesn't
<dbck> Hi, is there a way to autostart unprivileged lxc containers, when the host system was rebooted?
<Tzunamii> faryshta: c&p your /etc/hosts on some pastebin and throw us the link
<faryshta> Tzunamii, /etc/hosts i just added one line 127.0.0.1 faryshta.local
<faryshta> since /etc/hosts doesn't support wildcards
<faryshta> i installed dnsmasq
<faryshta> but haven't been able to configure it so that every request to *.faryshta.local sends it to 127.0.0.1
<Tzunamii> faryshta: This has what you need http://brunodbo.ca/blog/2013/04/26/setting-up-wildcard-apache-virtual-host-wildcard-dns
<faryshta> Tzunamii, thanks
<faryshta> Tzunamii, it looks like exactly what i need but i can't get past step 4. after restarting apache, faryshta.local doesn't work in the browser anymore
<Tzunamii> I'm sorry, but I can't help you any further
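For reference, the wildcard setup from that article boils down to two config fragments plus enabling mod_vhost_alias; the paths mirror faryshta's layout and the file names are otherwise assumptions:

```shell
# /etc/dnsmasq.conf -- resolve *.faryshta.local to this machine:
#
#   address=/faryshta.local/127.0.0.1
#
# Apache vhost (e.g. /etc/apache2/sites-available/faryshta.conf):
#
#   <VirtualHost *:80>
#       ServerAlias faryshta.local *.faryshta.local
#       # %1 expands to the first label of the Host header, so
#       # hello.faryshta.local -> /home/faryshta/sites/hello/public_html
#       VirtualDocumentRoot /home/faryshta/sites/%1/public_html
#   </VirtualHost>

sudo a2enmod vhost_alias
sudo service apache2 restart
sudo service dnsmasq restart
```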
<smokie> hey guys, i restored my ubuntu from a backup.. when i boot it i get this message "error mismatched names grub rescue"
<smokie> can anyone help me out with this?
<smokie> i tried running "rescue mode" from ubuntu server cd and reinstall grub from there but it fails.. i also tried Boot-Repair-Disk as well and that didnt help
<bekks> What exactly does "but it fails" mean?
<Abhijit> does not comply with NSA.
<smokie> bekks, thats what it says, grub reinstall fails with fatal error
<bekks> smokie: Whats the exact, full error message - can you pastebin it please?
<smokie> bekks, yes, let me run rescue mode again so i get the exact error.. give me a minute
<smokie> btw, this is a VM and not a physical install
<smokie> bekks, here is the exact error http://paste.ubuntu.com/8186565/
<smokie> when it asked for the device to use as root filesystem, there was /dev/sda1 /dev/sda2, /dev/sda5.. i picked /dev/sda1
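When the installer's own grub reinstall fails, a common fallback is doing it by hand from a chroot. A sketch, assuming (as smokie picked above) that /dev/sda1 holds the root filesystem:

```shell
# From the rescue shell: mount the root filesystem plus the pseudo
# filesystems GRUB needs, then reinstall to the disk's MBR.
sudo mount /dev/sda1 /mnt
for fs in dev proc sys; do sudo mount --bind /$fs /mnt/$fs; done
sudo chroot /mnt grub-install /dev/sda
sudo chroot /mnt update-grub
```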
<jacob___> hi
<jacob___> I have a ubuntu server on linode
<jacob___> i would like to connect with xterm, but it seems ubuntu doesn't use an xserver?
<jacob___> startx doesnt work and it seems i cant install it
<bekks> jacob___: startx has been deprecated for ages. Use a display manager to start X, like lightdm
<Abhijit> jacob___, so you mean you have physical server right?
<jacob___> yes yes
<jacob___> but its somewhere in texas
<jacob___> i am in Luxembourg))
<jacob___> so its headless in texas
<bekks> jacob___: And "connecting using xterm" - what does that mean? Use ssh -X, and start xterm on your server, so you will see the server's xterm on your client.
<jacob___> oh wait sorry
<jacob___> xclient
<jacob___> but i didnt get that far coz i couldnt start "startx"
<jacob___> ok, i admit its been a while for me, i didnt know it was deprecated
<smokie`> bekks, no thoughts on my issue?
<bekks> smokie`: Which Ubuntu are you using, and which DVD (running that grub-install) are you using?
<smokie`> bekks, server 12.04 with server 12.04 CD
<smokie`> well.. is it possible to transfer the data i have on that system on a new install at least by any chance?
<smokie`> so no one here knows how to troubleshoot a boot issue with ubuntu server?
<smokie> can you guys at least recommend a good forum support for ubuntu server?
<cfhowlett> smokie, ubuntu forums has a server section
<Sierra> I have two SSH pubkeys in the ~/.ssh/authorized_keys file, one on each line. SSH is recognizing the first one and letting me authenticate with it, but the second one it's not seeming to recognize. I've restarted the SSH service, any ideas why it wouldn't be working? Ubuntu server 14.04.1 64-bit
<pmatulis> Sierra: pastebin the output of 'ssh -vvv -i <privatekey> <user>@<host>' for the non-working part
<Sierra> Nevermind, figured it out.. Second client was connecting to the wrong user
<pmatulis> Sierra: ok good
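In cases like this, permissions are the other usual suspect: with StrictModes (the default) sshd silently ignores authorized_keys if the file or ~/.ssh is group- or world-writable, and a key pasted with a line wrap in the middle will never match. A quick checklist:

```shell
# sshd refuses keys when these are too permissive.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

# Each key must be one line; print the field count and the start of
# each line to spot wrapped or truncated keys.
awk '{print NF, substr($0, 1, 30)}' ~/.ssh/authorized_keys

# Server-side detail on rejected keys lands in the auth log.
sudo tail -f /var/log/auth.log
```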
<webgeek> Hi everyone
<webgeek> Can you recommend your favorite book for ubuntu server for web development?
<webgeek> Anybody?
<cfhowlett> webgeek, book?  www.fullcirclemagazine.org   had a series on this not long ago.  and the Official Ubuntu Server book (updated edition!) is worth a look.
<faryshta> hi
<faryshta> i am trying to do what is described here http://brunodbo.ca/blog/2013/04/26/setting-up-wildcard-apache-virtual-host-wildcard-dns
<faryshta> but haven't managed to get the step 4 functional since my virtual host is not working
#ubuntu-server 2014-08-31
<jsonperl> Trying to configure sysstat to collect Power Management statistics, I've installed/configured lm-sensors and it's working. Also configured 'sadc -S POWER'
<jsonperl> still no love for sar -m TEMP "Requested activities not available in file /var/log/sysstat/sa30"
<jsonperl> I feel like I'm missing a step
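For what it's worth, on Debian/Ubuntu the collector flags normally come from SADC_OPTIONS in /etc/sysstat/sysstat, and data files written before POWER collection was enabled will never contain the activity. A hedged sketch:

```shell
# Record power/thermal statistics in addition to the defaults.
sudo sed -i 's/^SADC_OPTIONS=.*/SADC_OPTIONS="-S POWER"/' /etc/sysstat/sysstat
sudo service sysstat restart

# Existing /var/log/sysstat/saNN files lack POWER data; only samples
# collected from now on will satisfy:
sar -m TEMP
```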
<grendal_prime> anybody using kvm on 14.04?
<grendal_prime> ya cent i believe is based on ubuntu..oh wait no its based on redhat right?
<gdi2k_> I'm trying to increase the size of a file system on a virtual machine (KVM). I have increased the image file size on the host, and in the guest, parted shows the new size of the drive correctly. However, using resize2fs gives the error that the file system is already as big as the device - it is not seeing the new device size. resizing from within parted isn't an option as it doesn't support ext4. any ideas?
<PryMar56> gdi2k_, you can delete the partition and redefine it.. I've done it, but its scary
<gdi2k_> PryMar56, yeh just reading about that - sounds hairy! did it go ok?
<PryMar56> yes, it worked
<PryMar56> gdi2k_, before you start, note the first sector
<gdi2k_> PryMar56, yes, it's straightforward in this case (1)
<PryMar56> oh, consider using `fdisk -lu`
<gdi2k_> PryMar56, hmm, so I delete the partition, create new, and it wants to start my partition at sector 3. If I force sector 1 it says out of range. bummer
<PryMar56> scroll back in your console to see what the first sector was
<gdi2k_> definitely sector 1
<gdi2k_> http://paste.ubuntu.com/8193538/
<PryMar56> if you have another image file create the same way, do fdisk -lu
<gdi2k_> I didn't write changes, so it's all still good. will paste output to fdisk -lu
<PryMar56> yeah, use sectors
<gdi2k_> http://paste.ubuntu.com/8193548/
<gdi2k_> ah, 63 there
<gdi2k_> what is that? not sector?
<PryMar56> gdi2k_, you have more detailed control with -u
<PryMar56> sectors
<gdi2k_> it's about the units. If I turn off Dos compatibility mode and switch units with u, I get 63
<gdi2k_> so that's my starting point...
<PryMar56> that is the right way to do it
<gdi2k_> 63 is out of range too
<PryMar56> gdi2k_, the starting point is the most crucial thing
<gdi2k_> it suggests 2048
<PryMar56> its an image file?
<gdi2k_> http://paste.ubuntu.com/8193574/
<gdi2k_> yes, for a virtual machine
<PryMar56> gdi2k_, something strange about that version of fdisk.. you can exit without changes and try gparted?
<gdi2k_> found this: http://confluence.wartungsfenster.de/display/Adminspace/fdisk+Force+sector+63+boundary
<gdi2k_> it has no GUI, so no gparted but I have tried with parted - it won't allow the resize due to lack of ext4 support
<gdi2k_> (it wants to resize the file system as well as the partition)
<PryMar56> gdi2k_, did you say if resize is bigger or smaller?
<PryMar56> must be bigger
<gdi2k_> bigger
<gdi2k_> expand
<gdi2k_> PryMar56, it was 50G, need it to be 200G
<PryMar56> gdi2k_, things were simpler in the old days
<PryMar56> forcing a starting sector of 2048 is harsh
<gdi2k_> PryMar56, that URL gives a command to force access to sector 63, but the command does not work: "fdisk -c=dos -u=cylinders /dev/vdb"
<PryMar56> I would setup LVM and rsync the image file over
<PryMar56> get it working before you retire the image file
<gdi2k_> I may end up just creating a fresh larger image file and copying all the files and changing all the mount points... will play with this first though - there must be a way!
<gdi2k_> PryMar56, wow, parted just goes ahead and writes changes. now I'm in a muddle...
<gdi2k_> PryMar56, ok with parted, I was able to specify starting point as 63s (sector 63) - it complained about the drive not being optimally aligned, which I ignored. I ran fsck which corrected some issue, then I ran resize2fs on it and it went smoothly. Mounted it, and all data is there :) thanks for your help!
<PryMar56> gdi2k_, good to hear
<PryMar56> what is the ubuntu version?
<gdi2k_> PryMar56, it's actually on a PBX system running on CentOS (which I find hard), but that's what it comes with. would much prefer it to be running Ubuntu like the rest of our servers
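For anyone hitting this later, the sequence that eventually worked can be sketched like this (device names and the sector-63 start come from gdi2k_'s pastes; recreating a partition is destructive if the start sector changes, so back up first):

```shell
# 1. Grow the backing image on the host, e.g.:
#      qemu-img resize guest.img +150G
#
# 2. In the guest, recreate the partition with the SAME start sector
#    but spanning the whole (now larger) device:
#      parted /dev/vdb unit s print   # note the old start sector
#      parted /dev/vdb rm 1
#      parted /dev/vdb mkpart primary 63s 100%
#
# 3. Check, then grow the filesystem to fill the new partition:
fsck -f /dev/vdb1
resize2fs /dev/vdb1
```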
<blackyboy> Openldap problem: from the client machine, when i switch user with `su - babin` it says the user does not exist. anyone there to help me?
<zoski> hi, i'm in trouble with my server. I was running ubuntu 12.04.4 LTS and decided to upgrade to the latest LTS. I launched it via SSH and byobu. During the install I needed to switch off my computer, so I detached the terminal and left the release upgrade running. Later I logged back in to my server and byobu didn't load; I tried to reattach the terminal via screen. The upgrade has failed and my system was in read only.
<zoski> So I need help to continue the upgrade and save my server's life. Thank you.
<lordievader> Good afternoon.
<zoski> hi
<lordievader> Hey zoski, how are you?
<zoski> yes I'm lordievader
<zoski> I feel lost lordievader
<lordievader> zoski: Do you know why your server went into read-only mode?
<zoski> I think it's because the upgrade failed
<lordievader> zoski: http://askubuntu.com/questions/105793/why-has-my-file-system-turned-read-only-after-updates
<zoski> I'm checking this lordievader thanks for the link
<zoski> I used "dmesg | less" lordievader, I get some messages I don't understand... I did a pastebin: http://pastebin.com/84FysAmT
<lordievader> zoski: IO errors, thats why it went into read-only mode. Do you have a backup?
<zoski> lordievader: yes only of the important files
<lordievader> zoski: Ok, do you have SMART enabled on that disk?
<zoski> lordievader: yes, I enabled some smart alert last year
<zoski> lordievader: the server was supposed to email me if something went wrong
<lordievader> zoski: Read [1], run a long test and see if your disk needs to be replaced. [1] http://www.techrepublic.com/blog/linux-and-open-source/using-smartctl-to-get-smart-status-information-on-your-hard-drives/
<zoski> okay thank you lordievader
<lordievader> zoski: No problem ;)
<zoski> lordievader: I can't launch the long test, smartctl didn't work, I get an I/O error
<lordievader> zoski: Turn off the server, grab the drive. Pop it into another machine and run it from there.
<zoski> lordievader: Okay, I'm not near the server right now. I'll see if someone can let me into the room where the server is
<YamakasY> what's best actually to install software ? /usr/share or /usr/local/share ?
<zoski> lordievader: I'm going to check the server in about half an hour. Someone there may help me
<lordievader> YamakasY: Software that I manually install allways goes to /opt. But it is best to use apt for installing software.
<YamakasY> lordievader: yeah I know, but I need to install some own packages and I see that /opt is kinda 1995 :P
<lordievader> YamakasY: Works for me. Keeps it away from the rest of the system, and gives a nice overview.
<YamakasY> lordievader: true!
<YamakasY> lordievader: I love you
<YamakasY> damn I keep doing that :P
<zoski> lordievader: thank you a lot for your time, I'm leaving
<lordievader> YamakasY: Heh, that's the first time someone said that to me on IRC :P
<YamakasY> lordievader: I adore you!
<lordievader> zoski: No problem, good luck ;)
<YamakasY> someone said that ?
<zoski> lordievader: thank you
<lordievader> YamakasY: Who doesn't, I'm great :D
<lordievader> Just kidding ;)
<YamakasY> lordievader: hehe, indeed, you need to love yourself every day!
<YamakasY> lordievader: I do twice ;)
<lordievader> ;)
<YamakasY> man I hate cybersquatters
<xr600> Does iptables check for connecting host ? - Anyone knows ?
<xr600> Having a weird problem with one of me Ubuntu servers. It is seemingly refusing connections from certain hosts...
<Sierra> xr600: Did you check your firewall on the server, and any firewalls that may be on the network?
<Lcawte> xr600: You can do iptable rules that only allow connections from certain places, yes.
<zoski> lordievader: story update. I'm back home with my server. His hard drive is dead
<zoski> lordievader: The full story is: I got there to clean it and reboot it early today. Then I saw I could update to 14.04, so I did. During the install the hard drive didn't manage the load and decided to leave us.
<zoski> lordievader: right now the server didn't even manage to boot. I will backup what I can with a bootable usb key. Thank you for your help
<RoyK> zoski: check smart status for the drive. smartctl -H (or even, smartctl -x) /dev/sd(something)
<zoski> RoyK: I couldn't manage to launch smartctl earlier when I was connected to the server. I know my hard drive is dead. I was just updating lordievader on my situation 'cause he helped me. Thank you
<RoyK> zoski: I was thinking if you were to boot that thing up on a usb key, just run smartctl to verify the harddrive's dead, and that it's not just the filesystem that went nuts
<RoyK> zoski: also, keep in mind to install smartd (part of smartmontools iirc) after reinstalling - it may warn you if the drive is dying
<RoyK> zoski: it *may* warn you, not all disk failures are detectable before they happen
<zoski> Thank you RoyK for the advice, but I had smartd installed and it didn't warn me at all. I think cold hard drive + upgrade was too hard. I will try smartctl while backing up what I can with the usb key, it's a good idea
<zoski> But it's not for today, I need to pack my stuff, I'm going back to my school city tomorrow morning.
<zoski> Sorry for my english, by the way ;-)
<RoyK> zoski: no worries <(
<RoyK> :)
<zoski> ^_^
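The smartctl invocations mentioned in this thread, collected for reference (the device name is a placeholder; smartmontools must be installed):

```shell
# Quick pass/fail verdict.
sudo smartctl -H /dev/sda

# Extended report: attributes plus the error and self-test logs.
sudo smartctl -x /dev/sda

# Kick off a long self-test, then check its result later.
sudo smartctl -t long /dev/sda
sudo smartctl -l selftest /dev/sda
```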
<xr600> Sierra: There is no firewall, but when I'm on the same subnet as the server, I can reach it... When I go on the outside of the gateway, I can't...
<xr600> (... ANd sorry for the delay, had to shop & cook here...)
<xr600> Lcawte: Any special switch I need to list it ?
<TheFearow> Hi all - I'm trying to troubleshoot an annoying issue I've got with a relatively new server I have running 14.04 in AWS EC2. It's working fine except after some period from boot (30 mins - 2 hours) I can't get any incoming traffic to it - no SSH or web. Oddly, occasionally it's allowed some traffic (I could SSH from a neighbouring host) but not public, but then eventually all has failed. I presume some sort of firewall but I don't know Ubuntu enough to troubleshoot further. I'm sort of lost as to where to troubleshoot so any pointers are much appreciated
<pmatulis> TheFearow: possible DoS or intrusion.  is the server reasonably secure (who can log in and how)?
#ubuntu-server 2015-08-24
<moneylotion> howdy, hey, anyone have an idea, how i can rsync "from" a read only file system (dvd)
<andol> moneylotion: The same way you would rsync from any other file system/path?
<moneylotion> "operation not permitted"
<moneylotion> a bunch of files have weird users, which cannot be changed
<andol> Any chance that the problem is rather the destination, you are rsync:ing to?
<moneylotion> hmmm.... i was copying to a network volume... seems to work when i go to my desktop
<lordievader> Good morning.
<andol> moneylotion: Well, if it was to a network volume things make a lot more sense, given that the mounting user might not have sufficient access to change ownerships, or that the network filesystem handles users in a different manner.
<moneylotion> i guess rsync can ignore permission :)
<andol> moneylotion: Yepp, just don't add flags telling rsync to care about permissions, ownerships, etc
<moneylotion> any ways I can significantly compress backups?
<bearface> depends if the data is significantly compressible
<purefan> hello. I found a server who just rebooted and Im trying to find the cause, any advice?
<lordievader> purefan: Look in the logs?
<purefan> lordievader: I have looked at syslog, auth.log, lastlog, boot.log, apache error.log, apache access.log and wtmp, nothing seems unusual
<purefan> lordievader: apache reports getting a sigterm but I cant find what triggered it
<lordievader> Does the kernel log show anything?
<lordievader> Or does it just stop -> bios.
<purefan> its actually missing the entries from around the server reboot
<purefan> server reboot happened around 2am, and the earliest entry is from 12:40
<purefan> oh hold on
<lordievader> purefan: That might indicate a failing disk. Bit difficult to write to a disk if it ain't there anymore.
<lordievader> Check smart, I'd say.
<purefan> lordievader: the earliest record in the previous kernel log is just for terminating ttys due to Sigterm
<lordievader> Hmm, that sounds a bit like an actual reboot.
<purefan> and thats weird because afaik no one was even awake at the time
<purefan> lordievader: Do you think a faulty HD is a risk in a cloud environment? this server runs on AWS
<arcsky_> Im going to install ubuntu on a server with RAID5. should i use the ubuntu installation on the entire drive or just a part of it?
<Walex> arcsky_: depends on how you do it and what you want to achieve. Is that hw or sw RAID5? Do you want the system filetrees to be RAID'ed underneath? etc.
<Walex> arcsky_: a typical installation profile is to have on each disk a '/' partition, a swap partition and an "other" partition, and if using sw RAID the "other" partitions are those that are RAID5-ed
<Walex> arcsky_: traditionally the '/' partitions are RAID1-ed to make it easy to boot from them, and 'swap' does not need to be RAID'ed
<Walex> usually at least, some people RAID1 the 'swap' too
<lordievader> purefan: No, for a vm it is unlikely.
<purefan> lordievader: Thanks
<lordievader> I guess Amazon has pretty good monitoring on that.
<purefan> lordievader: I think so too, I would expect at least an alert
<lordievader> Walex: Raiding swap is usually a good idea. Prevents inconsistent swap when a disk fails.
<lordievader> purefan: You won't receive it, their sys-admin team would get that alert.
<purefan> lordievader: Ok, but they would let us know if the service was interrupted because of that, right?
<lordievader> I don't think their service will be interrupted by a disk failure.
<purefan> lordievader: Yeah I dont think it would either... I need to report this and Im looking at all possible angles, its not a good thing that a server reboots itself without an explanation
<Siilwyn> Hi people, if I want to restrict a group of users to only have access to one directory through sftp (Match Group www-data), does it match all users in that group or only if it's their primary group?
<arcsky_> Walex: its hw raid
<Walex> arcsky_: then hw RAID is transparent to the OS and you need not worry.
<linocisco> hi all
<linocisco> no talloc stackframe at ../source3/param/loadparm.c:4864, leaking memory
<mnms_> Guys it is reasonable to give 10GB for swap partition ?
<mnms_> I have 5GB of memory
<mnms_> it will be a mail server
<mnms_> Somewhere I read it should be twice the RAM, but isn't that too much?
<mybalzitch> its not much space
<mnms_> mybalzitch: you mean 10GB for swap is not much space, yes?
<mybalzitch> yes
<mnms_> mybalzitch: I assigned 8GB finally, can I have problems ?
<mybalzitch> if you do, add more ram
<ogra_> well, mailservers are pretty I/O bound ... if you start swapping you should ASAP get more ram
<ogra_> on a busy mail server, swap is rather a safety net than anything else
<mnms_> I have 2xSSD disks and I want to put them in software RAID1
<mybalzitch> mdadm
<mnms_> what initial values you would use for this configuration ?
<mnms_> Yes, I dont want to use swap, I think 5GB of memory will be enough for my server and I dont want to hibernate my machine
<mnms_> mybalzitch: You are not saying to not use swap at all ?
<mnms_> I mean to not create swap partition
<mybalzitch> no
<mybalzitch> 5-10 gb of swap is plenty, especially if it lives on an SSD
<lordievader> Create some swap, as has been said before, as a safety net.
<ogra_> create it, but turn down the value for swappiness
<mybalzitch> how much mail are you handling anyway
<ogra_> so it doesnt get used by default, only for overflowing if required
<mnms_> I assigned 8GB
<mnms_> lordievader: 1000 mails per day
 * ogra_ doubts you will even use any ram with that, unless all 1000 hit the machine at the same time or some such :)
<mnms_> swappiness to 0 ?
<ogra_> not 0 ... take 1 or 5 ... just less than the default
<mnms_> or 10 like it is on help.ubuntu
<mnms_> ogra_: ok
<mnms_> so 2GB would be ok for SWAP cause I dont expect to use it all, correct ?
<mnms_> this would be enough for my "safety net"
<ogra_> right, and worst case you can immediately create a swap file and add it to the swap space in case you really hit an emergency
<mnms_> ogra_: btw 2GB of swap is not too small, yes?
<ogra_> 2G should really be fine
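The tuning ogra_ describes, as a sketch (the swappiness value and 2G size come from the conversation; the drop-in file name is an assumption):

```shell
# Lower swappiness so swap is only touched under real memory pressure.
echo 'vm.swappiness=1' | sudo tee /etc/sysctl.d/60-swappiness.conf
sudo sysctl -p /etc/sysctl.d/60-swappiness.conf

# Worst case, an emergency swap file can be added on the fly:
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```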
<mnms_> guys could you tell me how I can remove raid device which I created before, is says it is in use
<mnms_> when I try to stop I get "Cannot get exclusive access"
<mnms_> by mdadm --stop /dev/md0
<mnms_> Ok..solved it.
<Siilwyn> Just a quick question, can I chroot a sftp user to a non-home directory?
<Siilwyn> I got it working with a home directory but for some reason it doesn't work if I change it to /srv/userWebsite
<Siilwyn> (and yes, root does own srv and 'userWebsite')
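For what it's worth, a minimal sshd_config sketch for that setup; with ChrootDirectory every path component up to and including the chroot must be root-owned and not group/world-writable, which is why a root-owned jail with a writable subdirectory inside is the usual layout (paths and group follow Siilwyn's description; Match Group matches primary and supplementary groups alike):

```shell
# In /etc/ssh/sshd_config (then: sudo service ssh restart).
# Replace the default 'Subsystem sftp' line with internal-sftp:
#
#   Subsystem sftp internal-sftp
#   Match Group www-data
#       ChrootDirectory /srv/userWebsite
#       ForceCommand internal-sftp
#       AllowTcpForwarding no

# The jail itself must be root-owned, mode 755:
sudo chown root:root /srv /srv/userWebsite
sudo chmod 755 /srv /srv/userWebsite

# Give the user a writable subdirectory inside the jail
# (someuser is a placeholder).
sudo mkdir -p /srv/userWebsite/upload
sudo chown someuser:www-data /srv/userWebsite/upload
```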
<mnms_> how do you backup your servers, with dd ?:)
<lordievader> Dirvish here.
<mnms_> lordievader: thanks
<jiku> hey guys..
<jiku> having a weird issue here..
<jiku> i just did an installation of php5-fpm with apache.
<jiku> it was all fine until i needed to restart the php5-fpm service with /etc/init.d/php5-fpm or reload php5-fpm; i get the error..
<jiku> http://pastebin.com/D7QqMAV6
<jiku> many such lines like these.
<jiku> i have to kill the process by PID twice or thrice..
<jiku> and then start php5-fpm
<jiku> is it a bug or something which is per the design ?
<genii> I'm pretty sure it uses Upstart and not sysvinit
<jiku> yep it does.. but reload php5-fpm is giving the same error..
<jiku> i believe start, reload, stop --> all are upstart related.
<patdk-wk> service php5-fpm restart :)
<F1skr> Is it possible to get a ubuntu server root file system like what is available here for the cloudimages: https://cloud-images.ubuntu.com/vivid/current/ I'm looking for something without the cloud-init stuff because it doesn't work if I spawn it in a systemd-nspawn container
<catphish> F1skr: personally i just do a quick install from dvd in kvm, then extract the filesystem with guestfish
<F1skr> that seems very complicated
<F1skr> then I could just get the cloud image and remove the stuff I don't need
<catphish> normally reasons for wanting a raw image *are* complicated :)
<F1skr> I don't know, I simply want to spawn a systemd-nspawn container with an ubuntu server instance
<ogra_> F1skr, http://cdimage.ubuntu.com/ubuntu-core/trusty/daily/current/ (its a bit more cut down than an actual ubuntu-server though)
<catphish> i've never seen those before, awesome
<ogra_> dont mix them up with the daily-preinstalled images though ... that is ubuntu snappy core ... nothing you can use OOTB
<ogra_> (and i guess the better place would actuallly be http://cdimage.ubuntu.com/ubuntu-core/releases/)
<F1skr> what does OOTB stand for? :)
<ogra_> out of the box
<F1skr> that seems perfect for what I'm doing! do you happen to know the default root password in those images?
<ogra_> there is none
<ogra_> it isnt set up at all
<ogra_> it is pretty much mostly for using chroots
<ogra_> the definition for the non..snappy ubuntu core images is: just enough OS to run apt
<ogra_> (in a chroot that is)
<F1skr> or a container as I'm doing now :)
<ogra_> :)
<ogra_> note, you might need to copy resolv.conf in place
<F1skr> systemd handles that for me :)
<F1skr> I also needed something where the network service was disabled (in order to let systemd handle it) this is perfect
<F1skr> thanks ogra_ !
<ogra_> good luck
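A sketch of the workflow that came out of this thread (the tarball name is illustrative; check cdimage.ubuntu.com for the current file):

```shell
# Fetch and unpack the ubuntu-core rootfs.
mkdir trusty-core
sudo tar -xzf ubuntu-core-14.04-core-amd64.tar.gz -C trusty-core

# No root password is set up at all, so set one before booting.
sudo chroot trusty-core passwd root

# Boot it as a container; systemd-nspawn takes care of resolv.conf.
sudo systemd-nspawn -bD trusty-core
```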
<oste> i am seeing something strange using nginx, when i change the server root symlink, i get "No input file specified" for a few minutes then the new changes are properly displayed
<oste> i am using php fpm with fastcgi - and tried setting SCRIPT_FILENAME to a few different things with no luck
<teward> oste: there's a document for that somewhere...
<teward> oste: what version of nginx?
<teward> and what Ubuntu
<oste> nginx/1.4.6
<oste> Ubuntu 14.04
<oste> wonder if i should upgrade nginx
<oste> update*
<oste> PHP 5.5.9-1ubuntu4.11
<jwitko> Hey guys I have a server where my iscsi connections to my SAN stopped working.  I have identical setups in the same network where the iscsi connection is still just fine to the SAN.  Multipath -ll gives me the error "Error: : Inappropriate ioctl for device"   can anyone help ?
<teward> rbasak: i plan to bash my head against nginx today/tomorrow, but FeatureFreeze is in, and nginx merging will have new features, do you just want me to wait to X series?
<rbasak> teward: I can't approve a feature freeze exception if that's what you're asking?
<teward> rbasak: no, i'm asking if it's worth it - there's 3 new features in nginx 1.9.4
<teward> i know you can't approve ;P
<rbasak> Hmm
<teward> substantial bugfixes though
<teward> http://nginx.org/en/CHANGES
<teward> it'll be available in Debian tomorrow (uploaded only today, not sure if it's built there yet :P)
<rbasak> I'm not sure if I had considered Wily feature freeze when we talked about plans for 1.9
<teward> i don't think we had, but given FeatureFreeze has passed I'm OK leaving it alone
<teward> and saying "Wily people: use the ppa not the repos, we aren't updating because freeze"
<rbasak> Yeah I think you'll have to leave it alone, unless you want to do the work backporting bugfixes from upstream hg
<teward> (the ppa gets a resync with debian tomorrow xD)
<teward> heheh, yeah, no thanks
<rbasak> OK.
<rbasak> I'm happy for you to leave Wily nginx alone then and focus on the PPA.
<teward> wheee rmadison lag
<teward> that's the general idea now
<teward> since college started back up and all :)
<teward> rbasak: PPAs get updated either way, so long as Debian's version builds, I just 'borrow' the package from there :P
<rbasak> Sounds good
<catphish> is there something i can append to the installer kernel / initrd to give me a root shell?
<catphish> init=/bin/bash doesn't work because there doesn't seem to be a /bin/bash in the initrd
<rmc3> Haven't been able to find an answer in the documentation about this: I see that the Ubuntu installer supports using Anaconda-style kickstarts. Is the kssendmac kernel parameter supported for sending the MAC of the interface that's in use to the server providing the kickstart as a parameter?
<sarnold> catphish: /bin/sh ought to be a symlink to /bin/dash
<catphish> sarnold: thanks, i actually just this second tried that and it worked :)
<sarnold> \o/
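As catphish found, the installer initrd ships dash but no bash, so the boot-time shell override looks roughly like this (menu entry and paths are illustrative):

```shell
# installer boot entry -- note init=/bin/sh, since the initrd has no /bin/bash
linux /install/vmlinuz init=/bin/sh
initrd /install/initrd.gz
```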
<catphish> is there a way to run the disk detection procedure from there?
<sarnold> "disk detection procedure"?
<sarnold> do you mean udevadm trigger or whatever that is?
<catphish> the installer does something that loads modules for detected hardware
<catphish> i just want to load the modules for my raid array so i can mount something
<catphish> i can't run the normal recovery, because its network detection routine breaks my IPMI connection :)
<catphish> i suppose a simpler question is: where are the .ko files in the installer?
<jwitko> Hey guys I have a server where my iscsi connections to my SAN stopped working.  I have identical setups in the same network where the iscsi connection is still just fine to the SAN.  Multipath -ll gives me the error "Error: : Inappropriate ioctl for device"   can anyone help me trouble-shoot iscsi issues?
<jwitko> doing a scan results in the error "no session found"
<md50> Does anyone know if there is an ubuntu server installer that includes the sfc (Solarflare) nic drivers?  I tried 15.10 and 14.04.3, but no luck.
<md50> only mention of sfc I saw was this bug, but I'm not sure if this was iso or pxe specific.  I tried the isos https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1481490
<ubottu> Launchpad bug 1481490 in linux (Ubuntu Vivid) "Add sfc to nic-modules udeb" [Undecided,Fix committed]
<sarnold> md50: how are you determining if the driver is supported?
<sarnold> md50: 'apt-file search sfc.ko' on my 14.04 LTS system shows kernel modules for it for both the 3.13 and the 3.16 based kernel lines
<md50> The daily iso builds of the installer don't recognize the cards, and sfc.ko doesn't appear to be on the media.  However, after installing without a network, the kernel that boots contains the sfc module.  So it's a crappy installer, but things eventually work
<md50> I won't be able to mass install anything automatically without the driver (e.g. pxe install) but I can hack around it for now
<TJ-> if you're doing PXE based installs you can slipstream the driver into the initrd that is served to the host via TFTP
<md50> yeah, I guess I'll have to add it as part of my PXE boot, but I was hoping it would just work.
<TJ-> md50: I usually have 2 PXE entries per ISO... 1 standard and one unpacked on the server file-system so it can be customised easily
<eatingthenight> im on ubuntu server 14.04. When i type hostname i get the hostname as expected. When i type hostname --help it shows all the flags. One of them being -i to get the IP address. when i type hostname -i it says name or service unknown.
<eatingthenight> anyone ever ran into this? it's driving me crazy. It seems to work on ubuntu 14.04 02 but not on the original 14.04 release/
<eatingthenight> I can't find any bugs about it as well.
<eatingthenight> But i have tested it on multiple servers all with the same results.
<capricorn_one> your RR in DNS is missing
<eatingthenight> sorry what is RR? capricorn_one
<capricorn_one> Reverse Record
<capricorn_one> man page says
<capricorn_one>        -i, --ip-address
<capricorn_one>               Display the network address(es) of the host name. Note that this works only if the host name can be resolved. Avoid  using
<capricorn_one>               this option; use hostname --all-ip-addresses instead.
<eatingthenight> o i see. I was using -i because -I adds an extra space at the end of the output
<eatingthenight> i guess i'll just strip it out since that is more reliable
<eatingthenight> capricorn_one: Thank you very much
<capricorn_one> when in doubt use man pages
<eatingthenight> capricorn_one: Yeah i need to get better about that one.
<sarnold> eatingthenight: why are you looking for the local IP? perhaps there's a better way to do what you're doing
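eatingthenight's workaround (take `hostname -I` and strip its trailing space) can be done with plain parameter expansion; the address below is a stand-in value, not real output:

```shell
# first address from `hostname -I`, without the trailing whitespace
out="192.168.1.50 "        # stand-in for "$(hostname -I)"
ip="${out%% *}"            # keep everything before the first space
echo "$ip"
```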
#ubuntu-server 2015-08-25
<ElevyNJ> I was referred here from #ubuntu  i am running Ubuntu Desktop but installed a PPP server on it. My Windows 10 systems can connect but are not getting default routes so they can't get to the internet. I need help resolving this
<ponyofdeath> hi, trying to figure out why after a dist-upgrade to 14.04 for the latest updates it will now not boot my multipath server
<lordievader> Good morning.
<jelly> ponyofdeath: how does it fail?  Do you have / on a multipath device?
<jelly> and did you try the usual initramfs debugging stuff
<lordievader> Does it boot a kernel/initramfs? Or does it fail before that?
<deepquestions> hi@ll, is there a solution to use the yubikey HMAC-SHA1 in challenge-response mode under ubuntu?? :)
<OliverUK> Hiya, I have been looking around for a solution but clearly I don't know how to use Google.  What should I be looking at to run Windows VMs on a Ubuntu Server host?  Thanks in advance
<lordievader> OliverUK: KVM.
<OliverUK> lordievader: Is this the only option?  I was looking at KVM and it seemed a pretty steep learning curve
<OliverUK> lordievader: Not a problem though if this is the way to go, I will sit and read :-)
<lordievader> I suppose there are alternatives but I know of KVM that it works very well. I run Lightroom that way.
<RoyK> OliverUK: apt-get install kvm virt-manager # virt-manager doesn't have a steep learning curve ;)
<RoyK> it's just a nice GUI to do the KVM fiddly bits for you
<OliverUK> RoyK: Trying to not have a GUI at all :-)
<OliverUK> Might have to look at it in test though :-)
<RoyK> OliverUK: not saying you should install X on the server - just use X on the client to avoid doing everything manually
<lordievader> OliverUK: Virt-manager can connect over ssh. Just install virt-manager on your workstation.
<OliverUK> lordievader: Nice, thank you :-)
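lordievader's "connect over ssh" is a libvirt connection URI; user and hostname below are placeholders:

```shell
# manage a remote KVM host from a workstation -- no X or GUI on the server
virt-manager --connect qemu+ssh://admin@server.example.com/system
# the same URI works for the CLI tools:
virsh --connect qemu+ssh://admin@server.example.com/system list --all
```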
<RoyK> lordievader: probably better if he's using linux on the workstation ;)
<RoyK> lordievader: I'm using OS X on my laptop, so less hassle to just run it from the server
<lordievader> I thought I had seen virt-manager run on Windows somewhere, never looked into it though.
 * lordievader dislikes X forwarding.
<RoyK> lordievader: x forwarding just works ;)
<lordievader> Meh, on crappy connections it is rather annoying. But then again, anything is.
<RoyK> yeah
<trijntje> I want to share a folder on my home network to other ubuntu pc's, what is the best way to do this with authentication, so you need a password to access the share?
<lordievader> I'd say Samba.
<trijntje> is it possible to share ext over samba? I want to preserve things like hardlinks, softlinks, ownership, permissions etc
<lordievader> Yes.
<trijntje> lordievader: cool, I'll use samba then. Thanks
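A minimal password-protected Samba share would look something like this (share name, path and user are illustrative); for Linux clients, the SMB unix extensions are what carry symlinks, ownership and permission bits across:

```ini
# /etc/samba/smb.conf -- minimal authenticated share
[shared]
   path = /srv/share
   valid users = alice
   read only = no
```

The unix account also needs a separate Samba password: `sudo smbpasswd -a alice`.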
<rbasak> frediz: any news on kimchi please? Do you have an ETA?
<stemid> hi I upgraded from 12.04 to 14.04 and now root keeps getting these security errors saying "SECURITY information for host" "problem with defaults entries ; TTY=pts/10 ; PWD=/home/user" I've done visudo -cf on sudoers and every file in /etc/sudoers.d without any issues.
<stemid> I've found that this could be because I am connected to an active directory, per this bug: https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1249777
<ubottu> Launchpad bug 1249777 in sssd (Ubuntu) "libsss-sudo generated nsswitch.conf leads to error messages upon sudo invocation" [Low,Confirmed]
<stemid> I use sssd and it affects both local and remote users.
<stemid> seems to indicate that sudo 1.11.7 will fix it.
<stemid> will this upgrade ever come to ubuntu 14.04?
<deepquestions> hi@ll, is there a solution to use the yubikey HMAC-SHA1 in challenge-response mode under ubuntu?? :)
<deepquestions> help
<deepquestions> sry, wrong channel .... i retire my question. bb.
<trijntje> what is considered a safe operating temperature for always-on harddisks? I'm testing my new WD disks now by writing to them from /dev/zero, and their at  33 C, is that OK?
<trijntje> *their temperature is at 33 C
<RoyK> trijntje: shouldn't be a problem. drives normally like it best < 50 degrees, 33 is nice. what drives don't like, is temperature varying a lot
<trijntje> RoyK: what would varying a lot be? My house has crappy insulation, so it goes between 20-30C in the summer and maybe 14-20 in the winter, on a 24 hour cycle
<RoyK> trijntje: I'd guess the drives get worn out a bit earlier, but hell, you're using RAID, right? ;)
<trijntje> RoyK: yeah, zfs mirror of the 2 disks, and backups of course
<RoyK> trijntje: also, drives normally won't keep room temperature - they naturally heat up - I don't think your situation varies very much from mine in terms of temperature diff, and I've been running these drives for 2-3 years without much issues (although one or two are having issues now)
<RoyK> 7-drive raidz2
<trijntje> RoyK: thats good to know, thanks. I guess i'll write some automated scripts to keep an eye on the temperature, to get a feeling for how much it actually fluctuates
<RoyK> trijntje: smartmontools / smartd is always a good idea ;)
<RoyK> trijntje: also, the standard disk plugin for munin draws nice graphs of disk temp
<RoyK> trijntje: see pm
<trijntje> RoyK: I hadn't heard about munin, but thats pretty cool. Thanks for the pointer, I'll try that out for sure
<RoyK> trijntje: this is an old munin installation, mind, nicer graphs on 2.1.x
<RoyK> erm - on 2.0, I mean
<RoyK> this is old 1.4
<trijntje> thanks again for the pointer to munin, I'll check it out for sure
<RoyK> it's in the repos and is easy to configure - an apt-get away ;)
<AEL-H> If I want to mount a disk using mount such that only root can access the mountpoint, how can I do this?
<RoyK> AEL-H: chown root:root /that/mountpoint ; chmod 0700 /that/mountpoint
<AEL-H> Well what I have done at the moment is mounted it at /mymountpoint/, and when I do chmod 700 /mymountpoint/ as root user, nothing is happening
<RoyK> AEL-H: pastebin output of ls -ld /mymountpoint
<AEL-H> It is just a one liner --
<AEL-H> drwxrwxrwx 1 root root 24576 Aug  1 19:14 /mymountpoint/
<RoyK> what sort of fs?
<AEL-H> standard windows -- should be ntfs
<RoyK> then, as root, mkdir /safe ; chmod 0700 /safe, mkdir /safe/mp, mount whatever /safe/mp
<RoyK> (replace some commas with ;)
<AEL-H> so the problem is I can't chmod the folder, but I can mount it at a folder that has already been chmod'd?
<RoyK> mhm - ntfs doesn't use the linux filesystem perms
<AEL-H> That makes sense, thanks
<RoyK> same applies to FAT
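Since NTFS and FAT carry no Linux permission bits, the lockdown AEL-H wanted is done with mount options rather than chmod (device name is illustrative):

```shell
# permissions are synthesized at mount time on ntfs-3g/vfat filesystems
sudo mount -t ntfs-3g -o uid=0,gid=0,umask=077 /dev/sdb1 /mymountpoint
```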
<AEL-H> I can't unmount for some reason? It is saying the device is busy but I have no idea why that would be
<RoyK> probably an open file somewhere - check with lsof
<arosales>  I think smoser may be out this week, any volunteers for chairing this week's ubuntu server irc meeting
<smoser> i'm here. and can chair
<smoser> is that in 8 minutes ?
<smoser> or 1:08
<arosales> oh smoser!
<arosales> smoser, in 1:08
<arosales> thanks smoser
<jak2000> my network card has ip 192.168.0.200/24, how do I add another interface (alias) to also listen on the network 192.168.1.200/24? thanks
<RoyK> ip addr add dev eth0 192.168.0.201/25
<RoyK> or something
<RoyK> ip addr add dev eth0 addr 192.168.0.201/25
<RoyK> perhaps
<jak2000> mmmm but i want work on network 192.168.1.200
<RoyK> then change the address and mask
<RoyK> ip addr add dev eth0 addr 192.168.1.200/24
<RoyK> why would you want two different networks on a single nic?
<jak2000> Error: an inet prefix is expected rather than "Addr"
<jak2000> i am changing from my house to work and other places...
<jak2000> then i want configure permanently both networks
<jak2000> i know with alias eth0:1 but not remember the syntax
<RoyK> that's old stuff
<jak2000> yes
<sonne> greetings!
<sonne> is xen-api available on 15.04 too? there seem to be no related packages on the repository, but iirc there was a working version on LTS...
<RoyK> jak2000: ip address add 192.168.99.37/24 dev eth0
<jak2000> ifconfig only show 192.168.0.200
<jak2000> why? need restart the network services?
<RoyK> ip addr list
<RoyK> ifconfig is outdated
<jak2000> done....
<RoyK> and if you want that permanent, add 'up ip addr add ...' in /etc/networks/int
<RoyK> and if you want that permanent, add 'up ip addr add ...' in /etc/network/interfaces
<jak2000> yes i want permanently
<jak2000> mmm wich line add?
<RoyK> "up" means "whenever the network is upped"
<RoyK> then the ip addr add thing
<RoyK> just below the nic config
<jak2000> ok
<RoyK> https://www.garron.me/en/linux/add-secondary-ip-linux.html
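RoyK's "up ip addr add" approach in /etc/network/interfaces would look roughly like this, using jak2000's addresses (the gateway is illustrative):

```shell
# /etc/network/interfaces -- second address on eth0 without the old eth0:1 alias
auto eth0
iface eth0 inet static
    address 192.168.0.200
    netmask 255.255.255.0
    gateway 192.168.0.1
    up ip addr add 192.168.1.200/24 dev eth0
```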
<jak2000> reading
<jak2000> RoyK done...
<jak2000> thanks
<RoyK> jak2000: did it work after a reboot?
<jak2000> yes
<jak2000> rebooted
<jak2000> copied all the lines of eth0 and pasted, and changed eth0 to eth0:1
<RoyK> no need for eth0:1
<RoyK> that's deprecated
<patdk-wk> unless you're still using ifconfig
<RoyK> just add "up ip address add ..." at the bottom of the eth0 config
<patdk-wk> I noticed my rhel7 machines dont even have ifconfig installed
<patdk-wk> I imagine that will break a lot of things for people
<RoyK> patdk-wk: that ip addr add thing works with even 10.04
<patdk-wk> I know, ifconfig has been dead for ages
<RoyK> patdk-wk: it broke a few things, but a yum install ifconfig worked well
<jak2000> wait
<jak2000> the url you gave me say add eth0:1
<RoyK> patdk-wk: it broke vmware tools for one
<jak2000> changing
<patdk-wk> doesn't break vmwaretools for me
<RoyK> patdk-wk: perhaps it's updated now - first time I installed RHEL/CentOS7, vmware tools complained rather badly
<patdk-wk> right from vmware's repo
<patdk-wk> I have used it since it came out, though it was mostly unusable till 7.1
<patdk-wk> but starting to push some production load to it now
<RoyK> guess I'd have to try without ipconfig again
<RoyK> or ifconfig
<ponyofdeath> anyone know of a write once read many ( WORM ) media that is Linux friendly? Does a filesystem exist for this already?
<RoyK> patdk-wk: I don't know any filesystems supporting that
<RoyK> erm
<RoyK> ponyofdeath: that was for you, 19:03 <            RoyK > patdk-wk: I don't know any filesystems supporting that
<jrwren> ponyofdeath: the ISO file system is exactly that.
<ponyofdeath> RoyK: k thx
<patdk-wk> I wish ufs was more usable
<patdk-wk> ponyofdeath, the problem with worm, is it's very usecase dependent
<ponyofdeath> jrwren: ok, but that is only encapsulating something already there correct
<patdk-wk> there are many of them in linux, like initramfs, squashfs, ...
<patdk-wk> there is also ltfs if you use lto tapes :)
<jrwren> ponyofdeath: it depends on what properties you expect out of a filesystem. Yes, I guess? Don't all filesystems deal with data that is already there?
<jrwren> ponyofdeath: or did you not mean the entire fs was write once, but maybe per file or something?
<ponyofdeath> lol sorry this is for logs
<ponyofdeath> basically we want to make sure the logs are not tampered with
<ponyofdeath> so if there is sometihng better that you guys know of besides WORM
<patdk-wk> just use off-system logs
<patdk-wk> and the old classic worm, a printer :)
<RoyK> patdk-wk: does ufs have worm?
<crinkle> is there any chance https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1453188 can be backported to trusty? (or who would I ask if not here?)
<ubottu> Launchpad bug 1453188 in neutron (Ubuntu Wily) "Incorrect path to binary in neutron-plugin-linuxbridge-agent" [High,Fix released]
<adam_g> crinkle, it should be available in the trusty kilo packages via the cloud archive. or are you actually using the 2014.1 packages shipped in trusty?
<crinkle> adam_g: no I'm using the cloud archive
<adam_g> crinkle, oh, it looks like that update's still waiting to be synced out to cloud-archive kilo -updates.
<adam_g> coreycb, jamespage ^
<coreycb> crinkle, adam_g, kilo cloud archive should be released to -updates later wed or thurs
<crinkle> cool, thanks adam_g and coreycb
<neredsenvy> Anyone on ubuntu able to help with this https://gist.github.com/anonymous/8d38e61ab3d4ff3297ae trying to install mysql-server-5.6 - smooth installation, configures the password, however the service fails to start
<RoyK> neredsenvy: why mysql?
<neredsenvy> ...
<RoyK> ..--..
<neredsenvy> lets not do the which is better disscussion
<neredsenvy> *discussion
<jelly> because obviously the answer is percona-server-server (-server)
<coreycb> beisner, testing is good from my end and yours at this point with kilo-proposed in the CA.  can you plan on promoting to -updates wed pm or thurs?  that'll have given us 7 days in -proposed.
<RoyK> well, if you're installing from git, try #mysql
<beisner> coreycb, yep, i'll plan on it.
<coreycb> beisner, thanks
<beisner> coreycb, thanks for all the updates!
<ponyofdeath> could the tux3 versioning FS be used as WROM media?
<OerHeks> write read once many ?
<ObrienDave> writable read only memory? that's an oxymoron ;P
<RoyK> no, it's about media that can be used for backups - only writable
<ObrienDave> that would be WORM
<ObrienDave> Write Once Read Many
<pmatulis> waiting for the worms...
<jelly> heh, is tux3 still alive
<acmehandle> I'm on 14.04.1 with openssl 1.0.1f
<acmehandle> How can I tell if I'm vulnerable to heartbleed?
<acmehandle> I have self signed certificates
<teward> acmehandle: have you run 'apt-get update; apt-get upgrade' yet?
<teward> also consider that self-signed certs are BAD if the site is in the wild and publicly accessible
<teward> because there's no 'trust' in place
<acmehandle> One of the first things I did earlier this year when I first got it up and running
<acmehandle> Ah.  Ok.
<teward> acmehandle: i mean, have you run that since
<acmehandle> Nope.
<acmehandle> Should probably do that.
<acmehandle> It is an experimental vps.  so nothing live on it.  But I still want to know.
<teward> acmehandle: well, there's always https://filippo.io/Heartbleed/
<teward> if the VPS has a domain tied to the 'experimental' stuff, point it there.
<teward> alternatively...
<teward> um...
<sarnold> you can install the unattended-upgrades package
<teward> that too
<teward> but also http://serverfault.com/questions/587839/is-there-a-way-to-manually-check-for-openssl-cve-2014-0160-vulnerability  <-- first answer for testing perhaps
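A rough version check along the lines teward links to can be done with `sort -V`; note the "fixed" version below is an assumption, so confirm it against the actual USN for your release:

```shell
# does the installed openssl meet or exceed the first patched build?
installed="1.0.1f-1ubuntu2.15"   # e.g. from: dpkg-query -W -f='${Version}' openssl
fixed="1.0.1f-1ubuntu2"          # assumed first fixed build -- verify via the USN
lowest=$(printf '%s\n' "$fixed" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$fixed" ]; then echo patched; else echo "update needed"; fi
```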
<teward> hmm
<sarnold> it'll keep the thing up to date even if you don't log in often. granted, some services probably require restarting to make the updates work -- especially for e.g. openssl updates, your web servers or whatever need to be restarted
<teward> there's a lot of test scripts out there though :P
<teward> and yes, unattended-upgrades
<teward> sarnold: if a system is managed by Landscape can it still have unattended-upgrades installed and set for security only?
<acmehandle> No, its good.  I remember now.  I tested it against another ssl/tls testing website
<teward> or would that be handled by Landscape instead
<acmehandle> But I ran it against filippo.io just now
<acmehandle> says all good
<teward> acmehandle: then you're all set, just remember that if you're going Live to the world, don't use self-signed
<teward> you'll get yelled at by people lol
<sarnold> hopefully qualys's check can handle all the other issues even if you are self-signed
<teward> it does
<teward> it just really downgrades your grade because self-signed
<sarnold> teward: I assume so, they have to have programmed it with the expectation that some package management will happen outside of landscape
<teward> sarnold: cool.
<teward> sarnold: BTW, i had fun with landscape.
<teward> landscape and gitlab on the same server
<acmehandle> I guess I should just get a cheap certificate from ssl start
<teward> had to mess with a LOT of settings to make everything work, PLUS reverse-proxy via nginx xD
<teward> i'm writing up a blog post on it :P
<sarnold> teward: woo :) thanks for writing it up, I always like to read those sorts of things
<teward> sarnold: interesting tidbit: one of the bits of Landscape listens on port 8080.  Gitlab's Unicorn implementation (for the Ruby handling) also listens on 8080
<teward> gotta change the Gitlab one xD
<teward> sarnold: i'mma write it with the "Caveats, Headaches, and Why This Is Evil"
<teward> sarnold: it'll be aggregated on planet.u.c, once i publish
<acmehandle> apt-get update should be enough right?
<teward> maybe it can make it into the newsletter, i gotta poke pleia2 possibly
<teward> acmehandle: that updates the list of software available, apt-get upgrade executes the actual updates
<acmehandle> it'll do everything right?  Because it just flew right past didnt do a thing except check stuff
<acmehandle> oh, heh
<teward> (unattended-upgrades does that automatically though)
<sarnold> acmehandle: it's a two-stage process -- update updates lists, upgrade (or dist-upgrade) downloads and installs packages
<teward> ^ that
<teward> as i said :)
<acmehandle> openssl is not in the list it seems.
<acmehandle> Why do i even bother with a vps, everyone is going to the cloud.
<teward> acmehandle: apt-cache policy openssl
<acmehandle> but I dont wanna go to the cloud
<sarnold> "the cloud" can be more expensive, depending upon what you're doing.
<acmehandle> I'm getting *** 1.0.1f
<acmehandle> ubuntu 2.7 0
<teward> pastebin the whole output lol
<teward> !pastebin
<ubottu> For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imgur.com/ !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<acmehandle> What if I just run apt-get upgrade instead?
#ubuntu-server 2015-08-26
<Knightmare> Does anyone know if it's ok to use lxd on a home server? I want to replace kvm as my main hypervisor.
<lordievader> Good morning.
<sysrex> Good morning everybody
<lordievader> o/
<ld2412> Hello guys. Can anyone help me to setup an Ubuntu Server with full disk encryption?
<ld2412> I've been googling around but haven't found anything like a guide
<ld2412> May be someone have a guide so please share a link
<ld2412> Thanks
<RoyK> ld2412: it's a choice during installation
<ld2412> But if a rent a server (dedicated) and os is already installed
<ld2412> I mean this case
<RoyK> ld2412: not sure how to encrypt an existing system
<ld2412> RoyK: nevermind. I'm sure it's possible and it's somehow connected with the KVM thing
<TJ-> Encrypting an existing file-system using LUKS/dm-crypt?
<ld2412> TJ-: How to decrypt hdd after a reboot?
<ld2412> E.g.
<TJ-> ld2412: That depends on how it was encrypted :)
<ld2412> So, can you help me with it? :)
<TJ-> ld2412: Give some specifics and I may be able to
<TJ-> Ahhh... silent encryption, eh, sneaky!
<ld2412> TJ-: I've sent you messages
<TJ-> ld2412: Please keep the support messages in the channel so others can learn from the issue
<ld2412> Ok
<ld2412> So
<ld2412> The example
<ld2412> I rent a dedicated server
<ld2412> It comes with default Ubuntu Server 14.04
<ld2412> That's it
<ld2412> Full disk encryption is needed for that server
<ld2412> I have just ssh root shell access
<TJ-> ld2412: OK... well first off, anyone with physical access will be able to circumvent any encryption
<ld2412> That is clear :)
<ld2412> But still
<TJ-> ld2412: so what attacks are you looking to prevent. Encryption will protect the data if the server is powered off, but as soon as it powers up and is active, the block devices are available unencrypted and the key is in memory
<ld2412> You have already named the reason - protection when server is powered off
<TJ-> ld2412: Well, it's unusual to have a dedicated server that isn't powered on :)
<ld2412> That is true :D
<ld2412> But still, encryption is needed
<TJ-> ld2412: that said, you can configure an initrd.img with a small ssh client that can connect out to get a key to unlock encrypted devices
<TJ-> ld2412: it's much better to protect individual files/sets of files, if the aim is to protect personal data in, for example, databases
<RoyK> ld2412: usually, with full disk encryption, you'll have to type a password on the console to unlock it - for a headless machine, that's a bit tricky
<RoyK> ld2412: and storing the password in the bootup renders encryption rather useless
 * RoyK encrypts all his data with ROT13 - twice!
<TJ-> ld2412: If you want to protect data areas but the OS doesn't need encryption, then that makes it slightly easier since you can wait for the init system to start before needing to handle unlocking the encrypted device(s)
<TJ-> RoyK: That is so secure with UTF-16 :)
<RoyK> ld2412: if you're really paranoid, setup your own hardware :P
<ld2412> I can't setup my own :)
<lordievader> RoyK: 2x rot13 is the best security! By far.
<lordievader> ld2412: Ain't it better to make luks containers for your critical data rather than go full disk encryption?
<ld2412> I think FDE is better
<ld2412> I dont mind typing a password on each reboot
<ld2412> So
<ld2412> Is there any guide on the internet? I haven't found one
<RoyK> ld2412: do you have console access or just ssh? also - why so scared? the ISP will gain access if they want to
<ld2412> I have only ssh
<ld2412> That's my personal "features" :D
<ld2412> Please help guys
<TJ-> lordievader: FDE and LUKS are orthogonal. LUKS is simply a way to manage the keys of a dm-crypt device
<RoyK> ld2412: well, as TJ- said, you can probably get sshd into the initrd to help, but setting this up on a machine without console access will be hard - I've never tried it...
<lordievader> TJ-: I know. I was targeting encrypted lvm volumes.
<ld2412> If one provides me with a step by step guide I will thank that person via bitcoins :)
<TJ-> ld2412: if you use FDE first of all, it won't be FDE, what it will be is root file-system encryption. you'll need a separate unencrypted /boot/ file-system containing GRUB and the kernels/initrd.img
<RoyK> ld2412: why on earth would you bother with encrypting the root?
<ld2412> Of course boot stays unencrypted
<ld2412> I am a bit more than average user, so not all the things you guys say are clear
<TJ-> ld2412: install a statically linked ssh client in the initrd, that is hard-coded to contact your key-server via a key-script, and then cryptsetup's cryptroot update-initramfs scripts can do the rest
<RoyK> ld2412: this is a bit on the advanced side ;)
<ld2412> RoyK: That is why advanced help is needed :)
<TJ-> ld2412: some people install dropbear in the initrd, the micro-ssh-server, and have it wait for incoming connections from a key-delivery system, also, but I prefer the call-out method
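The dropbear variant TJ- mentions sketches out roughly like this; package and hook paths vary by release, so treat the details as assumptions to verify:

```shell
# remote LUKS unlock via dropbear in the initramfs (sketch, not a recipe;
# on 14.04 the plain 'dropbear' package carried the initramfs-tools hooks)
sudo apt-get install dropbear
# authorized key for the initramfs ssh server lives at:
#   /etc/initramfs-tools/root/.ssh/authorized_keys
sudo update-initramfs -u
# at boot: ssh in and feed the passphrase to the cryptroot prompt
```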
<lordievader> Personally I don't think it is worth the trouble.
<TJ-> ld2412: I've been working on ssh support in GRUB so GRUB's LUKS encryption can be used remotely too, to allow the /boot/ file-system to be remotely unlocked. Currently it requires console access to unlock GRUB's root file-system
<ld2412> TJ, would you like to help me one-on-one and I will pay for your time?
<ld2412> Hourly
<ld2412> :)
<TJ-> ld2412: no thanks, I've got enough of my own to do
<TJ-> ld2412: pro tip... practice in a simple local VM guest until you're confident it works and you understand it
<ld24121> Sorry, my internet connection failed and I was disconnected
<TJ-> With servers that may contain confidential info a better solution is to use per-application encryption that encrypts its data before writing to disk, such as in databases, so even if the files are compromised the contents are safe
<Celphish> Hello everyone!
<Celphish> Got a quick question for you:
<Celphish> If I have a server, on which I run a very important web-service for our business, and I want to add two drives to fstab and then type "mount -a", will anything be interrupted or will the new drives just be added?
<lordievader> The second.
<Celphish> lordievader: so mount -a doesn't do anything to the already mounted?
<lordievader> No.
<Celphish> not that I don't believe you lordievader but I need to be 200% sure, can someone confirm?
<dasjoe> A "very important web-service" should be built in a redundant way
<TJ-> Celphish: test it locally
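What lordievader is confirming is mount(8) behaviour: `mount -a` skips anything already mounted, so only the new fstab entries are acted on. A sketch of such entries (the UUIDs are placeholders):

```shell
# /etc/fstab -- append new entries only; `mount -a` leaves existing mounts alone.
# nofail keeps a missing disk from blocking boot.
UUID=aaaaaaaa-0000-0000-0000-000000000001  /data1  ext4  defaults,nofail  0  2
UUID=bbbbbbbb-0000-0000-0000-000000000002  /data2  ext4  defaults,nofail  0  2
```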
<rbasak> magicalChicken: how do you feel about bug 869017? Are you happy to post to ubuntu-devel?
<ubottu> bug 869017 in kbd (Ubuntu) "Ubuntu server enables screenblanking, concealing crashdumps (DPMS is not used)" [Medium,In progress] https://launchpad.net/bugs/869017
<RoyK> Celphish: well, of course, I use it regularly, but as dasjoe says, make it redundant
<rbasak> mdeslaur: around? I'm looking at sponsoring bug 1394403 - as you're looked at it before I'd like your opinion please.
<ubottu> bug 1394403 in apache2 (Ubuntu Trusty) "RewriteRule of "^$" is broken" [Medium,Confirmed] https://launchpad.net/bugs/1394403
<rbasak> when I asked magicalChicken to look at it I didn't realise the upstream fix would add a configuration directive. But it looks like it's safe as it defaults to the same behaviour. Had you considered this already? Does it also look reasonable to you?
<rbasak> I also think we should include the documentation update in our backport - better than not having it in the SRU IMHO.
<rbasak> smoser: reminder to look at bug 1481337 when you can please.
<ubottu> bug 1481337 in keepalived (Ubuntu Wily) "keepalived makes a floating IP available on more than one host after configuration reload" [Undecided,Confirmed] https://launchpad.net/bugs/1481337
<rbasak> smb: thank you for driving bug 1483214! Looking good.
<ubottu> bug 1483214 in linux (Ubuntu Vivid) "ipmi_si module spams kernel log with "ipmi_si 00:05: Could not set the global enables: 0xcc."" [Medium,Fix committed] https://launchpad.net/bugs/1483214
<smb> rbasak, np, should be getting out as this cycle closes
<mdeslaur> rbasak: I'm ok with the new config option...I've added options before to packages as security updates, so it's not like we haven't done it before. The option will change the behaviour, though cases where it will break something are unlikely
<mdeslaur> rbasak: for the documentation, meh...if it were man pages, I'd push for it...but the static web documentation, meh
<mdeslaur> rbasak: especially since there are localized versions of the documentation and we'd only be updating the english version
<Azaril> hello
<Azaril> i cant get random_delay to work in /etc/crontab
<mdeslaur> rbasak: the only thing is perhaps add what the option is and how the default has changed to the changelog
<Azaril> http://pastie.org/10376721
<Azaril> every job runs exactly on the minute
<Azaril> that its "supposed" to
<rbasak> mdeslaur: OK. Thanks!
<jpds> Azaril: No mention of RANDOM_DELAY on "man crontab"
<SeerKan`> Hi guys
<SeerKan`> If I mount a gluster volume from server1 with server2 as backup with the fuse mount, I understand that once server1 is down it will use server2 automatically. But what happens when server1 is back ? will it start automatically use server1 even if it doesn't have the latest data or keep using server2 until it goes down and then go back to server1 ?
<Azaril> jpds: having gone through a lot of google, apparently ubuntus version doesnt support it
<jpds> Azaril: Not mentioned in the manpage. :)
<Azaril> yeah, not in ubuntus
<Azaril> ill have to do it by hand
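Doing it by hand usually means adding the jitter per job. A crontab sketch, assuming bash is available at /bin/bash and with a placeholder job path:

```
# /etc/crontab fragment.  RANDOM_DELAY is a cronie feature; Ubuntu's
# vixie-cron ignores it, so sleep a random amount in the job itself.
# $RANDOM is a bashism (hence the SHELL line), and '%' must be
# escaped as '\%' inside crontab command fields.
SHELL=/bin/bash
# m h dom mon dow user  command
17  3  *   *   *  root  sleep $((RANDOM \% 300)); /usr/local/bin/nightly-job
```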
<teward> (whee wordpress exploded)
<teward> rbasak: FYI, i've downloaded the nginx package from Debian Unstable, and am test-building it without changes before pushing to PPAs...
<teward> so the latest 1.9.x will be 'available' via that PPA
<rbasak> teward: sounds good. Thanks!
<teward> has anyone seen sarnold, I think he was lookin for my writeup on my Landscape-and-Gitlab-On-the-Same-Server blog post on my headaches and solutions for everything on the same server xD
<teward> rbasak: no problem
<magicalChicken> rbasak: Oh, yeah, I need to ask the mailing list if they think it's a good idea. I'll send that email out this afternoon.
<Celphish> Quick question, when I type lsblk or fdisk -l, I see the same disk appear both as sdb and sdc, any way to get rid of sdc?
<TJ-> Celphish: same disk? sounds like some symlinks or device nodes are stale
<Celphish> TJ-: looks like it's the same disk yes
<Celphish> TJ-: how do I check if it's the same, I've been fiddling with this for a few hours now, haha
<RoyK> Celphish: do you have multipath somehow?
<TJ-> Celphish: there are symlinks under /dev/disk/   check out "ls -l /dev/disk/by-id/" and look at the names/serial numbers and where the symlink points to, and follow the trail
<RoyK> Celphish: I've seen same disk appearing twice if it's connected on multipath
<RoyK> Celphish: if that's the case, setup multipath in linux
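One way to confirm whether sdb and sdc really are the same physical disk is to compare the serial numbers udev reports for each node (a diagnostic sketch; the device names are the ones from this discussion):

```shell
# Print the hardware serial each kernel name resolves to; if both
# show the same ID_SERIAL it is one disk reached over two paths
# (typically multipath), not two separate disks.
for dev in /dev/sdb /dev/sdc; do
    printf '%s: ' "$dev"
    udevadm info --query=property --name="$dev" 2>/dev/null \
        | sed -n 's/^ID_SERIAL=//p'
done
```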
<beisner> hi coreycb, ready for me to push from proposed to updates in Kilo cloud archive?  FYI, this is the proposed vs. current updates list:  http://paste.ubuntu.com/12199098/
<coreycb> beisner, probably should wait until after 4:30 pm to officially have it in proposed for +7 days
 * beisner moves finger off the trigger
<beisner> coreycb, gotcha
<coreycb> beisner, the list looks good, thanks!
<Celphish> RoyK: well, it was used faulty before, I haven't used it though.. When I did what TJ- said, it looks like one number points to sdc, and then the same number with the addition of "-part1" points to sdb1...
<beisner> coreycb, ok thanks for confirming.  I'll push later today.
<TJ-> Celphish: is it using multipath? how is the device connected?
<TJ-> Celphish: also, check "/var/log/kern.log" and look at the messages when those devices are added by the kernel, that might give a clue
<Celphish> TJ-: not sure tbh, think it's connected with fiber optic cable somehow
<RoyK> Celphish: pastebin output of 'smartctl -i /dev/sdc' and similar for sdb
<Celphish> RoyK: not installed on the server, smartctl, not sure I want to install it either since it's a production server
<TJ-> Celphish: possibly "sudo dmsetup info" might give come clues
<Celphish> TJ-: no devices found
<TJ-> resort to the logs then :)
<RoyK> Celphish: it should be installed :)
<RoyK> Celphish: install smartmontools
<RoyK> Celphish: it's very nice for monitoring and reporting things
<Celphish> RoyK: I'm just a tad restrictive when it comes to install anything on a server that's in use by all our customers atm
<RoyK> Celphish: it comes with smartd, monitoring physical disk health
<RoyK> well, uninstall it later, then
<Celphish> TJ-: there are some interesting entries in the log but I'm not sure what they mean, haha :D
<Celphish> a lot of "device-mapper ... error getting device"
<Celphish> with a "multipath" in between
<Celphish> but we've flushed all mp, there are none left
<TJ-> Celphish: looks like stale device references then, not sure how they aren't identical though
<TJ-> Celphish: you'd expect sdb sdb1 sdc sdc1 really
<Celphish> TJ-: ye, that's what I suspected too... but how do I correct / remove stale references?
<TJ-> Celphish: remove the nodes
<TJ-> Celphish: and any sym-links of course
<Celphish> TJ-: sorry for lacking the knowledge, but please elaborate on how
<TJ-> Celphish: if you're not familiar with such basic tasks, I would recommend getting a capable sysadmin to deal with it. you said it is a production server.
<rbasak> rharper: around? I'd like to talk about bug 1481289.
<ubottu> bug 1481289 in php5 (Ubuntu) "PHP 5.5.9 Default socket timeout being not honoured by application" [Medium,Incomplete] https://launchpad.net/bugs/1481289
<rharper> rbasak: here
<rbasak> rharper: thank you for investigating the bug. It's turned out to be much more complicated than applying a simple patch (don't they all!) but at least that's clear now so it doesn't look like we're just ignoring people sending patches.
<rbasak> rharper: I'd like to either get an SRU landed though, or drop it off my list for driving.
<rharper> rbasak: indeed;  it's rather complicated w.r.t what's actually needed
<rbasak> rharper: ah, so it's not a "minimal fix", right?
<rharper> there's nothing definitive in the referenced bugs that cleanly applies
<rbasak> OK
<rharper> and the upstream versions that are fixed are rather significantly modified in that area
<rbasak> The bug status should still be Triaged though - as it's about whether the bug is valid rather than if we have a readily available fix.
<rharper> they went through a number of iterations to get things working right
<rharper> ah, ok
<rbasak> I gave you the bug because I thought it was trivial, rather than because it has a large user impact.
<rharper> sure
<rbasak> So I think it's fine to drop it on that basis, while inviting others to provide a minimal patch for SRU.
<rharper> right, if we get a simpler patch (or any set of patches) I'd be happy to re-review that for SRU
<rbasak> Could you maybe explain what you think is required to drive the bug further in the bug itself, and then withdraw and unassign yourself? Assuming we don't want to work on it ;)
<rharper> rbasak: sure
<rbasak> Thanks!
<rbasak> rharper: is there anything you're waiting on me for BTW? I'm trying to go through all the bugs I'm tracking but don't see anything else from you on my list.
<rharper> rbasak: no;  I have the other bug related to puppet service status that I need to pick back up but that's not waiting on you, just me
<rbasak> OK, np. Thanks!
<rharper> sure
<Celphish> TJ-: well, they don't have any responsible sysadmin :)
<smmoCoffee> Is this the right channel to discuss ubuntu openstack installation as a single installer without maas?
<rbasak> smmoCoffee: I'm not aware of a better place if it doesn't involve MAAS. Try asking your question.
<smmoCoffee> rbasak: We're wondering if there are any recommended procedures for running openstack-install behind a proxy server
<rbasak> stokachu: ^^ are you the right person to help smmoCoffee?
<smmoCoffee> running it with the http-proxy argument helps, but I'm running into problems with a self signed certificate used by the proxy server
<rbasak> danwest: ^^
<danwest> stokachu, can you answer the proxy question for smmoCoffee?
<smmoCoffee> we're wondering whether there is an insecure option that might be passed to ignore the self signed cert
<RoyK> smmoCoffee: if you use a linux server as the router/proxy, you could setup transparent proxing
<smmoCoffee> we have no direct access to the proxy server
<danwest> smmoCoffee, looks like there is a proxy option [--http-proxy HTTP_PROXY] [--https-proxy HTTPS_PROXY]
<smmoCoffee> logs for the bootstrap lxc container show errors with curl
<RoyK> I've never installed openstack, so sorry
<stokachu> smmoCoffee:we have cli options that you can set to run behind a proxy, --http-proxy and --https-proxy
<stokachu> have a look at openstack-install --help
<smmoCoffee> danwest: yes, that's how I'm running and that is allowing us to actually get to whitelist sites like entropy.ubuntu.com
<stokachu> oh self signed certificate
<RoyK> don't we just love those? ;)
<smmoCoffee> curl: (60) SSL certificate problem: unable to get local issuer certificate
<stokachu> yea we don't implement a way to accept insecure certificates
<smmoCoffee> that appears in the bootstrap container cloud-init log
<stokachu> whats the full command thats being run
<stokachu> everything we query for shouldn't be pulling in any self signed certificates
<rbasak> I get the impression that smmoCoffee has a MITM proxy that expects you to have installed its CA in every client.
<rbasak> (or bypass warnings)
<stokachu> yea makes sense
<smmoCoffee> openstack-install --http-proxy $PROXY_SERVER:$PROXY_PORT --https-proxy $PROXY_SERVER:$PROXY_PORT
<stokachu> smmoCoffee:whats the output from the log where it fails
<stokachu> you can paste.ubuntu.com that
<rbasak> So I guess a proper solution is to install the CA everywhere (every container etc) where it is needed.
<rbasak> This might need cloud-init support for an ideal solution.
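On the host side, the usual Ubuntu way to trust an MITM proxy's CA is to drop it into the local CA directory and rebuild the trust store (a sketch; the certificate filename is hypothetical):

```shell
# The proxy's CA certificate must be PEM-encoded with a .crt extension
# for update-ca-certificates to pick it up.
sudo cp proxy-ca.crt /usr/local/share/ca-certificates/proxy-ca.crt
sudo update-ca-certificates
# Containers need the same two steps repeated inside them, e.g. via
# cloud-init's ca-certs configuration.
```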
<RoyK> the proper solution is to use proper certificates
<rbasak> An MITM SSL proxy is reasonable in a controlled environment for consenting adults IMHO.
<rbasak> Otherwise it's just something that tunnels through firewalls defeating their ability to keep you safe.
<stokachu> i wish the internet was more trusting
<stokachu> so i wouldnt have to lock my doors at night
<RoyK> stokachu: don't we all :P
<rbasak> And being Free Software we should be free to run our own CAs if we wish. So this reduces to a feature request :)
<smmoCoffee> just as the ubuntu pastebin  url?
<smmoCoffee> http://paste.ubuntu.com/12199815/
<stokachu> yea just paste it at that url and post the link here
<smmoCoffee> Here
<stokachu> smmoCoffee:youre running an older openstack-installer
<smmoCoffee> Here's higher level logging from the commands.log
<stokachu> smmoCoffee:http://ubuntu-cloud-installer.readthedocs.org/en/latest/single-installer.guide.html
<smmoCoffee> http://paste.ubuntu.com/12199839/
<stokachu> follow ^ that guide as that is the version that will end up in the ubuntu archives
<smmoCoffee> ok, thanks. So we should use ppa:cloud-installer/experimental
<stokachu> the pollinate command should work behind a proxy so I think using the latest installer fixes all that
<smmoCoffee> stokachu: thanks for confirming
<stokachu> np, also you can find us in #ubuntu-solutions if you need anything further
<Knightmare> A question for those in the know. Can I replace KVM with LXD?
<jpds> You can't just migrate from one to the other
<teward> any landscape pros here?  (got a question about the actual Landscape Dedicated Server software)
<Knightmare> I don't want to migrate. I was just wondering if I could use lxd instead of KVM.
<shauno> it depends what for.  you can use a shoe as a hammer, but not a hammer as a shoe.
<jpds> teward: Yes
<teward> jpds: what part of LDS listens on 8080?  And is it possible to make it listen elsewhere
<jpds> teward: I think that's the appserver
<teward> (say, 28080 or similar, because portconflicts)
<jpds> teward: Why do you already have something on there?
<RoyK> Knightmare: some prefer lxd over KVM - it will share resources better, but IMHO, KVM is nice
<teward> jpds: good question, it's more a hypothetical question :p
<teward> jpds: because it's an experimental system, so exploring it is different than production deployment
<teward> there's quite a lot of cruft there
<teward> (experimentation vlans ftw)
<jpds> teward: I always install LDS on a new VM
<teward> jpds: remove VM from the equation - my ESXi box that runs VMs decided to blow up on me
<teward> so i'm still hunting a replacement/warrantyfix
<teward> (so it's on a barebones test system for now)
<teward> i was more curious what's listening on 8080 rather than changing the port :P
<jpds> Right
<jpds> teward: Apache just mod_proxy's stuff to the relevant component/appserver based on URL
<teward> right, which i discovered is fine, except for when you have one IP, many domains, then you need reverse proxies :/
<teward> (I am also NOT an apache guru :P)
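For the one-IP-many-domains case, a name-based Apache vhost that proxies through to the appserver port is the usual shape. A sketch only; the hostname is a placeholder and the backend port is the 8080 discussed above:

```apache
# /etc/apache2/sites-available/landscape.example.com.conf
# Requires: a2enmod proxy proxy_http
<VirtualHost *:80>
    ServerName landscape.example.com
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```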
<teward> jpds: a lesson in how NOT to run things: I am temporarily putting LDS on my gitlab barebones box :/
<teward> 8080 is conflicted because Gitlab's Unicorn is on 8080
<teward> (I fixed that easy)
<jpds> Ew
<teward> as i said
<teward> my esxi box decided to nuke itself :/
<jpds> KVM is lovely :-)
<teward> making it WORK is not hard once you fix the port conflict
<teward> mmm
<teward> indeed
<teward> jpds: too bad it doesn't like this box
<teward> in either case - http://dark-net.net/?p=100 is my musings and discovery
<jpds> teward: Why doesn't it like that box?
<teward> jpds: old
<teward> (old converted desktop, my guess is virtualization isn't present in BIOS/CPU chipset)
<teward> oh
<teward> jpds: interesting tidbit...
<Knightmare> RoyK: Thanks for the advice!
<jpds> So, ESXi isn't painful for you?
<teward> jpds: landscape-client refuses to recognize valid SSL on a Landscape server
<RoyK> Knightmare: IMHO KVM (or other virtualisation software) is far better in security terms - it's total isolation, not merely smart chrooting
<teward> (i.e. a valid SSL cert from an SSL certificate provider)
<jpds> teward: Weird
<teward> jpds: it required me to copy in the CA chain and provide it via the cli arguments
<teward> jpds: that *may* be a bug, but meh
<teward> (it's a Comodo wildcard cert)
<jpds> teward: It'll use whatever the Ubuntu ca-certificates uses
<teward> weird because curl worked fine to recognize the SSL as valid
<teward> landscape-client didn't
<teward> (so did Firefox, and Chromium, and w3m even)
<teward> also a funny story about Landscape - things run in KVM are sometimes NOT recognized as a VM
<teward> and as barebones hardware instead
<jpds> teward: landscape-client uses gnutls instead of openssl, which is slightly stricter about certs
<teward> mmm
 * teward shrugs
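One way to check whether the stricter gnutls behaviour is down to a missing intermediate is to compare the two clients directly (a diagnostic sketch; the hostname is a placeholder):

```shell
# Compare what each TLS stack reports for the presented chain; a
# server that omits an intermediate certificate can verify under one
# library's trust handling and fail under the other's.
openssl s_client -connect landscape.example.com:443 \
        -servername landscape.example.com </dev/null
gnutls-cli --print-cert -p 443 landscape.example.com </dev/null
```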
<jpds> teward: Yeah, did you expose the host cpu flags to the guest?
<teward> jpds: wasn't my system, the other sysadmin set that one up
<teward> :/
<teward> my guess is 'maybe'?
<teward> my experimental test instances only have VMs
<teward> well...
<teward> jad.
<teward> had*
<Knightmare> RoyK: I've got a HP Microserver and just want to make the most of the resources.
<teward> jpds: another question if you have a moment - any way to get more than those 20 free licenses?  I assume one has to buy the advantage licenses, but my question is are they the same price, or do the prices on the canonical store reflect the Canonical-run Landscape instance?
<RoyK> Knightmare: what sort of cpu? how much memory?
<jpds> teward: You have to buy licenses
<Knightmare> RoyK: http://n40l.wikia.com/wiki/Base_Hardware_N54L
<teward> jpds: i know that, but are they the same licenses listed on the Ubuntu Advantage pages, or does one reach to Canonical Sales about pricing
<RoyK> Knightmare: if kvm doesn't do the job, try lxc, or get some better hardware ;)
<jpds> teward: talk to sales
<teward> ok
<teward> jpds: thank you!
<Knightmare> RoyK: I don't need serious hardware for what I want to run. Just want to be a efficient as possible :)
<RoyK> Knightmare: lxc is probably the best thing for that
<Beret> teward, do you have an account on landscape.canonical.com?
<Beret> teward, the hosted Landscape?
<rbasak> jgrimm: I agree with you on bug 1397250 - I'd like *someone* (even if it's the reporter) to be able to reproduce it, or at least still be affected by it, before we attempt an SRU.
<ubottu> bug 1397250 in libnss-ldap (Ubuntu) "SIGPIPE not caught in do_atfork_child()" [High,Triaged] https://launchpad.net/bugs/1397250
<jgrimm> rbasak, thanks.. i was going to ping you for opinion on that. hopefully someone can actually recreate the bug..
<jgrimm> rbasak, i find it rather curious that the test reproducer didn't even hit the function that is patched as part of the fix when i enabled debugging.
<jgrimm> rbasak, but has been an excellent learning exercise none-the-less
<teward> Beret: no, Landscape Dedicated Server, the personal-run one
<teward> NOT the cloud landscape.canonical.com
<teward> if I had the hosted landscape I'd have a support ticket in, not asking questions here :)
<jgrimm> rbasak, would this actually be SRU'able even?  I can't think that this is hitting that many folks..
<rbasak> jgrimm: if someone's actually impacted (rather than it being theoretical) then I wouldn't block doing an SRU on that basis (though time and resource is a separate thing, I'm always happy for an affected user to drive).
<rbasak> jgrimm: OTOH, if we can't actually find anyone impacted because it's theoretical or that person has moved on and doesn't trigger it any more, then I see no point in risking regression by doing an SRU.
<rbasak> That's my personal opinion, anyway.
<jgrimm> rbasak, that's my concern too
<teward> mmm... jpds, Beret: did either of you invite me to landscape.canonical.com for some reason?
<Beret> teward, yes, do you not see my private messages?
<jpds> teward: He's onto you
<teward> Beret: +g
<teward> (i get too much PM spam, being an op in bitcoin chans)
<Beret> eh?
<Beret> ah
<teward> (so I have umode +g on, which blocks PMs)
<Beret> not ideal
<teward> Beret: feel free to resend if you wish :)
<Beret> :)
<Beret> paste coming your way
<teward> Beret: getting spammed by 300+ bots is less ideal
<Beret> resent
<Beret> get it this time?
<teward> Beret: yep!
<smmoCoffee> hi, question about installing openstack per the single installer guide http://ubuntu-cloud-installer.readthedocs.org/en/latest/single-installer.guide.html
<smmoCoffee> the top level container is failing to initialize
<smmoCoffee> a process execution error is thrown where the single_install python script tries to run pollinate command
<kirkland> smmoCoffee: the pollinate command should fail gracefully, quietly
<kirkland> smmoCoffee: is it not?
<kirkland> smmoCoffee: can you paste some logs?
<acmehandle> I have a dumb question.  I'm running nginx.    I set up a self signed certificate a few months ago.  everything worked great.  I checked it against site certificate ssl/tls checkers and it was getting high marks.    I bought a certificate and I guess I used the wrong key or something.  I forgot the whole process from a few months ago.  Right now I keep getting the same self signed certificate warning in the browser.
<pmatulis> acmehandle: check the nginx documentation to be sure
<smmoCoffee> kirkland: here's the commands.log http://paste.ubuntu.com/12201532/
<kirkland> smmoCoffee: so, first, I should mention that https_proxy=http://129.165.60.148:80 is a really bad idea, from a security perspective
<SCHAAP137> acmehandle, it might be that the selfsigned cert is still in your /etc/nginx/nginx.conf, while you defined the website and its new "legit" certificate in a config file in /etc/nginx/sites-enabled/
<acmehandle> Yes, seems I was passing the old csr to the cert issuer
<acmehandle> Had to regenerate new server.key and new server.csr
<kirkland> smmoCoffee: next, can you try adding "-i" to the list of pollinate flags?
<acmehandle> now I have an ocsp server has not status of certificate.  But that appears to be that it requires to take a little time to resolve with the CA or something.
<SCHAAP137> make sure to use an SHA-2 hashing function for the CSR, or your Chrome users will get the SHA-1 security warning when visiting the site
<acmehandle> Yes, Sha256 2048
<SCHAAP137> cool
<SCHAAP137> you could also generate custom DH parameters
<smmoCoffee> kirkland: so try running the installer as: openstack-install --upstream-ppa --https-proxy=http://129.165.60.148:80 -i
<acmehandle> I was thinking of going all the way to 11 by using 4096...but in todays world I dont know if anyone cares
<SCHAAP137> strengthens it a bit more
<SCHAAP137> i use 4096 for mine
<acmehandle> Oh, cool.
<kirkland> smmoCoffee: sorry, no
<acmehandle> I generated a dhparam.pem using 4096
<acmehandle> dont know if that matters,
<SCHAAP137> getting the 100/100/100/100 mark on ssllabs.com is not that hard either
<acmehandle> or if it will cause a problem
<acmehandle> Not if you have a vague idea of what youre doing   :-)
<acmehandle> Which is pretty much what I have
<acmehandle> But as long as its secure I dont care.
<SCHAAP137> hehe, if you're using a 2048 bit cert, 2048 bit dhparams would be sufficient, but more is even better
<SCHAAP137> generally speaking
<SCHAAP137> as long as it's not less
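Generating custom DH parameters and wiring them into nginx is a two-step job (a sketch; 4096-bit generation works too but can take a very long time on modest hardware):

```shell
# Generate 2048-bit DH parameters and point nginx at them.
sudo openssl dhparam -out /etc/nginx/dhparam.pem 2048
# Then in the TLS server block:
#     ssl_dhparam /etc/nginx/dhparam.pem;
# and reload:
sudo nginx -t && sudo nginx -s reload
```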
<acmehandle> This was really more of an exercise for me.  To gain some experience.  But with everyone moving to clouds I dont know if anyone cares anymore.  Besides me.
<patdk-wk> getting 100/100/100/100 and maintaining a USABLE site, is hard :)
<smmoCoffee> kirkland: where/how is the pollinate flag "-i" passed or configured?
<acmehandle> Do you have ocsp as well SCHAAP137?
<kirkland> smmoCoffee: it's in the cloud-init userdata
<SCHAAP137> in apache i got OCSP to work, but with my current setup OCSP fails, haven't figured out why yet
<kirkland> smmoCoffee: sorry, I don't know about the openstack-install
<SCHAAP137> i got HPKP though
<SCHAAP137> current *nginx setup
<SCHAAP137> might be SNI related, not sure
<SCHAAP137> if you want the CHACHA20-POLY1305 ciphersuites, you could recompile nginx with LibreSSL btw
<acmehandle> Has it been 12-24 hours?  My 'resources' inform me that in some cases thats how long it takes to resolve.  I just got it to this point a few minutes ago
<acmehandle> Do I need it?
<acmehandle> I dont even know anymore.
<SCHAAP137> hmm, really? i might just have been impatient
<SCHAAP137> OCSP could protect your clients from some forms of MITM, theoretically
<acmehandle> So I have read.
<acmehandle> Some of what I've read says that with chrome the sites work right away. Firefox often has that delay with ocsp
<SCHAAP137> it would require quite a skilled attacker though
<acmehandle> I just opened up chrome and was able to access my site.  So that confirms taht much of the theory
<acmehandle> How does a MITM attack work nowadays?  Are all those points out there that insecure that someone can break in to them?
<acmehandle> I'm naive about this.
<SCHAAP137> one could employ this: https://mitmproxy.org/
<acmehandle> I just know to try to plug up whatever vulnerabilities I might have
<patdk-wk> ocsp only protects you from a revoked certificate
<patdk-wk> so, you have to know your certificate was compromised
<patdk-wk> and revoke it
<patdk-wk> before using ocsp gains you any protection
<smmoCoffee> kirkland: looks like openstack-install uses configuration found in ~/.cloud-install/userdata.yaml for the pollinate command
<SCHAAP137> allright, got OCSP working now as wlel
<SCHAAP137> *well
<prudentmav> I'm new to server admin... Just created a vps at digitalocean with a lamp stack.  When I use adduser, is there a way to have a few additional things happen when a new user is created?  for example, in addition to creating the user, it creates a folder copying contents from a template folder for the temp landing page, then creates a file in sites-available from a template with a variable for username inserted, then enables the site?
<iNs> has anyone managed to get a simplest pptp tunnel going on? im having some dafuq moments ;) http://paste.ubuntu.com/12202197/
<RevertToType> im having some difficulties setting up ubuntu server to automatically pull down an ip, if after boot I run sudo dhclient wlan0 it's all good, how can i automate that (is there a normal way in network/interfaces?)
<fishcooker> i have keyauth enabled on sshd config .. how to show whois login through accepted key when one account login has many authorized keys?
<RevertToType> could be as simple as post-up dhclient wlan0?
<beisner> coreycb, fyi - kilo proposed pushed to kilo updates.  http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/kilo_versions.html
<coreycb> beisner, thanks!
<beisner> coreycb, likewise, yo.
<sarnold> iNs: is that tunneled through ssh?
<iNs> sarnold, no its not, i figured now its a problem with chap secrets i think, http://paste.ubuntu.com/12202341/
<iNs> thats server side log snippet
<sarnold> iNs: interesting. (the bit about /dev/pts/3 made me think it was like the old days when we'd run ppp over ssh..)
<iNs> its 2 xen VM hosts, connection is about to take place between 2 VMs on each of those
<iNs> every art/tut says something different, any chance u could guide me to the 'proper' chap-secrets structure both server and client wise?
<sarnold> iNs: looks like the format is "username servername password ip_address" with single spaces between them
<iNs> yea i know, that is the point
<iNs> ive followed the man's and ugh
<iNs> this is weird
<sarnold> iNs: something that picky might also get confused if you have any trailing spaces at the end of the line, it might be worth looking for those, and probably the file needs to end with a newline
<sarnold> are there any strange chars in the username, service name, or password?
<sarnold> I'm having trouble figuring out what the service name ought to be -- but the guide on the ubuntu community wiki suggests that * also works there :)
<iNs> the service name should be pptpd, should the chap secrets on the server be identical to the one on the client?
<iNs> the service name in chap-secrets should be equal to the remotename send by request from the client
<sarnold> iNs: hmm, the digitalocean guide suggests the client username and password ought to be stored in an /etc/ppp/peers/ file
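Putting the pieces together, the two files usually look like this (a sketch with placeholder credentials and hostname; the client's `remotename` must match the server column in chap-secrets):

```
# /etc/ppp/chap-secrets (same entry on server and client)
# client    server  secret        IP addresses
vpnuser     pptpd   s3cretpass    *

# /etc/ppp/peers/myvpn (client side)
pty "pptp vpn.example.com --nolaunchpppd"
name vpnuser
remotename pptpd
require-mppe-128
```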
<RevertToType> so like what do i need to do to fire off dhclient at startup is there a entry i can just throw into network/interfaces or is there some other official way or is it literally throwing a script into rc#.d
<RevertToType> my network/interfaces file def. has dhcp in the second line... no clue why my card isn't pulling down an ip address until i run dhclient manually
<iNs> sarnold, hm, corrected it exactly like that tut says and nothing ;o
<sarnold> RevertToType: anything in any logs? dmesg? syslog?
<RevertToType> nothing seems amiss
<RevertToType> like no errors or anything
<RevertToType> it's just like it's not running
<sarnold> iNs: logs on both client and server look unchanged?
<sarnold> RevertToType: do you have an 'auto' stanza for tht nic?
<RevertToType> (ub15.04, 3.19.0.26-generic)
<RevertToType> yup
<RevertToType> auto wlan0
<RevertToType> next line is iface wlan0 inet dhcp
<iNs> RevertToType,  u mean 'iface eth0 inet dhcp'?
<RevertToType> then my pre-up supplicant
<sarnold> wireless? hmm
<iNs> sarnold, yea O_o
<RevertToType> @ iNs nope it's my wireless card and that's the dev
<sarnold> iNs: check permissions? sshd at least is pretty picky, maybe pptpd is picky too :)
<RevertToType> @ iNs would "wireless-mode Managed" in the interfaces file be useful... ?
<iNs> after the wlan0 inet dhcp line, u can try having ssid key and managed mode yea
<RevertToType> i have the ssid/key and all that in a pre-up wpa_supplicant thing
<RevertToType> (WPA2, AES, Hidden ssid, ... i know this isn't making it easy)
<RevertToType> and it didn't work (adding the managed line)
<RevertToType> blech now lets say i just wanted to add that simple line (dhclient wlan0) to the end of my startup... used to be rc.local ... no clue what it is now, where do i put this rubbish?
<sarnold> RevertToType: /etc/rc.local should still work
<RevertToType> in the interest of being like "hrm" I don't see such a file
<sarnold> granted, if you put it at the end, any services that should bind to that interface that are started via startup scripts will probably also fail
<RevertToType> frick
<RevertToType> so what do?
<sarnold> interesting, the file I've got there is reported as unowned by dpkg -S /etc/rc.local
<RevertToType> hrm
<RevertToType> hrm there it is
<RevertToType> oi vey
<sarnold> try using it anyhow. add an 'exit 0' as the last line, '#!/bin/sh -e' as the first line, mode 755, owner root:root
<iNs> sarnold, purged everything and done from scratch - same thing
<iNs> dafuq
<sarnold> iNs: dang. if it were mine to solve I'd use strace on both client and server and hope it leaves some clues behind. reading strace output isn't much fun but might give an idea where things go astray
<RevertToType> it didn't work
 * RevertToType scratches her head
<RevertToType> this is so utterly baffling
<iNs> welcome to the baffled club RevertToType lul
<prudentmav> where if the file located that runs when a new user is created?  Wanting to create a public_html folder in their home directory, install october cms and then create file for sites-available folder then enable the site
<prudentmav> where is*
<RevertToType> so like techincally i should be able to run rc.local as if it were any other script right?
<RevertToType> like
<RevertToType> I should be able to ./rc.local and execute it if all goes according to plan yes?
<sarnold> prudentmav: /etc/skel for the public_html, /usr/local/sbin/adduser.local for the adduser site-local customizing
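For anyone following along: adduser invokes /usr/local/sbin/adduser.local (if present and executable) with the new user's name, uid, gid, and home directory. A minimal sketch of such a hook; the /etc/skel-web template path and the helper function name are assumptions:

```shell
#!/bin/sh
# /usr/local/sbin/adduser.local -- called by adduser as:
#   adduser.local USERNAME UID GID HOMEDIR
set -e

setup_web_home() {
    user="$1"
    home_dir="$2"
    # Create the user's web root and seed it from a template directory
    # (/etc/skel-web is an assumed path -- adjust to taste).
    mkdir -p "$home_dir/public_html"
    if [ -d /etc/skel-web ]; then
        cp -R /etc/skel-web/. "$home_dir/public_html/"
    fi
    # Hand ownership to the new user; skipped when not running as root
    # or when the account does not exist (e.g. in a dry run).
    if [ "$(id -u)" -eq 0 ] && id "$user" >/dev/null 2>&1; then
        chown -R "$user" "$home_dir/public_html"
    fi
}

# adduser passes the username as $1 and the home directory as $4.
if [ -n "${1:-}" ] && [ -n "${4:-}" ]; then
    setup_web_home "$1" "$4"
fi
```

Creating the sites-available file and enabling the site would slot into the same function with a heredoc and a2ensite call.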
<sarnold> RevertToType: mostly
<RevertToType> i'm geting a !/bin/sh not found O_O
<prudentmav> thanks sarnold
<RevertToType> when i attempt to run it from terminal (cause why not)
<sarnold> RevertToType: the startup environment is typically very different from a logged-in-user's environment. so something that works perfect as a logged in user may not work well when booted.
 * RevertToType nods
<sarnold> RevertToType: #!/bin/sh ?
<RevertToType> yeah
<RevertToType> running it from command throws that error however it properly executed dhclient
<RevertToType> but didn't on boot
<RevertToType> as i said baffling
<sarnold> and it's root:root, mode 755?
<RevertToType> def mode 755
<RevertToType> how do i tell the first part?
<sarnold> ls -l /etc/rc.local
<RevertToType> -rwxr-xr-x 1 root root 319 (?WHAT?)
<sarnold> RevertToType: that means 755, one hardlink, owned by user root, group root, 319 bytes
<RevertToType> so it should be all good
<RevertToType> no clue why it hates me
<sarnold> yes
<sarnold> yes :)
<RevertToType> honestly getting this thing to even connect to the freaking wireless was a chore
<RevertToType> like not a single tutorial i walked through worked until i just started parsing out as much as i could from arch and debian tutorials :V
 * RevertToType drools staring at the screen
<RevertToType> this has made me feel more and more incomprehensibly dumb every second
<sarnold> it shouldn't
<sarnold> wireless is miserable
<sarnold> the debian networking configuration is miserable
<sarnold> combined they are very miserable.
 * RevertToType continues to drool
<RevertToType> but now we're not even talking about wireless
<sarnold> I know I got it working a decade back on a laptop, but, as much as I detest network-manager, it does seem to more or less mostly work on wireless things. :/
<RevertToType> we're talking about a single command that does run in terminal a-ok
<RevertToType> but doesn't in a startup script
<RevertToType> and oh god yes, even with a wm i had a dog of a time on ub9.04 and my old netbook
<RevertToType> i remember the nightmares
<RevertToType> do i need to do like update-rc.d or something?
<RevertToType> i mean i shouldn't have to but at this point you could tell me 'create a script that just forks endlessly' and i'd believe it
<sarnold> RevertToType: /etc/rc.local should be run by /etc/init.d/rc.local during boot without any real effort on your part
<sarnold> RevertToType: pastebin your script?
<RevertToType> ack systemd might be the ish?
<RevertToType> no need to paste (excluding commented lines)
<RevertToType> !/bin/sh -e
<RevertToType> sudo dhclient wlan0
<RevertToType> exit 0
<sarnold> that first line needs the #!
<sarnold> the kernel is looking for those two bytes, they've got to be there. :)
<RevertToType> damnit
<RevertToType> my brain
<sarnold> take out the 'sudo', it's already running as root..
<RevertToType> it was a second add after the first 2 failures :V
<sarnold> and it'd be best to give the full path to dhclient, the PATH is often very constrained during boot scripts.
<sarnold> :)
<RevertToType> oh yeah
<RevertToType> thx
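Putting sarnold's three fixes together, the script sketched above would become something like this (a sketch only; the dhclient path can vary by release, `command -v dhclient` will tell):

```sh
#!/bin/sh -e
# /etc/rc.local -- hypothetical corrected version: shebang restored,
# sudo dropped (the boot script already runs as root), and a full path
# used because PATH is minimal while boot scripts run.
/sbin/dhclient wlan0
exit 0
```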
<RevertToType> oi
<RevertToType> freaking staring at this screen has made me dumb and just near constant callllll/interruptions
<iNs> i am as well pulling my hair out
<iNs> both out of my head and beard now as well, thefuk
<RevertToType> don't lose dat beard
<iNs> im closer to losing my mind actually
<RevertToType> sarnold: no joy :/
<sarnold> RevertToType: alright, what went wrong? did you get any error messages logged? anything in dmesg?
<RevertToType> hrm
<RevertToType> nothing relating to rc.local
<RevertToType> i do however see that ipv6 seems to be fine
<RevertToType> hrm do i need to set up some freaking dhcpconf kinda rubbish?
<RevertToType> i mean regardless the rc.local thing should do it
<sarnold> RevertToType: maybe remove all mentions of this interface from /etc/network/interfaces -- stick everything needed for it in the /etc/rc.local -- modprobe whatever modules are needed, add the iwconfig and wpa_supplicant and dhclient commands there manually..
<RevertToType> ugh
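What sarnold describes, sketched as a complete /etc/rc.local; the driver module, SSID, and config path are all placeholders, and this assumes a WPA network (for an open network an iwconfig essid line would stand in for wpa_supplicant):

```sh
#!/bin/sh -e
# hypothetical all-in-one /etc/rc.local replacing the interfaces stanza
/sbin/modprobe ath9k                     # whatever driver the card needs
# /sbin/iwconfig wlan0 essid "MyNetwork" # open networks only
/sbin/wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
/sbin/dhclient wlan0
exit 0
```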
<iNs> sarnold, you wont believe it
<teward> sarnold: ping
<iNs> by habit, even simple passwords of mine contain numerical/special characters ...
<teward> (apologies for duplicate posts, laggggggggggy)
<sarnold> heya teward
<iNs> sarnold, using a simple password fixed it, lawl
<sarnold> iNs: .. which oddball character broke this? :)
<iNs> im gonna check it now lol for test purposes
<iNs> seriously
 * iNs knocks his head
<sarnold> iNs: and worse yet.. dare look at the code and see if it's exploitable? :)
<teward> sarnold: http://dark-net.net/?p=100 <-- the landscape-and-gitlab headache.  hoping to god i can find a replacement CPU for my ESXi box :/
<teward> (it's still not 100% fixed :/)
<teward> rbasak: FYI: nginx ppa builds delayed due to other (private business) builds that are taking an emergency-priority level.
<teward> :/
<iNs> sarnold, # breaks it
<iNs> : d
<sarnold> iNs: haha, probably killed by a comment parser in an earlier pass..
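iNs's bug in miniature: a config reader that strips '#'-comments the way sarnold suspects will silently truncate any password containing '#'. The password value here is made up.

```sh
# sed stands in for the hypothetical comment-stripping pass
line='password=Secr#t123'
parsed=$(printf '%s\n' "$line" | sed 's/#.*//')
echo "$parsed"   # prints: password=Secr  -- the '#' and everything after it is gone
```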
<sarnold> teward: nice :)
<sarnold> teward: man there's a lot of fiddly things to change there
<iNs> wasted so much time for this ;d
<RevertToType> apparently ub15.04 is using systemd and upstart ... perhaps it's time to look into that for this one simple stupid command
<teward> sarnold: yeah, the Apache redirs don't help
<teward> (change the port, port redirs are still enforced by Landscape, even if the baseurl isn't set as such)
<teward> Landscape used to be on my ESXi box
<teward> as a VM
<teward> then the CPU died
<teward> so...........
<teward> (we need ARM landscape-server xD)
<teward> (then my RPi can be my Landscape server xD)
<sarnold> RevertToType: pitti put this together, it's been useful reading for me https://wiki.ubuntu.com/SystemdForUpstartUsers
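An aside on the systemd route from that page: on a systemd release, the same one-shot job could live in a small unit of its own instead of rc.local. Everything below is a hypothetical sketch, names and paths included:

```ini
# /etc/systemd/system/wlan0-dhcp.service (hypothetical)
[Unit]
Description=Bring up DHCP on wlan0
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/dhclient wlan0

[Install]
WantedBy=multi-user.target
```

Enabled once with `sudo systemctl enable wlan0-dhcp.service`.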
<sarnold> heh, would an rpi have the ram to make it work? :)
<RevertToType> nice i'll poke into that... gotta run but sarnold thank you so much for your help
<teward> sarnold: potentially?  Landscape never ate more than 768MB on my system, but meh
<teward> sarnold: the big problem is that my ESXi box did blow up on me, without that, everything's on the converted desktop that's my Gitlab machine
<sarnold> RevertToType: good luck :)
<teward> sarnold: btw can i grab your opinion on something?
 * RevertToType salutes sarnold
<sarnold> teward: 768, that's not bad
<teward> sarnold: only 3 systems on it right now, the others are status:dead because ESXi
<teward> but i still need your thoughts
<teward> as a sec team person :P
<teward> PM?
<sarnold> sure
<iNs> anyway sarnold thanks as well
<iNs> "ridiculous solutions vol. XX"
<sarnold> iNs: haha :)
<sarnold> iNs: glad you got it! your beard will thank you.
<iNs> haha, most def
<iNs> damn thesis deadlines
<iNs> gotta take a break, this thing annoyed the shit out of me lol
#ubuntu-server 2015-08-27
<AEL-H> I am having trouble mounting a drive such that only the root can access this, how can I do this? plain old chmod doesn't appear to be working..
<AEL-H> I chmod 700 the mountpoint and then mount the drive, however the permissions become rwxr-xr-x as soon as the drive is mounted. The drive filesystem is NTFS
<sarnold> interesting, the mount(8) manpage says ntfs ought to be root-owned and not readable by anyone else
<zerick> hi, I have an empty /var/log/messages file, while current log seems to be /var/log/messages.1, any idea ?
<zerick> also happens with other logs
<sarnold> zerick: that can happen sometimes, it's a race condition between the log rotating and the syslog :/ it's been ages since I've seen it, but there it is...
<okloks> ÑÐ°Ñ ÑÐ°Ñ
<okloks> any1one?
<okloks> anyone alive?
<sarnold> waka waka?
<okloks> I have a question about postfix+spamassassin+amavis-new
<okloks> ?
<sarnold> okloks: irc works best if you just ask questions, rather than asking if you can ask questions
<sarnold> okloks: if you'd asked six minutes ago, you might have received an answer by now :)
<sarnold> or maybe not. it's too late for americas and still early for europe
<okloks> ok, I had postfix + spamassassin, I saw X-Spam flags in mail headers. Now I installed amavis-new, uncommented spam settings and the spam flags disappeared from headers. Looks like spamassassin doesn't work. Maybe I need to provide some config here..
<samthewildone> how do I get the nfs service installed on my server, I thought it was installed by default (daemon)?
<tarpman> samthewildone: install nfs-kernel-server
<samthewildone> tarpman, I'm having trouble with the nfs
<samthewildone> I added "/home/user/nfs_free IP/MASK(ro,sync)" and it does not work
<samthewildone> after I reset the service.
<tarpman> samthewildone: can you be more specific than "does not work"?
<samthewildone> http://codepad.org/1ucPIDGk
<tarpman> samthewildone: I don't see anything not working there
<tonyyarusso> samthewildone: Yeah, there's no error in that paste.
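For reference, samthewildone's export with the placeholders filled in (the network and path here are made up), plus the command that applies /etc/exports changes without a full service restart:

```sh
# as root: append the export, then re-export and verify
echo '/home/user/nfs_free 192.168.1.0/24(ro,sync)' >> /etc/exports
exportfs -ra    # reload all entries from /etc/exports
exportfs -v     # list what is actually exported now
```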
<adun153> Hi, question about mysqld: What's the proper my.cnf variable, secure_auth = 1 or secure-auth = 1?
<lordievader> Good morning.
<rbasak> teward: np
<AEL-H> I am having trouble mounting a drive such that only the root can access this, how can I do this? plain old chmod doesn't appear to be working..
<AEL-H> I chmod 700 the mountpoint and then mount the drive, however the permissions become rwxr-xr-x as soon as the drive is mounted. The drive filesystem is NTFS
<pmatulis> interesting. my trusty suddenly wants to install (not upgrade) gnupg2 during a dist-upgrade
<jpds> pmatulis: gnupg2 != gnupg
<pmatulis> jpds: yes, i know
<pmatulis> jpds: so why is it getting installed
<pmatulis> i see that enigmail wants to get upgraded. it must be pulling it in
<pmatulis> yep, that's what's going on. but then i will have both gnupg and gnupg2 installed. weird
<pentiumone133> i have a fresh 14.04 machine with a fresh install of samba 4.1.6, and im wondering what is the proper way to restart samba
<pentiumone133> im using it for simple filesharing, not any of the active directory stuff
<pentiumone133> there is /etc/init.d/smbd, samba and samba-ad-ds
<ogra_> you want /etc/init ... not init.d
<ogra_> and to stop or start it simply: sudo stop smbd ... sudo start smbd
<pentiumone133> when i do that it doesnt seem to actually restart
<pentiumone133> it tells me its starting and stopping, but it doesnt seem to reflect changes ive made
<pmatulis> pentiumone133: what changes?
<pentiumone133> as an experiment i had a share that was locked down to specific users, i removed one of the users from the allowed list and restarted smbd, and was still able to mount the drive as the user i just removed
<pmatulis> pentiumone133: you'll prolly need to pastebin the output to 'testparm -sv' for anyone to help
<pentiumone133> pmatulis: http://paste.ubuntu.com/12206146/
<pentiumone133> its really just a "stock" config file
<neurotus> how can i execute a binary file from .sh file when uid=0 but as another user
<lordievader> krsna: sudo -u <user> ./<path to binary>?
<jak2000> RoyK hi
<RoyK> jak2000: ho
<AEL-H> found solution --> needed to use ntfs-3g
<RoyK> AEL-H: another solution is to just mkdir -p /safe/something, chmod 0700 /safe, mount /dev/somentfsfilsystem /safe/something
<RoyK> AEL-H: but then, you should use ntfs-3g *anyway*
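RoyK's trick in a runnable sketch: the mountpoint sits under a 0700 parent, so non-root users cannot traverse into it regardless of what permissions the NTFS driver puts on the mounted filesystem itself. mktemp stands in for /safe so the sketch is safe to run unprivileged; the device name is a placeholder.

```sh
base=$(mktemp -d)              # stand-in for /safe
mkdir -p "$base/something"     # stand-in for the mountpoint
chmod 0700 "$base"
stat -c '%a' "$base"           # prints: 700 -- only the owner may enter
# then, as root:  mount -t ntfs-3g /dev/sdXN "$base/something"
```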
<pmatulis> is it possible to disable the LXC cache?
<poli> What would be a faster/better alternative to mpstat to check on cpu load for scaling up a horizontal web server? mpstat takes a really long time to reflect load increases that top, for example, does not.
<sarnold> poli: what specifically are you trying to measure?
<poli> sarnold: CPU load
<poli> sarnold: specifically the amount of free cpu cycles in relation to the total available, the %free cpu or %used cpu.
<poli> sarnold: I am a bit lost in the "integration" thing mpstat must be doing. "mpstat" is giving me 70% free while "mpstat 1" gives me the correct 0. Seems like mpstat attenuates the change in load over time for some particular reason.
<RoyK> poli: sar is nice, though
<sarnold> poli: hmm, a quick look through the top source code (not recommended, eww) suggests that it goes to some effort to try to calculate a percentage-of-cpu -- that's just not an easy measurement to make.
<poli> sarnold: "mpstat 1 1" gives me a different format in the result, but seems to solve the problem. I wonder why, though.
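The behaviour poli ran into is documented: with no interval argument, mpstat prints averages since boot, which barely move on a long-running machine; given an interval it samples live, which is why the numbers suddenly agree with top:

```sh
mpstat          # CPU averages since boot -- sluggish to reflect new load
mpstat 1 5      # five one-second samples -- follows live load
sar -u 1 5      # RoyK's suggestion: the same live view from sysstat's sar
```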
<Daphko> Hello, is it possible to handle virtualhosts when the request was forwarded from a reverse proxy?
<Daphko> nobody there ?
<teward> patience lol
<teward> you haven't stated your webserver, etc.
<Daphko> I have one apache on Server 1 which passes all requests via reverse proxy to Server 2; on Server 2 the apache should manage the requests through virtualhosts
<tyuyui> hi ther
<tyuyui> i got this error
<tyuyui> i got this Error: no display specified
<Daphko> teward do you have an idea ?
<tyuyui> when i try to open an application over server x
<tyuyui> from windows pc
<tyuyui> with putty / xming ?
<sarnold> tyuyui: do you have an X server running on your windows machine?
<tyuyui> yes of course
<sarnold> tyuyui: did you use the windows-ssh-client equivalent of ssh -X? if that alone doesn't work, try its equivalent of ssh -X -Y
<tyuyui> i m using putty with x forward enable
<tyuyui> with xming server
<sarnold> tyuyui: is the DISPLAY environment variable set in the shell where you're trying to start the programs?
<tyuyui> wht do you mean ?
<sarnold> tyuyui: $ ssh -X localhost 'echo $DISPLAY'
<sarnold> localhost:10.0
<sarnold> .. the DISPLAY environment variable needs to be set so X clients know where to contact the server
<tyuyui> Error: cannot open display: localhost:10.0
<bdx> Hows it going everyone? Does anyone here know how to add custom cloud-config to maas provisioning....i.e. curtin_userdata preseed or custom preseed e.g. curtin_ubuntu_amd64_generic?
#ubuntu-server 2015-08-28
<jak2000> how to know wich program listen on a specified port, sample: mysql listen on port 3306 how to know?
<patdk-lap> jak2000, cause you ask your program
<adun153> Quick question, when should I choose PostgreSQL over MySQL and vice versa?
<sarnold> adun153: some applications are written so that they can only support mysql
<sarnold> adun153: but since postgresql had ACID transactions about a dozen years before mysql, I've always had a soft spot in my heart for postgresql
<sarnold> adun153: and now that mysql is run by oracle .. well ..
<adun153> well.. that, too.
<sarnold> jak2000: netstat -anp might be your friend
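A self-contained demo of the tip: start a throwaway listener (python3's built-in web server stands in for mysqld here), then ask ss, netstat's successor, which process owns the port. Port 3306 is just the example from the question.

```sh
python3 -m http.server 3306 --bind 127.0.0.1 >/dev/null 2>&1 &
pid=$!
sleep 1
# -l listening, -t tcp, -n numeric, -p owning process (own processes only,
# unless run as root)
ss -ltnp | grep ':3306'
kill "$pid"
```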
<frediz> rbasak: Good morning there. About kimchi, I had mails after the technical interlock, yesterday, saying that we were missing things on kimchi. I thought that you did the upload after I provided the last packaging on mentors (s/ln/cp change). Isn't that the case ?
<Celphish> Anyone awake?
<Celphish> I'm trying to download a pdf through sftp from an ubuntu 12.04-server, but it keeps saying:
<Celphish> open for read: no such file or directory
<sarnold> are there any funny characters in the filename? i seem to recall that sftp had a problem with one or another character
<Celphish> sarnold: well, yea, åäö
<sarnold> (which is funny, I thought sftp was supposed to handle that better than scp. bah.)
<Celphish> sarnold: that worked, can't believe I missed that! thanks!
<sarnold> Celphish: try doing something like sftp 'user@host:/path/to/f\å\ä\öilename' ....
<sarnold> Celphish: hah, what worked? :)
<Celphish> sarnold: I just replaced å and ä with a, and ?u with u, ö with o and it worked :D
<sarnold> ???
<sarnold> i'm surprised that worked :)
<lordievader> Good morning.
<rbasak> frediz: hi!
<rbasak> frediz: no, I had no idea you had uploaded an update. I had pinged you on here a couple of times but never saw an answer.
<frediz> rbasak: well it was just about the "ln" blocker issue
<frediz> are we in phase on that ? :)
<rbasak> Yeah, that was the only thing I wanted fixed before upload
<rbasak> That's fixed now on mentors?
<frediz> mmm then that should be good; I was sure to have it done then
<frediz> you're making me doubt :) .. let's check
<rbasak> It's just that mentors doesn't really provide an opportunity to tell apart when things have been fixed in a new upload because they'll have the same version number and no changelog entry.
<frediz> you're right
<frediz> Uploaded: 2015-08-20 12:05
<frediz> that should be my change
<rbasak> I see now. Thanks! Sorry, I just hadn't realised that you had uploaded a new version.
<maswan> Here's a question, one of our VMs is unreachable over IPv6 from our other VMs on one of its addresses, but reachable from the world (or the HW hosting them). Anyone with clues about that?
<rbasak> frediz: thank you for syncing up there, and sorry for the confusion. It's over to me now - I'm no longer blocked.
<frediz> rbasak: wait, that seems to be wrong what's in there
<frediz> damn
<sarnold> maswan: is it perhaps a link-local address or something else that's not supposed to be routable? https://en.wikipedia.org/wiki/IPv6_address#IPv6_address_classes
<rbasak> frediz: yeah - looking at it, that should be in debian/rules now, or a dh_install file, rather than the postinst.
<frediz> rbasak: right, I'll ping you once it's re-uploaded
<rbasak> frediz: OK. Thanks!
<maswan> sarnold: Nope, publically routable and it is reachable over the internet
<maswan> and it is only one of the two v6 addresses it has that is unreachable, both on the same network (just different last octet)
<maswan> ah, I fixed it by randomly "ip addr del; ip addr add" for the same IP...
<sarnold> eww :)
<maswan> yeah
<maswan> Looking at nagios history it stopped working spontaneously at 4 in the morning, 3.5 days after the last reboot..
<gartral> hey all, I have a very old production server that I *MUST* upgrade, is there any way to list the installed packages and have apt reinstall the system and force through the old packages as newer variants?
<sarnold> what's wrong with do-release-upgrade?
<gartral> sarnold: Ubuntu 14.04
<maswan> you can play with dpkg --get-selections etc, but do-release-upgrade is much more likely to work since some packages change names, etc.
<maswan> gartral: the old versions are available with a different apt source, so you can get the last released version there and then do-release-upgrade
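The dpkg route maswan mentions, for the record; as he says, do-release-upgrade remains the safer path because it knows about renamed packages:

```sh
dpkg --get-selections > packages.txt     # on the old server
# copy packages.txt to the target machine, then:
sudo dpkg --set-selections < packages.txt
sudo apt-get dselect-upgrade             # install everything selected
```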
<gartral> maswan: ubuntu 14.04
<gartral> maswan: how?
<sarnold> so, "very old" means "16 months"?
<gartral> er, sorry,, oh crud.. i misread, 10.04
<maswan> gartral: http://old-releases.ubuntu.com/ubuntu/
<gartral> IE... "OLD"
<sarnold> that's more like it. hehe. :)
<maswan> yeah, but still only out of support for a year or so. :)
<sarnold> do-release-upgrade still ought to take you from 10.04 to 12.04 though
<maswan> yeah
<sarnold> here i was afraid this was going to be something like a red hat linux 7.2 or something :)
<maswan> but if you don't have that installed, you might need to get old-releases apt sources
<maswan> especially if it wasn't updated for the last couple of years of running [brr]
<gartral> maswan sarnold many many MANY packages are out of date in 12.04 and never updated clean...
<gartral> maswan: this is a super-stable platform, it hasn't been powered down since august 2012
<maswan> gartral: I've upgraded a dozen or so servers 10.04->12.04->14.04 fairly recently (just before 10.04 got removed)
<gartral> the company who owns it doesn't want it down for more that 15 minutes
<maswan> hm. that'll be problematic.
<gartral> please excuse my typos, it's 5:20 AM for me
<sarnold> then definitely image it, and do the updates in a VM first
<gartral> sarnold: I can't...
<gartral> sarnold: there's some kind of undocumented security module and custom kernel to match on it, all attempts at imaging the machine results in corrupted data
<sarnold> gartral: can you walk away from this? this sounds like you're guaranteed to fail.
<gartral> these people are, for whatever reason, extremely paranoid
<maswan> Yeah, given those constraints, I'd build a new server, and then move services over.
<gartral> sarnold: not really... i could, but i don't know what they'll do...
<sarnold> that's a far more predictable outcome..
<maswan> (as a side note, paranoia that prevents you from being up to date with security updates is rather misdirected paranoia :/ )
<gartral> maswan: impossible, they went through the trouble of having a custom BMC built into a motherboard that handles security
<sarnold> man are they going to be pissed when it blows capacitors
<maswan> gartral: Ok. Will the custom kernel work with a newer base OS though?
<maswan> gartral: Or are you looking at replacing that when you upgrade?
<gartral> sarnold: all the caps are socketed... it's like alien tech
<sarnold> gartral: ooh.
<maswan> Anyway, do-release-upgrade and let it take the time it takes then. Just flag that 150 minutes is more likely than 15.
<sarnold> gartral: well, alright, maybe they aren't as insane as I thought..
<gartral> maswan: it looks like as long as the hardware doesn't change it'll be happy
<sarnold> unless their storage has bonkers iops, 150 minutes is way more likely than 15 :)
<maswan> (and that if things go wrong, it could easily take 4 times that time)
<sarnold> yup.
<gartral> sarnold: there's a few packages that are marked 8.10 that really don't want to be upgraded
<gartral> the really weird thing is I know what's running.. it's nothing that shouldn't be possible to upgrade
<gartral> and they're not holding a gun to me or anything like that, they're just... belligerent
<maswan> hm. old held packages might be tricky, unless you can release them
<maswan> part of it is to bring all the base libs etc up to more modern and supported versions
<gartral> maswan: nope I can't, because there are packages so old they don't have modern equivalents
<gartral> sorry for my errors, i'm very tired
<maswan> sounds like you might also be helped by sleeping on it
<gartral> maswan: I've been working for a week... I've made progress, but this is my last day
<maswan> but for packages that are gone that don't have a modern equivalent but is software that is still needed. hm. the reasonable thing to do is to repackage it yourself, or compile it locally, etc. old packages are likely to break too when you upgrade under their feet.
<gartral> maswan: some of them are old google-code repo packages that can't be updated...
<maswan> gartral: ick.
<gartral> the whole thing's a mess
<gartral> well they've paid me for my time, i have a great new comp because of it... i'm going to tell them it's a loss
<frediz> rbasak: I've reuploaded kimchi on mentors. I tested also to install because the change to debian/rules implied a few others.
<rbasak> frediz: OK, thanks! I'll take a look.
<frediz> thank you
<maswan> yeah, I mean, it could possibly be done but I'd say that first you need to go in and take all these old packages and repackage them (possibly from installed files on the server), and then do some test runs etc, but it seems like we're talking about a month or two, not a week
<maswan> gartral: As in, start by cleaning up the mess, one bit at a time, then once it is clean enough...
<maswan> gartral: Or, build a new from scratch and move services over. Even for an alien server, it might be cheaper than that manpower.
<sarnold> .. and with a machine this touchy I think I'd want to do the moral equivalent of do-release-upgrade entirely by hand.
<maswan> sarnold: Yeah, but then you need to dig in and see the corner cases that do-release-upgrade script solves that aren't just plain package upgrades.
<sarnold> maswan: yeah. it wouldn't be a quick thing..
<gartral> yea, they weren't too happy, but the consensus is basically "Good show, but we're letting you go, take the computer we gave you and clear out, you have an hour"
<sarnold> eep
<gartral> i'll talk to you all later
<sarnold> take care gartral
<gartral> i have my own server which needs help
<sarnold> and hopefully a bed that's still warm? :)
<Fab_> Hi
<Guest77264> Hi
<lordievader> o/
<TheEternalAbyss> this may be a slightly stupid question but...on a default installation of ubuntu15 server and apache.. without having a domain name configured on the VPS where all this is installed... should typing in http://ipaddess/   get me to where my html files should be? or am I missing something here
<TheEternalAbyss> oh wait I think I see he problem
<TheEternalAbyss> the* problem..
<Daphko> do anybody know why i can not ping from an vserver to an ip (Server) where the vserver belong to ?
<Voyage> Hi
<Voyage> These are my devices I was able to sort out. I hope that identification is enough. Now I have NO idea how to set 2 groups of multiseat. http://pastie.org/10381571   I followed https://help.ubuntu.com/community/MultiseatX#Single_xorg.conf but its over my head. Any clues?
<ssmoCoffee> join /#juju
<dv310p3r> need help mounting a 3tb usb drive, getting the following error: ntfs-3g: Failed to access volume '/dev/sdb1': No such file or directory
<dv310p3r> ubuntu 14.05
<dv310p3r> sorry 14.04
<dv310p3r> /dev/sdc1               1  4294967295  4294967292   ee  GPT
<dv310p3r> output of fdisk -l
<dv310p3r> buehler?
<teward> dv310p3r: well, you're putting /dev/sdb1 in, but it's clearly /dev/sdc1
<teward> (I assume that's you from about 20 minutes ago)
<dv310p3r> Correct
<dv310p3r> sorry, it happens for sdc1
<dv310p3r> ntfs-3g: Failed to access volume '/dev/sdc1': No such file or directory
<teward> pastebin the full `fdisk -l` ?
<dv310p3r> sudo mount -t ntfs-3g /dev/sdc1 /media/external
<dv310p3r> http://pastebin.com/r81fDzK9
<tarpman> dv310p3r: first, please note the part in the output where it says fdisk can't read GPT and you have to use e.g. parted. second, regardless of what fdisk says, does /dev/sdc1 actually exist in your filesystem
<teward> run `sudo parted /dev/sdc print` and pastebin it.  Use paste.ubuntu.com.
<teward> and also what tarpman says :)
 * teward failed to see 'gpt' there :)
<tarpman> teward: oops, sorry for jumping over you :)
<teward> tarpman: no problem
<teward> i'm on laggy internet right now
<teward> fighting the fifty other students in the area tryin to hog my net
<teward> when i need it
<teward> still, pastebin the output of `sudo parted /dev/sdc print`  :)
<teward> god this mini wireless keyboard is awesome because now i can carry it around with my laptop xD
<teward> AND not have to worry about the laptop touchpad >:)
<teward> (brb, tarpman can take over xD)(
 * tarpman 
<dv310p3r_2> tarpman, yes it does exist
<dv310p3r_2> teward, Error: /dev/sdc: unrecognised disk label
<dv310p3r_2> results of sudo parted /dev/sdc
<teward> dv310p3r_2: `ls /dev/ | grep sd`
<teward> pastebin
<dv310p3r_2> http://pastebin.com/VAg7WDCW
<echo1> Hi, I'm facing an issue with wifi connections using nmcli on my ubuntu server 14.04 32bit: "nmcli -p dev wifi" gives me nothing back. I checked first that my wifi interface is up: "ip link show" reports "wlan0 <BROADCAST, MULTICAST, UP, LOWER_UP> mq state UP". To be sure I ran "ip link set wlan0 up" again, and then made sure NetworkManager is enabled by running "nmcli nm enable". My scanning with
<echo1>  "nmcli -p dev wifi list" gives me nothing back as I said, but "iw dev wlan0 scan" does show me the signals around me. How can I connect to wifi with nmcli? When I plug in a second wifi interface I do get a response for it, isn't that weird?
<dv310p3r_2> Just as a note. I pulled this drive out of a ubuntu server, it was connected via SATA internally, now it's connected via USB 3.0 dock to the new ubuntu server
<tarpman> dv310p3r_2: we're talking about the 3TB disk (sdc), right? parted says there are no partitions, /dev/says there are no partitions. maybe the filesystem covers the entire device?
<tarpman> dv310p3r_2: how were you mounting it on the old server? the corresponding fstab line would be helpful
<Norbin> why would i get this error while trying to ifdown a network adapter? http://i.imgur.com/dSSqdIH.png
<Norbin> it's clearly configured and is working...
<tarpman> Norbin: most likely it was configured by something other than ifup
<tarpman> Norbin: network-manager, ifconfig, etc
<Norbin> oh...
<Norbin> ye ifconfig :O
<Norbin> so i can't if down/up this network card this way?
<tarpman> Norbin: take it down with the same tool you brought it up with, is all
<Norbin> got it, thanks
<Norbin> (had no clue this how it works.. xD)
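tarpman's rule of thumb, concretely: ifdown only knows about interfaces that ifup (via /etc/network/interfaces) brought up, so the pairs must match. Interface name is a placeholder.

```sh
# brought up by hand -> taken down by hand
ifconfig eth1 up
ifconfig eth1 down
# defined in /etc/network/interfaces -> managed by the ifupdown pair
ifup eth1
ifdown eth1
```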
<dv310p3r_3> tarpman, ok one sec
<tarpman> dv310p3r_3: you are incrementing :o
<dv310p3r_3> I don't know why
<dv310p3r_3> http://pastebin.com/pyQz8PHd
<dv310p3r_3> fstab line from old server mounting that drive
<tarpman> dv310p3r_3: something is not adding up. i don't know why the new server would not detect partitions where the old one did
<dv310p3r_3> I know
<dv310p3r_3> it's killing me.
<dv310p3r_3> Could it be the USB dock somehow?
<dv310p3r_3> the connection was SATA before I pulled it
<tarpman> a usb dock _should_ not interfere that way, but i can't prove anything
<dv310p3r_3> tarpman, you've given me some good info and help, It's much appreciated
<dv310p3r_3> my search continues
<tarpman> dv310p3r_3: just out of interest. can you unplug the usb, plug it in again, and pastebin the related new lines from dmesg
<tarpman> the fdisk output you pasted looks sane enough
<tarpman> (give it a few seconds to settle after plugging in before running dmesg)
<Daphko> i have installed ubuntu server on hyperv, when i try to ping my root server where hyperv is installed, i dont receive any answer
<Daphko> have anybody an idea why ?
<zach2825> hi, i have a post fix queuing questions..
<RoyK> Daphko: I gues the hyper-v folks might know. last time I was using hyper-v, some three years back, only redhat was supported and ubuntu had major issues with networking intermittently dropped
<RoyK> s/gues/guess/
<Daphko> RoyK: the problem is that i can ping google or other servers
<Daphko> i only can not ping the host where hyperv is installed
<echo1>  I have solved my problem: 1. Uncomment everything in /etc/network/interfaces under interface lo, in other words uncomment wlan and eth configuration. 2. Reboot. 3. Run: nmcli dev wifi con SSID_name password SSID_password iface wlan0
<RoyK> Daphko: ask the hyper-v folks
<RoyK> Daphko: they may have put up heavy firewalling on it
<RoyK> which may indeed make sense
<RoyK> our ESXi hosts are locked into an RFC1918 network not available from anywhere but a small network only accessible by select personnel
<Daphko> do they have an channel ?
<RoyK> Daphko: why do you want to ping the host?
<Daphko> i have an application vhost on hyper-v, and on the root server where hyperv is installed, there is git installed. From everywhere else i can pull my project but not from the vhosts
<Daphko> so i tried so ping my host and see that this is not working
<RoyK> Daphko: don't do that
<Daphko> so i can not use git over http
<RoyK> Daphko: create a tiny vm for git instead
<RoyK> Daphko: don't use hosts for anything but hosts
<zach2825> does anyone know how to check which domain is queuing emails in postfix? one of the word press instances i host is spamming emails but there are to many to shut down and wait till it stops.
<RoyK> Daphko: you'll probably need some disk space and 256MB memory for git to work
<RoyK> Daphko: in a separate vm
<RoyK> Daphko: always use separation - that's why you have virtialisation
<Daphko> i know but i use stash as an application
<Daphko> on my host
<Daphko> for managing repositories
<RoyK> just listen, ok? separate host and guest, use the host as a host only
<JaguarDown> Hi all newbie here. My home server uses namecheap freeDNS/dynamic dns service. Apparently ddclient 3.8.1 has a problem updating multiple domains/subdomains so I am just using the python script at https://help.ubuntu.com/community/DynamicDNS#Namecheap_.26_Python
<JaguarDown> However I have a subdomain "irc" I want to update as well, I imagine that would be easy but I have no python knowledge.
<jelly> what do you mean by subdomain, JaguarDown ?
<JaguarDown> in otherwords I have an A record that points irc.<mysite>.com to my IP
<JaguarDown> (for the sole purpose of aesthetics when connecting to my irc daemon)
<JaguarDown> ddclient could do this easily simply by specifying "irc" next to "@" on the last line of the config file. I want to do the same with this python script but I don't know how. I tried searching to no avail.
<RoyK> JaguarDown: it would have to be supported on the dyndns serverside too
<JaguarDown> True...the ubuntu documentation specifically states the script works with Namecheap
<JaguarDown> I could only assume you can update the subdomains with it.
<JaguarDown> Namecheap has no docs on python.
<jelly> JaguarDown: ok, so "<mysite>.com" is your domain, and "irc.<mysite>.com" is the A record you wanted updated via Namecheap dynamic dns API?
<JaguarDown> yes
<jelly> "irc.<mysite>.com" isn't a "subdomain", it's just a hostname (fully qualified)
<JaguarDown> ok my mistake
<JaguarDown> I suppose if it were set up as a CNAME it would be considered a subdomain?
<jelly> you can easily use the API to update irc.<mysite>.com A record just like you update somethingelse.<mysite>.com
<JaguarDown> Ok that makes sense.
<jelly> it would be a subdomain if there were host records under it.
<JaguarDown> I just added a line in the python script to do that and it works! Thanks, Jelly.
<jelly> JaguarDown: ah, good
<jelly> namecheap's API is basically just a http request -- you can do it with a browser or wget https://www.namecheap.com/support/knowledgebase/article.aspx/29/11/how-do-i-use-the-browser-to-dynamically-update-hosts-ip
<jelly> so you basically just fill in https://dynamicdns.park-your-domain.com/update?host=[host_name]&domain=[domain.com]&password=[domain_password]&ip=[your_ip]
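jelly's template filled in with obviously fake placeholder values; host, domain, password, and ip all come from your Namecheap panel (the password is the per-domain dynamic-DNS password, not the account password):

```sh
host=irc; domain=example.com; password=SECRET; ip=203.0.113.5
url="https://dynamicdns.park-your-domain.com/update?host=${host}&domain=${domain}&password=${password}&ip=${ip}"
echo "$url"
# fetch it with: curl "$url"   (or: wget -qO- "$url")
```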
<Norbin> most likely a silly question but can you use ssh keys from a windows machine connecting to an open ssh ubuntu server?
<Norbin> so i am connecting via my windows-putty client to an ubuntu-ssh server, with the use of a key instead of password
<Norbin> is that even possible?
<tarpman> Norbin: yes.
<JaguarDown> yes I do it.
<tarpman> Norbin: the putty key tool has an Import menu item somewhere that will import your openssh private key and convert it to putty format
<JaguarDown> jelly: Thanks again. I bookmarked that link.
<tarpman> Norbin: and then you point putty at that converted key under SSH → Auth (IIRC)
<Norbin> i saw that now tarpman, only missing how i go about transferring the public key to the windows host so i can import it. official man says by using "ssh-copy-id username@remotehost
<Norbin> "
<tarpman> Norbin: I think you're confused. you want your *private* key on the machine you run putty on, and your *public* key on the machine you ssh _to_
<Norbin> oh. correct
<tarpman> Norbin: ssh-copy-id copies your public key, not useful here. (also it uses scp, so probably not useful with windows)
<tarpman> Norbin: how you go about transferring -- left as an exercise to the reader. i suggest a usb stick
<tarpman> Norbin: just don't email it to yourself, or send it over plain http, or something like that. those are very quick ways to have it intercepted...
<Norbin> tarpman thanks very much, i will see how i get the key over and then try to import it via putty or so, see if it lets me ssh over without being prompted for a password
<tarpman> Norbin: i'm an idiot. you're already using putty -- pscp/psftp would be a quick and secure way to copy it
<tarpman> or, heck, cat + copy-and-paste from putty...
<jrwren> you are using a password protected private key, right?
<Norbin> exactly what i am reading now tarpman :P http://unix.stackexchange.com/questions/106480/how-to-copy-files-from-one-machine-to-another-using-ssh
<jrwren> pagent is very nice.
<Norbin> Yes, user/password key
<jrwren> so the private key is encrypted with that password. email it to yourself all you want :)
<Norbin> yep it's AES 128 CBC encrypted
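As jrwren notes, a passphrase-protected key is stored encrypted on disk. A rough heuristic for spotting this in a traditional PEM-format key is sketched below; the key text is illustrative, and this is not a full parser (newer "OPENSSH PRIVATE KEY" blocks encode the cipher differently).

```python
# Heuristic check: traditional OpenSSL/OpenSSH PEM keys carry a
# "Proc-Type: 4,ENCRYPTED" header when passphrase-protected.
def looks_encrypted(pem_text):
    return "ENCRYPTED" in pem_text

# Illustrative fragments, not real keys.
encrypted_key = (
    "-----BEGIN RSA PRIVATE KEY-----\n"
    "Proc-Type: 4,ENCRYPTED\n"
    "DEK-Info: AES-128-CBC,0A0B0C0D\n"
    "...\n"
    "-----END RSA PRIVATE KEY-----\n"
)
plain_key = (
    "-----BEGIN RSA PRIVATE KEY-----\n"
    "...\n"
    "-----END RSA PRIVATE KEY-----\n"
)
```

Even so, an encrypted key is only as strong as its passphrase, so "email it to yourself all you want" assumes the passphrase is a good one.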
<jrwren> I tend to run samba so I \\SERVER\user in windows and read the ssh priv key via cifs
<RoyK> I setup a system a year back or so to only allow ssh from known users with known keys with correct passwords
<RoyK> adding google authenticator on top would make it rather more messy :D
<teward> RoyK: there's a 2FA method/system from Duo Security - cloud based, but they give a pam module that can tie into the logins and require 2FA via their app or texted codes or a phone call with the codes
<teward> i use it on my servers' SSH interfaces
<teward> but i also enforce SSH keys, so... :P
<teward> gets pricey if more than 5-10 users though
<RoyK> teward: you need 12FA!
<teward> RoyK: I need bio-identification 2FA :P
<tarpman> something you have, something you know, something you are, something you forgot, something you ate, ...
<teward> "{
<teward> tarpman: something you DIDN'T eat
<RoyK> something you wish you ate, something you wish you never ate.....
<tarpman> :]
<teward> your home, your previous home, your parents' home... :P
<teward> anyways
<RoyK> you previous parents
<teward> your in-laws
<RoyK> your future in-laws
<teward> your significant other.
 * teward yawns
<teward> but i digress
<tarpman> what have I started. further discussion to twitter.com/must_contain please :P
<RoyK> https://xkcd.com/936/
<acmehandle> anyone running docker on 14.04 with a 2.6.32 kernel?
<sarnold> I'm terrified to ask how that particular combination happened..
<acmehandle> I'm terrified that youre terrified
<acmehandle> I thought I was running 14.04
<acmehandle> lts
<acmehandle> I dont know if my vps has been lying to me or what
<sarnold> 2.6.32 sounds vaguely like an openvz kernel
<acmehandle> /etc/issue says 14.04.3 lts
<acmehandle> Yes, it is a vps they say theyre running openvz
<acmehandle> I guess thats the story then
<acmehandle> So what happens to me now.  If I upgrade the kernel do I run the risk of hosing my system
<sarnold> acmehandle: the openvz guys have been working on getting docker to run inside an openvz container, but I think they did it by having a daemon to proxy a bunch of services in the 'host' ...
<sarnold> acmehandle: I don't think you have any ability to upgrade your kernel. you're just a container, not a VM.
<acmehandle> O.  So does that mean no docker for me.
<sarnold> acmehandle: see if you've got a /proc/vz/veinfo file .. that looks like it might be reliable way to tell
<acmehandle> I have it
<acmehandle> says a whole bunch of numbers and machine ip
<sarnold> yup. no docker on that system. :)
<Norbin> so regarding the ssh key from before (i've set up samba etc till now :S), i take the private key into the windows host, load it up using puttygen and saving it in a .ppk format, then i keep getting server refused key for some reason
<teward> so, with postfix, if the hostname of the server postfix is on is i.am.a.teapot.tld and all my different domains' MX records point to i.am.a.teapot.tld, do I need an SSL certificate for i.am.a.teapot.tld or do I need an SSL certificate for i.am.overlord.tld (which is where the @domain.tld says on the email address)?
<Norbin> other than generating the key (ssh-keygen -t rsa), giving it a name and a user/password... anything else needs to be done?
<Norbin> ls
<Norbin> ops
<sarnold> Norbin: verify permissions on the authorized_keys file, the containing directory, the home directory, etc.. sshd is very picky and will refuse to use an authorized_keys file that allows group or world write, etc..
<tarpman> Norbin: what sarnold said, also /var/log/auth.log on the server may contain more details
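sarnold's point about pickiness is sshd's StrictModes behaviour: the authorized_keys file, ~/.ssh, and the home directory must not be group- or world-writable. A small sketch of that check, with illustrative paths:

```python
# Sketch of the permission checks sshd applies (StrictModes): flag any of
# the home dir, ~/.ssh, or ~/.ssh/authorized_keys that is group- or
# world-writable, since sshd will refuse the key in that case.
import os
import stat

def group_or_world_writable(path):
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

def check_ssh_perms(home):
    """Return the paths sshd would likely reject."""
    suspects = [home,
                os.path.join(home, ".ssh"),
                os.path.join(home, ".ssh", "authorized_keys")]
    return [p for p in suspects
            if os.path.exists(p) and group_or_world_writable(p)]
```

The conventional fix is `chmod 700 ~/.ssh` and `chmod 600 ~/.ssh/authorized_keys`, with auth.log confirming whether sshd is now accepting the key.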
<acmehandle> sarnold, whats the worst that would happen if I ran apt-get install linux-image-extra-3.x-generic for example?
<sarnold> acmehandle: you'd probably waste some of your disk quota
<acmehandle> wont make a difference then?
<sarnold> acmehandle: right; the kernel you're actually using is stored on the host's filesystem; you share it with all the other users of your vps
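sarnold's `/proc/vz/veinfo` check can be scripted. The sketch below takes the path as a parameter purely so the heuristic is testable; presence of the file is the signal suggested above, not a guaranteed indicator.

```python
# Heuristic from the discussion above: an OpenVZ environment exposes
# /proc/vz/veinfo, and a container shares the host's kernel, so a
# 2.6.32 kernel under "14.04" usually means OpenVZ (and no docker).
import os

def in_openvz_container(proc_path="/proc/vz/veinfo"):
    return os.path.exists(proc_path)

if in_openvz_container():
    print("OpenVZ detected: the kernel belongs to the host and "
          "cannot be upgraded from inside the container.")
```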
<larsi> when connecting to a mssql database with freetds it successfully connects
<larsi> but I am not able to run anything
<larsi> it just starts counting up from 1
<larsi> anyone know why?
<larsi> https://gist.github.com/anonymous/f8f983fe88c91c4d841d
<larsi> the 4 is from when I pressed enter, 85 is where it's at when copied
<larsi> oh, got an error message now
<bdx> Does anyone here know how to add custom cloud-config to maas provisioning....i.e. curtin_userdata preseed or custom preseed??
<bdx> been trying to figure it out for a while now.....It would be nice to get some advice from the pros....
<bdx> core, dev: Does anyone here know how to add custom cloud-config to maas provisioning....i.e. curtin_userdata preseed or custom preseed??
<sarnold> bdx: you may wish to try also in #maas
<bdx> sarnold: Totally...I've posted there too. Might hit up the ml soon. Thanks
<sarnold> bdx: it's a bit late on a friday, mail list seems like a good bet, someone might get to it on monday before you're around...
<bdx> sarnold: totally. will do. thanks man.
<sarnold> good luck :)
<acmehandle> Trying to install mysql-server-5.6.  I get this error: dpkg: error processing package mysql-server-5.6 (--configure):
<acmehandle> any ideas?
<sarnold> pastebin the whole error?
<acmehandle> http://pastebin.com/1yJqx6pn
#ubuntu-server 2015-08-29
<Kalimero> any suggestions for a network analysing tool which records the traffic, shows me the connected hosts and warns me if something is odd?
<Kalimero> prefer web-based solutions
<Kalim3r0> any suggestions for a web-based network monitoring tool? (good if it records traffic, connected host and maybe some warnings and errors if something is wrong)
<Kalimero> any suggestions for a web-based network monitoring tool? (good if it records traffic, connected host and maybe some warnings and errors if something is wrong)
<Kalimero> but not too overwhelmed by functions and information
<Norbin> umm
<Norbin> what about nagios? doesn't it track network too ?
<bekks> better use checkmk or incinga instead.
<Kalimero> Norbin: nagios seems good for overall monitoring but hard to configure at first
<Kalimero> bekks: i'll check them thanks
<acmehandle> I'm trying to troubleshoot a connection refused error with nginx and php.  what would be a way to go about that?
<lordievader> Good afternoon.
<Norbin> can maas be used to deploy over vms ?
<Norbin> or physical only
<RoyK> Norbin: there are other ways to work with vms :P
<RoyK> I guess you could hammer a nail in with a screwdriver, but it's not really optimal
<pmatulis> Norbin: anything that can use PXE
<Norbin> if i tell the primary dns server to also-notify to the secondary on changes, will the secondary update the serial count automatically ?
<prudentmav> any of you use server pilot?  if you decide you don't want to use it anymore, do you just delete the serverpilot user and leave everything else as it?
<bekks> Never heard of it. What is it for?
<prudentmav> for people that dpm
<prudentmav> don't want a full blown control panel... makes it easy and quick to deploy wordpress or php aps
<bekks> One command to do so: sudo apt-get ...
<prudentmav> what do you mean?
<Norbin> when a secondary dns server pulls a zone from the primary, where does he store them? can't see it under bind as usual, yet syslog says it's imported successfully
<teward> Norbin: probably in the location the slave is told to store it, or in the bind9 cache folder, but note that it's not necessarily stored in plain text (my 'slaved' zones're in binary form on the slaves...)
<teward> any postfix pros here?
<pmatulis> teward: just ask and see if someone can help
<teward> lag prevented me from saying 'nevermind'
<teward> sorry
<Norbin> teward oh ok, so i am not expecting a "db.my.domainname" files
<Norbin> as on the main
<admin-g_> hello guys i was wondering how to change the desktop entery path of webstorm
<AEL-H> Does anyone think it would be possible to run Steam on ubuntu server and run games (not game servers) on the ubuntu server and stream the audio and video to another device?
<lordievader> AEL-H: Render the stuff on the server you mean? Perhaps with a powerful graphics card...
<AEL-H> lordievader : My old computer I used to use to play games, it has a decent graphics card. I recently made a partition on the main harddrive and this is what I am running as Ubuntu server
<lordievader> I suppose if steam can run headless you might get it to work.
<teward> Steam needs the GUI, AEL-H
<teward> it needs the deps of the graphics libraries to even render itself, I don't think it has a headless mode
<lordievader> Could that be some X dummy like thing?
<teward> it's possible it could run headless based solely on deps, but I wouldn't chance that to apply to games as well, lordievader
<lordievader> Hmm, yeah. I guess it is more trouble than it is worth.
<AEL-H> I read about an emulator that tricks steam -- some guy tried to do it before
<AEL-H> The reason I bring this up is because all the hype following the Steam Link :)
<Glorfindel> I'm having problems connecting to a WEP wifi network
<Glorfindel> and I have no idea what is wrong
<Glorfindel> I've been trying this http://serverfault.com/questions/142225/connect-to-wep-wireless-network-by-command-line-on-ubuntu
<Glorfindel> but it doesn't connect to the network
<Glorfindel> I have also tried this: http://askubuntu.com/questions/294257/connect-to-wifi-network-through-ubuntu-terminal
<Glorfindel> it was connecting before, but now it doesn't
<Glorfindel> I am using the exact commands as worked before as well
<Glorfindel> while the internet was connected, I did install nmcli, so if there is an easy to remember command to connect with that which works, that would be great too
 * genii makes more coffee and slides Glorfindel a mug
<Glorfindel> :D
<Glorfindel> I'm going to monkey around with this later
<wolflarson> I think virt-manager is crashing my server
<wolflarson> confirmed
<wolflarson> any hints as to what log I should be looking at for this?
#ubuntu-server 2015-08-30
<JaguarDown> Hi all. Is there some kind of sleep or hibernate mode that ubuntu-server will go into that I can disable? For the second time now I was unable to connect to apache 2, ssh, or sftp and could not even ping from the the LAN or router. I have determined for the second time now that so much as hitting a single key (num lock) on a keyboard connected to the server tower is enough to "wake it up" and suddenly ping, apache2, ssh, and s
<JaguarDown> (home server on an old machine)
<JaguarDown> All of my internet searches return threads and articles talking about suspending or hibernating ubuntu desktop...
<patdk-lap> ubuntu server does not sleep/hibernate by default
<patdk-lap> unless you ran a command to do so (like a ups program that detects power failure)
<pauljw> JaguarDown, understand i'm in no way knowledgeable about such things and this is probably a "mr obvious" question, but does your lan card support "wake-on-lan" and is it set to function?
<moneylotion> any way I can backup to an iso?
<moneylotion> like rsync to an iso file?
<patdk-lap> sure, you just *copy* it
<patdk-lap> or, you mean, you want to copy normal files, into an iso file?
<moneylotion> yeah, having a single file would be nice
<moneylotion> * copy into an iso file -
<moneylotion> i use carbon copy cloner for mac a lot, and this is an option, which makes organization a little simpler
<moneylotion> open to any advice
<oste> howdy lsmith?
<oste> I was able to move things along by adding some of the packages listed here http://wiki.apache.org/jackrabbit/JackrabbitOnTomcat6
<oste> but now i am hitting this line https://github.com/jackalope/jackalope-jackrabbit/blob/master/src/Jackalope/Transport/Jackrabbit/Client.php#L386
<oste> i think it could still be a mysql issue b/c the tables are never created
<oste> on this line https://github.com/jackalope/jackalope-jackrabbit/blob/master/src/Jackalope/Transport/Jackrabbit/Client.php#L385 $set->item(0)->textContent is empty
<lordievader> Good morning.
<Norbin> for SSL to work on sites other than the default, do i need to create certificate for each ?
<lordievader> Or add it as a san.
<Norbin> missing the part where i added a 2nd virtual host (other than default) to the ssl
<Norbin> like do i edit default-ssl.conf or copy it into mysite-ssl.conf
<RoyK> Norbin: a certificate per domain, usually - better get official certificates rather than using self-signed stuff
<RoyK> it costs a bit, but it's the only thing accepted in many places
<RoyK> you can get host certificates or domain wildcard certs - the latter more expensive, but then you can create new, official certs for your own hosts
<Kalimero> is anyone familiar with the manual installation of Twonky-Server ?
<Kalimero> if i want to start twonkyserver i am getting an error like this -bash: /home/username/twonky/twonkystarter: No such file or directory
<Kalimero> the readme says i need to install ia32-libs but there is no package with that name in the repos
<patdk-lap> Kalimero, that hasn't existed since 12.04
<Kalimero> patdk-lap: i know but i need it to run my program ( and i got it to run a few weeks ago until now but i dont remember how i did it :( )
<Norbin> hi guys, i have set up apache2 server, all is good but one thing - others outside my LAN can not establish connection to the site via my external IP
<Norbin> i've forwarded port 80
<Norbin> been trying to fix this for like 2 hours now, but no go, will be happy to get some assistance
<pmatulis> Norbin: first ascertain whether your server is receiving the requests. my suggestion: use tcpdump
<Norbin> umm, i am not seeing much signs for failed attempts or anything, captured while a friend tried to access from outside
<Norbin> an example here pmatulis http://pastebin.com/mFunJNBx
<pmatulis> Norbin: you can try quieting it down with, assuming interface eth0: 'sudo tcpdump -i eth0 -qtn port 80'
<pmatulis> Norbin: and you're sure it's port 80 and not 443 (HTTPS)?
<pmatulis> Norbin: also, provide output to 'sudo lsof -nPi4:80'
<Norbin> pmatulis thank you for the tips. i am still capturing
<Norbin> sudo lsof -nPi4:80 outputs nothing
<Norbin> new empty line
<pmatulis> Norbin: that means your server is not listening on port 80
<pmatulis> Norbin: i'm using lighttpd as my web server and i get:
<pmatulis> lighttpd   2570        www-data    4u  IPv4   22859      0t0  TCP *:80 (LISTEN)
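One way to cross-check what lsof reports, assuming Python is available on the box, is simply attempting a local TCP connection to the port:

```python
# Cross-check of the lsof diagnosis above: try to open a TCP connection
# to the port. A listening daemon accepts (connect_ex returns 0); an
# unbound port is refused.
import socket

def port_open(host, port, timeout=2.0):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0
```

In the session above, `port_open("127.0.0.1", 80)` would presumably have returned False, matching the empty lsof output: nothing was listening on the port.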
<Norbin> when u run sudo lsof -nPi4:443 ?
<Norbin> ops, with 80
<Norbin> i tried with 443 just in case
<Norbin> no results there either
<pmatulis> Norbin: next, what URL are you using when you claim to be connecting from the LAN?
<Norbin> pmatulis, it works now!
<Norbin> god damn :P
<Norbin> In my router
<Norbin> when i forwarded port 80
<Norbin> i had to choose internal IP (which i provided my server ip address)
<Norbin> and external (i entered my external ip...)
<Norbin> now i changed the external to 0.0.0.0
<Norbin> and it works
 * Norbin is not sure why though
<pmatulis> Norbin: well, not sure how if the server isn't listening, but ok
<Norbin> pmatulis not sure, but now the tcp dump is actually giving results via port 80
<Norbin> i can see the ip addresses of the people entering my apache page
<Norbin> sudo lsof -nPi4:80 still gives empty result
<lightair> hi! I'm installing iRedMail. The installation process asks me to enter "first virtual domain name". What is that?
<lightair> hello?
<Norbin> just enter FQDN lightair?
<Norbin> for the mail server ?
<Norbin> or follow a guide x:
<lightair> what value should be in /etc/hostname: lightair.com or lightair?
<PryMar56> booting Vivid, there is a 15s delay during /scripts/init-bottom
<PryMar56> that directory is empty, so why the delay?
<PryMar56> correction, there is one script: udev
<Meerkat> PryMar56, is the scripts dir in the root of the system?
<PryMar56> Meerkat, root of the initramfs?
<PryMar56> Meerkat, I'm thinking to deactivate LVM in initramfs
<PryMar56> Meerkat, yes the udev script is in root of ramfs: scripts/init-bottom/udev
<PryMar56> Meerkat, I'm thinking to never activate LVM in initramfs
<PryMar56> ^^ what I mean to say
<PryMar56> Meerkat, it worked. Skipped activation in ramfs and script/init-bottom flew by... finally.. couple weeks of pain
<PryMar56> Wily has changes in scripts/init-bottom coming
<PryMar56> I did look at the scripts/init-bottom in Debian8 and they are not the same
<arsalan_>  Hi guys! I would really appreciate it, if you could help me! I am working on a project that is similar to Shazam, I have a source code on a linux computer which gets a 10-second wav audio file and outputs the name of the song. So, everything works offline now. I was wondering how I can setup a server using ubuntu in order to support online recognition.
<OerHeks> arsalan_, start your project on launchpad :-)
<arsalan_> Oearheks, Thanks
<PryMar56> Meercat: trick was to remove vgchange in /lib/udev/rules.d/85-lvm2.rules, then rebuild ramfs. No more delay in scripts/init-bottom
<PryMar56> undo the mod after ramfs is built
#ubuntu-server 2016-08-29
<apb1963_> sarnold: still having problems here.   Trying to install wine and it keeps barfing.  Let me know when you wake up and if you're willing..  here are the details: http://paste.ubuntu.com/23105248/  Thank you
<RoyK> wine on a server?
<sarnold> apb1963_: hrm, no ideas there. What does ./debug.sh do? where did you get the wine1.8-i386 package, I can't find it on launchpad
<apb1963_> sarnold: debug.sh varies according to what I'm doing.  Right now it's "sudo aptitude --full-resolver -f install $1"
<apb1963_> sarnold: or at least, that's what produced that output.
<apb1963_> sarnold: https://launchpad.net/~ubuntu-wine/+archive/ubuntu/ppa
<sarnold> apb1963_: ah. i never understood aptitude
<apb1963_> me neither... I just collect random bits & pieces as I find them.
<sarnold> apb1963_: you may need to ask scott and maarten for help, they may be more familiar with whatever it is that's going on
<apb1963_> This too.  https://launchpad.net/~wine/+archive/ubuntu/wine-builds/
<apb1963_> sarnold: maybe...  but vlc has somewhat similar problems
<apb1963_> sarnold: http://paste.ubuntu.com/23105349/
<sarnold> apb1963_: what does apt-cache policy vlc show?
<apb1963_> apt-cache policy vlc
<apb1963_> N: Unable to locate package vlc
<apb1963_> sarnold: ^^^^^ Not in standard repos????
<sarnold> apb1963_: do you have 'universe' enabled in your apt sources.list files?
<apb1963_> no, not right now
<apb1963_> I try to keep it disabled unless I need something specific.  Is that where vlc is?
<sarnold> yes
<apb1963_> sarnold: well that certainly made a huge difference.  lol
<apb1963_> sarnold: that worked nicely thank you.  so... wine1.6 is in the standard repos?
<sarnold> yes
<apb1963_> sarnold: ok, I've purged the wine ppa and am reinstalling wine.  Seems to be working ok so far.  It's irritating because the first thing that pops up when you're looking for ubuntu wine is those ppa's I showed you... so yeah...  Been ripping my hair out all day over this.  As usual, in a few sentences you bring help& clarifty to the table without even trying.
<apb1963_> s/clarifty/clarity/
<apb1963_> so thank you for that!
<sarnold> apb1963_: much better? :) woo
<apb1963_> 3 minutes with you and a full days work could h ave been avoided.
<sarnold> granted, it may be an older wine than the one you'd have otherwise
<apb1963_> Well, #ubuntu support people are quite anal about things outside the acceptable repo list.
<apb1963_> And get really nasty when you admit that you need a program that's not supplied... or updated sufficiently.
<apb1963_> And therefore install a ppa.
<sarnold> aye
<sarnold> the trouble with ppas is that you must trust the authors of it completely
<apb1963_> you mean like the wine repos?  vlc?  Things like that?  I think I can take a risk on those.
<apb1963_> And that's what's so irritating about them.  Everybody uses that stuff but it's "unsupported".
<sarnold> well, they are only as supported as the ppa authors choose :)
<dcosnet> people complaining about support in an open source community makes me laugh. i am laughing for real.
<apb1963_> I mean unsupported by #ubuntu
<dcosnet> why would it be. go upstream with your questions/complaints/bug reports/patches
<dcosnet> ...
<apb1963_> dcosnet: You came late.  You missed the point.  I spent all day trying to resolve something that #ubuntu gave me grief over.  sarnold, gentleman that he is, gave me 2 minutes of his time today and resolved the issue.
<dcosnet> well thats nice of him. congrads.
<apb1963_> 12 freaking hours just today.
 * sarnold bows
<OerHeks> apb1963_, how false ....
<sarnold> funny thing of course is that I gave more or less the same end advice they did -- to use the package from the main archive :)
<sarnold> s/main//
<dcosnet> s//sigh/
<apb1963_> That's true.. but you didn't make me feel like a leper
<OerHeks> we have been telling him to properly remove ppa's .. at least apb1963_ was cooperative with information, but couldn't get it done.
<OerHeks> most people hide ppa's from support.
 * dcosnet woosh
<sarnold> OerHeks: hah, that's the quickest way to make no progress
<dcosnet> well there goes my chances for a golden egg... ^
<dcosnet> damnit.
<apb1963_> OerHeks: Because 1. After I removed it, it couldn't find wine at all.  So I put it back.  And after spending much time googling, I was unable to find any reference to wine other than the two ppa's I was using.  2.  People don't talk about their ppa's because someone always "piles on" with WE DONT SUPPORT THAT!  REINSTALL!
<dcosnet> o good.
<apb1963_> Now, why it was unable to find it before and why it found it this time around is a mystery.
 * dcosnet hovers over magicalChicken waiting for a golden egg.
<dcosnet> wtf is a ppa
<dcosnet> some sort of ale?
<apb1963_> dcosnet: nobody really knows.  It's best you not think about it.
<dcosnet> o ok.
<apb1963_> dcosnet: ale would make a good substitute though.
<dcosnet> just assume no deviation from task goals. get more ale.
<apb1963_> well, it downloaded, installed... haven't tried running it but there are no errors.
<apb1963_> things are looking up
 * dcosnet still doesnt know much about whats going on. continues hovering.
<apb1963_> sarnold: I've decided to install asterisk on a different machine entirely.  It's not much of a machine but for my needs it should be sufficient.
<sarnold> apb1963_: it probably doesn't take much...
<apb1963_> sarnold: depends on what you're doing... if you're supporting 1000 phone devices... it takes more than just the one or two I have.
<sarnold> apb1963_: ha :) true
<dcosnet> i got some sun today
<dcosnet> walking around with a metal detector
<dcosnet> found nothing useful
<apb1963_> What's truly aggravating is I lost my phone number because google made a tiny little change to their service... and so a corresponding change needs to be made in the asterisk sources in order to continue using google; changing a 2 to a 23 and recompiling.  And that has led to this grand adventure of misery and insanity that could only be brought about by open source authors that supply "contributed scripts' that they don't support... but
<apb1963_> tell you to use, and yet haven't been tested nor any clear indication given as to what version it might work with.
<dcosnet> not that i want to promote a random corporate product, but i'm happy with my ooma device. works with google voice just fine.
<apb1963_> When did you get it?
<dcosnet> eh, month ago
<apb1963_> sounds about right
<dcosnet> asterisk gives me a headache
<apb1963_> GV made the change something like 2 months ago... so if ooma was on the ball... they made their changes and all is well.
<apb1963_> It was a change in how they authenticate
<dcosnet> ah
<apb1963_> ssl version 2 vs version 2.3
<dcosnet> figures
<apb1963_> And nobody in #asterisk had any idea what the problem was
<apb1963_> Nor did they care, because they look down on GV and people that  use it.
<apb1963_> But I found a post... and that post is what I'm working on testing to see if it works as stated.
<apb1963_> asterisk is not for the faint of heart... I spent over 6 months learning it.
<dcosnet> yea
<apb1963_> and I only know enough to setup a small system such as my own.
<dcosnet> yea, i feel the same way about bind as well
<apb1963_> ha!
<apb1963_> Yep.. bind, sendmail...
<apb1963_> all fun stuff that took lots of time to learn
<dcosnet> too much
<apb1963_> Not to mention the purchase of books to facilitate it.
<dcosnet> openvpn has my head spinning a bit too
<apb1963_> Yeah that took a bit to hash out as well
<dcosnet> developers: how can we make this as painful as possible.
<apb1963_> I'll have to do it again since I didn't have a permanent way to test it.
<apb1963_> haha no doubt
<apb1963_> Still... these are huge, major programs.. they do a lot.  You can't expect power to be simple.
<dcosnet> in other news. i am very mad at the computer hardware industry for not putting pci-passthrough support (iommu) on all expensive motherboards
<apb1963_> complain in ##hardware
 * dcosnet needs to find another 2011-v3 mobo quick
<dcosnet> yea
<dcosnet> no point.
<apb1963_> :)
<dcosnet> its basically porking me on a recent job though
<dcosnet> should have put slightly more research into the mobo first
<dcosnet> anyway
<dcosnet> now, about that golden egg.
<apb1963_> sarnold: http://ubuntuhandbook.org/index.php/2015/12/install-wine-1-8-stable-new-ppa/
<sarnold> apb1963_: that's the usual tension between what's "upstream" vs what's packaged
<apb1963_> yeah... but if you want 1.8, that's your ticket.
<k2gremlin> Hello all. Been a while since I have used GNS3 but I now see the "VMWare" options in the preferences. Are we able to pull in ESXi VMs now???
<apb1963_> sarnold: You had asked me about where I found it earlier so that was one extra link I forgot to give you earlier.
<sarnold> apb1963_: aha
<k2gremlin> Sorry about that last message. Was meant for the GNS3 channel >.<
<sarnold> apb1963_: there's also this: https://launchpad.net/ubuntu/+source/wine-development
<apb1963_> ok.. I don't need latest and greatest wine I don't think... in fact, I barely use it.  My machine is too underpowered.
<LUMIA930> -bash: /usr/local/bin/bundle: /usr/bin/ruby1.8: bad interpreter: No such file or directory
<LUMIA930> <sudo ruby-install ruby 2.2.3> i have done this
<LUMIA930> but still am getting this
<sarnold> LUMIA930: compare the output of "file" on your ruby interpreter against the output of `uname -a`
<sarnold> LUMIA930: it feels a bit like architecture mismatch
<LUMIA930> â¢Ruby-install will leave everything in /opt/rubies/ruby-2.2.3/bin.
<LUMIA930> but when i run this : RAILS_ENV=development bundle install
<LUMIA930> its pointing to the : -bash: /usr/local/bin/bundle: /usr/bin/ruby1.8: bad interpreter: No such file or directory
<LUMIA930> sarnold : how to point that ruby ?
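The "bad interpreter" error above means the shebang line of /usr/local/bin/bundle still points at /usr/bin/ruby1.8, which no longer exists. A small diagnostic sketch follows; the usual fix is reinstalling bundler under the new ruby so the wrapper script is regenerated with the right interpreter path.

```python
# Diagnose "bad interpreter": read a script's shebang line and check
# whether the interpreter it names actually exists on disk.
import os

def shebang_interpreter(script_path):
    """Return the interpreter path from the shebang line, or None."""
    with open(script_path, "rb") as f:
        first = f.readline().decode(errors="replace").strip()
    if first.startswith("#!"):
        parts = first[2:].split()
        return parts[0] if parts else None
    return None

def shebang_ok(script_path):
    """True when the shebang names an interpreter that exists."""
    interp = shebang_interpreter(script_path)
    return interp is not None and os.path.exists(interp)
```

Running `shebang_ok("/usr/local/bin/bundle")` on the affected machine would presumably return False until bundler is reinstalled against the 2.2.3 ruby.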
<PCdude> http://askubuntu.com/questions/817572/openstack-fails-to-install-caused-by-juju
<MelRay> Hey everyone I've never installed server before 16.04. I assumed there would be no xorg or xfce like xubuntu. I found out this was not correct. I have an issue though and that is that when I run startx it starts to fire up everything then halts with an error. Is this a known problem?
<PCdude> MelRay:  I am not totally getting ur problem, but by default there is no desktop environment installed on the server version of ubuntu 16.04. if u wanna install a desktop environment look at the following link
<PCdude> http://askubuntu.com/questions/2093/how-to-install-gui-desktop-on-a-server#2096
<MelRay> PCdude: Thanks but I realized I executed startx without sudo in front of it. It does have it here and I didn't use apt-get to install it :-)
<PCdude> ah ok, enjoy ur install :)
<coreycb> ddellav, jamespage: the last day for non-client library releases was 8/25 so our next refresh of non-client libraries for openstack should be our last for newton
<ddellav> codedmart ok
<ddellav> errr coreycb
<coreycb> ddellav, of course there will probably be feature freeze exceptions, so I should say "mostly"
<cpaelzer> jamespage: does our openstack do anything about selecting a special machine type like pc_i440fx_2_4 or ubuntu-<release> or does it just take the default that qemu has unless the userspecifies something on his own?
<Tahvok> Hey guys!
<Tahvok> I'm reading about this:
<Tahvok> https://wiki.ubuntu.com/1404_HWE_EOL
<Tahvok> I'm trying to understand why can't I just downgrade to kernel 3.13?
<sarnold> Tahvok: you may be able to, if your hardware would work with it; there's a general rule that package upgrades should work, downgrades may or may not work, because package maintainers may have to make one-way transitions in the package scripts
<sarnold> Tahvok: so, you can try an old kernel, but if you run into trouble, well,... "that can happen, yes"
<Tahvok> sarnold: from the hardware side, it will work for sure - it's a vm. I'm more concerned of the package downgrade scripts... How can I downgrade? I'll create a snapshot first.
<sarnold> Tahvok: I think it'd be along the lines of "apt-get install linux-generic=3.13.0-95.142"
<sarnold> Tahvok: then once you're content with it, you could apt-get purge the linux-generic-lts-* package and whatever packages that package brought in
<Tahvok> sarnold: The downgrade process went well. Thanks a lot!
<sarnold> Tahvok: nice :)
<sarnold> Tahvok: pay attention to it for a bit to make sure you get the security updates that are coming in a few days
<Tahvok> Yeah, of course. Thanks!
<sbeattie> you will want the linux-image-generic (and maybe the linux-headers-generic) metapackage to keep up with updates as they are applied to the 3.13 kernels.
<sbeattie> Tahvok: ^
<blizzow> I have two drives with an mdadm raid 1 partition on them.  They have LVM on top of the raid device and in that LVM, I have my root partition.  I just tried to do a dist-upgrade and am getting errors saying:
<blizzow> grub-probe: error: disk `lvmid/{REDACTEDUUID}' not found.
<blizzow> Anyone know how I can get grub installing my kernel and updating my bootloader?
<RoyK> blizzow: never seen that error, and I'm managing servers with root on md/lvm
<blizzow> The worst part is I have no idea if it's safe to reboot my server and it's far away.
<Walex> blizzow: 'update-grub2'
<blizzow> Walex: that's the problem.  It says: "grub-probe: error: disk `lvmid/{REDACTEDUUID}' not found."
<sarnold> Odd_Bloke: congratulations :)
<kgirthofer> why is my screen session only in b&w
<kgirthofer> well just the prompt - ls and vim and what ever are in the right colors
<zioproto> Hello. Wow is full of people here. I am looking for Corey or James, anyone is around ?
<halvors> Hi. Anyone know if it's possible to set up 2 redundant servers using the KEA dhcp server?
<hallyn> hm, i was dropping by to congratulate nacc but don't see him here.
<hallyn> Odd_Bloke: congrats :)
<sarnold> hallyn: very odd indeed isn't it? :)
<sarnold> he's -always- here..
<hallyn> he's out drinking to celebrate
<teward> nacc is hiding in /dev/null 'cause I have a flurry of PHP evils to send his way
<teward> :P
<hallyn> i don't blame him
<SupaYoshi> when running more than one website, is it advised to run both php-fpm and the actual website under different users for each website?
<SupaYoshi> and i do everything through console, right now I see this as a pretty big investment in security but the setup time is quite long.. Is there anyone
<rabbitdew> I'm having some issues getting an ubuntu server online. The other servers around it are good, it may be a link level issue, so I want to use lldpad. The only repo I have is the livecd
<rabbitdew> Why wouldn't that be included there?! Is there something similar I could use?
<rabbitdew> If I want to try using the other NIC, is there anything else I need to do then change /etc/network/interfaces and reboot
<rabbitdew> I noticed the first one sends and receives packets, but doesn't seem to be acknowledged by the network. When I just change that file and reboot tcpdump doesn't show me anything
<rabbitdew> ***change /etc/network/interfaces to the name (p2p2) of the other nic
<rabbitdew> Actually, better question: If I want to switch the NIC that is configured to another NIC; do I have to do anything more than switch the name of that interface at /etc/network/interfaces
<rabbitdew> ...and reboot (#service networking restart fails, as does # ifdown)
<tomreyn> rabbitdew: if you have OOB access, you would "service networking stop", reconfigure the interface, "service networking start"
<tomreyn> ...or just reboot after editing it
<tomreyn> if you want to show the interfaces file you have there (you can replace / redact the numbers) i could try to double check on the formatting etc. if you like
<rabbitdew> tomreyn: for sure. So my p2p1 was showing a lot of tx/rx packets but couldn't get on my network. I tried editing /etc/network/interfaces and replacing those values with p2p2, and rebooting
<rabbitdew> That should be all it takes
<rabbitdew> I just wanted to confirm I'm not missing some step
<rabbitdew> tomreyn: cheers
<tomreyn> welcome
<tomreyn> obviously this would only make sense if you have both p2p1 and p2p2
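For reference, the edit rabbitdew describes would amount to an /etc/network/interfaces along these lines (the interface name p2p2 is from the discussion; the addresses are placeholders, since the real ones were not shown):

```
# /etc/network/interfaces -- static config moved from p2p1 to p2p2
auto lo
iface lo inet loopback

auto p2p2
iface p2p2 inet static
    address 192.0.2.10       # placeholder
    netmask 255.255.255.0
    gateway 192.0.2.1        # placeholder
```

After editing, `ifdown p2p1; ifup p2p2` or a reboot, as tomreyn says.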
<mdunn> Hey guys, I have a question about systemd and sysv.
<mdunn> I'm running Xenial, and I'm not sure if sysv runs by default, or it's just systemd wrapping up init.d scripts and doing magic.
<mdunn> So if I run "update-rc.d app remove", is there a potential that apt will re-install the init.d script and run that init.d instead of systemd on a box reboot?
<JanC> Xenial uses systemd, which has some sysvinit compatibility features
<RoyK> mdunn: systemctl stop $service ; systemctl disable $service
<mdunn> Thanks!
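RoyK's answer as a sketch: under systemd (Xenial) disabling the unit is what keeps it from running at boot, and masking goes one step further ("app" is the placeholder service name from mdunn's question):

```shell
sudo systemctl stop app.service       # stop it now
sudo systemctl disable app.service    # remove the boot-time symlinks
sudo systemctl mask app.service       # optional: refuse any start until unmasked
```

Package upgrades may re-install the init.d script file, but they should respect the admin's disabled/masked state rather than re-enable the service.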
#ubuntu-server 2016-08-30
<thai_> anyone got experiences with openstack on 16.04 multi nodes?
<RoyK> thai_: We have a 16-node openstack cluster (that is 16 compute nodes; routers and storage are separate) at work. I think it's still on 14.04. I could ask around if you have a specific question
<Zeranoe> I'm setting up a dev lamp server. How should I go about granting permissions for my user and the web server to write to /var/www/html?
<Zeranoe> I'm thinking maybe I should make www-data the group owner and add my user to that group?
<sarnold> Zeranoe: that's one decent way; I dislike the idea that the server may have the ability to write to those files, so I'd also suggest that www-data itself not own those files; either group or world permissions to read them is better, imho
<Zeranoe> sarnold: So maybe don't touch own, but add the user to www-data and grant write permissions for groups for /var/www/html ?
<Zeranoe> I guess I'm not seeing how that's any different...
<sarnold> Zeranoe: I just like to make sure that the web server itself can only write to its log files and database / fpm sockets if you can
<Zeranoe> sarnold: Doesn't that cripple a large number of web applications...?
<sarnold> Zeranoe: if your apps have to allow uploading stuff, maybe that too :)
<Zeranoe> I think a better solution would be to have decently written code so that rogue writes don't happen
<sarnold> somehow 'write bug-free code' just never gets to the top of manager's priorities :(
<Zeranoe> I hear that
<xmj> "write bug-free code" never gets on employee priorities either
<xmj> or rather, results
<xmj> so why bother
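A group-writable docroot along the lines Zeranoe and sarnold discuss can be sketched like this. It is demonstrated here on a scratch directory; on a real box the target would be /var/www/html and the group www-data (both from the discussion), set up with `sudo chgrp -R www-data /var/www/html` and `sudo usermod -aG www-data "$USER"`:

```shell
# Stand-in for /var/www/html so the permission bits can be shown safely:
DOCROOT=$(mktemp -d)
# Leading 2 = setgid bit: files created inside inherit the directory's group,
# so both the admin user and (read-only) www-data see consistent ownership.
chmod 2775 "$DOCROOT"
stat -c '%a' "$DOCROOT"      # -> 2775 with GNU stat
```

With group write (7) for the admin's group and no write bit for www-data's own access, the web server can read but not modify the tree, which is sarnold's preference.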
<Odd_Bloke> sarnold: hallyn: Thanks. :)
<ajikny> tomreyn: there?
<ajikny> I have got the kernlog and the syslog
<ajikny> from the server
<ajikny> I have a weird problem. I upgraded ubuntu server 12.04 LTS to 1.404 LTS.
<ajikny> 14.04 LTS
<ajikny> as soon as i upgraded the keyboard and network stopped working
<ajikny> The keyboard works till we load grub.
<ajikny> as soon as the kernel is loaded the kb stops working
<ajikny> funnily enough the kb and network both work in the live boot
<ajikny> i have tried installing image-extra-generic, update-usbids, linux-firmware, linux-firmware-nonfree followed by update-initramfs -k all -c
<zioproto> Hello. I have a weird problem I am building deb packages for ubuntu trusty, the control file in the deb package is broken. In the depends section some other packages have an extra dash - at the end, and this makes the package impossible to install
<zioproto> https://code.launchpad.net/~zioproto/ubuntu/+source/cinder/+git/cinder/+merge/304341
<zioproto> anyone had a similar problem before ?
<Odd_Bloke> zioproto: Can you paste an example of the problematic file?
<zioproto> Odd_Bloke: check the comment here https://code.launchpad.net/~zioproto/ubuntu/+source/cinder/+git/cinder/+merge/304341
<zioproto> I pasted the file
<Odd_Bloke> zioproto: Oh, apologies.
<zioproto> no problem
<zioproto> the package where I have the problem is python-cinder
<zioproto> did you see that python-novaclient- ? the trailing - in the control file breaks the dependencies
<Odd_Bloke> zioproto: Yeah, that's strange.
<Odd_Bloke> zioproto: How are you building your .dsc?  (And can you paste it?)
<zioproto> sure, give me a second
<zioproto> here is the dsc http://pastebin.com/2jCg74gu
<Odd_Bloke> zioproto: Hmm, not really sure what the problem is I'm afraid.
<ajikny_> I have a weird problem. I upgraded ubuntu server 12.04 LTS to 14.04 LTS as soon as i upgraded the keyboard and network stopped working The keyboard works till we load grub. as soon as the kernel is load the kb stops working [14:30] funnily enough the kb and network both work in the live boot i have tried installing image-extra-generic, update-usbid
<zioproto> sorry I have to be away from keyboard for 30 mins, I will be back
<ikonia> do you see anything in /dev/input
<ikonia> do you see anything in the syslog about the network devices being loaded/brought online or failing
<ajikny_> ikonia: i get by-id by-path event [0-4] js0 mice mouse0 mouse1
<ikonia> where do you get that ?
<ajikny_> i have livebooted inot the system
<ajikny_> *into
<ikonia> thats not really going to help diagnose the problem
<ajikny_> i have the chrooted dmesg and sys logs
<ikonia> dmesg isn't a file
<ajikny_> if it can help
<ikonia> it's real time information
<ikonia> so thats not going to help getting dmesg from the livecd
<ikonia> I assume you have no boot splashes enabled when the system boots
<ajikny_> no
<ikonia> when the system boots do you see any errors/warnings, especially around the network
<ajikny_> it stops at waiting for network configuration
<ajikny_> for 60seconds and then proceeds with boot without full config
<ikonia> are you using dhcp ?
<ajikny_> no.. static config
<ikonia> ok - so the syslog from the system should show it trying to bring up the devices
<ikonia> find that in the syslog and see what it's doing with the network
<ajikny_> okayyy... hold on
<zioproto> Odd_Bloke: I am trying to rebuild the package without my patch to see if I have anyway the problem with the control file
<zioproto> Odd_Bloke: James Page from canonical commented on the merge request saying I am maybe hitting a known bug of cloud-archive-utils
<zioproto> and yes I am able to reproduce the problem even with the upstream package without my patch
<zioproto> jamespage: oh you are here :)
<jamespage> zioproto, hey!
<jamespage> zioproto, just digging on your problem now
<zioproto> jamespage: to push the fixed changelog, should I just commit amend and push force, or you prefer a tree with multiple commits ?
<zioproto> jamespage: thank you !
<jamespage> zioproto, amend and force push is good for me
<jamespage> zioproto, can you confirm your cloud-archive-utils pkg version?
<zioproto> jamespage: Installed: 0.1-0~50~ubuntu16.04.1
<jamespage> zioproto, ok that's good
 * jamespage thinks
<zioproto> jamespage: check the merge request page, I added more stuff
<zioproto> jamespage: my doubt is, when I use the command sbuild-liberty, then in the chroot what version of cloud-archive-utils will be used ?
<jamespage> zioproto, for reference you can drop "-d trusty-amd64" from your build line - sbuild-liberty deals with that
<jamespage> zioproto, it should pick a trusty schroot
<jamespage> zioproto, schroot -l might be helpful here
<zioproto> I get a two-line output
<jamespage> zioproto, there is another possibility - do you keep your schroots up-to-date?
<zioproto> chroot:trusty-amd64
<zioproto> source:trusty-amd64
<jamespage> zioproto, that's fine - no need for -d at all then
<zioproto> how can I update the chroot ?
<zioproto> I think this all vm where I am working is at most two weeks old
<jamespage> zioproto, well it should be ok
<jamespage> lemme dig out my update script
<jamespage> zioproto, http://paste.ubuntu.com/23111240/
<jamespage> I have that on cron.daily
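jamespage's paste has since expired; a common equivalent for keeping schroots current is sbuild-update from the sbuild package (the chroot name below is the one from zioproto's `schroot -l` output):

```shell
# Update, dist-upgrade, clean, autoclean and autoremove inside the schroot:
sbuild-update -udcar trusty-amd64
# -u update, -d dist-upgrade, -c clean, -a autoclean, -r autoremove
```

Running this from cron.daily, as jamespage describes, keeps build chroots from drifting behind the archive.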
<jamespage> zioproto, could you pastebin the entire build log?
<zioproto> jamespage: 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
<zioproto> looks like it was upgraded already
<zioproto> jamespage: yes, do I have to screen scrape it or it is automatically saved in a file ?
<jamespage> zioproto, its saved to a file locally
<jamespage> zioproto, we had a bug in dh-python to do with dependency parsing which this looks like
<jamespage> but I'm not 100% convinced
<jamespage> https://launchpad.net/ubuntu/+source/dh-python/1.20140128-1ubuntu8.2
<jamespage> as I fixed that
<zioproto> I guess you want this file right ?
<zioproto> /home/ubuntu/ubuntu/logs/cinder_7.0.2-0ubuntu1_amd64-20160830-0942.build
<zioproto> it is a symlink in the folder where I did the build
<zioproto> jamespage: can you download it from here ? https://www.dropbox.com/s/y8u3m1sxog39ovo/cinder_7.0.2-0ubuntu1_amd64-20160830-0942.build?dl=0
<jamespage> zioproto, hmm
<jamespage> dh-python_1.20140128-1ubuntu8
<jamespage> that looks wrong
<zioproto> how can I update that ?
<jamespage> zioproto, well the sbuild-liberty wrapper should do that
<ajikny_> ikonia: read the logs, the network manager tries to do a dhcp request
<ajikny_> ikonia: this is my first time reading logs and understanding them, so sorry it took some time to understand. I am willing to learn and get my hands dirty though
<zioproto> jamespage: so I did not do anything wrong :)
<jamespage> zioproto, no - just trying to figure out why your schroot is not picking that update up - mine is!
<jamespage> so is our backport-o-matic for the UCA
<jamespage> grrr
<zioproto> what is backport-o-matic ?
<ajikny_> Respected all
<ajikny_> I got a really weird problem on upgrading ubuntu 12.04 to 14.04
<ajikny_> after the upgrade the keyboard and network stopped working.
<ajikny_> I cannot ssh into the system and i cannot type anything
<ajikny_> The keyboard and network work in liveboot
<ajikny_> Also keyboard works until the grub menu. Once linux starts loading the wired usb keyboard stops working. The leds turn off
<jamespage> zioproto, its the automated process of backporting openstack related packages to the ubuntu cloud archive
<jamespage> it uses cloud-archive-utils as well
<jamespage> infact that's what cloud-archive-utils was written for
<zioproto> jamespage: how can I get a shell into the chroot using the schroot command to debug
<zioproto> ?
<zioproto> got it schroot -c trusty-amd64
<zioproto> in the chroot I have this candidate for dh-python Candidate: 1.20140128-1ubuntu8
<zioproto> jamespage: I think the problem is in the chroot I have only trusty/main and not trusty-updates/main
<jamespage> zioproto, yes that's the problem
<jamespage> zioproto, not quite sure how you managed to create a schroot like that tho
<jamespage> zioproto, how did you create the schroot?
<zioproto> following the ubuntu docs :)
<zioproto> hold on
<zioproto> I will show you
<zioproto> I use this script https://github.com/zioproto/ubuntu-cloud-archive-vagrant-vm/blob/xenial/bootstrap.sh
<zioproto> jamespage: look at this documentation page at step 7, there is SKIP_UPDATES=1 https://wiki.ubuntu.com/SimpleSbuild
<zioproto> jamespage: could that be the problem ?
<jamespage> zioproto, yeah - I'd drop
<jamespage> SKIP_UPDATES="1"
<jamespage> SKIP_PROPOSED="1"
<jamespage> SKIP_UPDATES stops you getting any updates for packages in 14.04
<jamespage> which you need
<zioproto> OK, I will update my scripts
<jamespage> proposed is SRU's in test for 16.04
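Per the SimpleSbuild page zioproto links, those flags live in ~/.mk-sbuild.rc and are read when mk-sbuild first creates the chroot, so an already-built schroot keeps whatever sources it was created with. A sketch of the fix (comment out or delete the lines before creating the schroot):

```
# ~/.mk-sbuild.rc -- remove these so trusty-updates (and -proposed, if
# wanted) end up in the chroot's sources:
# SKIP_UPDATES="1"
# SKIP_PROPOSED="1"
```

Existing chroots either need the -updates pocket added to their sources.list by hand, or can simply be recreated.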
<zioproto> I have to grab something to eat now, I will test asap and I will give you feedback. I will try to do it today.
<zioproto> are you going to be around in IRC for the rest of the day ?
<zioproto> thanks for the help. talk to you later
<jamespage> zioproto, yeah I'll be around for another 6 hrs or so
<LuisM> hi folks
<Ussat> o/
<LuisM> Ussat: how to deal with memory recognize in Ubuntu Server inside ESXi?
<Ussat> not sure what ya mean
<LuisM> i want to do ubuntu hot add memory
<LuisM> https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1012764
<LuisM> look
<LuisM> acpi_memhotplug
<Ussat> I have never hot-added memory in ESXi
<Ussat> in my experience never works right
<patdk-wk> why wouldn't it work right?
<Ussat> X86 just isnt up to it imho
<patdk-wk> I have never had it not work
<patdk-wk> what does x86 have to do with it?
<patdk-wk> hot add/remove memory was designed for laptops
<patdk-wk> for power savings
<LuisM> yeah, i'm in x86, but my VMs are 64-bits
<Ussat> I always down my Linux systems to add mem or cpu or whatever, my AIX systems otoh, I DLPAR flawlessly
<ogra_> patdk-wk, it surely works better on all these s390x laptops than on x86 ... :P
<patdk-wk> ogra_, never had an issue doing it on linux, to add/remove ram
<Ussat> I know in theory it works on Power arch, but I have not tried it
<Ussat> maybe I'll play with that a bit sometime
<Ussat> and by X86 I dont mean 32 or 64 bit, I mean X86 chipset
<LuisM> uhum
<LuisM> Ussat: in Solaris or AIX, hot memory add is flawlessly
<LuisM> :D
<Ussat> LuisM, I have not use Solaris in YEARS
<Ussat> I use AIX daily
<LuisM> aix > solaris
<LuisM> ;)
<Ussat> LuisM, but more and more I am doin all the Linux stuff here
<LuisM> same here
<patdk-wk> hot-add has always worked flawless in linux and ubuntu for me
<patdk-wk> dunno what the issue is
<patdk-wk> now, when you add memory, linux kernel doesn't enable that memory automatically, so you have to do that, or add a udev rule to do it for you
<patdk-wk> SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1", RUN+="/usr/bin/logger onlining cpu: $env{DEVPATH}"
<patdk-wk> SUBSYSTEM=="memory", ACTION=="add", TEST=="state", ATTR{state}=="offline", ATTR{state}="online", RUN+="/usr/bin/logger onlining memory: $env{DEVPATH}"
<Ussat> patdk-lap, but in AIX or Solaris its automatic.....thats my point
<patdk-wk> and the kernel won't get the notification if you don't have the acpi_memhotplug module loaded
<patdk-wk> Ussat, heh? it is automatic here also
<Ussat> you dlpar  memory,or a cpu or a pci slot...
<Ussat> <patdk-wk> now, when you add memory, linux kernel doesn't enable that memory automatically, so you have to do that, or add a udev rule to do it for you
<Ussat> you said it wasnt right there......
<patdk-wk> maybe you mean, enabled
<patdk-wk> it's automatically added, it's not automatically enabled
<Ussat> I mean its automatically added and enabled
<patdk-wk> enabling it automatically, can be debated if that is a good idea
<Ussat> Not much to debate, you add it when you need it.
<Ussat> Or you have a system that automatically balances based on load
<patdk-wk> hmm, I add it to my laptop when I don't need it
<Ussat> laptop.....
<patdk-wk> I don't want to waste power in my laptop, except when I NEED the memory
<Ussat> ok I am talking real enterprise systems
<patdk-wk> this hot-add was designed and created for laptops
<Ussat> not little laptops
<patdk-wk> my laptop has 64gigs ram
<Ussat> I dont even bother with a *nix on a laptop or desktop
<Ussat> like I said, little laptop
<patdk-wk> your loss
<Ussat> Not at all
<Ussat> My enterprise systems have 512G RAM or better
<patdk-wk> still, I dunno of any laptops that can have >64gigs ram
<Ussat> and he was talking about ESXi, not a toy
<patdk-wk> no, he was talking about hot-add memory
<patdk-wk> that acpi interface was designed by and for laptops
<patdk-wk> the fact esxi uses it, doesn't matter
<patdk-wk> it's a power savings feature
<Ussat> No, it really isnt designed as that, but believe what you will
<patdk-wk> http://www.acpi.info/DOWNLOADS/ACPIspec-2-0a.pdf
<Ussat> I know what acpi is, and it was not designed for hot-add memory......that was a later addition
<patdk-wk> heh?
<patdk-wk> how is a later addition, and being part of the specification matter?
<patdk-wk> it's part of the acpi standard
<patdk-wk> it doesn't matter if it was in predraft 0.1, or 1.0, or 2.0, or whatever, if it's in there it is a part of it
<Ussat> believe what you want, you are incorrect
<patdk-wk> well, go tell esxi to stop using acpi for hot-add memory
<Ussat> esxi uses an extension to acpi to accomplish that correctly, and it still has some major issues
<Ussat> and it can not be solved via software, its a limitation of the X86 chipset
<LuisM> patdk-wk: so, how to load that module in ubuntu-server if it doesn't exist? :(
<LuisM> what package do i need to install?
<LuisM> acpid?
<LuisM> or just add that udev rule?
<patdk-wk> it does exist
<patdk-wk> I just posted all the info above
<LuisM> yeah
<LuisM> but the module doesn't exist :(
<LuisM> i'm in 16.04
<patdk-wk> hmm, haven't done 16.04 in a vm yet
<coreycb> ddellav, jamespage: I have changes coming for ceilometer in newton if you haven't already started those
<jamespage> coreycb, no  - I was looking at keystone
<jamespage> coreycb, ddellav: did the syncs from exp this am btw
<patdk-wk> LuisM, there is no need
<coreycb> ddellav, jamespage: ah thanks.  I'd like to wrap up the remaining non-client updates so that probably got some of them
<patdk-wk> it's built in on 16.04, it's not a module now
<patdk-wk> so only the udev rules would be needed to make it enabled automatically
<jamespage> coreycb, keystonemiddleware and auth1 need doing I think
<coreycb> jamespage, ok
<patdk-wk> CONFIG_ACPI_HOTPLUG_MEMORY=y
<LuisM> patdk-wk: hmmm, great
<ddellav> jamespage coreycb im working on keystonemiddleware atm
<LuisM> where did you see that?
<coreycb> ddellav, sweet thanks
<patdk-wk> in the kernel config file in /boot
<jamespage> coreycb, ddellav: seeing some crypto breaks in glance and barbican
<jamespage> something may be broken outside the direct dep chain I think
<jamespage> as xenial does not show the same issues
<LuisM> patdk-wk: config-4.4.0-21-generic
<LuisM> rox
<LuisM> I was doing a for loop to change the state of memory to "online"
<LuisM> in /sys/devices/system/memory/memory*/state
<LuisM> ;)
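The loop LuisM describes, as a hedged sketch: with CONFIG_ACPI_HOTPLUG_MEMORY built in, the kernel registers hot-added blocks under sysfs but (as patdk-wk notes) does not enable them automatically, so either a udev rule like the ones pasted above or a one-off loop is needed (root required):

```shell
# Online any memory blocks the kernel has registered but left offline:
for state in /sys/devices/system/memory/memory*/state; do
    grep -q offline "$state" && echo online > "$state"
done
```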
<jamespage> coreycb, urgh
<jamespage> https://launchpadlibrarian.net/281726807/buildlog_ubuntu-yakkety-amd64.python-oslo.db_4.12.0-1_BUILDING.txt.gz
<jamespage> our switch in default dialect in sqlalchemy bites us at long last
<coreycb> jamespage, I think I've patched tests like that before because of the switch
<coreycb> jamespage, but I wonder if we should revisit that switch
<coreycb> jamespage, python-mysqldb now uses the mysqlclient fork so it has py3 support
<jamespage> grag
<coreycb> jamespage, anyway too late for this cycle
<coreycb> jamespage, for the openssl errors, I wonder if this has something to do with it: https://github.com/openstack/requirements/blob/master/upper-constraints.txt#L95
<coreycb> jamespage, we're at 1.5 in yakkety-proposed
<jamespage> coreycb, I think so - I see google hits on other projects seeing similar issues
<jamespage> that said that was an autogenerated upper-constraints that fixed that at 1.4
<zioproto> jamespage: I confirm that removing SKIP_UPDATES and SKIP_PROPOSED lines fixed the problem
<zioproto> jamespage, coreycb: any chance to fix the documentation on this page ? https://wiki.ubuntu.com/SimpleSbuild
<coreycb> zioproto, personally I'd be hesitant to update that one since it's not mine, is there a bug?
<zioproto> jamespage: can you recap the bug about python-dh ?
<jamespage> zioproto, I've updated the wiki
<zioproto> coreycb: you replied by email that it is better to push multiple commits when contributing, while james suggested git commit --amend and force push :) Both are fine to me. Just agree on one that makes everyone happy :)
<jamespage> made it more opinionated 'might -> will'
<jamespage> and added a comment why
<jamespage> zioproto, I prefer a single commit per change - if you need to tweak a commit, its still a single commit
<jamespage> not multiple imho
<coreycb> zioproto, for little changes like a typo I'm ok with a new commit, but ideally a clean history is what we want to see
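The amend-and-force-push flow jamespage prefers, demonstrated in a throwaway repository (names, messages and the branch push at the end are illustrative):

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo v1 > fix.patch && git add fix.patch && git commit -qm "cinder: fix control file"
# Reviewer asks for a tweak: fold it into the same commit instead of
# stacking a second "typo" commit on top.
echo v2 > fix.patch && git add fix.patch
git commit -q --amend --no-edit
git rev-list --count HEAD
# Then republish the review branch, e.g.:
#   git push --force-with-lease origin mybranch
```

`git rev-list --count HEAD` stays at 1: the history is still a single commit per logical change, which is the "clean history" coreycb asks for.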
<zioproto> I see the point. So if I am working with a Yakkety schroot I can safely put SKIP_UPDATES=1. But trusty is old so I need the updates
<zioproto> OK guys, I hope my contributions will get cleaner with time
<jamespage> zioproto, tbh I run all my chroots with updates enabled
<jamespage> so that at the point of release its a no-op to switch
<blut> how can i setup nis with a preseeded configuration?
<Braven> what does it mean if after selecting the kernel in grub. The screen goes black
<zioproto> talk to you later, I have another meeting, 1 hour away from keyboard :) thanks for the help!
<LuisM> patdk-wk: your udev rule didn't work :(
<patdk-wk> oh? maybe something more has changed in 4.4 kernel
<ddellav> can an archive admin promote python-yaql and python-monascaclient to main please? There were approved MIR's but no promotion: https://bugs.launchpad.net/ubuntu/+source/python-yaql/+bug/1586069 https://bugs.launchpad.net/ubuntu/+source/python-monascaclient/+bug/1590836
<ubottu> Launchpad bug 1586069 in python-yaql (Ubuntu) "[MIR] python-yaql" [Undecided,Fix released]
<ubottu> Launchpad bug 1590836 in python-monascaclient (Ubuntu) "[MIR] python-monascaclient" [Undecided,Fix released]
<ddellav> they are needed for heat
<jamespage> coreycb, oslo.db fixed up
<jamespage> ddellav, that might be better asked in -devel
<ddellav> jamespage ack
<jamespage> ddellav, its possible that if the heat version that required those new deps was not in main at the time of promotion, they would have been demoted again
<ddellav> coreycb heh, keystonemiddleware is already at 4.9, it's just unreleased
<ddellav> jamespage that's what coreycb was saying, yea
<coreycb> ddellav, that makes it easier, I wonder if it didn't get release for a reason though.  jamespage may know.
<jamespage> ddellav, when I looked last week depends in experimental where not quite up-to-date
<ddellav> coreycb looks like it was updated 5 days ago
<jamespage> I pushed as much of my local work as I could
<jamespage> as everything else would catchup
<ddellav> jamespage same deal with keystoneauth1?
<jamespage> yah
<jamespage> gah I need to fixup python-hacking still
<jamespage> yikes
<coreycb> ddellav, can you note that in bug 1579670 that you tested designate-dashboard and it's working in yakkety?  be good to let them know.
<ubottu> bug 1579670 in designate-dashboard (Ubuntu) "Designate dashboard 2.0.0-1 throws exception in Xenial" [Undecided,New] https://launchpad.net/bugs/1579670
<ddellav> coreycb done
<jamespage> coreycb, ddellav: python-hacking munged to work again
<jamespage> its not submittable upstream atm
<coreycb> jamespage, that's good, so we're sorted out for yakkety?
<jamespage> coreycb, yah
<jamespage> basically its a 'skipall'
<jamespage> but tbh I'm less worried about hacking tests being skipped in distro
<coreycb> jamespage, I agree
<xnox> jamespage, coreycb: i have patches for ceilometer test suite. Am I allowed to just upload it and commit/push to https://git.launchpad.net/~ubuntu-server-dev/ubuntu/+source/ceilometer ? or should I raise a merge proposal against it?
<xnox> what's the drive-by contribution workflow for server team?
<jamespage> xnox, oh I think coreycb was working those as well?
<xnox> heh
<coreycb> xnox, contributions are always welcome :)
<coreycb> xnox, was it etc updates or something else?
<xnox> coreycb, what is "etc" ?
<xnox> i'm looking at the ADT test suite failure for s390x, which looks like missing depends in debian/tests/control and/or the packages themselves.
<xnox> e.g. python-libvirt is now needed to start the services and test that they are available.
<xnox> however not sure how come things succeed on other arches for that test, as i'm failing to see how that would ever work =)
<xnox> i'll send a merge proposal to discuss =)
<coreycb> xnox, there was just some config file updates that I was working on.  ah, ADT fixes would be a welcome addition, I wasn't working on that.
<coreycb> xnox, thanks
<zioproto> coreycb, jamespage : I am going offline, let me know with launchpad comments if I can improve my patch. Thank you
<xnox> coreycb, ceilometer-agent-ipmi fails to start and in the journal it tells me that EmptyPollstersList: No valid pollsters can be loaded with the startup parameters polling-namespaces and pollster-list.
<xnox> is there a way to have some "dummy" ipmi pollster-list? or what is missing to get that going?
<xnox> note that i'm inside a lxd container for adt tests on s390x, rather than bare metal =/
 * xnox ponders if ipmi is a thing on s390x
<xnox> looking at https://code.launchpad.net/~bladernr/checkbox/no-ipmi-s390x/+merge/291946 i'm guessing no
<xnox> coreycb, i am pondering of how useful ceilometer-agent-ipmi is on s390x
<xnox> given the lack of ipmi on that platform.
<stevenm> is putting 'NEED_IDMAPD=yes' in /etc/default/nfs-common now redundant (in 14.04 onwards)? I can see that with just nfs-kernel-server installed, rpc.idmapd is running
<stevenm> (sorry for repost, i realised this chan was a better bet)
<coreycb> xnox, I agree if s390x doesn't have ipmi support then ceilometer-agent-ipmi isn't going to be of much use
<xnox> coreycb, i'll skip testing that in the autopkgtest, but keep everything else in place for now.
<coreycb> xnox, ok sounds good
<coreycb> ddellav, are you done with the oslo's too?
<ddellav> coreycb: not yet, I'll post with packages for your review in a bit
<coreycb> ddellav, ok I can take some and work on them this afternoon if you haven't started on them.  let me know.
<cscf> Anyone know of a tool to multithread encryption?  Trying to zfs send over ssh and ssh's encryption is a bottleneck
<RoyK> cscf: known problem - using mbuffer will help, but won't give you encryption
<RoyK> cscf: so I guess it all depends on whether you're on a private network or are using zfs send over the open internet
<cscf> RoyK, I thought surely there must be a tool that AES-encrypts in a pipeline in parallel, like pigz for gzip
<cscf> RoyK, Internal network but a big one, encryption preferred
<RoyK> cscf: ok - not sure - sorry
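Combining the two suggestions, one hedged sketch is mbuffer on both ends plus a cheaper SSH cipher (host, pool and snapshot names are placeholders; aes128-gcm@openssh.com is fast where AES-NI is available; check `ssh -Q cipher` on both ends before relying on it):

```shell
zfs send tank/data@snap \
  | mbuffer -q -s 128k -m 1G \
  | ssh -c aes128-gcm@openssh.com backuphost \
      'mbuffer -q -s 128k -m 1G | zfs receive -F backup/data'
```

This keeps the encryption single-threaded but removes the stalls, which is often enough on an internal network; a genuinely parallel AES pipe tool was not identified in the discussion.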
<coreycb> ddellav, did you do neutron-lib?  if not I'll do that. neutron needs it.
<coreycb> ddellav, neutron-lib uploaded, that should at least fix some of the current neutron failures
<xnox> coreycb, jamespage: please review https://code.launchpad.net/~xnox/ubuntu/+source/ceilometer/+git/ceilometer/+merge/304406
<xnox> darn, i need to rebase i think
<coreycb> xnox, sorry, yeah I pushed some changes under you
<xnox> coreycb, repushing, lp should regenerate in a second or two
<xnox> coreycb, looking good now.
<coreycb> xnox, ok looking
<coreycb> xnox, pushed, thanks!  ok if that doesn't get uploaded for another week?  we are expecting upstream to release beta 3 this week or early next week.
<xnox> coreycb, that's all good.
<xnox> coreycb, hopefully next upload with actually migrate to yakkety from -proposed.
<xnox> coreycb, at the moment yakkety release is still at 6.0.0 and b2 is stuck in yakkety-proposed, but my patches should fix that.
<coreycb> xnox, let me upload then so we can make sure it migrates from proposed.  it's a little tricky ,but manageable, because we've been pushing updates to keep up with upstream tip.
<ajikny> i have an interesting problem to solve. I just upgraded the server from ubuntu 12.04 to 14.04. The upgrade itself went smoothly.
<ajikny> After the reboot the keyboard and mouse stopped working.
<ajikny> After the reboot the keyboard and *network stopped working
<ajikny> so i can neither ssh into the server nor type anything.
<ajikny> What can i do??
<JanC> ajikny: you have physical access to the server to confirm it actually booted?
<ajikny> yess
<ajikny> i was in front of it. i see the tty
<ajikny> i have livebooted and kb works in the live boot and up until grub
<ajikny> i have got syslogs. if you are interested
<JanC> I (or others) can look at them maybe
<ajikny> ya sure, ill upload them in a sec and share the link
<ajikny> http://s000.tinyupload.com/?file_id=43836141207735899929
<sarnold> can you use a site that doesn't serve junk ads?
<ajikny> sorry
<sarnold> this is polite :) http://paste.ubuntu.com/
<JanC> I didn't really see any ads  :)
<sarnold> I made the mistake of enabling the site in noscript to try to download the original file
<sarnold> I never saw the file, just junk..
<ajikny> http://paste.ubuntu.com/23113849/
<sarnold> ahh much better, thanks ;)
<ajikny> :)
<sarnold> it sure looks like the keyboard disconnects and reconnects every minute
<sarnold> is it a broken KVM switch perhaps? or a broken keyboard?
<JanC> there seem to be a lot of mouse connect/disconnects
<sarnold> broken usb switch?
<ajikny> the keyboard works very normally in live boot and in grub.
<JanC> Sachiru: not sure where you see keyboard disconnects?
<JanC> sarnold: I mean
<sarnold> JanC: you're right, they're all mice; I just assumed it'd be a keyboard on a server and didn't read it all that closely
<JanC> ajikny: is that a USB keyboard?
<ajikny> yes
<JanC> there seems to be an USB keyboard directly attached, and one attached to or emulated by an USB KVM switch?
<RoyK> s/an/a/
<RoyK> :P
<ajikny> There is only one that is attached..
<ajikny> I dont follow, Does kernel accept only the emulated kb??
<JanC> it should not
<JanC> things can break when a broken keyboard (real or emulated) is attached though
<JanC> still wouldn't explain networking, I guess
<ajikny> Yes, they must be unrelated.
<ajikny> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1313497 Can this be relevant??
<ubottu> Launchpad bug 1313497 in linux (Ubuntu) "USB keyboard unresponsive on fresh install" [Undecided,Invalid]
<ajikny> :)
<JetNet> hi
<JanC> ajikny: did the networking need special drivers that are not part of the default Ubuntu kernel?
<JanC> or were not
<JanC> also "Aug 30 13:20:01 mail mdadm[1471]: DegradedArray event detected on md device /dev/md/1"
<ajikny> JanC: no, just confirmed with the previous sysadmin. On 12.04 the network ran on the default kernel
<ajikny> This error must refer to the one disk of the RAID 1 which is wearing out. The machine has software RAID
<JanC> âAug 30 13:19:58 mail kernel: [   21.372184] init: failsafe main process (1071) killed by TERM signalâ
<JanC> not sure where that comes from
<danny_> What NAS protocol do you guys recommend for being used as additional storage to one ubuntu computer?
<JanC> ajikny: for the keyboard I would suggest trying to use one keyboard/KVM at a time & maybe try a different keyboard too
<JanC> and try different USB ports & disconnect all other USB devices
<JanC> just to make sure it's not a hardware issue
<JanC> maybe try PS/2  :)
<JanC> ajikny: did you see my suggestion?
<ajikny> JanC: I have connected on all ports and have different keyboards,
<ajikny> yes..
<ajikny> i lost connection for a while
<JanC> no network card detected in the kernel logs from what I can tell
<JanC> danny_: I would use NFS, but that's mostly because Samba needs special options to be able to use filenames that contain "?" and such over SMB/CIFS  :)
<danny_> JanC, Ok, I was planning on using NFS.  Now my only problem is getting write permissions on my pc
<ajikny> okay.. That is grave
<ajikny> JanC: if in the live boot i mount the drive and bind /dev /dev/pts /proc /sys /run ... followed by chroot into the mount point, will my commands run as if i was on the host system?
<sarnold> danny_: 'easiest' is to ensure that you've got the same userid numbers on both systems
<danny_> sarnold, Sorry I'm new to this, what exactly does that mean?  Is userid different than username?
<sarnold> danny_: yes; unix assigns every user a number for their userid, a number for groupid, and numbers for group memberships
<sarnold> danny_: the names often suffice but NFS uses the numbers; it expects them to be common to all clients and servers
<sarnold> danny_: in larger installations people may use NIS or NIS+ or LDAP or something similar to synchronize user accounts across all their machines
<sarnold> with just two machines it's probably easier to create users with the correct account numbers from the start, or manually modify them as needed
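A minimal sketch of the uid/gid check sarnold describes; run it on both machines and compare (the usermod/groupmod lines in the comments are illustrative, with a made-up uid):

```shell
# NFS identifies users by numeric uid/gid, not by name, so the same
# account should map to the same numbers on client and server.
uid=$(id -u)
gid=$(id -g)
echo "uid=$uid gid=$gid"   # run on both ends and compare the numbers
# If they differ, the ids can be changed on one side (as root), e.g.:
#   usermod -u 1001 danny && groupmod -g 1001 danny
#   chown -R danny:danny /home/danny   # fix ownership of existing files
```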
<danny_> sarnold, ok, I gotta eat then I'mma learn all of this
<trippeh_> I just used sshfs for this for a while. Wasn't too bad actually!
<trippeh_> even the torrent client was happy haha.
<sarnold> trippeh_: ha I figured you'd use xprtrdma :)
<trippeh_> hehe yeah sshfs didnt scale too well to 10G
<trippeh_> topped out at a hair over 2Gbps
<sarnold> trippeh_: patdk's got some ssh patches he uses for high-throughput ssh..
<sarnold> https://www.psc.edu/index.php/hpn-ssh
<trippeh_> yeah, they arent too effective for local stuff, unless you hit cpu limits on the crypto parts
<trippeh_> nowadays just use aesni ;)
<JanC> some NAS devices have settings to allow all NFS clients equal access (haven't looked at how exactly they implement this)
#ubuntu-server 2016-08-31
<danny_> I truly hate nfs now
<trippeh_> danny_: that is normal
<trippeh_> :-)
<RoyK> danny_: why?
<patdk-lap> trippeh_, they are effective for local transfers, more effective for higher latency though
<patdk-lap> the increased/dynamic window scaling for ssh helps a crapload
<patdk-lap> the crypto changes doesn't help as much with aes-ni, sure
<danny_> RoyK, Permissions issues, I can mount it but it won't let me open it sometimes, I change something and I can open it but can't put anything into it
<FarhaadN> hi, how can i execute 2 commands in one second? i need the 2 commands to run exactly together
<hateball> FarhaadN: is "command1 & command2" not exact enough ?
<Sling> FarhaadN: 1 second is an eternity, what do you mean with together? are the commands related?
<Sling> is this a concurrency problem you're trying to solve or do you just want 2 things to run really fast after each other
<FarhaadN> no not enough
<hateball> FarhaadN: have a look at parallel
<FarhaadN> Sling: i need together ,not after each other
<Sling> FarhaadN: give more details if you want better help :)
<FarhaadN> i need to run 2 commands for a voip service to find which channel is hung up and write to 2 files, and then search the channels one by one in the 2nd file
<sarnold> FarhaadN: what have you tried so far? what about it didn't work?
<sarnold> as Sling points out, one second is an eternity
<FarhaadN> if i run my 2 commands one after the other, probably one of them is not exactly what i need
<FarhaadN> sry my language
<FarhaadN> is bad
<FarhaadN> how can i run command parallel?
<sarnold> command1 & command 2
<sarnold> the & puts the first command in the background
<FarhaadN> you mean if i use & , the 2 commands run in one moment?
<FarhaadN> or run command 1 , and then run command 2?
<hateball> & means it starts the second one straight after it starts the first one
<hateball> FarhaadN: but as I said, have a look at the command parallel
<sarnold> FarhaadN: try this: "sleep 2 & sleep 2 & echo hi"  -- then hit enter a few times
<FarhaadN> [1] 4974
<FarhaadN> [2] 4975
<FarhaadN> hi
<FarhaadN> output is
<FarhaadN> what is that?
<sarnold> note how the 'hi' showed up immediately
<sarnold> hit enter again
<sarnold> and the shell will then tell you about the two commands that have also finished
<FarhaadN> hateball: can u tell me command for use parallel?
<FarhaadN> sarnold: yes, when i hit enter a few times, it shows up
<FarhaadN> you mean the commands exec immediately?
<sarnold> yes
<FarhaadN> tanx for helps hateball, Sling and sarnold  ;-)
<sarnold> have fun FarhaadN :)
<hateball> FarhaadN: the command is called "parallel"
<hateball> FarhaadN: "man parallel" to read how it works
<FarhaadN> hateball: tnx man
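The behaviour discussed above can be sketched in a few lines of shell: `&` backgrounds a command so the next one starts almost immediately, and `wait` blocks until the background jobs finish. Both sleeps below run concurrently, so the elapsed time is about 1 second, not 2.

```shell
# Run two commands "together" with & and collect them with wait.
start=$(date +%s)
sleep 1 &
sleep 1 &
wait
elapsed=$(( $(date +%s) - start ))
echo "elapsed: ${elapsed}s"   # ~1s: both sleeps ran at the same time
# GNU parallel (from the 'parallel' package) does the same for whole
# command lists, e.g.:  parallel ::: 'cmd1 > out1' 'cmd2 > out2'
```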
<jamespage> ddellav, coreycb: https://bugs.launchpad.net/ubuntu/+source/pyopenssl/+bug/1618765
<ubottu> Launchpad bug 1618765 in python-cryptography (Ubuntu) "TypeError: cannot instantiate ctype 'EVP_MD_CTX' of unknown size" [Undecided,New]
<jamespage> for crypto issue
<jamespage> I think we'll need to bump in a new pyopenssl version
<ddellav> ok, i'll take a look jamespage
<jamespage> ddellav, I'm on it - focus on the other oslo bits for now
<ddellav> allllrighty
<jamespage> ddellav, oh also uploaded os-vif to experimental
<jamespage> hopefully that will get through the NEW queue quickly
<ddellav> awesome
<xnox> coreycb, obviously ipmi test failed on armhf & ppc64el =) so actually restrict ipmi test to armhf/ppc64el. Pushed to master https://git.launchpad.net/~ubuntu-server-dev/ubuntu/+source/ceilometer/commit/?id=d8500b77fa9f3f619439bac404da1d7469be066a
<xnox> should be good to go, with next proper release.
<jonah> Hi all, you'll have to excuse my ignorance as I've always been a cPanel user, but I've just set up a small lamp server without any gui tools etc. In cPanel though you tend to have a site that has its own home user, then its own databases and users for mysql. So I'm just trying to get my head round how this works without cPanel to keep things relatively secure. If I just want say 5 sites on this small server, do I make each site its own
<jonah> home user, or do I just put them in separate directories in /var/www/vhosts/site1 /var/www/vhosts/site2 etc. If I do the latter do I just login to mysql to make databases and users - or will these not then work as they're all under one main user? Or if I make each site its own linux user with home dir, will those marry up with mysql? Just a little confused how this works with permissions/chown etc...
<xnox> jonah, create mysql database per website. Use separate credentials for each per-site mysql database (non-admin)
<xnox> coreycb, nova-scheduler.service fails to start simply after package installation, and hence autopkgtest fails - see https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1618812
<ubottu> Launchpad bug 1618812 in nova (Ubuntu) "nova-scheduler autopkgtest fails" [Undecided,New]
<jonah> xnox: so don't bother with different linux users, just have different database users and databases?
<xnox> jonah, yeah.
<jonah> xnox: and then do I just chown them all to my normal ubuntu sudo user?
<jonah> xnox: the site files I mean
<xnox> jonah, thats minimal separation. why ubuntu?
<xnox> why not www-data?
<xnox> (all webservers on ubuntu use www-data user to serve websites)
<jonah> xnox: ah ok, just as I login with my normal user to make the files, create the site files whatever, so then I just chown them instead to www-data...?
<xnox> jonah, just read this https://help.ubuntu.com/lts/serverguide/
<jonah> xnox: I just ask in terms of users really because say one site is a cms and a hacker gets in with a mysql injection or something, won't they then be able to insert a file that can also get at the other sites due to them all using www-data rather than their own user perms?
<xnox> jonah, no... because mysql user cannot write to www-data owned directories.
<xnox> jonah, if you care about security, you can run all of your websites and databases inside lxc containers with lxd and/or juju local provider.
<xnox> that way things are namespaced and protected/isolated from each other. But uses a little more resources.
<jonah> xnox: so the mysql user is safe, but what about if say the cms is bruteforced and then a file can be uploaded from the backend - would that then mean all the files use the sames owner and can be compromised?
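The per-site scheme xnox suggests (one database plus one non-admin user per site) can be sketched like this; the database name, user name, and password are placeholders:

```shell
# Write the statements to a file, then feed it to mysql as root:
#   mysql -u root -p < site1.sql
cat > site1.sql <<'SQL'
CREATE DATABASE site1;
CREATE USER 'site1user'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON site1.* TO 'site1user'@'localhost';
FLUSH PRIVILEGES;
SQL
```

Repeating this per site means a compromised site's credentials only reach that site's database.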
<ddellav> any chance i can get unbanned from #ubuntu? Looks like the bot banned a few billion ipv6 addresses
<ajikny> JanC: Thanks.. Issue solved. We gave up on diagnosing kb
<coreycb> xnox, I'll take a look and include your change in the next upload.  thanks again for the fixes!
<ddellav> coreycb ready for review: lp:~ddellav/ubuntu/+source/python-oslo.cache, lp:~ddellav/ubuntu/+source/python-oslo.db, lp:~ddellav/ubuntu/+source/python-oslo.vmware, lp:~ddellav/ubuntu/+source/python-oslotest, lp:~ddellav/ubuntu/+source/designate
<coreycb> ddellav, great, will look shortly
<ddellav> coreycb im working on horizon b3 right now but (as usual) horizon is fighting me
<coreycb> ddellav, oh is it out?
<ddellav> stupid debuild applies the patches, then complains that the files have changed because of the applied patches
<ddellav> yea
<ddellav> coreycb according to your little webapp ;)
<ddellav> very helpful :)
<coreycb> ddellav, oh yeah look at that :)  "webapp"
<ddellav> hey, it's on the web and it's app-like so the name is appropriate
<ddellav> heh
<coreycb> xnox, were you planning on fixing the nova-scheduler issue?
<coreycb> xnox, just planning my next move, I don't want to dup any work
<munsking> Hello, i'm trying to set up an ubuntu web server, but i need php 5.x, is that still in the official repos?
<rbasak> munsking: yes - use 14.04.
<munsking> rbasak: dang, i just spent a few hours setting up and configuring 16.04
<xnox> munsking, you can run lxc 14.04 container with php.
<xnox> and reverse proxy to that from the 16.04 host.
<munsking> xnox: alright, i'll take a look at that
<NetworkingPro> Anyone familiar with lsof?
<xnox> munsking, $ lxc start ubuntu:x myphpserver -> $ lxc exec myphpserver bash
<xnox> in $ lxc ls -> you will see its ip address.
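A hedged sketch of xnox's suggestion: run PHP 5 in a 14.04 container and reverse-proxy to it from Apache on the 16.04 host. Note that `lxc launch` (rather than `lxc start`) is what creates and starts a new container; the container name and IP below are placeholders.

```shell
lxc launch ubuntu:14.04 phpserver      # create + start a trusty container
lxc list phpserver                     # note the container's IP address
# On the 16.04 host, proxy web traffic into the container:
sudo a2enmod proxy proxy_http
echo 'ProxyPass "/" "http://10.0.3.123/"' | \
  sudo tee /etc/apache2/conf-available/php-proxy.conf
sudo a2enconf php-proxy
sudo systemctl reload apache2
```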
<coreycb> jamespage, beisner: when you get a chance, can you promote cinder 2:7.0.2-0ubuntu1~cloud1 to liberty-proposed?
<jamespage> coreycb, yah
<coreycb> jamespage, thx
<jamespage> coreycb, done
<coreycb> ddellav, oslo.db is failing -- https://launchpadlibrarian.net/281914828/buildlog_ubuntu-yakkety-amd64.python-oslo.db_4.13.0-0ubuntu1_BUILDING.txt.gz
<ddellav> coreycb thats the same error i saw when i was building in yakkety without the delta (the patch)
<ddellav> it builds fine for me locally, i'll push it up to a PPA
<coreycb> hmm, which patch?
<ddellav> coreycb are you building from the ubuntu/newton branch?
<ddellav> coreycb pymysql-default.patch
<coreycb> ah, yeah that's what I'm prob missing
<rbasak> cpaelzer: if there is no objection to my post about ntpdate, does that allow you to Won't Fix (with reference to the thread in a comment) all of the ones you're blocked on? Or would there be any left?
<jamespage> coreycb, ddellav: crypto problems sorted in xenial and yakkety
<jamespage> oslo.config is failing to backport atm
<coreycb> jamespage, \o/
<ddellav> jamespage  :D
<coreycb> jamespage, ddellav: all of ddellav's packages from earlier today are uploaded now
<jamespage> \o/
<ddellav> coreycb yay
<med_> hi, is there a way to remove all but the latest two kernels in a stream in Ubuntu server like there is in yum?
<med_> zul, jamespage, kirkland ^
<med_> coreycb, ^
<smb> med_, usually "sudo apt-get autoremove --purge" should do what you want (and also clean other unneeded packages)
<med_> smb, yep we're considering that without the purge
<med_> (purge takes some config files we likely need)
<med_> was looking for a kernel specific solution as I'm running out of space on /boot (specifically)
<med_> ah kirkland has buried this in byobu
<med_> purge-old-kernels
<kirkland> med_: you're welcome.
<med_> :-)
<med_> maybe only on newer (xenial)
<med_> doesn't work in my trusty env
<med_> yep in xen not in t
<med_> 5.103 or newer
<med_> ah previously in bikeshed
<coreycb> ddellav, what's left for non-client libraries?
<coreycb> hi med_  o/
<ddellav> coreycb afaik the only one left is oslo-log and that was a bit of a pickle. The version currently available won't build for me. It's throwing test failures
<coreycb> ddellav, got a link to any?
<ddellav> coreycb just pull-lp-source python-oslo.log. I went and updated it but it's still failing at the same test. Even when built via ppa
<coreycb> ddellav, ok I'll take a look
<radish_> hi everyone! Just a quick question: On my mysql server, character_set_server and collation_server are set to the defaults latin1 and latin1_swedish_ci respectively. If I would change them to utf8 and utf8_general_ci respectively, will this affect existing databases/tables/data?
<ddellav> coreycb can you move your bug back to fix committed and comment that it needs to be re-promoted? I did yaql and monasca but this one is owned by you: https://bugs.launchpad.net/ubuntu/+source/python-aodhclient/+bug/1552415
<ubottu> Launchpad bug 1552415 in python-aodhclient (Ubuntu) "[MIR] python-aodhclient" [High,Fix released]
<coreycb> ddellav, done
<coreycb> ddellav, I'm going to work through the rest of the deps from the upstream report this afternoon and come back to oslo.log
<ddellav> coreycb ok, what are some of the ones you're going to work on? I guess it's not super clear which ones need updating
<coreycb> for now I'm working from here to see what's done: https://private-fileshare.canonical.com/~coreycb/ca_upstream_versions_newton.html
<coreycb> ddellav, ^ after that I just refer to upper-constraints.  at this point we will likely limit our uploads to openstack specific packages, unless a non openstack-specific package needs to get updated.
<coreycb> since we're past feature freeze
<coreycb> ddellav, although, be careful with that report. the os-brick upstream version from https://github.com/openstack/releases/tree/master/deliverables/newton doesn't match what's in upper-constraints.
<coreycb> ddellav, https://review.openstack.org/#/c/360878/
<cpaelzer> rbasak: I think that will allow me to close all formerly assigned ones - I'll do so and let you know only in case there are any left
<cpaelzer> rbasak: I was able to drop all but bug 1046340 due to that reference
<ubottu> bug 1046340 in ntp (Ubuntu) "ntp package missing logcheck exceptions file /etc/logcheck/ignore.d.server/ntp" [Low,Triaged] https://launchpad.net/bugs/1046340
<cpaelzer> rbasak: and that one remains at low prio waiting for Debian
<UNIcodeX> during the install process, i'm asked if i want to encrypt my home directory, to which i replied 'yes'. but then i have the option to use encrypted LVM. could anyone advise me on how to proceed?
<sarnold> the encrypted home directories are done using ecryptfs; this allows different users to have different keys/passwords for their data
<rbasak> cpaelzer: thanks!
<rbasak> cpaelzer: I wonder if that one is worth a delta. What do you think?
<sarnold> the encrypted lvm is probably faster but all the data is going to be encrypted with the one key
<UNIcodeX> sarnold, thanks. that's on top of drive encryption, if it were to also be enabled? is the drive encryption LUKS based? I think I have that right...
<sarnold> UNIcodeX: yeah, the lvm / encrypted disk ought to be LUKS .. I haven't looked lately :/
<cpaelzer> rbasak: yeah, especially not particularly important, but since I'm like 75% done anyway already probably the easiest way to handle it
<cpaelzer> rbasak: not sure about an SRU though, but easy enough to be fixed in Yakkety at least
<cpaelzer> rbasak: I assigned to myself, but don't know yet when I get to it
<rbasak> cpaelzer: sure, thanks.
<UNIcodeX> sarnold, thanks. Do you know if it makes use of the patch, which is talked about on the Kali site? The patch allows the definition of a nuke password.
<sarnold> UNIcodeX: I'd be surprised but I haven't looked :)
<UNIcodeX> sarnold, k
<UNIcodeX> sarnold, I found what I needed to know. Here it is in case you'd be interested in it. http://linuxbsdos.com/2014/01/14/apply-the-nuke-patch-to-luks-cryptsetup-in-linux-mint-16-and-ubuntu-13-10/
<sarnold> neat
<kirkland> med_: purge-old-kernels has been around for ages in bikeshed;  honestly, it's not supposed to exist at all, as 'sudo apt autoremove' should just work
<kirkland> med_: alas, apt autoremove doesn't actually work in many cases
<kirkland> med_: so purge-old-kernels is kind of a stop-gap, hack
<kirkland> med_: but it's saved many people who have long running cloud instances (more than a year old), and their little 8GB root disk fills up with 37 kernels in /boot
<kirkland> med_: it's a standalone shell script that you can wget or bzr or git grab and run on any Ubuntu machine
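The options kirkland and smb mention, sketched as commands (`purge-old-kernels` needs byobu/bikeshed installed, and is only in newer releases as noted above):

```shell
# See which kernel packages are installed, oldest first:
dpkg -l 'linux-image-*' | awk '/^ii/ {print $2}' | sort -V
# Normal cleanup route:
sudo apt-get autoremove --purge
# Stop-gap when autoremove leaves too much in /boot:
sudo purge-old-kernels --keep 2    # keeps the two newest kernels
```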
<b3h3m0th> Can ubuntu server 14.04 running on 16 GB of RAM and 3GHz quad core xeon processors handle ~5000 tcp connections to a socat wrapper running a process ?
<sarnold> b3h3m0th: that sounds like a definite "maybe" :)
<sarnold> b3h3m0th: how long do the processes live? how long do the connections live? how much data through them?
<ogra_> and how good is your network card/connection
<NetworkingPro> hey everyone
<NetworkingPro> Anyone know a command that can dump the contents of memory relating to a particular appliation?
<sarnold> NetworkingPro: gdb's gcore command https://sourceware.org/gdb/onlinedocs/gdb/Core-File-Generation.html
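A sketch of sarnold's pointer: `gcore` ships with gdb and snapshots a running process's memory to a core file without killing it. The process name and paths are placeholders.

```shell
# Dump the memory of a running process to /tmp/appdump.<pid>:
sudo gcore -o /tmp/appdump "$(pidof myapp)"
# Inspect the dump afterwards, e.g.:
#   gdb /usr/bin/myapp /tmp/appdump.<pid>
#   strings /tmp/appdump.* | less
```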
<b3h3m0th> sarnold:  child process spawned by the socat (each instance of the app socat is running) runs for under a minute
<b3h3m0th> and TCP traffic is there from start to end of that minute
<b3h3m0th> actually << 1 min
<b3h3m0th> ~10 seconds
<b3h3m0th> so in the worst case, all 5000 users connect simultaneously
<sarnold> how long does it take to exec each of those?
<Exec> ...
<sarnold> lol
<b3h3m0th> the process is interactive session
<b3h3m0th> with sigalarm of 15 seconds
<sarnold> sounds promising
<danny_> What are the advantages of Ubuntu-server over debian?  Systemd?
<RoyK> danny_: debian 8 has systemd
<RoyK> danny_: tbh, I prefer debian over ubuntu for servers
<danny_> RoyK, Yeah, I just wanted to hear why I might be wrong about debian being generally better for servers
<danny_> RoyK, I guess if you need something really up to date on a server for some reason
<RoyK> then either debian backports or some fresh ubuntu install
<RoyK> (or fedora, if you're completely out of your mind)
<danny_> other than that I guess just familiarity with ubuntu
<jesseg> hahaha how stupid. So I stuck latest ubuntu-server ISO on an external USB mass storage unit because, I mean, in 2016, who really wants to burn a *CDROM*? OK great, install fails, because it can't mount the cdrom. Yes, Virginia, the path "/dev/sr0" seems to be hardcoded into ubuntu server installation scripts! LOL LOL. So I deleted /dev/sr0 and symlinked it to /dev/sdb and now it's happily installing
<RoyK> managing ubuntu is about the same as debian
 * RoyK hands jesseg a stack of floppies
<sarnold> jesseg: how strange, I did the same dd to usb and it worked flawlessly first try :/
<sarnold> jesseg: please file bug, 'ubuntu-bug debian-installer', I think that'll get it to the right place
<RoyK> sarnold: perhaps he's installing Hardy ;)
<sarnold> RoyK: hehe :)
#ubuntu-server 2016-09-01
<Nitsuga> Has anyone got systemd-networkd's DHCP sever working?
<Nitsuga> Here it doesn't seem to do anything, but it doesn't complain either.
<RoyK> Nitsuga: does systemd have a dhcp server? I usually just use isc-dhcp
<sarnold> it does
<Nitsuga> RoyK, there's a systemd-everythingd
<sarnold> see dhcpserver in https://www.freedesktop.org/software/systemd/man/systemd.network.html
<patdk-lap> there is a systemd-emacs?
<patdk-lap> or emacs-systemd?
<sarnold> Nitsuga: oh by the way I think systemd happily continues on when it doesn't understand something
<sarnold> Nitsuga: try systemd-analyze verify
<Nitsuga> sarnold, I introduced a typo on purpose and it didn't complain. Let's see…
<Nitsuga> The systemd.network man page promises very uncomplicated routing with the DHCPServer= and IPMasquerade= options, but the latter doesn't work in Ubuntu either.
<Nitsuga> It's missing a compile option.
<Nitsuga> sarnold, systemd-analyze verify returns nothing.
<sarnold> dang
<RoyK> patdk-lap: there should be https://xkcd.com/378/
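For reference, a minimal systemd-networkd DHCP-server sketch per the systemd.network man page linked above; the interface name, address, and pool range are assumptions, and the interface must not be managed by anything else.

```shell
# Hypothetical /etc/systemd/network/lan.network serving DHCP on eth1.
cat > /etc/systemd/network/lan.network <<'EOF'
[Match]
Name=eth1

[Network]
Address=192.168.10.1/24
DHCPServer=yes

[DHCPServer]
PoolOffset=100
PoolSize=50
EmitDNS=yes
DNS=192.168.10.1
EOF
systemctl restart systemd-networkd
```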
<baptistemm> hello, I'm looking for to script unattended upgrade using 'do-release-upgrade -f DistUpgradeViewNonInteractive'. In order to replace conffile with newest version I put a config file /etc/apt/apt.conf.d/local with 'Dpkg::Options { "--force-confdef"; "--force-confnew"; }'. However after the upgrade I find files *.dpkg-new
<baptistemm> Am I forgetting something
<baptistemm> I'm upgrading from 12.04 to 14.04
<frickler> jamespage: coreycb: neutron has just landed a patch that requires dhcp_release6, which was only added recently in dnsmasq 2.76, see https://review.openstack.org/301747. now that is fine for yakkety, but not for xenial, do you think it would be possible to just add the new utility back into dnsmasq for xenial?
<jamespage> xnox, yikes - did the gpg switch likely kill things like add-apt-repository?
<xnox> jamespage, shouldn't.
<xnox> let me test that one.
<jamespage> Warning: apt-key output should not be parsed (stdout is not a terminal)
<jamespage> gpg: no valid OpenPGP data found.
<xnox> har har
<jamespage> xnox, all of our yakkety branch builds for OpenStack and tripping on that today
<xnox> interesting how that is not an autopkgtest....
<xnox> yeah, gnupg2 with the switch is in -proposed already.
<jamespage> xnox, yah - we build with proposed enabled
<xnox> let's see if i can fix that quickly.
<jamespage> pls
<xnox> jamespage, good! =)
<baptistemm> No one has a hint for me? (I can repeat my question if needed)
<baptistemm> I'm doing an unattended upgrade from 12.04 to 14.04 using 'do-release-upgrade -f DistUpgradeViewNonInteractive'. In order to replace conffile with newest version I put a config file /etc/apt/apt.conf.d/local with 'Dpkg::Options { "--force-confdef"; "--force-confnew"; }'. However after the upgrade I find files *.dpkg-new
<baptistemm> am I missing something
<xnox> jamespage, off by chance do you know if autopkgtests allow enough internets for add-apt-repository to add launchpad ppas?
 * xnox is adding an autopkgtest to software-properties to make sure add-apt-repository doesn't break in the future
<jamespage> xnox, i'd have to defer to pitti for that
<rbasak> xnox: AFAIK, you can do that.
<xnox> yeay =)
<jamespage> coreycb, ddellav: fyi add-apt-repository is currently foobar on yakkety - xnox is working on a fix
<jamespage> its blocking all yakkety branch builds atm
<jamespage> coreycb, ddellav: xenial builds are ok apart from barbican and trove
<xnox> jamespage, waiting for launchpad to crank it out.... https://launchpad.net/ubuntu/+source/software-properties/0.96.24.4
<jamespage> xnox, good oh - thanks for picking that up!
<xnox> jamespage, rmadison says new software-properties-common is in. could you retrigger/retest whatever was failing for you please?
<jamespage> xnox, ok so add-apt-repository looks ok now - but tripping over something sbuild-ish now
<jamespage> Failed to sign dummy archive Release file.
<xnox> jamespage, right "sudo rm -rf /var/lib/sbuild/apt-keys" on the host
<xnox> which invokes sbuild
<xnox> jamespage, or alternatively the host should have up to date sbuild, and run sbuild-update -k
<xnox> jamespage, do you care about building things for before precise?
<jamespage> xnox, no
<xnox> then purge the apt-keys
<xnox> and don't ever call sbuild-update -k | --keygen
<jamespage> xnox, hmm that's not working so well
<jamespage> E: Local archive GPG signing key not found
<jamespage> I: Please generate a key with 'sbuild-update --keygen'
<jamespage> do I have to twiddle a knob somewhere to allow that?
<xnox> jamespage, what is the version of sbuild on your host?
<xnox> what is the ubuntu release of the host?
<jamespage> xnox, 0.69.0-2ubuntu1~ubuntu14.04.1~ppa201607141228
<xnox> i've backported sbuild from yakkety and run that on xenial....
<jamespage> its a backport of the xenial version on trusty
<xnox> ok.
<xnox> either get a newer backport. or there are manual things you can run on the host to export a armored keypair.
<xnox> let me test that combo here.
<xnox> or rebackport yakkety's sbuild....
<xnox> jamespage, you are not gonna upgrade the host to xenial just yet, are you?
<jamespage> xnox, that's a bit more than I can do right now
<jamespage> it hosts some other bits and pieces as well
<jamespage> I can backport it
<xnox> jamespage, but NEWS file in sbuild claims that since 0.67.0 key generation is optional.
<xnox> jamespage, can i see the logs of what/where breaks for you?
<xnox> or access those systems, no?
<xnox> backporting yakkety's sbuild is best option, i think.
<jamespage> xnox, apparently we were still running an older sbuild version
<jamespage> not sure why
<jamespage> checking now
<NetworkingPro>  anyone know a good server config management tool?
<JanC> the command line?  :)
<JanC> also: what do you actually want it to do?
<patdk-wk> ssh :)
<lunaphyte> your brain?
<smoser> rbasak, you have a dep8 test example you'd suggest me start from ?
<coreycb> ddellav, can you also include sahara 4.0.1 and ironic 5.1.2 in the mitaka SRU
<ddellav> coreycb sure
<coreycb> ddellav, thanks
<coreycb> ddellav, also for liberty can you pick up aodh 1.1.2, designate 1.0.2, ironic 4.2.5, manila 1.0.1, and sahara 3.0.2? (sorry if this went through twice, got disconnected)
<ddellav> coreycb i'll add it to the bug
<coreycb> ddellav, sound good
<sarnold> rbasak: hey are you and nacc doing the git thing for multipath-tools? someone in #ubuntu-devel just proposed a patch, see 1231182 -- it got lost last time around...
<xnox> jamespage, https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=827315
<ubottu> Debian bug 827315 in src:sbuild "sbuild: Does not work with gnupg 2.x installed in the chroot" [Important,Fixed]
<xnox> *soon* things will break =)
<xnox> or might. basically new enough sbuild is required.
<xnox> actually no, all has been fixed.
<mundus2018> Can someone help me to get back into my system
<mundus2018> I got locked out when trying to install openvpn
<mundus2018> I can mount it
<RoyK> mundus2018: was this after accidentally enabling ufw after forgetting to allow 22/tcp (or ssh) ?
<mundus2018> Yeah I enabled ufw and didnt allow ssh
<mundus2018> but is it on at reboot?
<RoyK> mundus2018: ufw doesn't require a "save" argument - it just sticks
<RoyK> mundus2018: what sort of machine is this?
<mundus2018> like hardware or software?
<RoyK> vm? physical?
<mundus2018> physical
<jelly> <mundus2018> in /etc/default/ufw I changed /etc/default/ufw DEFAULT_FORWARD_POLICY="DROP" to DEFAULT_FORWARD_POLICY="ACCEPT"
<jelly> that however doesn't say anything about INPUT and OUTPUT
<RoyK> mundus2018: do you have KVM access to it or something?
<mundus2018> its a kimsufi box
<jelly> RoyK, he can boot a rescue image, and that's what he's doing right now
<RoyK> within reach?
<RoyK> jelly: ok
<jelly> the rescue image provides some sort of remote access apparently
<mundus2018> yeah
<jelly> this is a common setup with cheap providers
<mundus2018> this is essentially ovh
<jelly> (hardware that has proper remote mgt is usually more expensive)
<jelly> mundus2018, you could chroot into the thing, see if you can disable the ufw service with service or systemctl command, then reboot
<mundus2018> alright I can try that
<mundus2018> im rebooting to see if something I did worked
<jelly> if you're playing with cron jobs, you can also make one that does "iptables-save > /root/iptables-saved-temp" and look at that file afterwards
<mundus2018> that will save the active config?
<jelly> that would confirm whether it's iptables state that's keeping you locked out
<mundus2018> ok
<jelly> sorry, "/sbin/iptables-save > /root/iptables-saved-temp"
<jelly> cron PATH does not contain /sbin by default
<mundus2018> ok so reverting that stuff did not work
<jelly> which error message do you get when you try to ssh into your system?
<mundus2018> can I chroot and just do sudo ufw allow 22/tcp
<jelly> maybe, I'm clueless about ufw
<jelly> and you don't need "sudo" when you're already root
<mundus2018> on putty, Network Error: Connection Timed Out
<jelly> right, if ping works that does point to a local fw issue
<mundus2018> how can I pipe errors?
<mundus2018> this is my rules  http://termbin.com/j621
<mundus2018> Ill add -A ufw-user-input --dport 22 -j ACCEPT and hopefully that will work
<RoyK> mundus2018: 2>&1 | pastebinit
<mundus2018> thanks
<RoyK> mundus2018: there's no port 22 in that pastebin
<mundus2018> yeah just added it
<mundus2018> I added "-A ufw-user-input --dport 22 -j ACCEPT"
<RoyK> usually ufw allow 22/tcp should do
<mundus2018> I couldn't do that, it said errors with /dev/urandom
<RoyK> it really shouldn't need /dev/urandom
<mundus2018> so that didnt work
<mundus2018> brb
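A sketch of the rescue-image route jelly suggested (the root partition and mount point are placeholders). As an aside: an iptables rule using `--dport` is only valid together with `-p tcp` or `-p udp`, which may be why the hand-added `-A ufw-user-input --dport 22 -j ACCEPT` line above didn't take effect.

```shell
# From the rescue system: mount the installed root, chroot in, and
# turn ufw off so it cannot re-apply its rules on the next boot.
mount /dev/sda1 /mnt                    # placeholder root partition
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
chroot /mnt ufw disable                 # or: chroot /mnt systemctl disable ufw
umount /mnt/dev /mnt/proc /mnt/sys /mnt
reboot
```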
<eatingthenight> hey, how can I create a FAT32 partition with sfdisk, I don't see an option for it when i run sfdisk -T
<eatingthenight> ops
<eatingthenight> sorry
<eatingthenight> blind, it's W95 FAT32
<tarpman> eatingthenight: normally I think you want 0x0c (W95 FAT32 (LBA))
<tarpman> eatingthenight: note of course that the partition type code is orthogonal to what filesystem you actually choose to format the partition with...
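Both halves of tarpman's point, sketched against a placeholder disk `/dev/sdX` (modern sfdisk script syntax assumed): the partition *type* byte (0x0c, W95 FAT32 LBA) only labels the partition, while `mkfs.vfat` actually creates the filesystem.

```shell
# Create a single whole-disk partition of type 0c, then format it.
echo 'start=2048, type=c' | sudo sfdisk /dev/sdX
sudo mkfs.vfat -F 32 /dev/sdX1
```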
<rbasak> sarnold: we're updating trees on request right now - we haven't cronned the importer yet.
<sarnold> aha
<renatosilva> will ubuntu server updates address ssl3 disabling?
<rbasak> renatosilva: try asking in #ubuntu-hardened, though most full time people are done for the day now so it might be worth asking tomorrow when there are more people about.
<sarnold> renatosilva: I think we've only disabled ssl3 in xenial and yakkety
<renatosilva> sarnold: why not all?
<sarnold> renatosilva: likely because we don't want to break existing deployments
<renatosilva> so you people prefer instead to keep an LTS server vulnerable to ssl3 attacks? weird!
<renatosilva> I would never have known the server I manage is vulnerable if I had not randomly found it; a pity that updating the system packages does not actually bring all important updates!
#ubuntu-server 2016-09-02
<pradiprwt> Hi everyone, I am facing an issue with the provider network: it is not accessible from outside, and the route gateway interface always shows as down
<pradiprwt> can anyone please help me with how I can troubleshoot
<pradiprwt> I have deployed openstack using autopilot
<tjbiddle> Hi all. Could really use your help. I'm going on day 3 of banging my head on the wall without success here. Trying to set up a NAT server so that another server can have internet access. The NAT server has full Internet access, the client server has DNS working - but ping, curl, etc. will not work. All ports are wide open for internet and intranet from my hosting provider's security group ACLs. Here's some debug information on both machines -
<tjbiddle> https://gist.github.com/thomasbiddle/ef9ad16d33df722f5061106042c2d2ae
<jamespage> ddellav, coreycb: nova b3 underway
<jamespage> ditto cinder
<pradiprwt> Hi Everyone, Can anyone help me to understand openstack-neutron in an autopilot deployment
<jamespage> ddellav, coreycb: nova, cinder and networking-ovn all uploaded
<jamespage> moving on to keystone
<jamespage> as well as manila
<jamespage> coreycb, ddellav: doing glance
<jamespage> keystone uploaded
<jamespage> coreycb, ddellav: manila uploaded
<jamespage> glance needs cursive to work its way in - in the Ubuntu binary NEW queue atm
<jamespage> ddellav, coreycb: picking ceilometer up
<jamespage> branch builds make this high throughput - thank you for helping to make that effective!
<jamespage> and aodh
<zioproto> hello all
<zioproto> jamespage: are you here today ? :)
<jamespage> I am
<zioproto> I tried to rebuild the stable/liberty horizon package
<zioproto> and did not build for trusty
<zioproto> https://www.dropbox.com/s/faadi3sfkmfjgeb/horizon_8.0.1-0ubuntu1_amd64-20160902-0955.build?dl=0
<zioproto> where is the proper place to file a bug for these compile issues?
<jamespage> zioproto, ok so horizon is a little awkward to build, due to a packaging nuance - it uses two orig.tar.gz 's
<jamespage> one for horizon, and then we bundle all of the xstatic dependencies in a second one
<jamespage> zioproto, have a look at debian/README.source
<jamespage> for details on how to generate that second tarball
<zioproto> OK will try to build it after reading the README and I will give you feedback
<zioproto> I noticed because I am upgrading the documentation I have not github
<jamespage> coreycb, ddellav: aodh uploaded
<ddellav> jamespage very productive this morning
<ddellav> jamespage is there a list somewhere of what's left? Or is it just today's releases - what you've just done?
<ddellav> s/-/minus
<zioproto> jamespage: so running the command ./debian/rules refresh-xstatic will create the tarball
<zioproto> it creates it in the wrong directory
<zioproto> I moved it to build-area
<zioproto> sorry I mean the tarball horizon_8.0.1.orig-xstatic.tar.gz
<ddellav> zioproto i found it's better to use debuild with horizon.
<ddellav> i've not gotten gbp to work
<zioproto> ddellav: what exactly do I have to do that will differ from here: https://wiki.ubuntu.com/OpenStack/CorePackages ?
<ddellav> zioproto it's basically the same, except instead of using gbp buildpackage -S you use debuild -S
<ddellav> obviously the xstatic-orig tarball needs to be in the same directory as the orig.
<zioproto> so I should first manually call ./debian/rules refresh-xstatic and place the tarball in the right place
<zioproto> I will try
<ddellav> you call refresh-xstatic, that will put the extra orig tarball in ../
<ddellav> then from inside the same directory you run debuild
<ddellav> it's a drop-in replacement for gbp
<zioproto> ddellav: when I run gbp buildpackage -S it will download in the folder ../build-area the tarballs. But when I use debuild -S -us -uc it complains there are no tarballs
<ddellav> zioproto are you trying to update horizon or just build what's there in the repo already?
<zioproto> just build what is in the repo already
<zioproto> ddellav: started with debcheckout --git-track='*' horizon
<ddellav> ah ok, well gbp will pull out the existing orig tar from the pristine-tar branch, hence its name: git-buildpackage. I thought you were updating, so you would've had to pull an updated orig tarball for that to work.
<zioproto> then I checkout the stable/liberty branch
<zioproto> I pasted the build log earlier
<ddellav> ok, since you're not updating, you don't want to do refresh-xstatic. That's only if you're updating a non-stable release. What you want to do is pull the existing orig-xstatic tarball from the archive and use that
<ddellav> zioproto i assume you're using the 8.0.1 release? Here is the existing xstatic tarball: https://launchpad.net/ubuntu/+archive/primary/+files/horizon_8.0.1.orig-xstatic.tar.gz
<ddellav> you put that, plus this: https://launchpad.net/ubuntu/+archive/primary/+files/horizon_8.0.1.orig.tar.gz in the directory directly above the horizon repo directory, then from INSIDE horizon, run debuild -S -us -uc
<ddellav> and that will get you the .dsc so you can run sbuild or do whatever it is you want to do
<coreycb> ddellav, zioproto: updated https://wiki.ubuntu.com/OpenStack/CorePackages for horizon
<coreycb> ddellav, jamespage: we can follow here to see where we're at with b3: https://private-fileshare.canonical.com/~coreycb/ca_upstream_versions_newton.html
<ddellav> coreycb ah right, good call
<coreycb> ddellav, jamespage: I fixed it up to favor upper-constraints versions over what's in github.com/openstack/releases/deliverables/newton
<zioproto> ddellav: I was able to build horizon using the debuild command
<zioproto> But I really want to understand
<zioproto> I did a diff of the dsc files
<zioproto> file
<zioproto> I mean the one created with debuild and the one created with gbp buildpackage
<zioproto> http://pastebin.com/j8cXRWFQ
<zioproto> so different horizon_8.0.1-0ubuntu1.debian.tar.xz are generated ?
<ddellav> so im confused, you ran gbp and it completed and generated a dsc?
<zioproto> yes
<ddellav> ok then what was the issue?
<zioproto> the fail is later, when I run sbuild-liberty against the dsc
<ddellav> i mean if that worked you didn't need to use debuild
<zioproto> exact, but the dsc generated by debuild is proper so that later I can build successfully with sbuild-liberty
<ddellav> what is the issue with sbuild? is it saying it can't import xstatic?
<zioproto> https://www.dropbox.com/s/faadi3sfkmfjgeb/horizon_8.0.1-0ubuntu1_amd64-20160902-0955.build?dl=0
<zioproto> here the build log
<ddellav> yea, no module named xstatic.main
<ddellav> that's what always happened when i used gbp as well which is why i use debuild. For whatever reason gbp does not properly import the xstatic tarball
<ddellav> though it's showing a weird patch problem as well which shouldn't be happening
<zioproto> yes I am a bit confused about what the real problem is
<zioproto> btw I found the workaround of using debuild
<coreycb> jamespage, ddellav, I'm working on horizon
<zioproto> coreycb: fyi I was on the stable/liberty branch
<coreycb> zioproto, ok.  sorry I was talking about newton b3 there.
<zioproto> no problem
<ddellav> coreycb jamespage im working on neutron
<ddellav> i mean neutron-*
<zioproto> something not openstack specific. The Vagrant image ubuntu/xenial64 has this bug open since long: https://bugs.launchpad.net/cloud-images/+bug/1565985 Do you guys know who to ping about this ?
<ubottu> Launchpad bug 1565985 in cloud-images "vagrant vb ubuntu/xenial64 cannot mount synced folders" [Undecided,In progress]
<Odd_Bloke> zioproto: I believe the fix to that is sitting in -proposed at the moment. :)
<zioproto> I guess a lot of Vagrant users will not move to xenial until the image is as stable as the trusty one
<zioproto> I updated my tool to build the UCA packages https://github.com/zioproto/ubuntu-cloud-archive-vagrant-vm
<zioproto> now it is aligned to the official docs
<ddellav> coreycb please review: lp:~ddellav/ubuntu/+source/neutron-fwaas
<ddellav> jamespage coreycb working on murano
<coreycb> ddellav, we sync murano from debian
<coreycb> ddellav, looking at neutron-fwaas
<ddellav> coreycb ok
<ddellav> coreycb i'll take trove then
<coreycb> ddellav, +1
<ddellav> coreycb nvm, trove b3 is already pushed just not released
<coreycb> ddellav, you  sure?
<ddellav> coreycb oh i didn't see the dev release there, nvm
<ddellav> coreycb ready for review: lp:~ddellav/ubuntu/+source/neutron-vpnaas
<coreycb> ddellav, can you move psycopg2 and pymysql to BD-Indep?
<ddellav> coreycb sure
<coreycb> ddellav, nm I'll do it, quicker that way
<ddellav> ok
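"BD-Indep" above is the `Build-Depends-Indep` field of debian/control; the requested move would look roughly like this (illustrative fragment, the surrounding dependency names are invented):

```text
# debian/control (fragment, illustrative)
Build-Depends: debhelper (>= 9),
               dh-python,
Build-Depends-Indep: python-psycopg2,
                     python-pymysql,
```

Packages listed in `Build-Depends-Indep` are only needed when building the architecture-independent binary packages, which keeps arch-specific builds lighter.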
<jamespage> zioproto, gbp does not support multi-orig tarballs
<jamespage> zioproto, so yes you have to use debuild to generate the source package
<jamespage> it's less than ideal - apologies
<zioproto> ok great ! at least we have a specific reason for that !
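Collected from the exchange above, the horizon source-build recipe looks like this (commands and version taken from the log; untested sketch):

```text
# 1. grab the packaging repo and branch
debcheckout --git-track='*' horizon && cd horizon
git checkout stable/liberty

# 2. place BOTH orig tarballs in the parent directory, e.g.
#      ../horizon_8.0.1.orig.tar.gz
#      ../horizon_8.0.1.orig-xstatic.tar.gz
#    (gbp cannot handle the second, xstatic, tarball)

# 3. build the source package with debuild instead of gbp
debuild -S -us -uc

# 4. then build binaries from the resulting .dsc, e.g. with sbuild
```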
<coreycb> ddellav, neutorn-fwaas pushed/uploaded
<jamespage> coreycb, ddellav: ok glance uploaded
<jamespage> cursive published (was blocking)
<jamespage> I'll raise the MIR for that now
<coreycb> jamespage, awesome, moving right along!
<coreycb> ddellav, neutron-vpnaas pushed/uploaded
<coreycb> jamespage, ddellav: horizon's going to take a bit, I want to test it a bit since the xstatic imports/patch have changed quite a bit
<coreycb> how many times can I say bit
<jamespage> coreycb, ta - thanks for taking that one
 * jamespage least favourite
<ddellav> i think it's everyone's least favorite
<drfritznunkie> Anyone here from Ubuntu who maintains the community AWS AMIs? It looks like all instance store AMIs in us-east-1 are gone/invalid
 * ddellav thinks horizon is a great candidate for snapping
<jamespage> coreycb, could we add ubuntu-proposed to the version tracker?
<jamespage> that might be useful for spotting blockages
<coreycb> jamespage, I'll look into it, should be fairly simple
<jamespage> coreycb, ddellav: oslo's look up-to-date now - do we have some work to do on clients as well still?
<jamespage> I think we do looking at the report
<drfritznunkie> we're seeing errors when launching new instances like: Client.InvalidManifest: HTTP403 (Forbidden) for URL ubuntu-us-east-1/images/hvm-instance/ubuntu-trusty-14.04-amd64-server-20160620.manifest.xml
<coreycb> jamespage, yeah, they could use some work.  they should be fairly small bumps.  I took a pass earlier this week but they've released new versions since.
<ddellav> coreycb please review lp:~ddellav/ubuntu/+source/neutron-lbaas
<jamespage> coreycb, I'll take a run through
<coreycb> jamespage, ok thanks
<coreycb> ddellav, neutron-lbaas pushed/uploaded, thanks
<freenerd> I'm seeing the same problem as drfritznunkie in us-east-1 for instance store AMIs. All (even some over a year old) are gone.
<freenerd> 14.04 LTS that is
<jamespage> coreycb, ddellav: cinder and congress clients uploaded
<coreycb> jamespage, ddellav: working on backporting some things to the uca in the background here
<jamespage> ddellav, coreycb: python-ironic* uploaded
<jamespage> fwiw I'm prepping them as uploads for experimental and pushing to git, then versioning for ubuntu and direct uploading
<jamespage> I'm not pushing the ubuntu versions to git
<coreycb> jamespage, ok that's what I've done as well
<jamespage> coreycb, ok
<coreycb> jamespage, ddellav: I'm working on os-brick
<jamespage> coreycb, doing clients still
<jamespage> coreycb, ddellav: python-keystone* done
<jamespage> auth and middleware
<jamespage> doing magnum now
<zioproto> Odd_Bloke: I found as a workaround to install vagrant-vbguest plugin
<coreycb> jamespage, ddellav: os-brick uploaded.  looking at oslo.db/messaging.
<jamespage> coreycb, ddellav: magnumclient done
<jamespage> manilaclient next
<coreycb> jamespage, oh yay no more merging of oslo.messaging, thanks
<jamespage> coreycb, \o/
<jamespage> yeah that should help a bit
<jamespage> coreycb, you might want to hold on oslo.db - .1 broke gnocchi
<coreycb> jamespage, ah ok will do
<coreycb> jamespage, I'll just push it
<jamespage> coreycb, ddellav: manila and murano client libs done
 * jamespage looks for next
<jamespage> neutron and nova clients on my list next
<dlloyd> anyone else seeing 403s with ubuntu amis from cloud-images.ubuntu.com in us-east-1?
<Odd_Bloke> dlloyd: Yep, it's a known problem; we're working to fix it.
<dlloyd> ok cool, thanks
<Odd_Bloke> dlloyd: It should only affect instance-store AMIs, so you could switch to using EBS whilst we sort it out. :)
<dlloyd> hah, fair
<dlloyd> its also on the mile long todo list to vendor our own ;)
<dlloyd> out of curiosity is this the best forum to see status about similar issues in the future?
<ddellav> coreycb can you review liberty branch of lp:~ddellav/ubuntu/+source/neutron for sru?
<drfritznunkie> the AMI outage in us-east-1 has been ongoing since 5AM EDT
<drfritznunkie> where would the best place, as dlloyd mentions, for problems like this? here?
<drfritznunkie> Odd_Bloke: any idea how soon the AMIs will be back up in us-east-1?
<jamespage> ddellav, coreycb: nova and neutron clients uploaded
<jamespage> that's the lot I think
<coreycb> jamespage, \o/
<jamespage> coreycb, ok
<coreycb> jamespage, ddellav: oslo.messaging uploaded
<ddellav> jamespage :D
<jamespage> aodhclient got missed - doing that now
<coreycb> ddellav, neutron 7.1.2 uploaded to liberty-staging
<ddellav> coreycb thanks
<Odd_Bloke> drfritznunkie: We're restoring now, it's just a matter of data transfer. :)
<drfritznunkie> Odd_Bloke: the AMI ids are not going to change, correct?
<Odd_Bloke> drfritznunkie: Correct.
<Odd_Bloke> drfritznunkie: dlloyd: You should be seeing AMIs becoming usable again now; we're doing August, then 2016, then everything.
<dlloyd> Odd_Bloke: awesome, thanks for the update
<drfritznunkie> Looks like August and June are back up Odd_Bloke, thanks for the hard work!
<drfritznunkie> Any idea what happened?
<Odd_Bloke> drfritznunkie: Yep, a job accidentally "cleaned up" all the files in our us-east-1 S3 bucket. :)
<drfritznunkie> ha BTDT, I feel your pain
<drfritznunkie> Odd_Bloke: this is pushing up our backlog for vendoring our own images, do you all have your build scripts/process posted somewhere? I can find anything that is recent
<drfritznunkie> ...can't find anything recent
<Odd_Bloke> drfritznunkie: Unfortunately not, it's a lot of Jenkins and shell scripts.
<Odd_Bloke> drfritznunkie: And our case is pretty specialised (i.e. publish the same image in _every_ region for every storage type and every virt type), so you probably wouldn't need a lot of the complexity that we have. :)
<drfritznunkie> I figured as much, but our AMI rolling scripts are 3+ years old at this point, and didn't know if we've missed the state-of-the-art in AMI rolling ;)
<coreycb> ddellav, jamespage: I uploaded a couple remaining client packages for designate,mistral,senlin,openstacksdk, and will be uploading zaqarclient, openstackclient shortly.
<jamespage> coreycb, glance-store as well?
<jamespage> tbh I'm done for today - I'll leave the rest in your and ddellav's capable hands
<coreycb> jamespage, I'll take a look, see ya
<renatosilva> anyone had problems upgrading from 14.04 to 16.04?
<RoyK> quite possibly ;)
<RoyK> renatosilva: sometimes some libs get messed up, especially if you're using 3rd party repos
<RoyK> but mostly it just works
<ducasse> is it maybe time to remove the last section of the topic?
<RoyK> ducasse: perhaps ;)
<blizzow> Urgh, running my root partition on top of an mdadm raid1 device is destroying my life. Does anyone else here experience terribly slow performance with this?
<blizzow> Running apt-get update just sends dpkg into a D state for minutes at a time.
<blizzow> All the drives are returning good info via smartctl
<blizzow> Nothing in the logs is saying anything bad.
<blizzow> But a dist upgrade that hits the linux kernel/headers packages takes more than 15-30 minutes. :(
<sarnold> ouch
<genii> Maybe your /boot is getting small
<sarnold> misaligned sectors comes to mind as a possibility but I have trouble seeing how even that could force twenty minute dist-upgrades
<compdoc> raid1 shouldn't be slow
<compdoc> you could test it by breaking the mirror and running on one disk
<sbeattie> blizzow: check your syslogs for disk errors as well, smartctl sees most things, but not always.
<JanC> if it's SATA controller / SATA communication errors, SMART might not see it, I guess
<patdk-lap> the smart log would show it
<patdk-lap> at least if the error made it to the cable
<patdk-lap> blizzow, what drive are you using, and what is its age?
<patdk-lap> a full filesystem will make it slow
<patdk-lap> bad sectors will make it slow, until they actually fail, and they can take 45min to fail
<patdk-lap> using one of these new shingle drives and a non-cow filesystem will be painful
<sarnold> oww
<patdk-lap> cow isn't so important, as much as writing data as a log is
<blizzow> sbeattie: I've looked in syslog.  The only weird thing I see is this:  dev-disk-by\x2dpartlabel-EFI\x5cx20System\x5cx20Partition.device: Dev dev-disk-by\x2dpartlabel-EFI\x5cx20System\x5cx20Partition.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:04.0/0000:02:00.0/host8/port-8:7/end_device-8:7/target8:0:7/8:0:7:0/block/sdi/sdi1 and /sys/devices/pci0000:00/0000:00:04.0/0000:02:00.0/host8/
<blizzow> patdk-lap: brand new SAS 7200 RPM seagate 4TB drives.
<sarnold> standard systemdism, those happen everywhere :(
<patdk-lap> that looks fine
<patdk-lap> ah, so scsi smart, that is completely different from sata smart
<patdk-lap> better and worse at the same time, the drive has smarts in it, so it doesn't show you raw info
<blizzow> Filesystems are super clean. 2.8GB of 15GB used for the root partition.  3.6MB of 511MB used for /boot/efi
<blizzow> sarnold: I did an alignment check on all the drives, they're optimal.
<sarnold> topped up on blinker fluid?
<patdk-lap> :)
<blizzow> The pokey bits are plugged into the receptacles and the electrical faeries are flowing as far as I can tell.
<sarnold> hehe
 * patdk-lap goes to change his headlight fluid
<tarpman> make sure the switch is set to MORE MAGIC, etc
#ubuntu-server 2016-09-03
<renatosilva> please add git-bzr-ng to 16.04 server, it's been lost since upgrade from 14.04
<renatosilva> source package seems there but no binary in repo
<renatosilva> https://launchpad.net/ubuntu/xenial/+source/git-bzr-ng
<mundus2018> does a firewall limit outgoing connections at all
<JanC> mundus2018: that totally depends on how you configure it
<RoyK> renatosilva: heh - what did you put in that?
<iliv> my vixie cron (installed by default on xenial lts) fails to send cron job output, saying: (CRON) info (No MTA installed, discarding output). I use msmtp and other tools like smartd find it and can use it. what else do I need to configure so that vixie cron starts using msmtp as well?
<yaboo> hey guys is there a issue with ubuntu server networking
<yaboo> just did a dist-upgrade and when I reboot it complains about not being able to bring up the ethernet ports
<yaboo> even so it boots, but drops me into maintenance mode
<yaboo> even Ctrl-D does nothing
<renatosilva> RoyK: what do you mean
<RoyK> reveredge: a firewall can restrict *anything* - it just depends how you configure it
#ubuntu-server 2016-09-04
<KevinSisk> How do I install this package?: http://packages.ubuntu.com/trusty/mediawiki
<KevinSisk> I do apt-get install mediawiki
<KevinSisk> then it says "package has no installation candidate"
<OerHeks> Not available for Xenial 16.04 ?? odd
<RoyK> KevinSisk: https://bugs.launchpad.net/ubuntu/+source/mediawiki/+bug/1527657
<ubottu> Launchpad bug 1527657 in mediawiki (Ubuntu) "Drop mediawiki from Xenial" [Undecided,Fix released]
<RoyK> no idea why - it should be pretty easy to install from the sources, though
<KevinSisk> Yea, i ended up just using FTP
<KevinSisk> quit
<KevinSisk> crap
<cncr04s> How do I set up my own repo? I write software for my business, and I run ubuntu, naturally. When I need to update it, I rename the current executable, scp in the new one, then have the program restart so it loads the new one. What I want instead is to just apt-get upgrade and have it grab the new program I compiled and added to my repo, doing that automatically with the scripting system
<cncr04s> in place for that sort of thing.
<PryMar56> cncr04s, at least make a binary *.deb.. install debhelper and learn to use dh_make
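Building on PryMar56's suggestion: once the program is packaged as a .deb, a minimal flat repo is enough for `apt-get upgrade` to pick it up (hostname and paths here are hypothetical; untested sketch):

```text
# on the repo host, in the directory holding the .deb files:
dpkg-scanpackages --multiversion . /dev/null | gzip -9c > Packages.gz

# on each client, e.g. /etc/apt/sources.list.d/internal.list:
deb [trusted=yes] http://repo.example.internal/debs ./

# then the normal cycle does the rest:
apt-get update && apt-get upgrade
```

Note that `[trusted=yes]` skips GPG verification; signing the repo (apt-ftparchive, reprepro) is the better long-term setup.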
#ubuntu-server 2017-08-28
<cpaelzer> good morning
<lordievader> Good morning
<cpaelzer> coreycb: in the preliminary qemu 2.10 rc4 the issues you mentioned (and more) are fixed - FYI https://launchpadlibrarian.net/334565303/buildlog_ubuntu-artful-amd64.qemu_1%3A2.10~rc4+dfsg-0ubuntu1_BUILDING.txt.gz
<coreycb> cpaelzer: great, thanks
 * ddellav waves to coreycb 
<coreycb> hey ddellav!
<ddellav> hi
<cpaelzer> coreycb: beisner: hey - on pike in particular, do you usually pick src:ipxe for qemu-ipxe from the latest release (artful atm)?
<coreycb> cpaelzer: i have no idea, i imagine that would likely be a nova default
<coreycb> cpaelzer: there's not a single hit for qemu-ipxe in nova source code if that helps
<cpaelzer> coreycb: well I'll check the ppa - is the pike proposed public?
<coreycb> cpaelzer: the only thing we modify in the qemu package is libseccomp-dev version in d/control. the pike-staging ppa is public if you want to take a look there.
<cpaelzer> coreycb: it really is a different src, not qemu nor any else but ipxe
<coreycb> cpaelzer: ah, i see
<cpaelzer> coreycb: checked - you do not have it in cloud archive
<coreycb> cpaelzer: ack
<cpaelzer> well that makes it even worse :-/
<nacc> ahasenack: if you want to test: http://paste.ubuntu.com/25418476/ dtrt here
<nacc> ahasenack: for that bug
<nacc> ahasenack: your simple fix, not the more complicated one to not manipulate the current repository at all
<nacc> cpaelzer: I believe src:libvirt should be caught up now, can you check?
<albech> looking for some information on installing a newer version of dovecot-sieve dovecot-managesieved than the ones being shipped with 16.04. Is this possible without compiling?
<nacc> albech: i don't see anything in backports, so unlikely. You could search PPAs with the normal caveats
<albech> nacc: dang, thanks tho
<nacc> albech: np
<ahasenack> weird, postinst of grub-pc, php and samba failed in the same apt install with exit code 10
<ahasenack> https://launchpadlibrarian.net/334576890/DpkgTerminalLog.txt
<Hexian> I wonder if anyone who was around when I originally went through my intermittent performance is still here
<Hexian> disabling swapping eliminated major page faults and the application performance drops that they caused, but now at random times, minor page faults are a serious issue
<Hexian> minor page faults can cause my real time processes to freeze for over 500ms straight while user CPU time is replaced with system CPU time for that process... and it seems like it is impossible to have the kernel preload the entire application and prevent minor page faults
<Hexian> I've eliminated almost all disk IO completely, there are zero disk reads and infrequent writes of a few KB at a time
<Hexian> it does not seem right that minor page faults should be able to lock up processes for 500ms+ at a time, I can't help but think that either the kernel is faulty, or the hardware is
<sarnold> buy more RAM?
<Hexian> sarnold: more than 50% of the 32GB ram is free at all times
<Hexian> these random minor page faults causing large process freezes are happening after the process has been running for almost a day, so it's not like it's a fresh process which is hitting memory for the first time
<Hexian> I don't know what to do about this, it is a really serious problem
<Hexian> any calls that block these processes for 1ms are a problem, this is causing frames that are over 800ms, which is almost entirely waiting on a minor page fault and then catching up on more than half a second of lost CPU time
<sarnold> Hexian: o_0 you had excessive swap traffic with ~16 gigs _free_??
<sarnold> Hexian: that feels worth a bug report
<Hexian> sarnold: not excessive, very infrequent, but that often caused IO wait times and caused performance drops in these real time applications, by default the kernel assumes every server has a SSD and doesn't mind causing IO wait times even with plenty free ram
<Hexian> these minor page faults are a much bigger concern though
<Hexian> I guess I'm going to have to try downgrade the kernel a couple years and see if that helps
<sarnold> Hexian: maybe try fiddling the 'swappiness' or whatever parameter a bit first?
<Hexian> swappiness is set to 0, there are zero major page faults, zero disk reads (from system or applications), infrequent tiny writes (writes that are more than a few kb get made to a ram disk)
<Hexian> disk IO is virtually non-existant
<Hexian> minor page faults are caused when data is already loaded into ram but is not yet accessible, or something like that
<Hexian> I have no idea why that is happening at all after hours of uptime, let alone a day of uptime, or why it makes my processes lock up for over half a second to do the minor page fault system calls
<Hexian> this happens at random times, with random severity, even outside of peak times with 30% total system CPU usage
<lunaphyte> hi.  i've recently set up a new install of 17.04, and am having symptoms where it hangs at shutdown, complaining that it failed to unmount a number of filesystems.
<lunaphyte> how can i troubleshoot this further?
<ahasenack> do you have anything special in your mount points?
<ahasenack> today I was just looking at a bug where the user had lvm over mdadm raid1 and quotas enabled
<ahasenack> and he said something about errors during shutdown
<lunaphyte> i don't believe i do, but let me double check
<lunaphyte> it fails to  unmount /var/log/, /var/, /tmp/, and /home/
<lunaphyte> here's a little bit of info: http://dpaste.com/0AMSEF9.txt
<ahasenack> ah, btrfs
<lunaphyte> sorry the fstab spacing is all janky
<ahasenack> that qualifies as "special" :)
<ahasenack> and lvm
<lunaphyte> oh, ok :)
<lunaphyte> i guess so
<lunaphyte> yeah, lvm
<ahasenack> and two vgs
<lunaphyte> yeah
<ahasenack> did it work in another ubuntu release?
<lunaphyte> here's one more bit of info if it might be helpful: http://dpaste.com/0NRC48C
<lunaphyte> well, it's possible i'm crazy, but iirc, it was working right earlier with this install
<lunaphyte> i haven't used this setup in a previous release though
<ahasenack> I suggest a bug report then, with the info you provided
<ahasenack> I don't know how to debug that in systemd
<ahasenack> and "btrfs fi sh" probably too
<lunaphyte> i did :) - https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1709384
<ubottu> Launchpad bug 1709384 in systemd (Ubuntu) "failed to unmount filesystems during shutdown, reboot hangs" [Undecided,New]
<fatdrago_> can ssh public keys be all numbers?
<nacc> fatdrago_: well, i think the key itself is all numbers, but the base64 encoding you see in, e.g., ~/.ssh/id_rsa.pub is more than just the key, it has a header indicating that it is a key and what kind of key and is base64-encoded: https://tools.ietf.org/html/rfc4253#section-6.6
<nacc> fatdrago_: or by 'numbers' did you actually mean the text in the public key file?
<fatdrago_> I asked a user to generate a pair of keys and she sent me a bunch of numbers. it doesn't look like rsa format to me.. with all hex mixed in!
<fatdrago_> it does say rsa-key-20170828, but it might be a different system or something.. all numbers!
<sdeziel> fatdrago_: could be puttygen's format
<sdeziel> the standard format which isn't OpenSSH
<sdeziel> `ssh-keygen -i -f pub` should convert it to OpenSSH format
<fatdrago_> I send her three links instructions how to generate in windows/putty and mac and linux.. let's see if she comes through and can work through the problem and convert the putty one!
<nacc> fatdrago_: ah ok
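nacc's description of the blob layout (RFC 4253 §6.6: the base64 field decodes to length-prefixed strings, the first being the key type) can be poked at from the shell; `ssh_pub_algo` is a made-up helper name, and it assumes coreutils `base64`/`od`:

```shell
# Print the algorithm name embedded in an OpenSSH one-line public key.
# Sketch only: assumes a well-formed "ssh-rsa AAAA... comment" file.
ssh_pub_algo() {
    # $1 = path to a .pub file; the second field is the base64 blob
    blob=$(awk '{print $2}' "$1")
    # first 4 decoded bytes are a big-endian length prefix
    len=$(printf '%s' "$blob" | base64 -d | od -An -N4 -tu1 |
          awk '{print $1*16777216 + $2*65536 + $3*256 + $4}')
    # the next $len bytes are the algorithm name, e.g. "ssh-rsa"
    printf '%s' "$blob" | base64 -d | tail -c +5 | head -c "$len"
}
```

If the pasted key instead has `---- BEGIN SSH2 PUBLIC KEY ----` lines (PuTTY / RFC 4716 style), `ssh-keygen -i -f file` converts it first, as sdeziel notes below.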
#ubuntu-server 2017-08-29
<nacc> rbasak: fyi, i'm starting the refactor for main/cli_main. build was really easy, so was clone. importer is going to be a bit ugly, because of how much state we have, but i should have it done tmrw AM. Hopefully the rest by the EOD tmrw.
<cpaelzer> nacc: the libvirt import is good now, I assume you removed it from the blacklist so it will stay current right?
<cpaelzer> and good morning everybody
<cpaelzer> nacc: s/remove from blacklist/added to whitelist/ but you know what I meant anyway
<krzyzaq> Hi All
<krzyzaq> I have an issue finding an answer on the internet for my problem - how to allow AD group users to login to xrdp on ubuntu
<RoyK> krzyzaq: I've looked around for solutions for that, and they exist, with winbind or other stuff - never setup anything like that myself, though
<RoyK> that is - only looked for ssh auth with AD, but should be the same, more or less
<RoyK> I guess it's a PAM thing after all
<krzyzaq> RoyK: yeap, I think also it should be done via /etc/pam.d/xrdp-sesman
<krzyzaq> but I didn't find any example or solution that works 100%
<RoyK> not sure - sorry - never tried myself
<krzyzaq> sure, thx
<eagles0513875> hey all
<albech> Hi all. Will pinning look for dependencies of a certain package i wish to upgrade or simply forcefully upgrade that one package?
<eagles0513875> i have a quick question - I am setting up a server with a particular user which will be SFTP access only.
<eagles0513875> i set up the user without a shell, so /bin/false in this case, and added the user to the ssh group
<eagles0513875> when I try to connect via filezilla its giving me an EOF message and its unable to connect
<eagles0513875> does the user need to have a bash shell
<eagles0513875> to be able to sftp
<RoyK> try rssh
<RoyK> it's made for just that sort of stuff
<eagles0513875> RoyK: hey i remember your name hehe. i managed it - seems like a line needs to be changed in the ssh config
<eagles0513875> now the next issue I need to figure out RoyK  is how to restrict a particular user to a particular directory to upload files to
<RoyK> eagles0513875: that requires chrooting, which is a longer story
<RoyK> seems it's easier now than what I remember https://unix.stackexchange.com/questions/9853/restricting-an-ssh-scp-sftp-user-to-a-directory
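The chroot setup from that link boils down to an sshd_config fragment along these lines (sketch; the user name `uploader` and the paths are placeholders):

```text
# /etc/ssh/sshd_config (fragment) -- illustrative, adapt before use
Subsystem sftp internal-sftp

Match User uploader
    # chroot target must be root-owned and not writable by the user
    ChrootDirectory /srv/sftp/uploader
    # start the session in a subdirectory the user CAN write to
    ForceCommand internal-sftp -d /upload
    AllowTcpForwarding no
    X11Forwarding no
```

With `internal-sftp` the user needs no login shell at all, which matches the /bin/false setup described above.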
<gQuigs> I know it was just released, but any plans to bump neutron ocata to 10.0.3?
<coreycb> jamespage: do you know the history behind running db sync commands in our packages when there's no connection string?
<coreycb> jamespage: for example: https://git.launchpad.net/~ubuntu-server-dev/ubuntu/+source/nova/tree/debian/nova-common.postinst#n50
<coreycb> i'd like to limit that check to the 2nd sqlite check ^
<jamespage> coreycb: gah we duped on https://bugs.launchpad.net/ubuntu/+source/python-pyperclip/+bug/1713617
<ubottu> Launchpad bug 1713617 in xclip (Ubuntu) "[MIR] python-pyperclip, xclip" [High,New]
<coreycb> jamespage: ah shoot, i should've pinged you
<jamespage> coreycb: no worries - xclip looks rusty fwiw xsel might be better
<jamespage> coreycb: re running db syncs; I'm all up for disabling those completely in the packaging - they create nothing but woes
<jamespage> I'd hoped to have spent time on those earlier in the cycle
<jamespage> but tbh I think we could drop them now anyway
<jamespage> coreycb: what do you think?
<jamespage> coreycb: the maintainer script execution should be limited to the default sqlite based connection value only
<coreycb> jamespage: i think that's a good idea. i can take a pass today.
<jamespage> coreycb: pls
<jamespage> coreycb: I've switched the branch builds to use stable/pike branches where possible....
<coreycb> jamespage: there's a lot more in that postinst script that is general setup (non-sqlite) so i think i'll just drop the db migrations
<coreycb> jamespage: ok thanks
<jamespage> coreycb: yeah - just drop the sync calls would be my recommendation
<coreycb> ok
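The guard being discussed might end up looking something like this in the maintainer script (pure sketch; `$db_connection` and the `su` line are invented stand-ins for what the real postinst does):

```text
# debian/nova-common.postinst (sketch of the discussed change)
# only run the automatic schema sync against the default sqlite
# connection; real deployments run `nova-manage db sync` themselves
case "$db_connection" in
    sqlite:*)
        su -s /bin/sh -c 'nova-manage db sync' nova || true
        ;;
esac
```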
<nacc> cpaelzer: ack
<gQuigs> or if I should spend time doing a single bugfix for neutron (https://bugs.launchpad.net/neutron/+bug/1696889)
<ubottu> Launchpad bug 1696889 in neutron "external_dns_driver causes floating ip association to fail" [Undecided,Fix released]
<gQuigs> (it is included in 10.0.3)
<Epx998> todays task, make a 360 character regex readable for my lower level admins
<Epx998> guess i cannot break this up into lines
<nacc> ahasenack: as to raphael, is it actually a bug we have the file in the name we do?
<nacc> ahasenack: cpaelzer implied it was delta, without explanation?
<nacc> ahasenack: rather than adding yet-another symlink, is it better to drop the delta, and symlink to the 'correct' (Debian) location for b-c until 18.04 is out
<ahasenack> nacc: you mean reverse the symlink?
<ahasenack> nacc: do deb upgrades work "just fine" when a file becomes a symlink? No extra treatment?
<nacc> ahasenack: you'll need a maintscript
<ahasenack> I smelled something like that :)
<nacc> I *think*
<gunix> guys
<gunix> is mysql_secure_installation required after a fresh install?
<smoser> nacc,
<smoser> git fetch --all
<smoser> git pull
<smoser> the second is redundant
<smoser> is there a simple way to do that ?
<nacc> smoser: to do which? :)
<nacc> smoser: git-fetch doesn't manipulate the branch pointed to by HEAD
<nacc> smoser: git-pull does
<smoser> right
<smoser> git reset --hard <branch that this one tracked>
<smoser> or git merge <branch that this branch tracked>
<nacc> smoser: right, the notion would be the latter
<nacc> smoser: because pull will only pull if it ffs
<nacc> *FFs
<nacc> or maybe it was, I'm trying to recall
<nacc> there is pull --no-ff and pull --ff-only
<nacc> smoser: so you want to do that easily, without using pull?
<smoser> nacc, well, i'll go into a git dir and type:
<smoser>  git fetch --all
<smoser> which makes sense to me.
<smoser> but say my 'master' is tracking origin/master
<smoser> to update that  i have to 'git pull'
<nacc> smoser: ah right
<smoser> which goes and does (i think) git fetch origin
<nacc> smoser: so if you "know" it was origin/master
<nacc> smoser: you should be able to do a `git merge origin/master` after `git fetch --all` from master
<nacc> and it will FF master to origin/master
<smoser> right. i just was wondering if there was anything that would fast forward (only) all my branches
<smoser> i can script something for sure, but it seems like it is a common use case.
<nacc> smoser: sorry, ENOPARSE -- you want to ff all possible branches that were identical to remote-branches when the remote-branches move?
<smoser> especially for the case of 'master' tracking 'origin/master'
<nacc> smoser: is that right?
<smoser> often i use 'git fetch --all'. that takes quite a while as i have lots of remotes.
<nacc> yep
<smoser> after doing so, i then basically want 'master' branch to move to 'origin/master'
<smoser> unless i've got changes locally.
<smoser> that can be done with:
<nacc> but only master?
<smoser>   git checkout master
<smoser>  git pull
<nacc> yep
<smoser> but git pull is a network operation
<smoser> which is again slow.
<nacc> and it's already done in theory because you just did fetch?
<smoser> yeah. the network portion is already done.
<smoser> so i want to fetch all the data
<smoser> and then any branch locally that is tracking a remote branch, i'd like to fast forward it
<smoser> (fast forward only, in case i have local changes)
<nacc> (to be clear `git pull = git fetch; git merge FETCH_HEAD`)
<nacc> smoser: ok, but why do you have a local branch that's identical to a remote-tracking branch?
<nacc> smoser: just checkout the remote-tracking branch and be in a detached HEAD state
<nacc> when you do need it
<smoser> well, yeah. thats an option. i have master generally track origin/master. as then i checkout the thing, do a merge and push
<smoser> git checkout master; git merge <something>; git push origin HEAD
<smoser> but yeah, you're right. if i have tracking only branches that is kind of pointless.
<nacc> yeah, I tend to only use local branches only when they differ from remote branches
<nacc> and then delete them when they don't :)
<nacc> but yeah, in general, I don't think there is a 'fetch and FF all my local branches that used to be identical to remote-tracking branches which have moved due to the fetch'
<nacc> I am thinking you might be able to get `git pull` to do it on a per-remote basis (it uses remote.<repository>.fetch), but i'm not 100%
<nacc> smoser: i think it points to what i said just now, though, the idea of local branches is they store state that's not present somewhere else. For branches from the remote, they are stored in the remote-tracking branches. There's not really any benefit to having a local branch that is identical to a remote branch (IMO). At least, once you get used to not having it :)
<smoser> right. i guess its really just 'master' that typically is set up to track a remote and generally expected not to
<smoser> er... generally expected not to differ
<nacc> yeah
<nacc> and that's why people use `git pull` for master
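The workflow smoser is after — one `git fetch --all`, then fast-forwarding every local branch that tracks a remote branch — can be sketched as a small shell function. The name `ff_all` is ours; this is not a built-in git command.

```shell
# Fetch once, then fast-forward every local branch whose upstream has
# moved, leaving diverged branches (local changes) untouched.
ff_all() {
    git fetch --all --quiet
    local current branch upstream
    current=$(git symbolic-ref --short -q HEAD)
    for branch in $(git for-each-ref --format='%(refname:short)' refs/heads/); do
        # Branches without a configured upstream are left alone.
        upstream=$(git rev-parse --abbrev-ref "$branch@{upstream}" 2>/dev/null) || continue
        if [ "$branch" = "$current" ]; then
            # Checked-out branch: --ff-only refuses to touch diverged history.
            git merge --ff-only --quiet "$upstream" || true
        elif git merge-base --is-ancestor "$branch" "$upstream"; then
            # Not checked out and strictly behind: just move the ref.
            git branch -f "$branch" "$upstream"
        fi
    done
}
```

No network traffic happens after the single fetch; the per-branch updates are purely local ref moves, which is what makes this faster than `git pull` in each branch.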
<nacc> rbasak: i recalled why we have move now -- and i'm working on replacing it anyways. But it has to do with when we need to make changes to the pristine-tar contents (e.g. the importer). In that case, it's not sufficient to just start a new branch named pristine-tar at the same spot, we need to change the distribution's pristine-tar branch's contents
<nacc> rbasak: basically, the difference between using a 'local' distribution pristine-tar branch and a remote-tracking one
<nacc> rbasak: http://paste.ubuntu.com/25428010/
#ubuntu-server 2017-08-30
<nacc> rbasak: there's something off with the snap, but i'm not sure what yet, will debug in the AM. since it's classic, it's workaroundable by installing the deps
<madLyfe> if i did a sudo command and it wants me to type in my password but i want to cancel the command, is that possible?
<madLyfe> ctrl-C
<Epx998> madLyfe: indeed
<lunaphyte> ahasenack: i've [mostly] figured out my issue with systemd and filesystems not unmounting/shutdown hanging
<lunaphyte> coincidentally, i ended up solving another problem at the same time.  in fact, the reason i figured out the answer to the reboot problem was because i was troubleshooting the other problem, and i had an aha moment, realizing they were related
<cpaelzer> good morning
<albech> how do I dig a DMARC record? It doesn't show with 'dig domain TXT'
<andol> albech: _dmarc.example.com
<albech> andol: it doesnt show, so i am guessing its not a valid string i have entered at the DNS
<RoyK> dig txt _dmarc.blah.com
<RoyK> works for me (tm)
<albech> RoyK: yeah nothing shows and I entered it 10+ hours ago.
<RoyK> albech: try querying the server directly, dig @servernameoraddress
<albech> RoyK: did that too
<RoyK> which domain is this? I can check from here
<albech> ipat.dk
<RoyK> looks fine
<albech> hmm
<RoyK> erm - no
<RoyK> wait
<albech> its the spf you see
<RoyK> this is me dig txt _dmarc.karlsbakk.net
<RoyK> what sort of dns server?
<albech> RoyK: no idea.. its some hosting company
<RoyK> gratisdns.dk...
<albech> well i'm seeing that my dmarc string ends with a ; and yours doesn't..
<andol> albech: Hmm, according to https://web.gratisdns.dk/domaener/dns/ it looks like gratisdns.dk pushes updates in batches, five times per day. If you did your change 10 hours ago, perhaps they are simply running a bit behind?
<andol> albech: If nothing else you could always make another change, and see if that at some point later results in a newer SOA serial.
<albech> andol: possibly
<albech> andol: good idea.. will do that as well, just to verify when the push has run
<andol> If nothing else the current SOA serial *suggests* that nothing has been pushed the last 10 hours
<andol> 2017082102
<albech> andol: could be why its not showing.. didnt even think of that
<andol> Might also be that it didn't push/reload due to a syntax error in the dmarc record. On the other hand, ideally the web-ui ought to have complained about that to you.
<albech> v=DMARC1; rua=mailto:postmaster@nixmail.net; ruf=mailto:postmaster@nixmail.net; p=none; sp=none
<andol> albech: Strictly you should have quotes surrounding that, but it may be that the web-ui taxes care of that for you. Your spf txt record does have those quotes, so unless you had to enter them then, that is probably not the issue here.
<andol> s/taxes/takes/
<albech> andol: that is taken care of by the webui
<albech> i checked that myself
<andol> Well, unless you are in a hurry the easiest way forward is probably to remove the potentially problematic _dmarc TXT record, create some other simple/safe test entry, and see if that propagates during the day.
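The naming convention andol points to — DMARC records live in a TXT record at the `_dmarc` label under the domain — can be captured in a line of shell. The helper name `dmarc_query_name` is ours, and `ns1.example.com` in the usage comment is a placeholder authoritative server.

```shell
# Build the DNS name that holds a domain's DMARC policy record.
dmarc_query_name() {
    printf '_dmarc.%s\n' "$1"
}
# Usage, pinning an authoritative server to bypass stale caches
# (ns1.example.com is a placeholder):
#   dig +short TXT "$(dmarc_query_name ipat.dk)" @ns1.example.com
```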
<cpaelzer> rbasak: you and nacc just "did" the git ubuntu upload tagging - I want to update the wiki and then do my first one
<cpaelzer> rbasak: if you could take a look if you think http://pad.ubuntu.com/XD5ESDSBne is right that would be great
<rbasak> cpaelzer: yeah that's right. So before upload, git ubuntu tag --upload, push that, and then dput as normal.
<cpaelzer> rbasak: thanks for the confirm
<cpaelzer> rbasak: could you re-import virt-manager or teach me how to do it?
<cpaelzer> it has the upload tag from the merge two weeks ago but not imported since then
<cpaelzer> I might be able to work on the base of the upload tag though
<cpaelzer> that would give us time to check why it wasn't reimported automatically
<rbasak> cpaelzer: "mkdir -p /tmp/import; git ubuntu import -v -d /tmp/import/virt-manager virt-manager" is what I use.
<cpaelzer> maybe it isn't in the default set that is synced
<cpaelzer> rbasak: ok let me run that and check if it stumbles somewhere
<rbasak> cpaelzer: default set: https://git.launchpad.net/usd-importer/tree/gitubuntu/import-cron-packages.txt
<cpaelzer> rbasak: the import is failing me on the pristine-tar of 1.4.0
<cpaelzer> rbasak: do you mind running the same import and checking if that is a local issue to me?
<rbasak> Trying
<rbasak> 08/30/2017 13:16:54 - ERROR:stderr: pristine-tar: Unknown subcommand "verify"
<rbasak> 08/30/2017 13:16:54 - ERROR:stderr: pristine-tar: Unknown subcommand "verify"
<rbasak> 08/30/2017 13:16:54 - ERROR:stderr: pristine-tar: Unknown subcommand "verify"
<rbasak> Aargh.
<rbasak> Anyway, I think I'd hit that already - my pristine-tar is too old now.
<rbasak> (on Xenial)
<cpaelzer> IIRC nacc meant some things only work on zesty
<cpaelzer> yep artfuls pristine tar has the verify command
<cpaelzer> but I thought the snap would encapsulate such things
<rbasak> I'm not using the snap.
<cpaelzer> maybe it only does so with the git ubuntu libs but not external executables
<cpaelzer> hmm, maybe I currently don't either
<cpaelzer> let me check
<rbasak> It's kinda hard to develop with the snap directly :)
<cpaelzer> well I do use it and it fails more or less the same
<cpaelzer> let me run it in an artful container
<cpaelzer> rbasak: well maybe the snap does encapsulate it
<cpaelzer> rbasak: I don't have it failing for missing "verify"
<cpaelzer> rbasak: the actual verification fails for me
<cpaelzer> rbasak: nacc: http://paste.ubuntu.com/25431785/
<rbasak> cpaelzer: there is a known pristine-tar bug
<rbasak> 08/30/2017 14:12:29 - ERROR:stderr: fatal: ambiguous argument '6c61fcfe2c55a9269c94b6ec2fbeac09759bf80b^{tree}': unknown revision or path not in the working tree.
<cpaelzer> yep
<rbasak> That seems like a bug in git ubuntu though
<rbasak> nacc: ^
<cpaelzer> I'll work on the upload tag for now and let nacc take a look later then
<jamespage> coreycb: doing the ceph l release right now plus some associate ceph-deploy updates
<coreycb> jamespage: awesome
<coreycb> jamespage: looks like qemu backport is fixed up
<cpaelzer> coreycb: which one?
<coreycb> cpaelzer: the build failure from rc3 was preventing it from backporting
<cpaelzer> yeah I handled that (I thought) - so -rc4 is building for you?
<coreycb> cpaelzer: yep
<cpaelzer> in artful it is still on dep8
<cpaelzer> coreycb: nice, good to get that confirmed
<coreycb> cpaelzer: thanks
<cpaelzer> yw++
<coreycb> jamespage: i'm starting to work through final uploads of pike packages
<jamespage> coreycb: do you want to split the list, do it in half the time?
<jamespage> coreycb: I need to help admcleod with something and then I can be all yours...
<coreycb> jamespage: sure, i've started from the top of the alphabet if you want to start from the bottom
<zul> jamespage/coreycb: come on chop chop ;)
<coreycb> hey zul :)
<jamespage> coreycb: okies
<jamespage> coreycb: zaqar it is then
<jamespage> coreycb: zaqar uploaded - I'm using lescina branch builds to de-risk build failures btw (not blind uploading)
<jamespage> coreycb: watcher next
<jamespage> no-op
<jamespage> moving on
<coreycb> jamespage: ok yep
<jamespage> coreycb: swift next
<jamespage> coreycb: I feel like mr no-op today
<jamespage> swift - no-op
<coreycb> jamespage: :)
<jamespage> coreycb: moving onto openstack-trove
<coreycb> jamespage: ok i'm almost done through the murano's. i'll stop at networking-bgpvpn.
<jamespage> coreycb: I'll pick sahara next
<coreycb> jamespage: oh btw, i dropped the db syncs from the postinstall scripts where it made sense. basically any package that wasn't inherited from zigo.
<coreycb> jamespage: that does affect some services in that they won't start without user running the db sync commands. so i had to update some of the autopkgtests to run the cmds.
<jamespage> coreycb: we should sync with wendar on that front - she's decommissioning all of the upstream git repos and working on moving things back to debian infra
<jamespage> coreycb: ack - I think that's OK
<jamespage> coreycb: I'd be tempted to go further and not auto-start services as well - but maybe that's too late this cycle :-)
<jamespage> coreycb: build-testing sahara - no branch build to look at
<coreycb> jamespage: well, i don't think some will start without a configured db
<jamespage> coreycb: I think you're right
<jamespage> coreycb: that was some of the original rationale for defaulting to sqlite, but it's really quite useless and does not help anyone IMHO
<coreycb> jamespage: makes sense, and I agree. there are docs that describe how to set up openstack if you want to do it manually.
<jamespage> coreycb: its the approach we took with ceph - nothing starts automatically on install - ditto on restarts for package updates
<coreycb> jamespage: ok
<jamespage> coreycb: plus there are some top level systemd targets for restarting all ceph daemons on a machine
<jamespage> that might be neat - openstack.target for example
<jamespage> as part of a common package
<jamespage> coreycb: ok sahara done
 * jamespage looks for next target
<jamespage> coreycb: nova?
<coreycb> jamespage: go for it
<jamespage> actioning nova...
<jamespage> coreycb: nova done
<jamespage> coreycb: neutron next?
<coreycb> jamespage: yep go for it, i'm on networking-bagpipe/bgpvpn
<jamespage> ack
<coreycb> jamespage: ok those are uploaded. i'll get networking-odl/ovn
<jamespage> coreycb: picking the neutron-* ones next
<jamespage> coreycb: ok neutron-* done
<coreycb> jamespage: great, one left. want to get networking-sfc and we're done?
<jamespage> coreycb: neutron-vpnaas is dead right?
<coreycb> jamespage: i think so. let me check the repo though.
<jamespage> coreycb: hmm they did push an rc1
<coreycb> jamespage: yeah looks like it
<jamespage> coreycb: ok done
<jamespage> coreycb: lemme get networking-sfc unless you already did?
<coreycb> jamespage: go for it
<jamespage> coreycb: doing so now
<jamespage> coreycb: done
<coreycb> jamespage: great \o/
<coreycb> jamespage: anything else to do?
<coreycb> jamespage: could use a new pylxd i think
<jamespage> coreycb: yes and I also need to cut the nova-lxd release tag upstream
<jamespage> coreycb: but that's looking good as well
<jamespage> https://review.openstack.org/#/c/499140
<coreycb> jamespage: great, i assume you'll upload nova-lxd and pylxd together?
<jamespage> coreycb: I will
<coreycb> jamespage: ok
<nacc> cpaelzer: can you do the import with --no-fetch first
<nacc> cpaelzer: i'm 95% sure that it's because of us changing how imports work
<nacc> cpaelzer: although you are indicating a future bug is going to be hit
<nacc> cpaelzer: pristine-tar can't be used directly, because of component tarballs
<cpaelzer> nacc: the --no-fetch fails at pristine-tar verify still
<cpaelzer> nacc: just at another step of it I think
 * cpaelzer comparing logs
<cpaelzer> nacc: http://paste.ubuntu.com/25432445/
<cpaelzer> nacc: that is the error now
<nacc> cpaelzer: that's using the snap?
<cpaelzer> nacc: my case is snap, rbasak's was from git
<nacc> cpaelzer: right, from git is probably basically broken in principle, we now need new gbp versions, new pristine-tar functionality. Hence the snap
<nacc> cpaelzer: i'm setting up a fresh VM to test the snap, one moment
<rbasak> Yeah mine is known broken for a different reason.
<cpaelzer> nacc: I'm ok without you hurrying
<nacc> i think there's something more fundamental with the snap
<nacc> something wonky with PATH and possibly PYTHONPATH
<cpaelzer> nacc: let me know by tomorrow morning if the import worked and is there or not
<nacc> cpaelzer: ack
<cpaelzer> nacc: then I'll upload "classic" or do a new MP
<cpaelzer> nacc: that gives you time to do it right
<drab> hi, anybody knows what's going on with python and openssl?
<drab> I was running pip and got an error: AttributeError: 'module' object has no attribute 'SSL_ST_INIT'
<drab> googling that showed a whole bunch of reports all over the place about incompatibilities between python and older ssl lib versions
<drab> which happens to be what's default in ubuntu xenial
<drab> I'm not sure why I'm seeing this problem now and didn't see it a while back, some of the reports are from dec 2016
<genii> Have you tried sudo pip install -U pyopenssl
<powersj> preseed question: trying to install a package from universe during a preseed install. It appears that "d-i pkgsel/include" cannot find it.
<powersj> I then tried using d-i preseed/late_command to do an apt-get update; apt-get install -y pkg; which appears to have worked.
<powersj> Is this the best way to install a package from universe?
<nacc> powersj: i think there is a wrapper for the latter 'apt-install' ?
<powersj> nacc: ah ok - I'll look into that one
<powersj> thx
<drab> genii: yeah, I get the same problem
<drab> so right now I'm basically stuck as far as I can see
<drab> I need the openssl working to fetch the packages so I can't update openssl
<drab> powersj: d-i pkgsel/include string package-name1 pkgN
<drab> however ime it doesn't work despite being in the docs
<drab> so I do all my touch-ups, including installing some packages, in a post-inst script
<powersj> drab: I used that to install packages, but only works with things found in main or on the CD
<powersj> for example "d-i pkgsel/include string openssh-server" works
<drab> powersj: d-i preseed/late_command string wget -O /target/var/tmp/postinst http://... ; in-target sh /var/tmp/postinst
<powersj> ah ok
<drab> oh, ok, I couldn't get it to do that reliably either for some reason
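drab's late_command approach, spelled out as a preseed fragment. The package name `mypkg` is a placeholder, and enabling universe via sed on sources.list is one possible way; installer images may already ship it enabled.

```
# Preseed sketch: install a universe package after the base system is
# laid down ("mypkg" is a placeholder).
d-i preseed/late_command string \
    in-target sed -i 's/^# *\(deb .* universe\)$/\1/' /etc/apt/sources.list ; \
    in-target apt-get update ; \
    in-target apt-get install -y mypkg
```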
<drab> weird, it works just fine on a clean ubuntu container
<drab> I wonder what's wrong with pip on my desktop...
<drab> ah, weird, had to apt remove --purge python-openssl
<genii> Strange
<drab> yeah, don't get it
<nacc> rbasak: around?
<nacc> (ideally not)
<drab> lol
<drab> anybody around happens to know if and how it's possible to get an AD compatible system to integrate a few windows desktop on the lan?
<drab> a bunch of googling seems to indicate a world of pain is ahead for anybody going that direction
<gQuigs> is there an existing bug for the next round ocata stable releases?  (like this - https://bugs.launchpad.net/ubuntu/+source/swift/+bug/1696139)
<ubottu> Launchpad bug 1696139 in neutron-fwaas (Ubuntu Zesty) "[SRU] ocata stable releases" [Undecided,New]
<gQuigs> oh, I see we likely need to get Pike released first
<genii> !info pike
<ubottu> Package pike does not exist in zesty
<gQuigs> I mean Pike release of openstack
<Epx998> so my boss gets me a new version of the intel x550 10gb nic and of course the latest ixgbe driver doesnt seem to see it
<sarnold> Epx998: doh
<Epx998> 10G X550T hmm ok why am I not seeing its interface
<Epx998> there we go
<strixUK> so, i've installed node-less, but npm list -g doesn't include the less module as it should.
<strixUK> is this a case of 'abandon hope all ye who enter here', or is there a chance of actually fixing/bringing to the attention of someone who will care to fix it?
<strixUK> IOW, should i invest the effort to find out why ubuntu's pkgs are broken, or should i just do what i have to do to fix it?
<sarnold> I know nothing about node or npm but quite often the approach of cpan / pip / etc kinds of tools is that they manage something separate from distribution-provided tools
<sarnold> so it's possible that node-less works for what it is intended for, and npm isn't expected to work with those packages -- I stress that it's -possible- that this is working as intended, I don't know..
<strixUK> yeah.  i could go via the manual/npm approach, but it seems a bit silly to provide a node-less package if it simply won't work.
<strixUK> i guess that's plausible
<strixUK> however, it's still broken from the pov of node and anything that relies on canonical node procedure
<sarnold> here's the files that the package installs http://paste.ubuntu.com/25434873/ -- does npm look in those locations for information?
<strixUK> good question.  i don't know enough about npm's internals to guess how it works.  i'm looking into that now.
<Epx998> this is weird, I created two extra files in interfaces.d, one for 10g and another for 1g (different interfaces) when I just enable the 10G, I cannot reach the server at all.  If I enable the 10G and the 1G - I can get to the server via either interface.
<Epx998> why would the 10g only be pingable when the 1g is enabled?
#ubuntu-server 2017-08-31
<krj7709> Hello everybody!
<krj7709> I need a little help with Ubuntu Server 17.04 setting Bind9 up to work with Apparmor if anyone can spare a few mins pls msg me
<sarnold> are you getting any DENIED messages in your dmesg?
<krj7709> well I want to setup bind9 to work with apparmor but I have no clue on how to tell bind which directories to use
<sarnold> just configure bind however you normally would
<nacc> cpaelzer: sigh, it took all day, but i think i finally got our snap to use pristine-tar and gbp (from upstream) only (not using the system one). I just pushed to master so it should build shortly. Meanwhile, i'm doing a virt-manager import from scratch on my VM at home now with basically nothing but git-ubuntu installed
<nacc> rbasak: i'm starting to think, even though we're a classic snap, that our snap application wrapper scripts should not use any system paths at all (right now they prepend to the system paths, if set). Thoughts?
<nacc> rbasak: our intention is we have all the deps in the snap, so that mostly allows workarounds.
<nacc> rbasak: i'd test it in my vm first, as well, just to be sure we don't happen to need some system dependency that's not correctly expressed in the snap
<nacc> rbasak: but right now, my VM doesn't even have git installed and it's working :)
<nacc> rbasak: also, perhaps i should import your ssh key(s) to the bastion, so you have a snap-based place to import from?
<cpaelzer> good morning
<cpaelzer> nacc: sad to hear that it turned out so complex
<nacc> cpaelzer: so virt-manager imported except patches-applied (in my vm)
<nacc> cpaelzer: try it yourself, if you can, and you can use --skip-applied if need be
<nacc> cpaelzer: i'll check in again tmrw
<cpaelzer> nacc: you didn't push that right?
<cpaelzer> nacc: a clone now did still point ubuntu-devel to 1.3.x
 * cpaelzer importing locally
<cpaelzer> nacc: with and without the --skip-applied I fail at the verify step with ambiguous argument
<cpaelzer> let me ensure the snap is refreshed and/or run it in a VM/Container
<cpaelzer> nacc: the latest snap in artful with and without --skip-applied is failing on the ambiguous error
<cpaelzer> trying from git now
<cpaelzer> nacc: still the same from git
<cpaelzer> nacc: but please go to bed
<cpaelzer> nacc: I can provide a valid upload tag commiting onto the last upload-tag and ignore importing "for now"
<cpaelzer> nacc: I'll file a bug thou so we can keep track
<lordievader> Good morning
<Krj7709> Hello! Can someone take a couple minutes and help me troubleshoot samba. Ubuntu Server 17.04 (base system tools, Budgie desktop, Bind9)
<lordievader> What is the problem?
<imraan> Hi I need a help on the Libvirt Snapshot mangement
<cpaelzer> imraan: any more details
<imraan> I am using the command "virsh snapshot-create-as --domain MWIF_18 bkp2_2017-08-31_14.42.15.986080 --disk-only --atomic --no-metadata" to create the snapshot. To commit I am using "virsh blockcommit MWIF_18 vda --active --pivot --shallow --verbose --timeout 120". But sometimes i am observing an error like below, what is the reason and how do i recover? Error: 1. : block copy still active: disk 'vdb' alread
<parlos> Good Morning, I've got a question wrt. Landscape (standalone) and MAAS. My aim is to use autopilot to deploy OpenStack. On my initially commissioned MAAS nodes, i only had single nics. Landscape/Autopilot complained, so I hooked up one more network, recommissioned that node. However, Landscape/Autopilot did not detect the change. So I then removed the node, and started it from scratch, and commissioned it.. MAAS detected the new network automat
<imraan> @cpaelzer I had posted my scenario
<imraan> any help is greatly appreciated
<imraan> FOR WORKAROUND: currently using "virsh blockjob MWIF_18 /mnt/alcm/packages/MWIF_100018/MWIF_18/MWIF_100018.bkp2_2017-08-31_14.42.15.986080 --abort" then "virsh blockcommit MWIF_18 vda --active --pivot --shallow --verbose --timeout 120". But this also doesn't work all the time
<imraan> @cpaelzer: Got my question?
<cpaelzer> yeah just a second
<imraan> Okies..waiting
<Odd_Bloke> parlos: Your message got cut off at "MAAS detected the new network automat".
<cpaelzer> imraan: if you have long tests/log s to share use pastebinit
<parlos> ... , but still L/A complains. It seems that L/A does not check/is not updated that the MAAS nodes have changed.. Any pointers?
<cpaelzer> and report the link
<cpaelzer> sry I meant parlos with the latter
<imraan> okies
<parlos> cpaelzer link to what?
<cpaelzer> parlos: if you have a long log/text you can do like "cat log  | pastebinit" which gives you a link
<parlos> Thnx
<cpaelzer> parlos: then on caht you can "hey I have a maas question ...link"
<cpaelzer> imraan: I assume that is on Xenial?
<parlos> Hey, I've got a maas/landscape question: https://pastebin.com/kxvD67SJ
<imraan> cpaelzer: I am using ubuntu release: 16.04 LTS
<parlos> :)
<cpaelzer> yeah ok
<cpaelzer> imraan: once in the "bad" state does it resolve over time or is that job always considered active?
<imraan> cpaelzer: it is always considered active.
<cpaelzer> imraan: is that a production environment or better "how much debug can you do"?
<cpaelzer> there were a few related simimlar issues
<cpaelzer> but so far all ended non reproducible enough to finally solve them
<imraan> cpaelzer: It is not resolved over time.... After applying workaround only i am able to recover...
<cpaelzer> I have a few (now outdated but still testable) xenial libvirt builds, obviously testing on newer Ubuntu would be great - but that depends a lot on your case (how often it triggers and how much you can do)
<cpaelzer> that is why I ask "how much" you can debug that
<imraan> cpaelzer: In my environment i have 2 servers in one server it is continously reproducible
<imraan> cpaelzer: Can u give me direction how to debug further...I am just struck not able to know where to check further
<imraan> cpaelzer: If u can give me guidance on which logs or some steps to execute i can try out and get u back the inputs needed
<cpaelzer> imraan: let me collect a few things what you could do
<imraan> okies
<imraan> let me know what all things u need
<cpaelzer> none yet, I try to repro the crap I want to recommend before leading you to bullsh§$%
<imraan> cpaelzer: Okies
<cpaelzer> imraan: lets start comparing what libvirt and qemu think on active block jobs
<cpaelzer> I assume the issue is only on vdb as you reported, please go to others as well if needed
<cpaelzer> the libvirt view would be
<cpaelzer> virsh blockjob <guestname> vdb --info
<cpaelzer> while we can "ask" qemu what it thinks with
<cpaelzer> virsh qemu-monitor-command --pretty <guestname> '{ "execute": "query-block-jobs" }'
<cpaelzer> imraan: ^^
<imraan> cpaelzer: the issue is not only with vdb it is with many other volumes also..
<imraan> i had just posted the sample
<cpaelzer> imraan: I guess it is a race of some sort
<cpaelzer> imraan: when you have the issue again, you can check with those commands
<cpaelzer> imraan: and from there one can debug/think further
<cpaelzer> imraan: if you have a test environment that you could use to recreate you could do so until you feel that you have a way to 100% recreate - once you did you could try https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/2620 and https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/2619
<imraan> okies...do u have any idea in which scenario the blocking of jobs can happen....if some operations are ongoing on the volume or that sort?
<cpaelzer> those contain (now outdated) libvirt builds around a similar issue
<cpaelzer> so you'd need to downgrade
<cpaelzer> but it makes no sense to try these if you are not convinced to have a way to go "from fresh sys to 100% sure to reproduce"
<cpaelzer> imraan: it is just a thought/theory - but on the test builds it was about some jobs not being considered jobs
<cpaelzer> imraan: so they could race against each other
<cpaelzer> imraan: but backporting the fix is very invasive, so without a way to reproduce/test that isn't going to SRU
<cpaelzer> imraan: https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_39a8dbb93caf4ec889f8a1b7f69885db/bileto-2619/2017-03-21_11:13:47/xenial_libvirt_content.diff
<cpaelzer> imraan: and OTOH the fix is available in any later release (without the uncertainty of accidentally breaking a lot of people on upgrades)
<cpaelzer> imraan: and many people use the Ubuntu Cloud Archive, so even on 16.04 they are on newer qemu/libvirt levels quite often
<cpaelzer> imraan: anyway - sorry that this is no easy "do this and you are good" case
<cpaelzer> imraan: I hope that will help to get better info next time
<cpaelzer> imraan: in case my response latency is very high, I'm soon out for lunch
<imraan> Thanks cpaelzer..
<imraan> For ur inputs
<imraan> I will further check on this have ur lunch... :)
<TafThorne> Should the fsck option from the recovery menu be failing because I have a Samba / cifs mount in my /etc/fstab file?  It seems that it tries to mount everything read-only, cannot do a DNS lookup on my cifs server, then fails.
<lordievader> Is the entry marked to be checked?
<frickler> openstack folk: what is the intended difference between xenial/pike and xenial-updates/pike in UCA? most deployers seem to use only the latter, but that is lagging behind currently. see e.g. https://github.com/openstack/kolla/blob/master/docker/base/sources.list.ubuntu#L13
<frickler> the updates column here looks pretty red, too: http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/pike_versions.html
<frickler> jamespage: coreycb: ^^
<jamespage> frickler: yes we've not release yet
<jamespage> frickler: xenial-proposed/pike has latest atm
<jamespage> frickler: cpaelzer and I are working on a libvirt hang that we really don't want to push to -updates :-)
<TafThorne> lordievader: the recovery menu seems to claim it marks everything to be checked.
<frickler> jamespage: so with the release, xenial-updates would get synced and people installing only from there will be correct again?
<jamespage> frickler: yes
<frickler> jamespage: great, thanks for clarifying
<lordievader> TafThorne: It shouldn't, I get the feeling every entry is simply marked to be checked.
<lordievader> TafThorne: Did you read the documentation of fstab?
<TafThorne> lordievader: when I added the cifs share 2 years ago I probably did. I will give it another look
<TafThorne> lordievader: my sixth field is set to 0.  fsck should be leaving the network mount alone..
<TafThorne> the / mount point is 1 so it should be getting checked first.  Maybe it only takes a couple of seconds and I am missing it?
<lordievader> Hmm, indeed. That would suggest it does not get checked.
<lordievader> Does it succeed if the entry is commented?
<TafThorne> lordievader: I'll give that a check this evening.  Rebooting my main machine is rather disruptive in office time.
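For reference, the fstab semantics being checked above, as a sketch; the device, server and share names are placeholders. The sixth field (fs_passno) is what fsck consults, so a cifs line with 0 there should never be fsck'd.

```
# /etc/fstab sketch -- sixth field (fs_passno) controls fsck ordering:
#   1 = root filesystem, checked first; 2 = checked after root; 0 = skipped
/dev/sda1           /          ext4  errors=remount-ro                 0 1
//fileserver/share  /mnt/data  cifs  credentials=/etc/smbcred,_netdev  0 0
```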
<nacc> cpaelzer: sorry, i meant to say, also use --no-fetch
<nacc> cpaelzer: i did get some failures from the cron job, i'm guessing related to what you see
<cpaelzer> nacc: ok, thanks for the info
<heistheDude> hello. i need some help installing python anaconda in my ubuntu x64
<andol> heistheDude: https://docs.continuum.io/anaconda/install/linux.html looks fairly straightforward? What issues are you running into?
<jamespage> cpaelzer: ok revised a little - but I think this works OK - http://paste.ubuntu.com/25439408/
<jamespage> cpaelzer: I just revert back to always doing the deferred processing path
<jamespage> coreycb: cpaelzer: https://launchpad.net/~james-page/+archive/ubuntu/libvirt-testing has the patched version for Xenial
<heistheDude> andol, i did this exactly still i cannot see anaconda in the application list
<coreycb> jamespage: that seems to do the trick! i made it through 20 hard reboots without a problem.
<heistheDude> how can i run installed anaconda software in terminal?
<jamespage> coreycb: basically I ripped out the in-context call that caused the problem and switched back to deferred calling which works OK
<jamespage> I think
<coreycb> jamespage: seems like a good work-around at least until upstream has something better
<jamespage> coreycb: cpaelzer: patch attached to https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1714254
<ubottu> Launchpad bug 1714254 in libvirt (Ubuntu) "libvirtd hangs after # instance creation events" [Critical,Triaged]
<jamespage> coreycb: i get a clean tempest smoke with that patched version installed
<coreycb> jamespage: excellent
<nacc> rbasak: figured out the issue, i think
<nacc> or a issue, at least :)
<nacc> rbasak: stupid typo on my part
<rbasak> nacc: ah. The hardest type of bug to debug :)
<andol> heistheDude: Well, that installation page mentions the command "conda list". Can you run that from the terminal?
<heistheDude> no andol
<heistheDude> says conda command not found
<andol> heistheDude: What about step #9?
<heistheDude> that works fine.
<heistheDude> source ~/.bashrc i mean
<heistheDude> program is getting installed fine, but i cannot find it anywhere
<nacc> heistheDude: you ran exactly those steps?
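A common cause of `conda: command not found` after an otherwise clean install is the current shell not seeing the PATH change; a hedged sketch, assuming the installer's default `~/anaconda3` prefix:

```shell
# Assumes the default install prefix ~/anaconda3 - adjust if you picked a
# different location when the installer asked for one.
export PATH="$HOME/anaconda3/bin:$PATH"
# If this prints nothing, the install likely went somewhere else:
command -v conda || echo "conda not found under $HOME/anaconda3/bin"
```

Sourcing `~/.bashrc` only helps if the installer actually appended the PATH line there; exporting it by hand as above separates the two failure modes.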
<joeyjoerobinson> Will live kernel patches feature be in the community version of Ubuntu server?  Looks like its a Canonical service 'paid' https://www.ubuntu.com/server/livepatch
<nacc> joeyjoerobinson: as mentioned on that page (perhaps in the video) you get 3 free machines
<nacc> joeyjoerobinson: and it's already available, afaik
<joeyjoerobinson> Yeah, I mean truly free. Not paid after 3
<nacc> joeyjoerobinson: no
<nacc> joeyjoerobinson: not afaik
<nacc> joeyjoerobinson: also not entirely sure what you mean by 'community version'? there is only one ubuntu server. that there is a canonical supported stack above it doesn't change that (afaics)
<joeyjoerobinson> nacc the Canonical supported version offers this as a PAID feature.  It should be obvious that I am looking to know if it will be a feature that is supported by the Ubuntu community without any requirement of payment
<nacc> joeyjoerobinson: as i said, the community has 3 free
<joeyjoerobinson> That sux and is antithetical to FOSS.
<madLyfe> tried installing server on an old laptop and it couldn't find the CD ROM. in this case it has a CD ROM but then the rest of the install goes to shit.
<madLyfe> why does it need a CD ROM? never had that issue on other server boards without a CD ROM.
<tomreyn> it needs a cdrom if you're installing from a cdrom
<Ussat> Yea....nothing should EVER charge for anything...it all should be free.......damn those developers who want to eat....
<Ussat> what a moron
<tomreyn> Ussat: ? looks like #ubuntu-offtopic to me
<Ussat> Yea well...fair buff
<Ussat> fair nuff
<madLyfe> I'm installing from a USB drive. and I'm not fully installing, I'm just trying to see if the installer recognizes my new USB Ethernet adapter
<genii> It can't boot to USB stick?
<tomreyn> madLyfe: if you're seeking help with this issue, you should describe more closely how you installed, which version and variant you installed, which error messages you ran into in which context
<madLyfe> dunno, 16.04.3? whatever the most recent version is, and used the HWE option during install
<tomreyn> the most recently released ubuntu version is 17.04
<madLyfe> sorry most recent stable
<tomreyn> that's the most recent stable release. releases are always stable.
<tomreyn> (at least that's the goal)
<madLyfe> most recent LTS. that should do it.
<tomreyn> Ubuntu Server 16.04.3 LTS then
<madLyfe> hmm changed USB ports and regular install(not HWE) and no issue this time
<madLyfe> still not seeing my USB Ethernet adapter though
<madLyfe> we
<madLyfe> wtf. it notices the laptops onboard LAN and wireless though. lol
<madLyfe> guess it just didn't like USB Ethernet adapters
<madLyfe> ya odd. the HWE install doesn't like not having a CD ROM
<madLyfe> https://usercontent.irccloud-cdn.com/file/YSnCfbj2/irccloudcapture1753526775.jpg
<madLyfe> https://usercontent.irccloud-cdn.com/file/RfWQO85y/irccloudcapture1408436233.jpg
<madLyfe> https://usercontent.irccloud-cdn.com/file/bw5g39gW/irccloudcapture1261440775.jpg
<madLyfe> also I wonder if I need to plug in an Ethernet cable into the USB Ethernet adapter for it to recognize
<ikonia> shouldn't need to
<ikonia> it should be the chipset it detects, not the link
<madLyfe> thats crazy that server doesnt find two different usb ethernet chips but finds laptop wifi chip.
<ikonia> why is that crazy
<ikonia> are the chipsets supported by linux, if so, are the supported by the kernel you're using
<madLyfe> USB and ethernet seem more along the lines of server than wireless
<ikonia> does it see the device at all at a dumb level
<ikonia> no it doesn't
<ikonia> usb is supported, ethernet is supported
<ikonia> usb ethernet is a niche thing with odd chipsets
<ikonia> is the chipset supported under linux at all, is it supported with the kernel you're using, does it see the device but not know what it is
<madLyfe> both of these adapters are recognized fine under the desktop live iso
<ikonia> so what chipset are they and what kernel module is driving them under the desktop ?
<ikonia> how do you know they are not detected under the server install
<ikonia> (apologies if I've missed your background to this)
<madLyfe> ya i can get them to run after install. i want them to be recognized and run during install. i dont want to manually set them up after.
<ikonia> how do you set them up ?
<ikonia> (what are the steps you take)
<madLyfe> just check lsusb to make sure its recognized then add it to network interfaces
<ikonia> as in the network interface file
<madLyfe> yes because it was not automatically put there.
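The manual step madLyfe describes would look roughly like this (the interface name eth1 is an assumption; check `lsusb` and `ip link` for the real one):

```shell
# Sketch of an /etc/network/interfaces stanza for a USB ethernet adapter
# (written to /tmp here for illustration; the real file is
#  /etc/network/interfaces):
cat > /tmp/interfaces.example <<'EOF'
auto eth1
iface eth1 inet dhcp
EOF
cat /tmp/interfaces.example
```

After adding the stanza to the real file, `ifup eth1` (or a reboot) brings the interface up.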
<ikonia> so if you look at the syslog when you boot the server install do you see it detect the usb devices ?
<madLyfe> how do you do that?
<ikonia> swap to the different virtual console and open the syslog in a text editor
<madLyfe> why key to swap to a different virtual console?
<ikonia> alt+F$something
<squashua> Stokachu :( no conjure-up versions --stable or --edge work for me on localhost. --edge http://paste.ubuntu.com/25440328/ And with --stable I get http://paste.ubuntu.com/25440328/
<squashua> Would any of this be caused by me having manually created a controller and have a running machine attached?
<stokachu> those are the same links
<squashua> Sorry
<squashua> --stable http://paste.ubuntu.com/25440433/ stokachu
<stokachu> squashua: shouldn't
<stokachu> squashua: can you `touch ~/.cache/conjure-up/test.db`
<squashua> Stokachu it didn't give any output
<stokachu> do you see it in `ls ~/.cache/conjure-up/`
<squashua> Yes
<stokachu> ok so must be something else
<squashua> Well then
<madLyfe> ikonia: alt-f brings up the installer menu for me
<squashua> Stokachu I stay @ a dead end with this.. It's so frustrating.
<stokachu> squashua: this is the first time you've come for help that i recall
<madLyfe> https://usercontent.irccloud-cdn.com/file/KAerBpRP/irccloudcapture401038223.jpg
<squashua> Stokachu sorry.. I can't keep my name cause I keep getting disconnected. This is Brazier. This is the same poweredge you have been on recently.
<stokachu> squashua: i see, so what happened? it was working
<squashua> & it's still open to you
<stokachu> can you pm me the ip and username again
<stokachu> squashua: ^
<braziercustoms-m> Stokachu sorry again. Not really. I never was able to launch I stance or anything. It just successfully deployed. So I refreshed today and these are the problems I'm having.
<braziercustoms-m> Instance*
<stokachu> braziercustoms-m: can you p m me the ip and username
<braziercustoms-m> Yeah
<braziercustoms-m> Can you pm me first stokachu
<braziercustoms-m> It's complicated on this phone lol
<madLyfe> https://usercontent.irccloud-cdn.com/file/KH9G61vP/irccloudcapture777044987.jpg
<brazier> Stokachu thanks for showing me my ignorance again :D. on the database is locked, what if it checked for another instance and asked what you want to do with it?
<brazier> That would have kept a dummy like me off here longer :D
<brazier> Stokachu http://paste.ubuntu.com/25440789/
<stokachu> looking
<stokachu> brazier: it's going now, i need to look into this db lock stuff more
<brazier> What happened?
<stokachu> the database was locked again
<stokachu> spells need to be updated to make sure they flush the db and close the connection
<brazier> Hmm it didn't say db locked that time, did it, Stokachu
<stokachu> yep in the logs
<brazier> It does further up the log I see now.. guess I been caught up in the last error it showed.
<brazier> Stokachu can I intern with you for a while :D
<stokachu> haha
<brazier> Stokachu I thought the novalxd was the only one you could install on a single machine? That was the one I been trying to install
<stokachu> brazier: nah they all work on a single machine
<stokachu> i just forgot that was the one we were using before
<brazier> I asked you before.. must have been a miscommunication..
<brazier> Stokachu will the whole conjure up cloud run inside of an lxd machine?
<stokachu> brazier: yea you gotta do some tweaks to the host container though
<stokachu> brazier: https://stgraber.org/2017/01/13/kubernetes-inside-lxd/
<brazier> Stokachu nice..  thanks.
<brazier> Stokachu since you chose kubernetes you think that would be the better one to try to learn/understand?
<stokachu> brazier: it is the new kid on the block
<stokachu> if it were me i would be learning kubernetes
<brazier> Stokachu sometimes I have a hard time figuring out where to begin.
<stokachu> yea, it's a fast paced industry
<brazier> Stokachu are any of the systems available capable of creating instances where the software can control specific hardware?
<brazier> On the physical machine?
<stokachu> not sure i follow what youre asking
<brazier> Stokachu Could I say create an instance that could communicate with a usb device attached to the physical machine
<brazier> Stokachu btw it deployed successful
<nacc> rbasak: so we have a case of an old/new mismatch causing an issue. ipsec-tools will fail to import for the time being, because the debian changelog object has no means to handle bad changelogs (it can either emit a warning and skip entries or make bad entries fatal). We are more lenient (in the shell parsing/dpkg-parsechangelog). Thoughts?
<nacc> rbasak: http://paste.ubuntu.com/25440934/
<madLyfe> nacc: so is there a way to get the server installer to force the r8152 driver or whatever to recognize the chip?
<nacc> madLyfe: we concluded the driver wasn't available on the iso, right?
<nacc> cpaelzer: fyi, i am running the virt-manager import again now, to see if the same error is seen or not. Worst case, I'll do the reimport this afternoon
<madLyfe> I'm not sure. I thought you guys said it was in that extra package
<madLyfe> we got it working after the install but not during.
<nacc> madLyfe: yeah, so the -extra package is probably not available/used during the install itself
<nacc> i'm not 100%
<nacc> madLyfe: the simplest ('simple') choice would be use a custom kernel and initrd
<nacc> probably just an initrd, really
<nacc> madLyfe: that does have the module in it
<nacc> madLyfe: and then install the -extra package during install time
<madLyfe> what does that mean. ELI5 please
<nacc> madLyfe: you'll need a system that is working to work on
<nacc> madLyfe: you'll uncompress the installer initrd there
<nacc> madLyfe: then you'll add files to the installer initrd
<nacc> madLyfe: recompress the initrd
<nacc> madLyfe: and remaster the iso and burn to USB (normally you'd do this for a network install, but since you don't have one)
<rbasak> nacc: if it's entirely historical, could it be an easy fix to keep a patch for it?
<rbasak> nacc: if we ignore the issue, what would be the consequence in this case?
<hehehe> https://www.hugedomains.com/domain_profile.cfm?d=sarnold&e=com
<hehehe> wtf
<hehehe> sarnold: you are for sale :D
<hehehe> 24 monthly payments of $134
<hehehe> thats cheap :)
<nacc> rbasak: not sure what you mean? in theory, any changelog that has those entries in it (the ones that appear to be being dropped) will fail to import without being patched
<rbasak> nacc: HO if you're free?
<nacc> rbasak: yep
<nacc> cpaelzer: finally getting around to implementing --reimport. I'm testing it now, I think it actually will be pretty easy to do (presuming this passes)
<nacc> rbasak: ok, so i'm still not 100% sure on what's going on, as running the importer by hand has gotten past that point. I'll debug that a bit (I do think it's the script calling to the cli, maybe). But in any case, I'm realizing another place we're not currently tracking (for eventual shell removal) is when we do fail to strictly parse a changelog -- there's no logging or any way to know we fell back to
<nacc> shell parsing. How should we handle that?
<nacc> cpaelzer: it would appear the reimport will succeed, but i'll let it finish first (it successfully imported all of debian again). So I think the failure we're hitting now is the bug in the repository construction with the old vs. new pristine-tar. I'll make sure it integrates your two upload tags and then i'll update the snap (it's a hidden flag for now)
<nacc> rbasak: yeah, so the by-hand import of ipsec-tools, from my bastion, succeeded
#ubuntu-server 2017-09-01
<nacc> rbasak: cpaelzer: i'm killing the importer cron job until i can debug this better -- i'm seeing different results from the import-cron script and the git-ubuntu invocation, which shouldn't happen
<rbasak> nacc: ack
<nacc> cpaelzer: i pushed --reimport to the snap, if you refresh it should be available and i think will work for you
<heistheDude> andol: yes. effective processing of the step isnt fruitful
<lordievader> Good morning
<TafThorne> morning
<lordievader> Hey TafThorne
<TafThorne> hello to you too lordievader
<jamespage> cpaelzer: morning - shall I send you a git review with my workaround patch for the double-locking issue in libvirt? I'd like to get this resolved AM today if possible
<cpaelzer> jamespage: hmm, not sure yet
<cpaelzer> jamespage: unfortunately upstream didn't respond at all yet
<cpaelzer> jamespage: but we can "hotfix" it now and reroll to the upstream solution later
<jamespage> cpaelzer: +1 we need to do that
<cpaelzer> jamespage: I can take the diff from the LP bug
<cpaelzer> jamespage: were there any updates to it since you attached?
<jamespage> cpaelzer: nope that's the one we tested with last night
<cpaelzer> result?
<cpaelzer> good?
<jamespage> cpaelzer: yes - commented to that effect on the bug just now
<cpaelzer> perfect I'll take over
<jamespage> cpaelzer: thanks!
<cpaelzer> review, build, test and ping you then
<cpaelzer> AM for the upload maybe, but to migrate I'm not so sure
<cpaelzer> jamespage: so you ended up not moving the dmn->autoShutdownCallingInhibit = true; that I asked about
<cpaelzer> jamespage: and only do the "hotfix" as a partial revert
<jamespage> cpaelzer: yeah basically - by switching back to the older behaviour, that call is always good
<cpaelzer> which might return what the former patch fixed but seems better than what we have for now
<cpaelzer> ack
<cpaelzer> as hotfix ok
<jamespage> cpaelzer: agreed - it needs a more complete fix if upstream wants to maintain that fast-path behaviour
<jamespage> cpaelzer: upstream fix also looks reasonable btw
<cpaelzer> jamespage: already building with it
<cpaelzer> jamespage: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/2942
<cpaelzer> jamespage: for a fast minor sniff before pushing
<cpaelzer> jamespage: I'll give you a ping when it is ready to test for you as well
<cpaelzer> jamespage: or would you need the xenial backport to verify your case?
<jamespage> cpaelzer: ideally but I can deal with that
<cpaelzer> jamespage: https://launchpad.net/ubuntu/+source/libvirt/3.6.0-1ubuntu4
<cpaelzer> jamespage: strictly speaking it is AM :-)
<cpaelzer> jamespage: yet we need to guide it through migration
<cpaelzer> jamespage: this is already the upstream fix
<cpaelzer> you even have 1 extra hour of AM left for you
<jamespage> cpaelzer: have it built - but had some image issues in our cloud so only just re-deploying...
<jamespage> cpaelzer: apologies
<cpaelzer> jamespage: all fine, let me know how  the retests on this fix behave
<jamespage> cpaelzer: tbh unless you see a regression I'd go with that patch for upload to artful
<jamespage> cpaelzer: it looks 100% reasonable and correct to me
<jamespage> smoser: I owe you a simplestreams change for a niggle I need to scratch
<jamespage> smoser: I'd like to split 'keep' from 'maxnum' in resolve_items (optionally)
<jamespage> as it's desirable to start seeding a cloud with a smaller number of image syncs, but retain synced images over time
<jamespage> it looks like keep may have done that at some point in the past, but it's a boolean toggle now and will only keep up to maxnum
<albech> anyone know of a small cmd line tool that can create sha512 crypt passwords? Dont really want to install dovecot for dovecotpw.
<TafThorne> albech: sha512?
<TafThorne> As in that is what it comes up with on my command lines.  I think it is a thing out of openssl dgst
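Two small-tool options for albech's question (hedged: `mkpasswd` ships in the `whois` package, and `openssl passwd -6` needs OpenSSL 1.1.1 or newer; note that `openssl dgst` produces a plain digest, not a crypt(3) hash):

```shell
# crypt(3) SHA-512 hashes start with $6$; fixed salt here for demo only -
# omit -salt in real use so a random salt is generated
hash=$(openssl passwd -6 -salt examplesalt 'secret')
echo "$hash"
# alternative: mkpasswd -m sha-512 'secret'   (from the whois package)
```

Either produces the `$6$salt$hash` format that dovecot and /etc/shadow expect, without pulling in dovecotpw.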
<jamespage> cpaelzer: looping a reboot --hard right now
<jamespage> looking good so far
<TafThorne> Oh I was meant to try out something for fsck now I modified /etc/fstab.  Time for me to reboot too.
<jamespage> cpaelzer: +1 on that fix - lots-of-reboots later still all good
<TafThorne> fsck finished suspiciously fast (few seconds on a 1.8 TB spinning drive) but it passed.
<jamespage> cpaelzer: I've pushed from the bileto ppa to artful - thanks for preparing that update
<jamespage> nm I see you already did that...
<jamespage> doh
<smoser> jamespage, hm... yeah. keep is supposed to be separate from maxnum.
<smoser> although originally i think it was only to be less. keep being less than maxnum.
<smoser> have to think again.
<tomreyn> how would i note down that this remains unfixed for xenial while it is fixed in yakkety? https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1568097
<ubottu> Launchpad bug 1568097 in mdadm (Ubuntu) "Error messages about mdadm --incremental failing" [High,Fix released]
<ahasenack> tomreyn: you can nominate it for xenial
<tomreyn> other bug reports have these clock (?) icons here listing xenial, yakkety and zesty as separate targets, e.g. https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1608495
<ubottu> Launchpad bug 1608495 in mdadm (Ubuntu Zesty) "IMSM fakeraid handled by mdadm: unclean mounted volumes on shutdown/reboot" [Critical,Confirmed]
<tomreyn> but i can't seem to find out how to add these nominations on https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1568097
<ubottu> Launchpad bug 1568097 in mdadm (Ubuntu) "Error messages about mdadm --incremental failing" [High,Fix released]
<ahasenack> tomreyn: there is a "nominate for series" link
<ahasenack> "Also affects project (?) Also affects distribution/package Nominate for series" under the task list
<ahasenack> or maybe you don't have it?
<tomreyn> "[X] Also affects project" and  "[X] Also affects distribution/package" is what i have
<tomreyn> using a standard ubuntu account without any special permissions
<tomreyn> ahasenack: since you have this option, could you do it?
<tomreyn> (or more precisely, could you please evaluate whether it needs to be done)
<nacc> cpaelzer: you didn't try again with --reimport, did you? (re: src:virt-manager)
<nacc> cpaelzer: if you're able to join standup early, we can talk about that
<nacc> rbasak: so i'm thinking we need to adjust something -- --no-fetch no longer implies --no-push if --no-fetch --directory is how you reuse a directory?
<nacc> cpaelzer: virt-manager reimported and you can see that your upload tags were preserved: https://git.launchpad.net/~usd-import-team/ubuntu/+source/virt-manager/log/
<nacc> cpaelzer: will check on the MPs
<nacc> cpaelzer: interesting, they got deleted -- can you resubmit? i'll adjust the code
<coreycb> beisner: hi, can you promote nova - 2:14.0.7-0ubuntu2~cloud1 to newton-proposed?
<nacc> ahasenack: can you try your samba build with the latest snap?
<ahasenack> I sure can
<ahasenack> 211?
<nacc> ahasenack: yeah, i think so (211/212 depending on arch)
<ahasenack> nacc: is build-source aborting now when the tree is dirty?
<nacc> ahasenack: yes
<ahasenack> hm
<nacc> ahasenack: because pristine-tar doesn't like that
<nacc> ahasenack: to be clear, it was before too
<ahasenack> I change the version in d/changelog to add a ~ppa1 just before a build, so I can upload to a ppa
<nacc> ahasenack: (before == yesterday)
<ahasenack> no, before it complained, but continued
<ahasenack> ok, earlier then
<nacc> ahasenack: yeah, earlier than yesterday :)
<ahasenack> nacc: ok, didn't work
<ahasenack> 09/01/2017 14:10:14 - ERROR:stderr: dpkg-buildpackage: error: fakeroot not found, either install the fakeroot
<ahasenack> nacc: full output: http://pastebin.ubuntu.com/25446047/
<nacc> ahasenack: is that branch pushed somewhere?
<ahasenack> yes, my samba repo
<nacc> ahasenack: i think we can allow for building from dirty trees, i just was being overly cautious
<nacc> ahasenack: ok, one moment
<nacc> grrrrr
<nacc> ahasenack: fakeroot is an update-alternatives postinst
<nacc> ahasenack: so i'll need to fake that in my yaml :/
<ahasenack>  /usr/bin/fakeroot-sysv
<ahasenack> interesting
<nacc> yeah
<ruben23> hi guys is it standard that apparmor si running and installed by default on any stock ubuntu server.?
<ahasenack> nacc: you don't get the symlink when you install fakeroot in the snap?
<ahasenack> ruben23: very much so, not all of the server packages have apparmor profiles, but many do
<ahasenack> nacc: the postinst doesn't run?
<ahasenack> nacc: you can probably pass -r and use fakeroot-sysv then?
<nacc> ahasenack: yes, re: symlink
<ruben23> coz i wanted to move the directory of data storage for mysql database to a newly installed separate drive. and most of the howtos i see need to configure apparmor to be successful on that
<nacc> ahasenack: yes, re: postinst (by design in stage-packages)
<nacc> ahasenack: i'm just making our snaps' fakeroot be fakeroot-sysv
<ahasenack> ruben23: you can adjust the apparmor profile for mysql, in /etc/apparmor.d/local/
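As a sketch of what ahasenack suggests (the /srv/mysql path is an example; the real override file is /etc/apparmor.d/local/usr.sbin.mysqld):

```shell
# Extra AppArmor rules granting mysqld access to a relocated datadir
# (written to /tmp here for illustration):
cat > /tmp/usr.sbin.mysqld.local <<'EOF'
/srv/mysql/ r,
/srv/mysql/** rwk,
EOF
cat /tmp/usr.sbin.mysqld.local
# after editing the real file, reload the profile:
# sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
```

The local/ override survives package upgrades, which is why it is preferred over editing the shipped profile directly.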
<nacc> ahasenack: to do what you're saying, i'd actually need to write a wrapper for dpkg-buildpackage (i'm trying to avoid that)
<ahasenack> ah, you don't call that directly, but something else? Like debuild?
<nacc> ahasenack: no, we do
<nacc> ahasenack: we call dpkg-buildpackage directly
<nacc> ahasenack: but it's wrong for us to call dpkg-buildpackage -r /usr/bin/fakeroot-sysv on a non-snap install
<nacc> ahasenack: we should use what the user provides
<ahasenack> ah, the run-from-tree case
<nacc> ahasenack: yeah
<ahasenack> fair enough
<nacc> i guess 'wrong' is too strong
<nacc> but tweaks should be in the snap, imo
<ahasenack> "unexpected"
<nacc> yeah
<madLyfe> nacc: so to do the iso installer thing i need to do it from ubuntu and not win, correct?
<nacc> madLyfe: i suppose you can do it from windows, but i have no idea how
<nacc> madLyfe: seems easier to do it from linux
<madLyfe> i found this: https://askubuntu.com/questions/409607/how-to-create-a-customized-ubuntu-server-iso not sure if it is what you were talking about.
<madLyfe> can it be desktop?
<nacc> madLyfe: to make the image? yeah
<nacc> madLyfe: you just want to use an server iso to start
<nacc> ahasenack: i've pushed the fix, but the snap builders are loaded down, it seems
<ahasenack> nacc: ok, will check periodically
<madLyfe> ok ill get setup here.
<nacc> ahasenack: in this particular case, the 'success' will be that pristine-tar will fail and then we'll failover to using launchpad (note you can also pass --no-pristine-tar to `git ubuntu build{,-source}` and it should work
<nacc> ahasenack: and the implication is samba also needs a reimport
<ahasenack> what will the cron job use?
<nacc> ahasenack: which cron job?
<ahasenack> pristine-tar for new packages?
<ahasenack> the one that imports packages
<ahasenack> or unrelated
<nacc> ahasenack: unrelated, i think
<nacc> ahasenack: or maybe i'm confused
<nacc> ahasenack: the importer was fixed to use the correct paths for imports as part of the pristine-tar changes
<ahasenack> it's just because you mentioned reimport that I wondered about the cron job, because it does imports
<nacc> ahasenack: that led to some breakage for old imports
<nacc> effectively, we changed the hashes
<nacc> ahasenack: the cron job always just imports
<nacc> ahasenack: reimport is a special mode when we need to wipe a repo
<ahasenack> tomreyn: hey, what is the scenario in that mdadm bug on xenial? Just install with raid and reboot?
<ahasenack> does it have to be raid5, or a raid with more than two disks?
<madLyfe> i can do this from desktop live usb, nacc ?
<nacc> madLyfe: i guess so, yeah
<madLyfe> do i need to download an new server iso or use the one that has already been put onto a usb key?
<nacc> madLyfe: you can use the same iso, but you're going to end up putting that on the usb key eventually
<madLyfe> ill just download a new one then if im going to overwrite it anyways
<madLyfe> hmm i wonder if 17.04 would work out of the box.
<madLyfe> is it even worth it to try that before i delve into this iso alteration?
<madLyfe> im going to try JIC
<madLyfe> this is 17.04 and lsusb from installer shows the USB Ethernet adapter (realtek) https://usercontent.irccloud-cdn.com/file/yah8cP1r/irccloudcapture2130928741.jpg
<madLyfe> but not here https://usercontent.irccloud-cdn.com/file/Vd2MF8Ry/irccloudcapture870703048.jpg
<madLyfe> during network detection
<tomreyn> ahasenack: i'm on xubuntu 16.04.3 with mainline kernel and raid 1 with dm-crypt-luks + lvm on top
<tomreyn> ...and i got those
<tomreyn> running 4.12.10-041210-generic
<tomreyn> i can try the default image if it helps - let me know
<ahasenack> tomreyn: so luks on the md device?
<ahasenack> or luks on each disk and then raid?
<cpaelzer> nacc: I can confirm that on the reimport my MPs seem still to match
<madLyfe> nacc: should i also download desktop since i know thats where the usb ethernet adapter works properly? to get the correct files from?
<tomreyn> ahasenack: ahasenack luks on top of md on top of partitions
<ahasenack> tomreyn: ok
<cpaelzer> nacc: new MPs in the trello card and on active reviews
 * cpaelzer is actually not here
<nacc> cpaelzer: thanks and sorry about that
<nacc> madLyfe: you should just need to get the -extra .deb corresponding to the installer kernel
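A sketch of matching the package to the installer kernel (4.4.0-87-generic is a made-up example version; check `uname -r` from the installer's shell for the real one):

```shell
# The -extra package name is derived from the running kernel release:
krel=4.4.0-87-generic            # example; use `uname -r` on the installer
pkg="linux-image-extra-${krel}"
echo "$pkg"
# on a networked box: apt-get download "$pkg"
# then copy the .deb to the usb key and dpkg -i it from the installer console
```

The version must match the installer's kernel exactly, otherwise the modules in the .deb will not load.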
<cpaelzer> nacc: not a problem
<cpaelzer> nacc: if there wouldn't be issues the tool would be unused
<cpaelzer> so be happy that we find some
<nacc> ahasenack: can you try your build again (snap r213/214)
 * cpaelzer leaves to EOW
<nacc> cpaelzer: yeah, i told you the wrong hidden flag (or didn't tell you) -- it's --reimport
<nacc> cpaelzer: enjoy your w/e
 * ahasenack fetches r214
<ahasenack> nacc: worked, and was very silent, only printing out this bit:
<ahasenack> 09/01/2017 15:21:08 - INFO:Successfully built using fetch_orig_from_parent_dir(source=None)
<ahasenack> that's a debugging detail?
<nacc> ahasenack: ah you already had the tarballs in the parent dir?
<ahasenack> probably, let me clean that up
<ahasenack> ok, from scratch now
<nacc> ahasenack: yeah, if you could clear those out and retry, it'd help
<ahasenack> dirty tree, cleaning up
<nacc> :)
<nacc> ahasenack: i have a fix for that bit queued (i filed a bug to track it), it needs some thinking
<ahasenack> nacc: exactly same thing, just that info line
<nacc> ahasenack: as even with a dirty-tree, we should be able to use the pristine-tar
<nacc> ahasenack: then parent_dir ?
<nacc> ahasenack: then the orig tarballs were found in the parent directory
<ahasenack> it has the source package, tarball, all the stuff
<madLyfe> well that's confusing: if i never had internet, and the installer doesn't recognize the usb ethernet adapter at install, but it can be manually added after install, it means the software is there already.
<nacc> ahasenack: right, you need to delete the tarballs from the parent dir
<nacc> ahasenack: in order to not use them
<ahasenack> I did
<ahasenack> you are lagging :)
<nacc> ok wait
<nacc> delete orig tarball from parent dir
<nacc> cd samba; git ubuntu build-source
<nacc> that should not say fetch_orig_from_parent_dir
<nacc> ahasenack: are you sure you didn't misread the message?
<ahasenack> nacc: http://pastebin.ubuntu.com/25446444/
<nacc> ahasenack: i just tested locally and pristine-tar worked
<nacc> fetch_orig_from_cache
<nacc> ahasenack: *different* message
<ahasenack> oh, would you look at that
<nacc> ahasenack: clear the cache (rm -rf .git/git-ubuntu-cache)
<ahasenack> I don't have a /home/andreas/.git/git-ubuntu-cache
<nacc> in samba
<nacc> ahasenack: in the git repo, i mean
<ahasenack> ok
<ahasenack> gone
<ahasenack> nothing in parent
<ahasenack> cache gone too
<nacc> try the build-source again (might need a `git clean -fd`)
<ahasenack> debian/files is always there to make me smile
<nacc> yeah, that needs the build changes to use lxd to work (as we need to run clean)
<ahasenack> ok, now it's ugly
<ahasenack> still running
<nacc> ahasenack: in your current local state, i expect you'll see pristine-tar try, fail, and then fetch_from_launchpad succeed
<ahasenack> nacc: correct: http://pastebin.ubuntu.com/25446453/
<nacc> ahasenack: now what's interesting is that on my bastion, it built with pristine-tar
<nacc> ahasenack: have you run `git fetch pkg`?
<ahasenack> nacc: not just now, but earlier today sure
<nacc> ahasenack: and/or, can you try with a fresh clone somewhere (e.g., in /tmp) and see if you also see it?
<ahasenack> sure
<nacc> ahasenack: it doesn't make sense for us to see differing behavior there -- I get (http://paste.ubuntu.com/25446470/)
<ahasenack> Receiving objects:  93% (74038/79039), 146.04 MiB | 2.70 MiB/s
<ahasenack> samba is big
<nacc> ahasenack: :)
<ahasenack> next time let's try base-files
<ahasenack> ok, clone complete
<ahasenack> switch to my branch, or try straight from ubuntu/devel?
<nacc> ahasenack: your branch
<ahasenack> nacc: http://pastebin.ubuntu.com/25446481/
<nacc> ahasenack: yeah
<nacc> ahasenack: so something (seems) wonky with your local repo
<nacc> ahasenack: i'm not sure why yet
<nacc> ahasenack: as the last paste shows pristine-tar working
<nacc> ahasenack: now, it doesn't really matter, but you do want to use pristine-tar (it's faster at least) if you can
<madLyfe> what about this? http://fai-project.org
<sarnold> madLyfe: a channel regular (used to?) really like it; I've never tried it myself
<madLyfe> dont think it will work if i dont have network though.
<sarnold> that's my understanding
<sarnold> if you don't have a network but still want customized automated installs then you have to figure out either debian installer preseeding (which is .. poorly documented at best) or figure out the kickstart compatability stuff; or figure out how to use a cloud image on the hardware in question with cloud-init
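For the preseeding route sarnold mentions, a minimal sketch (these are real debian-installer keys, but the values are examples; the file is typically passed on the installer's kernel command line, e.g. `file=/cdrom/preseed.cfg auto=true`):

```shell
# Tiny preseed.cfg fragment - enough to skip a few interactive prompts:
cat > /tmp/preseed.cfg <<'EOF'
d-i debian-installer/locale string en_US.UTF-8
d-i netcfg/choose_interface select auto
d-i netcfg/get_hostname string ubuntu-server
d-i passwd/username string ubuntu
EOF
wc -l < /tmp/preseed.cfg
```

A full unattended install needs many more keys (partitioning, mirror, packages), which is part of why the documentation feels sparse; the installer's own example-preseed.txt is the usual starting point.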
<madLyfe> its frustrating since i dont really know nix and trying to communicate the language.
<madLyfe> and when i see it working fine on desktop but not server.
<ahasenack> tomreyn: ok, got it on xenial
<ahasenack> just raid1
<ahasenack> tomreyn: http://pastebin.ubuntu.com/25446612/
<ahasenack> one per disk I'm guessing, or partition
<ahasenack> tomreyn: is it just an annoyance?
<ahasenack> nacc: around?
<ahasenack> nacc: something related to manpages broke in some recent version of the git-ubuntu snap
<ahasenack> $ git ubuntu --help
<ahasenack> man: error while loading shared libraries: libmandb-2.7.6.1.so: cannot open shared object file: No such file or directory
<ahasenack> same for the rest of the git (non-ubuntu) commands
<nacc> ahasenack: ok, will debug, it's probably more PATH stuff
<nacc> ahasenack: so `git tag --help` also is broken?
<nacc> rbasak: cpaelzer: fyi, running the cron job from outside the snap is working. so I'm going to just let that run (it's running in verbose mode for now in a screen on my bastion)
<nacc> rbasak: i'm thinking maybe we do an integration test that's just a few trivial commands ... e.g., a from-scratch (no-push) import of srcpkg (maybe one without a long history, so it's fast), a git ubuntu clone, and maybe a build right after clone?
<nacc> rbasak: and add that to CI?
<nacc> it will slow down CI quite a bit, but I think is also our best sanity check
<nacc> ahasenack: i think i agree with you as well, i will stop letting the snap auto-build propagate directly to stable
<nacc> ahasenack: i *think* it's stable right now :)
<nacc> let's talk it over on tuesday
<nacc> powersj: --^ how hard would that be to add? (basically, if we're doing a local snap build, install that snap locally (it'd need `snap install --dangerous --classic /path/to/snap` and then a `git ubuntu import --no-fetch --no-push ipsec-tools` or something
<nacc> rbasak: ooh, well, refactored (contentfully a rewrite, just due to indentation changes) of importer.py and it's working (in local testing). So that's a good sign :)
<powersj> nacc: not hard at all
<powersj> Can you file a bug or make a card and I can take care of it Monday
<powersj> (And assign me)
<nacc> powersj: good to know, i think we'll want to talk it over tuesday AM and then I'll file it then
<nacc> powersj: yep, thanks -- we'll make sure the testcase is well-defined first. I think 3 simple ones would be a good first integration checkpoint.
<sarnold> powersj: btw us federal holiday monday
<powersj> Oh yeah I get the day off! ;)
<nacc> powersj: yeah :)
<nacc> ahasenack: oh and `git ubuntu --help` works again
<nacc> ahasenack: and just pushed a fix to silence the 'failed to find manpage for git-tag' from `git tag --help` with our gitconfig workaround
#ubuntu-server 2017-09-02
<rbasak> nacc, powersj: sounds good. Thanks!
<madLyfe> nacc: what do you think about my theory on the server installer not needing any additional software for the usb adapter?
<ruben23> guys when i port scan my ubuntu server i seee open prot SSH - 22 and Http - 80, how do i close this somehow.?, i have firewall iptables already
<ruben23> any iodea guys
<nacc> madLyfe: sorry, i might have missed the theory
<madLyfe> "<madLyfe> well thats confusing, if i never had internet, and the installer doesnt recognize the usb ethernet adapter at install, but can be manually added after install, it means the software is there already."
<nacc> madLyfe: i'm going off what you reported before, that you didn't have internet in the server installer
<nacc> madLyfe: if you do, then you do.
<nacc> madLyfe: what do you mean by 'can be manually added after install'?
<madLyfe> like the usb network adapter wasnt detected during install. after i installed server i was able to go in and manually add it to the network interfaces file.
<nacc> madLyfe: so it worked on boot after install?
<madLyfe> so the software is already there for it to run. its just not being picked up by the installer.
<madLyfe> no
<nacc> madLyfe: then you're not being clear :)
<madLyfe> i had to manually configure it.
<nacc> madLyfe: what does that specifically mean?
<nacc> all you had to do was add it to /e/n/i ?
<nacc> also it's about 6pm here on a friday -- i'm eod/eow. Can we pick it up on tuesday (holiday here on monday)
<madLyfe> if i remember correctly? maybe there were some other commands but i didnt add any other software from somewhere else is what im getting at.
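The manual step being described ("add it to /e/n/i") is typically a short stanza in /etc/network/interfaces; a sketch, where `eth1` is an assumed name for the USB adapter:

```text
# /etc/network/interfaces -- the kind of stanza being described
# ('eth1' is an assumed interface name; check `ip link` for yours)
auto eth1
iface eth1 inet dhcp
```

After saving, `sudo ifup eth1` (or a reboot) brings the interface up.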
<nacc> madLyfe: then what you are describing now isn't what i remember you describing before, and we should start over :)
<madLyfe> ya np. i feel ya.
<nacc> madLyfe: just ping me on tuesday and i'll try to dedicate some more brainpower to it
<sarnold> madLyfe: I thought you had this thing up and running a few days ago?
<madLyfe> ok thanks nacc!
<madLyfe> sarnold: ya you guys got it up and running for me and that just answered the question that its possible. now i need to get it to recognize the adapter during install like everything else.
<sarnold> madLyfe: oh? why do you want to re-do the installer step?
<madLyfe> what do you mean re-do?
<madLyfe> we got it running by manually configuring the adapter after a full install and after boot.
<sarnold> like, why aren't you saying "yay time to install minecraft and start playing around"?
<madLyfe> you can play minecraft on server?
<madLyfe> these servers are just for mining. need quick and painless setups and push them to production.
<sarnold> yeah running minecraft servers is big business
<madLyfe> never played. dunno.
<sarnold> look I don't know what people -do- with their computers :)
<madLyfe> lol
<sarnold> you .. use the computer to build tools to use on the computer. uh. what more do people do with them? like there's cat videos and all but this doesn't seem like a cat video kind of computer.
<sarnold> so you've got it up and running, right? what's stopping you from installing the mining software and rolling in the dough?
<madLyfe> sorry sarnold, i had to do the dishes and made some tea.
<madLyfe> its not running at all. that was just a trial to see if my idea would work. and we proved it would. now to set up a bunch of them. i need the installer to function the way i want though.
<madLyfe> thats the last part
<madLyfe> i dont want to be manually fixing the ethernet adapter on all of these headless servers.
<sarnold> madLyfe: so, two thoughts -- if you've got more of them, are they -all- running outside the chassis? no nics? terrible usb thing for all of them?
<sarnold> madLyfe: .. could you just dd the one image around to all the disks and change what should be unique on them all after they're up and running?
<madLyfe> sarnold: yep
<madLyfe> well once i can remote into them its a breeze, as you can understand. getting the ability to remote in is my issue.
<lordievader> Good morning
#ubuntu-server 2017-09-03
<albech> is the only way for a directory to inherit parent privileges (both user, group and other) by using ACLs? Can't this be achieved with setgid and setuid?
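For the group half of albech's question the answer is yes: the setgid bit on a directory makes new entries inherit the directory's group, and subdirectories also inherit the bit itself, no ACLs needed. There is no setuid analogue for inheriting the *owner* on Linux; that part does need ACLs or similar. A quick sketch (the group name `devgrp` is hypothetical):

```shell
# setgid on a directory: new files/dirs created inside inherit its group,
# and new subdirectories inherit the setgid bit too (Linux behavior).
mkdir -p /tmp/shared
# chgrp devgrp /tmp/shared       # give the tree its shared group (hypothetical group)
chmod 2775 /tmp/shared           # leading 2 = setgid bit
mkdir -p /tmp/shared/sub         # inherits the group AND the setgid bit
ls -ld /tmp/shared/sub           # the 's' in the group triad confirms inheritance
```

Note that setuid on directories is ignored on Linux, which is why owner inheritance can't be done this way.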
#ubuntu-server 2018-08-27
<cpaelzer> good morning
<lordievader> Good morning
<neildugan> I have been trying to use upnpc to open up external access (for ssh) to a lxc VM ... but every time I try to connect (with ssh) I get a "connection refused" error ... but I can find no indication in the VM that a connection was even attempted ... can anyone help here?
<tomreyn> neildugan: does the configuration change take place on the router the upnpc is talking to, though?
<neildugan> tomreyn, I think so upnpc -l returns a line "TCP 30000->10.8.0.134:30000 'libminiupnpc' '' 14400"
<neildugan> tomreyn, I have also setup ssh to listen to that port.
<tomreyn> neildugan: does the router also report that this configuration has been applied?
<neildugan> tomreyn, no idea, there appears to be no way to interrogate (via web interface) anything about upnp (except it being enabled)
<tomreyn> neildugan: i see. so i'm afraid i forgot how lxc does networking. does it do bridging, nat or routing?
<neildugan> tomreyn, this VM is connected to my LAN (via a bridged interface) that has the router on it.
<tomreyn> ok, so the lxc host (the main ubuntu system) has a different IP address than 10.8.0.134, but one on the same subnet?
<neildugan> tomreyn, yes , and I just tested to make sure the the VM can ping the router directly.
<tomreyn> neildugan: do you have another computer on the same subnet?
<neildugan> yes
<tomreyn> neildugan: can you connect to 10.8.0.134:30000 from the ubuntu lxc host? nc -vv 10.8.0.134 30000
<tomreyn> (ignore my question about other computer on the same subnet for now, this is now unrelated)
<neildugan> tomreyn, yes that connected
<tomreyn> i just tried to connect to your public ip address on tcp port 30000, and the connection was refused.
<tomreyn> is this the external ip address you're trying to port forward?
<neildugan> tomreyn, I can also ssh in with the ssh -p 30000 10.8.0.134
<tomreyn> from where?
<neildugan> the lxc host
<tomreyn> neildugan: can you try this from the other computer on the same subnet, too?
<tomreyn> you didn't comment on my connection attempt, on purpose?
<neildugan> tomreyn, was that last message for me?  I did ssh in via port 30000 from a different computer
<tomreyn> neildugan: yes it was
<tomreyn> <tomreyn> i just tried to connect to your public ip address on tcp port 30000, and the connection was refused.
<neildugan> yea that is my problem
<tomreyn> so this is a router issue.
<neildugan> tomreyn, I think so, but all the examples I have found for upnpc don't say I need to do anything else to get a connection.
<tomreyn> you could try deleting, then re-adding the port redirection using upnpc. but with a router behaving so unreliably, i guess i would rather try to set up port forwarding statically.
<tomreyn> some routers will accept the port forwarding configuration but not actually forward traffic unless you also enable a upnp option (such as on their web interface)
<neildugan> tomreyn, when you tried to connect it should have said 'permission denied public key'  not 'connection refused'
<tomreyn> it would have, if port forwarding had actually taken place, yes
<neildugan> tomreyn, the upnp is enabled.
<tomreyn> but apparently your router or a firewall between the router and the lxc guest just drops the traffic.
<tomreyn> it may be a good idea to try port forwarding a port to the lxc host first, and to test whether this is reachable from the internet.
<neildugan> tomreyn, the only firewall operational for that VM is on the router.
<tomreyn> this way you save some complexity
<neildugan> I thought it would be fairly simple myself, that is why I am testing it... but something unknown is wrong.
<tomreyn> run this on the lxc host: while true; do echo 'You are connected.' | nc -vv -l 6000; sleep 2; done
<tomreyn> then run this in a separate terminal window on the lxc host, substituting LXCHOST by the lxc hosts' LAN ip address: upnpc -a LXCHOST 6000 6000 TCP
<tomreyn> confirm the symmetric port forwarding for port 6000 to the lxc host was correctly configured using 'upnpc -l'
<tomreyn> then tell me to test it.
<tomreyn> neildugan: ^
<tomreyn> you can also use a service like this to test it (specify port 6000): http://canyouseeme.org/
<neildugan> I was just testing an already set up port forward to a different VM. it isn't working either, I am getting a 'Connection timed out' error with that one.
<neildugan> tomreyn, if I understand the second part, this will allow you into the host computer?
<tomreyn> neildugan: 'connection timed out' (no response was received at all, traffic was dropped) is different from what i was seeing, i got connection refused (traffic was actively denied, so a TCP RST was returned to me).
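The distinction tomreyn draws can be reproduced locally. A sketch using bash's /dev/tcp pseudo-device (this assumes bash, and assumes nothing is listening on the arbitrarily chosen port 65000):

```shell
#!/usr/bin/env bash
# 'Connection refused' = the host answered with a TCP RST (nothing listening,
# or actively denied). A *timeout* means the packets were silently dropped
# somewhere along the way, so no answer ever comes back.
if (echo >/dev/tcp/127.0.0.1/65000) 2>/dev/null; then
  echo "connected"
else
  echo "refused"    # immediate failure: localhost sent back a RST
fi
```

Against a firewall that drops traffic, the same connect attempt would instead hang until the TCP timeout expires, which is the 'connection timed out' case.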
<neildugan> tomreyn, that was via a NAT port forward setup in the router via the web-interface into a different VM
<tomreyn> neildugan: the second part of the command would allow me (or anyone) to connect to the netcat (nc) process on your lxc host, which you start in the first part. this netcat process only accepts an incoming tcp connection and responds to it with the text 'You are connected.'. it doesn't grant any means of running commands on your system
<neildugan> ok
<tomreyn> neildugan: you dont need to trust me there, though, pick any port for nc and (the same) for upnpc and try it with canyouseeme.org or a similar service of your choice.
<tomreyn> the idea there is to ensure that your router actually does the port forwarding from the internet
<neildugan> ok those commands have been done
<tomreyn> neildugan: i just seem to have connected fine. you should see my connection on the terminal window you ran 'nc' on
<tomreyn> i didn't get to see the expected output, though, and my connection was dropped after it had been established
<neildugan> tomreyn, I don't, but I do see my own test
<tomreyn> neildugan: how / where from did you test?
<tomreyn> oh, it worked this time
<neildugan> ah, now I see your connection, I have an externally hosted VM
<tomreyn> so now you should see my connection attempt, from an ip address ending 101.119
<neildugan> yes
<neildugan> tomreyn, from mue-88-130-101-119.dsl.tropolys.de 56240 received!
<tomreyn> so port forwarding does work generally on the router
<neildugan> tomreyn, so if the VMs sshd_config was wrong in some way wouldn't the attempt still show up in /var/log/auth.log
<tomreyn> it is possible that you need to enable ip forwarding on your lxc host.
<tomreyn> whether or not the connection attempt would still show on the lxc guests' /var/log/auth.log would depend on whether or not sshd is binding to the right ip address and port on the lxc guest.
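Checking (and, if needed, enabling) IPv4 forwarding is a one-liner; a sketch -- note that a bridged LXC setup like this one normally does not require it, it matters for routed/NAT setups:

```shell
# Current state: 1 = the host forwards IPv4 packets, 0 = it doesn't.
cat /proc/sys/net/ipv4/ip_forward
# To enable it (needs root) -- shown as comments, a sketch only:
#   sudo sysctl -w net.ipv4.ip_forward=1
#   echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-forwarding.conf
```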
<tomreyn> luckily, you can repeat the same 'nc' test on the lxc guest
<tomreyn> i.e. stop the ssh server on the lxc guest, then verify the port forwarding to its port 30000 is still configured, using upnpc -l
<neildugan> the modem has a 'ping' diagnostic ... it can ping the VM
<tomreyn> okay, but this doesnt tell you too much about whether tcp traffic would reach it, and would be returned to where it was initiated from
<tomreyn> 'ping' (icmp echo request / response) is a different ip protocol than tcp, and is much more simple (which is why it is a good first test, but not as meaningful as a tcp data transfer test)
<neildugan> tomreyn, I was just seeing if the modem could initiate a connection to VM.  I set up the same test as before on port 6001 on the VM it seems to work.
<tomreyn> neildugan: good. but you really want to test this from the internet
<neildugan> tomreyn, I did, from my external VM
<tomreyn> oh, so you can now reach port 6001 on the lxc *guest* from the internet?
<tomreyn> neildugan: ^
<neildugan> tomreyn, port 6001 is redirected to the VM
<tomreyn> if you can reach port 6001 on the lxc *guest* from the internet, then all that's left to diagnose is the ssh server configuration.
<neildugan> hang on a sec I am going to test with port 30000
<neildugan> can't do that 30000 is in use by sshd
<tomreyn> and it should be :)
<tomreyn> you could stop the ssh service temporarily
<tomreyn> systemctl stop ssh
<tomreyn> then you could run the looped 'nc' command on port 30000 as we did above
<tomreyn> i still get:    nc: connect to [your wan/internet ipv4] port 30000 (tcp) failed: Connection refused
<neildugan> here is the sshd_config .... I can't see anything wrong... https://paste.ubuntu.com/p/qCNmQQfyGb/
<neildugan> tomreyn, atm I am connected to the VM via SSH so wouldn't stopping the service lock me out.
<tomreyn> neildugan: if, by 'the VM' you are now referring to the lxc guest on your LAN we are diagnosing, then my answer would be: yes, it would lock you out in terms of ssh. but you will continue to be able to manage it through lxc from the lxc host.
<whislock> Stopping the ssh service does not terminate your current session.
<neildugan> I have found something strange here... i disconnected from the VM ... and got in another way... but what I found was that when I setup nc on port 30000 ... it didn't work ... but it does on port 6001
<tomreyn> whislock: are you sure there? i know that restarting the ssh service doesn't, but stopping it would, wouldn't it?
<whislock> Just stopped it on this system, and I'm still here talking.
<whislock> So yes. I'm quite certain.
<tomreyn> whislock: whoops, i guess i should have known this, thanks.
<ahasenack> cpaelzer: can you import a new package into git-ubuntu?
<neildugan> tomreyn, I have setup nc on both port 6001 and 30000 , and only port 6001 is working... any ideas why the port number seems to be important?
<tomreyn> neildugan: since you have so far referred to both the lxc guest on your (bridged) LAN (the system you are trying to connect to by ssh) as well as some VM somewhere on the internet as 'VM', using this term is now ambiguous.
<whislock> What are we trying to accomplish here exactly?
<tomreyn> neildugan: if you have nc on the lxc guest on your lan running in listen mode on these ports AND have set up synchronous port forwardings for these ports on the router, and you can establish a tcp connection to only one of the two ports from the internet, then this would suggest that your isp blocks connections to one of these ports.
<neildugan> tomreyn, I suppose so, sorry, but by VM I am referring to my local VM not the one I am testing from.   What I need to do is be able to connect to the local VM from the Internet ... I am using an external VPS to check the connection from the Internet.
<neildugan> ahh... I wonder if some other use of port 30000 is a problem... and is getting blocked.
<tomreyn> whislock: neildugan has an lxc guest on his LAN at 10.8.0.134, with ssh binding on port 30000. he uses a network bridging configuration for LXC. he is trying to make this lxc guest's sshd port available from the internet on the same port (30000) by using miniupnpc's demo client 'upnpc' to setup UPNP port forwardings on his internet / WAN router (which is reachable at neildugan's IRC client's public ipv4 address).
<whislock> One immediately asks... why?
<neildugan> I want to give limited access to certain resources to some people.
 * tomreyn did not ask, doesn't consider it an unusual use case
<whislock> The use case is not unusual. The method is.
<tomreyn> whislock: you mean you'd prefer a static port forwarding / symmetric NAT configuration on the router?
<neildugan> I shifted the ssh to port 6001 and had no trouble... all this time wasted because of the ISP ... I have been trying to get this to work for quite a few days now.
<cpaelzer> ahasenack: I can import
<ahasenack> cpaelzer: libcloud please
<cpaelzer> ahasenack: are you looking for a one-off import or to include something in the regular automation
<cpaelzer> ?
<whislock> Yes. UPnP is a security nightmare.
<michal_f> hello. I can't do: apt install lxc
<ahasenack> cpaelzer: well, I don't know what our policy is
<michal_f> should I add any repo's ?
<ahasenack> cpaelzer: if we don't import, then I can't make an MP, and it will be a debdiff
<michal_f> package is not found
<cpaelzer> ahasenack: so far I have imported some as one-off which was fine - if it turned out to be a regular need I made a suggestion to add it to the whitelist
<whislock> tomreyn: The right way to do this is to leave the ssh service port as the default, and forward whatever desired public port to the service's IP/port.
<cpaelzer> ahasenack: I'd do a one-shot import now ok?
<ahasenack> cpaelzer: make it so
<ahasenack> :)
<ahasenack> thanks
<kstenerud> morning!
<ahasenack> hello kstenerud
<tomreyn> whislock: so asymmetric nat, yes this can be a little easier, but not much.
<neildugan> whislock, yes I know, but as port 30000 was being blocked (unknown to me) when I tried by using external port 30000 -> internal 22 it did work I tried other things.
<whislock> tomreyn: It's FAR easier, and it's the right way to do things.
<tomreyn> neildugan: pointing this out earlier could have helped ;)
<neildugan> tomreyn, thanks for all the help.
<tomreyn> neildugan: you're welcome
<neildugan> tomreyn, pointing out what .... I said I was using port 30000 and I was trying to use SSH ... what did I miss
<sdeziel> michal_f: do you want to use LXC/LXD containers?
<michal_f> actually I'm following instructions to install Zulip server https://zulip.readthedocs.io/en/latest/development/setup-vagrant.html#ubuntu
<michal_f> a development installation
<sdeziel> michal_f: oh OK. They use the old lxc which I'm not familiar with, sorry
<tomreyn> neildugan: diagnosing this could have been easier if you had previously stated that an asymmetric nat configuration (WAN port 30000 to LAN port 22) failed to work (and how). but don't worry about it now.
<michal_f> sdeziel, thanks! any tips at all I could follow ?
<sdeziel> michal_f: I don't know zulip but maybe you could join their chat: https://zulip.readthedocs.io/en/latest/contributing/chat-zulip-org.html
<neildugan> tomreyn, ok .... but it failed in exactly the same manner ... just 'connection refused' .... anyway thanks
<cpaelzer> ahasenack: it imported 8 versions and then died on a http 410 - retrying and taking a look
<ahasenack> ok
<cpaelzer> always seems to break on 0.5.0-1
<ahasenack> I still hit this bug every now and then when using git-ubuntu build-source
<ahasenack> FileNotFoundError: [Errno 2] No such file or directory: '.pc'
<ahasenack> that's 410 again?
<ahasenack> gone
<cpaelzer> gone it is
<ahasenack> 0.5.0 is the first one we have in lp
<ahasenack> https://launchpad.net/debian/+source/libcloud
<ahasenack> hm, no
<ahasenack> publishing history has more
<tomreyn> michal_f: you are probably affected by #1783129
<tomreyn> https://bugs.launchpad.net/subiquity/+bug/1783129
<ubottu> Launchpad bug 1783129 in subiquity "Only "main" component enabled after install" [High,Confirmed]
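The usual workaround for that bug is to re-enable the components the installer dropped, either with `sudo add-apt-repository universe` or by editing /etc/apt/sources.list; a bionic sketch of what the file should look like:

```text
# /etc/apt/sources.list with the standard components re-enabled (bionic example):
deb http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu bionic-security main restricted universe multiverse
```

Then `sudo apt update` before retrying the install.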
<tomreyn> neildugan: okay, then ignore my remark, sorry.
<cpaelzer> ahasenack: I was playing around with arguments, but I can't get it to like libcloud
<ahasenack> cpaelzer: n/m then, it's a simple diff
<michal_f> tomreyn, thx. looking into it
<ahasenack> thanks for trying
<neildugan> where would I make a suggestion on an improvement to the UI
<michal_f> tomreyn, that was it. thank you
<tomreyn> you're welcome, michal
<tomreyn> ahasenack: is there a chance to have a 18.04.1 live-server (subiquity) installer iso rebuild (i.e. before .2) to help people affected by this (and maybe some of the other more serious bugs in it)?
<tomreyn> the 'cannot install package' issue affects a lot of users from what i see on irc.
<tomreyn> (and its not immediately obvious what the cause is / what they need to search for to find a fix)
<tomreyn> alternatively, a hint on this could be placed in the (network/canonical) generated motd 'news' (unless this is strictly reserved for marketing purposes).
<ahasenack> tomreyn: yeah, I know, I pinged about this
<ahasenack> tomreyn: dpb1 is back today, maybe he can do something about it
<tomreyn> thanks
<ahasenack> cpaelzer: I attached a debdiff to the bug, and local dep8 runs. Bileto is still running: https://bugs.launchpad.net/ubuntu/+source/libcloud/+bug/1788931
<ubottu> Launchpad bug 1788931 in libcloud (Ubuntu) "FTBFS libcloud does not work or build with py3.7" [Undecided,In progress]
<ahasenack> cpaelzer: this basically disables py3.7 support, since it's not working, and upstream is aware
<kstenerud> ahasenack: Can you help me set up virtual networking that will allow a VM to talk to a container? I keep hitting dead ends trying to configure bridges and virtual interfaces
<kstenerud> trying to get an IPA server and an IPA client to talk to each other
<kstenerud> but keep getting things like: libvirtError: error creating macvtap interface macvtap0@vethWG668H (52:54:00:6d:b9:15): Device or resource busy
<ahasenack> hm
<ahasenack> kstenerud: are both (vm and container) on the same host?
<kstenerud> yes
<ahasenack> kstenerud: don't use macvtap then
<ahasenack> kstenerud: when creating the network with libvirt,
<cpaelzer> kstenerud: could you be rather brute and just make lxd and this VM use the same bridge?
<cpaelzer> might need to make sure that only one has a dhcp server on it
<ahasenack> kstenerud: and I use virt-manager for that (the gui), I just tell it to create a bridge, nat, and forward to any physical interface
<ahasenack> I don't even have to put them on the same bridge, the host can route between them
<kstenerud> TBH I have no idea what I'm doing. I have an existing br0 running, and that just exposes anything that uses it directly to the lan, which is fine
<cpaelzer> kstenerud: who does dhcp on that br0 ?
<kstenerud> but when I try to connect a kvm to it via virtual manager, my two options are NAT and macvtap
<kstenerud> dhcp is another machine on the lan
<cpaelzer> kstenerud: ok just a sec
<cpaelzer> kstenerud: you want to create a new network in libvirt as outlined at https://netplan.io/examples#bridging
<kstenerud> There seem to be two different places to set up networking on vmm
<cpaelzer> kstenerud: essentially put that in a .xml and run "virsh network define filename.xml"
<cpaelzer> then you have a new netwrok called br0
<RoyK> a new nic
<cpaelzer> and even in virt manager you can then select that as an alternative to the default network
<RoyK> not a network
<cpaelzer> when you edit the nic
<cpaelzer> virsh net-define I meant
<cpaelzer> thanks RoyK
<cpaelzer> but I meant him to create a new "network" in the sense of a libvirt network with a name and an associated configuration
<RoyK> bridging is nice for VMs or containers
<RoyK> sometimes you want them on a separate VLAN, if so, configure the bridge to setup one etc
<RoyK> mostly you don't need that
<cpaelzer> ack
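The libvirt network cpaelzer describes is a small XML file. A sketch for attaching guests to an existing host bridge br0 (the network name `hostbridge` is an arbitrary choice):

```xml
<!-- br0-net.xml: expose the existing host bridge br0 as a libvirt network -->
<network>
  <name>hostbridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
```

Load it with `virsh net-define br0-net.xml`, then `virsh net-start hostbridge` and `virsh net-autostart hostbridge`; it then shows up in virt-manager's NIC source list as an alternative to the default NAT network.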
<kstenerud> hmm I have br0 on this machine somehow. It's not defined anywhere that I can see in /etc
<kstenerud> # brctl show br0
<kstenerud> bridge name	bridge id		STP enabled	interfaces
<kstenerud> br0		8000.fee2613aae5b	no		vethMBNVBM
<RoyK> kstenerud: pastebin output of "ip r" and "ip a"
<RoyK> kstenerud: don't paste it here
<RoyK> !pastebin
<ubottu> For posting multi-line texts into the channel, please use https://paste.ubuntu.com | To post !screenshots use https://imgur.com/ !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<kstenerud> https://paste.ubuntu.com/p/cFBhndyYWK/
<kstenerud> The lan is 10.5.0.0/16
<kstenerud> dhcp is from 10.5.0.1
<RoyK> you don't want an ip address on the bridged interface
<RoyK> that is - if your physical is eth0, you don't want an ip on that - only on the bridge
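RoyK's point in netplan terms (the renderer on recent Ubuntu servers): the physical NIC carries no address, only the bridge does. A sketch, where `eno1` is an assumed interface name:

```yaml
# /etc/netplan/01-br0.yaml -- a sketch; 'eno1' is an assumed NIC name
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false            # no IP on the physical interface
  bridges:
    br0:
      interfaces: [eno1]
      dhcp4: true             # the bridge gets the address from the LAN's DHCP
```

Apply with `sudo netplan apply`.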
<ahasenack> cpaelzer: hold off on that review, I also actually need to fix the py3.7 incompatibility. The package is installable with just that debdiff, but I need more to unblock the strongswan migration (long chain of dep8 tests)
<ahasenack> and upstream seems more responsive in github than in jira
<ahasenack> surprise! :)
<cpaelzer> ahasenack: ok holding back
<ahasenack> kstenerud: did you see cpaelzer's comment in https://code.launchpad.net/~kstenerud/ubuntu/+source/openssh/+git/openssh/+merge/353531 ?
<ahasenack> kstenerud: you should have gotten an email about it
<kstenerud> Oh yes I see it
<kstenerud> I'm not sure what it means...
<ahasenack> it means it's your last chance to make changes :)
<ahasenack> he is ready to upload it for you
<kstenerud> oh ok :)
<ahasenack> kstenerud: make sure you are subscribed to bug #1771340 so you don't miss it when the SRU team accepts it, or wants to get in touch with you
<ubottu> bug 1771340 in openssh (Ubuntu Xenial) "sshd failed on config reload" [Low,In progress] https://launchpad.net/bugs/1771340
<lob0> hi!
#ubuntu-server 2018-08-28
<xase> Anyone have a good document for nameservers with bind? I'm having trouble getting my nameservers set up and resolving... and I'm not sure now what is wrong at this point.
<lordievader> Good morning
<lordievader> xase: Is bind a must? Else I'd recommend using PowerDNS, much easier to set up.
<cpaelzer> good morning lordievader
<cpaelzer> hope your week started well
<lordievader> Hey cpaelzer
<lordievader> It did, busy but good.
<lordievader> How about you?
<cpaelzer> lordievader: about the same :-)
<lordievader> Any plans for today?
<cpaelzer> The days I could have ONE plane for a day are long over :-)
<cpaelzer> but for like "now" ppc64el build errors between stdbool/altivec headers
<cpaelzer> well I also never had a plane, but I meant "plan" obviously
<gartral> fejwf
<gartral> hey all, I had a webapp crash... I logged in and I was getting "out of space errors" I ran a df -h and got back /dev/xvda1      299G  287G     0 100% / where did those 12gb go?!
<gartral> hello all
<gartral> I went to use a service I run for myself tonight and it wouldn't load, I ran a df -h and got back /dev/xvda1      299G  287G     0 100% /
<gartral> where did those 12gb go?!
<lordievader> ext filesystems reserve a bit of space for emergency logins.
<lordievader> This can be configured and was per default 5 percent IIRC.
<lordievader> The tool `dumpe2fs` can show these configurations.
<gartral> that seems... extreme for so much disk space... especially since it doesn't reserve the space for non-root purposes
<gartral> sudo tune2fs -m 2 /dev/sdb1 would set it to a reasonable 6gb for a 300gb disk, yes?
<mwhudson> yeah it's a default from a different era really
<mwhudson> in fact i thought it had changed but maybe not...
<gartral> Apparently not. >.>
<gartral> I was not pleased to discover that 300gb had 12gb reserved XD
<gartral> in the era of multi-terabyte disks... 5% of a disk is an unreasonably large percentage to reserve >.>
<gartral> 1-2% is reasonable.. but still a significant portion to set aside
<ahasenack> cpaelzer: hi, how would I best come up with a url that identifies that my patch was merged upstream, given https://github.com/apache/libcloud/commits/trunk shows that my change was rebased on top of master, instead of merged?
<ahasenack> the PR link which is closed and says "fix committed"? That's equal to "Forwarded:"
<ahasenack> (context: dep3 headers)
<ahasenack> I could construct a url in github that shows the diff on master between two commits perhaps
<ahasenack> or just not add the applied-upstream header bit
<cpaelzer> hi ahasenack
<ahasenack> I think "yes" would suffice for this case, but the dep3 header template says it must be either a url, version, or commit
<cpaelzer> ahasenack: do I understand it correctly that you'd want one (=1) link that you can use as "origin:" for all the commits you had here?
<ahasenack> cpaelzer: not origin, but "Applied-Upstream"
<cpaelzer> ah ok
<cpaelzer> I think "yes" will do in this case
<cpaelzer> you can add in the description the link you just shared
<cpaelzer> ahasenack: to some extent (but that is opinion) we never should be overzealous on the last bits of interpreting these tags - after all they are meant to make things understandable and to be able to follow the path of a change
<ahasenack> sure
<cpaelzer> IMHO if the metadata you add will allow one to find the right spots, then it doesn't matter too much if you write yes/yez/y or similar
<cpaelzer> I know tools will have to rely on a certain format
<cpaelzer> but just saying, I'd want not to spend "too much" on trying to be too accurate
<ahasenack> just checking if there was precedence
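The resulting DEP-3 header might look like the sketch below; every field value is illustrative, and the Forwarded URL is a stand-in for the real (closed) pull request:

```text
Description: disable Python 3.7 support until upstream's fix lands
 (illustrative wording; the real description lives in the patch)
Author: Andreas Hasenack <address elided>
Forwarded: <URL of the upstream pull request>
Applied-Upstream: yes, rebased onto trunk rather than merged as-is
Last-Update: 2018-08-28
```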
<ahasenack> cpaelzer: ok, I made libcloud green (https://bileto.ubuntu.com/excuses/3384/cosmic.html)
<ahasenack> including a few "always failed" cases
<ahasenack> I know why the i386/armhf ones are failing, but I don't know why they are *not* failing in the other arches
<ahasenack> it's caused by a change in python-requests: https://github.com/requests/requests/issues/3477
<ahasenack> an old-ish one even
<ahasenack> I checked versions and i386 and amd64 have the same ones
<ahasenack> I'll investigate a bit more, but we are already in a better place
<cpaelzer> thank you ahasenack
<cpaelzer> for not giving up!
<cpaelzer> and the ppc64 build issue fixed (again)
<cpaelzer> ready for lunch ...
<ahasenack> cpaelzer: https://code.launchpad.net/~ahasenack/ubuntu/+source/libcloud/+git/libcloud/+merge/353856
<ahasenack> I'm counting on that one unblocking paramiko and strongswan from cosmic-proposed
<ahasenack> I might have to ask for some sort of in-step migration, though
<cpaelzer> ahasenack: checking the MP now ...
<cpaelzer> ahasenack: #placeholder
<cpaelzer> did I miss something why this is needed in your MP?
<ahasenack> you don't like it? :)
<cpaelzer> I might like it once I get what it is about
<ahasenack> git-ubuntu build is failing miserably in packages that do not have debian/patches
<ahasenack> so I'm trying to get used to it
<ahasenack> I've seen "#placeholder" for d/p/series in some packages that do not have any patches
<cpaelzer> didn't consciously see that before, but I see why it is there now
<cpaelzer> gmm
<cpaelzer> hmm
<ahasenack> so I started by adding it, when there was no patch
<ahasenack> and just added patches on top
<cpaelzer> but since you now have patches it looks odd going from an empty file to
<ahasenack> but of course it's not needed
<cpaelzer> #placeholder + 2 changes
<ahasenack> ok, I can drop it
<ahasenack> it's really a g-u bug
<ahasenack> but will probably be a long while until it's addressed
<cpaelzer> yes
<cpaelzer> can the file be just empty?
<ahasenack> that I don't know
<ahasenack> haven't seen it
<cpaelzer> just like you don't use g-u submit I gave up on g-u build-source
<ahasenack> hehe
<cpaelzer> I could not go on day-by-day without the review/backport features
<cpaelzer> but it seems the non-core features are unreliable (as we know)
<cpaelzer> ahasenack: once I push the tag after you dropped the placeholder - do you need sponsoring on this?
<cpaelzer> I see you have a PR for the i386/arm tests
<cpaelzer> next question: do you want to wait and bundle those once accepted?
<cpaelzer> or get it fixed soon to unblock migrations?
<kstenerud> morning!
<sdeziel> good morning!
<cpaelzer> hi sdeziel and kstenerud
<cpaelzer> ahasenack: I see you dropped the #placeholder
<cpaelzer> ahasenack: what about my question on the i386 test fix
<cpaelzer> do you want to do all that in one upload?
<cpaelzer> or get things up to unblock now
<cpaelzer> and if you want things now, do you need more than tags being pushed?
<cpaelzer> since it is not imported you don't even need that actually
<xase> lordievader, I followed a server setup guide for ISPConfig... Will powerDNS drop in as a replacement?
<lordievader> Err, no idea. However, PowerDNS is a lot easier to set up than bind.
<cpaelzer> harr - we are more upstream than upstream on the openvpn service file and that breaks us
<xase> It sounds easier. With a name like powerDNS
<cpaelzer> andol: kstenerud: their .deb has an older .service than their actual git repo - it is a trap if the former works and the latter, which we follow, doesn't?
<kstenerud> ahasenack: Do I need to use the web interface for this ipa testing env? What else needs to be set up?
<ahasenack> there is a command-line tool, but I used the web interface in the past
<ahasenack> I think it demands kerberos authentication now. So you would have to install krb5-user on your machine, set it to the example.com realm, and tell it that the kdc is the vm where you installed freeipa
<ahasenack> then run "kinit administrator" and get a ticket for the admin user (or kinit admin, I don't remember the exact name)
<kstenerud> OK, I typed that and it didn't complain. Where would the ticket be?
<ahasenack> run klist
<kstenerud> OK. what does the ticket do?
<ahasenack> it's an authentication token, valid for X hours. It can allow you access to other kerberized services without having to type in a password again
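The flow ahasenack describes can be sketched as a short session; the realm EXAMPLE.COM and the principal name `admin` are assumptions carried over from the conversation (he notes it may be `administrator`), and the cache path will vary:

```
# get a ticket-granting ticket for the FreeIPA admin
$ kinit admin@EXAMPLE.COM
Password for admin@EXAMPLE.COM:

# inspect the ticket cache; the TGT and its validity window appear here
$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: admin@EXAMPLE.COM
...
```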
<ahasenack> kstenerud: before going further,
<ahasenack> kstenerud: this setup used your bind9 package with that linker patch, right?
<kstenerud> I have the ppa version installed
<ahasenack> kstenerud: ok, so I think you can propose that fix for cosmic
<ahasenack> since you just validated it yourself :)
<kstenerud> by installing it?
<ahasenack> by completing the freeipa installation
<kstenerud> oh right :)
<ahasenack> you can further check /var/log/syslog to see if there are bind9 crashes in there
<ahasenack> named-pkcs11 iirc is the binary name
<kstenerud> yeah no crashes
<ahasenack> so, MP it away
<ahasenack> we can still upload fixes
<ahasenack> but after a while, even those need an exception
<ahasenack> keep an eye on the calendar: https://wiki.ubuntu.com/CosmicCuttlefish/ReleaseSchedule
<kstenerud> ahasenack: I'm getting weird behavior with uvt-simplestreams-libvirt. When I sync a release, the command returns with no error, but the image doesn't actually download
<ahasenack> kstenerud: does it delete the images you already had in the query command?
<ahasenack> kstenerud: you have been having weird behavior with that tool since the beginning, something is going on
<ahasenack> maybe purge everything and start over (libvirt, uvt)
<ahasenack> cpaelzer: after the libcloud upload, paramiko and strongswan migrated right away \o/
<ahasenack> and look at that green wave: http://autopkgtest.ubuntu.com/packages/libcloud
<ahasenack> it's just starting
<DammitJim> Any recommended cluster (distributed) file servers on ubuntu?
<ahasenack> "Windows, i.e., NetBIOS, name resolution is on Linux provided for by the "winbindd" daemon" uhh?
<ahasenack> "One does not need either of the other two Samba daemons "smbd" and "nmbd" when NetBIOS name resolution is all you need"
 * ahasenack wonders what he thinks nmbd does
<RandomTech> Hello, Would anyone here be able to answer a question i have about ceph?
<dpb1> best just to ask the question, RandomTech :)
<RandomTech> I was just wondering if it is still recommended to have my journal on a separate ssd (I have 2 3TB HDD and 1 250GB SSD) and if so how should i calculate the space needed
<RandomTech> For ceph of course
<kstenerud> cpaelzer: Can you help me with a virtual bridge problem?
<RoyK> !ask | kstenerud
<ubottu> kstenerud: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<cpaelzer> kstenerud: virtual bridge problem with <detail> ?
<cpaelzer> ahasenack: libcloud -> paramiko -> strongswan and co - \o/
<ahasenack> yeah, that was nice
<kstenerud> cpaelzer: It was the virtual bridge you'd helped me set up yesterday. For some reason now when I try to uvt-kvm create with the template, it fails:
<ahasenack> and quick
<kstenerud> uvt-kvm: error: libvirt: Cannot get interface MTU on 'br0': No such device
<kstenerud> Something's changed on my system because it used to work until a few hours ago
<kstenerud> and I can't for the life of me figure out what
<cpaelzer> hmm
<cpaelzer> one moment
<kstenerud> virsh net-list shows br0
<kstenerud> active
<cpaelzer> does it exist with brctl show ?
<cpaelzer> and ifconfig br0 is the device there
<cpaelzer> or did lxd take it away for some reason?
<kstenerud> hmm nope none of that
<cpaelzer> kstenerud: so it doesn't exist with brctl show ?
<kstenerud> correct
<cpaelzer> arr
<cpaelzer> what does lxc profile edit default tell you
<cpaelzer> what is the bridge
<cpaelzer> to be used in lxd
<cpaelzer> ?
<kstenerud> it shows eth0 bridged to br0
<kstenerud>     name: eth0
<cpaelzer> hmm, maybe it is rather dynamic
<kstenerud>     nictype: bridged
<kstenerud>     parent: br0
<cpaelzer> are any lxd containers up that would use the bridge?
<cpaelzer> if not it might have removed the bridge
<kstenerud> nope. no containers
<cpaelzer> yeah , there you go
<cpaelzer> we created a frankenbridge intentionally
<cpaelzer> you need to have once
<cpaelzer> one container most likely to ahve it existing
<cpaelzer> sorry for the typos
<cpaelzer> check my telegram pic for the reason
<cpaelzer> :-)
<kstenerud> lol
<kstenerud> OK so basically as soon as I do the net-define thing I need to create some minimal container to use it?
<cpaelzer> yeah, and never remove the last container or the bridge will go away
<cpaelzer> you could follow the netplan.io example I linked
<cpaelzer> to create a fully separate bridge
<cpaelzer> you'd need to make sure there is some dhcp server on it external or no
<cpaelzer> not
<cpaelzer> but then the lifecycle of that bridge would neither depend on libvirt nor lxd
<cpaelzer> kstenerud: I'd leave for today, is that enough to get it working?
<kstenerud> so if I did a full on netplan bridge, that would do it?
<kstenerud> I think so
<cpaelzer> yeah that bridge would exist all the time
<cpaelzer> just as I said, someone should provide dhcp on it
<cpaelzer> but that might as well be your router or whatever is on the bridged ethdev
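A standalone bridge of the kind cpaelzer describes might look like this netplan sketch; the file name and the NIC name `enp3s0` are assumptions, and `sudo netplan apply` activates it afterwards:

```yaml
# /etc/netplan/10-br0.yaml -- br0 exists independently of libvirt and lxd
network:
  version: 2
  ethernets:
    enp3s0:
      dhcp4: false
  bridges:
    br0:
      interfaces: [enp3s0]
      dhcp4: true   # assumes a DHCP server (e.g. the router) on that segment
```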
<kstenerud> cpaelzer: This is all going to be local to the VM I'm running this whole shebang on
<kstenerud> I'm doing a cloud-side version of this tester setup
<ahasenack> I just use the libvirt-created bridge
<ahasenack> virbr0
<ahasenack> that's where I 'hang' my vms
<cpaelzer> fine for me if that is what you want
<kstenerud> but can lxd and vms talk to each other on virbr0?
<ahasenack> I hang containers in another bridge, the one created by lxc init
<ahasenack> they get another network
<_KaszpiR_> should be doable
<ahasenack> but can talk to each other just fine
<ahasenack> just not dns, usually
<ahasenack> the host routes
<cpaelzer> ahasenack: he had it on the one by lxc init
<ahasenack> dns got messy after systemd-resolved
<cpaelzer> he just ran into that bridge being removed when the last container goes away
<ahasenack> I had it working so nicely in xenial, with the containers resolveable by name automatically
<cpaelzer> kstenerud: being on the same bridge with an IP of the same subnet will work
<cpaelzer> it really behaves like a switch most of the time :-)
<kstenerud> cool :)
 * jhebden is back from [afk] - 426524h:56m:12s away
<mason> 48 years?
<jhebden> think about it for a sec, it was a very magical time for *NIX
<mason> Ah, yes.
 * RoyK guesses a raspberry pi with no rtc or ntp
<madLyfe> is NFS installed by default on sever?
<RoyK> the kernel modules, yes, but you need nfs-common to use it
<RoyK> (iirc)
<madLyfe> so here is my situation. i setup(partially, not fully configured to my liking yet) samba on ubuntu server to share a zfs mirror array to my network. i have mapped that share on win 10 as a drive and now im getting my ubuntu desktop setup. im not sure if i should use NFS or just use the the samba share?
<RoyK> should work if the smb share is mounted
<RoyK> just install nfs-common first and configure /etc/exports - exportfs -a and test it
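RoyK's NFS recipe, spelled out; the export path and subnet below are made up for illustration:

```
# /etc/exports -- one line per exported directory
/tank/share  192.168.1.0/24(rw,sync,no_subtree_check)
```

Then `sudo exportfs -a` (re)loads the export table, as noted above.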
<sarnold> jhebden: just wait until you see what's been done with unix, you'll /away again in a real hurry..
<RoyK> sarnold: ?
<madLyfe> what do you mean should work if the smb share is mounted?
<sarnold> RoyK: I suspect polkit could make anyone cry..
<jhebden> sarnold: I'm just now reading about javascript and am digging a bunker brb
<sarnold> jhebden: lol
 * RoyK installs windows 95 on sarnold's machine
<sdeziel> madLyfe: ubuntu can mount smb/cifs shares just fine
<sarnold> RoyK: hehe, in 1995 that actually seemed pretty neat. :)
<RoyK> for a short while
<madLyfe> ya im just asking if i should stick with the smb share i have setup or setup NFS as well? transition to only using NFS for nix and win both?
<sarnold> RoyK: I mean, I had plug-n-play with linux earlier, and 32 bit protected mode, and better screensavers, and web browser, and web servers, and so on.
<sarnold> RoyK: but as far as windows went it was neat ;)
<RoyK> on windoze it was plug-n-pray, remember?
<sdeziel> madLyfe: I'd stick with samba if you have it working already.. unless you want to learn nfs
<sdeziel> madLyfe: I don't know how well win10 supports nfs mounts
<madLyfe> i will just do that for now. do they play well running together?
<RoyK> windows has supported nfs since nt 3.51 or something, but it has always sucked - don't use nfs with windows
<madLyfe> is it possible to make the share act like a drive? and auto mounted?
<RoyK> !autofs
<ubottu> Automount is the modern way to mount directories over a network. It is much easier to manage and  more economic in bandwidth than static mounts via fstab. For more info - https://help.ubuntu.com/community/Autofs
<madLyfe> well do i need cifs-util?
<RoyK> to mount cifs, yes
<sarnold> yeah I tried NFS on windows. do not recommend. stick to smb/cifs stuff for windows.
<madLyfe> but since this is a samba server hosted by ubuntu server i dont need that?
<sdeziel> madLyfe: this is needed client-side only
<madLyfe> the autofs is only needed you mean, sdeziel?
<sdeziel> madLyfe: no I meant cifs-utils
<RoyK> cifs-utils is the client side to mount cifs, samba is the server-side. autofs is the client thing to automount things
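Client-side, the pieces RoyK lists fit together roughly like this; the server name, share name, mount point, uid, and credentials file are all assumptions:

```
# one-off mount (needs cifs-utils installed)
sudo mount -t cifs //kinghat-server/share /media/servershare \
    -o credentials=/etc/samba/creds,uid=1000

# or persist it with an /etc/fstab line:
//kinghat-server/share  /media/servershare  cifs  credentials=/etc/samba/creds,uid=1000  0  0
```

The credentials file holds `username=` and `password=` lines so they stay out of fstab.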
<madLyfe> so the samba server is using cifs standard?
<RoyK> samba is using SMB3, also called CIFS
<RoyK> the worst name of a network filesystem ever - "common internet filesystem"
<madLyfe> ok cuz i read that cifs was IBM or MS? implementation of the samba protocol?
<RoyK> and SMB really sucks over a slow link
<madLyfe> i have cat5
<madLyfe> so i need the cifs-utils and the autofs?
<RoyK> madLyfe: cat5 can be used for a lot of things, including strangling people - the question is the data rate of your network
<sarnold> (don't strangle people)
<RoyK> IIRC cat5 can be used for gigabit up to 50m or so, cat5e up to 100m, cat6 can do 10Gbps over 50m, cat6a 10Gbps over 100m
<madLyfe> so what is the difference between just right clicking and adding the share to the file manager(nautilus or dolphin) and do the cifs-utils/auto mount?
<madLyfe> *crickets* :P
<tomreyn> the former does, i think, not automount. the former uses udisks2 to make it happen.
<madLyfe> did you mean the latter on the last part of your statement?
<sarnold> you may not get great results asking desktop tools questions in a server channel, hehe
<madLyfe> true
<sarnold> I've never seen nautilus or dolphin, can't tell you what they do :)
 * tomreyn was referring to the graphical file manager in both cases
<madLyfe> oh ok. checks out. :P
<madLyfe> i know the system has a GUI settings area to auto mount things like the internal drive.
<madLyfe> so is that what i need the cifs-utils for? to make the samba share 'look' like a drive?
<madLyfe> so if im going to mount the network share, 'sudo mkdir /media/servershare' is their example, how do you make the dir a drive and not a dir on the OS drive?
<madLyfe> i obv dont understand fully how that works but im just trying to formulate the question as best i can.
<sarnold> the only way to access files on linux is to mount the drive into the filesystem.
<nacc> sarnold: i think you're confusing yourself about filesystems and disks
<madLyfe> so like all my internal drives are already auto mounted and are listed under devices. i right clicked and added the network share under places called 'kinghat-server': https://usercontent.irccloud-cdn.com/file/1SqiE56E/image.png
<sarnold> nacc: unlikely. :D
<madLyfe> my goal is to make the share show up like a disk under devices.
<nacc> sarnold: bah, sorry, meant madLyfe!
<sarnold> nacc: I'm doing my best to respond to madLyfe's questions .. hehe.
<nacc> and had already started typing your nick about something previously :)
<sarnold> aha! :D
 * nacc goes back to other things
<madLyfe> is there a proper place in root where i should be making the dir at?
<Xase> lordievader, you're familiar with PDNS, did you get a failure on your first startup? and if so how did you fix it?
<Xase> The error is very vague.
<sarnold> Xase: what's the error?
<Xase>   Process: 10232 ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no (code=exited, status=1
<Xase> Oh something else
<Xase> Unit start.service could not be found.
<sarnold> Xase: that feels likely to be a misuse of the systemctl command
<sarnold> what exactly did you type to get that error?
<Xase> service pdns.service start
<Xase> err service pdns start
<Xase> not .service
<sarnold> try systemctl start pdns
<Xase> Same... reading journalctl gave me a better error.
<Xase> Hold while I pastebin it.
<madLyfe> ooooo https://usercontent.irccloud-cdn.com/file/DnEyLZmU/image.png
<Xase> https://pastebin.com/puLf9yK1 sarnold
<Xase> I stopped bind before starting it...
<Xase> Disable bind from starting and reboot server?
<sarnold> Xase: okay, something is already trying to do dns. what's binding that port? you can check with netstat -ulnp | grep :53
<sarnold> madLyfe: what tool is that?
<Xase> udp    21504      0 127.0.0.53:53           0.0.0.0:*                           802/systemd-resolve
<sarnold> aha
<madLyfe> you mean that is displaying that info?
<sarnold> madLyfe: yeah
<madLyfe> dolphin file manager if thats what you mean: https://usercontent.irccloud-cdn.com/file/FWpFzG9C/image.png
<sarnold> Xase: try systemctl mask systemd-resolved.service
<sarnold> aha :) so that's dophin..
<Xase> Created symlink /etc/systemd/system/systemd-resolved.service → /dev/null.
<Xase> Should I reboot my server or stop resolved?
<Xase> also should I mask Bind9 so it doesn't interfere? I don't want to remove it, incase pdns doesn't work out.
<sarnold> Xase: that's probably a good idea
<Xase> But that systemd-resolved might've been stopping bind from working too right?
<sarnold> I don't understand exactly when a disabled service would be re-enabled but I understand some standard action does that... but I don't think any automatic tool will undo the 'masking' operation
<sarnold> yes
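The sequence sarnold walks Xase through, collected in one place; note that masking systemd-resolved leaves nothing serving 127.0.0.53, so /etc/resolv.conf may need pointing at the new server:

```
# find out what owns udp port 53
sudo netstat -ulnp | grep ':53'

# stop the stub resolver and mask it so it stays down across reboots
sudo systemctl stop systemd-resolved
sudo systemctl mask systemd-resolved

# now the real DNS server can bind the port
sudo systemctl start pdns
```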
<Xase> alright this time, no errors starting pdns.
<Xase> Sweet. On the right track.
<Xase> sarnold, thanks :)
<sarnold> Xase: all sorted? woo
<Xase> Seems to be... now I just need to figure out how to configure powerdns :D
<Xase> Good lord -_-
<madLyfe> well since adding the fstab entry my computer no longer shuts down. it shuts down and then turns right back on. lol
<madLyfe> its a dual boot machine and windows shuts down fine
<madLyfe> hmm its like it got confused and mixed suspend and shutdown..
<madLyfe> anyways, not server related.
<madLyfe> thanks for the help guys!
<Xase> I have another small issue after setting up PowerDNS via a tutorial for ubuntu.
<Xase> When I dig @localhost. the status is SERVFAIL and it says (2 servers found)
<sarnold> can you pastebin the whole thing?
<Xase> Of course sarnold
#ubuntu-server 2018-08-29
<Xase> sarnold, https://pastebin.com/62dH02gj
<Xase> Could it have to do with the fact I was previously running bind?
<Xase> I'm still new to this, I apologize for any novice level questions.
<sarnold> heh, I've never seen dig used with just the @server parameter before..
<sarnold> try dig @localhost www.google.com A
<Xase> https://pastebin.com/bfsezwMW
<Xase> Same with different ID it seems
<Xase> This is the tutorial I followed https://webilicious.xyz/linux/complete-powerdns-setup-guide-on-ubuntu-server/
<Xase> But I previously had installed bind from another tutorial.
<Xase> The tutorial for powerdns shows there should be 1 server, but mine reports 2 servers with DiG
<sarnold> Xase: okay, how about asking your server for a record that it should actually have? maybe my suggestion of google.com was a bad idea
<Xase> sarnold I haven't set any up. I was going to set it up to work with ISPConfig.
<cryptodan> nacc: I found a stable kernel and distro for my server
<lordievader> Xase: I see now that I forgot to ask you yesterday if you were looking for a recursive resolver or an authoritative dns server.
<raddy> Hello Everybody
<raddy> Is live patching available for Ubuntu 16.04.3 LTS
<lotuspsychje> raddy: first update your server, 16.04.5 is out
<lotuspsychje> raddy: a lot of new security flaws came out since
<lotuspsychje> !livepatch
<ubottu> Canonical Livepatch is a service offered by Canonical for 64 bit 16.04 installs that modifies the currently running kernel for updates without the need to restart. More information can be found at https://ubottu.com/y/livepatch and https://www.ubuntu.com/server/livepatch
<lordievader> Good morning
<jelly> but can they live patch from .3 to .5 !
<tomreyn> i'm hosting a mirror server for getdeb/playdeb, a now unmaintained third party software repository for ubuntu. there are people using my mirror directly through apt. i'd like to use this opportunity to somehow indicate that they should remove this repository and run ppa-purge against it. is there a way i could send such a message?
<tomreyn> i've seen some kind of a redirect to a new hostname with a message (such as this-archive-is-no-longer-maintained.example.org) which then showed up on apt output in the past, but am not sure how to do this or whether it's a good idea.
<tomreyn> this was an earlier, unrelated occasion where some apt archive did this to send a message
<Helenah> Ohai
<Helenah> For some reason the Ubuntu launchpad PPA keeps timing out on me, I'm not sure how to fix.
<blackflow> Helenah: you could install mtr-tiny and check where the packet flow breaks.     mtr -i 1 -c 5 -r <hostname-or-ip>    it's an advanced tracert tool thingy.
<Helenah> hmm
<Helenah> I'll give it a try
<blackflow> also check if the DNS is resolving, etc...
<Helenah> It is
<Helenah> blackflow: Could node 7 be the problem? https://paste.ubuntu.com/p/RbY2tSpbvj/
<Helenah> It's never up
<blackflow> Helenah: no, it only means that particular node is limiting/dropping icmp packets
<blackflow> and loss% is only relevant if the _last_ node _upward_ shows any
<Helenah> hmm
<blackflow> Helenah: welp looks like networking on your end is fine, the trace goes deep into canonical turf. what's the PPA url?
<Helenah> ppa.launchpad.net
<Helenah> Or you mean the full URL?
<Helenah> It's the Greek Schools repo
<blackflow> I don't know it, can you post it? or better yet, check via browser if it's accessible?
<Helenah> blackflow: https://paste.ubuntu.com/p/X2G3zF6gWS/
<blackflow> Helenah: well if you can ping or trace up to and including that ip (use -n for mtr to see IPs), then I doubt there's anything you can do. possibly some transient issue.
<Helenah> I really need this software, it's used for my fat clients.
<blackflow> see if you can pull the file directly with wget,  eg.   wget http://ppa.launchpad.net/ts.sch.gr/ppa/ubuntu/pool/main/l/ldm/ldm_2.18.06-1+t201807230407~ubuntu18.04.1_amd64.deb
<Helenah> blackflow: Worked
<Helenah> But with APT, the same packages time out, there is no getting around it...
<Helenah> This is a fresh install.
<blackflow> try shoving it in /var/cache/apt/archives/ and see if apt/dpkg will reuse it from there. Other than putting the file in the apt cache like that, I don't know if anything else needs to be done
<ahasenack> Helenah: is apt using a proxy perhaps?
<ahasenack> Helenah: check /etc/apt/apt.conf.d/* and related files, maybe do "grep -i proxy -r /etc/apt"
<sdeziel> that ^ or this: apt-config shell PROXY Acquire::http::proxy
<ahasenack> is that case insensitive?
<ahasenack> looks like
<sdeziel> in fact, this seems better: apt-config dump Acquire::http::proxy
<ahasenack> $ apt-config dump|grep -i proxy
<ahasenack> Acquire::http::Proxy "http://squid-ds216.lxd:3128/";
<ahasenack> yeah :)
<madLyfe> hey sdeziel i have my zfs smb share mounted on windows and nix but i cant seem to write anything to it.
<ahasenack> cpaelzer: what is "preparing packages" here, do you know? https://bileto.ubuntu.com/#/ticket/3392
<sdeziel> madLyfe: by default, zfs filesystems are owned by root so maybe you need to chown some dirs?
<ahasenack> the packages are built in the ppa
<ahasenack> yeah, first check the unix side: make sure the user you want can write to the dirs/files you want
<ahasenack> then repeat over samba
<ahasenack> there are layers of permissions here
<madLyfe> hmm
<cpaelzer> hmm
<cpaelzer> ahasenack: I'm not remembering the prepare step
<ahasenack> I also didn't see it mentioned in the dos
<ahasenack> docs
<ahasenack> as a status
<ahasenack> I did click "build" one more time after the packages were built in the ppa, and bileto wasn't "seeint" that
<ahasenack> seeing
<cpaelzer> did you hit publish?
<ahasenack> no
<ahasenack> as usual, when creating the ticket, I forgot to select "cosmic"
<ahasenack> it was at its default of zesty or something old like that
<ahasenack> so I clicked build again after changing it to cosmic
<ahasenack> oh, it moved
<cpaelzer> it seems the diff is not created for cosmic
<ahasenack> now it's green
<cpaelzer> I re-triggered the diff
<ahasenack> thanks
<ahasenack> I set lander to approved
<ahasenack> now it's starting the tests, all looks good
<cpaelzer> ahasenack: when you click on diff you'll see a log of the former diffs
<cpaelzer> there was none
<ahasenack> ok
<cpaelzer> despite the old (zesty) diff existing
<cpaelzer> so I thought why not re-create
<cpaelzer> and that seems to have brought it back to normal
<Helenah> ahasenack: It isn't
<ahasenack> Helenah: can you pastebin the apt-get update output?
<madLyfe> sdeziel: did you use the winbind to sync system users to samba usr db?
<sdeziel> madLyfe: I only run smbd so I manually sync the users
<sdeziel> madLyfe: I am probably using a weird setup though
<madLyfe> atm im the only one accessing the share(from a couple locations), can i have it just inherit the ubuntu server user/pass?
<sdeziel> madLyfe: I prefer to decouple the Unix and samba accounts
<sdeziel> madLyfe: all my samba users have /bin/false as their shell
<madLyfe> sdeziel: /bin/false/ as their shell?
<sdeziel> madLyfe: the samba accounts have matching Unix accounts but I set their shell to be /bin/false
<sdeziel> madLyfe: the idea is those users can only use samba and not connect to the server using SSH for example
<madLyfe> can you sync the unix accounts(only one in my case) and manually add on samba users later? ones that wouldnt be added to the server? or would it sync those as well?
<sdeziel> madLyfe: I am not sure I understand your question. How could you sync Unix -> samba if the samba user is only created later?
<madLyfe> sdeziel: samba supports the ability (through another installed package?) to sync the system's users/password database? if i only had one user on the system i would only have one in samba. if i was to add more samba users later, would those then get synced to the system as well? or is it only a one-way sync from system to samba? or do i have the whole thing wrong?
<ahasenack> madLyfe: for every samba user, there needs to be a corresponding linux user
<ahasenack> to sync passwords, the maybe simplest way (but also error prone?) is via "unix password sync"
<ahasenack> you will also need "passwd chat"
<ahasenack> I think there is a default/example in ubuntu's smb.conf
<ahasenack> but I haven't used that in a while
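The smb.conf bits ahasenack refers to look roughly like this; the `passwd chat` string follows the stock Ubuntu example, but verify it against your own smb.conf before relying on it:

```
# in the [global] section of /etc/samba/smb.conf
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
```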
<madLyfe> so useradd also adds that user to the ubuntu server as well?
<Helenah> ahasenack: It's an LTSP chroot I'm trying to set up.
<franciscodelgado> Hi people!
<Helenah> sdeziel: You can configure SSH to only allow users in certain groups to SSH in.
<sdeziel> Helenah: yes, I know thanks :)
<franciscodelgado> Have an old computer, it was dumped in the trash lol, was wondering about using it as a server with ubuntu server: It's Intel Core 2 Duo e7500 @ 2.93GHZ, 4GB of RAM, 1 160G HDD (for system, i.e.) and another disk with 1TB. x64 arch. processor, what do you think about this for data, download and local apache server?
<sdeziel> people really throw away anything...
<Helenah> franciscodelgado: In the UK, that's called robbing
<xase> Right?
<franciscodelgado> Wow so I don't want to live on the UK
<Helenah> (Just saying)
<sdeziel> franciscodelgado: this would make a pretty decent headless server
<franciscodelgado> In spain it's called to take what another ones don't want anymore lol
<Helenah> franciscodelgado: I would've done the same thing tho
<sdeziel> throwing computers in the trash should be criminal in the UK
<Helenah> Core 2 Duo is nice btw, especially for a server.
<xase> franciscodelgado, run forensics on it first...
<Helenah> sdeziel: Yeah, all those toxins, and that wasted metal
<franciscodelgado> Helenah: I feel like a little child on christmas now hahaha
<sdeziel> Helenah: yup
<ahasenack> madLyfe: /usr/sbin/useradd only cares about linux, and smbpasswd only cares about samba. There are effectively two user databases
<franciscodelgado> xase: forensic?
<Helenah> I went past a skip on my estate, it had computers, hifi systems, fridges, freezers, so much electronics, I believe most of it worked and was just thrown out because the owner was looking for an excuse to buy new.
<ahasenack> after the users are created, then the password can be sort of kept in sync if it's changed via samba. If it's changed in linux, then maybe via a pam module to also change it in samba
<ahasenack> madLyfe: it gets complicated the more users you have, that's why such setups normally resort to using ldap
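For a handful of users, the two databases ahasenack mentions are maintained with two commands; the username `alice` is made up for illustration, and `/bin/false` is the samba-only-shell trick sdeziel described earlier:

```
# linux account first, with no login shell (samba-only user)
sudo adduser --shell /bin/false alice

# then the matching samba account (prompts for the samba password)
sudo smbpasswd -a alice
sudo smbpasswd -e alice   # enable it
```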
<xase> Yeah like scrape the hard drive, make sure there isn't anything useful on it?
<sdeziel> franciscodelgado: I run 80% of my home infra on a similar machine also with a Core 2 Duo
<xase> You never know.
<franciscodelgado> xase, oh right
<Helenah> How about you shred the drive? I don't know about the laws in Spain, however in the UK, if there is illicit material on it, for example CP, it's enough to get you put on a criminal register.
<xase> Or just wipe the harddrive completely clean first. You don't want to be caught with someone else's data
<franciscodelgado> so I will give a try on it, it's incredibly silent also
<Helenah> Don't even check what's on it.
<xase> Yeah that Helenah
<xase> Just shred it.
<madLyfe> ahasenack: that seems way over my head
<franciscodelgado> Don't know what the hell was going on with this pc to waste it
<Helenah> Checking is a way of incriminating yourself
<xase> Helenah, I thought about the CP issue after I said it.
<xase> Shred the drive.
<franciscodelgado> I think there is an option on ubuntu-server installer to do womething like shred, right? like overwrite it with zeroes or similar
<sdeziel> madLyfe: it's simpler than it sounds. The Linux/Unix account is used to access the files on the samba server itself. The samba account is used by clients to authenticate against the server
<franciscodelgado> something*
<sdeziel> franciscodelgado: before the installer started its thing, you can fire up another console and do something like that: cat /dev/zero > /dev/sda
<franciscodelgado> sdeziel: oh, cool, another command to my notebook :D
<Helenah> franciscodelgado: If you really want to use the drive, do dd if=/dev/zero of=/dev/sda several times
<Helenah> You wanna overwrite it several times
<franciscodelgado> Helenah, yes, I think it's the best option, clean it all and forget what the HDD could contains
<Helenah> franciscodelgado: The idea is to "Not know".
<franciscodelgado> Will begin with this tonight
<Helenah> You don't wanna have yourself know by checking the drive.
<Helenah> Because that's where information slips if you get put under suspicion.
<franciscodelgado> Helenah, sorry, maybe it's because of my english, i tried to say "forget the idea of wanting to know what is inside"
<Helenah> I understand
<franciscodelgado> :D
<Helenah> xD
<Helenah> There are more non-natives on IRC than there are natives.
<franciscodelgado> Yes, and the fact is most people on IRC speak English, so everyone has to learn some to come here
<Helenah> It's better to know English anyway, if you have to join a channel like #ubuntu-es, you are missing out on the majority of the community which could've supported you otherwise.
<franciscodelgado> of course
<sdeziel> franciscodelgado: I use this for extra safety when erasing drives: https://paste.ubuntu.com/p/rSJhqT2XkR/
<franciscodelgado> And, about here in Spain, about CP: if I find a computer or HDD or whatever containing CP, the first thing to do is call the police, they will try to find the owner and you will be left in the shadows, since you are supposed to be helping them
<sdeziel> but now that I look at man shred, it seems that it supports shredding whole drives too
<Helenah> In data centres, drives are shredded, even if they are only a week old, if they had some data on them, they are shredded, never reused.
<Helenah> It's to protect the data centre
<Helenah> So there is no room for accusations
<franciscodelgado> aham
<sdeziel> there is shred(1) and shred (physically)
<tomreyn> unfortunately the former can't really be relied on.
<sdeziel> tomreyn: on files, I'd agree but on whole disks/partitions I'd be much less worried
<tomreyn> if it's whole disks, i'd rather use ata 'secure' erase, or rather combine the two, but only if i failed to use full disk encryption.
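A runnable sketch of the shred(1) approach discussed above, demonstrated on a scratch file so it is safe to try anywhere. For the whole-drive case you would point TARGET at the block device (e.g. /dev/sdX, double-checked with lsblk) and run as root; the flags are standard GNU coreutils.

```shell
# -n 3 = three random overwrite passes, -z = a final pass of zeros to
# hide the shredding, -u = deallocate and remove the file afterwards
# (omit -u when the target is a device rather than a file).
TARGET=$(mktemp)
dd if=/dev/urandom of="$TARGET" bs=1M count=4 status=none   # scratch data
shred -v -n 3 -z -u "$TARGET"
```

As tomreyn notes, shred on individual files can't be fully trusted on journaling or copy-on-write filesystems; on whole devices it is much more dependable.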
<xase> alright... so I purged bind, and reinstalled bind. but I still have my local router ip listening on port 53 AND 127.0.0.1 and :::53. I'm having trouble setting up bind for my name servers.
<xase> All the tutorials seem to be for local dns. Or isn't quite clear on how to configure for external.
<xase> I can't use powerdns which seemed to be a lot easier, it's not supported real well by ispconfig.
<Guma> Hello, I was asked to figure out how to set up a "stage" server for production machines so they all point to our internal server for all updates. This server would be a gatekeeper between the Ubuntu servers and our production servers, so all of our production servers would get updates only from our internal "stage" server. Whenever I update the stage server, the production machines will upgrade to that version as well.
<Guma> In reality there would be a development stage server and a production stage server, with production pointing at development. I have never set this up. Where do I start? Any suggestions or links?
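One common way to build the staged-mirror setup Guma describes is aptly. A hedged sketch, assuming aptly is installed and with repo/snapshot names and the bionic release as placeholders (publishing normally also wants a GPG key, or `-skip-signing` for testing):

```shell
# Mirror the archive once, snapshot it, and publish the snapshot.
# Production boxes point sources.list at the published tree, so they
# only move forward when you publish a newer snapshot.
aptly mirror create stage-main http://archive.ubuntu.com/ubuntu bionic main
aptly mirror update stage-main
aptly snapshot create stage-2018-08 from mirror stage-main
aptly publish snapshot -distribution=bionic stage-2018-08
# Later: update the mirror, snapshot again, then atomically switch:
# aptly publish switch bionic stage-2018-09
```

The development/production split then becomes two published snapshots, with production switched to whatever development has already validated.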
<nacc> cryptodan_mobile: nice
<ahasenack> kstenerud_: following your freeipa pastebin instructions now (https://pastebin.ubuntu.com/p/8pnKw3pHj4/) to see what's going on
<ahasenack> kstenerud_: so two things missing from that pastebin so far
<ahasenack> kstenerud_: one we talked about, the reboot.
<ahasenack> kstenerud_: the other one I just remembered is to make the ip a static one, and not dhcp assigned, to avoid surprises
<ahasenack> I'm doing that now over here
<ahasenack> kstenerud_: third, I think this answer is wrong:
<ahasenack> * Enter an IP address for a DNS forwarder, or press Enter to skip: (machine's IP address)
<ahasenack> it's not your own address: it's your home dns,
<ahasenack> or the libvirt provided one
<ahasenack> using yourself as a forwarder would create a loop
<kstenerud_> ahasenack: So if I'm using libvirt provided one, what would it be? Would I find it in resolv.conf?
<ahasenack> it would be x.x.x.1
<ahasenack> the .1 of the libvirt network
<kstenerud_> ok
<ahasenack> or, another way, it would be the bridge ip on the host
<ahasenack> in my case,
<ahasenack> virbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
<ahasenack>         inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
<ahasenack> 192.168.122.0/24 is the libvirt "default" network
<ahasenack> or, don't set any forwarder, but then you won't take advantage of the host's dns cache
<ahasenack> or its knowledge about other libvirt networks
<ahasenack> I use squid in a lxd container, in another network, so I use libvirt's .1 DNS so that I can reach the proxy by name
<ahasenack> from the vm
<ahasenack> kstenerud_: the dns forwarder config means, "forward the dns request to this forwarder if the name being asked is not one of my own zones"
<ahasenack> usually that would be the root servers, but if you have a forwarder configured, the forwarder is asked instead
<kstenerud_> but if I don't configure a forwarder it should still complete installation, right?
<ahasenack> kstenerud_: yeah, that must have been it, the config just finished for me on a brand new vm
<ahasenack> kstenerud_: yes, but I haven't gone down that route
<ahasenack> in my case it probably wouldn't finish because of my proxy named "squid-ds216.lxd", I would have to replace that with an IP, or not use the proxy
<ahasenack> since the root servers don't know about squid-ds216.lxd :)
<ahasenack> kstenerud_: I also did the other two changes: fixed ip, and reboot after that
<kstenerud_> ahasenack: OK so just to be clear, you used the x.x.x.1 address for the dns forwarder, and also to make the address static, and the reboot?
<ahasenack> yes
<kstenerud_> ok
<ahasenack> now, don't follow the ip tip blindly
<ahasenack> make sure your x.x.x.1 is a dns server
<ahasenack> try dig with it
<kstenerud_> In theory it should work fine with DHCP since I'm only going to run it for a few mins
<ahasenack> dig @x.x.x.1 gnu.org
<ahasenack> yeah, it's just to avoid surprises
<kstenerud_> hmm install failed again :/
<ahasenack> did you check /etc/hosts?
<ahasenack> you must have something else going on
<ahasenack> did you use the bind9 ppa?
<kstenerud_> Do you have this in your hosts:
<kstenerud_> 127.0.1.1       cosmic-freeipa.example.com      cosmic-freeipa
<ahasenack> no, that's what I told you to remove :)
<kstenerud_> Without that it won't auto-populate fields
<ahasenack> you have to have that entry with the real ip
<ahasenack> not 127.x.x.x
<ahasenack> and drop the bit without the domain
<ahasenack> 192.168.122.40 cosmic-freeipa.example.com
<ahasenack> just one line, like that
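Quick sanity checks for the hosts/hostname fix above. The FQDN and IP are this thread's example values, so substitute your own:

```shell
# The FQDN must resolve to the real address, never to 127.0.1.1.
if grep -q '^127\.0\.1\.1' /etc/hosts; then
    echo "remove the 127.0.1.1 line first"
fi
hostname -f || true                           # should print the FQDN
getent hosts cosmic-freeipa.example.com || echo "hosts entry still missing"
```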
<sylario> Hi, I am trying to install NVM on an ubuntu server, and despite bashrc being modified, it seems the added lines have no effect : https://github.com/creationix/nvm
<sylario> Those are the lines added in bashrc :
<sylario> export NVM_DIR="$HOME/.nvm"
<sylario> [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
<sylario> [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion
<sylario> But even after a new login, nvm is undefined and $NVM_DIR is empty
<sylario> I had no problem installing nvm on a non server ubuntu
<ahasenack> sylario: what's with the \.?
<sylario> idk
<ahasenack> use just ., or replace "." with "source" (no quotes)
<ahasenack> and no \
<sylario> It changed nothing
<sylario> This code is the same on my ubuntu workstation and it works
<sylario> And echo $NVM_DIR is still empty
<sylario> not sure why
<ahasenack> if you source .bashrc, does it get defined?
<sylario> Yes, with the first line I posted here
<ahasenack> do you have a ~/.profile?
<sylario> Those lines have been added by the nvm install script
<ahasenack> that is what sources ~/.bashrc
<sylario> yes
<ahasenack> do you have a $BASH_VERSION variable defined? Try echo $BASH_VERSION
<sylario> 4.3.48(1)-release
<ahasenack> also check "getent passwd <youruser>" and confirm that the shell for that user is /bin/bash (it's the last field)
<sylario> admin:x:1000:1001::/home/admin:/bin/bash
<sylario> It seems my bashrc is full of config for a color prompt, yet when I ssh to this server it's monochrome
<ahasenack> then you will have to trace the login path
<ahasenack> check if ~/.bashrc could be exiting before your NVM_DIR addition
<sylario> how do I do that?
<JanC> sylario: that's usual because the shell on the server doesn't know your terminal can show colours
<JanC> usually*
<sylario> Is there something I could put in bashrc or profile to check if they have been run?
<sylario> How do I debug that?
<tomreyn> have them touch a file in /tmp
<JanC> they are just shell scripts, so you can echo something or create a file or such
<sylario> I added touch /tmp/profile at the start of .profile, logged out and back in, and the file is not in /tmp
<tomreyn> so your shell is probably bash
<sylario> what does that mean?
<tomreyn> head -n 5 ~/.profile
<sylario> # ~/.profile: executed by the command interpreter for login shells.
<sylario> # This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
<sylario> ls -l
<tomreyn> that's just 2 of 5 lines, but yes
<sylario> I  have no idea what I should conclude from that
<tomreyn> so do you have ~/.bash_profile or ~/.bash_login ?
<sylario> I have a bash_profile
<sylario> ok
<tomreyn> well, as the message on top of ~/.profile you just partially quoted says, if you run bash, then ~/.bash_profile (if it exists) is executed instead of ~/.profile
<sylario> so RVM tanked my shell
<JanC> ~/.bash_profile can source ~/.profile
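JanC's suggestion can be as small as this ~/.bash_profile fragment (a sketch of the stock behaviour; keep whatever rvm/nvm lines the installers appended below it):

```shell
# ~/.bash_profile -- bash reads this INSTEAD of ~/.profile when it
# exists, so pull ~/.profile back in explicitly (it in turn sources
# ~/.bashrc for bash logins, restoring prompt colors, aliases, etc.).
if [ -f "$HOME/.profile" ]; then
    . "$HOME/.profile"
fi
```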
<sylario> https://www.irccloud.com/pastebin/y0KQ3tPi/
<sylario> maybe I can put that in bashrc and delete bash_profile ?
<tomreyn> maybe. and maybe the script it sources is not compatible with other shells
<tomreyn> * scriptS
<sylario> Thanks a lot
<sylario> it works!
<sylario> I added source ~/.profile
<sylario> Now I have coloration in ls
<sdeziel> sylario: hmm, aliases for ls are added in the standard .bashrc on Ubuntu IIRC
<sdeziel> sylario: weird that you had to do anything to get those
<sylario> yes, and my bashrc was not run
<sylario> because rvm created a bash_profile
<tomreyn> i think ~/.profile sources ~/.bashrc if run by bash
<sylario> now I can install node and npm to restart the cursed deployment tool that use npm and bower and node and ember (and bootstrap)
<outernational> howdy. getting "We are currently unable to retrieve the requested key. Please try again later." on https://auth.livepatch.canonical.com/. email is verified.
<tomreyn> sylario: had you considered https://github.com/rvm/ubuntu_rvm
<sylario> I should try to do more bash instead of doing python/ruby script
<sylario> @tomreyn did not know it existed
<sylario> thx
<tomreyn> sylario: it's the first thing said under 'basic install' at https://rvm.io/rvm/install
<sylario> I installed rvm on this server 5 years ago
<tomreyn> maybe you followed some other instructions
<tomreyn> i see
<tomreyn> 5 years is when ubuntu goes EOL, hope you upgraded in the meantime
<sylario> it's ubuntu 18.04
<kstenerud_> ahasenack: I'm not sure what I'm doing wrong, but no matter what it always fails with Unable to retrieve CA chain: [Errno 111] Connection refused
<tomreyn> :-)
<ahasenack> kstenerud_: did you check /etc/hosts? :)
<ahasenack> did you test the forwarder with the dig command?
<ahasenack> is the output of the hostname command the fqdn?
<sylario> It was an Ubuntu 12.04 at first according to the hosting interface
<kstenerud_> https://pastebin.ubuntu.com/p/g7qfmwf6P8/
<ahasenack> kstenerud_: what is on line 13?
<kstenerud_> Not sure. That got added by one of the apt installs I think
<kstenerud_> Line 12 is what I added
<ahasenack> and when you added it, the other one was there already?
<ahasenack> try removing 13 again, and reboot. See if it's cloud-init during boot that is adding it
<kstenerud_> yup it got added after reboot
<ahasenack> kstenerud_: ok, so it's cloud-init
<ahasenack> so
<ahasenack> maybe mine isn't messing with it because I supply a custom user-data to import my ssh key, set my local proxy and local ubuntu mirror
<ahasenack> kstenerud_: there are a few ways to sort it
<ahasenack> hammer, and non-hammer
<ahasenack> hammer is "apt purge cloud-init"
<ahasenack> non-hammer is to edit /etc/cloud/cloud.cfg and remove some lines
<ahasenack> maybe these 3:
<ahasenack>  - set_hostname
<ahasenack>  - update_hostname
<ahasenack>  - update_etc_hosts
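The non-hammer edit above, sketched with sed. It is demonstrated on a scratch copy so it is safe to run anywhere; to apply for real, run the sed (as root, after a backup) against /etc/cloud/cloud.cfg. The module names are the three just quoted, but check your own cloud.cfg before trusting the pattern:

```shell
# Comment out the cloud-init modules that rewrite the hostname and
# /etc/hosts on every boot, leaving everything else in the file alone.
CFG=$(mktemp)
printf ' - set_hostname\n - update_hostname\n - update_etc_hosts\n - runcmd\n' > "$CFG"
sed -i -E 's/^([[:space:]]*-[[:space:]]*(set_hostname|update_hostname|update_etc_hosts))/# \1/' "$CFG"
cat "$CFG"
```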
<kstenerud_> ugh it did it again
<kstenerud_> hosts is clean. hostname returns fqdn, but I still get connection refused
<kstenerud_> This is what I'm doing: https://pastebin.ubuntu.com/p/yj35Gp8GSK/
<kstenerud_>   [13/28]: publishing the CA certificate
<kstenerud_>   [error] RuntimeError: Unable to retrieve CA chain: [Errno 111] Connection refused
<sdeziel> kstenerud_: it would be nice to see where it's trying to connect. strace/tcpdump should tell you
<dpb1> DNS/Cert/hosts modifications/FreeIPA, what could go wrong!
<ahasenack> kstenerud_: do you have cosmic-proposed enabled by any chance?
<kstenerud_> grep proposed /etc/apt/sources.list returns nothing
<ahasenack> the ca server probably failed to start, the logs could tell why, maybe it's obvious in there
<ahasenack> but it just worked out of the box for me, in a fresh cosmic vm
<ahasenack> and your bind9 ppa
<sarnold> kstenerud_: note there's also /etc/apt/sources.list.d
<kstenerud_> I'm running all of this in a uvt-kvm created vm. Everything in that pastebin is exactly what I did, in that order
<dpb1> can I attempt?
<ahasenack> well, we made changes after that pastebin
<ahasenack> do you have an updated one?
<ahasenack> ah, I see
<ahasenack> let me check that
<kstenerud_> I literally copy-paste that line by line into a terminal
<ahasenack> did you test the forwarder with dig?
<kstenerud_> yup
<ahasenack> the one thing we still have different is that I setup a static ip
<ahasenack> https://pastebin.ubuntu.com/p/yDqsMjSh6T/
<dpb1> huh
<dpb1> /etc/hostname as the FQDN?
<kstenerud_> yup
<dpb1> installing from the PPA now
<dpb1> hahahaha
<dpb1> 402 packages
<kstenerud_> :)
<ahasenack> dpb1: yeah, freeipa is weird
<ahasenack> I think it's a redhat bug, and since they develop on rh...
<ahasenack> kstenerud_: in the meantime, can you try to fetch some logs?
<ahasenack> like the install log it suggests
<kstenerud_> The logs just reiterate the error, and a python stack trace leading to a cli call
<dpb1> ok
<dpb1> those packages are finished installing
<dpb1> now next
<ahasenack> no, something must have failed to start, otherwise there wouldn't be a connection refused
<ahasenack> check /var/log/pki
<kstenerud_> ok hang on I need to rebuild the vm. Running a static address broke things and I can't get into it anymore
<ahasenack> you didn't copy mine bit by bit, did you? :)
<dpb1> I'm in the magic phase now
<kstenerud_> It's the same subnet so it should have worked
<ahasenack> but I also had a mac address in there
<dpb1> my cpu is really churning
<dpb1> it's like I'm on hangouts
<kstenerud_> doh!
<ahasenack> kstenerud_: careful what you copy and paste from the internet! :)
<kstenerud_> lol
<ahasenack> the mac address isn't needed
<ahasenack> but it was there already, so I kept it
<ahasenack> https://netplan.io/examples has a static address config example
<kstenerud_> oh just to stop it from cycling ips?
<ahasenack> it's what cloud-init generated for me
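For reference, a minimal static-address netplan file along the lines of the netplan.io examples, matching the 192.168.122.0/24 libvirt network from above. The interface name (enp1s0) and file name are assumptions; check `ip addr` and the existing files under /etc/netplan/ first, then apply with `sudo netplan apply`:

```
# /etc/netplan/01-static.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: false
      addresses: [192.168.122.40/24]
      gateway4: 192.168.122.1
      nameservers:
        addresses: [192.168.122.1]
```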
<teward> dpb1: stop mining bitcoins, that'll solve the CPU usage :P  (just kidding xD)
<ahasenack> it's a filter
<ahasenack> dpb1: check your /etc/hosts, in another terminal probably
<dpb1> teward: my nuc has thusfar mined .0000000001 bitcoins, I'm afraid
<dpb1> kstenerud_: I'm past the 13/28 failure you pasted earlier at least, still chugging
 * dpb1 wonders why he has 3 other uvt-kvm machines
<dpb1> ahasenack: http://paste.ubuntu.com/p/Jv7ZWgGCbT/
<dpb1> note, the magic is still running.
<ahasenack> 👍
<ahasenack> this thumbs up looks remarkably different from the web page where I copied it from
<ahasenack> it's even the wrong hand
<kstenerud_> Doesn't render with the default font
<dpb1> https://www.dropbox.com/s/kg05oz6pfqf52yu/thumbs.png?dl=0
<sarnold> I get a nice square box
<sarnold> very solid, sturdy looking
<dpb1> hehe
<kstenerud_> https://imgur.com/a/ANF6PJH
<dpb1> OK, it's done now kstenerud_ I have a nice 'next steps' screen
<kstenerud_> using exactly what I posted?
<dpb1> yes
<kstenerud_> weird...
<dpb1> cut-and-paste
<dpb1> my uvt-kvm is not virgin, but it's pretty unmodified
<kstenerud_> mine is whatever the defaults are
<dpb1> that's the ubuntu font, no clue why the emoji doesn't render
<dpb1> anyway
<teward> dpb1: black magic from the system perhaps?  (Emoji don't work in a lot of IRC clients heh...)
<dpb1> teward: ya, I have to admit, I may have done something to get it working.  been a while
<teward> i keep having to ask this, is there a way to run package autopkgtests from within a 16.04 system, and if so what are the commands :P
<kstenerud_> Rerunning the install with a static address and grabbing lunch brb
<dpb1> (I'm on weechat)
<dpb1> so terminal comes into play for me
<ahasenack> teward: there is a bunch (of commands)
<dpb1> kstenerud_: ko
<teward> ahasenack: i forget what they are for 16.04's commands, happen to know any of them offhand or where I can find details?
<ahasenack> teward: you basically need to setup vms or lxds first, and then run the tests in them with an autopkgtest (or adt?) command
<teward> i know they changed names between 16.04 and 18.04
<ahasenack> teward: are the executables autopkgtest* or adt*?
<teward> adt*
<ahasenack> kstenerud_: do you have that autopkgtest session noted down somewhere?
<teward> and LXD isn't much of a problem, I already use it so I can utilize those pretty well
<teward> utilize that environment (and build the LXDs for the autopkgtests)*
<ahasenack> teward: here is an irc session I had with kstenerud_ about autopkgtests: https://irclogs.ubuntu.com/2018/08/17/%23ubuntu-server.html#t16:59
<ahasenack> just rename the autopkgtest prefix to adt I think
<ahasenack> or maybe check if there isn't something in xenial backports
<ahasenack> kstenerud_: you can put that bind9 mp up I think, with these instructions you have, since they worked for dpb1
<dpb1> ahasenack: +1
<teward> ahasenack: that helped.  But so did this thing I found: https://people.debian.org/~mpitt/autopkgtest/README.running-tests.html
<teward> (google helps?)
<teward> (at least to run the basic autopkgtests I need to run)
<ahasenack> +1
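For the record, the 18.04-era flow looks roughly like this; on 16.04 the tools carried an adt- prefix (adt-build-lxd, adt-run) with somewhat different syntax, so treat the exact spellings below as assumptions and check the manpages:

```shell
# One-time: build a reusable LXD test image for the target release.
autopkgtest-build-lxd ubuntu-daily:xenial/amd64
# Run a package's DEP-8 tests inside a fresh container from that image
# (nginx here is just an example package).
autopkgtest nginx -- lxd autopkgtest/ubuntu/xenial/amd64
```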
<kstenerud_> OK, MP is in. The fact that different uvt-kvm setups can cause app installs to succeed or fail is worrisome, though
<ahasenack> kstenerud_: maybe we can revisit this one at the sprint. You seem to have gotten it to work yesterday
<ahasenack> then today all is failing
<kstenerud_> yeah :/
<ahasenack> kstenerud_: please also mention in the MP (description I think: can't think of a DEP3 header for this now) that debian is using the same patch
<ahasenack> we are always concerned with adding delta to debian
<kstenerud_> Oh. I got the patch from fedora. Is it in debian?
<ahasenack> you can find a link to debian's patch in salsa.debian.org, bind9 project
<ahasenack> timo pushed it to debian
<ahasenack> kstenerud_: see https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1769440/comments/56 and https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1769440/comments/59
<ubottu> Launchpad bug 1769440 in bind9 (Ubuntu) "freeipa server install fails - named-pkcs11 fails to run" [High,Confirmed]
<ahasenack> kstenerud_: take a look at some logwatch bugs, see if perhaps many can be killed in one swoop: https://bugs.launchpad.net/ubuntu/+source/logwatch
<ahasenack> all the "unmatched" types
<ahasenack> kstenerud_: also, https://code.launchpad.net/~kstenerud/ubuntu/+source/bind9/+git/bind9/+merge/354002 should be against ubuntu/devel, since cosmic isn't released yet
#ubuntu-server 2018-08-30
<gartral> hey guys, I spun up a vm of ubuntu server 18.04 on my laptop and I'm having a head-scratcher here... I can't install mariadb or phpmyadmin on the VM, the packages don't exist... are they in universe/multiverse or something?
<sarnold> both are in universe
<xase> I seem to have a nameserver conflict... I have probably mangled something.
<dpb1> gartral: https://bugs.launchpad.net/subiquity/+bug/1783129
<ubottu> Launchpad bug 1783129 in subiquity "Only "main" component enabled after install" [High,Confirmed]
<dpb1> gartral: look at comment #27
<dpb1> gartral: I encourage you to mark the bug as affecting you if it fixes your issue
<xase> I'm using ispconfig, and my server is ns1.gethosting.today Now all is fine and dandy, it worked... but when I add gethosting.today as a client in ispconfig ns1.gethosting.today ceases functioning as a visible website.
<xase> I think I've configured something wrong =/
<dpb1> xase: I want to understand your question, but I don't. :/
<dpb1> ispconfig is like cpanel from what I gather
<xase> dpb1, I'm trying to sort through the issue, till I find the actual question I'm meaning to ask.
<xase> Yes.
<dpb1> ok, do you have shell access to the server?
<xase> Ok so. I was having an issue with BIND so I switched to PDNS, PDNS wasn't compatible so I switched back to bind.
<xase> Yes
<sarnold> what's ispconfig *do*?
<xase> Server is under my control completely, aside from not sharing the same physical location.
<xase> sarnold, it's for managing a server, either internally or externally.
<sarnold> so's ssh and juju ..
<sarnold> what's it *do*? :)
<xase> so ns1.gethosting.today binds properly to the ip it seems, but when I add the zones for gethosting.today itself to add it as a website it brings it up, but seems to bring down ns1.gethosting.today's connection to the ip.
<xase> What do you mean what does it do?
<xase> I'm sorry if I don't understand the question.
<dpb1> if it were me, I'd drop ispconfig
<sarnold> xase: if it just manages dns zone files then it's probably a waste of your time..
<xase> sarnold, it doesn't just manage dns. It's also a webpanel, for installing software and adding client sites.
<xase> dpb1, the alternatives cost too much.
<sarnold> "gethosting.today zone: The following NS name(s) did not resolve to address(es): 174.105.101.49, ns1.gethosting.today, ns2.gethosting.today"  -- http://dnsviz.net/d/ns1.gethosting.today/dnssec/
<xase> Well I recently just readded the A names, would it take time to settle again?
<xase> I had websites pointed at ns1.gethosting.today earlier. I just removed them and started from zero, because gethosting.today stopped working.
<xase> are there some related commands so I can see the dns entries server side?
<sarnold> I think the dnsviz folks query from the servers directly when asked, every time.
<xase> Crap... so upon fixing one thing I've broken another... again XD
<xase> Well... is my setup of my hostname for the server ns1.gethosting.today appropriate... or should I have just named it gethosting.today?
<sarnold> if it's a nameserver then ns1.gethosting.today is a good name for it.
<xase> So, I don't need like a core server named just gethosting.today/
<xase> ?
<xase> I'm asking inane circular questions, I know :(
<sarnold> probably not..
<xase> Alright... let me try removing all sites again.
<xase> o_O
<xase> It's working without DNS zones set...
<xase> See, I hate learning new stuff... but love it.
<xase> It's just confusing to get the hang of things.
<xase> gethosting.today I mean
<xase> Alright... sites and dns zones removed...
<xase> Alright...
<xase> So... with all that removed.
<xase> Gethosting.today resolves to the default apache page.
<xase> not ispconfig.
<xase> Nothing is configured... So it must be some sort of conflict.
<xase> I'll ask again in ispconfig channel... but they're unofficial and small.
<gartral> dpb1: thank you
<xase> Oy -_- Now I can't even point to ns1.gethosting.today... I don't know what magic mojo is going on behind the scenes... but it's elusive to me.
<xase> Ok... so my nameservers aren't working at all.. I knew something was amiss.
<xase> I mean... the urls must be working partway...
<xase> Since myzera.com is pointed at ns1/ns2...
<xase> But I just disabled those ns records on ispconfig... and the site still seems to resolve
<xase> Because of the A name I have set pointing to the IP I assume.
<gartral> can apparmor die already? please? of all the problems i've ever had, apparmor makes the most mess of things... >.<
<andol> gartral: I'm sure there are people who feel the same way about basic filesystem permissions.
<andol> gartral: But no, I don't see it going away. From what I have seen, Debian too is in the process of making it the default.
<gartral> andol: my issue right now is that I can't start mariadb and AA seems to be the culprit
<andol> gartral: In that case, look in your log files, and then add the needed rules?
<gartral> andol: it's complaining about 'failed name lookup', everything is local to the vm and should be using localhost, so :/
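A quick triage sequence for that kind of AppArmor denial. The profile path is the stock mariadb/mysql one on Ubuntu (verify with aa-status), and aa-complain comes from the apparmor-utils package:

```shell
sudo aa-status | grep -i mysql                 # which profile, enforce or complain?
sudo dmesg | grep -i 'apparmor="DENIED"'       # the actual denials, incl. the name= path
sudo apt install -y apparmor-utils
sudo aa-complain /etc/apparmor.d/usr.sbin.mysqld   # log instead of block, then retest
```

Once the denied paths are visible in the log, andol's advice applies: add the needed rules to a local profile override and put the profile back in enforce mode.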
<cpaelzer> good morning
<xase> Good morning.
<xase> Yeah. Not having a fun time with bind9. Not sure how to configure it; there's conflicting information everywhere, and #bind is quiet atm. I don't get why I can bring up websites without nameservers, by using A records.
<rfm> xase, can't tell what your problem is.  Can you give a simple example of something you tried that didn't do what you wanted, and tell what happened instead?
<xase> rfm I want to set up my machine as ns1.gethosting.today and use it as a name server for my other websites.
<rfm> xase: oh, trying to set up lying nameservers is screamingly complicated, I can't help because I've never managed it myself.  I think dnsmasq might be where you want to go rather than bind
<xase> lying nameservers?
<xase> rfm I'm not sure I understand... but I'll try taking a look at dnsmasq.
<lordievader> xase: So what troubles are you facing?
<xase> lordievader, my trouble is, all the bind tutorials seem to be for internal networks, I'm trying to create an external network, so I can serve my domains on this server properly. I had set up powerdns, only to find that it's not fully compatible with ispconfig currently, and I couldn't find any documentation that was relevant on how to do it.
<xase> So I can't use PowerDNS, though it seemed really neat.
<xase> Either way, I'd still be at this crux most likely.
<lordievader> Hmm, I see. That is too bad.
<lordievader> Anyhow, how is bind currently set up?
<xase> Default.
<lordievader> Did you disable recursion?
<lordievader> We already have too many open resolvers, so that would be step one, disabling recursion.
<lordievader> https://knowledgelayer.softlayer.com/procedure/disable-recursion-dns
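The gist of that advice, as a named.conf.options sketch for an authoritative-only server. The directory path is the Ubuntu bind9 default; adjust to taste and reload with `sudo systemctl reload bind9`:

```
// /etc/bind/named.conf.options -- answer only for zones we host,
// never recurse on behalf of random internet clients.
options {
    directory "/var/cache/bind";
    recursion no;                 // not an open resolver
    allow-transfer { none; };     // permit transfers per-zone if needed
    dnssec-validation auto;
    listen-on-v6 { any; };
};
```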
<ktechmidas> Can I ask who's idea it was to put a bit.ly link about IDE's in 18.04 server?
<ktechmidas> https://gist.github.com/ktechmidas/234bc342bda178ebbcfd3fde3b192949
<ktechmidas> "Check out 6 great IDEs now available on Ubuntu...."
<ktechmidas> using MOTD in this way devalues it... using it for vulnerabilities is fair, but for ads for a bit.ly link? Ugh
<ktechmidas> why does everything creep into ads?
<xase> ktechmidas, I was confused about that message too.
<xase> Are they really 6 great IDEs?
<blackflow> badum-tss!
<Tsien> èä»ä¹
<ktechmidas> Tsien: ni hui shuo ying yu?
<blackflow> !cn
<ubottu> 如欲獲得中文的幫助，請輸入 /join #ubuntu-cn 或 /join #ubuntu-tw
<ktechmidas> Ah there's a CN channel, nvm
<lordievader> ktechmidas: Did Ubuntu really add that or was it Amazon?
<Tsien> I can speak English.
<ktechmidas> Maybe it was Amazon?
<ktechmidas> Unsure
<ktechmidas> Although I can't see them doing it either... I'm using the official Canonical image
<lordievader> Oh, yeah. Then it is Canonical. I figured you might be using Amazon's images.
<blackflow> thought it was Ubuntu.... there was a backlash about that some time ago.... see bug #1701068
<ubottu> bug 1701068 in base-files (Ubuntu) "motd.ubuntu.com currently shows media item (HBO's Silicon Valley using Ubuntu)" [Wishlist,Opinion] https://launchpad.net/bugs/1701068
<ktechmidas> blackflow: I'm quite disappointed after the authors message: https://bugs.launchpad.net/ubuntu/+source/base-files/+bug/1701068/comments/11
<ubottu> Launchpad bug 1701068 in base-files (Ubuntu) "motd.ubuntu.com currently shows media item (HBO's Silicon Valley using Ubuntu)" [Wishlist,Opinion]
<ktechmidas> that it's happened again a year later
<ktechmidas> I will file a bug in ubuntu-motd where they told us to last time
<blackflow> ktechmidas: well, Canonical is not shy from trying out new things. As long as I can remove update-motd or in fact not even install it (as I only do debootstrap'd server installations, mostly due to this kind of aftermarket bloat)
<ktechmidas> but advertising in a server MOTD? Really?
<ktechmidas> I used the MOTD to be informed or reminded of security issues
<ktechmidas> now I need to turn it off to avoid pointless articles about IDE's
<ktechmidas> and other clickbait
<blackflow> if you used motd for that, you were doing it wrong :)
<ktechmidas> not sure how much good it will do but I reported a bug: https://bugs.launchpad.net/ubuntu/+source/base-files/+bug/1789850
<ubottu> Launchpad bug 1789850 in base-files (Ubuntu) "Advertising in the MOTD" [Undecided,New]
<ktechmidas> had to do it in base-files again
<ktechmidas> since ubuntu-bug wouldn't allow me to report in ubuntu-motd
<blackflow> you mean update-motd?
<ktechmidas> blackflow: according to this https://bugs.launchpad.net/ubuntu/+source/base-files/+bug/1701068/comments/11
<ubottu> Launchpad bug 1701068 in base-files (Ubuntu) "motd.ubuntu.com currently shows media item (HBO's Silicon Valley using Ubuntu)" [Wishlist,Opinion]
<ktechmidas> it's ubuntu-motd
<ktechmidas> "here is a team of engineering managers at Canonical (the ~ubuntu-motd team in Launchpad)"
<blackflow> yeah but the package is update-motd
<ktechmidas> I will switch packages on this bug then
<ktechmidas> Oh it was Mark's commit
<ktechmidas> guess I'm getting nowhere with my report
<blackflow> yeah. :)  Canonical really wants to promote their stuff, especially snaps lately and if you're annoyed, just uninstall update-motd or reconfigure it to stop fetching news.
<ktechmidas> I will leave it there... just in case they ever see the light...
<ktechmidas> I actually like snaps and lxd
<ktechmidas> lxd is something I've come to rely on... I wonder if it will work upstream
<cpaelzer> coreycb: jamespage: I just realized there is no new libvirt/qemu in https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/rocky-staging
<cpaelzer> am I looking at the wrong place or is this release not using the versions of Cosmic?
<jamespage> cpaelzer: no we don't automatically add it
<jamespage> only if there is a need or good reason
<jamespage> rocky is quite happy with libvirt from bionic
<blackflow> ktechmidas: upstream?
<ktechmidas> Debian
<jamespage> cpaelzer: hey - do you have a good way of measuring PPS for DPDK/OVS testing? I want to measure performance from within VM's
<blackflow> ktechmidas: oh, yeah. :)  yeah, lxd works there.
<blackflow> snaps too, but are less well integrated into the desktop.
<zetheroo> How do you stop mysql service from starting automatically?
<cpaelzer> jamespage: I usually use different workload profiles in uperf
<cpaelzer> jamespage: not sure if it is a good way
<cpaelzer> better would be a setup with a raw packet generator
<cpaelzer> but I don't have that
<blackflow> zetheroo: you 'disable' the service with systemctl. note that other things can still wake it up, depending on configuration, eg. socket based activation, dbus or whatever. to completely disable a service, you need to 'mask' it.
<cpaelzer> you can compare relative PPS easily, but probably not max PPS
<zetheroo>  blackflow: I already tried that and another method https://paste.ubuntu.com/p/gnJmVrn5hD/
<blackflow> zetheroo: didn't that disable the service?
<blackflow> zetheroo: or are you thinking about "stopping" instead? disabling means it shouldn't start on boot. stopping means you stop the currently running process.
<zetheroo> blackflow: After a reboot the mysql service is still running
<zetheroo> I want to stop it from automatically starting at boot
<zetheroo> I also tried commenting out the 'start on' line in /etc/init/mysql.conf
<blackflow> well... I don't know then. those split sysv-under-systemd-oh-hey-but-there-are-still-traces-of-upstart  services are a mess.....
<blackflow> that's xenial, right?
<zetheroo> yes, 16.04.5
<blackflow> some clues given here:  https://askubuntu.com/questions/57381/how-to-stop-mysql-from-running-at-boot-time     check the comment about  /etc/init/mysql.override
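Collecting the options mentioned so far in one place: the standard systemctl commands plus the upstart-era override. On 16.04 mysql is normally systemd-managed, so mask is the heavy hammer if plain disable doesn't stick:

```shell
sudo systemctl disable mysql            # don't start at boot
sudo systemctl mask mysql               # also blocks socket/dbus activation
# upstart-era leftover, only relevant if /etc/init/mysql.conf exists:
echo manual | sudo tee /etc/init/mysql.override
systemctl is-enabled mysql || true      # after reboot: should say masked
```

If it still starts after masking, something else (another unit, a cron job, an init script) is launching mysqld explicitly, as blackflow suggests below.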
<CarlenWhite> I have a service in systemd that needs to have something mounted before it runs. Do I give the permission to the user defined in the systemd file or something else?
<blackflow> CarlenWhite: make a dependency for the service on the corresponding .mount unit?
<zetheroo> ok, tried 'echo "manual" >> /etc/init/mysql.override' - didn't work either
<CarlenWhite> Eh. It's a bit weird of a set up. Every attempt to get the required mount to function at boot results in the system entering Emergency Mode.
<CarlenWhite> It's a loop-file inside a zvol on a zpool.
<CarlenWhite> Because Microsoft SQL.
<CarlenWhite> I can't get the mount to work correctly at boot, but it can be manually done with a few mount commands.
<blackflow> loop file inside a zvol? that doesn't sound right. zvol is a block device, so are you talking about the zvol itself, or there's a filesystem on it, containing the file?
<CarlenWhite> I tried using the zvol directly but MSSQL throws a fit even though for all intents and purposes it's supposed to act like a normal block device.
<blackflow> CarlenWhite: precisely, so you need a filesystem atop of it. otherwise it's no different from a hard disk
<blackflow> CarlenWhite: however, you _might_ need to move all dataset mountpoints to legacy, in order to control them via fstab, if you're having dependency/ordering issues, with zfs being.... its own thing.
<blackflow> alternatively, use .mount units that depend on zfs units in turn
<blackflow> zfs-mount.service in particular
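A .mount unit along those lines might look roughly like this (a sketch, not tested against this setup: the pool name `tank`, zvol path, and mountpoint are made-up placeholders; the file name must be the systemd-escaped form of the Where= path):

```
# /etc/systemd/system/mnt-mssql\x2ddata.mount  (hypothetical)
[Unit]
Description=ext4 filesystem on a zvol for MSSQL
# ensure the zvol device exists before mounting
Requires=zfs-mount.service
After=zfs-mount.service

[Mount]
What=/dev/zvol/tank/mssql
Where=/mnt/mssql-data
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target
```

A service that needs this filesystem can then declare `RequiresMountsFor=/mnt/mssql-data` instead of relying on fstab ordering.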
<CarlenWhite> I might be misunderstanding. I did mount the zvol as a ext4 and attempt to use it with mssql.
<blackflow> or, zfs being what it is.... if mount doesn't work, you can have a regular .service executing `zfs mount`  instead of using the default mounting mechanics
<blackflow> oh ext4, yes
<CarlenWhite> As I've observed ZFS kicks in after reading fstab.
<blackflow> CarlenWhite: that's why I mention legacy mounts
<blackflow> it's a bit of a mess right now, but there's an open issue in Zfs on Linux, to deal with boot ordering
<blackflow> CarlenWhite: however, I'm curious, did you add the zvol mountpoint to fstab?
<CarlenWhite> Yeah I tried to. Causes the system to stop on boot.
<blackflow> CarlenWhite: with what error?
<CarlenWhite> Can't recall, but let me safely dismount and reboot the system and I'll tell ya'.
<zetheroo> so still have no solution for disabling mysql service from automatically starting on boot :/
<blackflow> CarlenWhite: you can add "nofail" to fstab options so it passes through, but yeah, I'd be interested in knowing the error
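For reference, an fstab entry with `nofail` as suggested might look like this (the UUID and mountpoint are placeholders; `x-systemd.device-timeout=` is optional and shortens the default 90s wait for a device that may appear late):

```
# /etc/fstab -- hypothetical entry
UUID=0123abcd-placeholder  /mnt/mssql-data  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2
```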
<blackflow> zetheroo: what if you renamed /etc/init/mysql.conf to /etc/init/mysql.conf.disabled?   found some hints via google
<CarlenWhite> System rebooting now. Waiting for hardware BIOS'...
<CarlenWhite> blackflow, zetheroo `systemctl disable mysql`?
<blackflow> zetheroo: or if nothing works.... then you've got something else explicitly launching mysql, not a service file
<blackflow> CarlenWhite: it's xenial, and no systemd unit yet on xenial
<CarlenWhite> Ah. Ignore me then.
<jamespage> cpaelzer: ta for the pointer to uperf
<jamespage> seem to be getting a reasonable pps and throughput figure
<zetheroo> blackflow: if I rename that file can I still start up the service manually?
<cpaelzer> jamespage: I have a set of profiles for uperf if you are not happy with what is delivered by default
<blackflow> zetheroo: I don't know, try it. you can always rename back if not
<cpaelzer> of the category: not better, but different :-)
<jamespage> cpaelzer: pointer?
<cpaelzer> jamespage: things starting with uperf in https://git.launchpad.net/~ubuntu-server/ubuntu/+source/dpdk-testing/tree/
<cpaelzer> I essentially search and replace different things in it to get a matrix of tests
<cpaelzer> one being slow and transactional to check latency/roundtrip (rr)
<cpaelzer> and the other one trying to somewhat burst-fill the line
<zetheroo> blackflow: renaming didn't stop the service from starting on boot
<blackflow> zetheroo: no idea then. I think you have some custom script or something starting it then
<blackflow> zetheroo: can you check the logs and see exactly when in the boot process the service was started?
<zetheroo> ok
<zetheroo> first entry I see in syslog after a reboot is: linuxrunner mysqld_safe: Starting mysqld daemon with databases from /var/lib/mysql
<zetheroo> blackflow: https://paste.ubuntu.com/p/43YnnHXRq7/
<zetheroo> and more ... https://paste.ubuntu.com/p/pJ7qJSWvJQ/
<blackflow> zetheroo: that's good, but the idea is to see where in the logs, relative to OTHER things, this appears. in this case what non-mysqld thing is right before that, and what is right after that
<blackflow> zetheroo: maybe just pastebin the entire bootlog?    journalctl -b
<blackflow> zetheroo: look through it, see if anything needs censoring/sanitizing before you pastebin
<jamespage> cpaelzer: ta
<CarlenWhite> blackflow, I noticed ZFS starting, then fstab attempting to mount the zvol by its UUID, which hung and proceeded into Emergency Mode. Reviewing dmesg I can't find any meaningful information related to it.
<CarlenWhite> However there is some weird behavior in Emergency Mode. Can't mount the zvol manually. Running the appropriate mount command blinks the HDDs then returns with no error. But navigating to the mount shows nothing and umounting it says it wasn't even mounted in the first place.
<zetheroo> blackflow: found an earlier mention of mysql starting: Aug 30 11:52:55 linuxrunner mysql[1075]:  * Starting MariaDB database server mysqld
<blackflow> zetheroo: without the whole boot log I can't help you, you'll have to figure out yourself when in the chain it is started and then try to figure out what is starting it
<blackflow> CarlenWhite: sounds like the zvol is broken? when you boot the system normally, can you mount it?  add "nofail" to the fstab options, and after it boots try mounting it via fstab (mount /just/target/path/)
<blackflow> and by zvol is broken I mean that particular volume is not mountable
<CarlenWhite> blackflow, Yeah. When I comment out the entry then once in the system, I can uncomment and mount them fine.
<zetheroo> blackflow: can I PM it to you?
<blackflow> zetheroo: no, sorry.
<blackflow> CarlenWhite: or you could've used "nofail" :)   sorry, I'd have to see error logs, no idea what's going on there.
<zetheroo> blackflow: ok, how's this? https://paste.ubuntu.com/p/rXcZ8qRrPn/
<blackflow> zetheroo: line 741 is relevant. that looks like systemd is using the sysv-init shim. are you 100% sure that disabling the service does not work?    systemctl disable mysql.service   (or whatever the unit name is)?   if "disable" doesn't work, did you try "mask"?
<CarlenWhite> blackflow, I just took high-speed footage to capture the message
<blackflow> CarlenWhite: surely there are logs? journalctl?
<zetheroo> blackflow: "mask" worked!
<blackflow> which I mentioned way up, but okay :) glad you got it fixed.
<CarlenWhite> Reviewing the 240FPS footage it just mentions it timed out on waiting then two messages saying that two depends failed.
<zetheroo> Sorry, I didn't know what you meant by 'mask' earlier on
<zetheroo> blackflow: so to start the service manually it would have to be re-enabled I guess, because now 'service mysql start' complains 'Failed to start mysql.service: Unit mysql.service is masked.'
<blackflow> zetheroo: which is a great lesson. when you don't understand something being advised, look it up on google. "systemctl mask" returns quite a lot ;)
<CarlenWhite> blackflow This case, /mnt/microsoft-sql-block-holder then /mnt/microsoft-sql-block then Local File System.
<CarlenWhite> So correction three.
<blackflow> zetheroo: don't use systemctl, it's not a systemd unit obviously. use service
<blackflow> wait, I think that's shimmed through systemctl..... uhhh.... use the /etc/init.d/.... script?    /etc/init.d/mysql....   start    ?
<blackflow> CarlenWhite: somewhere in the journal, or dmesg, there should be more info on why it failed.
<zetheroo> '/etc/init.d/mysql start' doesn't return anything
<blackflow> CarlenWhite: anyway, for all this trouble, you could've really written a .mount unit and made the .service you want to start depend on it somehow.
<blackflow> zetheroo: but does it start it?
<blackflow> no output usually means no error
<zetheroo> blackflow: 'service mysql status' still shows it as being inactive
<zetheroo> is there another way to check?
<blackflow> yeah the service command is shimmed through systemctl methinks, that's why. using the init script directly should've worked
<blackflow> zetheroo: ps auxf | grep mysql
<blackflow> or /etc/init.d/mysql status
<zetheroo> root      2029  0.0  0.0  14544   988 pts/0    S+   12:18   0:00          \_ grep --color=auto mysql
<CarlenWhite> blackflow, Just: Timed out waiting for device dev-disk-UUID
<zetheroo> '/etc/init.d/mysql status' returns nothing
<blackflow> zetheroo: sigh...... yeah I don't know then. that's total mess there. systemd+init+upstart. I don't know any more, sorry.
<zetheroo> ok, thanks for your help
<CarlenWhite> blackflow, But yeah I'll give up and try a .mount unit.
<CarlenWhite> Failing that, bring up the disks and start mssql as a root crontab
<blackflow> CarlenWhite: yeah, thing is fstab is actually not used directly but turned into runtime .mounts by a systemd generator. this happens methinks independently of zfs because of the boot ordering issue zfs has, the zfs mount service should be started first. there's an open issue on ZoL github, I can't unearth it now, but I've seen it.
<blackflow> CarlenWhite: using an explicit .mount should work because that's how you can tell systemd all the dependency relations, that you can't via fstab
<CarlenWhite> blackflow, Thing is that I'm seeing ZFS start up before it even tries mounting the disk.
<CarlenWhite> Or zvol rather
<blackflow> CarlenWhite: one thing I didn't ask, what's your fstab line like?
<zetheroo> blackflow: 'systemctl unmask mysql.service' and then 'service mysql start' works
<zetheroo> so it just has to be masked again when done testing/using it
<blackflow> zetheroo: which is not good. the systemctl mask proves that it's systemd that initiates starting it on boot. but it actually defers to upstart or sysv. meaning the _real_ place where you control whether it starts is not systemd, but sysv/upstart.
<blackflow> which also means you should be able to start the service that way, outside of systemd, even if the systemd unit is masked.
<blackflow> something doesn't seem right from all you're telling me. it's a bit weird that you can't disable the service using any of the other methods, via that override file, mysql.conf, or update-rc.d
<blackflow> I'm willing to bet you didn't test those properly.
<zetheroo> probably not - but this messy solution should do for now.
<CarlenWhite> blackflow, Yeah I'm going with a root crontab that on reboot gets the drives ready then starts up the service by itself. When I have more time I'll try giving it another whack.
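The @reboot crontab workaround CarlenWhite settled on could look roughly like this (a sketch: the pool name `tank`, zvol path, mountpoint, and the `mssql-server` unit name are assumptions about this setup; it needs a live system to run):

```shell
# root crontab entry (edit with: crontab -e, as root)
# Wait for devices to settle, bring up the pool, mount the zvol's
# filesystem, then start MSSQL by hand, bypassing boot-time ordering:
@reboot sleep 30; zpool import -a; zfs mount -a; mount /dev/zvol/tank/mssql /mnt/mssql-data; systemctl start mssql-server
```

As blackflow notes below, this is a stopgap; a .mount unit expresses the same ordering in a way systemd can see.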
<CarlenWhite> It Works™
<CarlenWhite> For now the server is gonna get its day of use shortly and I don't want to bring it down.
<CarlenWhite> But hey, I won't have to be at the machine physically to get it back online.
<blackflow> that's maybe a stopgap solution, but really you should be doing this through systemd. one of, if not THE, major issue(s) people have with systemd is that these things are not implemented fully through the systemd interfaces, and breakage occurs because systemd is not told to do something correctly.
<blackflow> then hate ensues.
<blackflow> like things hanging on boot because systemd is now explicit about dependencies -- that's not systemd's fault but a configuration fault. if things are allowed to fail and aren't a critical condition, then they should be marked as such, eg with the "nofail" option in fstab.
<CarlenWhite> But in terms of the fstab mock config I presented, it should work?
<blackflow> CarlenWhite: not sure what you mean. thing with fstab is that it's being converted to systemd units via a generator. while there are some x-* options one can use in fstab to instruct systemd, really the correct place is a .mount with proper dependencies expressed. sure, a @boot hack works, but one day you'll forget it's there and will have a hard time figuring out what is mounting something while there's no fstab entry and no .mount.
<cpaelzer> jamespage: for a minor change in qemu all kinds of autopkgtests break
<cpaelzer> jamespage: is cinder 2:13.0.0.0 broken in general atm?
<cpaelzer> or at least the tests?
<cpaelzer> if so could we add a force badtest?
<blackflow> The openssh manpage is not clear whether AllowUsers directive can appear more than once, and whether they're then all combined into a single list. My testing shows it IS so, but was wondering if anyone knows of a problem with that?
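On blackflow's question: sshd_config(5)'s general rule is that the first obtained value for a keyword wins, so even if repeated AllowUsers lines appear to merge in one OpenSSH version, putting all patterns on a single line is the unambiguous form (user names and the address pattern below are illustrative only):

```
# /etc/ssh/sshd_config -- single-line form avoids version-dependent behavior
AllowUsers alice bob deploy@192.0.2.*
```

Running `sshd -t` validates the config before restarting the daemon.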
<jamespage> cpaelzer: er - E: Unable to correct problems, you have held broken packages.
<cpaelzer> jamespage: umm what is going on ... ?
<jamespage> cpaelzer: looking now
<cpaelzer> jamespage: maybe something because of libc transition?
<cpaelzer> bit cinder-daemons and cinder-volumes have the same entry in d/t/control
<cpaelzer> shouldn't they either be both or none installable
<cpaelzer> jamespage: I found nothing obvious in the log
<cpaelzer> for now I hit retry
<cpaelzer> let me know if you see something
<cpaelzer> otherwise I'll need to set up a local repro for a better look
<jamespage> cpaelzer: no I know what it is
<jamespage> cpaelzer: the wsgi apache conf is in cinder-common, not cinder-api, so there is an install ordering issue
<jamespage> coreycb: can we not put the apache configuration into the -common package please - it's role-specific and causes all sorts of issues
<cpaelzer> ok
<cpaelzer> jamespage: that sounds like "wait until you have the fix uploaded" for me then, right?
<jamespage> cpaelzer: yep, doing it right now
<coreycb> jamespage: sure. there's no way we can get all the config into -common packages to make py2->py3 upgrade seamless.
<coreycb> i've come to realize that
<jamespage> coreycb: I'm not sure I follow you in the context of the apache2 configurations
<coreycb> jamespage: it's config that can be overridden
<jamespage> but it has to be in a package that actually depends on and hence enables the apache2 and associated wsgi plugin
<jamespage> otherwise apache2 borks on configuration during install as cinder-common does not have a depends on apache2 etc...
<jamespage> coreycb: does that make sense?
<coreycb> jamespage: yeah i get that
<coreycb> jamespage: i'm just saying that the effort was an attempt to allow for upgrades from py2->py3 without re-installing config and that's really not possible for all config.
<jamespage> coreycb: I'm missing something then
<cpaelzer> coreycb: jamespage: should we force-badtest cinder/2:13.0.0~rc1-0ubuntu3 for now to give you some time sorting things out?
<coreycb> cpaelzer: no it's an easy change
<jamespage> cpaelzer: yeah I'm just uploading it now
<cpaelzer> will that only address cinder or also the two nova issues?
<cpaelzer> there ppc and i386 are on "Test dependencies are unsatisfiable" blame: nova
<coreycb> jamespage: i'll take a pass through any other affected packages
<jamespage> coreycb: ta
<jamespage> coreycb: I poked the ovs backport again - the test suite is notoriously racy
<coreycb> jamespage: ah good, thanks
<ahasenack> cpaelzer: libcloud getting greener and greener: http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html#libcloud
<ahasenack> cpaelzer: ppc and s390 fail for fdroidserver because that's an android package. Not ported to those arches apparently
<ahasenack> the dependency doesn't exist there
<kstenerud_> good morning
<ahasenack> hello kstenerud_
<ahasenack> cpaelzer: could you please pick the more specific review slot, instead of the "canonical-server" one? Otherwise the MP disappears from https://code.launchpad.net/~canonical-server/+activereviews
<kstenerud_> ahasenack: What would be a good bug to look at next?
<ahasenack> kstenerud_: the tiny logwatch ones I think
<ahasenack> as they probably still apply to cosmic
<ahasenack> kstenerud_: the "unmatched entries" ones from https://bugs.launchpad.net/ubuntu/+source/logwatch
<ahasenack> starting with the ones easier to reproduce, i.e., pick some service you are more familiar with, or that isn't complex to setup like freeipa
<xase> Anyone have experience with purftpd?
<xase> pureftpd^
<sdeziel> isn't FTP something from the '90s?
<xase> sdeziel, yes. Yes it is.
<xase> But I'm trying to use it for my less tech savvy cohorts.
<whislock> Wait until he hears how old DNS is.
<sdeziel> xase: I find SFTP pretty easy to use, maybe the less tech savvy folks would agree?
<ahasenack> dput uses ftp by default :)
<whislock> Do browsers support SFTP natively?
<sdeziel> ahasenack: dput also supports SFTP but I didn't try
<ahasenack> I tried, uploads were getting stuck mid-transfer, and I was told to switch back to ftp
<sdeziel> whislock: no but clients are widely available
<whislock> sdeziel: I feel like you're missing the point of "less tech savvy."
<sdeziel> whislock: possibly. I just don't like to see new deployments of this crappy thing that is FTP
<xase> sdeziel, sftp I can use... but no, my cohorts want to be able to use a regular ftp client to regularly log in to an ftp server =/ plus, how am I supposed to segregate my clients into their own sftp user, since FTP users and SFTP users are separate
<whislock> sdeziel: If it's the right tool for the job, it's not crappy.
<whislock> sdeziel: Just because its old or whatever doesn't make it crappy. Is it secure? No. Does it need to be for this use case? Maybe not.
<sdeziel> whislock: don't waste your time trying to convince me that FTP is not crappy ... the control and data separation is terrible
<whislock> Then don't waste your time trying to inflict your biases on everyone else. Cuts both ways.
<whislock> Either be helpful, or shut up.
<Ussat> \o/ first prod 16.04lts --> 18.04LTS done :) no issues
<whislock> Ussat: I've heard from a few people that the xenial->bionic upgrade was far less breaky than they were prepared for.
<sdeziel> xase: most FTP clients I encountered also supported SFTP. As for the users, it should be possible to convert them to SFTP... if you wanted
<Ussat> I am VERY happy
<sdeziel> whislock: I didn't know xase knew about SFTP so I felt it was appropriate to draw some attention to it
<xase> sdeziel, yes, but my ftp credentials don't work for sftp? So what's the solution there?
<xase> Anyhow.
<whislock> xase: What issue are you having with pureftpd?
<xase> I have pureftpd mysql setup... via this https://www.howtoforge.com/tutorial/perfect-server-ubuntu-18.04-with-apache-php-myqsl-pureftpd-bind-postfix-doveot-and-ispconfig/
 * whislock sees "disable AppArmor" and gets angry instantly.
<sdeziel> xase: I don't know your setup but with SFTP (as provided by openssh) you can use password authentication just fine
<blackflow> thing is, it shouldn't.
<ahasenack> although to use mysql as a backend for authentication in that case means having to use a pam module
<whislock> xase: This howto is a flaming garbage pile.
<sdeziel> xase: oh sorry, I missed the mysql part
<blackflow> xase: telling your users to use FileZilla with an ssh agent and pubkey SFTP auth is not complicated, and very easy, especially for Windows users. putty works fine with that.
<blackflow> I seriously doubt non-tech-savvy users would need FTP anyway, so those people are at least savvy enough to set up filezilla with ssh keys.
<sdeziel> WinSCP is decent too
<Ussat> whislock, we were quite happy with the upgrade, although, I did about a week of testing first :)
<kstenerud_> ahasenack: For MP fixes, do I just reset the git repo back, commit new changes, and force push?
<blackflow> but uh be careful about enabling passwords for ssh. you do that, and you've removed the security benefit of keys.
<whislock> Ussat: I'm not a datacenter guy anymore, but my DC-oriented folks tell me that they're loving life with bionic so far.
<ahasenack> kstenerud_: it's a matter of preference, related to how big the whole diff is
<ahasenack> kstenerud_: sometimes the reviewer wants to be able to see what you did to fix his or her comment
<Ussat> whislock, quite happy here with it
<ahasenack> kstenerud_: in that case, committing on top helps
<ahasenack> kstenerud_: when there are a few such requests, I tend to commit on top, and only rebase + force-push at the very end
<Ussat> whislock, I have multiple test labs, one of them being in VM's on my laptop :)
<Ussat> whislock, we are REALLY paranoid here (we are a hospital - and shit HAS to work)
<whislock> I get it. I have several hospitals as clients, actually.
<xase> Well, whislock since you seem to understand AppArmor, can I go ahead and enable it again? Or is it a once it's gone, it's gone, start from scratch situation?
<xase> Apparently it causes some issues with scripts and ispconfig.
<xase> But according to recent threads it doesn't as much anymore?
<whislock> If AppArmor is causing issues, your application is poorly designed.
<whislock> Ditto SELinux.
<xase> Well it's not *my* application.
<xase> I just did what the tutorial told me to do.
<whislock> I know, yell at the maintainers of the apps in question.
<xase> This is a test run.
<whislock> Yeah, that's the problem. Howtoforge is populated with how-tos of questionable worth, filled with instructions of questionable wisdom, written by people of questionable experience.
<xase> So, some users (read: few) said it caused no issues in recent setups.
<xase> So can I re enable it?
<xase> Or do I have to start from ground zero since the system is 'compromised'?
<whislock> AppArmor can be reenabled, sure. Let me point out ways to identify a how-to written by someone who doesn't know what they're doing, if I may?
<blackflow> if apparmor is causing issues then first and foremost the policy could/should be fixed. doesn't automatically mean the application is poorly designed.
<blackflow> default policies, especially for complex programs, will have issues.
<whislock> I entirely disagree, but oh well.
<Ussat> as do I
<Ussat> and I work with both Apparmor and SELinux daily
<xase> Well, I won't disable it next time when we re-do our setup.
<whislock> Ussat: I thought I remembered you from #rhel. :P
<blackflow> I do too. I have yet to encounter a situation where a (local) modification to a policy hasn't fixed a problem with it.
<Ussat> :)
<whislock> Yeah, that's a concept problem.
<whislock> If the security policy is complaining, then your widget is doing something it probably shouldn't be doing.
<blackflow> or the policy is incomplete.
<xase> So we're supposed to trust the default policy though?
<blackflow> like firefox in ubuntu for example, for a while it carried no rules for the DRM plugin so you couldn't watch netflix with FF apparmor policy enabled.
<blackflow> I'm not sure when that was fixed, but I think it was recently. I run my own policy and don't notice when the packaged ones change
<cpaelzer> ahasenack: btw I added the server slot back
<ahasenack> thx
<cpaelzer> it is so inconvenient that you have to take extra care to not pick that slot
<blackflow> xase: personally I don't. default policies are too broad for my taste.
<Ussat> ...
<Ussat> die spammer
<Ussat> IRC investigative journalist...HAHAHAHA
<RoyK> I wonder what mr. Matthew did to this guy - he's been spamming for some time now
<compdoc> an IRC ad would only piss off everyone. that's crazy
<Ussat> great spammers are back
<Ussat> oh ffs..OPPP
<Ussat> kick the spammer
<ahasenack> kstenerud_: did you see the review comments on that bind9 MP? Just checking
<ahasenack> smoser: hi, could you try to import snapper into git-ubuntu?
<nacc> ahasenack: if he is not around, i can do it, probably
<ahasenack> nacc: ah, true, go ahead if you can :)
<nacc> ahasenack: is the edge snap working again? i haven't used it in a while (i have my test snap installed locally)
<ahasenack> I don't know
<ahasenack> let me see what I'm using
<ahasenack> git-ubuntu  0.7.4+git94.eb6c24a  433   edge      canonical✓  classic
<ahasenack> I'm on that
<nacc> ahasenack: thanks
<ahasenack> we hit a bug trying to import libcloud the other day, that one always fails, we haven't investigated
<ahasenack> we get a 410 gone from lp, and there is something about a disabled team/api user
<nacc> ah so maybe an error path needs to be handled
<smoser> ahasenack: did you need me to do that  still ?
<ahasenack> smoser: I think nacc is on it?
<nacc> trying to, yeah :)
<ahasenack> cool
<ahasenack> smoser: thanks
 * smoser ctrl-c's
<ahasenack> uhoh :)
<kstenerud_> ahasenack: Yes, I asked another question there and am waiting for a response
<ahasenack> kstenerud_: did you click "save"? :)
<kstenerud_> I clicked it now
<ahasenack> kstenerud_: in lp, even when replying to a diff comment and clicking the green tick box, you have to scroll back up to the general comment area and click "save comment", even if you didn't type anything in that box in particular :(
<nacc> yeah, i prefer to reply via e-mail for that reason
<kstenerud_> oh weird
<nacc> the web interface is ... not usable, imo
<ahasenack> kstenerud_: I just replied
<ahasenack> yeah, and then you see a bunch of empty comments above the diff and wonder what is going on :)
<ahasenack> all the "discussion" is happening in inline comments further down :)
<nacc> heh
<nacc> or you do *everything* via e-mail
<nacc> and you don't use the web UI at all
<nacc> problem solved <dusting hands emoji>
<ahasenack> ...and just wrote an addendum
<nacc> ahasenack: i think your point is valid, you have to, imo, either do everything via the web UI or via e-mail. Mixing them gets really confusing
<ahasenack> I use the email more as a notification, "hm, there was an update in that MP"
<nacc> yeah that makes sense
<nacc> I used it more as "this is where the development happens", like a per-MP mailing list
<ahasenack> nacc: is the import running?
<nacc> ahasenack: yeah, it failed on one pristine-tar it seems, but otherwise it's moving along
<ahasenack> ok
<nacc> up to zesty-proposed patches-unapplied
<ahasenack> the libcloud one was chugging along until a certain point, where it hard failed
<nacc> ahasenack: was a bug filed? I didn't see it if so
<ahasenack> no
<ahasenack> I guess we wanted to check with rbasak
<ahasenack> (cpaelzer and I)
<nacc> ack
<nacc> is he out?
<ahasenack> holidays
<ahasenack> two more weeks iirc
<nacc> ah ok
<nacc> do you have a log?
<nacc> i can look
<nacc> ahasenack: ok, it got through the unapplied import; doing patches-applied and then it should be pushed (snapper)
<ahasenack> no, but I can run it again with --no-push
<nacc> yeah i just want to see the backtrace
<ahasenack> it's running, will have it in a minute
<ahasenack> nacc: https://pastebin.ubuntu.com/p/HMd4HDkT5c/
<nacc> ok, that's an easy fix
<nacc> it needs a try/except for that return code in importer.py:1646
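The fix nacc describes (tolerating the 410 instead of crashing) can be sketched as a hypothetical shell analogue; this is not the importer's actual code, which would wrap the launchpadlib access in importer.py in a try/except:

```shell
# Hypothetical analogue of the fix: treat an HTTP 410 (Gone) from
# Launchpad as "account suspended/deleted" and degrade gracefully
# instead of hard-failing the whole import.
handle_lp_status() {
    case "$1" in
        410) echo "user-gone" ;;  # suspended/deleted LP account
        200) echo "ok" ;;
        *)   echo "error" ;;
    esac
}
handle_lp_status 410   # prints "user-gone"
```

On "user-gone" the importer could fall back to the raw maintainer string from the changelog rather than resolving the Launchpad account.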
<ahasenack> the suspended user is a real error I assume?
<ahasenack> when I hit that user up in lp, I do get a 410
<nacc> right, it means we can't access their data in LP
<ahasenack> but I don't know why we would be doing that
<nacc> ahasenack: why what?
<nacc> ahasenack: e.g., a user can delete themselves in LP
<nacc> however, LP still knows that user is who uploaded the srcpkg
<nacc> following the link will fail to get their details
<ahasenack> why are we hitting that user? Where does it come from?
<nacc> https://launchpad.net/ubuntu/+source/libcloud/0.5.0-1
<nacc> click on the "original maintainer" link
<ahasenack> ah
<ahasenack> and that's the first version in launchpad of that package (0.5.0)
<ahasenack> well, no
<ahasenack> but ok
<ahasenack> 0.3.1 for example had another maintainer
<ahasenack> ok
<ahasenack> got it
<ahasenack> nacc: turns out there is a bug already: https://bugs.launchpad.net/usd-importer/+bug/1764814
<ubottu> Launchpad bug 1764814 in usd-importer "awscli import fails: package_creator.display_name results in HTTP error 410: Gone" [Undecided,New]
<ahasenack> I'll add your comment to the bug
<nacc> ahasenack: ack, thanks
<nacc> it seemed familiar :)
<ahasenack> nacc: hehe
<nacc> ahasenack: snapper should be imported, can you check?
<ahasenack> let me try
<ahasenack> nacc: yep, it's cloning, thanks
<nacc> ahasenack: yw
<ahasenack> nacc: picking your brain a little bit,
<nacc> ahasenack: sure
<ahasenack> nacc: how would I go about fixing an existing debian patch?
<ahasenack> update the patch, add appropriate dep3 headers
<ahasenack> or add another patch
<ahasenack> that changes the troublesome file one more time, after debian
<ahasenack> case in point is this:
<ahasenack> -snapperd_LDFLAGS = -lboost_system -lboost_thread -lpthread
<ahasenack> +snapperd_LDFLAGS = -lboost_system -lboost_thread \
<ahasenack> debian dropped -lpthread by mistake I believe
<ahasenack> that's in a patch already
<nacc> ok, so there's a quilt patch in debian that removes -lpthread?
<ahasenack> debian wanted to add other libraries
<nacc> is that the only thing it does?
<ahasenack> no, it adds other libraries
<ahasenack> let me paste
<ahasenack> nacc: https://pastebin.ubuntu.com/p/ptkXFfrjMW/
<ahasenack> but in the process of adding that \ and the next line, they dropped -lpthread
<ahasenack> probably because it looks like -lboost_thread that was there already
<ahasenack> that is in debian/patches/0001-refresh-Add-DSO-linker-options-for-libsnapper.la.pat.patch
<nacc> ok
<nacc> did you file a bug in debian already? :)
<ahasenack> I pinged in their commit that made that change
<nacc> ack
<ahasenack> not sure if that will trigger an email to the right person
<ahasenack> nacc: https://salsa.debian.org/debian/snapper/commit/7b582167d840a7673b33d186891d2e07a4f3ee4d
<nacc> hrm, so i think you can do it either way you said
<nacc> in this case, in the sense of easily sending delta to debian
<nacc> i'd make it a -fix patch
<nacc> at the end of the series
<ahasenack> yeah, I was leaning towards that, as it would make adding an explanation easier
<nacc> yep
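The "-fix patch at the end of the series" approach can be sketched like this; all file names, paths, and the hunk placeholder are illustrative (in a real package tree you'd use `quilt new`/`quilt refresh` and fill in full DEP-3 headers):

```shell
# Illustrative layout only: append a follow-up patch restoring
# -lpthread instead of editing Debian's existing patch in place.
mkdir -p demo/debian/patches && cd demo
echo '0001-refresh-Add-DSO-linker-options-for-libsnapper.la.pat.patch' > debian/patches/series
cat > debian/patches/restore-lpthread.patch <<'EOF'
Description: re-add -lpthread dropped by mistake in the previous patch
Forwarded: https://salsa.debian.org/debian/snapper/commit/7b582167d840a7673b33d186891d2e07a4f3ee4d
--- a/server/Makefile.am
+++ b/server/Makefile.am
 (hunk re-adding -lpthread to snapperd_LDFLAGS goes here)
EOF
echo 'restore-lpthread.patch' >> debian/patches/series
tail -n1 debian/patches/series   # prints restore-lpthread.patch
```

Keeping the fix as its own patch makes the rationale easy to document and the delta easy to drop once Debian fixes it.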
<ahasenack> nacc: and, no bug, because the build works in debian
<ahasenack> I don't know why
<nacc> there have been other times where i've taken a debian patch and *changed* the fix, because it was wrong, or in some way had changed on moving to a newer upstream, e.g.
<ahasenack> I thought it could be another case of us using -Wl,-Bsymbolic-functions
<ahasenack> which debian doesn't use
<nacc> right
<ahasenack> but our build breaks even when I drop that flag
<coreycb> jamespage: i'm going to deploy and run with openvswitch shortly before promoting everything else that's in staging
<ahasenack> thanks, I'll go with a new patch
<ahasenack> it's also easier to drop later on if needed
<ahasenack> much more clear what to drop/change
<nacc> ahasenack: right i think from a maintenance/commit perspective it's the smallest change for the largest gain
<nacc> ahasenack: diffing a patch which itself is a diff is contextually difficult to parse
<ahasenack> ugh, yeah
<ahasenack> hm
<ahasenack> with my fix, it failed just in armhf, instead of all: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/3397/+packages
<ahasenack> an improvement still
<Sven_vB> I'm having those "No buffer space available" problems with postfix again. is that buffer space a resource bound to one network interface? is it related to txqueuelen?
<ahasenack> Sven_vB: no idea, do you have more logs? Where is that message seen, in dmesg?
<Sven_vB> in my syslog
<Sven_vB> meanwhile I read it's probably about the user_beancounters limits of virtuozzo
<Sven_vB> I'll monitor them, and txqueuelen, and see whether they might at least predict the outages.
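On an OpenVZ/Virtuozzo guest those limits live in /proc/user_beancounters, and a nonzero failcnt column flags the exhausted resource. A sketch of the check, run against a canned two-line sample (assumed to match the usual beancounters column layout) so it works anywhere; on a real container you'd read the file itself as root:

```shell
# Canned sample standing in for /proc/user_beancounters.
sample='uid  resource   held  maxheld  barrier  limit  failcnt
101:  tcpsndbuf  100000  200000  300000  400000  7'
# Print any resource whose failure counter (last column) is nonzero.
printf '%s\n' "$sample" | awk 'NR>1 && $NF>0 { print $2 " failcnt=" $NF }'
# prints: tcpsndbuf failcnt=7
```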
<RoyK> Sven_vB: what sort of email volume are you dealing with?
<Sven_vB> RoyK, probably a lot. there are some accounts that collect spam at a few hundred mails per day.
<RoyK> possibly something here https://serverfault.com/questions/614453/no-buffer-space-available-on-connect
<Sven_vB> thanks, I'll look into that!
<RoyK> 2nd hit on google :þ
<Sven_vB> RoyK, I think I even found that, I just didn't understand it was related.
<Sven_vB> that thread sounds like it's about connecting to the outside, whereas my problem was that incoming connections were accepted but no greeting ever arrived at the client.
<coreycb> beisner: jamespage: smoke tests passed with ovs 2.10.0 in proposed so i'll promote that and start promoting the rest of the rocky packages (almost done)
<blackflow> I'm getting sicker of ansible and its bugs by the minute.
<xase> So someone in here earlier suggested I use keys and SFTP instead of FTP. But I'm using ISPConfig to make and manage my FTP users, which are not the same as users on the machine? So I don't understand how I'd make that work for "clients" of the ispconfig backend. Anyone have any pertinent info?
<tomreyn> that's probably a good question to ask on an ispconfig support channel / forum
<nacc> xase: https://git.ispconfig.org/ispconfig/ispconfig3/issues/3940
<nacc> first hit on google for 'ispconfig sftp'
<xase> nacc, thanks. I suppose I could've done that. I apologize. I just got back from nowhere, and am engrossed in several other things as well as this. Forgive my impetuousness
<nacc> xase: no worries, the maintainer has some suggestions that it 'just works' with an appropriate configuration
<xase> I asked in ispconfig as well, as I am joined there.
<tomreyn> xase: you could look into froxlor.org instead, which i think supports sftp through openssh-server with authentication against libnss-mysql
<tomreyn> pick your poison ;)
<xase> I was originally going to go with Froxlor. TBH... though I couldn't find much documentation on it tomreyn. Have you used it AND enjoyed it?
<nacc> hard to enjoy poison...
<xase> Yes nacc, agreed.
<xase> Well, if I could afford Cpanel or ISPManager, I'd be using one of them preferably, but this is for a group of people to use, to serve their personal websites, mine included. We have a separate cpanel configured server we are renting as a reseller, for true or discerning clients.
<maxel> hey all, I've messed up my apt-get trying to update dotnet
<maxel> when trying to install dotnet 2.1, I get the following error (after I removed dotnet 2.0 myself)
<maxel>  dotnet-sdk-2.1 : Depends: dotnet-runtime-2.1 (>= 2.1.3) but it is not going to be installed
<maxel>                   Depends: aspnetcore-runtime-2.1 (>= 2.1.3) but it is not going to be installed
<nacc> maxel: try fixing the dependencies yourself
<maxel> just force install them?
<nacc> e.g. sudo apt-get install dotnet-sdk-2.1 dotnet-runtime-2.1 aspnetcore-runtime-2.1
<nacc> that will indicate why those are not going to be installed
<nacc> then again none of those appear to be ubuntu packages
<nacc> so ....
<nacc> maxel: no, i was not suggesting forcing anything
<maxel> yeah I had to add microsoft repo: https://www.microsoft.com/net/download/linux-package-manager/ubuntu18-04/sdk-2.1.300
<nacc> maxel: apt is saying you have asked it to do something it can't
<nacc> maxel: so i'm suggesting you try and resolve the conundrum for apt
<nacc> maxel: well, the issues with that external repo are up to the maintainer of it to resolve, not us
<maxel> sure
<maxel> so if I do this manually, it looks like a chain of dependencies. just try removing the old ones and update them all myself?
<maxel> I got up the chain until there is a requirement of "libicu60" which I don't even know what that is, apt can't find it
<nacc> maxel: are you on 16.04 or 18.04?
<maxel> good catch. 16.04
<nacc> ok, don't do silly unsupported things
<maxel> ok, so I switched the wget to the 16.04 release repo
<maxel> did the dpkg command
<maxel> but I'm getting the same problem. do I need to get rid of the old repo somehow?
<nacc> uh yeah
<nacc> you need to purge it, probably by hand at this point
<maxel> crap, not sure how to do that
<maxel> do I  need to purge the repos or installed packages?
<nacc> maxel: typically both
<maxel> ok, do I need to track down where apt  put these packages I attempted to update and just delete them manually?
<nacc> you would need to figure out each package from the repo that you did install, if any, and remove them, then remove the repo, then add the correct repo
<sarnold> you might be able to find via /var/log/dpkg* which specific packages you got from the 'wrong' repo
<nacc> good point
<maxel> ok so I'm looking at my dpkg* in log
<maxel> can I just find the files that were modified in line with when I did these updates?
<maxel> ah crap, well I did an apt-get upgrade before this
<maxel> so there are a ton of entries in this log
<xase> Soooo... grep for dotnet? and then if it displays date, which I don't know if it does... grep for that date?
<nacc> well that's a package name
<xase> I'm still a ubuntu newb :o
<nacc> not necessarily the repo the package came from
<nacc> i think if you google around, there are probably apt-y ways to figure out all packages that came from a specific repo
<nacc> could probably even extend ppa-purge to do something like this if it doesn't already
<sarnold> such a tool exists
<sarnold> ppa-purge
<xase> so SFTP/SSH chroots are supported with jailkit on ISP Config...
<xase> thinking about uninstalling pureftpd in that case.
<nacc> sarnold: can ppa-purge remove arbitrary repos?
<nacc> sarnold: i thought it only understood ppa and ppa URLs
<sarnold> nacc: I'm not sure..
<sarnold> aw :(
<maxel> so is this ppa-purge something I should use?
<nacc> sarnold: yeah, checking the manpage
<nacc> but in theory, it should be extensible, or you can read it and do what it does with an arbitrary repo
<nacc> maxel: something like it, at least
<nacc> it's a shell script
<maxel> ah
<nacc> (that is also in a package)
<maxel> well I found the exact moment in the dpkg log where I started installing from the wrong repo
<nacc> ah ok
<nacc> then you can just purge those packages out
<maxel> just go through each package and apt-get remove?
<nacc> and autoremove as well, to be sure any dependencies are gone
<nacc> you'll want to purge, i think, but i leave that up to you
<nacc> you should hopefully have backups
<nacc> ah ha
<maxel> so for example, first package I see installed is: install packages-microsoft-prod:all <none> 1.0-ubuntu18.04.1
<nacc> it uses /var/lib/apt/lists/..._*_Packages to figure out the list of packages from a given repo
<nacc> you could do the same
<nacc> and make sure each package mentioned is *not* coming from that repo
<nacc> if it is, install the one from ubuntu (if avail) or remove it altogether
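The lists-based approach nacc describes can be sketched like this; a canned Packages stanza stands in for the real /var/lib/apt/lists/..._Packages file (whose exact filename varies per repo):

```shell
# Extract the binary package names a repo's Packages index offers;
# anything installed from this list came from that repo and should be
# replaced or removed.
pkgs='Package: dotnet-sdk-2.1
Version: 2.1.300-1

Package: aspnetcore-runtime-2.1
Version: 2.1.3-1'
printf '%s\n' "$pkgs" | awk '/^Package: / { print $2 }'
# prints the two package names, one per line
```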
<maxel> https://pastebin.com/Rn7v3VED here is my log from the point of error
<nacc> so only a handful of unique packages to figure out
<maxel> so I'm looking at apt/lists/....
<maxel> trying to figure out how to use these two pieces of info to formulate an action
<nacc> maxel: if you know the precise set of packages you care about you don't need the lists
<maxel> ok
<maxel> I'm not sure what the danger of purging with removal is
<nacc> it removes configuration files
<nacc> so it depends on if it is a new package or an upgrade
<nacc> that's why i said only purge it if it's not available already
<sarnold> purging is usually the better choice
<maxel> well I was trying to upgrade
<maxel> ended up uninstalling the old version
<sarnold> that way bad configs won't surprise you if you then re-install the package in six months
<xase> Anyone using duck.sh as a sftp client?
<nacc> yeah, the biggest issue with not purging would be the newer version being incompatible with an older version and leading to issues
<xase> I'm trying to stay away from filezilla, it's got a glitch on my system where the server logs don't roll properly, they just reiterate themselves, until you click on it, then it refreshes, then a second later it's doing it again.
<RoyK> xase: don't use FTP
<maxel> ok, so going through this log, I need to generate apt-get remove --purge ?
<RoyK> xase: rssh and sftp/scp/rsync works well
<xase> RoyK I'm trying to use SFTP
<xase> I am looking at duck.sh as an alternative to filezilla even though it does support SFTP
<RoyK> xase: they may provide rsync over ssh if they have something useful in their heads
<RoyK> or you could use fuse
<RoyK> which runs over ssh
<RoyK> sshfs
<RoyK> indistiguishable from scp on the serverside
<maxel> nacc, sorry for not getting entirely what you suggested. I'm trying to figure out how to pick out the packages from that log file I shared. most of them look like a repeat of: packages-microsoft-prod:all 1.0-ubuntu18.04.1
<nacc> yes, the package goes through several steps
<nacc> you just want the unique binary package names
<maxel> so in this case: dotnet-sdk-2.0.0:amd64 2.0.0-1
<nacc> that would be dotnet-sdk-2.0.0
<nacc> the :amd64 is the arch and 2.0.0-1 is the version
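Extracting the unique binary names that way might look like this; the sample lines stand in for /var/log/dpkg.log, whose real entries have the same "date time action pkg:arch old-ver new-ver" shape:

```shell
# Canned dpkg.log excerpt (times and versions illustrative).
log='2018-08-30 12:00:01 install packages-microsoft-prod:all <none> 1.0-ubuntu18.04.1
2018-08-30 12:00:05 install dotnet-sdk-2.0.0:amd64 <none> 2.0.0-1
2018-08-30 12:00:09 upgrade dotnet-sdk-2.0.0:amd64 2.0.0-1 2.0.0-2'
# Keep install/upgrade lines, strip the :arch suffix, dedupe.
printf '%s\n' "$log" | awk '$3=="install" || $3=="upgrade" { split($4, a, ":"); print a[1] }' | sort -u
```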
<maxel> so I tried removing it, and it says it doesn't exist
<maxel> granted I tried removing it earlier
<maxel> I might only need to remove that wrong repo, which I'm trying to read into
<maxel> if I know exactly what commands I used to do dpkg -i, when I added the bad repo, there isn't a way to "reverse" those commands?
<nacc> maxel: you used dpkg and not apt?
<sarnold> if the "also installed" text is still in scrollback you can copy-paste that list..
<maxel> I lost that text :(
<maxel> too many frantic listing of things
<maxel> so this is the commands that installed the wrong repo: wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb
<maxel> sudo dpkg -i packages-microsoft-prod.deb
<sarnold> I wish they documented what that package did
<nacc> i imagine it just sets up the repo
<sarnold> that's my hope
<maxel> but then I also freaked out and tried to  install the original correct one
<nacc> yeah, it does
<nacc> i just d/l and dpkg -x it
<nacc> it's the same, i imagine as taking the parallel prod.list
<nacc> but it also has the gpg key
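nacc's inspect-before-install workflow can be demonstrated end to end with a throwaway .deb; every name below is made up, and dpkg-deb must be available:

```shell
# Build a fake .deb that only ships an apt source list, then extract it
# with `dpkg -x` to see exactly what installing it would drop into /etc.
mkdir -p fake/DEBIAN fake/etc/apt/sources.list.d
printf 'Package: fake-repo\nVersion: 1.0\nArchitecture: all\nMaintainer: demo <demo@example.com>\nDescription: demo repo-setup package\n' > fake/DEBIAN/control
echo 'deb [arch=amd64] https://example.com/repo bionic main' > fake/etc/apt/sources.list.d/fake.list
dpkg-deb --build fake fake-repo.deb >/dev/null
dpkg -x fake-repo.deb extracted/
cat extracted/etc/apt/sources.list.d/fake.list
```

Doing this to packages-microsoft-prod.deb would show the .list file and gpg key it installs, without giving the package a maintainer script's root access.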
<nacc> maxel: just a word of advice. Freaking out, as root, tends to lead to mistakes.
<maxel> yeah, lesson learned
<maxel> I'm just not even sure how to resolve my problem and super anxious, so I want to try stuff but am just making the situation worse
<nacc> maxel: you know the list of packages you installed, right? remove them, add the correct repo, install the correct packages
<nacc> the remove can be a purge, if you don't care about local config
<maxel> well I only see one package to remove in that log
<maxel> and I tried removing it, and it was already removed
<nacc> maxel: ok, then remove that one
<nacc> ok, then go ahead to the next step then
<nacc> not sure what there is to be anxious about
<nacc> remove the old repo, add the new one
<maxel> right, that is what I'm not positive how to do. I don't see a way to just list the current repos apt looks at
<nacc> maxel: /etc/apt/sources.list and /etc/apt/sources.list.d
<nacc> did you remove that microsoft-product .deb ?
<maxel> I did not
<nacc> (you can use apt or dpkg to do it)
<nacc> ok, remove that
<maxel> I can use dpkg to remove the .deb?
<nacc> ... yes?
<nacc> well, the installed deb, yes
<maxel> ok, so sources.list does not contain the entry I'm looking for
<maxel> sources.list.d has a microsoft sounding file in it and nothing else
<nacc> maxel: wait.
<nacc> maxel: do what i just said
<nacc> remove the microsoft-product .deb
<maxel> ok
<nacc> whatever the name of that was
<nacc> you can either use apt-get or dpkg to do so, purge it iether way
<maxel> so I'll execute sudo dpkg (trying to find the remove flag) packages-microsoft-prod.deb
<nacc> maxel: just the package name
<maxel> looks like: dpkg -r --purge packages-microsoft-prod.deb
<nacc> maxel: e.g., `sudo dpkg -P packages-microsoft-prod`
<maxel> ah, ok
<maxel> alrighty, package is uninstalled
<maxel> back to my search for the bad repo
<nacc> ...
<nacc> no
<nacc> the .deb you just purged is what configured the repo
<nacc> you have removed it and the repo
<nacc> so now add the correct one
<nacc> but very serious, installing .debs when you don't understand what they do is a sign you should not do it.
<nacc> you are basically giving the author root on your system to do what they want
<maxel> good to know
<maxel> it's not able to find the package now
<nacc> ...
<nacc> yes, you just purged it?
<maxel> phew, ok, I just needed a wget to install the correct deb
<maxel> looking good now, hopefully this works. I owe you nacc :)
<maxel> I took down a staging app I need running
<maxel> got the app running! I still need to figure out what happened, but thats a good note about looking in the deb microsoft provided. I just assumed microsoft was trustworthy
<nacc> i mean, sure they are (tm)
<nacc> but also, that wasn't my point (not about microsoft)
<nacc> in this particular case, all the deb does is setup an apt repo
#ubuntu-server 2018-08-31
<madLyfe> so i have a zfs mirrored array, is there a way i can wipe it?
<madLyfe> w/o creating a new one
<madLyfe> on ubuntu server?
<madLyfe> wow that should have been all in one sentence.
<madLyfe> here is the pool status: https://paste.ubuntu.com/p/q9yfXPbCnx/
<dpb1> madLyfe: wipe it?  wdym
<dpb1> basically, your options are zpool export and zpool destroy, depending on what you want to do
<dpb1> or, just remove all the zfs datasets.  (zfs destroy for that)
<madLyfe> ah. i just went in and removed all the files via `rm -rf /source/*`
<xase> If I have websites and dns setup, and http works fine, but https does not work( not uncertified. just simply doesn't load) is that a bind error or apache error possibly?
<xase> I have port 443 allowed in ufw and forwarded on my router. Apache is listening on port 443 according  to lsof -i :443
<xase> I can't think of anything else I personally know how to do, and I'm turning up nil in results except for hits for uncertified sites.
<xase> Maybe they are uncertified. But I issued a let's encrypt cert to them and everything.
<xase> There's no error in the logs.
<cryptodan> xase: does https://localhost work
<xase> PING localhost(localhost6.localdomain6 (::1)) 56 data bytes
<xase> 64 bytes from localhost6.localdomain6 (::1): icmp_seq=1 ttl=64 time=0.047 ms
<cryptodan> xase: thats not what I mean
<xase> ?
<xase> How can I check it?
<cryptodan> navigate to https://localhost in your browser
<xase> ...
<cryptodan> or via lynx
<xase> Oh yeah.
<cryptodan> or curl or wget
<xase> I forget there's text browsers.
<xase> curl or wget I didn't know.
<xase> One moment.
<xase> ERROR: no certificate subject alternative name matches
<xase> 	requested host name âlocalhostâ.
<xase> From wget.
<xase> When I try to wget https://myzera.com it just... pauses.
<xase> Is that related?
<cryptodan> xase: check your apache config and check your firewall rules on your router
<xase> I have 443 allowed in ufw.
<xase> Lemme look in my apache config.
<xase> It says it's listening.
<xase> via lsof -i :443
<xase> Not sure what to look for but I'll give it a shot.
<cryptodan> are your certs loaded in apache?
<cryptodan> if they arent they will error out
<xase> TBH i'm not sure. one moment while I google how to double check that.
<cryptodan> xase: they will be in your ssl file for apache2
<xase> cryptodan /etc/apache2/mods-enabled/ssl.conf or ssl.load?
<cryptodan> nope in the sites file
<cryptodan> http://dpaste.com/0K6KKMS
<xase>  /etc/apache2/sites-enabled?
<xase> oh.
<xase> Ok.
<xase> hold please :)
<cryptodan> xase: and make sure your certs have the right permissions
<xase> I have snakeoil certs apparently in default.conf
<cryptodan> yup
<xase> But in my vhost for myzera.com for instance I can't find anything in ssl.
<cryptodan> then are you sure you have ssl?
<cryptodan> have you created a csr and submitted it to your registrar like godaddy or other
<xase> Hmm. I thought so. I'm using Let's encrypt, and it's supposed to be handled by isp config.
<xase> it shows the certs being created in the logs.
<xase> <IfModule mod_ssl.c>
<xase>                 </IfModule>
<xase> That's what I see pertaining to ssl in the vhost.
<xase> So... lemme try something
<xase> and double check my log.
<cryptodan> xase: then you need to edit your file and point the ssl stuff to the letsencrypt path in /etc
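A sketch of what "point the ssl stuff to the letsencrypt path" means in a vhost; the paths assume certbot/letsencrypt's default live/ layout, and with ISPConfig you'd adapt the vhost file it generates rather than writing a new one:

```shell
# Write the SSL directives to a demo file (real target would be the
# site's file under /etc/apache2/sites-available/).
cat > demo-ssl-vhost.conf <<'EOF'
<VirtualHost *:443>
    ServerName myzera.com
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/myzera.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/myzera.com/privkey.pem
</VirtualHost>
EOF
grep -c 'SSLCertificate' demo-ssl-vhost.conf   # prints 2
```

After editing the real vhost, `apachectl configtest` will catch wrong paths before a reload.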
<xase> Simple as that? that sounds pretty straight forward.
<xase> No scratch that, I do see the let's encrypt certs in the client web directory under an ssl directory while viewing the v hosts. I was skimming through it. Using control w... I see lines similar to this snippet https://pastebin.com/frx5np4y
<xase> What permissions should I check them for?
<cryptodan> your certs should be /etc/ssl
<cryptodan> unless its chrooted
<xase> It's through a webpanel, so I'm not sure how it's configured.
<xase> It's supposed to "just work"
<xase> lol
<cryptodan> then I would ask your web panel hoster
<xase> Yeah right.
<xase> I am the hoster.
<xase> I am using ISP config.
<xase> I have posted in their forum as well.
<cryptodan> xase: try without using isp config
<xase> So tear down my install, and reinstall without it -_-? Everything's working: https://gethosting.today:8080 and https://myzera.com:8080 resolve to ispconfig's interface. I don't see why it won't resolve to a non secure site... like normal.
<xase> the ispconfig is using self signed cert though.
<cryptodan> because of the port 8080 is the virtual host port number for the portal
<xase> Ah. ok. but if the certs are indeed not working, shouldn't a request over https fall back to the non-https site, with an error message that the site is not secure?
<xase> that was my understanding at least. I'm probably wrong like usual.
<cryptodan> xase: nope, if the permissions on the certs are wrong or the paths are wrong, apache won't load
<xase> Okay.
<cpaelzer> jamespage: coreycb: I see some cinder/nova autopkgtests resolved while others are still failing
<cpaelzer> are you on that still/again and I should wait for you to have a chance to fix it up?
<cpaelzer> most look like a kernel dependency on install linux-generic/linux-image-generic
<cpaelzer> I retried those as there might have been a window one was available but not the other
<cpaelzer> the others also have a test dependency install issue, but with a less obvious pointer than those with the kernel
<cpaelzer> for some the cinder version is reported as unknown on autopkgtest.ubuntu.com - what might that be ...?
<lordievader> Good morning
<cpaelzer> arrr spammers - my pgup/pgdown keys suffer from me scrolling too much to get to useful content :-/
<jamespage> cpaelzer: looking at autopkgtest failures now
<jamespage> cpaelzer: oslo.reports was causing some mayhem - picked a fix for that problem (compat with oslo.config)
<cpaelzer> jamespage: cinder/arm resolved by retries
<cpaelzer> the others seem to stay
<jamespage> coreycb: ah I remmber this
<jamespage> https://www.irccloud.com/pastebin/TjKj24AU/
<jamespage> we need to remove swift-plugin-s3 from the archive at cosmic
<jamespage> and we'll need a charm update for that as well I suspect
<jamespage> coreycb: fixing up nova now - it has a circular depends
<jamespage> cpaelzer: nova tests are correctly telling us stuff is broken - fixing now
<DK2> i need to extend a lvm that is used for mysql data /var/lib/mysql, should you stop the mysql service before doing that?
<Skuggen> DK2: I guess the general question would be if it's safe to extend lvm partitions while in use?
<Skuggen> DK2: I'm fairly sure you don't need to, but may depend on the filesystem
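(Editor's note: Skuggen is right for ext4, which supports online growth, so stopping mysql isn't strictly required. A sketch of the command, with hypothetical VG/LV names; it only prints the command here since running it needs a real volume group.)

```shell
set -eu
# hypothetical names - substitute your real volume group and logical volume
VG=vg0
LV=mysql
# -r/--resizefs grows the filesystem in the same step as the LV
CMD="lvextend --resizefs -L +10G /dev/$VG/$LV"
echo "would run: sudo $CMD"
# ext4 grows while mounted and in use; still, take a backup of
# /var/lib/mysql first, since resizing storage under a live database
# is not a step you want to get wrong.
```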
<michele> hi! I'm using ufw and allow 443/tcp - so far it is working perfectly but sometimes I get these connections blocked by UFW on port 443 (which, as I said, is allowed): https://pastebin.mozilla.org/9092443 - why UFW is blocking them?
<ahasenack> good morning
<ahasenack> michele: that packet has the ACK-FIN flags set, so it's an acknowledgement of a terminating tcp connection
<ahasenack> michele: I think they just arrived too late, and the firewall deemed that connection to be terminated already, before that final acknowledgement
<ahasenack> it's like when you are on the phone with someone, you say "ok, bye"
<ahasenack> and the other person takes a long time to say "bye" too
<ahasenack> you wait some time, but eventually you hangup
<michele> ahasenack: so basically network latency is the culprit
<ahasenack> or you hangup immediately anyway :)
<ahasenack> michele: I think so, unless you are experiencing other issues
<michele> ahasenack: another type of connection blocked is this one https://pastebin.mozilla.org/9092444
<ahasenack> the "allow" of the firewall is better interpreted as allowing connections to be initiated
<michele> ahasenack: this one is a RST
<ahasenack> not necessarily all traffic to that port, blindly
<ahasenack> that is odd, it could be a late reset
<ahasenack> or, if we are being a bit paranoid, it could be a port scan using these tcp flags
<ahasenack> if there are more, of course, against other ports
<michele> could be, yes
<ahasenack> nmap has such scans, check the nmap manpage, look for "SCAN TECHNIQUES"
<michele> perfect, thanks ahasenack
<coreycb> jamespage: ok thanks. anything outstanding i should look at?
<jamespage> coreycb: I think thedac was having issues with horizon and gnocchi in latest testing
<jamespage> https://bugs.launchpad.net/bugs/1789947
<jamespage> and
<ubottu> Launchpad bug 1789947 in Gnocchi Charm "Rocky Gnocchi API 500 Error" [Undecided,New]
<coreycb> jamespage: ok. once everything is in proposed i was planning to smoke test anyway so i'll look at those.
<jamespage> https://bugs.launchpad.net/bugs/1789961
<ubottu> Launchpad bug 1789961 in OpenStack openstack-dashboard charm "Rocky Horizon fails to athorize with Keystone" [Undecided,New]
<jamespage> might just need a new local_settings.py for horizon
<jamespage> not sure
<jamespage> still battling nova on autopkgtests
<jamespage> nearly there
<cpaelzer> thanks jamespage
<ahasenack> cpaelzer: hm, so once a package becomes a sync with debian, the ubuntu/devel git-ubuntu branch is no longer updated, right?
<ahasenack> not even with the debian-only stuff now
<teward> remind me the process under FFe, is it "file the FFe request bug, wait for ACK, then upload"?
<ahasenack> let me check
<ahasenack> https://wiki.ubuntu.com/FreezeExceptionProcess for reference
<teward> yep it's been a while :P
<ahasenack> subscribe (do not assign to) the 'ubuntu-release' team.
<ahasenack> they will set it to triaged if it's been accepted
<ahasenack> then you can upload, or get sponsorship
<teward> yep, since it's the nginx package I can upload it, the only 'feature change' is that a directive can be defined elsewhere now than just the previous contexts, the other changes aren't relevant since we don't use BoringSSL :P
<teward> that's what I thought, but that wiki page wasn't loading until just now :|
<teward> is there a way to request autopkgtests against a PPA?
<jamespage> teward: you can run them locally and add the PPA, but I don't think bileto will do it for you
<teward> yeah thought not.  Thought I'd ask though :)
<jamespage> doh - helps if I actually upload fixes right...
<ahasenack> does anybody happen to have a handy expect script to test /bin/login?
<ahasenack> I'm trying something simple: https://pastebin.ubuntu.com/p/Qh7ftx9s77/
<ahasenack> I basically want it to fail if the password is incorrect, and logout and exit 0 if it's correct
<ahasenack> not testing a password here: actually testing the login (I'm writing sssd tests)
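(Editor's note: a sketch of the kind of expect script ahasenack is asking for. The prompt patterns and timeout are assumptions and will likely need tuning for a real pam/sssd setup; the block only writes the script, since running it requires the expect package and a spawnable `login`.)

```shell
cat > /tmp/test-login.exp <<'EOF'
#!/usr/bin/expect -f
# usage: expect test-login.exp <user> <password>   (sketch, prompts are assumptions)
set user [lindex $argv 0]
set pass [lindex $argv 1]
set timeout 10
spawn login
expect "login:"    { send "$user\r" }
expect "Password:" { send "$pass\r" }
expect {
    -re {\$ |# }      { send "exit\r"; exit 0 }  ;# got a shell prompt: success
    "Login incorrect" { exit 1 }
    timeout           { exit 2 }
}
EOF
echo "wrote /tmp/test-login.exp"
```

The exit code then maps directly onto the desired test result: 0 for a successful login/logout, non-zero for a rejected password.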
<coreycb> jamespage: smoke tests passed on rocky-proposed. i'm going to go ahead and promote everything to -updates.
<jamespage> coreycb: +1
<madLyfe> is there an rsync channel?
<dpb1> madLyfe: i'll let you guess the name
<madLyfe> :P
<dpb1> :)
<sysdoc> I have a slightly older version of Ubuntu server running at AWS. I just added space to the instance and was wondering if there was a way to add the additional space to the partition using WebMin?
<sysdoc> Ver is 14.04.3 and the file system is ext4
<nacc> 14.04.3 is eol. You should be on 14.04.5 (and either the 3.13 or 4.4 kernel series)
<nacc> that's an FYI --^ sysdoc
<sysdoc> Yeah I know, been a few years that I have been on this server
<Sven_vB> what loglevel do I need for bind to not spam my syslog with "automatic empty zone: …"?
<sdeziel> sysdoc: you are just an "apt-get update && apt-get dist-upgrade" away from a supported machine ;)
<sysdoc> I know, that is why I asked the question above: the partition has 55 MB left. No room to upgrade the distro
<sysdoc> :)
<nacc> you're not upgrading the distro
<nacc> you're updating some packages in 14.04.
<nacc> anyways ... don't use webmin
<nacc> and if you need to use webmin, then it's a question for webmin not ubuntu
<sysdoc> lol, I know the arguments regarding Webmin. But I cannot get connected to the instance with putty. Otherwise I would be well on my way through this process... :)
<ahasenack> sysdoc: you could try mounting some big directories in that extra space (an ebs volume, I'm guessing?)
<ahasenack> like /usr, or /var/log
<ahasenack> then move files over
<ahasenack> or /home, if that's big
<ahasenack> or maybe upgrade the instance type? I forget if aws allows that
<sysdoc> Is it even possible to expand the ext4 partition with the / (root file sys) on it with gparted?
<sysdoc> Contemplating just nuking the instance and standing up a new ubuntu server... But wanted to see if I could recover this instance, expand the space and upgrade to the new OS version
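(Editor's note: yes, a root ext4 filesystem can be grown without gparted. On AWS the usual path is to grow the EBS volume, then run `growpart` on the partition and `resize2fs` on the filesystem. Below is the same mechanism demonstrated offline on a file-backed image, so none of it touches a real disk; the device names in the comment are the usual 14.04/xvda convention, not verified for this instance.)

```shell
set -eu
img=/tmp/demo-ext4.img
truncate -s 64M "$img"
mkfs.ext4 -q -F "$img"          # -F: it's a plain file, not a block device
truncate -s 128M "$img"         # stands in for the enlarged EBS volume
e2fsck -fp "$img" >/dev/null    # resize2fs wants a recently-checked fs
resize2fs "$img" >/dev/null 2>&1  # grows the fs to fill the new size, online-capable on real ext4
echo "resized"
# on the instance itself, roughly: sudo growpart /dev/xvda 1 && sudo resize2fs /dev/xvda1
```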
<zburns> Had a server in the cloud (with only 512MB of RAM) and ran do-release-upgrade (from 16.04.5 LTS to 18.04.1 LTS) and there were errors along the way about memory
<zburns> How can I validate the install - it says it's a 18.04.1 LTS after reboot, but want to make sure a package didn't get left behind
<nacc> zburns: what kind of errors about memory?
<kstenerud_> ahasenack: How do I handle when the git ubuntu build fails to get off the ground?
<kstenerud_> ERROR:Failed to run apt-get in ephemeral build container (attempt 6/6)
<ahasenack> that sounds like a container setup problem, i.e., real networking issue
<ahasenack> you can try giving it that --option to keep the build env
<ahasenack> it won't destroy the container at the end, and then you can login to inspect what is going on
<ahasenack> --keep-build-env I think?
<nacc> kstenerud_: if you do a `lxc launch ubuntu:x` and then exec bash in it, does the container have networking?
<nacc> (or whatever target release you are building for)
<kstenerud_> yup. Networking works perfectly
<kstenerud_> oh wait no that was bionic. Cosmic daily fails
<nacc> there ya go :/
<dpb1> :/
<nacc> we aren't 'smart', per se, with our build env. So if the container env is broken w/o git-ubuntu, it will be broken with it too
<dpb1> is right
<kstenerud_> OK so next question: If cosmic-daily is broken, should I do a git ubuntu build on cosmic instead?
<nacc> kstenerud_: there won't be a build of cosmic for lxd that isn't a daily, afaik?
<dpb1> you would have to use an old one
<nacc> or that
 * dpb1 just tries ubuntu-daily:cosmic here
<nacc> which can be a pain to find/reference correctly
<ahasenack> or bring one up, fix it, and create an image off of that
<nacc> yeah that would work too
<nacc> and tell git-ubuntu to use that image rather than the standard one (I think you can do that, but i'm not 100%)
<ahasenack> --lxd-image
<nacc> thanks, easy to forget the options when you don't use it every day anymore! :)
<ahasenack> no doubt
<sdeziel> sysdoc: maybe your putty client needs to be updated? Modern putty should have no issue connecting to Ubuntu
<dpb1> kstenerud_: cosmic-daily:ubuntu works here?
<dpb1> lxc launch ubuntu-daily:cosmic
<dpb1> I can ping and wget in the container
<kstenerud_> oh weird it's working here now too ¯\_(ツ)_/¯
<dpb1> hah
<dpb1> k
<dpb1> maybe move out of the canadian wilderness
<kstenerud_> lol
<kstenerud_> Trees... trees as far as the eye can see...
<dpb1> that does sound nice
 * nacc hums a Rush song
<madLyfe> this is odd but probably something simple: https://paste.ubuntu.com/p/fBNB4cBwzs/
<nacc> madLyfe: what version of ubuntu?
<nacc> madLyfe: do you have the universe component enabled?
<sdeziel> madLyfe: check that universe is enabled
<madLyfe> just a default install of ubuntu server 18.04.1
<dpb1> that bug needs to be added to the bot it's getting so common
<madLyfe> universe?
<nacc> !components | madLyfe
<ubottu> madLyfe: The packages in Ubuntu are divided into several sections. More information at https://help.ubuntu.com/community/Repositories - See https://wiki.ubuntu.com/RecommendedSources for the recommended way to set up your repositories.
<dpb1> hah
<madLyfe> other than nanoing into the sources list can i just print it somehow?
<sdeziel> pastebinit /path/to/file
<nacc> madLyfe: apt-cache policy
<sdeziel> or you can just use "cat" ?
<nacc> unless you mean what sdeziel said
<nacc> madLyfe: also, stop using nano and learn vi or emacs :)
<madLyfe> i did `cat` and it's not returning anything
<nacc> madLyfe: we are all assuming some *basic* knowledge here
<nacc> e.g., not just `cat`
<sdeziel> madLyfe: cat /path/to/file
<nacc> but `cat /path/to/file`. Or in your case `cat /etc/apt/sources.list`
<nacc> possibly `cat /etc/apt/sources.list*`
<madLyfe> nacc: you should know me better than that by now :P
<nacc> madLyfe: no. I refuse to treat you like a child.
<nacc> madLyfe: you need to start learning quickly
<nacc> you may not like it, but we are all volunteers here
<nacc> you need less hand-holding at htis point, imo :)
<nacc> it's making you weak :-P
<sdeziel> I find vim to be so much better than vi ... both beat nano hands down though ;)
<madLyfe> https://paste.ubuntu.com/p/n8w6PfkQ3R/
<sdeziel> madLyfe: there you go, only "main" is enabled
<nacc> sdeziel: true, and if you install vim, vi -> vim, iirc
<madLyfe> nacc: i understand. there is soooo much information I'm covering that it's just not possible to absorb all of it on first pass tbh.
<nacc> madLyfe: ack, just trying to clarify expectations, as i understand them, from the volunteer folks that tend to be around.
<sdeziel> nacc: here at least, invoking 'vi' gives me the old school thing, the nice goodies are enabled only when called as vim
<sdeziel> nacc: $ readlink -f $(which vi)
<sdeziel> /usr/bin/vim.nox
<sdeziel> so presumably it behaves differently based on arg0
<nacc> sdeziel: ah ok, it might depend on the order in which various packages are installed
<nacc> here: /usr/bin/vim.basic
<madLyfe> so my sources list doesn't have anything that's commented out, as noted in the help.ubuntu article: https://help.ubuntu.com/community/Repositories/CommandLine
<dpb1> madLyfe: fyi, here is the bug: https://bugs.launchpad.net/subiquity/+bug/1783129 -- please mark as affecting you too at the top
<sdeziel> madLyfe: once you looked at the link nacc provided, you should understand main vs universe
<ubottu> Launchpad bug 1783129 in subiquity "Only "main" component enabled after install" [High,Confirmed]
<madLyfe> so i have to manually add them in?
<dpb1> there is a paste-style workaround in comment #27 on that bug
<nacc> i think add-apt-repository can do it too
<ahasenack> all that being said,
<ahasenack> get used to editing config files
<sdeziel> or: sed -i "s/main$/main universe/" /etc/apt/sources.list
<nacc> ahasenack: +1
<sdeziel> madLyfe: I find it weird that security.ubuntu.com doesn't appear in your sources.list. Have you previously manually tweaked that file before?
<sdeziel> (with archive.ubuntu.com, using security.ubuntu.com shouldn't matter all that much but still)
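(Editor's note: sdeziel's sed one-liner above is the simplest fix for the missing-universe bug. Here it is rehearsed on a scratch copy instead of the real /etc/apt/sources.list, so the effect is visible before touching the system file.)

```shell
set -eu
f=$(mktemp)
# a minimal sources.list line as produced by the affected installer
echo "deb http://archive.ubuntu.com/ubuntu bionic main" > "$f"
# append universe to every line that currently ends in "main"
sed -i "s/main$/main universe/" "$f"
cat "$f"
```

Against the real file that becomes `sudo sed -i "s/main$/main universe/" /etc/apt/sources.list`, followed by `sudo apt update`.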
<zburns> nacc: It was something with the 16.04.5 install, so that was my purpose of upgrading to 18.04.1 - but during the upgrade the memory issues would kick me out of my remote ssh session and I'd keep having to reconnect - it was weird
<nacc> zburns: strange
<nacc> zburns: without knowing what the memory issues actually are, it's hard to say, but i wouldn't use such a system
<nacc> zburns: also 512m of ram ... very little
<zburns> yep - that's why I shut it down, and increased the memory to 2GB
<zburns> when I brought it back up it said 18.04.1 LTS
<nacc> zburns: +1
<zburns> but wanted to verify the packages
<nacc> zburns: you can make sure apt and dpkg are happy
<dpb1> there is a tool
<nacc> and i think there is a verify command
<dpb1> what was it
<dpb1> debsums or something
<zburns> ok I'll look
<zburns> thx
<dpb1> ya
<dpb1> that's it
<dpb1> look into that package
<zburns> thanks dpb1
<zburns> and nacc
<nacc> zburns: yw
<madLyfe> sorry guys, was looking over the bug report and logging in
<cpaelzer> ahasenack: if it isn't I'd ask for a bug - the devel-pointers should point to the debian imports IMHO
<cpaelzer> we said checkout ubuntu/(release-)devel should always be right
<cpaelzer> IIRC
 * ahasenack gets some memory object from cold storage to regain context
<ahasenack> zburns: try "sudo apt update; sudo apt -f install; sudo apt dist-upgrade", check if it installs more stuff, fixes existing packages, etc. And that it uses bionic packages, and not the older ones
<ahasenack> at this point there isn't much else to do I think
<nacc> if we are in sync with debian, ubuntu/(release-)devel for that series should just point at the debian import
<ahasenack> you probably had processes being killed by the kernel OOM handler mid-installation
<ahasenack> ah, that
<madLyfe> sdeziel: i hadn't manually tweaked that file, no.
<zburns> yep - did that stuff and running debsums now (cool tool)
<ahasenack> nacc: it's not. Maybe the importer crashed again
<ahasenack> or it's a bug, since pkg/debian/sid is pointing at the right version
<sdeziel> madLyfe: weird
<nacc> ahasenack: which repo?
<ahasenack> sssd
<ahasenack> pkg/ubuntu/devel is pointing at pkg/import/1.16.2-1ubuntu1
<nacc> ahasenack: it would appear the repo hasn't moved past 1.16.2-1ubuntu1 for ubuntu
<nacc> look at cosmic-proposed
<nacc> or the cosmic branch itself
<madLyfe> dpb1: i want to do `curl https://pastebin.freepbx.org/view/raw/136f66ca > /etc/apt/sources.list && apt-get update` as noted in #27: https://bugs.launchpad.net/subiquity/+bug/1783129/comments/27 ?
<ubottu> Launchpad bug 1783129 in subiquity "Only "main" component enabled after install" [High,Confirmed]
<ahasenack> well, 1.16.2-1ubuntu1 was the last ubuntu pkg
<nacc> right, i'm saying it's not a bug in the -devel branches
<kstenerud_> ahasenack: git-build is failing due to a nonexistent package
<kstenerud_> Package equivs is not available, but is referred to by another package.
<nacc> they are pointing at the last seen ubuntu publish
<ahasenack> nacc: ok, that was my question. If ubuntu/devel would point at what is in the ubuntu archive, or the last ubuntu publish as you say
<nacc> it is modified by the importer
<ahasenack> nacc: so if someone wants to fix a bug in this package, and grab ubuntu/devel, they will be using the wrong branch?
<nacc> i'm saying the importer hasn't run on this srcpkg
<nacc> no.
<ahasenack> ah
<nacc> well, i mean, yes, but because of some infra that you or rbasak need to look at
<ahasenack> kstenerud_: that is usually temporary
<nacc> i'd assume the importer isn't running
<ahasenack> kstenerud_: it means apt update hasn't succeeded yet
<ahasenack> only monday now
<nacc> ahasenack: so yes, you can always use ubuntu/devel, for delta'd or sync'd packages.
<nacc> ahasenack: but only if the importer is keeping up :)
<ahasenack> I haz no creds, cpaelzer is eod :)
<nacc> y'all should fix that
<nacc> (given that rbasak was on vacation, there should be two people with creds now, i'd hope)
<nacc> and dpb1 should have them too, i think
<nacc> minimally to let others login
<ahasenack> which probably brings us to my coredev application :)
<nacc> heh, i don't think you need to be a coredev to administer git-ubuntu
<nacc> but sure :)
<ahasenack> well, that instance has credentials, is my understanding
<nacc> only for importing
<nacc> well, for manipulating the git repos in general
<ahasenack> which it does continuously? :)
<nacc> so yes, it's dangerous, but in the case of stuff like this -- where you need someone to kick it
<madLyfe> hmm `sudo curl https://pastebin.freepbx.org/view/raw/136f66ca > /etc/apt/sources.list && apt-get update` gives me permission denied for some reason
<ahasenack> madLyfe: "sudo curl" is only running curl as root
<ahasenack> not the "> /etc/..." bit
<ahasenack> and much less the apt bit later on
<kstenerud_> ahasenack: Still won't work :(
<kstenerud_> ERROR:Command exited 100: /usr/bin/lxc exec stable-robin -- apt-get install -y devscripts equivs sudo
<madLyfe> so is it `sudo curl https://pastebin.freepbx.org/view/raw/136f66ca > sudo /etc/apt/sources.list && sudo apt-get update` ?
<ahasenack> kstenerud_: have you tried that option to keep the build env?
<sdeziel> madLyfe: curl can fetch an URL without needed root privs. "sudo /etc/apt/sources.list" would be trying to execute  /etc/apt/sources.list as a command which is not what you want
<sdeziel> madLyfe: "curl $URL | sudo tee /etc/apt/sources.list" would do what you intend
<RoyK> sdeziel: or just learn to read and modify sources.list instead of just overwriting it with something someone posted on the irc
<sdeziel> RoyK: can't agree more, it's why I provided the sed command above
<sdeziel> but "sudo something > /path/to/file" is a nice thing to learn when you start using sudo
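A minimal sketch of the point ahasenack and sdeziel are making: the redirect is opened by the calling (unprivileged) shell before sudo ever runs, so the fix is to let a sudo'd tee open the destination. Demonstrated here without root or network, writing to a temp file instead of /etc/apt/sources.list:

```shell
#!/usr/bin/env bash
set -eu
# "sudo curl URL > /etc/apt/sources.list" fails because the redirect
# "> /etc/apt/sources.list" is performed by the unprivileged calling shell,
# before sudo starts curl. With tee, the process that opens the file is the
# one running under sudo, so it works:
#   curl -fsSL "$URL" | sudo tee /etc/apt/sources.list > /dev/null
# Same pattern without sudo or network, using a temp file as the destination:
dest=$(mktemp)
printf '%s\n' 'deb http://us.archive.ubuntu.com/ubuntu/ bionic main universe' |
    tee "$dest" > /dev/null
cat "$dest"
```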
<madLyfe> it was posted here in the bug report: https://bugs.launchpad.net/subiquity/+bug/1783129/comments/27
<ubottu> Launchpad bug 1783129 in subiquity "Only "main" component enabled after install" [High,Confirmed]
<ahasenack> kstenerud_: just to be clear, that option is to allow you to login and see what's wrong with the lxd container
<ahasenack> not to fix it
<kstenerud_> yeah, trying to figure out why there's no ip address
<ahasenack> kstenerud_: you had this issue before, I remember
<ahasenack> remember it worked when you ran dhclient by hand?
<ahasenack> but I thought that had been with a vm, not a container
<kstenerud_> but what would cause bionic to get an address, and cosmic to not?
<kstenerud_> I have both running side-by-side. Bionic gets an address
<kstenerud_> Actually both get an ipv6 address. Only bionic gets ipv4
<kstenerud_> https://pastebin.ubuntu.com/p/Kqfs56TJHw/
<nacc> kstenerud_: is cosmic configured to do ipv4?
<nacc> i'm guessing netplan-ish?
<kstenerud_> both have identical 50-cloud-init.yaml, with dhcp4: true
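For reference, the 50-cloud-init.yaml being compared typically looks like this on a stock cloud image (a sketch only; the eth0 interface name is assumed, as in the pastes above):

```yaml
# /etc/netplan/50-cloud-init.yaml -- rendered by cloud-init
network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true
```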
<nacc> hrm
<nacc> that seems like a bug
<nacc> smoser: --^ ?
<RoyK> sdeziel: sudo something > /etc/somewhere isn't very wise unless you understand what you're doing
<ahasenack> kstenerud_: try "netplan generate" followed by "netplan apply"
<ahasenack> see if that does anything, even if just errors
<ahasenack> and then, there is a file in /run to check what netplan rendered iirc
<ahasenack> kstenerud_: my cosmic image has a serial of 20180829, what's yours?
<ahasenack> # cat /etc/cloud/build.info
<ahasenack> build_name: server
<ahasenack> serial: 20180829
<madLyfe> `deb http://us.archive.ubuntu.com/ubuntu/ bionic main restricted` is restricted necessary?
<ahasenack> and the file in /run I mentioned is: /run/systemd/network/10-netplan-eth0.network in my case
<sdeziel> RoyK: yeah, understand the mechanism is the key
<nacc> madLyfe: if you want access to the restricted pocket
<kstenerud_> build_name: server
<kstenerud_> serial: 20180829
<madLyfe> i dont know? i just want whats supposed to be there by default that the bug didnt allow to happen.
<sarnold> madLyfe: here's what's in restricted on my mirror http://paste.ubuntu.com/p/Qgcvsn7Jdb/
<kstenerud_> 10-netplan-eth0.net are the same between my cosmic and bionic versions, except cosmic also has LinkLocalAddressing=ipv6
<sdeziel> sarnold: intel-microcode/iucode-tool moved to main recently, no/
<RoyK> sdeziel: then don't give people shortcuts - they won't learn anything
<ahasenack> kstenerud_: if you run dhclient eth0, does it get an address then? assuming eth0 is your nic
<ahasenack> inside the lxd
<sdeziel> RoyK: what shortcut? the curl stuff wasn't mine
<sarnold> sdeziel: indeed they did, good point :) no need for restricted if those are the only two packages you care about
<kstenerud_> ahasenack: Yup, I get an ipv4 address. Ping to an IP works then, DNS does not
<kstenerud_> ping: gnu.org: Temporary failure in name resolution
<ahasenack> kstenerud_: you seem to have a gift to find issues in virtualization setups, kvm or lxd :)
<kstenerud_> lol
<ahasenack> fixing that by hand is easy
<ahasenack> but I don't know what is causing it
<kstenerud_> Even though I try desperately to do everything to the defaults, and use standard h/w
<ahasenack> I would check the basics. The network the container is attached to (its dhcp settings)
<ahasenack> then the lxd profile
<ahasenack> then also the logs of the container bootup
<ahasenack> systemd-networkd must have complained about something
<ahasenack> that's the guy responsible for issuing the dhcp request
<sarnold> wait containers, dns.. kstenerud_ are you seeing 1789627 ?
<kstenerud_> syslog:Aug 31 18:35:42 cosmic networkd-dispatcher[220]: No valid path found for iwconfig
<kstenerud_> syslog:Aug 31 18:35:42 cosmic networkd-dispatcher[220]: No valid path found for iw
<kstenerud_> syslog:Aug 31 18:35:42 cosmic networkd-dispatcher[220]: WARNING: systemd-networkd is not running, output will be incomplete.
<kstenerud_> 1789627?
<sarnold> https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1789627
<ubottu> Launchpad bug 1780227 in linux (Ubuntu Bionic) "duplicate for #1789627 locking sockets broken due to missing AppArmor socket mediation patches" [Critical,Fix released]
<ahasenack> kstenerud_: what is the OS on your host?
<kstenerud_> bionic
<ahasenack> cpaelzer mentioned this week something about having to update the host kernel
<ahasenack> I'm on 4.15.0-33-generic
<ahasenack> (bionic)
<kstenerud_> hmm how do I check that? lsb_release doesn't work
<madLyfe> sarnold/sdeziel: i commented out at the top what came to me as default(bug): https://paste.ubuntu.com/p/brQ6bqdRRs/
<ahasenack> kstenerud_: uname -a
<sarnold> madLyfe: safe but *really* limiting :) hehe
<kstenerud_> oh, looks like I need to reboot. brb
<ahasenack> ok
<kstenerud_> -33 is installed, but I'm still running 32
<madLyfe> sarnold: which is limiting?
<sarnold> madLyfe: not having access to universe packages
<madLyfe> are you talking about the commenting out at the top or the rest of the file? from what i understand they are there: https://usercontent.irccloud-cdn.com/file/ixAHRi6J/image.png
<sarnold> madLyfe: I just meant that the installer leaving you with only those three lines is pretty limiting.
<madLyfe> oh ya. dpb1 pointed me to the bug causing that
<kstenerud> OK, past that hurdle. Next:
<kstenerud>   gpg: /tmp/debsign.6YRijtwn/logwatch_7.4.3+git20161207-2ubuntu2~ppa1.dsc: clear-sign failed: Operation cancelled
<kstenerud>   debsign: /usr/bin/gpg error occurred!  Aborting....
<ahasenack> missing gpg key
<ahasenack> or you didn't see the prompt for it in time and it gave up
<kstenerud> I'm running it again. I don't get any prompt. It just sits at apt-get install -y devscripts equivs sudo
<kstenerud> Oh there we go. got the prompt :)
<ahasenack> if you miss it, you can just run debsign on the source.changes file
<samilk> whats the best dns app?
<sarnold> for what use?
<samilk> to host websites
<samilk> i need configure nameservers, this is my first dedicated server and i ahve no management panel
<samilk>   sarnold
<samilk> u awake? help me  out bro
<samilk> lol
<sarnold> samilk: powerdns auth server has a nice database backend that's pretty good for dynamic systems
<samilk>  ok  ok....
<samilk> im running ubuntu
<RoyK> samilk: https://www.isc.org/downloads/bind/ is the world's most used dns server - works well
<sarnold> while ubuntu does package up powerdns you're probably better off installing it from their repo
<dumhed> having a bit of trouble installing 18.04.1 with a manual ip config. It keeps on coming back at me 10.0.0.5 is not contained in '255.255.255.0/24' but unless i am looking at everything wrong- it seems to be. Even when i log into my router config from another computer it tells me its on 255.255.255.0. Not sure why its fighting me
<nacc> one of those is an IP and one of those is a gateway in CIDR notation
<nacc> dumhed: you almost certainly meant to use 10.0.0.0/24
<nacc> s/gateway/netmask/ sorry
<nacc> and i'm not sure it makes sense to think of a netmask in cidr, since that's ... the point?
<nacc> anyways, it's a confusing screen to some, but you just gave it the wrong data, dumhed :)
<nacc> dumhed: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
<dumhed> nacc, this is what i have in the install window after i select manual- subnet: 255.255.255.0/24 | address: 10.0.0.5 | gateway: 10.0.0.1 but i cannot move forward due to above error
<nacc> yes, your subnet is incorrect
<nacc> 255.255.255.0 is /24
<nacc> it doesn't make sense to provide them both
<nacc> read the wikipedia page for more details, but i'm pretty sure what i wrote is what you should be using
<dumhed> nacc, if i dont add /24 it tells me im missing it
<nacc> dumhed: i didn't say to not add /24
<nacc> dumhed: i said "you almost certainly meant to use 10.0.0.0/24"
<nacc> dumhed: what you provided it, instead of a CIDR subnet, was a netmask and a netmask
<nacc> which doesn't make sense
<dumhed> nacc, im sorry i guess i just dont understand what you are saying. 10.0.0.5 should be a valid ip address for 255.0.0.0/24 should it not
<dumhed> i mean 255.255.255.0
<nacc> no.
<nacc> you are confused about what CIDR is, I think
<nacc> 10.0.0.5 is a valid ip address in 10.0.0.0/24
<dumhed> ok so what would the subnet be then
<nacc> 255.255.255.0/24 is the range of ips from 255.255.255.0 to 255.255.255.255
<nacc> i just told you
<nacc> twice
<nacc> https://www.ipaddressguide.com/cidr may also help educate you
<dumhed> nacc, your additude is not helping. if you dont want to help dont.
<dumhed> but telling me to read a link is not helping me
<nacc> dumhed: what link?
<nacc> dumhed: "dumhed: i said "you almost certainly meant to use 10.0.0.0/24""
<nacc> see how i feel like you aren't listening?
<nacc> that's 3 times where i've given you the subnet now
<nacc> i gave you links so you could ideally educate yourself on why you were wrong
<nacc> but to actually get your answer, you never had to read any of them.
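nacc's point, sketched in bash: a /24 prefix is the netmask 255.255.255.0 spelled differently, which is why entering 255.255.255.0/24 supplies the mask twice, while 10.0.0.0/24 is the subnet the installer actually wants:

```shell
#!/usr/bin/env bash
set -eu
# A /N prefix is just the top N bits of the 32-bit mask set to 1, so a
# netmask and a prefix length are two spellings of the same thing.
prefix=24
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
netmask=$(printf '%d.%d.%d.%d' \
    $(( mask >> 24 & 255 )) $(( mask >> 16 & 255 )) \
    $(( mask >>  8 & 255 )) $(( mask       & 255 )))
echo "/$prefix = $netmask"    # /24 = 255.255.255.0
```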
#ubuntu-server 2018-09-01
<samilk> whats the command to upload a file through the shell when im at sftp> already....the file i need to send is abc.zip in c:\
<sarnold> put
<samilk> ----> didn't work what am i missing?
<samilk> "sftp> put C:\Users\HP User\Downloads\BIND9.10.8-P1.x64.zip
<samilk> stat C:UsersHP: No such file or directory"
<sarnold> lcd users ; lcd HP\ User ; lcd Downloads ; put BIND9 ....
<samilk> sftp says im in the directory pklimas locally but I had no idea I was in that dir
<samilk> i don't even know where it is
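The stat error above comes from the unquoted space in "HP User": sftp splits the path into two arguments. sarnold's lcd chain escapes the space; quoting works too. A local demonstration of the same quoting rule (the sftp session in the comment is a sketch with a hypothetical host):

```shell
#!/usr/bin/env bash
set -eu
# An unquoted path with a space is parsed as two arguments, hence the
# "stat C:UsersHP: No such file or directory" error. In an sftp session
# (hypothetical host), either of these works:
#   sftp> lcd "HP User"
#   sftp> put "BIND9.10.8-P1.x64.zip"
# The quoting rule is the same in the local shell:
dir=$(mktemp -d)
mkdir "$dir/HP User"
echo payload > "$dir/HP User/file.zip"
ls "$dir/HP User"
```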
<compdoc> good to know
<compdoc> allah is doing spam
<avu> How is this channel still not +r or +q $~a?
<RoyK> avu: no idea - check with the ops
<RoyK> it's rather annoying with all this spam
<blackflow> indeed.
<RoyK> blackflow: if someone swears in here, the ops are out of the corners in no time - perhaps that's a way to call their attention? ;)
<avu> I've asked in #ubuntu-ops
<madLyfe> so i ran `find -type f -print0 | xargs -0 md5sum > ~/md5sum` to generate an md5sum file on two dirs to compare them. how do i do that?
<avu> madLyfe: do you just want to know if the contents of the directories are different or also what files are different?
<madLyfe> both would be nice
<avu> madLyfe: have a look at diff(1)
<avu> `diff --brief -Nr foo bar` or something like that
<avu> (where `foo` and `bar` are the two directories)
<madLyfe> dirs or files? cuz i have the generated md5sum files
<TJ-> !info diffoscope | madLyfe this might be of use
<ubottu> madLyfe this might be of use: diffoscope (source: diffoscope): in-depth comparison of files, archives, and directories. In component universe, is optional. Version 93ubuntu1 (bionic), package size 91 kB, installed size 539 kB
<avu> madLyfe: those md5sum files aren't needed
<avu> madLyfe: diff just compares the contents of the directories for you
<avu> TJ-: what does that do that diff doesn't?
<TJ-> avu: it's a neat tool used by the reproducible builds teams, has a some magic pixie dust sprinkled on it
<avu> TJ-: that doesn't really answer my question :)
<avu> ah, the website has some insight, sounds neat for a certain kind of usecase
<blackflow> RoyK: heh, I don't wanna test it tho'
<madLyfe> avu:  do the foo/bar dirs need trailing `/`s to do the contents?
<avu> madLyfe: no
<avu> madLyfe: diff needs the -r option though
<madLyfe> `diff -r --brief -Nr /media/two_tb bar` ?
<madLyfe> my bar is a server on the network. does it need ssh?
<avu> diff works on directories, that's it
<dpb1> All servers need ssh unless you never plan to login
<madLyfe> `diff -r --brief -Nr /media/two_tb hostname: /data` ?
<avu> again, diff works on directories, that's it, it won't talk ssh for you
<avu> also, you have -r twice
<madLyfe> well i also have the the server dir mounted locally via samba/fstab.
<madLyfe> could i do it that way?
<avu> if it's a directory, diff works on it, I'm really not sure how much clearer I can say that :)
<madLyfe> `diff --brief -Nr /media/two_tb /media/kinghat-server`
<madLyfe> `kinghat-server:/data` is the same for me as `/media/kinghat-server`
<avu> how is that the same?
<avu> one is some kind of notation, I don't know what, one is a path
<avu> Linux doesn't have some magic builtin to address remote stuff via SSH with a notation like that
<madLyfe> i was just noting hostname:/data is whats mounted at /media/kinghat-server
<madLyfe> the servers /data dir
<madLyfe> anyways its running
<madLyfe> avu: it is basically saying all my files differ lol
<madLyfe> avu: i had the wrong dir. lel
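For the record, avu's approach needs no md5sum files at all; a sketch with two throwaway trees:

```shell
#!/usr/bin/env bash
set -eu
# diff compares directory trees directly: --brief lists only which files
# differ, -N treats files missing on one side as empty, -r recurses.
a=$(mktemp -d); b=$(mktemp -d)
echo same > "$a/identical";  echo same > "$b/identical"
echo one  > "$a/modified";   echo two  > "$b/modified"
echo only > "$a/extra"       # exists on one side only
diff --brief -Nr "$a" "$b" || true   # diff exits 1 when the trees differ
```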
<amazoniantoad> I'm trying to install ubuntu server on 3 different HDD's using a Raid 0 configuration. The installer won't let me go through with creating the raid except for only two of the disks. It says that one disk is needed for the boot partition. However when I create a partition on the third disk and reserve
<amazoniantoad> it for boot, it is no longer included in the list of HDD's available for the raid configuration. Does anyone know what I should do here? The goal being that the three disks are used in the RAID
<RoyK> amazoniantoad: last I checked, you can't boot off raid0
<RoyK> amazoniantoad: and last I checked, using raid0 is madness - a few dead sectors and there go all your data
<RoyK> amazoniantoad: better setup a 3-way mirror for the rootfs including boot and use the remaining for raid5
<amazoniantoad> RoyK I like madness
<blackflow> RoyK: rumor has it, grub can now do lotsa stuff with insmod thingies, including booting off of raid0, lvm , etc...   I was surprised too.
<RoyK> blackflow: I know it boots from lvm, I use that, but raid0 is still a bad iea if you're a little interested in uptime
<RoyK> s/ide/idea/
<blackflow> raid0 only duplicates the probability of disk(s) dying. in absolute numbers, it's still a long shot. with good backups, it could take years with no problem.
<blackflow> *doubles
 * RoyK commented on something about "uptime"
<RoyK> https://xkcd.com/705/ ;)
<blackflow> ever since I had a total meltdown with two disks in raid-1, with the other drive dying while the replacement of first was resilvering, I decided good backups and failover is the only thing that matters.
<blackflow> oh yeah that xkcd is my fav :)
<RoyK> - Die hard in the datacentre
<blackflow> dead hard (drive) in teh datacentre, alright. :)
<RoyK> well, the reference to the guy that walked over broken glass etc,  is from the Die hard film ;)
#ubuntu-server 2018-09-02
<blackflow> I know :)
<JanC> blackflow: that's why some people insist on using different drives for a mirror/RAID (different brand, model or at least production series)
<JanC> drives from the same production run are much more likely to die within a short time from each other
<RoyK> I've made that failure - getting 80 WD Black drives from the same batch - a really bad one...
<JanC> I hope they at least died while under warranty?
<RoyK> truth to say, we didn't have time - we just ordered a new batch and returned them later
<RoyK> some of them didn't even show smart errors
<JanC> did they work at all?
<RoyK> but then - there's the 'twist and shout' trick - spin up a drive, twist it  90 degrees against spinning angle and wait for it to spin up again, repeat until it dies
<RoyK> the 'shout' part is just fun
<RoyK> they worked, for a while
<RoyK> some of them still work, but that's under 20%
<JanC> SMART is only useful for detecting stuff like a degrading magnetic surface and the like
<RoyK> this was probably mechanic
<JanC> if the r/w heads break off, or the controller board is toast, SMART won't be useful
<RoyK> the controller boards don't die like that for that amount of drives
<RoyK> it was timing issues, perhaps a bad firmware
<RoyK> something nasty
<RoyK> so I made sure we could return some of them
<JanC> well, in theory it's possible that the controllers die (or have bad RAM, or something like that)
<RoyK> this was an issue of a bunch of sata drives on sas expanders that really didn't like it
<RoyK> or the sas controllers didn't like them or whatever
<RoyK> they didn't work well on sata either
<JanC> oh, that's also possible, I guess
<JanC> so much can go wrong  :)
<RoyK> we had a tight budget, so we got some new sata drives, hitachis, worked well
<JanC> even just a faulty resistor or capacitor on the controller can make a drive useless...
<RoyK> and then I spent some time twisting drives and returning them
<RoyK> JanC: it wasn't that - believe me
<RoyK> they were fairly new and showed no issues except perhaps dying
<JanC> I have 30yo drives which probably still work...
<JanC> if I would try them on some old computer  :)
<RoyK> old IDE or SCSI things?
<RoyK> MFM?
<JanC> old IDE
<JanC> Quantum Bigfoot
<RoyK> that's probably only 20YO
<RoyK> IIRC
<RoyK> I  remember those arriving around 1998
<RoyK> it was amazing - 10 gigs on a drive!!!!!
<JanC> I have some disk from the early 1990s too somewhere
<RoyK> some 52MB disk?
 * RoyK also remembers starting to  fiddle around with linux (slackware 2.1, kernel 1.1.59) around 1994 and found he could boost the I/O speed to the harddisk up to 1MB/s (!!!) by turning on DMA with hdparm
<RoyK> amazing times ;)
<JanC> this Quantum Bigfoot CY was from 1996 or so
<RoyK> 5,25"? got a model number?
<RoyK> iirc all the bigfoots were 5,25"
<RoyK> although everything else by then was 3,5", but then, the bigfoots were *huge*
<JanC> yeah, the 5.25" was what distinguished them; it allowed them to make cheaper drives with the same capacity
<RoyK> get a time machine and show me a micro sd card of 256GB back then ;)
 * RoyK doesn't have a DeLorian, sadly
<JanC> that Bigfoot was 6.4 GB   :)
<RoyK> 4k5rpm?
<RoyK> or 3k6?
<JanC> 3.6k
<JanC> all Bigfoot CY were 3.6k rpm
<RoyK> not quite an SSD, then
<JanC> 6 MB/sec read speed  :)
<RoyK> random iops, perhaps 50 on a good day
<RoyK> those were the days :D
<JanC> (and that's sequential, I assume)
<RoyK> it was indeed
<RoyK> probably outer rim
<RoyK> inner rim was half the speed or less
<JanC> :)
<JanC> well, on floppy disks it usually was the same
<RoyK> same rule applies to modern drives - more sectors on the outside
<JanC> some early hard disks too maybe
<RoyK> floppies didn't have zones
<RoyK> neither did mfm drives
<JanC> well, you could mess with floppies to have more data on the outside
<JanC> if you programmed them yourself instead of using the BIOS  :)
<RoyK> well, gotta go - it's late
<JanC> I think that early 1990s drive I had is either at my parents or thrown away
<JanC> but that bigfoot from the mid-1990s, I remember it still worked when I tried it about 10 years ago  :)
<beatzz> After upgrading to Ubuntu Server 18.04 Apache2's php capabilities are messed up
<JanC> if you want help, you probably better explain exactly what is wrong...
<KurbuntusBain> hey
<KurbuntusBain> i am an annoying newbie and i got dum questions
<KurbuntusBain> do i have to instal proftp in ubuntu as a user other than 'root'? and if the anser is yes, why is that?
<cryptodan> KurbuntusBain: use SFTP via SSHServer
<KurbuntusBain> ok
<KurbuntusBain> ill do that
<KurbuntusBain> im getting everything setup now or at least trying to
<KurbuntusBain> it's been a rough couple of days, i've got lemp installed and i was able to get vsftpd running now, i need to get dns and email server going what would you suggest?  I was told to use power dns and bind, i've downloaded bind aleady
<cryptodan> I prefer keeping things simple and recommending that
<KurbuntusBain> so i just got this new dedicated server, it's my first one, what would you say is the coolest thing i can do being new to this
<KurbuntusBain> what was your favorite part of getting your first dedicvated server?
<KurbuntusBain> 'dedicated'?
<KurbuntusBain> (I got this thing just to screw around with and learn btw)
<cryptodan> KurbuntusBain: getting DNS propagate
<KurbuntusBain> lol
<KurbuntusBain> omg, it's giving me shit just tring to instal mariadb
<KurbuntusBain> See apt-secure(8) manpage for repository creation and user configuration details.
<KurbuntusBain> guess i have to install BIND first
<cryptodan> KurbuntusBain: look into running tasksel
<KurbuntusBain> Ok, cryptodan.  Ill check that out.  Now this is saying I should have TWO servers?
<KurbuntusBain> for ns1 and ns2....for reals?
<cryptodan> yup
<cryptodan> preferably on separate subnets
<KurbuntusBain> well that's expensive
<linuxthefish> don't be silly lol
<KurbuntusBain> im definitely silly as shit dude
<KurbuntusBain> im like as new as trump with this stuff
<linuxthefish> if it's just for playing around having only one nameserver is fine
<KurbuntusBain> I have a dedicated server at psychz and a vps at vultr
<linuxthefish> if your domain registrar requires two nameservers (ns1 and ns2), just point them to the same IP on the same server
<linuxthefish> or use a free DNs service like cloudflare or the place you got your domain from
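linuxthefish's trick is just two NS records that resolve to the same address. A minimal zone-file fragment (domain and address are hypothetical; 203.0.113.0/24 is a documentation range):

```
; ns1 and ns2 are the same machine -- satisfies registrars that insist on two
@       IN  NS  ns1.example.com.
@       IN  NS  ns2.example.com.
ns1     IN  A   203.0.113.10
ns2     IN  A   203.0.113.10
```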
<KurbuntusBain> that'll work right?  the vps is 10 bucks a month, i mean it is for experimenting but i do need to host my companies website in a month or so and eventually (six mos) I may enter email marketing and run a predictive dialer
<linuxthefish> yes
<cryptodan> KurbuntusBain: you better be on the up and up with your hosting skills or else you may end up being fired and your comapny being on several blacklists
<KurbuntusBain> i have like 4 domains....The main thing I am really trying to accomplish is setup my company's website and have an employee portal to download documents and host a CRM
<KurbuntusBain> I can't get fired man, I own the company, lol
<KurbuntusBain> I am just starting out.  and I am on the up and up
<KurbuntusBain> I am a broker in the deregulated energy market
<KurbuntusBain> I am not a spammer
<KurbuntusBain> it's all good no bad stuff man
<KurbuntusBain> eveyrone is doubting me and thiunking im up to something shady it seems lol
<KurbuntusBain> i know why ...it's because I know little and probably not saying the right words....
<KurbuntusBain> I just prefer to do things myself rather than higher people and pay crazy rates
<KurbuntusBain> I have 5 ip's
<KurbuntusBain> too
<KurbuntusBain> I just need guidance
<linuxthefish> KurbuntusBain it's when you say things like "email marketing and run a predictive dialer"
<KurbuntusBain> lol, i know
<linuxthefish> and most people asking these questions are spammers without a budget to pay other people to set stuff up lol
<KurbuntusBain> i completely understand, but those functionalities can be completely legitimate provided the nmbers are from opt in or dbl opt in lists
<linuxthefish> if it's an important low traffic website just pay for some decent $5 a month hosting and be done with it
<KurbuntusBain> and the phone numbers are scrubed to not be on the DNC
<linuxthefish> good
<linuxthefish> but in the long run learning how to set stuff up properly will save you a lot of time and money
<KurbuntusBain> I am prob running ViciDialer and from what I understand that requires a dedicated
<KurbuntusBain> plus man, i just got a dedicated for $25/month
<KurbuntusBain> lol, it's a Atom525 but, either way, not a bad deal right?>\
<KurbuntusBain> plus eventually i'd like to sell web design and hosting services for to small business
<KurbuntusBain> $25/mo vs $10/mo for a shared hosting, why not just spend the extra $15 and go dedicated
<linuxthefish> if all your call agents are in the same building, it will make more sense to run a server there locally
<linuxthefish> for voip
<linuxthefish> KurbuntusBain cPanel is the standard for selling web hosting, most clients will expect it
<linuxthefish> but if set up properly your own web server on a VPS will be a lot better than shared hosting, and cheaper
<linuxthefish> personally I use lighttpd and mariadb for all my sites, some larger ones but I host my DNS on Amazon Route 53
<KurbuntusBain> that's the thing, my reps will be remote
<KurbuntusBain> most of them at first
<KurbuntusBain> plus, I've worked at a hosting company before that sold shared hosting and i've seen what can happen
<linuxthefish> ah ok, yeah your way will be best then
<KurbuntusBain> one person can bring the whole machine down
<KurbuntusBain> it's not likeley but it could happen
<linuxthefish> did they use cloudlinux for cpanel? it can set per user limits for CPU etc
<KurbuntusBain> i think they used virtuozo or something?
<KurbuntusBain> I can't remember I didn't need to know too much about the technical side
<KurbuntusBain> just enough
<KurbuntusBain> i've used cpanel and whm it's easy
<linuxthefish> virtuozo is basically a VPS, so the cpanel server was running in a container
<KurbuntusBain> on my dedicated I can't set it up as a VPS environment because it's so old huh';
<linuxthefish> and cheaper license costs haha
<linuxthefish> KurbuntusBain sure you can!
<linuxthefish> you can make a few VPS's on your dedicated server, each with their own IP and operating system
<linuxthefish> but for small servers it's not worth it
<KurbuntusBain> I looked up the processor specs and saw "Intel® Virtualization Technology (VT-x): No"
<KurbuntusBain> and figured that meant no virtual private servers lol
<linuxthefish> ah, you can still run OpenVZ but not KVM
<KurbuntusBain> I can get 300 mbs a second internet here at my house, im thinking about just running my own server here
<linuxthefish> where did you buy your dedicated server from?
<KurbuntusBain> i got it form psychz
<linuxthefish> KurbuntusBain that's a bad idea unless it's a business connection and you have a static IP
<KurbuntusBain> it's usually 70/mo i got a good deal i thnk
<linuxthefish> https://www.kimsufi.com/us/en/servers.xml do some cheap dedicated servers also, but they only have 1 IP
<linuxthefish> running a server at home is good for learning and messing about, but it's a bad idea for anything business related
<linuxthefish> and with power and hardware costs you won't save much money if any at all
<KurbuntusBain> it's got 4GB RAM, 500GB SATA, a 100 Mbps port, 30 TB/mo and 5 IPs; they are going to convert it to 2 512GB SSDs next month for $20 more a month
<linuxthefish> sounds perfect if you are careful with RAM
<KurbuntusBain> I am running Windows 10 on my laptop at home currently, should I convert this thing to a dual boot system you think?
<linuxthefish> KurbuntusBain don't rush head first into everything lol
<KurbuntusBain> hahah
<KurbuntusBain> that's always been my issue
<linuxthefish> personally I think you should learn the terminal (command line) side of things first, and setting up a web server is the perfect way to start
<KurbuntusBain> I have Oracle VirtualBox already so i was playing around with that
<KurbuntusBain> you are right
<KurbuntusBain> how would you suggest being careful with ram
<KurbuntusBain> because that's not upgradable on this server
<linuxthefish> KurbuntusBain use lighttpd or nginx for a web server, and configure your mysql server to use less ram
<linuxthefish> you should be fine even without changing any mysql server settings though
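For context, a sketch of the kind of low-memory MySQL tuning being alluded to here; the file name and every value below are illustrative, not from the chat:

```ini
# /etc/mysql/conf.d/lowmem.cnf -- illustrative values for a small 4 GB box
[mysqld]
performance_schema      = OFF    # its instrumentation alone can hold hundreds of MB
innodb_buffer_pool_size = 256M   # trade some throughput for RAM
max_connections         = 50
tmp_table_size          = 16M
max_heap_table_size     = 16M
```

After editing, restart the service and watch `free -m` to confirm the footprint actually dropped.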
<KurbuntusBain> cool, i'm using nginx now
<linuxthefish> perfect!
<KurbuntusBain> i'm just now setting up everything, this nameserver shit is complicated
<linuxthefish> but asterisk for voip can be greedy
<linuxthefish> KurbuntusBain is this nameservers for your company website?
<KurbuntusBain> im trying to figure out how to configure the options file for bind as we speak
<KurbuntusBain> i haven't setup the name servers yet
<linuxthefish> I suggest using a free DNS service like cloudflare, then if your dedicated server goes down you won't lose the DNS records for all your other servers
<KurbuntusBain> well i've already bought a few domains
<KurbuntusBain> i got some through namecheap
<linuxthefish> did you say you had a VPS from vultr? they have a very good free DNS service
<KurbuntusBain> yes
<linuxthefish> KurbuntusBain you can even use namecheap's DNS, but it's kinda crap
<KurbuntusBain> my
<linuxthefish> and in the namecheap control panel, you just set the nameservers to ns1.vultr.com and ns2.vultr.com if you are using vultr's DNS
<KurbuntusBain> namecheap offers a premium dns supposedly ...."Enable PremiumDNS protection in order to switch your domain to our PremiumDNS platform. With our PremiumDNS platform, you get 100% DNS uptime and DDoS protection at the DNS level."
<linuxthefish> KurbuntusBain don't get that, it's a waste of money and others do it for free and better
<KurbuntusBain> I guess i did have a custom DNS setup already for klimco.group, it's ns3.klimcogroup.com and ns4.klimcogroup.com
<linuxthefish> for example vultr and cloudflare both have ddos protected DNS, and they are on anycast
<KurbuntusBain> that's the thing that's a little confusing to me
<KurbuntusBain> still trying to wrap my head around how dns works....
<KurbuntusBain> supposedly I have 20 Mbps ddos protection on my server with Psychz
<linuxthefish> you need to add glue records in the namecheap control panel for klimco.group
<KurbuntusBain> gotcha, then i just point them to my ip address, so why do I need to have a DNS on my dedi?
<linuxthefish> KurbuntusBain you need to set up glue records if you want to host nameserver on your dedicated server
<linuxthefish> if you use another dns provider you don't need to set up glue records, you just set the nameservers
<linuxthefish> it's called "personal dns server" in namecheap settings
<KurbuntusBain> so if I use cloudflare i am not going to have to pay anything else?
<linuxthefish> correct KurbuntusBain
<linuxthefish> you can pay extra money for cloudflare if you want to add special rules or some other stuff, but you won't need it
<linuxthefish> https://www.cloudflare.com/plans/
<linuxthefish> https://i.imgur.com/eEeRW7D.png is how I have one of my test domains set up, cloudflare on the right and namecheap settings on the left
<ShellcatZero> after upgrading from 16.04 to 18.04, it seems I cannot resolve hostnames on the local network, any ideas?
<Skuggen> ShellcatZero: Check your /etc/resolv.conf. After installing 18.04 (I used 16.04-based Mint before), I had some trouble with it being overwritten on every boot
<tomreyn> ShellcatZero: maybe you need to configure a search domain
<RoyK> Skuggen: iirc it says just that in the file - do you configure the network the old way, as in the interfaces file?
<daemon> hey all I just shot a youtube video that involved Ubuntu Server 18, I know very little about linux and it did work it jsut had a couple of weird messages: https://www.youtube.com/watch?v=ego4Lt_t67w the fun starts at around 8:45 with a strange message from grub, if anyone has any idea about that and the strange network config timeout about a minute later from that I would really appreciate clarrification on what it is so I can comment in my
<daemon> videos correctly for linux installs
<mrelcee> i have a 17.04 install I am trying to upgrade to 18 LTS. i've tried 'do-release-upgrade' and it says no new versions available. yeah I know 17.04 is EOL; 16 and 18 wouldn't install in my VM environment properly.. what are my options?
<tomreyn> mrelcee: did you see what ubottu told you in #ubuntu ?
<mrelcee> yes
<tomreyn> and, did you try this?
<daemon> o/ mrelcee
<chingus> does host OS manage iptables with docker installed, or does docker manage?
<mrelcee> tomreyn: I certainly am trying to follow it.
<mrelcee> getting stuck at sources.list.  what is the codename i should be using, bionic?
<tomreyn> mrelcee: the upgrade path from 17.04 to the next non EOL release (18.04 LTS) would be: 17.10 ("artful"), 18.04 LTS ("bionic").
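Since every hop on that path is EOL, apt first has to be pointed at old-releases.ubuntu.com before each do-release-upgrade. A hedged sketch (the function name is made up; 17.04's codename is zesty):

```shell
#!/bin/sh
# Rewrite a sources.list so an EOL release (e.g. zesty) fetches from
# old-releases.ubuntu.com; after this, `apt update && do-release-upgrade`
# can walk 17.04 -> 17.10 ("artful") -> 18.04 LTS ("bionic").
rewrite_eol_sources() {
    # $1 = path to a sources.list
    sed -i \
        -e 's|http://[a-z.]*archive\.ubuntu\.com/ubuntu|http://old-releases.ubuntu.com/ubuntu|g' \
        -e 's|http://security\.ubuntu\.com/ubuntu|http://old-releases.ubuntu.com/ubuntu|g' \
        "$1"
}
```

On the real system the argument would be /etc/apt/sources.list, run with sudo.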
<mrelcee> that helps
<tomreyn> mrelcee: i think you'll have a better result if you'll just backup and install fresh, though. (but either should work.)
<mrelcee> fresh install with 18 results in a system i can't boot
<tomreyn> there's no 'ubuntu 18', should i assume you're referring to ubuntu 18.04 LTS? is that 18.04.0 or 18.04.1?
<tomreyn> how are you installing, and on what hardware? what's the actual error message when it can't boot
<tomreyn> please also specify the architecture. is it amd64? if so, uefi or bios boot?
<mrelcee> ye 18.0.4 LTS
<mrelcee> yes
<mrelcee> amd64, uefi, running under bhyve w/freebsd as the host
<tomreyn> "ubuntu 18.0.4 LTS" does not exist either :-/
<tomreyn> hmm, i don't know what "bhyve" is.
<mrelcee> Please forgive my linux ignorance. I use something else day in and day out. 18.4
<mrelcee> oh excuse me. 18.04
<mrelcee> +LTS
<tomreyn> so is this "18.04.0 LTS" or "18.04.1 LTS"?
<RoyK> tomreyn: lsb_release -a
<RoyK> tomreyn: or cat /etc/os-release
<tomreyn> RoyK: this is about a non-booting system
<daemon> mrelcee, if you're not using -l
<RoyK> iirc lsb_release isn't that popular anymore
<daemon> you're not using UEFI mode
<RoyK> tomreyn: then just check /etc/os-release
<RoyK> on that system
<RoyK> finally people have agreed on some small things across distros
<mrelcee> ok the ISO I grabbed is ubuntu-18.04.1-live-server-amd64.iso.     I'm getting confused reading umpteen websites with umpteen versions
<tomreyn> RoyK: i think you're meaning to address mrelcee here
<tomreyn> mrelcee: okay, now how does the installation fail?
<RoyK> tomreyn, sorry. mrelcee please see my comments above
<tomreyn> mrelcee: sorry, i mean how does booting fail
<mrelcee> the installation does not fail.  It fails to boot after.  it just boots to grub>
<daemon> mrelcee, got teamviewer?
<tomreyn> mrelcee: do you have multiple storages there?
<mrelcee> just a single HD image
<mrelcee> daemon: nope
<daemon> mrelcee, get to the grub> prompt and type 'ls'
<daemon> paste the results back if they're <2 lines long
<tomreyn> so bhyve is a type 2 HV for freebsd, ok
<daemon> tomreyn, if you scroll up to my question when I entered about 30 min ago
<daemon> tomreyn, that video is ubuntu server in bhyve
<mrelcee> daemon: I would have to reinstall 18.04.1 LTS to do this troubleshooting with you. I currently have a working 17.04
<daemon> mrelcee, okey dokey, well i'm on the freebsd virtualization mailing list if you wanna give it a try and i'm not around on irc ;P
<mrelcee> curiously 16.X LTS and 18.04.1 LTS fail to boot while 17 works just fine
<tomreyn> daemon: i'm not familiar with bhyve. a web search for "/boot/grub/x86_64-emu" returns two hits, one of which is bhyve related, both are quite old.
<daemon> tomreyn, ahh so that IS a bhyve problem, nice, now I can ask some questions about it, cheers
<mrelcee> i'm going to back burner this to next sunday. i'm burning my sunday up on this
<mrelcee> thanks for trying to help
<tomreyn> daemon: i don't *know* it's a bhyve issue, but it seems likely to me.
<daemon> tomreyn, it's something kinda odd, I run quite a few VMs including gentoo, debian, arch, pretty much anything you can imagine
<daemon> and I have never seen that error before
<daemon> I wonder if freebsd's build of bhyve-grub
<daemon> is an older version and expecting it there
<daemon> but surely gentoo would have kicked up the same warning
<tomreyn> daemon: i'm not enough into grub to provide a more qualified response, i'm afraid. maybe come back here during uk business hours in the week if you'd like one
<daemon> tomreyn, any idea about the network thing about a minute after
<daemon> that is definitely in the linux setup itself
<tomreyn> daemon: the ip address assigned to the system via dhcp is one of general electrics, that's correct?
<daemon> tomreyn, yep
<daemon> tomreyn, usually I use the US military's space
<tomreyn> well you surely know what you're doing there ;)
<tomreyn> and this mac address is only used once?
<daemon> yep
<daemon> well thats a lie
<daemon> as you can see it was assigned to the gentoo image
<daemon> but that gentoo image is not booted when I am doing that
<tomreyn> ok
<daemon> all I can figure is it's waiting for a SLAAC announce
<RoyK> <slightlyofftopic> https://scontent-arn2-1.xx.fbcdn.net/v/t1.0-9/40549824_2260770083936357_7347757761402241024_n.jpg?_nc_cat=0&oh=d3ad9466353a2208594ac2316cc22ad0&oe=5C3206E6 </slightlyofftopic>
<daemon> hehe
<daemon> epoch or bust :)
<tomreyn> daemon: maybe. i couldn't tell what it's doing from the output provided there. you'll have a log somewhere in /var/log post installation
<daemon> lemme fire it up
<tomreyn> daemon: this 'server live'  installer is still quite new, and while it's the default now, it still suffers from a few bugs
<daemon> yeah but that's not in the install step
<daemon> that's it booting from the hard disk
<tomreyn> daemon: there's an alternative installer, which is the old "debian-installer" based installer
<RoyK> tomreyn: the fun thing about ubuntu is that a new installer, practically a beta, is selected as the default installer for an LTS version of the OS, which, one would think, is intentionally supposed to be stable…
<daemon> I only picked it to help that other guy out earlier lol
<tomreyn> daemon: right, i'm just guessing that maybe the installer didn't configure your network properly, but that's really just a wild guess.
<daemon> net detection script is about to fail and continue booting
<daemon> bang on 2 minutes
<daemon> it fails over
<tomreyn> RoyK: i'm very aware of the irony.
 * RoyK uses debian on servers
<daemon> ok I have /var/log, which file would you need?
<tomreyn> daemon: i don't remember which is the exact log file. i could look it up in a VM if you can't identify it
<daemon> grep -ri dhcp .
<tomreyn> personally i don't *need* this file. ;)
<daemon> shows hits on
<daemon> ./syslog
<daemon> ./installer/installer-journal.txt
<daemon> ./cloud-init.log
<daemon> it's ipv6
<daemon> it's waiting for an address to be announced and neighbour discovery
<tomreyn> it's /var/log/installer
<tomreyn> not sure which one there exactly
<daemon>  /var/log/installer
<tomreyn> my limited understanding there is that subiquity is mostly the UI, curtin does the configuration stuff.
<daemon> Sep 02 16:23:54 ubuntu-server curtin_log.1730[2087]: Running command ['sh', '-c$
<daemon> Sep 02 16:24:30 ubuntu-server curtin_event.1730[2420]: finish: cmd-install/stag$
<daemon> no thats not 2 minutes
<daemon> hmm
<tomreyn> daemon: oh that's a misunderstanding
<daemon> ./installer-journal.txt:Sep 02 16:23:52 ubuntu-server systemd[1]: dev-disk-by\x2duuid-00c629d6\x2d06ab\x2d4dfd\x2db21e\x2dc3186f34105d.device: Job dev-disk-by\x2duuid-00c629d6\x2d06ab\x2d4dfd\x2db21e\x2dc3186f34105d.device/start failed with result 'timeout'.
<daemon> would it be that?
<tomreyn> i'm trying to help you identify the installers' log files, to see whether something was incorrectly configured regarding networking
<daemon> ah
<tomreyn> daemon: the installed system boots to /var/log/syslog indeed
<daemon> what I should do is boot it and record the exact time
<daemon> I know it freezes two minutes
<daemon> then I can grep the timestamp
<tomreyn> you can also try tab completion for all of: systemctl status network
<tomreyn> so hit double tab after typing 'network'
<tomreyn> one of those should report an error, and if it does, it should explain how to get more logs
<daemon> 19:11:07
<daemon> ok lets try both :)
<daemon> love a good mystery
<tomreyn> you can also correlate to "dmesg -T" output
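The plan sketched here, note the clock time of the hang and then pull just that window out of syslog, can be done with awk; the function name and sample timestamps below are invented:

```shell
#!/bin/sh
# Print traditional-syslog lines whose HH:MM:SS stamp (the 3rd whitespace
# field) falls inside a window, e.g. the two minutes around a boot hang.
# Lexical comparison works because HH:MM:SS is fixed-width.
syslog_window() {
    # $1 = logfile, $2 = start "HH:MM:SS", $3 = end "HH:MM:SS"
    awk -v start="$2" -v end="$3" '$3 >= start && $3 <= end' "$1"
}
# e.g. syslog_window /var/log/syslog 19:11:07 19:13:07
```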
<daemon> https://1drv.ms/u/s!AkvwG6Qrk79x3EBwIhboTO6Iejoc
<daemon> the interesting bit is
<daemon> looking at the messages above it
<daemon> it reached everything it needed to
<tomreyn> actually on your video it says how to get more information on the failed systemd target
<tomreyn> systemctl status systemd-networkd-wait-online.service
<daemon> See 'systemctl status systemd-networkd-wait-online.service' for details.
<daemon>          Starting Initial cloud-init job (metadata service crawler)...
<daemon> yep lets take a peek
<daemon> oddly that is the EXACT service I just found a 2 minute wait in
<daemon> looking for datasources
<tomreyn> "systemd-analyze blame" and "systemd analyze critical-chain" would probably confirm this, too.
<tomreyn> i'm not sure how much you're into systemd
<daemon> never used it before
<daemon> I have quite a few production systems but they're all: FreeBSD, Slackware, Gentoo or Windows-Server
<tomreyn> it inits services in parallel, wherever possible
<daemon> well, skip that, the gentoo ones are more for fun
<tomreyn> also i'd do a "sudo apt update && sudo apt full-upgrade" to be sure you got the latest updates installed. maybe that'll help fix the issue.
<daemon> https://paste.ee/p/42kGD
<tomreyn> so the root cause seems to be "systemd-networkd-wait-online[688]: Event loop failed: Connection timed out" - whatever that means
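A common trigger for exactly this symptom is systemd-networkd-wait-online blocking on an interface that never finishes configuring; if that turns out to be the cause, marking the NIC optional in netplan stops the two-minute wait. A sketch only; the file name and interface name are placeholders:

```yaml
# /etc/netplan/00-installer-config.yaml -- "enp0s3" is a placeholder NIC name
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: true
      optional: true  # systemd-networkd-wait-online won't hold boot for this NIC
```

Apply with `sudo netplan apply` and reboot to confirm the wait is gone.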
<daemon> updating did not help either issue
<daemon> seems so odd
<tomreyn> there are plenty of related bug reports, if you do a web search for it (omitting the pid in square brackets, obviously)
<daemon> yeah
<daemon> it's doing something it's expecting a response to
<daemon> but it's not getting one
<daemon> perhaps some broadcast announce
<tomreyn> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=869995 this would match what you suggested might be the issue
<ubottu> Debian bug 869995 in systemd "interface stuck in "configuring" if SLAAC prefix announced with preferred lifetime zero" [Normal,Fixed]
<daemon> I should drop an rtadvd daemon on that virtual network
<daemon> see if giving it ipv6 does anything
<daemon> can I disable this cloud thing
<daemon> cloud-init
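To the question just asked: cloud-init documents a kill switch, an empty marker file it checks for at boot. A sketch (the helper function is invented; its root argument exists only so the idea can be exercised outside /):

```shell
#!/bin/sh
# cloud-init skips all of its stages when /etc/cloud/cloud-init.disabled exists.
disable_cloud_init() {
    root="${1:-}"                 # empty on a real system -> /etc/cloud
    mkdir -p "$root/etc/cloud"
    touch "$root/etc/cloud/cloud-init.disabled"
}
# on the real box this is simply: sudo touch /etc/cloud/cloud-init.disabled
```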
<daemon> meh, anyhow :) it's sunday, so time to actually commit the work I was meant to commit last monday
<daemon> thanks for the help guys!
<tomreyn> the above bug should already be fixed in the ubuntu 18.04.1 systemd version, so it's going to be something else.
<tomreyn> welcome, daemon
<uplime> hello all! I just installed ubuntu-18.04 server as a vm on my laptop, with 2 network interfaces defined: a bridged interface and a host only adapter. i can talk to the vm over the host-only interface just fine, but the primary bridged interface, which is setup to use DHCP, doesn't want to pass any traffic. Any way to fix that?
#ubuntu-server 2020-08-24
<albech> i cannot seem to find relevant documentation for configuring mariadb with server side ssl.. All the documentation requires both server and client side certs. I still wish to use username/password but have the traffic encrypted.
<lystra> albech: So you expect the traffic to be encrypted when the client isn't talking SSL to mariadb? That's not possible.
<lystra> albech: Or do you want client/server SSL but still enforce password auth?
<albech> lystra: i want client/server ssl WITH password authentication.. I think I have figured it out though.
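For anyone finding this later, the shape of the configuration described (server presents a cert, clients still authenticate with passwords) looks roughly like this; all paths are placeholders:

```ini
# /etc/mysql/mariadb.conf.d/50-server.cnf (excerpt) -- cert paths are placeholders
[mysqld]
ssl-ca   = /etc/mysql/ssl/ca.pem
ssl-cert = /etc/mysql/ssl/server-cert.pem
ssl-key  = /etc/mysql/ssl/server-key.pem
```

On the account side password auth stays as-is; adding `REQUIRE SSL` to a grant merely refuses unencrypted logins, without demanding a client certificate (that would be `REQUIRE X509`).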
#ubuntu-server 2020-08-25
<alexaf> Hello there. We are using bionic-server-cloudimg-amd64.img as provided by Ubuntu and we're experiencing a regression regarding GRUB terminal_output . We think that the issue is probably related to "grubx64.efi" bootloader binary. Given that this binary is provided as-is by the cloud image I'd like to track down how it was built. So the question: how is "grubx64.efi" built? Is there a repository
<alexaf> tracking down changes between different releases of a cloud image? Thanks in advance.
<alexaf> I already had a look at https://code.launchpad.net/cloud-images , but I don't think any of these repos is directly responsible for what I'm looking for
<rbasak> alexaf: it's built by the grub2 source package
<rbasak> alexaf: see https://launchpad.net/ubuntu/+source/grub2/+publishinghistory and https://git.launchpad.net/ubuntu/+source/grub2/log/?h=ubuntu/bionic-devel perhaps
<alexaf> rbasak: oh okay. Will look into grub2 source package as well.
<alexaf> rbasak: the efi binary is placed on a separate partition in cloud-images; I guess there's something creating that partition then copying over the binary there?
<rbasak> Right
<alexaf> rbasak: so how are these cloud-images being built?
<rbasak> That's done by the livecd-rootfs source package I think. But this is getting towards the edge of my knowledge
<rbasak> A cloud image is mostly just a disk image with a bunch of packages installed
<alexaf> thanks rbasak
<littlebit> hi people, I have a question about using snap. I'm planning to install nextcloud and gitea through snap, which is at first no problem, but how can I put both snaps, which are under different subdomains, on port 443?
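Only one process can own port 443, so the usual pattern is to move each snap onto a high local port and route by hostname from a single reverse proxy. A hedged nginx sketch; hostnames, upstream ports and cert lines are placeholders:

```nginx
# /etc/nginx/sites-available/snaps.conf -- hostnames and upstream ports are
# placeholders; each snap must first be reconfigured to listen on its local port
server {
    listen 443 ssl;
    server_name cloud.example.com;
    # ssl_certificate /etc/ssl/...; ssl_certificate_key /etc/ssl/...;
    location / {
        proxy_pass http://127.0.0.1:8080;   # nextcloud snap moved off 443
        proxy_set_header Host $host;
    }
}
server {
    listen 443 ssl;
    server_name git.example.com;
    location / {
        proxy_pass http://127.0.0.1:3000;   # gitea snap's local port
        proxy_set_header Host $host;
    }
}
```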
<usr123> I just discovered mosh. It seems like a really good replacement for ssh. I wonder if there are any resources or books covering tools like this and the common day-to-day tasks of backend developers?
<isene> I have a badly broken letsencrypt/certbot setup. How do I uninstall the whole shebang and start over? I'm on 18.04 with certbot installed via snap
<isene> Basically, my site was only running http; I installed certbot and letsencrypt and broke the thing. Now my site redirects to https, which is misconfigured, and the site is therefore not responding.
<isene> Fixed it :-)
<rbasak> isene: not sure how you did it, but certbot has a rollback command for exactly that situation
<isene> rbasak: Just for future reference (in case I fuck up again) - what's the rollback command?
<rbasak> isene: sorry I don't remember. Some argument to "certbot" IIRC. Try "certbot -h"?
<isene> OKI
<jamespage> icey, coreycb: can you do the initial MIR bug for python-invoke
<jamespage> https://bugs.launchpad.net/ubuntu/+source/python-invoke/+bug/1892875
<ubottu> Launchpad bug 1892875 in python-invoke (Ubuntu) "[MIR] python-invoke" [Undecided,Incomplete]
<jamespage> then I can review
<coreycb> jamespage: yes I'll take that
<jamespage> ta
<Blueking> how long would it take to have hardware support for the new amd zen3 cpus?
<tomreyn> this primarily depends on how different those are from earlier architectures, and whether amd provides complete kernel patches in time.
<tomreyn> plus the time span from when those patches are merged into mainline, and from then to when the next HWE kernel is released by canonical
<tomreyn> Blueking: ^
<Blueking> okay.. waiting for the new amd cpu to arrive before upgrading..
<tomreyn> well, not just 'the next HWE kernel', but 'the next HWE kernel that is based on a mainline kernel (as in >= version) that has the patches'
<tomreyn> backporting might also be possible, but unlikely.
<Blueking> ordered a motherboard with a new intel ethernet chip (at that time) that ubuntu failed to recognize :P
<tomreyn> sometimes hardware producers don't provide linux support in time, and pressure is too high to go to market
<Blueking> and I ordered this one -> https://www.icydock.com/goods.php?id=175
<Blueking> uh wrong window
<IngCr3at1on> hello. I'm having an issue installing ubuntu server 20.04 on a single SSD in a Dell R720XD server. Previously I had 19.10 on it for _some reason_ and decided it made more sense to do a fresh install of an LTS instead of an upgrade... I'm running the installer and it's hitting `Could not set BootOrder: Invalid argument` (I had to record it on my phone because it goes away too quickly to actually read the output)
<IngCr3at1on> Is anyone familiar with how to debug this? I can't go back to 18.04 either, it would seem. Also for knowledge's sake there are other drives in the server but they're all part of zpools and are not part of the installation
<IngCr3at1on> I'm guessing it's more or less the same as https://bugs.launchpad.net/ubuntu/+source/shim-signed/+bug/1851955
<ubottu> Launchpad bug 1851955 in grub2 (Ubuntu) "installation fails in grub-install with: efivarfs_set_variable: writing to fd <N> failed: Invalid argument" [Undecided,Confirmed]
<IngCr3at1on> just not sure how to prove that since the installer doesn't really give me any fallback terminal or similar
<sarnold> IngCr3at1on: my guess, disable CSM or Legacy in the bios
<IngCr3at1on> it's already set to UEFI and I'm booting the USB device with the installer via the UEFI boot menu
<IngCr3at1on> I could obviously try installing it using legacy but it would be nice if UEFI could work as it did previously just fine
<IngCr3at1on> though since you mentioned CSM I'm curious now if the USB device is configured wrong... first time I've actually used rufus for lack of a usable linux machine lol...
<IngCr3at1on> trying  `UEFI (non-CSM)`
<sarnold> IngCr3at1on: the usb bootable media should cope with both legacy systems and uefi systems
<IngCr3at1on> that's normally what I would think also but I'm running out of ideas over here lol
<tomreyn> IngCr3at1on: is the bios up to date? usually the problem discussed in the bug report you mentioned is caused by buggy mainboard firmware.
<IngCr3at1on> that made no difference sadly... I'll check, tomreyn (I updated everything when I set it up previously, but you never know), just interesting that it was working anyway
<tomreyn> IngCr3at1on: maybe try a cmos / nvram reset
<tomreyn> don't do that if you have any important data stored there, though, such as hardware raid info
<tomreyn> IngCr3at1on: also make sure you're using the ubuntu 20.04.*1* installer, not *0*, and if that still fails, use the option provided to carry out an in-place (non-persistent) subiquity (ubuntu server installer) upgrade to the latest version available.
<tomreyn> the Dell R720XD design is eight years old, it's possible that it doesn't have a good uefi implementation, yet.
<IngCr3at1on> tomreyn: thanks for the advice, I'm working on updating the bios and will go from there
#ubuntu-server 2020-08-26
<Sven_vB> I'm trying port forwarding with iptables. According to my old notes and https://kerneltalks.com/networking/how-to-forward-port-using-iptables-in-linux/ , I think this should work: https://paste.debian.net/plainh/d3285256 but the natbox doesn't accept my connection to :307. netcat in a shell on the natbox can talk to the target perfectly.
<Sven_vB> some other NAT tutorials use additional state-based conntrack rules or SNAT, do I need any of those maybe?
<Sven_vB> ...and then I found #Netfilter. I'll ask there.
<Sven_vB> indeed, SNAT was missing.
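For the archive, the combination described here, DNAT on the way in plus SNAT on the way out so replies return via the NAT box, looks roughly like the rules below. A sketch only, not a tested script; all addresses are placeholders and the port is taken from the discussion:

```shell
# run as root on the natbox; 192.0.2.10 = target host, 192.0.2.1 = natbox LAN address
iptables -t nat -A PREROUTING  -p tcp --dport 307 \
    -j DNAT --to-destination 192.0.2.10:307
iptables -t nat -A POSTROUTING -p tcp -d 192.0.2.10 --dport 307 \
    -j SNAT --to-source 192.0.2.1
sysctl -w net.ipv4.ip_forward=1   # forwarding must also be enabled
```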
<IngCr3at1on> updated idrac and bios last night but the ubuntu installer still fails when setting the boot order... tried debian and it worked just fine
<IngCr3at1on> shame though, I would have preferred to run ubuntu
<Ussat> On what system? I have Ubuntu on several Dell systems
<IngCr3at1on> R720XD
<IngCr3at1on> Ussat: it's weird I came on here yesterday to see if anyone had seen it. I went to do a clean install from 19.10 to 20.04 (single SSD for the OS (there are other drives for zpools but not part of the install process)) and it would run through the install then fail during grub saying Invalid Argument when trying to set the boot mode.
<IngCr3at1on> updated everything after someone's recommendation here but to no avail. they had recommended an in-place upgrade but I couldn't even get 18.04 to reinstall
<teward> IngCr3at1on: are you sure your ISO is clean?
<teward> and what errors does it fail with when you try and use the ISO/installer?
<IngCr3at1on> teward: yes tried multiple ISOs they all do the same thing... The error was `Could not set BootOrder: Invalid argument` but at this point I'm kind of done troubleshooting that and just going to use debian anyway lol
<teward> mmkay, just wanted to make sure.
<IngCr3at1on> yeah I appreciate it, I just need to get this thing back up and working, and the host os is what I interact with the least anyway, so you know
<mason> IngCr3at1on: That sounds like an efibootmgr error. What was the invocation, if so?
<mason> IngCr3at1on: should look like: efibootmgr -o 1,2,3,b,4   as an example
<IngCr3at1on> honestly mason I can't tell you at this point but otherwise you're correct it is/was
<IngCr3at1on> oh actually I do have it
<IngCr3at1on> `unshare --fork --pid -- chroot /target efibootmgr -o 0006,0001,0002,0003,0004,0000`
<IngCr3at1on> seemed like something related to the installer so I just kind of moved on /shrug
<mason> Two things jump out: 1) the extra zeroes aren't really needed, and 2) that looks fine, so maybe executing it in a shell normally might be interesting. I guess 3) make sure all those options exist, as there isn't a set number of boot variables.
<mason> IngCr3at1on: Did you see this during a regular install, where you didn't expect to see anything of the sort?
<mason> If so, it'd be worth filing a bug for it.
<IngCr3at1on> mason yes it occurred during a clean install of ubuntu server (multiple versions)
<mason> IngCr3at1on: Ah, sorry to hear it. I never use the installers so I'm always driving that stuff by hand. Definitely worth a bug report.
#ubuntu-server 2020-08-27
<theGoat> so i have a bind9 instance running, and i have listen-on-v6 configured in options, but for some reason when i restart bind, it's still running it with the -4 option, and i can't figure out why
<ahasenack> check the bind9 systemd service file
<ahasenack> how it constructs the command line
<theGoat> this is a 14.04 instance and i went through the init.d bind9 file and can't find anything related to it.
<ahasenack> it probably sources a file in /etc/default/*
<ahasenack> check there
<theGoat> ah
<theGoat> there it was.  thanks much
<theGoat> never thought of that
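The moving part here: on 14.04 the init script sources /etc/default/bind9, and its OPTIONS line is where a stray -4 usually hides. A sketch of showing and stripping it (the function name is invented; exercised on a copy of the file):

```shell
#!/bin/sh
# Show, then remove in place, a -4 flag on the OPTIONS= line of a bind9
# defaults file. On a real 14.04 box $1 would be /etc/default/bind9,
# followed by `service bind9 restart` so named picks up the change.
strip_ipv4_only_flag() {
    grep '^OPTIONS=' "$1"              # show the current extra options
    sed -i '/^OPTIONS=/s/-4 *//' "$1"  # drop the -4 flag in place
}
```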
<sdeziel> I just "do-release-upgrade -d" from 18.04 to 20.04 and noticed that the symlink /bin -> /usr/bin isn't created. I'm wondering where I should fill a bug?
<sdeziel> where == against which package
<compdoc> do-release-upgrade is working now?
<oerheks> compdoc, not yet, AFAIK
<oerheks> nothing on https://lists.ubuntu.com/archives/ubuntu-announce/
#ubuntu-server 2020-08-28
<maswan> Hm. Just hit a small snag at $work, the big Application has started to require zookeeper >=3.5, but there's only 3.4s packaged as far as I can see. anyone know of a 3.5 packaging somewhere that we could get going, or a reasonably well-maintained repo somewhere?
<isene> Use Case = Large business needs a private cloud solution that hooks into AD and can be accessed as a file system from Windows Explorer, from an iOS and Android app and from web. Needs to be Open Source as specific extensions need to be built for bidirectional access to files in legacy systems. Version control and easy rollback to earlier versions is required. Sharing to external parties via
<isene> web links - both directories and specific files. There is a multitude of solutions. Can anyone here suggest one or two? And yes, should be running on Ubuntu.
<sdeziel> I don't understand why the zfs-auto-snapshot package is in multiverse instead of universe. Its copyright file says it's all GPL-2+. What am I missing?
<thelounge5207> I'm attempting to run an ntp server on my box so some devices I have isolated on the network can still get time updates, but when I attempt to enable the service, it says it's masked. should I go about doing this a different way?
<sdeziel> thelounge5207: which OS version and what ntp server are you using?
<thelounge5207> 20.04
<thelounge5207> and just attempting to run systemctl start ntp
<sdeziel> with 20.04, the recommended NTP server is chrony
<thelounge5207> ahh
<sdeziel> that said, ntp should still work. Could you pastebin the command and error?
<thelounge5207> https://hastebin.com/ezuxodavus.sql
<thelounge5207> I know I can unmask it, but since I didn't mask it myself, I wasn't sure if it was set that way by default for a reason and I should be wary of unmasking it
<kinghat> i have unmounted a disk and i'm trying to write a fs to the partition but it tells me the system is using it. i'm not sure how to get around that.
<sdeziel> thelounge5207: 'sudo systemctl unmask ntp' should get you going but I still recommend taking a look at chrony ;)
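Worth noting for the isolated devices: chrony answers no NTP clients at all until an allow directive is added, so serving the LAN needs something like the excerpt below in /etc/chrony/chrony.conf (the subnet is a placeholder):

```
# /etc/chrony/chrony.conf (excerpt) -- subnet below is a placeholder
pool ntp.ubuntu.com iburst
allow 192.168.10.0/24   # without an allow line chrony serves no clients
```

Restart chrony afterwards and check clients with `chronyc clients` on the server.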
<thelounge5207> will do, thanks!
<isene> I have a couple of servers over at DigitalOcean. Both are plain Ubuntu servers. Upon upgrading to 20.04 I realize the servers are quite bloated with stuff I would never ever need - like X stuff etc. Is there a safe script somewhere that can remove the bloat?
<sarnold> isene: I usually use apt purge on a package I suspect I don't need, inspect the list of things it's going to remove, then either go through with it or not
<sarnold> isene: once I've done that a few times, then I use 'orphaner' from the deborphan package to find other things to remove
<tomreyn> your first answer was not good enough.
<tomreyn> (apparently)
<sarnold> or it was perfect :)
<tomreyn> hehe, good point
<tds> or they just discovered their irc client doesn't count as bloat :D
<sarnold> lol
<selfy> Hello! I have a question about the current process of building ubuntu cloud images. Maybe someone knows the answer... I want to understand how the ubuntu team creates cloud images? which toolchain are they using? Can I find the CI/CD build scripts somewhere? I want to try to create my own cloud image from scratch using an automatic script (not from another cloud image). Sorry if this channel is not for such lame
<selfy> questions and support
<sarnold> isene: once I've done that a few times, then I use 'orphaner' from the deborphan package to find other things to remove
<isene> sarnold: So - what script (if any) did you use to automate the process? Because traversing the packages and lifting out bloat is a tedious process and could perhaps break stuff if I'm a bit too aggressive...
<isene> Or, rather - do you know of any such script?
<sarnold> isene: I've always been content to just remove big things like X and clean up loose ends
<sarnold> isene: another option is to start from the ubuntu cloud images and install just what you want, though that doesn't help with your existing machines
<isene> sarnold: Yeah, I'd rather not spend the weekend going from scratch :-)
#ubuntu-server 2020-08-29
<isene> One of my servers over at DigitalOcean is running this: "Ubuntu 20.04.1 LTS (GNU/Linux 3.13.0-57-generic x86_64)". How do I force it to run the 5.4 kernel that it has? It boots straight into the ancient 3.13...
<sarnold> isene: that may be something more like an openvz instance than a VM
<sarnold> I'm even a little surprised 20.04 boots on a kernel that old, I thought our glibc required something newer than that (but can't recall how to check)
<isene> sarnold: openvz? So, how do I force the 5.4 kernel?
<sarnold> isene: https://www.digitalocean.com/docs/droplets/how-to/kernel/
<isene> sarnold: Thanks
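sarnold's aside about glibc's minimum kernel can be checked directly. On Ubuntu x86_64 the path below is standard (it differs per architecture); executing the glibc shared object prints its version banner, and on builds configured with `--enable-kernel` that output includes the minimum supported kernel:

```shell
# glibc prints build information when executed directly.
/lib/x86_64-linux-gnu/libc.so.6

# The ELF ABI note also records the minimum kernel, if 'file' is installed:
# "... for GNU/Linux 3.2.0 ..." or similar.
file -L /lib/x86_64-linux-gnu/libc.so.6
```

If the minimum listed there is newer than 3.13, a 20.04 userspace on that kernel is running outside glibc's supported range, which matches sarnold's surprise above.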
<aloini> What is the possibility of adding this fix into Ubuntu? Currently the bug leaves me unable to properly split a 24-bit FLAC: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=950803
<ubottu> Debian bug 950803 in shntool "shntool: shnsplit refuses to split 24 bits wav and flac files, 16 bit ok" [Normal,Open]
#ubuntu-server 2020-08-30
<Tuor> In the server installer, it is possible to create an encrypted volume group with physical volumes on multiple disks. But then the system doesn't boot...
<RoyK> Tuor: not sure, but smells like grub is having issues. Perhaps a separate, unencrypted boot partition could be handy?
<Tuor> Can try.
<mason> Tuor: grub's concept of multiple cryptroots is broken. I took a stab at fixing it once, and got syntax generation to work, but it wasn't clear whether or not it did the right thing when passed multiple cryptroots.
<mason> Tuor: An interesting option is stuffing kernel and initramfs in your ESP, as then you can use the EFI stub loader in the kernel, and have access to everything your initramfs knows about right off the bat.
<mason> I was doing that here until I ran into a box that refused to take more than one EFI boot variable. While this is inherently broken, I moved back to GRUB and an unencrypted /boot mirror, since I could have one entry for GRUB and have that load up a menu with options, where with the EFI stub loader I'd have multiple entries for current and older kernels.
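mason's EFI-stub approach boils down to registering the kernel itself as a firmware boot entry. A hedged sketch with efibootmgr follows; the disk, partition, kernel version, and root device are all examples, and the ESP is assumed mounted at /boot/efi:

```shell
# Copy kernel and initramfs onto the ESP so the firmware can read them.
sudo cp /boot/vmlinuz-5.4.0-42-generic   /boot/efi/EFI/ubuntu/vmlinuz.efi
sudo cp /boot/initrd.img-5.4.0-42-generic /boot/efi/EFI/ubuntu/initrd.img

# Register the kernel's built-in EFI stub as a boot entry. The kernel
# command line names the initramfs by its path relative to the ESP root.
sudo efibootmgr --create --disk /dev/sda --part 1 \
  --label "Ubuntu direct" --loader '\EFI\ubuntu\vmlinuz.efi' \
  --unicode 'root=/dev/mapper/vg0-root initrd=\EFI\ubuntu\initrd.img'
```

Because the initramfs is loaded before the root filesystem is touched, everything it knows how to unlock (LUKS, LVM, md) is available immediately, which is the advantage mason describes. The drawback he hit is also visible here: each kernel needs its own boot variable, so firmware that only stores one entry forces you back to a single-entry loader like GRUB.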
<Tuor> Sounds somewhat complicated. Having /boot unencrypted doesn't sound so bad. There will never be any sensitive data there.
<RoyK> Tuor: just use a separate partition or md mirror (or lvm mirror?) for the boot. it's not hard. you might need to use the old installer. I haven't tried the new one lately.
<RoyK> Tuor: remember that booting off raid-5 or -6 can be a bit hard, if it works at all. Better to set up a raid-1 or raid-10 for the boot stuff and use raid-5 or -6 for the rest, with encryption and whatever else you might need there
<Tuor> OK. Sounds doable. Thanks!
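RoyK's layout (a small raid-1 for unencrypted /boot, encryption over the big array, LVM on top) can be sketched with mdadm and cryptsetup. All device names and sizes are examples and the commands are destructive, so treat this as an outline of the structure rather than a recipe:

```shell
# Small raid-1 mirror for /boot across two disks (example partitions).
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
sudo mkfs.ext4 /dev/md0          # /boot stays unencrypted so GRUB can read it

# Larger array for everything else, encrypted as a whole.
sudo mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]3
sudo cryptsetup luksFormat /dev/md1
sudo cryptsetup open /dev/md1 cryptdata

# LVM inside the encrypted container: one unlock opens all volumes.
sudo pvcreate /dev/mapper/cryptdata
sudo vgcreate vg0 /dev/mapper/cryptdata
sudo lvcreate -L 20G -n root vg0
```

This sidesteps the multiple-cryptroot problem mason mentions: GRUB only ever sees the plain raid-1 /boot, and the initramfs unlocks a single LUKS device behind which LVM handles the rest.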
