[00:21] <ShadeS> any idea
[00:38] <majuk> Hey guys. Recently, my Samba server totally stopped enforcing file/folder permissions for my domain users. Permissions at the system level are working as intended. Help.
[00:43] <majuk> http://paste.ubuntu.com/387947/ <-- smb.conf
[00:45] <smoser> kirkland, yeah, i just saw that.
[00:47] <ShadeS> any ideas on this issue?
[00:59] <majuk> Ooooh, the 'force group' parameter is a sneaky little bugger
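For anyone searching the logs later: a minimal share stanza showing what 'force group' does (the share name, path, and group here are illustrative, not majuk's actual config):

```
[share]
    path = /srv/share
    ; with 'force group' set, all file access on this share happens with
    ; this group's credentials, which can mask the per-user permissions
    ; you expect the filesystem to enforce
    force group = users
    create mask = 0660
```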
[02:42] <glphvgacs> looking for a one-liner, tried this search: enabling Restricted proprietary drivers cli site:help.ubuntu.com
[02:58] <Overand> Is there a 'sane' way to use libvirt / virt-manager to handle bridged networks, or is it a matter of configuring the machine's XML file to manually prod the "br0" (or whatever) interface?
[03:02] <persia> Overand: I do nothing at all to configure bridged networks, and it just works for me.
[03:03] <persia> (using virt-manager to define the guests)
[03:03] <persia> Make sure you have a virbr0 interface reported in ifconfig -a
[03:04] <Overand> persia: from what I can tell, the 'virbr0' interface is used for the NAT stuff
[03:05] <Overand> But - this is admittedly a pre-release ubuntu-server 10.04 machine, managed from virt-manager on my arch-linux workstation for the moment =]
[03:06] <persia> Ah.  My guests are on the same machine as my virt-manager.  I'm unsure how to help you with the remote model.
[03:06] <Overand> persia: Based on what little I read here, a 'bridge' network seems to be handled differently than a standard 'network' http://wiki.libvirt.org/page/Networking#Bridged_networking_.28aka_.22shared_physical_device.22.29
[03:06] <Overand> persia: I'm not sure if that's the issue or not.
[03:06]  * persia either
[03:06] <Overand> It would make sense, though.
[03:06] <Overand> hm
[03:07] <Overand> I wonder if I could use some commandline apps rather than 'virt-manager' - and instead of editing the XML files
[03:07] <Overand> virsh maybe
[03:11] <Overand> persia: so you've got guests running - without NAT - on the same physical segment as the host?
[03:11] <RoAkSoAx> zul, how do I test the apport hooks?
[03:11] <zul> STAGING=1 ubuntu-bug <name of apport hook>
[03:12] <Overand> I've got that sort of bridge working, but I had to manually stuff it into the XML file for the machine!
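The host-side bridge Overand describes can be sketched in /etc/network/interfaces like this (device names are illustrative; requires the bridge-utils package):

```
# replace eth0/br0 with the actual devices; guests attached to br0 then
# sit on the same physical segment as the host, no NAT involved
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```

In the guest's domain XML, the stanza that has to be added by hand is an `<interface type='bridge'>` element containing `<source bridge='br0'/>`.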
[03:13] <RoAkSoAx> zul, by STAGING=1 you mean to set that environment variable?
[03:13] <zul> yep
[03:13] <persia> Overand: No.  I have guests running with NAT.
[03:14] <RoAkSoAx> zul, ok thanks
[03:14] <Overand> persia: Oh.  That's why I specified 'bridged'
[03:15] <Overand> bridged != NAT
[03:15] <persia> Sorry.  My misunderstanding.
[03:15] <RoAkSoAx> zul, and do I just test "ubuntu-bug package.apport" or should I install the package and do that?
[03:16] <zul> i would install the package
[03:17] <RoAkSoAx> zul, ok will do that way then, thanks ;)
[03:27] <RoAkSoAx> zul, what if a binary package has 2 daemons? How will the hook change?
[03:27] <zul> you can do it with the source package name
[03:29] <RoAkSoAx> zul, yeah but each daemon has different conf file
[03:29] <RoAkSoAx> i mean
[03:29] <RoAkSoAx> net-snmp has two binaries; snmpd has 2 daemons, snmpd and snmptrapd
[03:29] <RoAkSoAx> each with different conffiles
[03:29] <zul> so do a source_net-snmp.py
[03:31] <RoAkSoAx> zul, and then a add_info_snmpd function, then add_info_snmptrapd function and so on?
[03:31] <zul> yep
[03:31] <RoAkSoAx> cool I'll do that thanks
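A sketch of what that per-source hook might look like, under the assumption that the conffile paths and report keys below are illustrative (this is not the hook that actually shipped). The entry point apport invokes is add_info, which here fans out to one helper per daemon, matching the structure discussed above:

```shell
# Write a skeleton source-package hook; apport looks hooks up by source
# package name, hence the file name source_net-snmp.py.
cat > source_net-snmp.py <<'EOF'
from apport.hookutils import attach_file_if_exists

def add_info_snmpd(report):
    # snmpd and its conffile (path assumed for illustration)
    attach_file_if_exists(report, '/etc/snmp/snmpd.conf', 'SnmpdConf')

def add_info_snmptrapd(report):
    # snmptrapd has its own, separate conffile
    attach_file_if_exists(report, '/etc/snmp/snmptrapd.conf', 'SnmptrapdConf')

def add_info(report, ui=None):
    # apport calls add_info(); delegate to the per-daemon helpers
    add_info_snmpd(report)
    add_info_snmptrapd(report)
EOF
```

Testing then works the way zul described: install the package and run `STAGING=1 ubuntu-bug net-snmp` (as root if the conffiles are root-only, which would match the permissions error RoAkSoAx hits later).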
[03:47] <maxagaz> how to set default runlevel for a service ?
[04:20] <RoAkSoAx> zul, must the apport hook be run as root? Because in my tests it says it cannot attach the conffile because of permissions
[07:21] <noty> Hello!
[07:21] <noty> Where can I find documents or ebooks about Ubuntu server?
[07:22] <twb> noty: apt-get install ubuntu-serverguide
[07:23] <noty> :)
[07:23] <noty> Thank you!
[07:23] <noty> I'll try now
[08:14] <Error404NotFound> i have a somewhat general question regarding webserver behavior, would anybody mind if i ask it in here?
[08:14] <twb> !anyone
[08:15] <Error404NotFound> twb, my question was a bit different from "anyone" though :P, okay here comes the question.
[08:16] <twb> How about: "Don't ask to ask unless you're prepared to ask to ask to ask"
[08:16] <persia> That just encourages recursion and useless traffic :)
[08:16] <persia> Anyone is always free to ask, and lots of folk read backscroll, so waiting can get an answer hours later sometimes.
[08:17] <Error404NotFound> I am trying to set up a cookie-less domain to serve static content. Say i own abc.com and abc.net, and both domains are defined in a single vhost. If i use abc.net, would it be cookie-less? or do i still need a cname here?
[08:17] <Error404NotFound> if i use abc.net to load css and images*
[08:18] <persia> CNAME is only vaguely related to cookies in the sense that most browsers won't send a cookie to domains other than those from which they came.
[08:18] <persia> s/most/many/
[08:19] <Error404NotFound> yes, but all traffic on abc.com will use the cookie, which is set to domain=abc.com or in the worst case for subdomains as well. i think even though both domains use a single vhost, due to the TLD difference it would be cookie-less.
[08:21] <Error404NotFound> So my conclusion is that using a cname for a cookie-less domain is the same as using an "A" record and adding the other domain as a ServerAlias in the vhost config.
[08:22] <twb> I have also seen stuff like no-cookies.example.net being a cname for www.example.net
[08:22] <persia> I think the important point is not how the DNS server is configured, but what URL the webserver reports back to the browser.
[08:23] <twb> Yeah
[08:23] <persia> That may depend on the DNS configuration, but whether it does or not depends on the webserver configuration.
[08:23]  * persia does not happen to know the defaults
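The setup under discussion, sketched as a single vhost answering for both names (the domains are the placeholders from the conversation): a cookie set for domain=abc.com is never sent by the browser with requests to abc.net, so asset requests via the alias arrive cookie-less.

```
<VirtualHost *:80>
    ServerName abc.com
    # same vhost, second TLD: static assets requested via abc.net carry
    # no abc.com cookies
    ServerAlias abc.net
    DocumentRoot /var/www/abc
</VirtualHost>
```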
[08:31] <jiboumans> morning
[09:54] <TeTeT> server live migration seems to work with libvirt on Lucid, yeah :)
[09:57] <Jeeves_> TeTeT: I'm going to try that later on :)
[09:59] <TeTeT> Jeeves_: worked fine with virsh on the command line and seems to work even with virt-manager
[10:00] <Jeeves_> TeTeT: I just tested with karmic -> lucid, that didn't work
[10:01] <Jeeves_> But then again, I've never seen it work :)
[10:01] <TeTeT> Jeeves_: yep, I tested karmic last week and it wasn't working
[10:02] <Jeeves_> Ah, ok.
[10:02] <Jeeves_> That gives me hope :)
[10:05] <jayvee> The netboot image doesn't seem to come with IPv6 support — is this intentional?
[10:06] <jayvee> I just spent absolutely ages adding IPv4 support to my network after I finally figured out why my netboot ISO wouldn't install on my IPv6-only network.
[10:08] <TeTeT> jayvee: bummer
[10:09] <eekeek> Xubuntu 9.10 localhost server. One virtual host setup. Put a 'RewriteMap' as one of rules in the sites-enabled for the virtual host. Tried to reload apache which returned an error 'RewriteMap not allowed here'. Where can 'RewriteMap' go - httpd.conf?
[10:09] <jayvee> TeTeT: indeed
[10:09] <Jeeves_> jayvee: I wanted to create a bug for that!
[10:09] <Jeeves_> I noticed it too, last week.
[10:09] <Jeeves_> I don't think it's intentional, just clueless :)
[10:09] <jayvee> go knock yourself out :-)
[10:10] <jayvee> launchpad is a-waiting
[10:10] <Jeeves_> jayvee: i'm not really in the mood :)
[10:10] <jayvee> I think all it needs is the ipv6 kernel module
[10:10] <jayvee> everything else seems to be there
[10:10] <persia> Probably an oversight, rather than cluelessness.  I'm sure there are folk who know *how* to do it.
[10:11] <jayvee> ubuntu-vm-builder keeps crashing for me, and is buggy as hell. Is there a more rapid way to deploy VMs than the netboot image when vm-builder isn't an option?
[10:11] <Jeeves_> persia: cluelessness as in 'ipv6 is nowhere in our prioritylist'
[10:11] <Jeeves_> jayvee: How is it crashing?
[10:11] <jayvee> bleh, I closed the terminal already
[10:12] <jayvee> it was crashing in a grub step
[10:12] <jayvee> so it got 99.999% of the way, and then bombed out and deleted the whole lot
[10:12] <Jeeves_> Ah, are you trying to directly install onto a device?
[10:12] <persia> Bah.  Just because one person doesn't make it a priority doesn't mean someone else can't.  Just about anything in Ubuntu is subject to fixing by anyone who wants to fix it.
[10:12] <jayvee> Jeeves_: into a disk image
[10:12] <Jeeves_> jayvee: Hmm.
[10:12] <jayvee> to be placed into libvirt, but it didn't get to the libvirt stage
[10:12] <jayvee> let me run it again
[10:13] <Jeeves_> I've seen Grub having issues when I tried to install directly to an iscsi-disk, not to an image
[10:13] <jayvee> running now: $ sudo ubuntu-vm-builder kvm lucid -m 512 --libvirt=qemu:///system -d /mnt/terror/jeremy/VM/lucid --hostname=lucid
[10:13] <jayvee> I'll get back to you when it finished
[10:13] <jayvee> the -d option doesn't work, btw
[10:14] <jayvee> actually, tbh, haven't tested the -d option successfully in the lucid version of vm-builder, as I've not got a vm to build yet :)
[10:17] <jayvee> actually, looks like it *is* caused by the libvirt component — my mistake
[10:17] <jayvee> AttributeError: 'Libvirt' object has no attribute 'vm'
[10:17] <jayvee> I'd hazard a guess a fix would be s/Libvirt/libvirt/, but not sure
[10:20] <jayvee>   File "/usr/lib/python2.6/dist-packages/VMBuilder/plugins/libvirt/__init__.py", line 54, in preflight_check
[10:20] <jayvee>     if hostname in self.all_domains() and not self.vm.overwrite:
[10:21] <jayvee> doesn't like the "self.vm.overwrite". trying again with that bit deleted.
[10:21] <jayvee> bleh heh heh
[10:28] <jayvee> VMBuilder.exception.VMBuilderException: Process (['sed', '-ie', 's/^# kopt=root=\\([^ ]*\\)\\(.*\\)/# kopt=root=UUID=cdf0293f-032c-43ac-a4a2-da4a5775834f\n1.0\next4\nfilesystem\\2/g', '/tmp/tmp18YkH0/boot/grub/menu.lst']) returned 1. stdout: , stderr: sed: -e expression #1, char 84: unterminated `s' command
[10:29] <jayvee> and now I get that
[10:29] <jayvee> I don't call vm-builder buggy as hell for nothing
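The vm-builder failure above can be reproduced by hand: an unescaped literal newline inside the s/// replacement ends the command before its closing delimiter, while a backslash before the newline makes the newline part of the replacement. A minimal demonstration, independent of vm-builder itself:

```shell
# Unescaped literal newline: GNU sed reports "unterminated `s' command"
printf 'root=old\n' | sed -e 's/old/new
line/' 2>&1 | head -n1

# Escaped with a backslash, the newline becomes part of the replacement;
# prints "root=new" and then "line" on the next line
printf 'root=old\n' | sed -e 's/old/new\
line/'
```

vm-builder builds that sed expression from a string containing raw newlines (the UUID/ext4 lines in the error above), which is exactly the failing first form.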
[10:40] <ivoks> so, installing lamp-server doesn't restart apache2 after installation
[10:40] <ivoks> it should, cause otherwise php5 module isn't loaded
[11:04] <eekeek> Should mod_rewrite rules be inside a <IfModule mod_rewrite.c> container?
[11:10] <Jeeves_> eekeek: Only if you want apache to start even though that module isn't loaded
[11:13] <eekeek> i see. I'm having trouble with RewriteMap. Upon reloading apache I get "RewriteMap not allowed here"
[11:14] <eekeek> I thought I might need a container, but I guess not.
[11:45] <Stargaze> my Network Tools show that port 80 is open, I have local access to my second PC, but not over the internet
[11:45] <Stargaze> hints & tips please?
[11:47] <jayvee> Stargaze: what exactly is the problem?
[11:48] <jayvee> do you want to know what process is opening port 80?
[11:48] <jayvee> $ sudo fuser -v 80/tcp
[11:48] <Stargaze> i want to display index.html in /var/www
[11:48] <jayvee> so it works when you browse to http://localhost/ right?
[11:49] <Stargaze> when i go to the local ip adress
[11:49] <Stargaze> forgot to mention: i'm using DynDNS
[11:49] <jayvee> okay, and you have port forwarded port 80 to that machine with your router?
[11:50] <Stargaze> yes
[11:50] <jayvee> can I try to browse to it?
[11:50] <Stargaze> try 81.241.46.249
[11:50] <Stargaze> that's my current IP address
[11:51] <jayvee> Stargaze: I'm getting an error “ICMP administratively filtered”
[11:51] <jayvee> so I’d say that it’s a firewall problem
[11:52] <Stargaze> i guess my ISP blocks all ports
[11:52] <Stargaze> that's sh*
[11:52] <Stargaze> brb
[11:52] <jayvee> not necessarily
[11:52] <persia> Um, ICMP should not affect other stuff.
[11:53] <persia> There's no reason why TCP/UDP/GRE/etc. shouldn't work just because ICMP is blocked.
[11:53] <jayvee> nope, you misunderstand
[11:53] <jayvee> I tried to access him via TCP port 80, and got an ICMP error in response
[11:53] <persia> Oh.  heh.
[11:53] <jayvee> Stargaze: my ISP by default blocks port 80 too, but they let me unblock it if I log onto the ISP control panel.
[11:53] <jayvee> Maybe yours will let you too.
[11:55] <jayvee> Stargaze: I’m only getting the ICMP error when accessing on port 80. If I try ports 12000 or 81, I get RST packets, so it’s definitely a port-specific block.
[11:56] <jayvee> persia: oops, I see where you misunderstand — my fault. I should have written “ICMP: communication administratively filtered”. I forgot the “communication” part. ;)
[11:56] <persia> Yes.  THat would have made more sense :)
[11:57] <persia> The first message usually means that a ping was blocked, the second indicates that something else was blocked.
[11:57] <persia> (although the first could also be a trapped response to a block on the second, etc.)
[11:57] <Stargaze> jayvee: just contacted my IP, they do not allow setting up personal servers
[11:57] <jayvee> ouch
[11:57] <Stargaze> IP =ISP
[11:57] <jayvee> Might want to look at somebody like Rollernet.
[11:58] <Stargaze> i live in Belgium, Europe
[11:58] <persia> Or use port 81
[11:58] <Stargaze> ah, is that possible?
[11:58] <jayvee> I’d change ISP if I were you. :)
[11:58] <jayvee> Stargaze: yes, edit /etc/apache2/ports.conf
[11:58] <Stargaze> they provide my tv too :)
[11:58] <jayvee> and /etc/apache2/sites-available/default, I think
[11:59] <jayvee> and obviously edit the settings on your router. :)
[12:01] <Stargaze> not better with port 81 :(
[12:01] <merlijn-> Hi, I'm trying to get 10.04 alpha3 to work in a VMWare ESX setup, however after installation it won't boot and grub complains of "error: no such disk"
[12:02] <jayvee> did you run “service apache2 restart”?
[12:02] <jayvee> Stargaze: did you run “service apache2 restart”?
[12:03] <jayvee> merlijn-, does grub actually load?
[12:03] <jayvee> like, do you see the kernel list, and so on?
[12:04] <merlijn-> jayvee: nope, it just says Grub loading...
[12:04] <merlijn-> then it gives me the error
[12:04] <jayvee> merlijn-: have you tried changing the disk type? is it scsi or ide? lsi logic or buslogic?
[12:05] <merlijn-> jayvee: it is a SCSI disk currently and VMWare bios is recognizing it
[12:06] <jayvee> try changing to ide just temporarily
[12:06] <jayvee> or doesn't ESX support ide disks?
[12:06] <jayvee> I don’t remember. The server in my garage running ESX hasn’t been powered on since 2007 because it chewed too much power. :-)
[12:06] <Stargaze> modified both files and the router to *81, but not better
[12:06] <Stargaze> darn ISP
[12:07] <jayvee> Stargaze, you restarted apache?
[12:07] <jayvee> run sudo fuser -v 81/tcp
[12:08] <Stargaze> oops :s
[12:09] <Stargaze> ok, done
[12:10] <Stargaze> not better
[12:10] <jayvee> Stargaze, does fuser say anything is listening on port 81?
[12:10] <Stargaze> I just type fuser to find out?
[12:10] <jayvee> sudo fuser -v 81/tcp
[12:11] <Stargaze> it did not say anything
[12:11] <jayvee> then apache isn't configured correctly
[12:11] <Stargaze> ah
[12:11] <jayvee> did you edit the config files?
[12:12] <Stargaze> i edited /etc/apache2/ports.conf and the default in sites-available
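For the record, the two file edits amount to something like this (stock layout assumed):

```
# /etc/apache2/ports.conf
NameVirtualHost *:81
Listen 81
```

plus changing the opening line of /etc/apache2/sites-available/default to `<VirtualHost *:81>`, then `sudo service apache2 restart`; after that, `sudo fuser -v 81/tcp` should list apache2.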
[12:13] <merlijn-> jayvee: okay tried different controllers without luck, to switch to IDE disks I have to do a complete reinstall (can't transition a virtual disk from SCSI to IDE)
[12:13] <jayvee> merlijn-: ouch
[12:14] <jayvee> ubuntu must support the disk if it installed to it
[12:14] <jayvee> maybe grub just doesn't
[12:14] <jayvee> but grub only uses the bios
[12:14] <merlijn-> yea, grub2 is a pain :(
[12:14] <jayvee> grub doesn't care what disk type it is
[12:14] <jayvee> oh, it's grub2!?
[12:14] <jayvee> ouch
[12:14] <jayvee> ouch ouch
[12:14] <jayvee> you said it bro
[12:14] <merlijn-> iirc 10.04 uses grub2 by default
[12:14] <merlijn-> to boot from ext4 partitions
[12:15] <jayvee> even the simplest things are so complicated in grub2
[12:15] <jayvee> like moving Windows to the top of the list
[12:15] <jayvee> you mv 40_os-prober to 0001_os-prober
[12:15] <merlijn-> might be the right time to dust off LILO of GPXE :P
[12:15] <merlijn-> or*
[12:15] <jayvee> and pray that they don't update the grub-pc package, because /etc/grub.d/40_os-prober is owned by the grub-pc package
[12:16] <jayvee> heh
[12:16] <jayvee> or grub1
[12:16] <jayvee> grub 0.97, that is
[12:16] <merlijn-> grub1 will not boot ext4 unless you apply some patches
[12:16] <jayvee> which ubuntu have done
[12:16] <jayvee> thankfully
[12:17] <jayvee> the grub1 in 9.04 and up support ext4
[12:17] <merlijn-> I wouldn't really consider those patches stable :P
[12:17] <Stargaze> merlijn-: http://kezhong.wordpress.com/2009/07/02/converting-ext2-filesystems-to-ext3ext4/
[12:17] <Stargaze> (ik ben ook nederlandstalig)
[12:17] <merlijn-> hmm, could have sworn that 9.10 boots off ext2 with root on ext4
[12:17] <pts_> Any comments on what would be the best password backend for samba against AD on Server 2008 R2: idmap_ldap or idmap_ad? I need it to give the least possible user management in the long run.
[12:18] <merlijn-> Stargaze: I have no intention to migrate my filesystem, thank you
[12:18] <jayvee> merlijn-: nope, in fact my /boot is ext3 because I installed this system during the 9.04 alphas, before grub1 supported ext4
[12:19] <jayvee> but subsequent systems I installed definitely used grub1 + ext4
[12:19] <jayvee> including a few VMs on here
[12:20] <merlijn-> hmm, http://news.softpedia.com/news/GRUB-2-The-New-Boot-Loader-in-Ubuntu-9-10-113671.shtml
[12:20] <merlijn-> looks like 9.10 was already using grub2
[12:21] <jayvee> it is
[12:22] <jayvee> merlijn-: why not chroot in and install grub1 instead?
[12:22] <merlijn-> funnily enough, 9.10 just boots right away with the same config on the vmware ESXi cluster
[12:22] <jayvee> or maybe re-run grub-install
[12:22] <merlijn-> jayvee: too much hassle for a release that's in alpha stage :)
[12:22] <jayvee> oops, yeah, forgot
[12:23] <merlijn-> anyway, time to grab some lunch - thanks for your help jayvee
[12:23] <jayvee> time for me to grab some shuteye :-)
[12:24] <jayvee> right after I'm done testing these images, anyway
[12:24] <merlijn-> good night then :)
[12:24] <jayvee> :)
[12:24] <Jeeves_> Is anyone else having issues with Lucid, server and X-forwarding?
[12:32] <pmatulis> Jeeves_: you'll need to be more specific
[12:39] <Jeeves_> pmatulis: I've got xauth and ssh installed
[12:39] <jayvee> and doesn’t Jeeves normally have the *answers*, not the questions? ;-)
[12:39] <jayvee> *we’re* supposed to Ask Jeeves. ;-)
[12:39] <Jeeves_> 'normally', if you login, you get a message like '.Xauthority created'
[12:40] <Jeeves_> On Karmic, that's broken as you need to add '-4' to the sshd-options, otherwise X-forwarding doesn't work
[12:40] <Jeeves_> On lucid, it doesn't seem to work at all
[12:40] <jayvee> are you referring to “ssh -X”? because I use that all the time.
[12:41] <Jeeves_> jayvee: Yes, that's what I'm referring to
[12:41] <jayvee> admittedly not on lucid, but it works on karmic no trouble
[12:41] <jayvee> I have no idea what -4 even does, let alone have to use it.
[12:42] <jayvee> oh, right, forces IPv4
[12:42] <Jeeves_> jayvee: Like I said, I've got the issue with lucid ..
[12:42] <jayvee> of course
[12:42] <jayvee> um
[12:42] <jayvee> are you trying to connect via DNS name or IP address?
[12:42] <Jeeves_> How would that matter?
[12:42] <jayvee> because you could be suffering from the broken DNS forwarder problem
[12:42] <jayvee> on Karmic, that is
[12:43] <Jeeves_> 'broken DNS forwarder'?
[12:43] <Jeeves_> (logging in on ip doesn't help, btw)
[12:44] <jayvee> right
[12:44] <jayvee> hmm, not really sure
[12:45] <jayvee> I presume $DISPLAY is being set
[12:45] <jayvee> from within the ssh -X session, type "echo $DISPLAY"
[12:45] <jayvee> it should say localhost:10.0
[12:45] <Jeeves_> No, it isn't
[12:45] <Jeeves_> Also, xauth is being run
[12:45] <jayvee> try export DISPLAY=localhost:10.0
[12:45] <Jeeves_> Doesn't work
[12:45] <jayvee> tried it, I spose
[12:45] <jayvee> hmm
[12:45] <Jeeves_> I've been around long enough to try that stuff :)
[12:46] <jayvee> well I'm about to fall off this chair
[12:46] <jayvee> I need zzzz's :)
[12:46] <jayvee> good luck with your problem
[12:47] <Jeeves_> Failed to allocate internet-domain X11 display socket.
[12:47] <Jeeves_> debug1: x11_create_display_inet failed.
[12:53] <henkjan> Jeeves_: lucid ubuntu-desktop install x-forwarding works
[12:53] <Jeeves_> Got it
[12:54] <Jeeves_> henkjan: it's again the -4 switch, but somehow, /etc/init.d/ssh doesn't seem to pass that option to sshd
[13:02] <zul> morning
[13:05] <Jeeves_> Hi zul
[13:08] <Rada> Hello!
[13:08] <Rada> Has anyone ever tried bridging a bonded interface?
[13:09] <Rada> I'm largely unsuccessful...
[13:10] <Rada> when I try doing it through the interfaces conf file, my system crashes and starts coredumping to the point of being completely unusable (couldn't even log in, had to use the rescue cd)
[13:10] <Rada> http://ubuntu.pastebin.com/qU0wdAvn
[13:11] <Rada> ^ this got -server 9.10 to really fuck up
[13:11] <Rada> sorry uvirtbot, I wasn't talking to you.
[13:17] <Jeeves_> bridging a bonded interface?
[13:17] <Jeeves_> Sounds yucky :)
[13:18] <Rada> :)
[13:19] <Rada> I've had good luck doing this with vmware-server... but now I'm trying to convert to kvm
[13:20] <Rada> and kvm won't let me "just use" my bonded interface
[13:24] <Rada> Yay! Got it working.
[13:24] <Jeeves_> ok, so ssh in Lucid is broken
[13:26] <Jeeves_> cjwatson: Awake?
[13:30] <Stargaze> about my DynDNS issue, i need a Business subscription for my personal webserver
[13:31] <cjwatson> Jeeves_: mm?
[13:31] <cjwatson> Jeeves_: broken how?  works for me
[13:32] <cjwatson> actually, let me upgrade before saying that
[13:32] <Jeeves_> :)
[13:32] <cjwatson> I didn't change that much though
[13:32] <cjwatson> not user-visibly anyway
[13:32] <Jeeves_> cjwatson: I tried to restart sshd using /etc/init.d/ssh
[13:32] <cjwatson> DDTT
[13:32] <Jeeves_> Which seems to work, but actually doesn't
[13:32] <cjwatson> why not use the upstart job?
[13:33] <Jeeves_> Pick either one, but please don't use them both :)
[13:33] <cjwatson> I have no option.  /etc/init.d/ssh is for the benefit of people running sshd in a chroot, since upstart doesn't work there
[13:33] <cjwatson> use 'restart ssh' outside a chroot
[13:33] <Jeeves_> Well, this is very weird, if you ask me
[13:33] <cjwatson> and we need the upstart job for other things depending on it
[13:33] <cjwatson> once upstart works in a chroot, it'll be de-weirdified
[13:34] <Jeeves_> We've been using /etc/init.d for years, we think of something new (which is fine by me), but then we only finish it halfway, so we use two methods?
[13:34] <cjwatson> not my fault
[13:34] <cjwatson> feel free to file a bug on openssh saying that /etc/init.d/ssh should spot that you're using upstart and do the right thing
[13:34] <Jeeves_> cjwatson: Where should I configure the defaults for ssh ?
[13:34] <cjwatson> that would make sense, imo
[13:35] <cjwatson> Jeeves_: /etc/init/ssh.conf
[13:35] <cjwatson> or sshd_config of course
[13:35] <Jeeves_> argh
[13:35] <Jeeves_> So now we're back to editing configfiles that originate from packages?
[13:36] <cjwatson> it's a design feature of upstart that jobs are simple enough that editing them directly isn't going to create the sort of hideous conflicts that editing /etc/init.d/ssh used to, so we shouldn't need the /etc/default/ssh indirection layer
[13:36] <cjwatson> uh, that's nonsense
[13:36] <cjwatson> /etc/default/ssh originated from a package too
[13:36] <cjwatson> it was split out due to the complexity of editing /etc/init.d/ssh correctly
[13:37] <cjwatson> not in order to avoid editing conffiles
[13:37] <Jeeves_> But if I was to upgrade ssh now, would that end up in a message saying 'you changed a configfile' ?
[13:37] <cjwatson> not if /etc/init/ssh.conf didn't change
[13:37] <cjwatson> you'd have got such a message if I changed /etc/default/ssh in the package
[13:37] <cjwatson> so this is something of a red herring
[13:37] <Jeeves_> But it did (because ssh 'needs' the -4 switch to get X-forwarding to work)
[13:37] <cjwatson> I mean if it didn't change in the package
[13:38] <Jeeves_> ok. Well.
[13:38] <Jeeves_> If I may give Canonical some feedback:
[13:38] <cjwatson> you could also use 'AddressFamily inet' in /etc/ssh/sshd_config, and avoid having to edit /etc/init/ssh.conf or /etc/default/ssh at all
[13:38] <Jeeves_> 1: Good work on upstart
[13:38] <cjwatson> which is almost certainly easier
[13:38] <Jeeves_> 2: Please don't mess up like you're doing now
[13:39] <cjwatson> I don't think we're messing up; I respectfully disagree
[13:39] <Jeeves_> You may.
[13:39] <Jeeves_> Me, as a user, think you're messing up :)
[13:39] <cjwatson> I have given you reasons, corrected your misunderstandings, and given you an alternative
[13:39] <Jeeves_> I, as a user, that is :)
[13:40] <cjwatson> I was also there when /etc/default/ was introduced in Debian, and I remember the reasons for it
[13:40] <cjwatson> and I truly don't think they apply nearly as strongly as they did
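cjwatson's sshd_config alternative, spelled out (this goes on the server; nothing in /etc/init or /etc/default needs touching):

```
# /etc/ssh/sshd_config
# bind IPv4 only, sidestepping the X11-forwarding socket failure on
# systems where IPv6 is only half configured
AddressFamily inet
```

followed by `restart ssh` (the upstart job) outside a chroot.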
[13:40] <zul> ttx: are you busy tomorrow (ha ha)
[13:40] <ttx> zul: ha ha... why
[13:41] <zul> ttx: wanna schedule the samba bug zapping thing tomorrow?
[13:41] <ttx> zul: no, I want to plug a bugday for triaging first
[13:41] <zul> ttx: ok sounds good
[13:41] <ttx> not sure when Pedro will be available for that
[13:42] <cjwatson> Jeeves_: I think it would be an excellent idea to help out people who try to use /etc/init.d/ssh (or invoke-rc.d) without realising that it's switched to upstart, and I would definitely appreciate a bug report for that
[13:43] <Jeeves_> cjwatson: So I will file it. But what's that issue with chrooting that still requires it?
[13:43] <Jeeves_> En why isn't that just fixed?
[13:43] <cjwatson> because it's extremely hard work in upstart
[13:43] <Jeeves_> s/En/And
[13:43] <Jeeves_> En == Dutch :)
[13:43] <cjwatson> many people want to run sshd in a chroot, for one reason or another
[13:43] <cjwatson> upstart can't yet manage services running in chroots
[13:44] <Jeeves_> it might, but this creates a lot of fuzziness
[13:44] <cjwatson> so /etc/init.d/ssh is there so that people can start it the old-fashioned way
[13:44] <Jeeves_> but you actually can't
[13:44] <Jeeves_> Because it's already running
[13:44] <Jeeves_> And it's not complaining
[13:44] <cjwatson> sure you can - just pick a different port
[13:44] <cjwatson> this is not a terribly unusual configuration
[13:45] <Jeeves_> It doesn't complain in any way.
[13:45] <cjwatson> it probably does in auth.log
[13:45] <cjwatson> but are you talking about running /etc/init.d/ssh *outside* a chroot?
[13:46] <Jeeves_> I'm not doing anything fancy.
[13:46] <cjwatson> could you just say yes or no :)
[13:46] <Jeeves_> The default is outside a chroot? Then yes.
[13:46] <cjwatson> right.  then that's just part of the bug I asked you to file
[13:47] <cjwatson> there's no reason /etc/init.d/ssh couldn't spot that the service is being managed by upstart and pass requests through to it in that case, given that we have to dual-run for a while for other reasons
[13:47] <cjwatson> I'd be happy to make that change, I just need a reminder of it since I'm doing some other things at the moment
[13:48] <cjwatson> running inside a chroot is a more complicated case that we can't handle any other way at the moment, which is why we still need the init script - but we can make it less confusing
[13:52] <Jeeves_> Thanks
[13:52] <Jeeves_> bug 531912
[13:52] <Jeeves_> Also, do you have a clue why x-forwarding is broken, unless you disable ipv6?
[13:53] <Jeeves_> It's not really an issue on this specific box, but we're using ipv6 in production here. :)
[13:54] <cjwatson> I think there is a bug about that somewhere; I'll see if I can find time to deal with it before lucid
[13:55] <cjwatson> if you could get me ssh -vvv output from an affected system, that wouldn't hurt
[13:55] <Jeeves_> sure, got a bugnr where you want that in?
[13:56] <cjwatson> it *might* be bug 434799, but perhaps better to just file a new one
[13:56] <bogeyd6> ive been a member in launchpad since 2007, yet i have 0 karma points
[13:56] <cjwatson> bogeyd6: karma is related to recent activity
[13:56] <bogeyd6> yup
[13:56] <bogeyd6> hence my depression
[13:56] <cjwatson> bogeyd6: https://help.launchpad.net/YourAccount/Karma
[13:56] <cjwatson> ah :)
[13:57] <bogeyd6> when i got a new job i couldn't document anymore, but now it's changed a bit
[13:57] <cjwatson> also http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=422327
[13:57] <bogeyd6> i confirmed a bug this morning!
[14:03] <cjwatson> also https://bugzilla.mindrot.org/show_bug.cgi?id=1457
[14:03] <Jeeves_> cjwatson: I've added debug info to bug 434799
[14:03] <cjwatson> there is a patch there, but I would have to sit and think very hard about it despite its shortness :-)
[14:04] <Jeeves_> :)
[14:04] <Jeeves_> 'disable ipv6'
[14:04] <Jeeves_> is that it? :P
[14:04] <cjwatson> we need to make ipv6 work, not disable it
[14:04] <cjwatson> and no, that isn't the patch :)
[14:04] <cjwatson> it sounds as if it happens on systems that have ipv6 sort of halfway configured
[14:04] <Jeeves_> That's what Fabio would do! :)
[14:05] <Jeeves_> cjwatson: I've got machines to debug, if needed :)
[14:06] <cjwatson> I've reproduced it
[14:06] <cjwatson> 'sudo ip addr del ::1 dev lo' is sufficient to reproduce the problem
[14:06] <cjwatson> ('sudo ip addr add ::1 dev lo' to restore previous state on my machine)
[14:07] <Jeeves_> Hmm
[14:07] <Jeeves_> But why is that ::1 gone?
[14:07] <cjwatson> well, my machine has ipv6 configured
[14:07] <cjwatson> yours perhaps doesn't
[14:07] <cjwatson> s/machine/network/ perhaps more relevantly
[14:08] <Jeeves_> Mine too, but the server hasn't
[14:08] <cjwatson> right, this is code that runs on the server
[14:08] <Jeeves_> I know.
[14:08] <cjwatson> the thing I'm worried about is that this problem arose from a security fix
[14:08] <Jeeves_> But who removes the ::1 from lo?
[14:08] <cjwatson> specifically CVE-2008-1483
[14:08] <cjwatson> Jeeves_: if you don't have IPv6 configured, it might simply not ever be added
[14:09] <cjwatson>   * Patch from Red Hat / Fedora:
[14:09] <cjwatson>     - CVE-2008-1483: Don't use X11 forwarding port which can't be bound on
[14:09] <cjwatson>       all address families, preventing hijacking of X11 forwarding by
[14:09] <cjwatson>       unprivileged users when both IPv4 and IPv6 are configured (closes:
[14:09] <cjwatson>       #463011).
[14:09] <cjwatson> thanks, uvirtbot, you can stop now
[14:10] <Jeeves_> :)
[14:10] <Jeeves_> I don't see many non-ipv6 hosts nowadays :)
[14:10] <cjwatson> I *think* that ignoring EADDRNOTAVAIL wouldn't reintroduce the security hole
[14:12] <cjwatson> the security hole was that you could bind to a port using one address family and sshd wouldn't mind as long as it could bind using the other address family, and then you could capture X traffic
[14:12] <Jeeves_> Hmm
[14:12] <cjwatson> but that would've been EADDRINUSE or something
[14:13] <cjwatson> Damien upstream has a point that it's sort of weird for getaddrinfo to give you addresses you can't bind to
[14:13] <Jeeves_> Yeah
[14:14] <Jeeves_> I'm sorry, but I'm not into development that much that I can make up my mind about that :)
[14:15] <cjwatson> I'm thinking out loud
[14:16] <Jeeves_> ok :)
[14:34] <hagedorn> hey, which version of ubuntu should i use for xen as dom0?
[14:35] <cjwatson> https://bugzilla.mindrot.org/show_bug.cgi?id=1356 is a clearer and better-written upstream bug for the above
[14:35] <Stargaze> using nmap, what does it mean if port 80 is 'filtered'?
[14:36] <cjwatson> there's a comment at the end about a race condition which is a bit worrying ...
[14:36] <cjwatson> Stargaze: google for 'nmap filtered', and the first hit explains it
[14:37] <cjwatson> (http://nmap.org/book/man.html)
[14:39] <bogeyd6> ja herd
[14:40] <Jeeves_> cjwatson: Ehm. That would be the case if a machine is booting and it already has an ipv6 address and not an ipv4 address?
[14:42] <cjwatson> er, something like that.  I'm going to follow up there next time I have my normal browser booted, though, as I'd have thought having getaddrinfo return only bindable addresses would have the same problem
[14:42] <cjwatson> but back to kernel hacking for now
[14:43] <bohne> hi, what's the role of "ubuntu enterprise cloud" when using Amazon's EC2?
[14:44] <bohne> hm ok, this "Enterprise" is private cloud. EC2 is public cloud.
[14:50] <smoser> bohne, "UEC" is software that allows you to manage your own hardware as a "cloud"
[14:51] <smoser> it is API compatible with amazon's EC2
[14:51] <smoser> this means that you can develop appliances on your internal cloud, and move to ec2
[14:51] <bohne> smoser: ok, so i can use the same mgmt tools?
[14:51] <smoser> or develop on ec2 and move internal
[14:51] <smoser> right
[14:52] <smoser> a tool that works against the amazon web service api can be used against the UEC by simply changing the "end point" that the tool talks to
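A sketch of what "changing the end point" looks like in practice. The variable name below is the one EC2-API tools such as euca2ools conventionally read; the internal hostname is hypothetical, and port 8773 with the `/services/Eucalyptus` path is the usual Eucalyptus front-end default:

```shell
# Point an EC2-API-compatible tool at Amazon:
export EC2_URL=https://ec2.amazonaws.com
# ...or at your own UEC/Eucalyptus cloud controller instead
# (hypothetical host; 8773 is the default Eucalyptus API port):
export EC2_URL=http://cloud.internal.example.com:8773/services/Eucalyptus
```

The same tool binaries then talk to whichever cloud the endpoint names.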
[14:53] <bohne> smoser: when installing an Enterprise Cloud (=private cloud), is this a machine which can host Xen-based VMs?
[14:53] <smoser> :-( no.
[14:53] <bohne> smoser: physical machines?
[14:53] <smoser> UEC uses kvm for virtualization
[14:54] <smoser> so if you've got a xen based machine, it will likely need some changes to run on UEC
[14:54] <bohne> smoser: ah ok, i thought amazon is xen based
[14:54] <smoser> the kernel/ramdisk is the big thing
[14:54] <smoser> amazon is xen based
[14:55] <bohne> smoser: but i read somewhere that it is possible to transfer an image from ec2 to a private cloud?
[14:55] <smoser> bohne, yes, it is, "mostly".  there are some things that will have to change.
[14:56] <smoser> i believe that nijaba has a list somewhere of what all needs to be changed.
[14:56] <smoser> bohne, for the UEC images (the "ubuntu" images on ec2) we make an effort to have them "just work"
[14:56] <smoser> the goal being if you started with one of those, your migrate step is minimal
[14:57] <bohne> smoser: ah, that means an ubuntu server image i can transfer, but a debian image is more difficult?
[14:58] <smoser> it's not terribly difficult.
[14:58] <smoser> but, yes. the very least you have to get a non-xen kernel and then get the modules installed into the image
[14:58] <smoser> that make sense ?
[15:00] <bohne> smoser: i think i understand it
[15:02] <smoser> nijaba, ping. i think you had a list of these things ?
[15:02] <nijaba> smoser: otp...
[15:03] <nijaba> smoser: what list are you talking about?
[15:03] <smoser> checklist of things to do to migrate from ec2 to uec
[15:03] <smoser> to do to the image
[15:03] <bohne> i don't need it sorry, this is just an evaluation
[15:03] <nijaba> smoser: I barely started investigation...  never completed
[15:04] <smoser> hmm... fair. nurmi told me it was somewhere on eucalyptus.com but i cant find it.
[15:04] <nijaba> smoser: afaik changes are only needed with pre 9.10 images
[15:05] <smoser> nijaba, not just ubuntu, but "generic" images
[15:05] <nijaba> smoser: in that case, I don't
[15:05] <smoser> (why anyone would use such a thing, i can't understand :)
[15:05] <blackxored> d
[15:11] <hggdh> kirkland: I re-opened bug 531445, it started to fail again
[15:12] <hggdh> no, wrong bug, sorry
[15:12] <hggdh> bug 531455
[15:15] <Roxyhart0> hi there, i have a NAT/router server and an email server. i want any external traffic arriving at the NAT addressed to 203.x.x.x (the external email address) to be forwarded to my email server's internal IP. does somebody know how to do that?
[15:16] <bohne> smoser: when using ubuntu UEC, is it possible to use a plain debian guest? or only ubuntu server? i'm not sure.
[15:17] <smoser> bohne, absolutely
[15:17] <smoser> or fedora, or ....
[15:17] <bohne> smoser: ok thanks
[15:17] <smoser> there is work towards supporting windows guests
[15:17] <hggdh> soren: can you give me upload rights to ~soren/autotest/*, or should I create my own branch?
[15:17] <bohne> smoser: is it simple to use debian?
[15:17] <cbrowne> Roxyhart0, iptables -t nat -A POSTROUTING --dest-address 203.x.x.x-203.y.y.y -j QUERY # I think? don't quote me on that one
[15:18] <smoser> bohne, is that a rhetorical question ?
[15:18] <smoser> :)
[15:18] <cbrowne> Roxyhart0, familiarise yourself with iptables anyway
[15:18] <smoser> bohne, if you have a working image, its no different
[15:18] <smoser> you just need filesystem-image, kernel, ramdisk
[15:18] <bohne> smoser: ok thanks
[15:19] <bohne> smoser: i only use virtualbox on desktop so far;)
[15:19] <bohne> bohne: and lots of root servers...
[15:19] <soren> hggdh: Just create one of your own.
[15:19] <Jeeves_> Is lucid supposed to try and mount nfs before starting statd? :)
[15:19] <soren> hggdh: (by branching mine, for instance)
[15:19] <Roxyhart0> thanks, i was trying many things but it doesn't work
[15:20] <hggdh> soren: will do, thanks
[15:20] <zul> wow people are actually using the php5 apport hook ;)
[15:21] <hggdh> soren: BTW, did you submit the step_file_generator.py to upstream? It is quite a cool idea...
[15:22] <soren> hggdh: I did not, no.
[15:22] <soren> hggdh: It's not up to my ready-for-upstream-submission-standards yet.
[15:24] <hggdh> soren: ah, OK. I only made a small change there, from print > stderr to logging.info()
[15:31] <hink> Anyone had experience with KSplice
[15:37] <bohne> smoser: i have another question;) on amazon ec2, is it simple to port a vm image from weak "hardware" to more powerful?
[15:38] <smoser> bohne, there is basically no difference
[15:38] <smoser> well, architecture
[15:38] <smoser> but other than that really not
[15:38] <bohne> smoser: it's possible and simple on UEC and amazon?
[15:38] <smoser> yeah... outside of arch.
[15:39] <smoser> your i386 image will not run on x86_64 instance in amazon
[15:39] <smoser> (i think that actually works in uec... but dont know)
[15:39] <bohne> smoser: ok, but apart from that it's simple
[15:39] <smoser> yeah, the differences other than that are really non-existent
[15:39] <smoser> unless you were *trying* to fail
[15:40] <Roxyhart0> hi cbrowne, what i want is for any external IP sending to the address 203.4.3.2 to go to the IP 172.16.0.4. so i have got this much but i don't know the rest. do you know about that?  iptables -A POSTROUTING --dest-address 203.x.x.x -j ??
[15:41] <skrite> hey gents, having some trouble with mysql replication. i have slave io running yes, but slave sql running no
[15:47] <RoAkSoAx> zul, do the hooks have to be with the copyright notice?? not really, right?
[15:47] <zul> RoAkSoAx: they should imho
[15:48] <RoAkSoAx> zul, ok i'll submit a hook for vsftpd in a bit for you to review, I'm just gonna test it first
[15:51] <zul> RoAkSoAx: cool beans
[15:52] <cbrowne> Roxyhart0, you want -t nat so it's in the network address translation table, and I think you want -j FORWARD or -j QUERY but use the iptables manpage for more information about iptables
[15:57] <Roxyhart0> i am using NAT, but with another external IP for email. it means the NAT uses 203.x.x.3 and email uses 203.x.x.9... i want any traffic that comes to 203.x.x.9 to be forwarded to 172.19.0.3, for example
[15:58] <tdn> I have a machine with two network adapters: an ethernet adapter and a wireless adapter. How can I turn the wireless adapter into an access point?
[16:02] <cbrowne> Roxyhart0, yes, that's what iptables does
[16:02] <cbrowne> Roxyhart0, "man iptables"
[16:03] <RoAkSoAx> zul, btw.. by setting STAGING=1 or APPORT_STAGING=1 it still doesn't work with the staging server of lp
[16:03] <Roxyhart0> i did, i can't figure it out, that is why i'm asking here
[16:05] <zul> RoAkSoAx: you should be able to use it without the STAGING=1 and go through the motions without submitting the apport report
[16:06] <RoAkSoAx> zul, right but i would like to submit it and see what is actually attaching
[16:06] <zul> RoAkSoAx: hmm...not sure then
[16:19] <RoAkSoAx> zul, now im getting "This is not a genuine Ubuntu package"
[16:19] <RoAkSoAx> any ideas why?
[16:19] <zul> RoAkSoAx: can you paste your python script somewhere/
[16:22] <RoAkSoAx> zul, http://pastebin.ubuntu.com/388362/
[16:23] <Roxyhart0> Hi, can somebody help me... i need to forward emails coming to an external address on to an internal address. does somebody know how to do that?
[16:25] <zul> RoAkSoAx: i put the script as source_vsftpd.py in /usr/share/apport/package-hooks and didnt have that problem
[16:28] <RoAkSoAx> zul, it tells me apport-cli: error: /usr/share/apport/package-hooks/source_vsftpd.py does not belong to a package. and to avoid that i'm creating the deb, installing it, and trying the hook
[16:29] <zul> well yeah you need the package to be installed
[16:39] <skrite> hey all, need some help with a master slave replication config. Thought i had everything set up right, but still shows nothing in Slave_IO_State
[16:42] <RoAkSoAx> zul, same thing
[16:42] <zul> RoAkSoAx: can you put the package up somewhere?
[16:42] <zul> or your bzr branch
[16:45] <cbrowne> Roxyhart0, iptables -t nat -A PREROUTING -d [remoteip] -j DNAT --to-destination [localip]
[16:45] <cbrowne> Roxyhart0, I got that by READING THE MANPAGE like I told you to do earlier
[16:46] <zul> RoAkSoAx: i have to go to the doctors can you email me the details?
[16:46] <RoAkSoAx> zul, I will
[16:48] <cbrowne> Roxyhart0, when I tell you to rtfm it isn't because I'm lazy, it's because spoon-feeding you the answer isn't going to help anybody
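For readers landing here later, a minimal sketch of the inbound forward being discussed, using the placeholder addresses from the conversation. Run as root; note that DNAT rules belong in the nat table's PREROUTING chain (iptables rejects DNAT in POSTROUTING):

```shell
# Allow the box to forward packets at all:
sysctl -w net.ipv4.ip_forward=1
# Rewrite the destination of SMTP traffic arriving for the public mail
# address (placeholder 203.x.x.9) so it goes to the internal mail server:
iptables -t nat -A PREROUTING -d 203.x.x.9 -p tcp --dport 25 \
    -j DNAT --to-destination 172.19.0.3
# Let the rewritten packets through the FORWARD chain:
iptables -A FORWARD -d 172.19.0.3 -p tcp --dport 25 -j ACCEPT
```

Substitute real addresses for the placeholders; 203.x.x.9 as written is not a valid IP.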
[17:01] <goose> is it okay to put my server's real IP and FQDN in /etc/hosts ? All I have in there now is localhost
[17:02] <ivoks> of course
[17:02] <goose> just wanted to make sure it wouldn't set my server on fire :p
[17:02] <goose> I think some sendmail errors might be stemming from that
[17:07] <goose> thanks ivoks
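What goose describes is routine; a sketch with illustrative values (192.0.2.0/24 is the reserved documentation range — use your server's real IP and FQDN):

```shell
# Append the server's real IP and FQDN to /etc/hosts. Having the FQDN
# resolvable locally is exactly what keeps sendmail/postfix from stalling
# while working out the machine's canonical name:
echo "192.0.2.10 myserver.example.com myserver" | sudo tee -a /etc/hosts
```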
[17:24] <Neoteric> so does anyone use amazon ec2? and or know how to create custom AMIs based off karmic?
[17:33] <TeTeT> kirkland: just installing another lucid server on the new kernel, just to make sure it was not a one time thing
[17:33] <kirkland> TeTeT: cool, thanks
[17:33] <nxvl> kirkland: i just updated to lucid and noticed that there is an annoying @ everytime there is activity in a byobu 'tab' how do i disable it?
[17:33] <ph8> hi all - is there a way for me to automount a USB drive plugged into the server?
[17:33] <ph8> * into my server :p
[17:34] <nxvl> kirkland: and, is there a way to only enable that for 1 tab?
[17:34] <macno> hi, I'm trying testdrive but when virtualbox starts, it hits 100% CPU and does nothing
[17:34] <kirkland> nxvl: echo "defmonitor off" >> ~/.screenrc
[17:34] <nxvl> kirkland: thnx
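To cover the per-tab half of nxvl's question: GNU screen (which byobu wraps) can toggle monitoring per window, assuming the default Ctrl-A escape key. A sketch:

```shell
# Turn activity monitoring ("@" flags) off by default for all windows:
echo "defmonitor off" >> ~/.screenrc
# Then, inside the one byobu/screen window you DO want watched,
# press Ctrl-A M — screen's per-window "monitor" toggle.
```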
[17:52] <Pupeno> is there a command that will output some info about certs, keys, public keys, etc of those for SSL?
[17:55] <TeTeT> kirkland: second install went fine too
[17:57] <kirkland> TeTeT: okay; good data points, thanks
[18:15] <sherr> Pupeno: openssl has a lot of sub-commands, some of which output certificate details etc. See man openssl (and man x509 etc.)
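A concrete example of the sub-commands sherr means, self-contained by generating a throwaway self-signed cert first (filenames under /tmp are illustrative):

```shell
# Generate a throwaway self-signed cert and key to inspect:
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
    -out /tmp/demo.crt -days 1 -subj "/CN=demo.example.com"
# Dump the full certificate details (issuer, subject, validity, extensions):
openssl x509 -in /tmp/demo.crt -noout -text
# Just the subject and expiry date:
openssl x509 -in /tmp/demo.crt -noout -subject -enddate
# Sanity-check a private key:
openssl rsa -in /tmp/demo.key -noout -check
```

`openssl s_client -connect host:443` is the analogous tool for inspecting a cert a live server presents.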
[18:17] <BulleTh0> I have a subnet, 62.231.69.56/29, routed behind 86.122.121.252. On the server, I have 86.122.121.253 on eth0 and .252 on eth0:0. How do I get internet from the server? I tried, on a windows machine connected through a switch to the server, to put IP: 62.231.69.58 with netmask 255.255.255.248, gateway 62.231.69.56 (server, eth0:1). Do I need an extra netcard to put .232, or is it just a software issue?
[18:29] <bogeyd6> BulleTh0, it seems you have a routing issue
[18:30] <bogeyd6> Unless your switch also acts as a router, what you are trying to do, at least on the face of it, is impossible
[18:30] <BulleTh0> No.. the switch is just a switch.
[18:30] <bogeyd6> Your eth0 can't have two subnets working on it
[18:31] <BulleTh0> The .253 doesn't have a subnet.
[18:34] <bogeyd6> I.e. you can't be on 62.231.69.59/29 and trying to go to 86.122.121.252 without a router
[18:34] <bogeyd6> BulleTh0, ^^
[18:35] <BulleTh0> I've put as aliases ips from that subnet on the server and they work.
[18:36] <BulleTh0> But when I put IPs on network computers they don't work.
[18:38] <bogeyd6> BulleTh0, http://www.sangoma.com/support/tutorials/tcp_ip.html
[18:38] <bogeyd6> alias ips
[18:39] <majuk> BulleTh0, have you enabled ipforwarding and NATing on your server's eth0?
[18:39] <bogeyd6> BulleTh0, if there is a way to make it work without routing I am unfamiliar with it
[18:39] <BulleTh0> majuk, I don't know. How do I check ?
[18:40] <majuk> BulleTh0, Then you haven't. You're going to have to for this kind of a setup. The eth on your server isn't going to just KNOW to route those packets forward from your user net.
[18:41] <BulleTh0> Hmmm... things make sense.
[18:41] <BulleTh0> So .. I have to make a router out of my box.
[18:42] <majuk> BulleTh0, Precisely. But Linux already has router functions as part of its kernel. The toolset is called "IPTables"
[18:44] <BulleTh0> And I can have static IP addresses on each network computer?
[18:46] <majuk> BulleTh0, Yes. What you're going to do, ultimately, is tell your server "I want this block of IP addresses NAT'd onto this WAN address" Then you can assign any IP address in that range to your hosts and they'll be NATd out to the internet.
[18:46] <BulleTh0> So.. this is what I have to follow? http://linuxpoison.blogspot.com/2008/01/how-to-enable-ip-forwarding.html
[18:47] <BulleTh0> Looks kinda must-have, but not enough :))
[18:47] <majuk> Yes, that's the forwarding. But you also have to do NATing in iptables
[18:48] <majuk> http://tinyurl.com/rd57k
[18:49] <majuk> Check out the 'Masquerading' section
[18:49] <majuk> BulleTh0, ^^
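The masquerading section majuk points to boils down to something like this — a minimal sketch, assuming eth0 is the WAN-facing interface:

```shell
# Enable packet forwarding (to persist, set net.ipv4.ip_forward=1
# in /etc/sysctl.conf):
sudo sysctl -w net.ipv4.ip_forward=1
# NAT everything leaving via the WAN interface so LAN hosts share
# the server's public address:
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

With that in place, LAN hosts just point their default gateway at the server's LAN address.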
[18:51] <BulleTh0> Geez.... I'm lost.
[18:53] <mdlueck> I have a 9.10 server that needed the -20 kernel update at a bad time... I was just getting it set up, then to shift the network number, etc... Applied it anyway, now the server does not boot to the login prompt. Purged off the packages I was working on setting up, still no login prompt. Suggestions short of a reload?
[18:59] <bogeyd6> mdlueck, any log activity?
[18:59] <mdlueck> bogeyd6: Logs end eerily quiet, no clue...
[19:00] <bogeyd6> mdlueck, so really we have no idea what is going on?
[19:00] <mdlueck> Correct
[19:00] <mdlueck> tail of messages and syslog give no clues
[19:00] <bogeyd6> mdlueck, Ctrl+Alt+F1
[19:01] <mdlueck> Thought perhaps since cups / samba / dhcpserver were not yet configured - just on the server - that perhaps one of those were stalling the boot process
[19:01] <bogeyd6> we need the dmesg log
[19:02] <mdlueck> ctrl-alt-f1 shows the boot console, c-a-f2 is how I logged in to purge back off cups / samba / dhcpserver, reboot, etc...
[19:02] <BulleTh0> I have no idea how to set up the server :)
[19:02] <bogeyd6> mdlueck,  sudo nano /etc/default/bootlogd then change BOOTLOGD_ENABLE=No to YES
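The edit bogeyd6 describes, as a non-interactive one-liner plus the follow-up steps:

```shell
# Enable the boot logger, reboot, then read the captured boot output:
sudo sed -i 's/^BOOTLOGD_ENABLE=.*/BOOTLOGD_ENABLE=Yes/' /etc/default/bootlogd
sudo reboot
# ...after the reboot:
less /var/log/boot
```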
[19:02] <mdlueck> OK, let me see if it comes up far enough to let me ssh to it.
[19:03] <mdlueck> brb
[19:05] <mdlueck> in via ssh from my desk, next what...
[19:07] <mdlueck> I just enabled bootlogd, will IPL the box
[19:09] <bogeyd6> mdlueck, looking for a reboot to see what is hanging up
[19:11] <mdlueck> bogeyd6: where does bootlogd log to? I will check the server console. ps aux shows me I had started setting up djbdns as well, which I forgot to purge back off.
[19:12] <bogeyd6> sorry bub
[19:12] <bogeyd6> you're going all over the board for me to handle it
[19:12] <mdlueck> bogeyd6: stall at the usual spot on the console
[19:12] <bogeyd6> you should have enabled the boot log, restarted and checked /var/log/boot
[19:12] <mdlueck> OK, will check /var/log/boot
[19:13] <mdlueck> cat /var/log/boot
[19:13] <mdlueck> (Nothing has been logged yet.)
[19:15] <bogeyd6> hmm mdlueck
[19:22] <mdlueck> bogeyd6: also purged djbdns which was installed and not yet configured
[19:23] <bogeyd6> you enabled the bootlog and it didn't log anything mdlueck
[19:24] <mdlueck> bogeyd6: Did not seem like it did anything. I copy/pasted the results.
[19:25] <bogeyd6> mdlueck, well im stumped
[19:25] <mdlueck> bogeyd6: I just noticed a service which is not completely starting.
[19:25] <mdlueck> So I will also purge that package off.
[19:28] <ubuntuNewBe> hi, anybody here to help with ubuntu servers?
[19:28] <mdlueck> bogeyd6: That did it - login prompt at the server console! PTL!
[19:28] <lucid_interval> what help do you need?
[19:29] <ubuntuNewBe> I had a question regarding postfix + dovecot setup.  Would like to know if anybody here can help.
[19:29] <mdlueck> ubuntuNewBe: Sure, my prob is solved, so what may I assist you with
[19:31] <ubuntuNewBe> I am running server 9.10, and followed the guide on this page: https://help.ubuntu.com/community/MailServer
[19:31] <ttx> kees: around ?
[19:31] <ubuntuNewBe> to setup postfix + dovecot
[19:31] <mdlueck> ubuntuNewBe: Sorry, never have touched those packages
[19:31] <kees> ttx: hello!
[19:31] <ttx> kees: hey ! Can I bribe you into a quick C advice ?
[19:31] <lucid_interval> ubuntuNewBe: OK... go on..
[19:32] <kees> ttx: sure thing, what's up?
[19:32] <ttx> kees: on https://bugs.launchpad.net/ubuntu/+source/eucalyptus/+bug/531899
[19:32]  * kees reads
[19:32] <ubuntuNewBe> so I setup postfix first without problem, then setup dovecot without problems
[19:32] <ttx> I fixed it like this: http://bazaar.launchpad.net/~ttx/eucalyptus/defunct-fix/revision/940
[19:32] <ttx> kees: which involved creating an avahi  timeout callback
[19:32] <ubuntuNewBe> then went back to the postfix page and scrolled to the bottom where it says setup postfix+dovecot+sasl
[19:33] <ubuntuNewBe> followed all instructions without problems
[19:33] <ttx> kees: was wondering if there wasn't a simpler way out
[19:33] <ttx> kees: the parent process doesn't care if/when the child processes end
[19:33] <ubuntuNewBe> now when connecting to my server via thunderbird from a different machine, it finds the imap +smtp server with starttls without problems
[19:34] <ubuntuNewBe> however when thunderbird asks me to verify unsigned certificates, I get weird certs
[19:34] <ubuntuNewBe> not the ones that I generated during the postfix part of the tutorial?
[19:34] <lucid_interval> ubuntuNewBe: what do you mean weird certs?
[19:34] <bogeyd6> ubuntuNewBe, that does happen when you use self signed certs
[19:35] <lucid_interval> ubuntuNewBe: did you link the same certs into Dovecot?
[19:35] <kees> ttx: usually processes spawning asynchronous children will register a SIGCHLD handler and perform a loop until waitpid(-1, &status, WNOHANG) == 0
[19:35]  * ttx just spotted an error on line 139
[19:35] <ubuntuNewBe> when looking at the certs that thunderbird gives me, they are not the ones that I generated during the postfix part of the tutorial, as they do not have my name/location/email etc...
[19:35] <lucid_interval> ubuntuNewBe: At which stage of using TB do you get these errors (checking mail or sending mail)?
[19:36] <ttx> kees: hmm, any example of that somewhere ?
[19:36] <kees> ttx: optionally, another way to handle this is to have the child-spawner do a double-fork with setsid to disassociate completely from the parent.
[19:36] <ubuntuNewBe> well, the account setup is without problems, I get the first cert when i first try to check mail, and then I get the 2nd cert (smtp) when trying to send a mail
[19:36] <ubuntuNewBe> in fact the first cert when checking mail for the 1st time (imap) is correct, has my name/email/location etc....
[19:36] <kees> ttx: which is probably the least code changes.
[19:37] <ubuntuNewBe> however when i try to send mail for the 1st time (smtp) I get a blank cert without my correct info
[19:37] <ttx> kees: ack.
[19:37] <kees> ttx: http://www-theorie.physik.unizh.ch/~dpotter/howto/daemonize
[19:37] <ubuntuNewBe> I generated the certs 2 times to see just to make sure
[19:38] <kees> ttx: oh, I guess it's not a double-fork, just a call to setsid().  even less code to change.  :)
[19:38] <lucid_interval> ubuntuNewBe: did you do the steps to configure postfix to use the certs you generated?
[19:38] <ubuntuNewBe> Yes, I did
[19:39] <ttx> kees: too bad you're so far away TZ-wise, that would have spared me that avahi research :)
[19:39] <ubuntuNewBe> I can try doing that part again making sure I generate the correct certs and put them in the correct locations
[19:39] <kees> heh
[19:39] <lucid_interval> ubuntuNewBe: In particular can you check the following lines in /etc/postfix/main.cf:
[19:39] <ttx> kees: thx, will fix tomorrow.
[19:39] <lucid_interval> ubuntuNewBe: smtp_tls_note_starttls_offer = yes
[19:39] <lucid_interval> smtpd_tls_key_file = /etc/ssl/private/smtpd.key
[19:39] <lucid_interval> smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
[19:39] <kees> ttx: cool; glad I could help :)
[19:39] <lucid_interval> smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
[19:40] <ttx> kees: I suck at C.
[19:40] <lucid_interval> ubuntuNewBe: make sure the files referenced are the correct ones.
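A quick way to confirm what Postfix is actually using, rather than what you think main.cf says:

```shell
# Print the effective TLS settings from the live Postfix configuration:
postconf smtpd_tls_cert_file smtpd_tls_key_file smtpd_tls_CAfile
```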
[19:40] <eekeek> Xubuntu 9.10 server. mod_rewrite enabled and as far as I can tell it is working with a .htaccess file. I want to map to lowercase urls. Do I put the rewrite instructions in the httpd.conf file?
[19:41] <ubuntuNewBe> lucid, I checked those lines and the lines seem correct, however, I will double check generating the certs and placing them in the right folders.
[19:42] <lucid_interval> ubuntuNewBe: actually, all you need to do is edit /etc/postfix/main.cf to ensure the entries point at the files you have already generated
[19:43] <kees> ttx: hehe.  I attribute my C skills to reading everything W. Richard Stevens ever wrote.
[19:43] <ubuntuNewBe> my concern was, how does dovecot handle the certs? do I need to specify the certs in dovecot.conf or do I just need to worry about the certs in main.cf?
[19:43] <ttx> kees: life is too short.
[19:43] <kees> :)
[19:44] <ttx> kees: my knowledge stops at format string vulnerabilities, somehow
[19:44] <lucid_interval> You need to specify the certs in dovecot.conf (also). But since the mail check is OK from Thunderbird, I am presuming the dovecot setup is OK. dovecot is an IMAP server; it's postfix that is the SMTP server used for sending mail
[19:45] <ubuntuNewBe> okay, because I used https://help.ubuntu.com/community/PostfixDovecotSASL to setup postfix+dovecot sasl and nowhere on this page does it say to specify certs in dovecot
[19:46] <mdeslaur> kirkland: fyi, I just uploaded changes to virt-manager and virtinst that change the way keymaps are handled. Basically, now by default no keymap will get set when qemu is being used. If you hear of any problems, let me know.
[19:46] <ubuntuNewBe> so the only place I am specifying certs is main.cf
[19:52] <lucid_interval> ubuntuNewBe: you need to click through for the detailed instructions on Dovecot - see https://help.ubuntu.com/community/Dovecot . Search for SSL
[19:54] <lucid_interval> ubuntuNewBe: if you didn't do (change) this in dovecot, I am not clear how your generated certificates are appearing when you CHECK mail
[19:55] <ubuntuNewBe> lol, okay i knew it didn't make sense, thanks for the help.
[19:57] <ubuntuNewBe> so for these lines which cert files do I use?
[19:58] <ubuntuNewBe> ssl_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
[19:58] <ubuntuNewBe> ssl_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
[19:58] <ubuntuNewBe> ? is this different than the certs I setup for postfix?
[20:04] <ubuntuNewBe> also, I messed up, I get the correct cert when sending mail (smtp) but I get a blank cert when I first receive mail.  I had that backwards
[20:23] <cak054> can i install the server and cloud on one desktop
[20:31] <lucid_interval> ubuntuNewBe: you should (can) specify the same certs for the dovecot config also.
[20:31] <lucid_interval> ubuntuNewBe: you can generate separate certs, but I do not think there is any point
[20:32] <lucid_interval> ubuntuNewBe: remember that a cert refers to the public part (only) and a key refers to the private part (only)
[20:41] <ubuntuNewBe> lucid_interval, let me first thank-you for all your help.
[20:42] <ubuntuNewBe> lucid_interval, so using the previous examples, would it be okay to use ssl-cert-snakeoil.pem --> cacert.pem (from postfix)
[20:43] <ubuntuNewBe> and ssl-cert-snakeoil.key --> cakey.pem (also from postfix instructions) ?
[20:45] <ubuntuNewBe> once again I generated cacert.pem and cakey.pem from https://help.ubuntu.com/community/Postfix
[20:47] <lucid_interval> ubuntuNewBe: no... you never use the CA key - except to sign new CSRs or certificates.
[20:48] <lucid_interval> ubuntuNewBe: you need to generate a CSR (Certificate Signing Request) and sign a certificate using your newly created CA for this server.
[20:48] <lucid_interval> ubuntuNewBe: that server cert will have a public (cert) part and a private (key) part
[20:49] <lucid_interval> ubuntuNewBe: the ssl_cert_file and ssl_key_file (in Dovecot) should refer to these files
[20:50] <lucid_interval> ubuntuNewBe: similarly the smtpd_tls_key_file and smtpd_tls_cert_file in the postfix main.cf should refer to these two server cert / key files
[20:51] <lucid_interval> ubuntuNewBe: ONLY the smtpd_tls_CAfile entry in the postfix main.cf file should refer to the PUBLIC part of the CA certificate
[20:52] <lucid_interval> ubuntuNewBe: you can also refer this URL for more info on becoming a root CA and creating CSRs / certs: http://www.tldp.org/HOWTO/SSL-Certificates-HOWTO/
[20:55] <lucid_interval> ubuntuNewBe: another useful URL (linked on the Dovecot details page): http://www.debian-administration.org/articles/284
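The workflow lucid_interval describes, sketched with openssl. The filenames follow the Ubuntu Postfix guide already cited in this conversation (cacert.pem/cakey.pem for the CA, smtpd.key/smtpd.crt for the server); adjust to taste:

```shell
# 1. Generate the server's private key plus a certificate signing request:
openssl req -new -newkey rsa:2048 -nodes \
    -keyout smtpd.key -out smtpd.csr -subj "/CN=mail.example.com"
# 2. Sign the CSR with the CA created earlier (this is the ONLY place
#    the CA key is used):
openssl x509 -req -in smtpd.csr -CA cacert.pem -CAkey cakey.pem \
    -CAcreateserial -out smtpd.crt -days 365
# 3. Point Postfix (smtpd_tls_cert_file / smtpd_tls_key_file) and Dovecot
#    (ssl_cert_file / ssl_key_file) at smtpd.crt and smtpd.key;
#    smtpd_tls_CAfile gets cacert.pem.
```

Use the server's real hostname as the CN, since clients like Thunderbird compare it against the address they connected to.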
[21:04] <andriijas> is there a way to remove all packages that were installed after installation and purge all the settings for it?
[21:08] <ubuntuNewBe> lucid_interval, thank you again for all your help.  I will read the links you provided and try setting up the dovecot cert properly
[21:10] <hink> i uninstalled proftpd using apt-get autoremove proftpd
[21:11] <hink> i ran a update-rc.d -f
[21:11] <hink> on it
[21:11] <hink> and deleted the /etc/proftpd directory. Now when i reinstall using aptitude it doesn't put the scripts back in init.d
[21:11] <hink> am i doing something wrong
[21:13] <lucid_interval> hink: you didn't purge the config files for proftpd when you did the remove
[21:13] <hink> i ran an apt-get purge proftpd.... does that not take care of it lucid_interval
[21:13] <lucid_interval> hink: what you wanted was apt-get autoremove --purge proftpd
[21:13] <hink> i see
[21:13] <lucid_interval> hink: you can still do it (should automatically remove /etc/proftpd)
[21:14] <lucid_interval> hink: then do a re-install - you should get the init scripts
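The recovery sequence lucid_interval describes, spelled out:

```shell
# Remove proftpd together with its config files (the step that was missed):
sudo apt-get autoremove --purge proftpd
# Reinstall; the init scripts under /etc/init.d are recreated:
sudo apt-get install proftpd
```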
[21:17] <hink> lucid_interval: if I am installing proftpd as part of a script. Is there anyway to bypass this screen during install? http://grab.by/grabs/5dd50880e9ee19f003e91c40b2edd104.png
[21:21] <hink> lucid_interval: im thinking it has something to do with debconf-set-selections
[21:21] <hink> but i'm not sure
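hink's hunch is right — debconf-set-selections is the tool for skipping install-time dialogs. A hedged sketch: the debconf question name below is an assumption, not verified; dump the real one with `debconf-get-selections | grep proftpd` on a machine where proftpd is installed:

```shell
# Preseed the standalone-vs-inetd question, then install non-interactively.
# NOTE: the question name "shared/proftpd/inetd_or_standalone" is assumed.
echo "proftpd shared/proftpd/inetd_or_standalone select standalone" | \
    sudo debconf-set-selections
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y proftpd
```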
[21:33] <Znupi> Can someone help me properly install a mail server? I am able to install Postfix and send and receive emails (they get stored in ~/Maildir/), but that's about it
[21:33] <soren> mathiaz: Do we have a plan for dealing with those? ^^
[21:33] <Znupi> I'd like to be able to fetch my email over POP3 and send e-mails through SMTP from a client
[21:33] <soren> Znupi: That's what a mail server does.. You need to be more specific if you want it to do more.
[21:33] <Znupi> (say, Thunderbird)
[21:34] <soren> Ah,
[21:34] <Znupi> But I have no idea whether I need to install something extra for the POP3 or not, or how to configure Postfix to accept (authed) SMTP requests
[21:34] <soren> Fetching mail over pop3 -> fetchmail.
[21:34] <Znupi> is fetchmail a server?
[21:35] <soren> Define server.
[21:35] <Znupi> it sounds like it "fetches" mail
[21:35] <soren> "I'd like to be able to fetch my email over POP3"
[21:35] <soren> Oh.
[21:35] <soren> I see what you mean.
[21:35] <soren> Ok, for that, you want dovecot.
[21:36] <ubuntuNewBe> Znupi https://help.ubuntu.com/community/MailServer worked for me
[21:36] <Znupi> ubuntuNewBe: I was reading the official docs
[21:36] <Znupi> thanks for the link though
[21:36] <soren> Znupi: Actually, there's a package called dovecot-postfix that should set up postfix and dovecot to work together.
[21:36] <soren> Znupi: Those are the official Ubuntu docs
[21:36] <ubuntuNewBe> Znupi, I did postfix first and then dovecot
[21:36] <soren> Znupi: (What ubuntuNewBe linked to, I mean)
[21:36] <Znupi> I see that on the official docs, but they never explain how things actually work
[21:37] <Znupi> I mean, for example, how do I configure thunderbird to work with my new server?
[21:37] <ubuntuNewBe> Znupi do the tutorials first
[21:37] <ubuntuNewBe> then thunderbird 3.0 will configure itself
[21:37] <Znupi> I see
[21:39] <Znupi> But I don't understand a few things. For example, they say "Configure Postfix to do SMTP AUTH using SASL", but they never explain WHY I'm supposed to do that
[21:40] <ubuntuNewBe> if you need secure access to web server
[21:40] <ubuntuNewBe> mail server*
[21:40] <Znupi> basically, this will help authenticate my email client when *sending* messages, correct?
[21:40] <Znupi> (sorry for newbishness)
[21:46] <lucid_interval> Znupi: yes. saslauth is for authentication of client requests to SEND mail
[21:46] <Znupi> ok, thanks
[21:46] <lucid_interval> Znupi: this is useful for clients like Thunderbird
[21:47] <lucid_interval> Znupi: if you want to accept authenticated relay requests from another server (never a good idea to allow open SMTP relay), you need to use a CLIENT certificate on the server requesting relaying. This does not use saslauth
[21:48] <Znupi> ok, so, please bear with me, the process is like this: mail comes from outside, postfix puts it in ~/Maildir/, then Thunderbird connects via POP3 to dovecot, which reads mail from Maildir/ and sends it back to thunderbird. When sending, mail goes directly through postfix; dovecot is not involved at all. am i right?
[21:48] <Znupi> But if I need to send email from thunderbird I don't need a special certificate for it, right?
[21:49] <Znupi> I will just need to enter the username/password on the server?
[21:53] <soren> Znupi: Well... Dovecot it somewhat involved in sending e-mail.
[21:53] <Znupi> How so?
[21:53] <soren> Znupi: Postfix asks dovecot for authentication.
[21:53] <Znupi> Why? Can't dovecot just run sendmail ?
[21:54] <soren> Znupi: But the process of accepting the e-mail from thunderbird and sending it on is done by postfix. Dovecot never sees the actual e-mail.
[21:54] <Znupi> sendmail doesn't require authentication, right?
[21:54] <Znupi> Ah
[21:54] <soren> Znupi: The authentication is to check that you are who you say you are.
[21:54] <Znupi> Oooh, I see
[21:54] <soren> If you're on a LAN, you may not need authentication at all.
[21:55] <soren> It's common for SMTP servers on a company's LAN to act as a relay for clients on the LAN without authentication.
[21:55] <soren> SMTP AUTH is most commonly used for road warriors.
[21:55] <soren> At least that's how/why I've used it in the past.
[21:56] <Znupi> yeah, well, i'm not setting up for a LAN
[21:56] <soren> Ok.
[21:56] <Znupi> But, on the docs, I can see that sasl / SMTP AUTH is set up before dovecot
[21:56] <soren> In that case you want to get SMTP AUTH working properly. Otherwise random people will use your server to send out spam. ("will" being the operative word. Not "may")
[21:57] <soren> Sorry, which docs are we talking about?
[21:57] <Znupi> https://help.ubuntu.com/9.10/serverguide/C/postfix.html
[21:57] <soren> I'm following a stack of different conversations right now, so I got lost somewhere.
[21:57] <Znupi> oh, no, wait
[21:57] <Znupi> bit confusing but i got it
[21:57] <Znupi> so if you want sasl, you have to have dovecot?
[21:58] <Znupi> pardon, if you want smtp auth*
[21:58] <soren> Znupi: "have to have" is such a strong way to put it.
[21:58] <soren> Znupi: It's really, really, really what you want to do.
[21:58] <soren> Znupi: but no, you don't /have/ to have dovecot.
[21:58] <soren> postfix supports other sasl backends as well.
[21:59] <Znupi> ah, I understand now
[21:59] <Znupi> wow, i feel enlightened, thanks a lot
[21:59] <soren> sure thing.
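[For reference, a minimal sketch of the arrangement soren describes, where Postfix delegates SMTP AUTH to a Dovecot auth socket. File paths and the Dovecot 2.x config syntax are illustrative, not taken from the log (the 9.10-era Dovecot 1.x syntax differs):

```
# /etc/postfix/main.cf -- Postfix asks Dovecot for authentication
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
# relay only for authenticated clients; rejecting everyone else is
# what keeps the box from becoming an open relay for spam
smtpd_recipient_restrictions = permit_sasl_authenticated,
    reject_unauth_destination

# /etc/dovecot/conf.d/10-master.conf (Dovecot 2.x syntax) -- the auth
# socket Postfix reads, placed inside Postfix's chroot:
service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode  = 0660
    user  = postfix
    group = postfix
  }
}
```

Note that Dovecot only handles the username/password check; the mail itself never passes through it, exactly as soren says.]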
[22:03] <soren> lool: locale-gen has a --no-purge option... Convenient! :)
[22:13] <smoser> erichammond, purely fyi, but if you wanted to sanity check, http://bazaar.launchpad.net/%7Eubuntu-on-ec2/ubuntu-on-ec2/ec2-publishing-scripts/annotate/head%3A/ec2-image2ebs  is largely based off http://alestic.com/2010/01/ec2-ebs-boot-ubuntu
[22:18] <lool> soren: Yes, I had seen the --no-purge option, but note that a) it still might create a locale on the host which is not desired and b) the actual implementation might turn purging on again (but that's not vm-builder's problem)
[22:25] <hink> anyone know how to perform an automated install of proftpd without having to select inetd or standalone
[22:30] <soren> lool: Yeah, I suppose.
[22:55] <bogeyd6> argh i missed a job call back
[23:03] <pwnguin> i have a question about deploying the planet rss aggregator
[23:03] <pwnguin> (technically, venus)
[23:03] <pwnguin> the package didnt create a directory structure for me; where should i put it?
[23:04] <pwnguin> ive got to create a planet.in
[23:04] <pwnguin> a template dir, a cache dir, and the output dir
[23:04] <pwnguin> output can go in /var/www, and the cache can go in /var/cache/planet, but what about the templates?
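[One plausible FHS-style answer to pwnguin's question, wired into a Planet/Venus ini file. The directory names and theme path are assumptions for illustration, not from the log or the Ubuntu package:

```
# planet.ini -- hypothetical paths for a Debian/Ubuntu-style layout
[Planet]
name = Example Planet
cache_directory = /var/cache/planet
output_dir = /var/www/planet
# templates are static, read-only data, so /usr/share fits; copy them
# to /etc or a local dir if you intend to edit them:
template_files = /usr/share/planet-venus/theme/classic_fancy/index.html.tmpl
```

Run from cron as an unprivileged user that can write to the cache and output directories.]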
[23:24] <mathiaz> soren: puppet unit tests: bugs have been filed with upstream
[23:24] <mathiaz> soren: we'll see what's their answer
[23:25] <|Mike|> hmz, the nickname soren sounds familiar here.
[23:26] <jpds> I hope so.
[23:26] <jpds> soren: yo.
[23:44] <soren> jpds: Ahoy.
[23:45] <soren> mathiaz: Are you meaning to run that test suite regularly?
[23:45] <mathiaz> soren: regularly during the LTS cycle?
[23:45] <mathiaz> soren: It's mainly for maintenance purposes and the security team
[23:46] <soren> mathiaz: I think it may make sense to disable the tests we know are currently failing and keep running the test suite so that we can see if new things start failing.
[23:46] <soren> mathiaz: Yes, regularly during this dev cycle.
[23:46] <mathiaz> soren: right - that's another option
[23:46] <mathiaz> soren: we could disable tests at the very end of the cycle
[23:47] <mathiaz> soren: but I'd rather focus on fixing as many tests as possible before release
[23:47] <soren> mathiaz: Sure, sure.
[23:50] <soren> mathiaz: My point is just that until the tests are fixed (which might be a while), it would be nice to know if /more/ tests start failing.
[23:50]  * mathiaz nods
[23:50] <soren> mathiaz: ..and that's easier to notice if the currently failing tests are ignored.
[23:55] <erichammond> smoser: Took a quick glance at ec2-image2ebs. First thing I noticed was the use of "rsync -a" instead of "tar cS | tar x".  The rsync command is not going to do the right thing with hard links, sparse files, special files, devices (and I'm not sure if ACLs or extended attributes matter.)  I'm not sure about any other differences.  If you are stuck on rsync, there are options to enable these, but "tar -S" may be a simpler choice for 
[23:57] <erichammond> smoser: Though I've used tar and rsync extensively for decades, I'm not a complete guru, so it might be good to check with one before making the decision.  The AWS folks and others I respect on the EC2 forum recommended "tar -S" so I just followed their example.
[23:57] <soren> rsync -aHAS usually does the right thing.
[23:59] <erichammond> --specials              preserve special files
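[A runnable sketch of the tar pipeline erichammond recommends, alongside the rsync flags soren mentions. The /tmp paths are stand-ins:

```shell
# tar natively preserves hard links, devices, and special files; -S
# additionally handles sparse files efficiently.
rm -rf /tmp/src /tmp/dst
mkdir -p /tmp/src /tmp/dst
echo data > /tmp/src/a
ln /tmp/src/a /tmp/src/b                        # hard-linked pair

(cd /tmp/src && tar cSf - .) | (cd /tmp/dst && tar xf -)

# tar recreates the hard link: both names resolve to the same inode
stat -c '%i' /tmp/dst/a /tmp/dst/b

# The rsync equivalent: -a alone skips hard links, ACLs, and sparse
# handling, hence soren's extra flags (add -X for extended attributes):
#   rsync -aHAS /tmp/src/ /tmp/dst/
```

Which tool wins is mostly a question of which flags you remember to pass; tar's defaults are simply closer to "copy everything faithfully".]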