[00:23] <patdk-lap> pdtpatrick, I normally use the serverguide one; that's why ubuntu created it
[00:34] <pdtpatrick> patdk-lap:  thanks ..
[00:57] <tarvid> how do I file a bug report on a headless server?
[00:59] <patdk-lap> same way you would do it on a server with a head?
[01:00] <riz0n> Hey, I have a problem with Ubuntu 12.04 and Postfix/Dovecot. Dovecot *appears* to be functioning properly, but Postfix is refusing to transport mail (locally or to other servers).. if I try to smtp through, say, Outlook, it says that the authentication method is not supported by the server. If someone here can help me solve this problem, I would greatly appreciate it.
[01:00] <erichammond> tarvid: (1) run ubuntu-bug (2) Choose "S: Send report" (3) Choose "C: Cancel" (4) Copy the provided URL into your browser on your local computer and fill out the bug report.
[01:49] <riz0n> so i take it you guys are as clueless about Postfix as I am???
[01:52] <qman__> riz0n, out of the box, postfix only accepts authentication over SSL/TLS
[01:52] <qman__> no 'secure' authentication is enabled, and plain is denied without SSL/TLS
[01:56] <hallyn> stgraber: can you think of any reason why, on lxc_container_new(), if the configfile exists, we shouldn't automatically load it?
[01:57] <hallyn> there is one reason, which is that the default may exist, but the user may want to load an alternate later
[01:57] <hallyn> but i think it's reasonable to say that if you want to play that game, you should make sure the default does not exist
[01:59] <stgraber> hallyn: that sounds reasonable, it's currently not intuitive to have to run load_config() before you can actually use your container
[02:00] <hallyn> of course that actually complicates the logic of figuring out whether i should load /etc/lxc/lxc.conf
[02:00] <marc_12314> I have a software raid1 array with sda1 and sdb1.  I removed sda1 physically to test how to recover the array, I plugged it back in, and was able to have my array in sync again. but my second test: I unplugged SDA, then added files on SDB.  now both drives are plugged back in, the MD2 array says that SDA1 is removed, but if I try to add it to the array, it says "mdadm: /dev/sda1 reports being an active member for /dev/md2, but a --re-add
[02:00] <marc_12314> fails."  What should I do to re-attach SDA to the array and have it sync its content from SDB ?
[02:00] <hallyn> bc that means c->lxc_conf will always exist
[02:00] <hallyn> oh, no it doesn't.  s'ok
[02:01] <stgraber> right ;) it only exists for containers with an existing config but won't for new containers
[02:01] <qman__> marc_12314, you have to mark it failed and re-add it to the array
[02:01] <hallyn> well i was thinking with the current flow lxc_conf would get created anyway, but i have lxc_container_new() only calling load_config() if the file exists, so it's ok
[02:03] <riz0n> qman__: so what do I need to add to the main.cf to allow plain auth?
[02:03] <qman__> riz0n, don't; that's a really bad idea
[02:03] <qman__> fix your SSL/TLS and use it
[02:04] <riz0n> qman__: Then how will my server accept mail from outside sources?
[02:04] <qman__> anonymously
[02:04] <marc_12314> qman__:  If I try:   sudo mdadm --manage /dev/md2 --fail /dev/sda1    I get   mdadm: set device faulty failed for /dev/sda1:  No such device
[02:05] <marc_12314> qman__:   same response for --remove, and if I try to add it to the array:    mdadm: /dev/sda1 reports being an active member for /dev/md2, but a --re-add fails.  not performing --add as that would convert /dev/sda1 in to a spare.
[02:05] <riz0n> qman__: here is the issue. I have an "old" 10.04 server that is running. I have a new 12.04 LTS running on a new server (virtual setup).. what do I need to do to fix the SSL/TLS so I can retire the old server?
[02:06] <qman__> riz0n, configure a valid key and certificate, and CA cert
[02:06] <qman__> it may or may not have configured the snakeoil keys, but you should at least generate your own, or (better) use legit ones
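qman__'s advice maps to a handful of main.cf parameters. A minimal sketch, assuming the cert and key live under /etc/ssl as discussed later in this conversation (file names and paths are illustrative, not riz0n's actual config):

```
# /etc/postfix/main.cf -- TLS + SASL sketch (paths are examples)
smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
smtpd_tls_key_file  = /etc/ssl/private/smtpd.key
smtpd_tls_security_level = may     # offer STARTTLS on port 25
smtpd_sasl_auth_enable = yes
smtpd_tls_auth_only = yes          # refuse AUTH on unencrypted sessions
```

With smtpd_tls_auth_only set, a client like Outlook must negotiate STARTTLS before the server will advertise any authentication mechanism, which matches qman__'s description of the out-of-the-box behaviour.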
[02:07] <qman__> marc_12314, that's an abnormal situation, usually when a drive fails it fails, so it's confused
[02:07] <riz0n> I don't mind using my own keys, I thought I had generated them and put them in the right place. One way or another, I have done something wrong
[02:07] <qman__> marc_12314, you can trash the "failed" drive's data first, then re-set it up like a new one
[02:08] <qman__> riz0n, also out of the box, only TLS is enabled, as you have to turn on listening on 465, the default SSL port
[02:08] <riz0n> right now the main.cf has the "snakeoil" keys
[02:08] <qman__> then you should be able to connect on 25 with TLS
[02:08] <qman__> and authenticate
[02:08] <qman__> but it's not secure using those
[02:10] <marc_12314> qman__:  ok, so  cfdisk to trash the partition of sda1 and create a new one would do?
[02:10] <riz0n> I understand. I just want it set up right.
[02:10] <riz0n> I had copied the main.cf/master.cf from the old server. Apparently it didn't like that too well.
[02:10] <qman__> marc_12314, as long as it doesn't find the old metadata you're good
[02:10] <qman__> marc_12314, you may have to zero it, or at least zero the start of it
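The recovery sequence qman__ is describing can be sketched as below. Device names follow the discussion; --zero-superblock destroys the md metadata on that partition, so be certain it is the stale member before running it:

```
sudo mdadm --zero-superblock /dev/sda1        # wipe the stale md metadata
sudo mdadm --manage /dev/md2 --add /dev/sda1  # re-add it as a fresh member
cat /proc/mdstat                              # watch the resync progress
```

This is what marc_12314 ends up doing successfully later in the log.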
[02:11] <qman__> riz0n, yeah, some stuff has changed since then
[02:12] <riz0n> yeah the dovecot was mad. but i took the default files from a fresh install, rewrote them and fixed dovecot (The only thing I want dovecot to do that it doesn't do out of the box is not set "read" flag on POP3 retrievals)
[02:12] <qman__> best to figure out how your config differs from lucid stock, then apply those changes to precise stock
[02:13] <riz0n> ok so I see that my old server has an smtpd.crt and key file, should I copy those from the old server into the new one and apply those to my main.cf, or should I just generate new files?
[02:13] <qman__> either is fine, you can use the old certs as long as they're still valid and you're using the same server name and stuff
[02:14] <hallyn> stgraber: should be all fixed
[02:15] <riz0n> in fact i have the same "smtpd.crt" in /etc/ssl/certs and smtpd.key in /etc/ssl/private ... I generated them on the server.
[02:15] <qman__> yeah, that will all be needed
[02:15] <qman__> and there may be other files needed as well
[02:15] <riz0n> I think the old server was named HMCS-Server. The new server is HMCS-Virt-00 so I imagine the files from the "old server" will not be valid.
[02:15] <qman__> the cert only cares about your mailname/the domain name you connect to it with
[02:16] <qman__> if that's the same, it's ok
[02:16] <riz0n> yeah everything is the same
[02:16] <qman__> the end user only sees that, so that's what it's up against
[02:19] <riz0n> you know what
[02:19] <riz0n> I think Outlook is playing games with me
[02:19] <riz0n> I created a *new* account, put the IP of my test server in, and it delivered the message.
[02:22] <marc_12314> qman__:  how do I remove the metadata from the drive?  I found some info about  "mdadm --zero-superblock /dev/sda"  but it say it's for version 0.9, but I see version 1.2 on my drives.
[02:22] <riz0n> I have one more question, if you can help me with it, and I'll quit bugging you. What is the best/easiest way to dump mysql from one server, and load it into another?
[02:26] <stgraber> hallyn: looks good, thanks
[02:30] <marc_12314> qman__:  never mind, looked like the version didn't matter, tried it and it worked, I was then able to add it back to the array and it's now syncing.   thanks a lot for your help!
[02:32] <hallyn> stgraber: I guess I'm going to have to add freeing of lxc_conf and all its members.  yuck.
[02:32] <hallyn> (at some point)
[02:35] <stgraber> hallyn: I'll also have to take care of memory management in my own C code at some point ;) doing the refcounting stuff is quite a pain with all this python stuff everywhere :)
[02:40] <JoeCoder> hello.  I'm using courier-imap.  When a mail client creates a new folder, I need it to have the write permission for the group.  It's probably a long shot that anyone here knows the answer.
[02:41] <JoeCoder> it currently creates folders with the permission 640; I need 660
[02:41] <JoeCoder> etc/courier/imapd has IMAP_UMASK=007
[03:20] <riz0n> ok, so here is where I am at with this postfix. When I try to send a message out, it asks me for username and password, but it is not accepting the password.
[03:21] <riz0n> i copied the three cert files out of the old server and put them in the new server
[03:21] <riz0n> I think it is an issue with "SASL"
[03:24] <JoeCoder> I don't know a lot about it, but I don't think a password failure could be caused by bad cert settings
[03:24] <JoeCoder> have you checked /var/log/mail.log for errors?
[03:27] <riz0n> Jun 21 23:19:24 hm-cs postfix/smtpd[11634]: warning: SASL authentication failure: cannot connect to saslauthd server: No such file or directory
[03:27] <riz0n> Jun 21 23:19:24 hm-cs postfix/smtpd[11634]: warning: Ferguson-Gateway.ferguson.lan[5.10.1.254]: SASL LOGIN authentication failed: generic failure
[03:27] <riz0n> Jun 21 23:19:24 hm-cs postfix/smtpd[11634]: lost connection after AUTH from Ferguson-Gateway.ferguson.lan[5.10.1.254]
[03:27] <riz0n> Jun 21 23:19:24 hm-cs postfix/smtpd[11634]: disconnect from Ferguson-Gateway.ferguson.lan[5.10.1.254]
[03:28] <JoeCoder> I wonder what directory it can't find.
[03:28] <riz0n> not sure!
[03:29] <JoeCoder> I found this line in my ubuntu server config script; not sure why I added it.
[03:29] <JoeCoder> mkdir -p /var/spool/postfix/var/run/saslauthd
[03:30] <JoeCoder> looks like my sasl is chrooted, not sure.
[03:31] <JoeCoder> looks like I used this tutorial to get sasl setup:  http://www.rackspace.com/knowledge_center/index.php/Mail_Server_-_Secure_Connection_-_Configuring_saslauthd
[03:31] <JoeCoder> this advice probably isn't helpful; but it's the best I've got.
[03:31] <JoeCoder> here's the top index of the tutorial:  http://www.rackspace.com/knowledge_center/index.php/Ubuntu_10.04_LTS_(Lucid)#Postfix
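For the record, the chroot fix JoeCoder's script hints at is the commonly documented one for the "cannot connect to saslauthd server" error riz0n pasted above: Ubuntu's postfix smtpd runs chrooted in /var/spool/postfix, so saslauthd's socket has to live inside that chroot. A sketch (confirm paths against your own install):

```
# move saslauthd's socket inside postfix's chroot
sudo mkdir -p /var/spool/postfix/var/run/saslauthd
# in /etc/default/saslauthd, point the socket there:
#   OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"
sudo adduser postfix sasl              # postfix must be able to read the socket
sudo service saslauthd restart
```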
[03:35] <riz0n> yeah thats what i have now is a fully functioning 10.04 server, but i am wanting to migrate everything over to a new 12.04 server
[03:35] <JoeCoder> I recently did a migration and it was dead simple; not much had changed
[03:35] <JoeCoder> of course, I keep one big config script of everything I've installed
[03:35] <riz0n> this server has been up since 08
[03:35] <JoeCoder> so it was just a matter of tweaking that script in a few places, updating version numbers, and a couple config files had different settings.
[03:35] <riz0n> hence the reason i want to replace it :P
[03:36] <JoeCoder> my server hasn't been up yet at all.  it's still a vm that I'm writing software for; nothing launched.
[03:37] <riz0n> well im kinda doing the reverse of what you are doing
[03:37] <riz0n> going from a single server to a virtual machine that will run as a VM
[03:37] <JoeCoder> my desktop / main computer is windows, which I need for work.  So for developing my side project on ubuntu, I installed samba and shared / as a drive on my windows machine
[03:37] <JoeCoder> now I've got eclipse running on windows editing php files directly on the machine.
[03:38] <riz0n> so it looks like an issue with saslauthd
[03:49] <riz0n> i dunno this is all screwed up. i think im going to have to just start all over with this new server
[03:50] <JoeCoder> that's what I did.
[03:50] <riz0n> guess for now im stuck with the old 10.04 server till the drive crashes =/
[03:50] <JoeCoder> just ran my same config script again, in small increments, watching for errors as I went.
[04:37] <JoeCoder> I accidentally overwrote /etc/postfix/main.cf  How can I get back the default version from 12.04?
[04:43] <SpamapS> JoeCoder: mv /etc/postfix/main.cf /etc/postfix/main.cf.oops ; apt-get install --reinstall postfix -o 'Dpkg::Options::=--force-confmiss'
[04:43] <SpamapS> JoeCoder: I *think* that might work
[04:44] <SpamapS> JoeCoder: note that that would restore *any* conffiles you've removed..
[04:44] <JoeCoder> ok, thanks
[04:44] <SpamapS> JoeCoder: another way is to just find the .deb, extract it, and copy the file out
[04:44] <SpamapS> as in, dpkg -x file.deb /tmp/foo
[04:48] <JoeCoder> that sounds easier
[04:48] <SpamapS> yeah, probably less to go wrong there too ;)
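Putting SpamapS's second suggestion together as a sketch: fetch the package, unpack it somewhere scratch, and copy the file out. On Debian/Ubuntu the stock main.cf is generated by the postinst from a shipped template, so the path below is the template rather than /etc/postfix/main.cf itself (paths are illustrative and worth double-checking):

```
apt-get download postfix            # fetch the .deb into the current directory
dpkg -x postfix_*.deb /tmp/foo      # unpack the filesystem tree, no install
cp /tmp/foo/usr/share/postfix/main.cf.debian /etc/postfix/main.cf
```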
[05:07] <aknewhope> how come when trying to install apache2 via ubuntu-server on a rackspace instance, it comes back with no such package?
[05:08] <aknewhope> the ubuntu-server docs state to do that and it's on packages.ubuntu.com
[05:08] <aknewhope> im using apt-get install apache2 with sudo
[05:11] <JoeCoder> it works for me with 10.04; haven't tried 12.04 yet.
[05:11] <JoeCoder> (although I will be in a few weeks)
[05:13] <aknewhope> yeah i was using 12.04. That's prob why. Isn't there a file somewhere to add it to so it can find it?
[05:13] <JoeCoder> where can I get postfix.deb ?
[05:14] <JoeCoder> or whatever it should be called?
[05:14] <JoeCoder> not that I know of.  I'm running 12.04 in a vm and the install worked fine for me.
[05:14] <JoeCoder> rackspace has decent support
[05:14] <JoeCoder> I usually open up a chat window with them.  not sure if they officially support third party software, but they haven't turned me down yet.
[05:23] <Degot> Hi, All.. I've installed VirtualBox 4.1.12 on Ubuntu 12.04 server... But there is no /etc/init.d/vboxweb-service script. How to fix it ?
[05:23] <JoeCoder> I'm also running 12.04 server inside virtualbox
[05:24] <JoeCoder> and I've never heard of that file.
[05:24] <Degot> )) virtualbox inside 12.04
[05:24] <JoeCoder> ah, sorry
[06:11] <nocturnal_> if i install ubuntu server and then run xinit or startx, there will be no wm correct?
[06:13] <nocturnal_> or am i wrong
[06:55] <archman> hello, i'm using Ubuntu 9.10 with ntp (* NTP server is running.). ntp.conf has 'server 2.hr.pool.ntp.org', but i get enormous clock drifts, for example a 5min change in 8 hours
[06:56] <archman> forward
[06:56] <archman> driftfile has '-73.919' - what does this mean?
[07:08] <ttx> archman: means that your clock is off by 0.0079%
[07:09] <ttx> err 0.0073919%
[07:09] <archman> 0.0073 'too fast' cause of the minus?
[07:09] <ttx> so approximately off 6 seconds per day
[07:09] <archman> hmm, something's wrong, then
[07:09] <archman> it drifted 5mins on 8 hrs
[07:09] <archman> s/on/in
[07:10] <ttx> archman: http://www.ntp.org/ntpfaq/NTP-s-sw-clocks-quality.htm
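The driftfile value is in parts per million (PPM), which is where ttx's numbers come from, and the arithmetic also shows why ntpd gives up here: 5 minutes lost in 8 hours is roughly 10400 PPM, far beyond the ~500 PPM frequency error ntpd will correct. A quick check (awk used purely as a calculator):

```shell
awk 'BEGIN { print 73.919e-6 * 86400 }'       # ~6.39 seconds/day, ttx's estimate
awk 'BEGIN { print 300 / (8 * 3600) * 1e6 }'  # ~10417 PPM of observed drift
```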
[07:12] <sw> hi. we need to share user accounts/directories across servers, would openldap be best to use for this?
[07:25] <ninstaah> Hi all, my ubuntu 12.04 64 bit LAMP keeps freezing (cannot even ssh into it) and after a reboot it is working again - any pointers would be much appreciated. Where could I start looking for errors like this?
[07:37] <archman> ttx: i don't know the reason. would it be wise to disable ntpd and do an hourly cron with ntpdate?
[07:39] <ttx> archman: if you can't get ntpd to work properly, that solution would be better than nothing
[07:53] <sw> hi. I need to install sun jvm >=1.5, what package would that be? I see there's plenty of java packages floating around ...
[08:01] <ironm> Good morning. Does anyone of you run interface bonding on ubuntu 12.04?
[08:01] <ironm> Do I really need "ifenslave" for interface bonding (link aggregation) on ubuntu-server 12.04?
[08:02] <ironm> When I use ifenslave on ubuntu 12.04 interface bonding works as expected (with following config files: interfaces - http://paste.debian.net/175728/ ... and bonding.conf - http://paste.debian.net/175729/)
[08:03] <ironm> I have tested some configurations (see config files in brackets) without ifenslave, and *none* of them works :( interfaces - http://paste.debian.net/175730/ ... bonding.conf - http://paste.debian.net/175731/ ... modules - http://paste.debian.net/175732/ ). Do you have any idea what I'm missing? Thank you in advance for any hints or examples of working config files without using ifenslave.
[08:04] <ironm> there is *no* working and documented interface bonding configuration in the ubuntu administration guide
[08:05] <ironm> all configurations I have tested (without ifenslave) didn't work ...
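For comparison, here is a minimal /etc/network/interfaces stanza of the kind that works on 12.04 once the ifenslave package is installed (the pastebin links above have expired; addresses, interface names, and bonding mode below are placeholders):

```
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-mode active-backup
    bond-miimon 100
    bond-slaves eth0 eth1
```

The bond-* directives are interpreted by ifenslave's ifupdown hooks, which is consistent with ironm's later finding that nothing works until that package is installed.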
[08:18] <jamespage> sw: ubuntu no longer provides Sun/Oracle Java through the partner archive
[08:18] <jamespage> so your choices are: openjdk-6 or openjdk-7
[08:19] <jamespage> sw: the packages follow a common structure
[08:19] <jamespage> openjdk-6-jdk - full JRE and JDK
[08:19] <jamespage> openjdk-6-jre - just the JRE
[08:20] <jamespage> openjdk-6-jre-headless - the JRE minus the bits that pull in lots of desktop things - good for servers
[08:20] <jamespage> OR
[08:20] <jamespage> you can use the default-jdk, default-jre-headless, default-jre packages
[08:20] <jamespage> which in Ubuntu 12.04 point to openjdk-6 (that's changing to openjdk-7 in 12.10)
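So, following jamespage's breakdown, the server-friendly route on a stock 12.04 box looks like this (package names are from the standard repositories; no third-party PPA involved):

```
sudo apt-get install openjdk-6-jre-headless   # JRE without the desktop pull-ins
java -version                                 # confirm which runtime is active
apt-cache depends default-jre-headless        # shows it resolving to openjdk-6
```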
[08:22] <bnemec> hello?
[08:29] <sw> jamespage: hi. I used this in the end: https://github.com/flexiondotorg/oab-java6
[08:30] <sw> getting this error though when running '$ant dist': 'Unable to locate tools.jar. Expected to find it in /usr/lib/jvm/java-6-sun-1.6.0.33/lib/tools.jar'
[08:31] <ironm> do you have any idea why the following configs don't work on a freshly installed ubuntu-server 12.04? ... bonding-ubuntu.confs
[08:31] <ironm> http://paste.debian.net/175789/
[08:32] <ironm> these configs worked (for some reason) after I installed ethtool and ifenslave (testing the interface bonding with ifenslave)
[08:32] <ironm> thank you in advance for any hints
[08:33] <jamespage> sw: well that is provided by the official openjdk-* packages in the repository
[08:34] <jamespage> my experience is that the Oracle distribution of Java is not as well integrated into the distro as the openjdk packages
[08:34] <jamespage> sw: are you running java 6 or java 7?
[08:35] <sw> jamespage: 6
[08:35] <sw> jamespage: should I stop following that link and use the ones that are already in the repository? all I basically need this for is to be able to use irccat :-)
[08:36] <ironm> jamespage, is there another alternative to the java/JDK stuff yet? .. (like python / django framework)
[08:36] <jamespage> sw: I would just use whats in the distro
[08:36] <jamespage> I do everyday :-)
[08:37] <jamespage> ironm, depends what you want to do
[08:37] <jamespage> there are always alternatives....
[08:37] <ironm> jamespage, what would be your recommendation for programming portal applications
[08:38] <sw> jamespage: ok, will do that. this is what I need it for, by the way: https://github.com/RJ/irccat. apart from that, I never use java
[08:47] <jamespage> ironm, mmm - not sure TBH - never really been a fan of portals
[08:48] <jamespage> in my experience most requirements for a 'portal' turned out to be requirements for a website with a content management system
[08:48] <ironm> jamespage, yes ... it is more than CMS
[08:51] <bnemec> anyone feel like answering questions about dell OMSA on 10.04?
[09:05] <koolhead17> hello jamespage
[09:10] <thesuperlogical> Hi all - Maybe someone here can help me with adding two new HDDs to my 12.04 Server. Setup was 2x500GB, listed as /dev/mapper/pdc_bchiehhifh (ext3) and /dev/mapper/pdc_bchiehhifh5, RAID1 !? Now I've added 2x1TB, configured to be a RAID 1 in my HP N40L microserver.
[09:14] <RoyK> erm - you've configured it in raid-1 on a hardware raid controller? or in md? or lvm?
[09:15] <thesuperlogical> It's the builtin N40L Sata AMD /?RAID Controller. Server setup suggested to create a LVM during setup
[09:16] <RoyK> what does `cat /proc/partitions` say?
[09:16] <RoyK> thesuperlogical: I don't know if the n40l qualifies for a 'hardware raid controller', though
[09:16] <ironm> it looks like I've found the reason. ubuntu's native ifenslave (in case there is one) doesn't understand the syntax in "interfaces". Only after installing ifenslave-2.6 did it start to work. ... ikonia, probably you can see more than me ... do you have any idea why the following configs don't work on a freshly installed ubuntu-server 12.04? ... bonding-ubuntu.confs - http://paste.debian.net/175789/
[09:17] <thesuperlogical>  http://piratepad.net/xt0vzKtO24
[09:18] <RoyK> no hardware raid there
[09:18] <RoyK> if you had hardware raid, the new mirror would show up as a single drive
[09:19] <RoyK> but never mind that, just configure it as a software mirror with mdadm or lvm
[09:19] <RoyK> that is - looks like dm-1 is that mirror
[09:19] <thesuperlogical> ok, so I'll simply go with mdadm and ignore that /dev/mapper stuff, the setup created?
[09:20] <RoyK> no, forget about mdadm
[09:20] <RoyK> lvm is just as good
[09:21] <RoyK> pastebin "vgs;pvs;lvs"
[09:21] <RoyK> (or the output of those :P)
[09:22] <RoyK> oh, btw, looks like dm-0 and dm-2 are about the same size, something overlapping?
[09:22] <thesuperlogical> thanks a lot RoyK
[09:25] <thesuperlogical> RoyK just one more question... what exactly does /dev/mapper/pdc_bchieh... stand for
[09:25] <RoyK> it's /dev/mapper/volumegroup-logicalvolume
[09:27] <lifeless> this is dmraid FWIW
[09:27] <lifeless> AFAICT
[09:29]  * RoyK hasn't used dmraid and may be utterly wrong
[09:29] <thesuperlogical> dmraid = FakeRaid?
[09:29] <RoyK> oh
[09:29] <RoyK> dmraid
[09:29] <RoyK> is fakeraid
[09:29] <RoyK> != good
[09:30]  * RoyK slaps ubottu 
[09:34] <lifeless> thesuperlogical: yes, dmraid is also called fakeraid
[09:34] <lifeless> it's actually a very good compromise
[09:34] <lifeless> I have added load balancing to reads for dmraid mirrors, though you need to build your own kernel with the patchset.
[09:51] <RoyK> lifeless: really, how would it be better than just md raid?
[10:22] <rbasak> lynxman: I think the mysql innodb corruption thing is an actual bug, given that we're getting tons of reports
[10:23] <lynxman> rbasak: afaict it's due to an upgrade from <5.0 to 5.5
[10:23] <rbasak> lynxman: not sure though. There does seem to have been a sudden influx of mysql bugs for some reason though
[10:23] <lynxman> rbasak: there's a binary innodb incompatibility between 5.0 and 5.1
[10:23] <rbasak> I see
[10:23] <rbasak> Perhaps our postinst should refuse to do the upgrade or something?
[10:23] <lynxman> rbasak: all tables need to be dumped and restored from scratch to upgrade, if not that'll happen :/
[10:24] <lynxman> rbasak: hmm... could be indeed
[10:24] <lynxman> rbasak: upgrading from 5.1 to 5.5 is also problematic
[10:25] <lynxman> rbasak: I had a ton of issues myself, had to blog about it even :)
[10:25] <lynxman> rbasak: http://devroot.org/2012/04/26/mysql-upgrade-to-ubuntu-12-04/
[10:25] <rbasak> thanks
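The dump-and-restore route lynxman describes can be sketched as follows (credentials are placeholders; --routines only matters if stored procedures are in use):

```
# on the old 5.0/5.1 server
mysqldump -u root -p --all-databases --routines > all-databases.sql
# on the new 5.5 server, after a clean install
mysql -u root -p < all-databases.sql
mysql_upgrade -u root -p        # fix up the system tables afterwards
```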
[10:26] <rbasak> It's annoying for us because we get tons of bug reports. I think we could do something to manage it better, but not sure what exactly.
[10:26] <rbasak> Fundamentally it's sometimes too complex for server packages to upgrade smoothly _automatically_ because of the complexities of what users can do with them
[10:27] <lynxman> rbasak: that's the thing... how far do we want to go to think for the user in this case? MySQL is a serious enough business, and most of these bugs are solved by removing the old innoDB files and letting MySQL recreate them, since they're only used by the internal mysql log if the user hasn't created anything else there
[10:27] <halvors> Isn't the package "opendchub" available in the ubuntu repo?
[10:27] <rbasak> But the postinsts are stuck with trying to do it, and then apport encourages users (particularly desktop users using "server" packages) to file bugs
[10:27] <lynxman> rbasak: yeah, we don't get that many bugs from people who actually run servers :)
[10:28] <rbasak> I wonder if we could tell apport to not file bugs for particular packages, and instead point them to some suitable documentation and ask them to file a bug less automatically (still via apport though) if they still think it's a bug
[10:28] <rbasak> The packages that come to mind are mysql, samba and openldap
[10:29] <lynxman> rbasak: if there was an option to do that that'd be awesome, all of those have complicated user intervention needed upgrade routes
[10:30] <rbasak> samba is special though as it's a desktop package too, but desktop users shouldn't have changed the default configuration at all
[10:30] <lynxman> rbasak: hear hear
[10:30] <rbasak> (and if samba fails to upgrade when the default configuration has changed, then that is a bug and should be filed as such)
[10:30] <rbasak> s/has/hasn't/
[10:31] <lynxman> rbasak: who should we talk to for that, foundations?
[10:31] <rbasak> lynxman: jamespage has been looking at these kinds of things. We had a session on it at UDS-Q. But we were looking at some more basic things first I think.
[10:32] <lynxman> rbasak: hmm, hope there's some way to push a fix for precise, otherwise we'll be swamped in bugs
[10:33] <jamespage> lynxman, if we can come up with a good test case then probably
[10:33] <jamespage> but as you state it's hard to cover all of the bases....
[10:34] <lynxman> jamespage: definitely
[10:35] <thys> hi, I just installed rtorrent and according to the tutorial I am following there should be a .rtorrent file and rtorrent in home folder but there are none. Where should I look?
[10:35] <thys> http://tutorialsplus.net/ubuntu-10-10-lan-torrent-seedbox-with-avalanche-rt-lighttpd-rtorrent-vsftpd-and-samba/
[10:38] <lifeless> RoyK: you can boot off of a degraded set
[10:40] <RoyK> lifeless: you can boot off a degraded md set too
[10:41] <lifeless> RoyK: how?
[10:42] <RoyK> start computer, wait for grub to timeout, boot
[10:42] <lifeless> RoyK: if the main drive is the one that died, that doesn't work.
[10:42] <lifeless> RoyK: e.g. sda goes boom.
[10:42] <RoyK> main drive? in md?
[10:42] <RoyK> it's a mirror
[10:43] <lifeless> md isn't supported by the bios, so you have a primary boot volume
[10:43] <lifeless> s/volume/disk/
[10:43] <RoyK> bios jumps to first available harddisk
[10:43] <lifeless> also md and dm can both do raid 5
[10:43] <RoyK> starts grub
[10:44] <RoyK> grub is installed on both sides on the mirror
[10:44] <RoyK> so if sda dies, sdb will do the booting
[10:44] <RoyK> and last I checked, you can't boot off an md raid-5
[10:45] <lifeless> you can off of dmraid 5
[10:45] <lifeless> including degraded sets
[10:45] <RoyK> and normally you don't *want* to boot off anything else than a mirror, or at least, I wouldn't, since I'd rather use a mirror for the root and raid-[56] for the data
[10:45] <RoyK> lifeless: still, you can boot off a degraded raid-1 md set without issues
[10:45] <lifeless> if you go through the hoops to set it up, sure.
[10:46] <RoyK> oh, shut it, please
[10:46] <RoyK> standard ubuntu install installs grub on both drives in a mirror
[10:46] <RoyK> what else would you need?
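The "grub on both drives" setup RoyK describes amounts to running grub-install against each member of the mirror, so either disk can boot the degraded array alone (device names illustrative):

```
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
```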
[10:55] <brendand> Daviey, have you heard of this issue before: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1016444
[10:55] <brendand> Daviey, it's happening in Quantal
[10:57] <Daviey> brendand: new in quantal?
[10:57] <brendand> Daviey, yeah for sure
[10:57] <Daviey> brendand: yeah, kernel team are aware of it...
[10:57] <brendand> Daviey, right, so that might be a dupe then since i only got around to raising it this morning
[10:58] <brendand> Daviey, good to know though. if you know a bug number please tell me
[11:01] <Daviey> brendand: Something along the lines of a more complex way of loading the firmware.. or something
[11:01] <Daviey> Anyway, i confirmed with them 2 weeks ago they were aware of it
[11:01] <Daviey> brendand: sorry, no bug number to hand.
[12:00] <smb> SpamapS, When you get in: would you mind me taking squid3 as a merge-practising-target from you?
[13:36] <smoser> kirkland, byobu ran from one system, back on the other system. the dots around it showing the second system's smaller terminal
[13:36] <smoser> how do i kick that second system off (to resize to my larger local systems terminal size) in tmux
[13:38] <smoser> ie, in screen i'd hit 'ctrl-a shift-f'
[13:40] <Daviey> kirkland: Whilst you are on the hilights... Have you considered doing a screen merge?
[13:58] <rbasak> Hey smoser! I've got a patched apt working (currently hardcoded to use the new scheme) and the debootstrap patch looks trivial. I've started writing up a spec on it for wiki.u.c. Still pending is a global option as well as a per-repo fallback to the old behaviour. Once done and assuming that the patches are good, what should our next steps be?
[14:00] <smoser> i guess i have a couple thoughts
[14:00] <smoser>  * did you run this by mvo or david?
[14:01] <smoser>  * can we make a ppa with your apt and debootstrap?
[14:01] <smoser>  * can we get utlemming to show us how to run a quantal S3 mirror with rbasak MAGIC! for larger scale testing
[14:02] <smoser>  * we still need to fixup debmirror, right?
[14:02] <rbasak> david and mvo know the plan, right? I've not got the patch reviewed by anyone yet, no.
[14:02] <rbasak> PPA with apt and debootstrap can happen no problem
[14:03] <rbasak> debmirror will still need to be fixed
[14:03] <rbasak> As well as ubumirror, and any other mirror scripts we want to fix
[14:03] <smoser> right.
[14:03] <rbasak> Also we'll want to publish a by-hash generator somewhere
[14:04] <rbasak> With apt-utils, perhaps? To go with apt-ftparchive
[14:04] <smoser> you mean one that you point at a repo and it adds that data.
[14:04] <smoser> right?
[14:04] <rbasak> Yes
[14:04] <smoser> yeah.
[14:05] <smoser> rbasak, thank you for doing this.
[14:05] <rbasak> So I think two approaches in parallel but independent: 1) mirror scripts to be able to mirror normally and mirrors that support by-hash without races (but don't regenerate by-hash), and 2) a script that generates by-hash
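The scheme being prototyped here publishes each index file under its own checksum in addition to its usual name, so a client can fetch exactly the file the signed Release describes and a mirror can update without a race window. A toy sketch of the layout (directory names follow the convention under discussion; contents are dummies):

```shell
cd "$(mktemp -d)"
mkdir -p dists/quantal/main/binary-amd64/by-hash/SHA256
printf 'Package: demo\n' > dists/quantal/main/binary-amd64/Packages
sum=$(sha256sum dists/quantal/main/binary-amd64/Packages | cut -d' ' -f1)
# publish the index under its own hash; Release then points clients at it
cp dists/quantal/main/binary-amd64/Packages \
   dists/quantal/main/binary-amd64/by-hash/SHA256/"$sum"
ls dists/quantal/main/binary-amd64/by-hash/SHA256/
```

A race-free mirror update then writes the by-hash entries first and the signed Release last, which is the ordering rbasak mentions below for utlemming's scripts.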
[14:07] <smoser> '1' is rsync mirrors ? or are there other mirror scripts that would get the by hash data without explicit support?
[14:09] <rbasak> They'd need explicit support either way, and rsync scripts will need tweaking for race-free operation
[14:10] <smoser> right.
[14:10] <smoser> rsync scripts need tweaking to be race-free
[14:10] <smoser> but are there other mirror methods than rsync that basically do a full directory mirror
[14:10] <smoser> and would, thus, just get the by-hash data without knowing about it
[14:13] <rbasak> smoser: I'm not sure I follow
[14:15] <rbasak> Also, let's say I do a PPA with apt, debootstrap and debmirror patched. We still won't be able to test the installer :(
[14:16] <rbasak> I'll have to do a mirror of main for some particular arch to do an end-to-end installer test
[14:18] <smoser> rbasak, are yo usoure we couldn't test the installer?
[14:18] <smoser> you can add additional repos to the installer, i think
[14:18] <rbasak> I don't think we can test it from a PPA
[14:18] <rbasak> For a start, debootstrap is embedded in the initrd
[14:19] <smoser> or, would you need to convince debootstrap to get apt from there.
[14:19] <rbasak> Yeah but debootstrap would still have a race
[14:19] <smoser> hm..
[14:19] <rbasak> I got around this for the highbank enablement by using my own mirror of main
[14:19] <rbasak> It's not hard, just needs a bit of space somewhere
[14:20] <smoser> rbasak, well that's why i suggested s3 mirrors
[14:20] <rbasak> Actually that was for some other reasons too
[14:20] <rbasak> I can generate a netboot installer initrd image with a customised debootstrap without too much difficulty too
[14:20] <smoser> ie, you and i use utlemming's scripts to run our own improved mirror of quantal
[14:21] <rbasak> On the existing production S3 mirrors or on a separate one?
[14:21] <utlemming> smoser: the S3 mirrors don't mirror ports.ubuntu.com, which is where the ARM code sits
[14:22] <smoser> utlemming, well, that is actually ok for the moment. and i suspect that they can mirror arbitrary remote http://, right?
[14:22] <smoser> ie, so we can tell them to.
[14:22] <rbasak> utlemming: this isn't ARM-specific. We can test on amd64 if we decide to. I would have tested on armhf as I have access to ARM hardware so that's handy to keep in mind though, thanks
[14:22] <smoser> and i dont want the official s3 mirrors to be touched.
[14:22] <Daviey>  .
[14:22] <smoser>  o
[14:22] <rbasak>  |
[14:28] <rbasak> OK, so next steps I think are: 1) Decide how to do fallback; 2) I add fallback to apt, patch debootstrap, patch one mirroring system (utlemming's?); 3) I publish PPA; 4) Set up test mirror (S3?); 5) Test
[14:28] <rbasak> And what about after testing? Push for support in the official archive? Or a package for maas integration or something?
[14:29] <utlemming> rbasak: patch mirror system?
[14:29] <rbasak> utlemming: yeah to generate the by-hash directory and make sure that updates are race-free (ie. by-hash gets updated before InRelease)
[14:30] <rbasak> Oh, and we'll want to sign this mirror with our own key, so that we can use InRelease instead of Release.gpg
[14:30] <rbasak> (since Release.gpg/Release has its own race)
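The by-hash scheme rbasak describes can be sketched roughly as follows. This is an illustrative sketch only (a scratch directory stands in for the mirror root, and the suite/component names are examples): each index file is also published under a checksum-named path, so a client that read InRelease just before a mirror update can still fetch the exact Packages file that InRelease references.

```shell
# Scratch directory stands in for the mirror root (illustrative paths).
MIRROR=$(mktemp -d)
DIST="$MIRROR/dists/quantal/main/binary-amd64"
mkdir -p "$DIST/by-hash/SHA256"

# A stand-in Packages index.
printf 'Package: example\n' > "$DIST/Packages"

# Publish the same content under its SHA256 name.
H=$(sha256sum "$DIST/Packages" | cut -d' ' -f1)
cp "$DIST/Packages" "$DIST/by-hash/SHA256/$H"

# Race-free update order, per the discussion: write the by-hash entries
# first, and only then replace InRelease.
```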
[14:30] <Daviey> Are there not plans to get this into the official archives ?
[14:31] <utlemming> rbasak: the by-hash directory is easy....maintaining it is going to be the harder part
[14:31] <utlemming> rbasak: when do you need this by?
[14:31] <rbasak> Daviey: my understanding from UDS was that cjwatson and slangasek were both OK with it in principle but wanted to see a PoC first
[14:32] <smoser> Daviey, the goal would be to be in proper archive.
[14:32] <rbasak> Daviey: the alternative to official archive integration would be to have a squid-deb-proxy-like package for maas users to use
[14:32] <rbasak> (as a stop-gap until official archive integration happens)
[14:32] <Daviey> rbasak: right, thanks
[14:33] <smoser> but if we can find a way to reasonably do this outside of main, for this cycle, then i'd accept that.
[14:33] <rbasak> But the plan is entirely flexible. How do you think we should approach this?
[14:33] <rbasak> utlemming: I'm not sure about timing. It sort of depends on the plan
[14:33] <Daviey> setting up a proof of concept in the cloud seems like a good idea to me.. then invite Steve and Colin to look, then involve IS.
[14:33] <rbasak> utlemming: I'd be happy to take your scripts and add this feature though
[14:35] <utlemming> rbasak: k, ping me when you get started... there is a gotcha regarding generating the deletion set... and I've been burned a couple of times with it.
[14:36] <rbasak> utlemming: ready to start whenever you are - this aspect isn't dependent on my other patches
[14:37] <utlemming> rbasak: give me 20 minutes and I'll ping you and we can talk that over
[14:37] <rbasak> OK, thanks!
[14:50] <rbasak> utlemming: just realised I have a meeting in ten minutes, sorry. Can I ping you after that?
[14:57] <jamespage> utlemming, reviewed walinuxagent - merge proposal waiting for you to review (thought that might be easier)
[14:57] <utlemming> jamepage: L)
[14:57] <utlemming> er, :)
[14:58] <jamespage> lol
[15:01] <SpamapS> rbasak: re the PHP5 bug, I'd wait until the end of June to do another merge from Debian, since Wheezy is freezing on June 30, so if they're going to upload any new stuff, it will be before June 30
[15:06] <rbasak> SpamapS: that makes sense - thanks!
[15:21] <smoser> kirkland, ping...
[15:22] <smoser> i know somewhere, you used to do a 'sed' of a server install iso so that it would not change graphics modes and could be run with kvm -curses
[15:22] <smoser> maybe roaksoax remembers this hack?
[15:55] <thesheff17> is there any way to run commands inside an lxc container after it is started?
[15:55] <matti> thesheff17: Yes.
[15:56] <thesheff17> lxc-execute?
[15:56] <matti> thesheff17: http://lxc.sourceforge.net/man/lxc-start.html
[15:57] <matti> thesheff17: It might not work if you have older LXC.
[15:58] <matti> I do wonder...
[15:58] <matti> http://lxc.sourceforge.net/man/lxc-execute.html
[15:58] <matti> LOL
[15:58] <thesheff17> I'm running 12.04 so the version should be good
[16:01] <stgraber> thesheff17, matti: the command for that is lxc-attach and it won't work unless you've patched your kernel
[16:02] <stgraber> so yeah, there's a command but it doesn't work
[16:02] <thesheff17> so is there a best way to run post startup commands?
[16:03] <matti> stgraber: Ah, that one :)
[16:03] <matti> stgraber: Thanks!
[16:03] <matti> thesheff17: As init script perhaps?
[16:03] <stgraber> thesheff17: currently you'd have to rely on ssh or on dumping init scripts inside the container before starting it
[16:07] <thesheff17> ah ok...prob easy to clone a lxc container with my ssh keys already inside....then do something with rc.local
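The "dump files into the rootfs before starting it" approach stgraber mentions can be sketched like this. ROOTFS here is a scratch directory for illustration only; on a 12.04 host the real path would be /var/lib/lxc/&lt;name&gt;/rootfs, and the key line is a placeholder.

```shell
# Scratch directory stands in for /var/lib/lxc/<name>/rootfs.
ROOTFS=$(mktemp -d)

# Pre-seed an authorized SSH key before the container's first boot.
install -d -m 700 "$ROOTFS/root/.ssh"
echo 'ssh-rsa AAAA... user@host' >> "$ROOTFS/root/.ssh/authorized_keys"
chmod 600 "$ROOTFS/root/.ssh/authorized_keys"

# Then, once the container is started and has an address:
#   lxc-start -n <name> -d
#   ssh root@<container-ip> '<post-startup command>'
```

Combined with an rc.local hook inside the rootfs, this covers most post-startup needs without lxc-attach.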
[17:04] <stgraber> hallyn: hey, a few thoughts from yesterday :) How much of a pain would it be for you to implement a timeout option to wait()? and once we have that, do you think we should implement shutdown() in the API or just have it done in python (kill -SIGPWR container.init_pid / container.wait("STOPPED", 30) / container.stop())
[17:16] <hallyn> stgraber: both should be ok
[17:17] <hallyn> new userns kernel is in my ppa, got a little testcase written (which passes, phew)
[17:17] <hallyn> biab
[17:17] <stgraber> yay!
[17:31] <adam_g> zul: do you have a python-glanceclient packaging branch anywhere yet?
[17:31] <zul> adam_g: should be in the regular places
[17:33] <adam_g> nice
[18:01] <hallyn> stgraber: i guess i'll implement an lxc_monitor_read_timeout() which uses select(2) ,and use that from lxc_wait...  probably won't commit that today yet
[18:02] <stgraber> ok
[18:10] <jsnapp> smoser, thanks for your help yesterday, the cloud-init iso works great
[18:12] <smoser> good.
[18:12] <jsnapp> smoser, as i mentioned i use vmware vsphere client which shows the console as the ubuntu vm boots up ... the only problem is that the boot kernel parameters default with console=ttyS0 which means if I want to watch the system boot I have to modify that to be console=tty0
[18:12] <jsnapp> is there any way around that?
[18:16] <smoser> jsnapp, there is not a good way to do that. sorry.
[18:16] <smoser> you could open a bug that says we should have more 'console=' arguments on that line.
[18:17] <smoser> and that would actually work.
[18:17] <smoser> but the issue is that once upstart takes over, it only writes to one of the console= (the last one)
[18:17] <smoser> so you'd only see kernel messages on one of them.
[18:17] <smoser> so it would take upstart work to make it better.
[18:18] <jsnapp> smoser, ok thanks
[18:19] <smoser> feel free to open a bug.
[18:19] <smoser> i'd like to have it be better.
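The multiple-console workaround smoser describes would look like the config fragment below (illustrative, e.g. in /etc/default/grub followed by update-grub): the kernel logs boot messages to every console= listed, but the last one becomes /dev/console, which is the one upstart writes to.

```shell
# /etc/default/grub fragment (illustrative). Kernel boot messages go to
# both ttyS0 and tty0; the last console= (tty0) becomes /dev/console, so
# upstart's output appears only there.
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0 console=tty0"
```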
[18:21] <jamespage> adam_g, pls can you check the notification email address in the QA lab - getting a lot of bounces
[18:22] <adam_g> jamespage: oh jeez, one sec
[18:22] <jamespage> adam_g, looks malformed
[18:22] <hallyn> stgraber: eh, well, http://people.canonical.com/~serge/waittimeout is the route i'm going, but i haven't tested yet, and i'm ducking out for lunch.  bbl
[18:24] <adam_g> jamespage: fixed, sorry bout that
[18:25] <jamespage> adam_g, ta - no problem - just did not want a full inbox on monday (and was worried no one was getting notified)
[18:28] <sidnei> so, i installed libvirt-bin, but running virsh just hangs. where should i start looking?
[18:30] <sidnei> uhm, i wonder if this has anything to do with it
[18:30] <sidnei> 2012-06-22 18:24:40.444+0000: 8971: error : networkCheckRouteCollision:1660 : internal error Network is already in use by interface virbr0
[19:03] <jsnapp> smoser, where do i open a bug about the console stuff?
[19:05] <smoser> jsnapp, file it here:
[19:05] <smoser> https://bugs.launchpad.net/ubuntu/
[19:06] <smoser> subscribe me, and utlemming. tag it 'ec2-images' and 'cloud-images'.
[19:11] <jsnapp> smoser, ok thanks
[19:17] <jsnapp> smoser, what package do i report the bug against?
[19:18] <smoser> you can pick cloud-init for now.
[19:18] <jsnapp> ok thanks
[19:18] <smoser> thank you, jsnapp
[19:23] <jsnapp> smoser, new bug is here https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1016695
[19:24] <jsnapp> smoser, let me know if i goofed anything up ... i don't report bugs often
[19:29] <smoser> well, its good to start. thank you.
[19:57] <pdtpatrick> Question - I'm trying to write an upstart script but I keep getting "stop/waiting" .. is there a log somewhere I can look that explains what is going on? I'm already looking at /var/log/syslog and /var/log/upstart/<scriptname>. Here's my script so far: http://pastie.org/private/xcjc5c2wp8auamy9ydco0g
[20:03] <pdtpatrick> nvm - figured it out
[20:14] <sarthor> Hi, How to solve this Error, http://paste.ubuntu.com/1054904/
[20:15] <sarthor> Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_maya_restricted_binary-i386_Packages
[20:19] <genii-around> sarthor: sudo rm  /var/lib/apt/lists/*   && sudo apt-get update
[20:19] <sarthor> genii-around: rm: cannot remove `/var/lib/apt/lists/partial': Is a directory
[20:20] <genii-around> sarthor: Thats fine
[20:20] <genii-around> sarthor: The sudo apt-get update   after is whats important, populates that directory again
[20:37] <halvors> I'm trying to block all traffic to a specific host with iptables, how do i do that? My interfaces are eth1 (Internet) and eth0 (LAN).
[20:40] <r3dLunchb0x_> halvors: have you looked at this? https://help.ubuntu.com/community/IptablesHowTo
[20:41] <halvors> iptables -A INPUT -i eth0 -s 192.168.50.21 -j DROP
[20:41] <halvors> ?
[20:43] <guntbert> halvors: no, if you want to block traffic *to* this host
[20:46] <halvors> But what about outgoing and incoming?
[20:59] <pdtpatrick> halvors: r u trying to block traffic leaving the box to a certain host?
[21:00] <pdtpatrick> why not just block the traffic from the host itself? so if ur trying to block traffic from A to B, then do the blocking on B.
[21:10] <Daviey> hallyn: hey... Can you make use of kvm/qemu spice without access to the hypervisor?
[21:10] <Daviey> hallyn: ie, if i give you full networking to a vm here.. can you use spice.. without having access to the host?
[21:18] <hallyn> Daviey: in a nested vm under that vm, or changing that vm to use spice?
[21:19] <hallyn> you need to switch to qxl video in kvm itself, so i think the answer is no
[21:19] <hallyn> you can make *use* of it, but you can't switch to it from vnc/vmware/cirrus
[21:22] <hallyn> Daviey: i'm outta here soon fwiw
[21:31] <halvors> pdtpatrick: I have a lan host that should not access the internet for security reasons. I have an Ubuntu box as a router, i would like to block all traffic to the internet from there.
[21:32] <pdtpatrick> I'm assuming then the host will forward packets to your ubuntu router. Just have your router drop the packets
[21:32] <pdtpatrick> from it
[21:33] <halvors> Ok, with what iptables command?
[21:34] <halvors> iptables -A INPUT -s 192.168.50.21 -j DROP
[21:35] <pdtpatrick> i think in ur case it would be in the FORWARD rules
[21:35] <pdtpatrick> so u can set ur default INPUT, OUTPUT and FORWARD policies to DROP
[21:35] <pdtpatrick> and then just allow the interfaces you want, and the host that isn't in that list is automatically dropped
[21:36] <halvors> I would like to add manually that host that should be dropped.
[21:36] <pdtpatrick> check out this post
[21:36] <pdtpatrick> http://www.linuxquestions.org/questions/linux-networking-3/iptables-rules-for-an-ubuntu-gateway-filtering-connections-to-and-from-internet-549482/
[21:39] <halvors> iptables -A FORWARD -i eth0 -o eth1 -s 192.168.50.21 -j DROP
[21:39] <halvors> then?
[21:40] <pdtpatrick> if ur interfaces are setup in that manner - sure
[21:41] <halvors> But interfaces or host first? Also -i and -s?
[21:42] <halvors> If i don't forward from outside to inside (NAT), would that still allow connections from inside, and would they actually get a response?
[21:43] <Daviey> hallyn: have i missed you?
[21:44] <halvors> pdtpatrick: Like this? pdtpatrick:
[21:44] <halvors> Obs
[21:44] <Daviey> hallyn: use case, does spice make sense in openstack.. ie, you as a guest only have access to the instance.. not the hyper visor
[21:44] <halvors> iptables -A FORWARD -i eth1 -o eth0 -j REJECT
[21:44] <halvors> Or it is: iptables -A FORWARD -i eth1 -o eth1 -j REJECT
[21:47] <halvors> pdtpatrick: ?
[21:52] <pdtpatrick> REJECT and DROP are different
[21:53] <pdtpatrick> you need to understand them. One will reject and send a message back to the host. The other will just drop the packet and send no message
[21:53] <pdtpatrick> in ur case since ur trying to block internet - then use DROP
[21:53] <pdtpatrick> you'll also want to do something like - iptables-save > oldiptables
[21:54] <pdtpatrick> keep a copy of ur working config that way if it gets screwed up, you can also "iptables-restore < oldiptables"
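Putting pdtpatrick's advice together, a minimal sketch (using the interface names and LAN host address from the discussion; these are assumptions about halvors's setup, and the rules need root to apply):

```shell
# Back up the working ruleset first, as suggested above.
iptables-save > /root/iptables.backup

# Drop anything the router would forward from the LAN host out the
# internet-facing interface. DROP rather than REJECT, so the host gets
# no response at all.
iptables -A FORWARD -i eth0 -o eth1 -s 192.168.50.21 -j DROP

# Roll back if something breaks:
#   iptables-restore < /root/iptables.backup
```

The rule goes in FORWARD (not INPUT) because the blocked traffic only passes *through* the router; INPUT only matches traffic addressed to the router itself.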
[21:54] <hallyn> Daviey: i dunno, could different ami's trigger different xml for the guests to enable spice?
[21:55] <pdtpatrick> halvors: i'll also recommend joining #netfilter
[21:55] <hallyn> Daviey: see https://wiki.ubuntu.com/SergeHallyn_spice for what openstack would need to do
[21:59] <Daviey> hallyn: yeah, that is the plan
[22:00] <hallyn> stgraber: all right i think i left your tree in a working state, with the wait timeout and shutdown defined
[22:00] <Daviey> hallyn: but i thought spice required access to the host qemu, not just the 'machine'
[22:01] <Daviey> to use a spice remote client, that is
[22:10] <hallyn> Daviey: i don't understand what you're saying.  it requires different args to qemu.  if openstack can change those based on ami or some setting, then it'll work
[22:11] <hallyn> (or, different xml contents)
[22:35] <halvors> pdtpatrick: May i ask you some questions in pm?
[22:36] <pdtpatrick> i'll try to answer
[23:16] <Daviey> hallyn: I mean getting a remote desktop.. i thought the spice protocol was exposed through the host, rather than networking from the outside world to the vm
[23:27] <hallyn> Daviey: how is that different from vnc?
[23:46] <Daviey> hallyn: it's not, but i can't use a vnc client from my local machine atm.  There is a web-based novnc client via the openstack web dashboard tho.. but that is really for debugging, rather than a 'desktop experience'
[23:46] <Daviey> hallyn: I'm trying to work out if it makes sense to make spice part of the story.
[23:55] <hallyn> Daviey: i think it makes sense.  now if openstack isn't amenable to exporting the spice port somehow from the cpu host, maybe we can do something where spice runs on the client and exports its own desktop.  basically like how you could run x11 inside vnc on the guest.
[23:57] <hallyn> interesting problem