[00:05] <daniel_-> can somebody help me? my log files, e.g. auth.log, syslog, mail.log, are all empty except the same files with a trailing number, e.g. auth.log.1
[00:06] <daniel_-> is this wrong? :/
[00:08] <sarnold> daniel_-: a great many of my log files are also size 0, seems normal enough
[00:08] <qman__> daniel_-, that just means nothing has happened that needs logging since the last time logrotate ran
[00:08] <sarnold> daniel_-: I think there was an update to the standard syslog rules that reduced the number of logged files, and probably logrotate doesn't know to stop making new ones...
[00:09] <sarnold> (though my empty wtmp is surprising..)
[00:09] <daniel_-> but a lot happened to auth.log, and it all goes into auth.log.1
[00:10] <daniel_-> I mean like sshd attempts
[00:11] <qman__> daniel_-, it goes into auth.log, but nothing has happened since the last time logrotate ran
[00:11] <sarnold> daniel_-: the .1 version gets _new_ entries??
[00:11] <qman__> which could have been a minute ago
[00:13] <daniel_-> sarnold: yes the .1 gets new entries
[00:14] <daniel_-> all *.1 get the new entries. But I guess then it's the default
[00:20] <mkeys> so still having this problem, ter in #udev suggest ubuntu support. :)
[00:20] <mkeys> (unable to enumerate usb disk at boot)
[00:21] <sarnold> daniel_-: if new entries are going into the .1, that is a mistake. rsyslog _should_ have closed those files after the logs rotated. Hrm.
[00:23] <sarnold> daniel_-: which log files are open in: sudo lsof -p `pidof rsyslogd`   ?
[00:24] <sarnold> mkeys: some more options, you could try askubuntu.com or file a bug in launchpad (if you can be confident of which package is failing you in which way)
[00:25] <daniel_-> sarnold: a lot of *.1 are open
[00:25] <sarnold> daniel_-: I'd suggest filing a bug. That's not supposed to happen.
[00:25] <sarnold> (perhaps someone already has filed a bug?)
[00:26] <daniel_-> alright! thx for your help sarnold!!
[00:26] <sarnold> daniel_-: in the meantime, a "kill -HUP `pidof rsyslogd`" _should_ fix those.
[00:28] <daniel_-> now writing to auth.log has begun
[00:28] <daniel_-> *.1 are killed
[00:29] <daniel_-> thx man!
[00:29] <sarnold> daniel_-: check your logs for anything from logrotate; there _may_ be some more details in the logs
[00:30] <sarnold> e.g., if logrotate is confined by AppArmor and does not have capability kill, it may not have been able to alert rsyslogd about the rotated files
[00:30] <sarnold> (only one of many potential reasons for failure)
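For reference, the rotate-then-signal handshake being debugged above normally lives in logrotate's own config. A sketch of such a stanza (illustrative only: the real /etc/logrotate.d/rsyslog on Ubuntu reloads rsyslog through its init script rather than a bare kill, and the file list here is an assumption):

```
/var/log/syslog
/var/log/auth.log
{
        rotate 7
        daily
        missingok
        notifempty
        compress
        delaycompress
        sharedscripts
        postrotate
                # ask rsyslogd to reopen its (now-rotated) log files
                kill -HUP "$(pidof rsyslogd)"
        endscript
}
```

If this postrotate step fails (e.g. blocked by AppArmor, as sarnold speculates), rsyslogd keeps writing to the rotated *.1 files, which matches the symptom above.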
[00:58] <stgraber> hallyn: is an up to date quantal container booting for you?
[01:02] <hallyn> stgraber: in q i assume?
[01:02] <hallyn> my q laptop is over (waves) there
[01:02] <hallyn> get back to you in a bit
[01:03] <stgraber> hallyn: yeah, on q. All I'm getting at startup time are mountall Event failed errors
[01:05] <stgraber> I'm also getting a whole bunch of errors from upstart in dmesg
[01:05] <stgraber> [40355.415061] init: Failed to spawn mounted-proc main process: unable to change root directory: No such file or directory
[01:05] <hallyn> (wgetting)
[01:08] <stgraber> hallyn: hmm, looks like something with the pivot root went quite wrong here... it also wiped the content of my laptop's /tmp at startup... rebooting
[01:09] <hallyn> hooooly cow
[01:09] <hallyn> hm mine failed on debootstrap.  what on earth?
[01:10] <hallyn> oh, bad network
[01:12] <stgraber> hallyn: found the issue, it's totally my fault
[01:13] <stgraber> hallyn: I'm using lxc from staging which doesn't have a default lxc config (doesn't read /etc/lxc/lxc.conf) which means my container didn't have any networking config
[01:13] <stgraber> which explains why mountall failed (well, kinda)
[01:13] <stgraber> then I turned off apparmor to see if that was the issue and that triggered the rest of the mess
[01:14]  * stgraber really needs to push that lxc-create change upstream and get a default lxc.conf implemented there too, not sure all of our users could debug that kind of weird mess :)
[01:14] <hallyn> default lxc.conf upstream - still not sure how we can swing that
[01:14] <hallyn> unless we default to an empty netns
[01:14] <hallyn> which wouldn't be that bad i guess
[01:15] <hallyn> we could also add a script "lxc-add-dev brX -n containerX" to create a new veth pair, hook one end up on bridge brX, and pass the other into the named containerX
[01:16] <stgraber> I think we should ship a default lxc.conf that includes a bit of documentation as comments, and ensures that a veth pair is set up but simply not bridged to anything
[01:17] <stgraber> that way you get your eth0, the container is happy, and you can always bridge it to whatever you want later on
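The default stgraber proposes could look roughly like this (key names per the LXC 0.x-era config format; the file path and values are illustrative, not the actual shipped default):

```
# /etc/lxc/default.conf -- illustrative sketch of an unbridged default
# A veth pair is created and brought up; the host end is deliberately
# not attached to any bridge, so the container sees an eth0 that the
# admin can bridge later.
lxc.network.type = veth
lxc.network.flags = up
lxc.network.name = eth0
# lxc.network.link = lxcbr0   # uncomment to attach the host end to a bridge
```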
[01:46] <hallyn> stgraber: just to be sure, q container started fine for me in q just now :)
[01:50] <stgraber> hallyn: good to know that the distro package is fine and it's indeed just my daily build that's missing some bits
[01:55] <hallyn> stgraber: so the kernel seems to have built fine in ppa:serge-hallyn/lxc-natty.  i haven't tested it for the netns leak yet
[03:55] <linocisco> hi all
[03:55] <linocisco> I could finally setup ubuntu mail server using postfix and dovecot
[03:57] <linocisco> I tried using Thunderbird on Windows and Android's IMAP email clients. It worked fine, but with disable_auth= no in dovecot.conf. I don't know what would happen if auth were enabled.
[04:32] <hallyn> stgraber: eh never mind, my silly idea didn't work.  depending on what smb shares with us in the morning i may see about bisecting
[04:48] <linocisco> who is running an ubuntu mail server in a production environment, like spanning 100 or 1000 domains across the globe? I would like to have that kind of knowledge shared. As my office just uses windows server globally, I could never get such experience
[05:08] <SpamapS> linocisco: "mail server" is hard to pin down. Web mail? SMTP/IMAP? More?
[05:08] <linocisco> SpamapS, SMTP/IMAP. I have never tried webmail.
[05:10] <SpamapS> linocisco: dovecot and postfix can scale to many thousands of domains
[05:12] <linocisco> SpamapS, is there any scenarios on step by steps setup on how to?
[05:12] <SpamapS> linocisco: https://help.ubuntu.com/12.04/serverguide/
[05:12] <SpamapS> specifically
[05:13] <SpamapS> https://help.ubuntu.com/12.04/serverguide/email-services.html
[05:16] <linocisco> SpamapS, yes. I read it already
[05:17] <SpamapS> linocisco: ok, so, whats your question?
[05:18] <linocisco> SpamapS, how to archive emails in portable readable format without needing email clients?
[05:18] <sarnold> is Maildir an option? easy, piece of cake, downside is it may eat too many inodes if you've got tons of tiny mails
[05:19] <linocisco> SpamapS, in novell groupwise, there is ArchiveToGo software , which can download emails and burn on CD or USB stick into readable format.
[05:19] <linocisco> sarnold, what are inodes? what does this mean?
[05:19] <sarnold> mbox is another option that might be tolerable, it's easily human-readable, but may lead to huge files if you're not careful
[05:19] <linocisco> sarnold, mbox is just text, all in one single file, as far as I learnt
[05:19] <sarnold> linocisco: an 'inode' is the basic unit of unix filesystem storage. every file has an inode.
[05:19] <sarnold> linocisco: indeed, that's what makes mbox so awesome.
[05:20] <SpamapS> linocisco: they're just ways to store email on disk
[05:20] <SpamapS> linocisco: I suspect you want a comprehensive system..
[05:21] <sarnold> linocisco: different filesystems can be optimized for different tasks; you may have fewer inodes if you expect your filesystem to contain nothing but gigabyte-sized files, you may have more inodes if you expect it to contain many small files.
[05:22] <linocisco> sarnold, so which would be the better option to archive, and how? Actually the administrator should delete the email accounts of transferred staff after archiving and giving them a copy. At another duty station, another administrator will create a new email account for them.
[05:23] <linocisco> sarnold, that is what my org is doing, not keeping a person's email for long. I don't know what the more intelligent idea is
[05:23] <sarnold> linocisco: if you're just keeping all your mail in spool files (a little odd, since you don't get folders that way, but the example is easy) then you just archive /var/spool/mail/sarnold and move on. If the person does have folders, it'll typically be stored in e.g. ~sarnold/Maildir or ~sarnold/Mail or some similar place. tar and rm as you see fit.
[05:25] <sarnold> linocisco: if you want to intentionally throw away mail, that takes a bit more effort. Probably a weekly / monthly cronjob run of procmail with appropriate rules over the mailboxes in question could do it. That feels pretty ugly though.
[05:25] <linocisco> SpamapS, in your point of view, what should be comprehensive system in my case?
[05:25] <linocisco> sarnold, that means email policy.
[05:25] <SpamapS> linocisco: GroupWise is.. massive. So. something else massive. Zimbra maybe.
[05:27] <linocisco> SpamapS, groupwise has calendar sharing and other intelligent business options. so alternative is Zimbra to be used with ubuntu mail server?
[05:28] <SpamapS> linocisco: sure, take a look
[05:28]  * SpamapS goes afk
[05:28] <linocisco> SpamapS, any other alternatives else?
[05:29] <sarnold> linocisco: if you want an MS Exchange-alike, look at these guys: http://en.wikipedia.org/wiki/Open-Xchange
[05:31] <linocisco> sarnold, actually I want to hear success stories of what Linux admins are doing with mail servers in their enterprises. I tried to find some in Full Circle magazine and whitepapers on ubuntu.com. I found a few
[05:32] <linocisco> sarnold, so that I can learn their tips and tools from stories, like their nightmare headaches. reading documents like wikis is just boring.
[05:32] <sarnold> linocisco: heh, I know the feeling.
[05:34] <sarnold> linocisco: everyone I know runs something like postfix+dovecot or postfix+uw-imapd, except for one guy who runs postfix+powermail (downloads.powerdns.com/documentation/powermail/html/)
[05:34] <sarnold> linocisco: I don't know anyone who runs email for 100K+ user organizations though, so...
[05:35] <linocisco> sarnold, thanks anyway
[06:43] <linocisco> sarnold,            http://summit.open-xchange.com is cool
[07:34] <smb> SpamapS, Not sure whom you sent email yesterday. But it may not have been me... ;)
[08:37] <smb> Daviey, So what is the problem with xen (nobody caring to upload it)?
[08:40] <Daviey> smb: no, not that!
[08:41] <Daviey> smb: I did bounce a question to you on Friday.. can't remember what it was
[08:41] <smb> Daviey, I guess you mean the warning about email address not being ubuntu
[08:41] <Daviey> hmm
[08:42]  * Daviey re-reviews
[08:44] <Daviey> smb: Ah yes.. it was conflict/replaces
[08:44] <smb> Daviey, Which I answered that I left them in as they were left in with the previous rc1 upload
[08:45] <Daviey> ok, ok!
[08:45]  * smb growls
[08:46] <Daviey> smb: libxen3 was dropped in natty?
[08:47] <Daviey> which means that lucid->precise is the only upgrade path.. meaning this can be dropped.
[08:49] <smb> Daviey, Yeah I thought so too when doing the merge at the beginning of the cycle. zul kept them in. And I would not change that right now. It should not matter really
[08:49] <smb> We should drop those for R
[08:50] <Daviey> true
[08:50] <Daviey> ok
[08:51] <Daviey> smb: so i updated the changelog to point to quantal, and updated the maintainer
[08:52] <smb> Daviey, Ok, yeah, quantal was my fault and the maintainer something that was always "wrong" before
[08:53] <smb> eh no
[08:54] <smb> Daviey, You are right, messed that up and ignored it as I thought it complained about canonical
[11:08] <janek_> hi guys, I have just set up my first ubuntu server and wanted to completely switch off the logs related to the eth0 I am working on. Any help would be appreciated.
[11:46] <va> Hi. In ubuntu 12.04 server, gnome-control-center's 'unlock' button is inactive if logged in through Xrdp or through an LTSP thin client (works if logged in locally). It says "system policy prevents changes. contact admin" on hover. Anyone know how to enable it or what could be causing this?
[11:58] <mdeslaur> va: it's caused by policykit. There's likely a policykit rule file that needs you to be on the console to get appropriate permissions.
[11:59] <mdeslaur> va: look in /usr/share/polkit-1/actions
[12:00] <mdeslaur> va: specifically in org.freedesktop.accounts.policy
[12:00] <Guest4020> If i have a server on let's say 192.168.0.1 and I want to redirect all user to this IP if they try to reach me at 10.5.24.10x would the following Code do the job with Iptables?
[12:00] <Guest4020> iptables -t nat -A OUTPUT -d 10.5.24.10x -j DNAT --to 192.168.0.1
[12:01] <Guest4020> Or how about this iptables -t nat -A PREROUTING -i eth1 -j DNAT --to 192.168.0.1
[12:05] <Guest4020> Well is there anyone able to solve my problem ?
[12:06] <Guest4020> So you are able to change my username but answering is impossible
[12:06] <Guest4020> Great why am I even here
[12:09] <Guest4020> Hello ?
[12:09] <va> Guest4020: iptables -t nat -A PREROUTING -i eth1 -d 10.5.24.101 -j DNAT --to 192.168.0.1  seems what you want
[12:10] <Guest4020> Great thank you man :)
[12:10] <va> whether this will work if 192.168.0.1 is on the same machine that does the nat i'm not sure
[12:10] <va> u might need some additional trick
[12:10] <Guest4020> That would be ?
[12:11] <Guest4020> But actually it is one the same machine, I'm just curious :o
[12:13] <va> Guest4020: hm, yeah you probably wouldn't even want to do this NAT if the IP was on the same machine, I don't know why I thought about it, got confused with tricks myself
[12:20] <Guest4020> It's just about testing, I know it does not make enormous sense at all :D
[12:33] <caribou> Can someone tell me if 'bzr builddeb' is used frequently to build off LP branches ?
[12:34] <caribou> just wondering if I should get used to using it or continue manually
[13:18] <zooko> Hm, I see that an automated bring-up of an Ubuntu server has stopped on this debconf query: http://codepad.org/aBgISt20
[13:19] <zooko> The command that started this was: apt-get upgrade -y
[13:19] <zooko> Seems like "-y  Assume Yes to all queries and do not prompt" isn't quite working as advertised.
[13:21] <zooko> Ah: http://askubuntu.com/questions/146921/how-do-i-apt-get-y-dist-upgrade-without-a-grub-config-prompt
[13:22] <zooko> Oh! New AMIs shouldn't have had this problem?
[13:22] <zooko> I must have gotten a stale AMI ID just now then. Whoops.
[13:23] <SpamapS> zooko: you using cloud-images.ubuntu.com ?
[13:23] <zooko> SpamapS: I used alestic.com.
[13:24] <zooko> Wait, no I didn't.
[13:24] <zooko> Hrm.
[13:24] <zooko> Yeah, I used this: http://cloud.ubuntu.com/ami/
[13:25] <zooko> Is http://cloud.ubuntu.com/ami/ not the right place to find AMI ids?
[13:29] <jcastro_> zooko, those should be up to date
[13:29] <zooko> jcastro_: they say 20120424 on them,
[13:29] <jcastro_> but yeah, http://cloud-images.ubuntu.com/ is much nicer imo
[13:30] <zooko> and the lp ticket says the bug was fixed 201206
[13:31] <jcastro_> man, these look way out of date
[13:31] <zooko> And, the AMI ID I got from http://cloud.ubuntu.com/ami/ has the bug.
[13:31] <jcastro_> daker, ping
[13:31] <zooko> Daily builds? Yikes, that doesn't sound like what I want!
[13:31] <zooko> http://cloud-images.ubuntu.com/precise/
[13:33] <jcastro_> http://cloud-images.ubuntu.com/releases/precise/release/
[13:33] <jcastro_> is what you want
[13:33] <jcastro_> (they're under a releases directory)
[13:33] <jcastro_> though, why the dailies are in the root instead of under /dailies is beyond me
[13:35] <zooko> Thanks!
[13:37] <jcastro_> I filed a bug, thanks for bringing it up!
[13:37] <zooko> Thank you!
[13:37] <zooko> Bug # please?
[13:37] <zooko> Or URL...
[13:38] <zooko> Found it.
[13:38] <zooko> https://bugs.launchpad.net/ubuntu-cloud-portal/+bug/1060199
[13:41] <zooko> Okay, in a minute or so https://leastauthority.com should be back in operation using the recommended Precise AMI. Thanks for your help!
[13:46] <jcastro_> zooko, wow, that's really cool
[13:47] <zooko> jcastro_: thanks! I'm excited about it!
[13:47] <zooko> It's just me and two other folks. ☺
[13:47] <jcastro_> that is quite excellent
[13:47] <zooko> Doing a test signup to see if the resulting Precise server comes all the way up...
[14:01] <zooko> Whoops... EC2Error: Error Message: The AMI ID 'ami-32845d2f' does not exist
[14:01]  * zooko investigates...
[14:02] <zooko> I wonder where that AMI ID came from. Oops...
[14:02] <SpamapS> zooko: most common problem is you chose the wrong region
[14:05] <zooko> SpamapS: yep. sa-east-1
[14:05] <zooko> sa?
[14:05] <zooko> South America. Neat. But yes, that was the problem.
[14:05] <zooko> Thanks!
[14:06] <doko> Daviey, what's the status of https://bugs.launchpad.net/ubuntu/+source/freeipmi/+bug/1052056 ?
[14:07] <daker> jcastro_: pong
[14:07] <jcastro_> daker, I filed a bug on it, the images are out of date in the AMI browser?
[14:09] <daker> oh yeah :(
[14:09] <daker> i need to fix that
[14:24] <zooko> Hm, so linux-ec2 and linux-image-ec2 are no longer present. I'm changing my setup from lucid to precise.
[14:24] <zooko> Is there a new package that I should install instead?
[14:24] <Daviey> doko: looking
[14:24] <Daviey> doko: it's In Progress :)
[14:25] <Daviey> roaksoax: Have you been able to do the things jdstrand requested for freeipmi?
[14:25] <doko> Daviey, I'd say rather incomplete ...
[14:27] <Daviey> doko: No, the bug report has enough information to allow a developer to undertake the work, and it is assigned. :)
[14:28] <roaksoax> Daviey: howdy, no not completely
[14:29] <doko> Daviey, no, in-progress is not a status for a developer to finish the mir. but anyway, if it's being worked on ...
[14:40] <roaksoax> Daviey: the only thing missing is fixing the compiler warnings... if someone could give a hand with that it would be great :) http://paste.ubuntu.com/1256058/
[14:48] <hallyn> stgraber: d'oh, the grub update failure in lxc is real.
[14:49] <doko> roaksoax, these are about unused results. so check the result, and error out in case that an error is returned
[14:50] <roaksoax> doko: will do
[14:55] <Daviey> doesn't dh do that automagically?
[14:57] <SpamapS> hallyn: yeah I ran into that yesterday
[14:57] <SpamapS> hallyn: have it on my TODO to report the bug
[14:58] <SpamapS> hallyn: patch should be simple enough.. does it ever make sense to install grub in a container?
[14:58] <SpamapS> hallyn: or rather.. to configure it in a container.
[15:10] <Daviey> zul: can you check bug 1059907 isn't a binary depends?
[15:11] <hallyn> SpamapS: it might, if you're using a loopback block dev as backing store and intend to later boot it in kvm
[15:11] <Daviey> zul: a source depends is no problem, but if there is a binary dpeend, we should fix it.
[15:11] <hallyn> but in general the answer is no
[15:11] <zul> Daviey:  build depends
[15:11] <Daviey> zul: certain?
[15:11] <zul> its not a binary
[15:11] <zul> yes checked it before i wrote that response
[15:12] <zul> Daviey: very certain
[15:14] <Daviey> zul: thanks
[15:14] <SpamapS> hallyn: so perhaps the answer is not to fail postinst if root is not a block device.
[15:15] <hallyn> SpamapS: or even if no access to the device
[15:15] <hallyn> right now, no access would mean cgroups.  Next cycle, it might mean different user namespace
[15:15] <hallyn> changing locale, biab
[15:16] <roaksoax> doko: do you have a sample code for the unused result check?
[15:16] <roaksoax> i haven't touchd C in years
[15:22] <zul> Daviey: i just uploaded stevedore to binary new can you please review it (dep for ceilometer)
[15:22] <Daviey> ok
[15:25] <doko> roaksoax, no, not at hand. afk today early, and tomorrow is bank holiday
[15:41] <skrite> hey all, i am looking for an easy to run and configure web server distro that i can just put in a VM on ubuntu.. our company needs a mail server with a fqdn but i am trying to avoid a lot of config pain. any ideas?
[15:52] <rbasak> skrite: yeah that should work
[16:08] <zul> hallyn: i havent been able to reproduce the libvirt hostname thing
[16:08] <hallyn> hostname?
[16:19] <_yac_> i'm fiddling with xen in ubuntu server 12.04. i have a working bridged network setup but want to try a routed setup. is it safe for the dom0's networking to alter the xend-config.sxp to this effect? also comment out xenbr0 and comment back in the standard eth0 fare. pitfalls?
[16:20] <eutheria> is there still an ubuntu directory server project?
[16:22] <hallyn> zul: i'm not sure what you're talking about
[16:23] <hallyn> stgraber: ubuntu containers don't have grub installed.  ubuntu-cloud containers (both precise and q) do, and updates are failing
[16:23] <zul> the thing we were talking about yesterday
[16:23] <hallyn> zul: with nova?
[16:23] <zul> hallyn: yeah
[16:23] <hallyn> zul: cool, thanks for the info
[16:23] <hallyn> he did say 'once a month or so'  :)
[16:23] <hallyn> iow bug reproducer's nightmare
[16:24] <zul> dah
[16:25] <hallyn> doo
[16:36] <adam_g> jamespage: hey, about OVS 1.4.3.. i'm going to propose lp:~gandelman-a/ubuntu/quantal/openvswitch/1.4.3 for uploading. there's a 1.4.3 package in ppa:gandelman-a/ppa available for testing. any chance you can give that a run through your automated testing first (and possibly sponsor the upload :)?
[16:36] <jamespage> adam_g, yes
[16:36] <jamespage> lemme get my sprint out of the way for today and I'll test and upload later on
[16:36] <adam_g> jamespage: great, thanks. shall i propose the merge and subscribe you to the bug?
[16:37] <jamespage> sure - sounds good
[16:54] <mercsniper_> anyone offer assistance with maas?
[16:56] <melmoth> mercsniper_, there s a #maas channel as well.
[16:58] <mercsniper_> melmoth: thank you, I asked the question there and I can ask it here, is the cloud-init package still out of date?
[16:59] <smb> SpamapS, stgraber, Ok, so the v2 test kernel is up on people. If you got time I'd be quite interested to see how that goes. :)
[16:59] <hallyn> stgraber: so were you going to file that as a bug against grub?
[17:00] <hallyn> i'm wondering whether we do a simple 'is-container' check at top of update-grub, or check deeper inside grub whether we have write access to the root dev
[17:01] <stgraber> hallyn: I vaguely remember smoser or SpamapS mentioning that bug on #ubuntu-devel yesterday, maybe they already have a bug report for it somewhere
[17:01] <stgraber> smb: thanks, will update and reboot soon
[17:01] <smoser> i did not mention such a thing. but that would be nice.
[17:01] <hallyn> stgraber: i don't think so.  SpamapS said it was on his todo
[17:02] <SpamapS> It was on my todo to report it as a bug and suggest a patch to cjwatson
[17:02] <SpamapS> and still is actually
[17:02] <stgraber> hallyn: as LXC doesn't really support full disks but instead only partitions, I guess it's fine to just exit 0 if is-container returns 0
[17:04] <smoser> i'd say that supporting partitions has nothing to do with it.
[17:04] <hallyn> stgraber: i do fear that eventually we'll find there are cases where we want to support it, but for now it seems best
[17:04] <hallyn> i'll whip up a patch this afternoon
[17:04] <smoser> even if it worked on a partition it isn't going to have an effect
[17:05] <hallyn> SpamapS: if you report the bug, please give # here, else i'll report it after lunch
[17:05]  * hallyn bbl
[17:12] <stgraber> smoser: well, I was thinking of a case where someone would use LXC to fix/upgrade an existing system (VM/external disk), but as LXC won't let you see the whole device anyway grub won't be able to update the mbr
[17:46] <adam_g> zul: where does the 'websockify' python module come from?
[17:48] <melmoth> adam_g, universe ? http://packages.ubuntu.com/quantal/i386/websockify
[17:49] <zul> adam_g: what do you mean?
[17:49] <adam_g> # apt-get -y install websockify ; python -c 'import websockify'
[17:49] <adam_g> import error
[17:50] <zul> w...t...f\
[17:50] <adam_g> also
[17:50] <adam_g> why does novnc provide an init script that starts nova-novncproxy?
[17:50] <zul> adam_g: i have no idea that area is a big mess
[17:51] <melmoth> i realised both packaged conflicted one with another the hard way last week.
[17:51] <adam_g> melmoth: is there a bug?
[17:52] <melmoth> https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1055505
[17:52] <adam_g> melmoth: thanks
[17:53] <adam_g> ugh
[17:53] <melmoth> ohhh, it s fixed :-)
[17:56] <adam_g> melmoth: no, its not
[18:07] <adam_g> zul: consider yourself subscribed: bug #1060374
[18:18] <zul> thanks
[18:20] <jamespage> adam_g, openvswitch tested OK
[18:20] <jamespage> preparing the upload now
[18:22] <stgraber> smb: the little bit of stress test I did on your kernel here didn't show any hang, though I'm not nearly as good at reproducing the bug as SpamapS :)
[18:23] <adam_g> jamespage: super thanks
[18:24] <smb> stgraber, I hope he won't be successful this time. ;) And interestingly proposing this variation seems to have refreshed some memory upstream. And I got some pointers to 3 patches in linux-next...
[18:24] <jamespage> adam_g, its in the unapproved queue
[18:25] <jamespage> Daviey, ^^
[18:25]  * jamespage goes for food
[18:27] <Daviey> jamespage: reviewing
[18:45] <zul> adam_g: websockify wasn't actually a python module per se... anyway it's fixed upstream, just needs a FFE
[18:45] <zul> Daviey: ^^^
[18:59] <adam_g> zul: yea...
[19:00] <adam_g> zul: are you uploading something?
[19:01] <zul> adam_g: yeap its pending review
[19:01] <adam_g> zul: okay
[19:02] <adam_g> zul: as i predicted, that tgt config change broke nova-volume
[19:03] <hallyn> SpamapS: filed bug 1060404
[19:03] <zul> adam_g: the upgrade or just everything?
[19:05] <adam_g> zul: the removal of the '--conf' option from calls to tgt in nova-volume breaks nova-volume
[19:06] <zul> adam_g: looks like we are going to have to carry at patch
[19:06] <zul> er....a patch
[19:06] <adam_g> zul: ?
[19:06] <zul> blah....*grumble* *grumble*
[19:06] <adam_g> zul: specifying --conf causes other bugs, which is why it was removed
[19:07] <adam_g> zul: the real problem is changing the 'include' statements in tgt's config. includes can only be specified in /etc/tgt/targets.conf, not from within included files in /etc/tgt/conf.d/
so dropping nova_tgt in /etc/tgt/conf.d/ to include /var/lib/nova/volumes/ doesn't work, and never really did :|
[19:08] <zul> adam_g: but i have been running fine with it
[19:09] <adam_g> zul: no, you have never been using it. if --conf is specified to tgt-admin, /etc/tgt/targets.conf is never even consulted
[19:09] <zul> i havent
[19:18] <SpamapS> hallyn: thanks for filing that. I only have one bit of feedback on your debdiff, which is to consider using debconf so that it can be internationalized
[19:18] <SpamapS> hrm
[19:18] <SpamapS> actually thats daft
[19:19] <SpamapS> I wonder if there is a simple way to access 'templates' from maintainer scripts without telling debconf to nag the user with a question
[19:19] <Daviey> SpamapS: priority
[19:20] <Daviey> but remember, debconf isn't a registry :)
[19:20] <SpamapS> Daviey: what I'm saying is, I want it to print out text that translators have a chance at internationalizing
[19:20] <SpamapS> its silly
[19:21] <SpamapS> server.. containers.. we can skip i18n for now :)
[19:21] <Daviey> Yeah, only en_GB matters TBH
[19:22] <rbasak> :-)
[19:22] <zul> adam_g: do you want to prep an upload
[19:23] <adam_g> zul: yes im filing a bug
[19:24] <zul> adam_g: k prep one for the cloud-archive and ill ack it
[19:24] <adam_g> zul: also, i've committed the missing xvpvnc / novnc stuff
[19:24] <zul> adam_g: good
[19:24] <adam_g> but i'd like some input from someone who knows htf this is supposed to be packaged
[19:25] <SpamapS> hallyn: ok forget my previous comment, but also the language needs some work. "not running because it is in a container" is a bit.. weird
[19:25] <zul> adam_g: check with vishy
[19:26] <SpamapS> hallyn: "Declining to perform automatic grub install in container." might make more sense.
[19:29] <Poapfel> my sysadmin gave me an ipv6 address and an ipv6 gateway, how do I use these two things to enable ipv6 on this server?
[19:29] <Poapfel> (I was able to use ipv6 for a couple of minutes, but after closing my ssh session it was disabled or so)
[19:30] <Poapfel> (by editing /etc/network/interface)
[19:33] <Jeeves_> Poapfel: Just configure another ethX
[19:34] <Jeeves_> Instead of 'static inet', you configure it as 'static inet6'
[19:35] <Poapfel> I did that...
[19:37] <Poapfel> Jeeves_: this is what my /etc/network/interfaces looks like http://paste42.de/4183/
[19:37] <Jeeves_> Your gateway is wrong
[19:38] <Jeeves_> That should be just an address
[19:38] <Poapfel> what is with this /48?
[19:38] <Jeeves_> Also, it suggests that you are in a /48, not a /64
[19:38] <Jeeves_> That's your netmask
[19:38] <Poapfel> but the address had a /64 at the end
[19:38] <Poapfel> therefore I thought the netmask is 64
[19:39] <Poapfel> and not 48
[19:39] <Poapfel> should I change the address to xxx:xxx:xxx:xxx:xx:0:0:0/64?
[19:39] <Poapfel> and the netmask to 48?
[19:40] <hallyn> SpamapS: I'm fine with wahtever :)  if you come up with something good, could you post it in the bug?
[19:41] <Jeeves_> Poapfel: What did your admin tell you?
[19:41] <hallyn> at this point I'm waiting for cjwatson input :)
[19:42] <maswan> hm. I'm not sure I've done a static gw in v6 actually. but I would think that it was just an IP, not an IP with netmask
[19:43] <maswan> Poapfel: typically the address ends in something that's not 0
[19:43] <maswan> like 2001:6b0:e:2018::163
[19:44] <DataCruncher> I'm running a minecraft server on Ubuntu Server. I am remotely connected with putty. I would like to be able to exit putty without shutting down the server. Is there any way to do that?
[19:44] <maswan> and gateway should just be the ::1 stuff, no /whatever
[19:44] <maswan> I think
[19:44] <Jeeves_> Indeed
[19:44] <Jeeves_> Just an address
[19:44] <Jeeves_> the same for 'address'
[19:44] <maswan> netmask goes in the netmask field in network/interfaces
[19:44] <Jeeves_> THe netmask is only mentioned in 'netmask' :)
[19:44] <DataCruncher> Anyone?
[19:45] <maswan> which you already have there
[19:45] <Jeeves_> Kinda makes sense! :)
[19:45] <maswan> DataCruncher: start it inside screen(1)
[19:45] <Jeeves_> DataCruncher: screen
[19:45] <Poapfel> hm
[19:45] <Poapfel> I am going to change the netmask to 48
[19:46] <Poapfel> and to be more specific, my sysadmin gave me the following two pieces of information (without any comments):
[19:46] <Poapfel> 2a00:12c0:1015:100:44:0:0:0/64
[19:46] <maswan> Poapfel: Almost all the time the netmask is going to be 64 on ipv6
[19:46] <Poapfel> GW: 2A00:12C0:1015::1/48
[19:46] <Poapfel> I guess the first one is the adress and the second on is the gateway, right?
[19:46] <maswan> ok, that seems to be a network definition for router setup
[19:47] <DataCruncher> Maswan/jeeves: I'm confused, how would I do that?
[19:47] <DataCruncher> And what exactly does it do?
[19:47] <Poapfel> maswan: well...
[19:47] <maswan> DataCruncher: first you start screen, then you get a new shell inside that and then you can start the minecraft server process inside there. then you can detatch screen by hitting ctrl-a d and logout. later you can login and use "screen -x" to re-attach to the running server
[19:48] <Poapfel> it is still a ipv6 address then, isn't it?
[19:48] <maswan> yeah
[19:48] <Poapfel> hm
[19:48] <maswan> well, 2A00:12C0:1015::1 is an ipv6 address
[19:48] <maswan> 2a00:12c0:1015:100:44:0:0:0 is a network, you have to choose an IP inside of it
[19:49] <maswan> like 2a00:12c0:1015:100:44::5
[19:49] <Poapfel> oh
[19:50] <Poapfel> but I thought that 2A00:12C0:1015::1 is my gateway
[19:50]  * Poapfel is a total noob when it comes to networking
[19:50] <maswan> yeah, that's what he said it was
[19:51] <maswan> but I don't really understand that bit either, since usually you need the gateway to be inside your network
[19:51] <Poapfel> maswan: I am pretty much confused now...
[19:52] <Poapfel> what should I enter as an ip address now?
[19:52] <maswan> Poapfel: yeah, so am I. could that be instructions for setting up a router for a whole subnet?
[19:52] <Poapfel> maswan: no, I don't think so
[19:53] <maswan> Poapfel: I'm pretty confused at the instructions too then. :/
[19:53] <Poapfel> :(
[19:53] <maswan> and I've done ipv6 admin on ubuntu for a few years now
[19:53] <Poapfel> hm
[19:53] <Poapfel> well...it is a kvm based vserver which is part of a big data center, but I don't know if this information matters
[19:54] <Poapfel> (probably it doesn't)
[19:54] <maswan> you could try choosing an IP in 2a00:12c0:1015:100:44::, like 5. and try the gateway 2A00:12C0:1015::1 and readjust the netmask to 48 and see how that works
[19:54] <maswan> because that's confusing, a network has a mask, not an IP. and the gw is just an IP
[19:57] <Jeeves_> maswan: You disappoint me :)
[19:58] <Jeeves_> 2a00:12c0:1015:100:44:: is just as much an ip as 2a00:12c0:1015:100:44::1
[19:59] <Poapfel> Jeeves_: so is 2a00:12c0:1015:100:44::1 my ip?
[20:00] <Jeeves_> Poapfel: yes, possibly
[20:00] <Jeeves_> But, the gateway-address you've got is outside the /64 you're configuring
[20:00] <Poapfel> hm?
[20:01] <Jeeves_> Poapfel: Your /64 network starts at 2a00:12c0:1015:0100:0000:0000:0000:0000 and ends at 2a00:12c0:1015:0100:ffff:ffff:ffff:ffff
[20:02] <Jeeves_> Your gateway is at 2A00:12C0:1015:0000:0000:0000:0000:1
[20:02] <maswan> Jeeves_: wouldn't the network address be a bad idea for a host address in ipv6 still?
[20:02] <Jeeves_> Which you cannot reach from 2a00:12c0:1015:100::
[20:02] <Jeeves_> maswan: ipv6 doesn't have network or broadcast addresses
[20:02] <maswan> Jeeves_: ah
[20:03] <Jeeves_> link-local
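Putting the pieces above together, a sketch of the /etc/network/interfaces stanza being discussed (the ::5 host address is an arbitrary pick from the /64 as maswan suggested, and the /48 netmask is only there so the out-of-subnet gateway stays reachable):

```
# /etc/network/interfaces (sketch, values from the discussion above)
iface eth0 inet6 static
    address 2a00:12c0:1015:100:44::5
    netmask 48
    gateway 2a00:12c0:1015::1
```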
[20:05] <Poapfel> btw: what is the correct way to reload the /etc/network/interfaces configuration? /etc/init.d/networking restart seems to be deprecated
[20:05] <DataCruncher> maswan: Just got it working, thanks for the help.
[20:05] <Poapfel> besides I always get the error "RTNETLINK answers: File exists. Failed to bring up eth0."
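On 12.04 the usual way to apply an /etc/network/interfaces change to a single interface, and to avoid the "RTNETLINK answers: File exists" error (which comes from re-adding an address or route that is still configured), is roughly:

```shell
# Take the interface down and bring it back up with the new config
sudo ifdown eth0 && sudo ifup eth0
# If stale addresses are left behind, flush them first:
sudo ip addr flush dev eth0
```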
[20:11] <hallyn> stgraber: smb: you know, in the end eth0 is just a nic like any other - i wonder if the dnsmasq preventing clean shutdown bug is actually also to do with routes not being cleaned out at shutdown
[20:11] <hallyn> probably not...
[20:56] <hallyn> stgraber: temporarily assigned bug 1017847 to you to make sure i grok it - is the failing case meant to be caught?
[21:01] <stgraber> hallyn: well, the problem is that we can't really know what architectures are supported by the running kernel
[21:01] <stgraber> hallyn: at least not in an easily parsable form for a bash script
[21:01] <stgraber> hallyn: so the code simply always calls qemu-debootstrap if it's installed and debootstrap if it's not
[21:02] <stgraber> so the actual failure to mount is probably in debootstrap's code
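The fallback stgraber describes could be sketched like this (a guess at the shape of the template logic, not the actual lxc template code):

```shell
# Prefer qemu-debootstrap when present; otherwise fall back to debootstrap.
if command -v qemu-debootstrap >/dev/null 2>&1; then
    DEBOOTSTRAP=qemu-debootstrap
else
    DEBOOTSTRAP=debootstrap
fi
echo "using $DEBOOTSTRAP"
```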
[21:05] <hallyn> stgraber: no, it's during the 'chroot $container apt-get update'
[21:05] <hallyn> (i believe)
[21:05] <hallyn> so actually, maybe it's just because qemu-arm's dependencies are no longer in our path?
[21:05] <stgraber> hallyn: that's surprising, it should fail way before that...
[21:06] <stgraber> hallyn: it could happen if you have a container in the cache but qemu-user-static isn't installed anymore
[21:06] <stgraber> then there isn't much we can do really...
[21:06] <hallyn> not sure what you mean
[21:06] <stgraber> you could get that kind of failure if you do:
[21:06] <hallyn> what i'm saying is it works with qemu-user-static but not qemu-user,
[21:07] <stgraber> lxc-create -t ubuntu -n p1 -- -r precise -a armhf
[21:07] <stgraber> apt-get remove --purge qemu-user-static
[21:07] <stgraber> lxc-create -t ubuntu -n p2 -- -r precise -a armhf
[21:07] <hallyn> so i guess it has to do with the kernel trying to fire off qemu-armel in the container's namespace, but it doesn't find the libs
[21:07] <stgraber> as p2 will copy from cache but can't execute as the binfmt handler is no longer there
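One way to check stgraber's theory on a given host: qemu-user-static registers its handlers under /proc/sys/fs/binfmt_misc, so if qemu-arm is missing there, the cached armhf rootfs can't execute (path assumed for a stock Ubuntu kernel):

```shell
# A non-empty listing (beyond the 'register'/'status' control files) means
# foreign-arch binaries can still run; the fallback echo keeps exit status 0.
ls /proc/sys/fs/binfmt_misc/ 2>/dev/null | grep -v '^register$\|^status$' \
    || echo "no binfmt_misc handlers registered"
```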
[21:07] <hallyn> yeah that would happen too...  doesn't seem any worse than the originally reported case
[21:08] <hallyn> so, what do we do?  :)
[21:08] <adam_g> zul: are you going to take care of a new websockify?
[21:13] <hallyn> hm, rsyslog keeps SEGVing in the armhf container
[21:14] <Daviey> hallyn: i saw a new rsyslog in the quantal queue btw
[21:15] <hallyn> hm
[21:15] <hallyn> this container was *just* created
[21:15] <Daviey> oh
[21:17] <Daviey> adam_g: I'll accept the new nova, but please can we have the man pages complete before release?
[21:17] <Daviey> perhaps track it via a bug?
[21:26] <hallyn> gotta say, today unity in qemu over spice looks nice
[21:51] <raub> Embarassingly easy question: can anyone spot what I am doing wrong here:
[21:51] <raub> ssh -t -K server1.domain.com 'sudo -v' && ssh -t server2.domain.com "stty -onlcr; sudo tar czf - /etc/ldap/ 2>/dev/null" | tar xvf -
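One likely culprit in raub's pipeline is the forced pseudo-tty (-t): a tty mangles the binary tar stream, which is what the `stty -onlcr` hack tries to paper over. If sudo on server2 is allowed to run without a tty, a cleaner variant (an assumption about the sudo policy, not a confirmed fix) is:

```shell
# No -t on the data-carrying ssh, so the tar stream stays binary-clean
ssh server2.domain.com "sudo tar czf - /etc/ldap/ 2>/dev/null" | tar xzvf -
```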
[21:56] <adam_g> Daviey: maybe. the manpages that sphinx generates are no less "stubby"
[22:02] <SpamapS> hallyn: were you able to test smb's latest kernel?
[22:02] <SpamapS> smb`: still failing w/ smb2 btw
[22:03] <SpamapS> is that the 'netns' kernel thread reporting that?
[22:04] <SpamapS> if we have a kernel thread, can't we tell it to go look for deadlocks?
[22:14] <hallyn> SpamapS: no, i haven't.  and i won't given it doesn't work for you :)
[22:15] <hallyn> SpamapS: eod here, i gotta run  but will be back on later tonight
[22:52] <SpamapS> whoa
[22:52] <SpamapS> uvirtbot: clock skew?
[22:56] <xymantec> Hi is anyone around :)
[22:57] <sarnold> irc tends to work better if you just ask :)
[22:57] <xymantec> I am having problems setting up a damn cron job... I have a lamp server (uOS 12.04.1) with php5-cli installed
[22:57] <xymantec> SERVER API = apache
[22:58] <xymantec> i "sudo vim /etc/cron.hourly"
[22:58] <xymantec> created a i.sh file
[22:59] <xymantec> inserted * * * * * php /var/www/cron/t.php
[23:00] <xymantec> and saved the file, but it looks like it's not working. I did do research and have come to the conclusion that i need to include the path to the php binary which is typically /usr/local/bin
[23:00] <sarnold> hopefully not /usr/local/bin/php, but /usr/bin/php -- check 'which php' for details there
[23:00] <xymantec> my question is how do I include this path, do i include it in the i.sh or the actual php script? and how do i include it
[23:01] <sarnold> but you've typed t.php and i.sh so far -- which are you running?
[23:01] <xymantec> i.sh is the shell script inside cron.hourly folder
[23:01] <xymantec> t.php is the script i want to run on a cron cycle
[23:02] <zul> adam_g: already have
[23:06] <xymantec> its /usr/bin/php
[23:06] <xymantec> do i put that in my shell script or in my php script?
[23:07] <sarnold> xymantec: I would. Most cron problems come from improperly specified paths
[23:07] <sarnold> xymantec: though it just strikes me; if you're using cron.hourly, you don't need the * * * * * time specification
[23:07] <sarnold> look at cron.daily or something for inspiration :)
[23:07] <sarnold> you only need the time specification for the "main" crontabs, not the "helper" crontabs
[23:07] <sarnold> (I hope that makes sense)
[23:08] <xymantec> gotcha well i really wanted to use it every hour, but for test purposes i have it clocked at every minute ;)
[23:08] <sarnold> well, the thing is, you've got a syntax error in your file :)
[23:08] <xymantec> hey since I have you helping, can you answer this. i tried using sudo crontab -e and it opened up a default file, the problem is i could never figure out how to save it.
[23:09] <xymantec> which file?
[23:09] <xymantec> the shell script?
[23:09] <sarnold> xymantec: depends on which text editor it started. if it started vi, use <esc>:wq   to save and exit
[23:09] <sarnold> if it started something else, you'll have to figure out how to drive that other editor
[23:09] <xymantec> how do i set up the system default editor, because i think that's part of the problem
[23:10] <xymantec> i installed vim-nox but for some reason it always uses some other crappy editor
[23:11] <sarnold> xymantec: I think I uninstalled nano or whatever just to get the /etc/alternatives/ to use vim always
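Rather than uninstalling nano, the default editor on Ubuntu can be chosen explicitly; both of these commands exist on 12.04 (they are interactive, so they just present a menu):

```shell
select-editor                              # per-user default used by crontab -e
sudo update-alternatives --config editor   # system-wide /etc/alternatives choice
```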
[23:11] <xymantec> I am assuming the correct syntax inside my shell script should be '* * * * * usr/bin/php x/path/phpscript.php '?
[23:12] <sarnold> forget those *****
[23:14] <xymantec> i understand you don't want me to have those but for testing purposes (every minute) is it ok to leave them in temporarily?
[23:15] <xymantec> I just want to make sure the script works
[23:15] <xymantec> script => cron
[23:15] <sarnold> xymantec: it's not a matter of testing
[23:15] <sarnold> if you want it run every minute, then put it into the /etc/crontab file directly
[23:16] <sarnold> if you want it every hour, take them out, and put it into the /etc/cron.hourly directory :)
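To summarize the two forms sarnold describes, using the filenames from the conversation (note that run-parts skips cron.hourly files whose names contain a dot, so the script should be named plain "i", not "i.sh"):

```shell
#!/bin/sh
# /etc/cron.hourly/i -- an ordinary executable script, no time fields
/usr/bin/php /var/www/cron/t.php
```

For the every-minute test, the line in /etc/crontab would instead look like `* * * * * root /usr/bin/php /var/www/cron/t.php` (system crontab lines need the extra user field before the command).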
[23:16] <xymantec> ok let me try sudoing that
[23:16] <xymantec> lol ok
[23:21] <xymantec> ok wrote to main crontab
[23:22] <xymantec> do i need to restart cron service or do i just restart apache?
[23:22] <xymantec> i guess ill see in a min if it works! :D
[23:22] <xymantec> YES it works
[23:23] <xymantec> sweet thanks a mil sarnold :)
[23:24] <xymantec> i can now go put ice on my forehead... (bangin my head against my desk lol)
[23:24] <sarnold> xymantec: woo :)
[23:43] <stgraber> SpamapS: had a chance to test smb`'s new kernel?