[00:02] <adam_g> Daviey: that did the trick, for now
[00:06] <Daviey> adam_g: cool
[00:06] <Daviey> adam_g: if that is the IP address that is punched through the wall, we should hardcode to it anyway
[00:09] <adam_g> Daviey: ive not had any issue getting the archive before, not sure what happened
[00:10] <bascotie> Lol they referred me to ubuntu-server but does anyone know a good Samba channel
[00:11] <bascotie> found one, #samba =P
[00:11] <bascotie> thank you
[00:14] <ohdae> Hey, I'm looking for a way to back-up/mirror data between two separate Ubuntu servers. Looks like rsync is my best bet..any other ideas?
[00:14] <SpamapS> ohdae: mirror and backup are not at all the same thing
[00:15] <SpamapS> ohdae: but rsync is useful for both use cases
[00:15] <ohdae> I didn't mean mirror == backup, i meant mirror and backup. I need the same exact content on both servers for specific folders, but I also need other specific folders to be backed up from one server to another
[00:16] <ohdae> but yeah, SpamapS, rsync looks like the easiest way. well, with what is included that is
[00:16] <twb> ohdae: rsync is a tried and true solution, I recommend it.
[00:17] <twb> ohdae: you may wish to wrap it (or librsync) with one of the higher-level wrappers, like rsnapshot or rdiff-backup.
[00:17] <chaos_zero> how plausible is it to run a windows type server program on ubuntu server?
[00:17] <twb> rsnapshot is reasonable, but a bit stale -- I'm trying to get funding to overhaul it in the next few months
[00:17] <ohdae> Can I explain the situation a bit? Maybe there might be a better solution, the rsync usage is just my guess..I've never performed backups across Ubuntu before, tbh
[00:18] <chaos_zero> like, what is the most minimal x that i can get to use wine?
[00:18] <twb> chaos_zero: zero.  It is a world of hurt.
[00:18] <twb> chaos_zero: if there is any other option, choose it
[00:18] <chaos_zero> well, i am trying to host a game server... i cant just reprogram it
[00:18] <twb> chaos_zero: in fact it would probably be less grief to buy a windows license and run it in a VM, than to try to run a w32 app inside wine inside Xvfb
[00:19] <twb> Oh, well, games. You probably have lower expectations ;-)
[00:19] <ohdae> I have 2 servers. DNS for my domain points to both, round-robin. I am not currently using caching for the webservers, so both servers need the same content exactly to serve the website. But there is also data on each server (code, text docs, etc) that I would like periodically backed-up to the opposite server.
[00:19] <chaos_zero> true
[00:19] <ohdae> Does that change my needs at all? heh
[00:19] <chaos_zero> it's mainly used for mysql, apache, bind 9 but there's plenty of processing power not used
[00:20] <twb> ohdae: website could be pushed with git; bind has the ability to magically sync between itself IIRC (look for "peer")
[00:20] <ohdae> Hmm, sounds much easier than what I'm doing currently.
[00:20] <twb> I guess if you have a fancy-pants website with php and mysql, plain git might not suffice.
[00:21] <ohdae> Most of the content is static, so I update each page manually and then sftp the updated .html files to the opposite server
[00:21] <SpamapS> ohdae: yeah I'd put the website in whatever your favorite VCS is (bzr, git, hg, svn ..etc) and pull the website from that. And then I'd backup the VCS somewhere else
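SpamapS's suggestion, sketched with git against temp directories (repo layout invented; any of the VCSes he lists would do):

```shell
#!/bin/sh
# Keep the site in a repo; each web server deploys by cloning/pulling
# instead of receiving sftp'd files.
set -e
work=$(mktemp -d)
git init -q "$work/site"
cd "$work/site"
git config user.email editor@example.com   # identity needed to commit
git config user.name editor
echo '<h1>hi</h1>' > index.html
git add index.html
git commit -qm 'initial site'

# On each web server, the docroot is just a checkout:
git clone -q "$work/site" "$work/docroot"
# later deploys: cd "$work/docroot" && git pull
```

This also gives the rollback property SpamapS mentions later: a bad deploy is just `git checkout` of an earlier revision.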
[00:21] <twb> rsync is definitely not the right option to keep *running* mysql instances in sync, because databases' files (usually) aren't coherent
[00:21] <chaos_zero> so how do you go about running vm in ubuntu server if there is no X
[00:21] <chaos_zero> ?
[00:21] <SpamapS> chaos_zero: libvirt
[00:21] <twb> chaos_zero: VMs don't need X
[00:21] <SpamapS> chaos_zero: kvm, xen, virtualbox, even vmware, will all work fine w/o X
[00:21] <ohdae> But I'm implementing nginx + php + twitter bootstrap, using mysql as a back-end to hold content (blog posts, etc)
[00:22] <chaos_zero> but its windows...and you know how gui-based *windows* is...
[00:22] <twb> SpamapS: except free version of vbox provides no means to get into the headless VM :-(
[00:22] <ohdae> and creating a local-only submission page. so I think that will solve my website-sync needs. But either way, I'll go check out rsync + librsync
[00:22] <SpamapS> chaos_zero: kvm will expose the graphics hardware as a VNC server you can connect to
[00:22] <twb> chaos_zero: kvm et al can export the guest's GUI via e.g. VNC
[00:23] <twb> chaos_zero: and once it is installed, if the guest is actually running, the guest can speak whatever it likes (e.g. SSH or RDP)
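The headless workflow twb and SpamapS describe might look like this; package names are as in Ubuntu of this era, all other values are illustrative, and virt-install option names vary between versions:

```shell
# On the headless host (no X needed):
sudo apt-get install qemu-kvm libvirt-bin virtinst

# Create a guest whose display is exported over VNC (illustrative values):
sudo virt-install --name winguest --ram 2048 \
  --disk path=/var/lib/libvirt/images/winguest.img,size=20 \
  --cdrom /srv/iso/windows.iso --graphics vnc,listen=0.0.0.0

# From another machine, point any VNC client at the host:
#   vncviewer serverhost:5900
# Once the guest OS is installed, connect to the guest directly
# via RDP or SSH, as twb notes.
```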
[00:23] <SpamapS> ohdae: you *really* need to have version control for your website
[00:23] <chaos_zero> i am too much of a noob to get how this is going to work...
[00:24] <SpamapS> ohdae: rsync will keep you in sync, but when one breaks, it will break the other one too on the next mirror cycle. ;)
[00:24] <ohdae> hmm true
[00:24] <SpamapS> ohdae: with VCS you can at least roll back
[00:24] <chaos_zero> is there an apt-get for that which you would recommend? I would start researching from that
[00:24] <SpamapS> ohdae: if you're a developer, I'd think you would already know that
[00:24] <ohdae> Oh I know, I've just never bothered heh
[00:25] <ohdae> Honestly, I have a sort of ghetto setup
[00:25] <ohdae> and it just got out of hand
[00:25] <adam_g> Daviey: fyi first test passed, second one deploying. should be sunny again soon
[00:25] <twb> SpamapS: you'd be surprised how many incompetent coders are out there, even taking the average idiocy of PHP into account.
[00:25] <ohdae> crappy stuff got built ontop of crappy stuff that shouldnt have been there to begin with
[00:25] <ohdae> Now I have a cluster-fuck on my hands
[00:25] <twb> ohdae: yeah it's called gnu/linux
[00:25] <ohdae> hehe
[00:25] <Daviey> adam_g: rocking, swift-milestone now green
[00:26] <Daviey> (1.4.6 -> 1.4.7 transition)
[00:26] <ohdae> Well...my "main" server is running Ubuntu..is running nginx, php, inspircd, silcd, mysql and it has ohh all of 5gb HD, 64mb RAM
[00:27] <ohdae> It actually still runs pretty well lol
[00:27] <SpamapS> Daviey: ready to chat 'bout rabbitmq?
[00:27] <SpamapS> ohdae: thats not a server, thats a miracle.
[00:28] <ohdae> SpamapS: Oh it's trucking along ;-)
[00:28] <Daviey> SpamapS: hmm, sorta.
[00:28] <SpamapS> ohdae: your cluster f*** is primarily due to you not being able to control change. VCS is the first step in that.
[00:29] <ohdae> Honestly, it still runs pretty good. granted my nginx doesnt get a whole lot of traffic, but it's balanced between that server and my secondary server (which is MUCH better, stats-wise)
[00:30] <twb> Last time I looked, I couldn't even successfully get through netboot d-i without at least 96MB of RAM.
[00:30] <ohdae>              total       used       free     shared    buffers     cached
[00:30] <ohdae> Mem:            54         51          3          0          1         15
[00:31] <ohdae> :o
[00:32] <chaos_zero> ok i have been reading a little...haha...anyway basically if you get virtualbox on ubuntu server you can connect and see the gui over a LAN to configure it then just let it run, is that correct or am i totally off?
[00:45] <SpamapS> twb: you can boot the cloud images w/ very little RAM. I'm sure 65MB will work.. though it will swap like a mother.
[00:46] <SpamapS> chaos_zero: you're better off with libvirt and virt-manager if you want a GUI
[00:46] <twb> SpamapS: yes, booting only needed 64MB (the kvm default), but to install I needed a little bit more
[00:47] <twb> SpamapS: otherwise it claimed to finish installing but non-trivial packages weren't unpacked properly and it couldn't get through init, IIRC
[00:47] <SpamapS> twb: right, so the downloadable cloud images would work fine. :)
[00:47] <twb> SpamapS: oh, and of course I had no swap -- who still uses swap these days
[00:47] <SpamapS> I'm one of those weirdos who thinks swap is more detrimental than the OOM killer
[00:48] <twb> Particularly swapping to a virtual disk inside the VM.  That's silly!  Overcommit the VM's RAM and rely on the host OS to do a single layer of paging
[00:48] <twb> SpamapS: don't install apps that are going to trigger the oom killer in the first place :P
[00:49] <twb> I used to believe in swap until I went for like twelve months where every time I started swapping hard, I couldn't ssh into the system or even log in on the local tty.  So I said "fuck that noise" and I put up with the oom killer, because it's more likely to allow me in enough to either recover or at least trigger a clean reboot.
[00:49] <chaos_zero> how am i supposed to do anything on a windows computer without gui
[00:50] <SpamapS> twb: right, thats exactly how I see it too
[00:50] <chaos_zero> from my exp the cmd is very limited compared to ubuntu
[00:50] <Daviey> zul: python-novaclient is broken because the build process accesses ~/ which it should not.
[00:50] <kklimonda> chaos_zero: there is power shell though
[00:51] <Daviey> zul: Oddly, it worked when it accessed ~/foo but not ~/foo/bar.txt
[00:51] <twb> kklimonda: monad can be a bit... special
[00:51] <Daviey> zul: https://github.com/openstack/python-novaclient/commit/7601bef9ef70ce69f544e0ffda904a04552bc38c broke it.
[00:51] <zul> Daviey: heh ok
[00:52] <zul> Daviey: looked like it
[00:53] <kklimonda> twb: meh, so can be sh ;)
[00:53] <kklimonda> twb: I don't really have much experience with it though - I did write few scripts when I had to, but not much more
[00:53] <chaos_zero> is there any package in the repository for the aforementioned libvirt?
[00:54] <twb> I know a guy at MS who was adding all sorts of crazy shit to it in his spare time
[00:54] <Daviey> zul: have ideas to work around it?  Make it respect (AS IT SHOULD!) a env variable?
[00:54] <twb> Like readline
[00:54] <zul> Daviey: not yet
[00:54] <chaos_zero> the powershell does not seem suited for my needs
[00:55] <Daviey> zul: if we could get it working for EoD, we'd have a sunny day!
[00:55] <zul> Daviey: people have already been complaining about the bash completion anyways
[00:55] <zul> Daviey: heh im already eod
[00:55] <twb> Daviey: explosive ordnance disposal?
[00:55] <twb> zul: bash completion is now like 200% more awesome in sid
[00:56] <zul> twb: thats nice it doesnt help the problem we are working on :)
[00:56] <twb> bash-completion (1:1.90-1) experimental; urgency=low * bash-completion 2 preview: dynamic loading of completions
[00:57] <twb> zul: sorry I guess I should read what you guys write as well as my own lines ;-)
[00:58] <Daviey> zul: https://bugs.launchpad.net/nova/+bug/932468
[00:58] <Daviey> twb: feels like it.
[00:58] <Daviey> zul: geez, what is the time?
[00:59] <Daviey> zul: bash_completion doesn't interest me for automatic shell script build tools :)
[01:03] <twb> Daviey: all the cobbler/nova/cloud-y stuff goes way over my head
[01:04] <SpamapS> twb: like, libreadline, or readline-like behavior?
[01:05] <twb> SpamapS: I don't remember
[01:05] <SpamapS> twb: btw, whats the 200% more awesome bash completion thing?
[01:05] <twb> SpamapS: 11:56 <twb> bash-completion (1:1.90-1) experimental; urgency=low * bash-completion 2 preview: dynamic loading of completions
[01:05] <SpamapS> twb: doesn't have to parse all 50,000 lines of it at login?
[01:05] <twb> Right.
[01:05] <SpamapS> twb: nice
[01:05] <twb> Although in my testing it still takes about two seconds to load it
[01:11] <Daviey> twb: same here :)
[01:12] <SpamapS> Ok, with the latest concerns over PHP 5.4.0rc7's signal handling.. I think we have to just consider shipping 5.3.10 .. :(
[01:16] <mdeslaur> SpamapS: whatever you decide to ship, please make sure suhosin is enabled
[01:16] <SpamapS> mdeslaur: definitely it will be
[01:16] <mdeslaur> SpamapS: awesome, thanks! :)
[01:16] <SpamapS> mdeslaur: I really want to ship 5.4 .. but it seems like their quality is just still too poor to ship .0's
[01:17]  * twb struggles not to make more snarky remarks about PHP
[01:17] <kklimonda> SpamapS: oh? we are not following debian on disabling it?
[01:17] <SpamapS> twb: PHP is the Bernie Madoff of the language world. Eventually the pyramid will collapse.
[01:18] <SpamapS> kklimonda: no
[01:18] <twb> Madoff is some Ponzi scheme?
[01:18] <SpamapS> kklimonda: quite the opposite.. I intend to put some effort into getting it merged into upstream next cycle (presumably for a "5.5" or "6.0" ..
[01:19] <twb> Yep, I guessed right.
[01:20] <kklimonda> SpamapS: that's a great goal
[01:20] <SpamapS> twb: I'm not advocating that you watch TV.. but perhaps read news websites? ;)
[01:21] <twb> Like I care about how fucked up the .us gets
[01:21] <twb> As long as I keep giving you all my uranium and coal and such, you won't bomb me
[01:23] <kklimonda> SpamapS: are there any plans to ship two flavours of php: one with suhosin enabled, and one without it?
[01:23] <SpamapS> kklimonda: not yet no
[01:24] <SpamapS> kklimonda: this decision, made in Debian, is pretty recent. We haven't really had time to discuss it in Ubuntu, so we'll stick with the way it is now.
[01:24] <Daviey> adam_g / zul: Seen, http://pb.daviey.com/WON1/ ?
[01:25] <zul> thats a new one
[01:25] <SpamapS> wow.. it takes about 1G of space in /tmp to bzr merge-upstream on mysql-5.5's tarball
[01:27] <adam_g> Daviey: is that from a jenkins job output or manual run?
[01:29] <kklimonda> tjaalton: why are you versioning libpki-*-java files?
[01:32] <Daviey> adam_g: jenkins output, bug 932480
[01:34] <SpamapS> kklimonda: I am inviting members of the Debian php team to join us in Oakland to talk about Suhosin and see if they're also interested in helping push it upstream.
[01:34] <Daviey> adam_g / zul: We are back to Green on the master and those exposed through the jenkins mirror instance \o/
[01:35] <twb> SpamapS: oh right no wonder you like swap -- you use bzr :P
[01:36] <twb> SpamapS: I killed my system three times in a row trying to bzr clone Emacs' repo with "only" 1GB of RAM
[01:36] <twb> (Of course it doesn't OOM until it has finished dl'ing all the patches, which takes about three hours)
[01:36] <adam_g> Daviey: i suppose its possible two different build slaves are attempting to call reprepro?
[01:37] <Daviey> adam_g: that is my theory on the bug
[01:37] <SpamapS> twb: it takes a lot of memory to have a full graph of history. :)
[01:37] <twb> SpamapS: well, git checks out the same repo in half an hour without OOMing
[01:38] <twb> SpamapS: AFAICT it was because bzr tried to build the working tree in memory instead of as files
[01:38] <SpamapS> twb: indeed, I'd expect that git does most things bzr can do faster. :)
[01:38] <Daviey> adam_g: i suspect a for loop, with sleep ${i}s will solve it TBH.
[01:38] <adam_g> Daviey: yeah...
[01:39] <SpamapS> twb: and 99.9% of the time, just as correct. The 0.1% that git glosses over as "not the changesets you're looking for" costs *a lot* of performance and grief in bzr. :)
[01:39] <twb> Damn, looks like I never cached my bzr rant back when I ran into that problem
[01:39] <Daviey> zul: i'm not proud of myself, http://bazaar.launchpad.net/~ubuntu-server-dev/python-novaclient/essex/revision/23
[01:41]  * Daviey EoD's.. nn all
[01:42] <adam_g> Daviey: there were two horizon build jobs timestamped 8:00:29, one passed one failed. g'night
[01:43]  * adam_g EOD
[01:46] <Daviey> jamespage: looks like, https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/ARCH=i386,REGION=us-east-1,STORAGE=instance-store,TEST=simple-user-data,label=ubuntu-server-ec2-testing/13/console is a harness failure.
[01:48] <kklimonda> tjaalton: ah, it seems to be done by javahelper
[01:54] <Muska> hi I'm having a problem with Ubuntu 11.10 detecting my sata drives on install.  I normally use ata_piix but I can't even load it using modprobe in a virtual terminal
[01:54] <ruben23> guys i have installed iptables on ubuntu server, where do i find the configuration file where i can write rules..any idea guys
[01:54] <Muska> I extracted the initrd image and didn't even see any modules in there for sata
[02:13] <Muska> ah ha problem solved.  fakeraid information was still on the drives causing them to be detected incorrectly *sigh*
[02:15] <twb> ruben23: for Ubuntu you can use ufw for a simple interface.
[02:15] <twb> ruben23: if you want direct management of the ruleset please discuss that on #netfilter.
[02:15] <twb> ruben23: by default Ubuntu and Debian have no raw ruleset file (cf. RHEL's /etc/sysconfig/networking/firewall).
[02:16] <twb> ruben23: the iptables-persistent package provides one such place, but you can also simply do it yourself, or as I said use a wrapper like ufw (or shorewall, or ...)
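The two options twb lays out, as commands; the rules shown are examples only, and the iptables-persistent file location varies between package versions:

```shell
# The ufw route (simple interface over iptables):
sudo ufw allow OpenSSH      # or: sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw enable
sudo ufw status verbose

# Or, with the iptables-persistent package, save the current ruleset so
# it is restored at boot:
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
```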
[02:32] <twb> Re bash-completion, I just found a gotcha -- you have to purge it before dpkg -i'ing the sid version, or it gets confused and keeps all the bogus /etc/bash_completion.d/* conffiles
[02:35] <linocisco> hi all
[02:35] <linocisco> i have a internet device like wifi router
[02:36] <linocisco> i dont know its console IP, how can I find it, and using which ubuntu tool? I dont want to reset it, so as not to lose the settings inside
[02:38] <twb> linocisco: we don't provide support for appliances; contact the vendor.
[02:38] <linocisco> twb, I am thinking if there are any smart tool like nmap
[03:01] <sp4z> Hi i am trying to browse to a web page that has ssl enabled but its not connecting - does anybody know what packages are required to make that work? (i have ubuntu-server base with xfce4 on top using firefox)
[03:02] <twb> sp4z: w3m and ssl-cert
[03:02] <sp4z> tyvm
[03:02] <twb> For GUI support try #ubuntu or #xubuntu
[03:15] <sp4z> twb, are there any more? that hasn't done it.. its not a GUI problem as I can access https pages from my xubuntu box its only this ubuntu server box that has problems
[03:19] <qman___> accessing web pages is a GUI thing, therefore a GUI problem
[03:19] <qman___> try wget or links/lynx, should narrow your problem
[03:20] <sp4z> rgr
[03:20] <twb> sp4z: "w3m https://en.wikipedia.org/" should work fine with just those two packages.
[03:21] <qman___> heh, never used that before, way better than links
[03:22] <twb> qman___: w3m is the default on debian and ubuntu
[03:22] <twb> qman___: if you're in an xterm or fbcon it also supports inline images (install w3m-img)
[03:22] <qman___> cool, I'm impressed with the quality of the mouse support
[03:23] <qman___> page layouts and colors are pretty good too
[03:23] <sp4z> nah that doesn't work :S
[03:23] <twb> sp4z: then you've done something wrong.
[03:23] <sp4z> i'll reboot brb
[03:23] <twb> Sigh.  Kids these days...
[03:23] <sp4z> ??
[03:30] <sp4z> zzz
[03:30] <sp4z> i need to punch myself some times - jeeze that was stupid
[03:32] <sp4z> my iptables were too strict on outgoing traffic
[04:35] <twb> Sigh.  This server has been sitting on my bench for so long, I've forgotten what the login credentials are for it
[04:36] <twb> And init=/bin/sh seems to make it hang after the pivot_root, and because it's "enterprise" hardware it takes five minutes to reboot it
[04:37] <twb> And the fucking grub bullshit is in place so I have to hit shift/alt at EXACTLY the right microsecond or it skips the grub prompt :-////
[04:38] <cloakable> can it boot from usb/optical?
[04:41] <twb> Probably but then I have to mess about finding something that will boot and can talk the same version of md and lvm, &c &c
[04:41] <twb> I shouldn't be obliged to do that simply to save 1s boot time on stupid ubuntu desktops, which is what the grub change was about
[04:44] <twb> OK, break worked, so now I can deal with the ramdisk
[04:45] <twb> WTF, mdadm isn't in the ramdisk, and it's reporting different size disks.  Maybe this isn't even my machine...?
[04:45] <twb> No, it definitely claims to be my machine
[04:45] <twb> Looks like it's set up with two hardware raid arrays or something, graah
[04:49] <z3r0n0id> ok so i just installed ubuntu 10.10 server; i set up ip, netmask & default gw. why can i ping the box but not ping anything from the box?
[05:02] <radsouthern> hi guys I changed the video card and now my gui is not working. Any suggestions?
[05:03] <z3r0n0id> radsouthern: do you have tty1?
[05:03] <radsouthern> 1 sec
[05:04] <twb> It had six disks, but only four of them had running lights, because the other two were SATA not SAS.  Sigh, sigh and thrice sigh.
[05:04] <radsouthern> k yes i just logged into it
[05:05] <radsouthern> alt f1
[05:05] <radsouthern> right
[05:05] <radsouthern> ?
[05:05] <z3r0n0id> ctrl + alt + # to switch between them, 7 is gui
[05:06] <z3r0n0id> sorry F7
[05:06] <radsouthern> k ill try that I had a gforce 5200 and i never culd get a reslutin over 640x480
[05:07] <radsouthern> could
[05:07] <radsouthern> there i fixed my o
[05:07] <radsouthern> lol
[05:07] <radsouthern> resolution
[05:07] <z3r0n0id> radsouthern: it works?
[05:08] <radsouthern> im using the built-in graphics on the motherboard now
[05:08] <radsouthern> thats what im trying to fix
[05:08] <radsouthern> hold up ill see if it works
[05:09] <radsouthern> no it didn't work bud
[05:11] <radsouthern> maybe i need to reconfigure x
[05:11] <radsouthern> what yah think?
[05:12] <radsouthern> shoot
[05:12] <radsouthern> i may need to remove them drivers from that 5200
[05:12] <radsouthern> the 5200 is a better card
[05:13] <radsouthern> but i never could configure the xorg file
[05:13] <z3r0n0id> what card did you install?
[05:13] <radsouthern> every time i tried to put some modes in the config it would boot to a black screen
[05:14] <radsouthern> nvidia 5200 fx
[05:14] <radsouthern> the onboard one sux
[05:14] <radsouthern> but i can at least get a resolution over 640x480
[05:15] <z3r0n0id> read this it might help
[05:15] <z3r0n0id> http://crunchbanglinux.org/forums/topic/7409/howto-nvidia-geforce-fx-5200/
[05:15] <radsouthern> i have been reading about this for about a year
[05:15] <radsouthern> ill look at it
[05:15] <radsouthern> crunchbang screwed it up so bad i had to reinstall once
[05:16] <radsouthern> i was doing it step by step
[05:16] <radsouthern> then later in the forum it said well do this hahahha
[05:16] <radsouthern> i had already hosed it
[05:17] <radsouthern> another thing I don't think i ever turned the onboard one off
[05:18] <z3r0n0id> radsouthern: sorry i cant help...
[05:18] <radsouthern> what do you do z3r0n0id
[05:19] <radsouthern> what kind of servers do you guys run in here
[05:19] <radsouthern> I'm hosting a lamp
[05:22] <z3r0n0id> radsouthern: im trying to get my server up and working
[05:23] <radsouthern> my amd box is funny turned
[05:24] <radsouthern> i have had a lot of probs with it.
[05:24] <radsouthern> i may scrap it
[05:26] <twb> When creating an LV, how do you tell it which PV to prefer?
[05:26] <twb> Ah, after the VG
[05:26] <twb> e.g. lvcreate --name LV1 /dev/VG0 /dev/PV1
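Expanding twb's own answer: `lvcreate` takes the VG name followed by optional PV paths, and any PVs listed restrict where the LV's extents are allocated. All names below are illustrative:

```shell
# Check per-PV free space first:
pvs -o pv_name,vg_name,pv_free

# Allocate LV1's extents only from /dev/sdb1 within VG0:
lvcreate --name LV1 --size 10G VG0 /dev/sdb1
```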
[05:27] <z3r0n0id> radsouthern: yea its making me really mad
[05:27] <radsouthern> i have tried different distros on it some won't even run.
[05:29] <rad_> take it easy man hope you get it worked out
[06:22] <Tribaal> ll
[07:38] <tjaalton> kklimonda: versioning? i don't follow
[07:39] <koolhead17> jamespage: ping
[09:00] <jamespage> koolhead17, pong
[09:04] <jamespage> rbasak, please ping me when you start today
[09:08] <Daviey> jamespage: heya
[09:09] <jamespage> Daviey, morning sir!
[09:11] <Daviey> jamespage: did you see my comment last night?
[09:11] <jamespage> Daviey, racey local archive installs in openstack-ubuntu-testing?
[09:12] <Daviey> jamespage: nah, but close :)
[09:12] <Daviey> 01:46 < Daviey> jamespage: looks like,
[09:12] <Daviey> https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/ARCH=i386,REGION=us-east-1,STORAGE=instance-store a harness failure.
[09:12] <Daviey> jamespage: did you see i created a project?
[09:13] <jamespage> Daviey: that was good thinking
[09:13] <jamespage> we were at the point where we needed one
[09:13] <koolhead17> hi Daviey  :)
[09:14] <jamespage> Daviey: hmm - not really - https://jenkins.qa.ubuntu.com/job/oneiric-server-ec2/13/ARCH=i386,REGION=us-east-1,STORAGE=instance-store,TEST=simple-user-data,label=ubuntu-server-ec2-testing/console
[09:14] <jamespage> dpkg failed to configure something in a big way
[09:14] <jamespage> I've not seen that before
[09:15] <jamespage> Daviey, BTW did you see this - https://jenkins.qa.ubuntu.com/view/ec2%20AMI%20Testing/view/Overview/?
[09:26] <melvincv> ?
[09:27] <melvincv> oh, there's a network lag.
[09:28] <Daviey> jamespage: hey! no i missed the overview - that is nice!
[09:29] <Daviey> jamespage: is ec2 really as unreliable as our testing is currently showing?
[09:29] <Daviey> As in, we have at least 1 failure per day :/
[09:29] <jamespage> well...
[09:30] <jamespage> at the moment the precise testing is failing in us-west-2 due to the archive mirror being hosed
[09:30] <Daviey> right
[09:33] <jamespage> we do see odd things like instances never starting
[09:33] <jamespage> and on pre-oneiric we see some udev issues on first boot
[09:33] <jamespage> bear in mind that only one ami test has to fail for the entire test to be marked red
[09:34] <jamespage> on the dailies that is 1/28 AMI's
[09:34] <jamespage> TBH utlemming and smoser are closer to the actual issues...
[09:35] <Daviey> jamespage: right!  Thanks.
[09:38] <RoyK> Daviey: sounds a bit strange to me that amazon would keep on making money if it was as bad as you picture it
[09:38] <koolhead17> RoyK: you need to read their magical SLA TBH :)
[09:39] <jamespage> RoyK, we launch a large number of instances over the period of a month so we are bound to see edge/race conditions more frequently
[09:40] <jamespage> ~1700 pcm
[09:40] <Daviey> RoyK: exactly.
[09:40] <RoyK> pcm?
[09:40] <Daviey> per calendar month
[09:41] <RoyK> k
[09:42] <RoyK> Daviey: what do you use these for?
[09:42] <RoyK> HPC?
[09:42]  * koolhead17 found his juju instances coming up more easily using LXC than AWS
[09:43] <jamespage> RoyK, thats just for validating that the official Ubuntu AMI's actually work :-)
[09:44] <jamespage> none run for more than a few minutes
[09:44] <RoyK> ok
[09:47] <Daviey> koolhead17: you are the first to say that :)
[09:47] <matti> :>
[09:48] <koolhead17> Daviey: i tried all their zones, SpamapS was helping me and i ended up using LXC
[09:48] <koolhead17> all my juju related work i am doing with LXC only :P
[09:48] <Daviey> heh
[09:49] <TeTeT> koolhead17: you do? any advice on setting it up? Tried it a while ago and couldn't launch instances at all
[09:50] <koolhead17> TeTeT: i have a link for you then :d hold on :D
[09:51] <koolhead17> TeTeT: askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage  the magic :)
[09:51] <koolhead17> but try it on oneiric/precise
[09:54] <rbasak> jamespage: pong
[09:56] <koolhead17> jamespage: seems like i will have to try precise as you suggested! :D
[09:56] <TeTeT> koolhead17: thanks, bookmarked. will resurrect my kvm instance for it and check it out
[09:58] <koolhead17> TeTeT: keep me updated in case you're still stuck!! :D
[09:59] <jamespage> rbasak, good morning!
[10:00] <rbasak> hey
[10:00] <jamespage> rbasak, so how is openmpi looking today?
[10:00] <jamespage> rbasak, BTW I think this is the issue you are seeing - http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=658600
[10:01] <rbasak> Yeah, I spotted that - thanks
[10:01] <rbasak> (when you linked the bug)
[10:01] <rbasak> My last attempt was http://paste.ubuntu.com/842824/
[10:02] <rbasak> I've been using yorick as it's causing a clear segfault. I suspect the boost build is doing the same thing but it's a bit more hidden.
[10:02] <rbasak> What's odd is that I'm running the same command a second time and it isn't segfaulting
[10:02] <jamespage> rbasak, anything on the upstream mailing lists/bug tracker about this sort of behaviour?
[10:02] <rbasak> here's the script that generates it: http://paste.ubuntu.com/842825/
[10:03] <rbasak> I've not looked
[10:05] <jamespage> rbasak, fakeroot mpic++ works quite well as well
[10:05] <Daviey> jamespage: is there a reason we didn't create a horizon precommit task?
[10:05] <Daviey> 'stable'
[10:05] <jamespage> nope
[10:05] <jamespage> did not know it was in scope for pre-commit testing
[10:06] <jamespage> thats why :-)
[10:06] <Daviey> jamespage: well, i don't think it was - but we have jobs inbound, and would be good to take the donkey work out ;)
[10:06] <Daviey> ie, https://review.openstack.org/3897
[10:06] <jamespage> OK
[10:07] <rbasak> jamespage: I didn't notice that until the debian bug this morning. If I can reproduce that it'll be easier to track down. But I'm also confused as to why my last test couldn't repeat the segfault.
[10:07] <jamespage> I'll add it to the configuration
[10:07] <rbasak> I thought the segfault was deterministic
[10:08] <jamespage> rbasak, it appears to be in something called opal_wrapper
[10:08] <jamespage> just grabbing a crash dump now
[10:09] <Smozius> Hey guys, I am running Ubuntu server in ESXi and I extended the HDD and rebooted but it still lists the original size on install, but fdisk shows the new size, how can I refresh it to make use of the extra space?
[10:10] <jamespage> rbasak, bug 932628
[10:10] <Smozius> Image of what i'm talking about - http://i.imgur.com/XDnCB.jpg - it shows the usage of  /, which is 20GB, but /dev/sda is 42.9GB which is /
[10:10] <jamespage> Smozius, whats the format of your filesystem?
[10:10] <Smozius> ext4 i believe
[10:11] <jamespage> Smozius, resize2fs would be your friend in this case
[10:11] <Smozius> Can that be used in a production environment?
[10:12] <jamespage> Smozius, it does the resize online
[10:12] <Smozius> cool
[10:13] <Smozius> so you can specify any size....what happens if the size doesnt match what the HDD is....
[10:13] <Smozius> or partition is*
[10:14] <jamespage> ah - one second - lemme look at your screenshot
[10:15] <jamespage> Smozius, sorry - I assumed that you were using LVM
[10:15]  * jamespage thinks
[10:15] <Smozius> Think I should be using LVMs?
[10:15] <Smozius> I've never used them before...
[10:17] <jamespage> Smozius, I normally do on servers as it provides a bit more flexibility and you can span volumes over multiple disks
[10:18] <jamespage> means you never resize an underlying device - you just add a new one and extend...
[10:18] <Smozius> Yeah, the resize2fs isn't working out so well
[10:19] <Smozius> its not detecting the additional sectors
[10:19] <jamespage> Smozius, yes - that would be expected
[10:19] <jamespage> the filesystem is limited by the size of the partition it resides on
[10:19] <Smozius> Right, so LVMs are pretty good with snapshots I heard, is that right?
[10:20] <jamespage> using LVM you can increase the size of the logical volume then resize the filesystem on it
[10:20] <jamespage> Smozius, yeah - that is quite a nice feature - we use root filesystem snapshots in the test lab we have for openstack
[10:21] <jamespage> makes resetting a test machine back to a known good state much quicker!
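The LVM grow and snapshot paths jamespage describes, as commands; this assumes ext4 on a logical volume, and all VG/LV names and sizes are invented. `resize2fs` can grow ext4 while mounted:

```shell
# Grow the logical volume, then grow ext4 to fill it (online):
sudo lvextend -L +20G /dev/VG0/root
sudo resize2fs /dev/VG0/root

# A point-in-time snapshot of the kind used for test-lab resets or
# backups; the snapshot size only needs to hold changes made while
# the snapshot exists:
sudo lvcreate --snapshot --name root-snap --size 5G /dev/VG0/root
sudo mount -o ro /dev/VG0/root-snap /mnt/snap   # back this up, then:
sudo umount /mnt/snap
sudo lvremove -f /dev/VG0/root-snap
```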
[10:21] <Smozius> How about for a production email server?
[10:21] <jamespage> Smozius, depends what you want to use the snapshots for
[10:21] <jamespage> I guess you could create point in time snapshots of email data for backup purposes
[10:21] <Smozius> I want to find an efficient hopefully free solution to do backups
[10:22] <Smozius> thats easy
[10:23] <jamespage> you could try bacula
[10:23] <jamespage> not sure if/how it integrates with email solutions
[10:23] <jamespage> what email software are you using
[10:23] <Smozius> Zimbra
[10:24] <Smozius> Can Bacula backup running root file systems?
[10:24] <RoyK> yes
[10:25] <RoyK> Smozius: anything can, really
[10:25] <linocisco> hi
[10:25] <RoyK> Smozius: but with zimbra, the database must be shut down
[10:25] <Smozius> I had attempted at getting DRBD to network raid '/' but it wasn't having it
[10:25] <RoyK> Smozius: there are various scripts out there to help backing up zimbra, it's not hard, but it's a bit more than just backing up ordinary files
[10:25] <linocisco> i have an ISP which offers internet with the proxy settings in browser. I would like to share it to wifi without requiring wifi users to set proxy and port in browser. how to do?
[10:26] <Smozius> Right.... I've seen them, they are messy =/
[10:26] <RoyK> Smozius: if you can afford to take down zimbra during the backup, that's an easy way (which I use)
[10:26] <Smozius> With Bacula?
[10:26] <RoyK> which?
[10:26] <Smozius> Or by copying the files?
[10:26] <Smozius> The way you use....
[10:26] <RoyK> oh
[10:27] <RoyK> I don't use bacula on this system
[10:27] <RoyK> but
[10:27] <RoyK> it'd be the same thing
[10:27] <RoyK> shutdown zimbra, backup zimbra using *anything*, start it
[10:27] <RoyK> see the bacula pre/post scripts
[10:28] <RoyK> but... gotta go... bbl
[10:29] <Smozius> its just a matter of copying the /opt/ folder right?
[10:29] <Smozius> lino are you getting the internet off of an ethernet cable
[10:29] <Smozius> or your own wifi?
[10:32] <RoyK> Smozius: just copy /opt/zimbra out somewhere
[10:33] <RoyK> Smozius: rsync is nice
[10:33] <Smozius> but how fast can you make that go when /opt/zimbra is over 17GB?
[10:33] <RoyK> Smozius: so, stop zimbra, rsync -a /opt/zimbra /back/me/up/zimbra, start zimbra, let bacula handle the rest
[10:33] <RoyK> Smozius: rsync is rather quick after the initial run
[10:34] <RoyK> and the initial run can be done while zimbra is still running
[10:34] <Smozius> So then why the need for shutting it down?
[10:34] <RoyK> you can even do rsync -a /opt/zimbra /somewhere ; stop zimbra ; rsync -a /opt/zimbra /somewhere ; start zimbra
[10:35] <RoyK> that way the first will copy all changes like emails etc, and the last rsync will copy the database files correctly
[10:35] <RoyK> overwriting the ones from the first copy
[10:35] <Smozius> Ah, i've never used rsync before, just scp a few times
[10:36]  * RoyK pirate copies his own idea, changing his backup regime, and then sues himself for copyright infringement
[10:36] <Smozius> lol
[10:37] <RoyK> but... gotta go
[10:37] <RoyK> catch you later
[10:37] <Smozius> +(
[10:37] <Smozius> =(
[10:43]  * koolhead17 not feeling good about cobbler
[11:39] <koolhead17> i downloaded precious server image from http://cdimage.ubuntu.com/ubuntu-server/daily/20120214/precise-server-i386.iso
[11:40] <koolhead17> now when am booting it inside virtualbox am getting error "Please use kernel appropriate for your CPU"
[11:40] <koolhead17> all other versions are running without any issue inside my Dropbox
[11:41] <koolhead17> *virtualbox
[11:41] <koolhead17> jamespage: Daviey ^^
[11:47] <Aison> I'm using freeradius WPA peap authentication with my Wifi APs. In past this worked very good, now it stopped working
[11:48] <Aison> freeradius outputs this message:
[11:48] <Aison> [peap] <<< TLS 1.0 Alert [length 0002], fatal unknown_ca   TLS Alert read:fatal:unknown CA     TLS_accept: failed in SSLv3 read client certificate A rlm_eap: SSL error error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca SSL: SSL_read failed inside of TLS (-1), TLS session fails.
[11:48] <Aison> does anybody know what that could be?
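[Editor's note: the `unknown_ca` TLS alert means the client does not trust the CA that signed the RADIUS server certificate (commonly because the CA cert expired or was replaced). One way to sanity-check a chain is `openssl verify`; the demo below generates a throwaway self-signed CA so it is runnable anywhere, but on a real system you would point `-CAfile` at the CA your clients trust and verify the server certificate against it.]

```shell
set -e
tmp=$(mktemp -d)

# Throwaway self-signed CA cert (stand-in for the CA that signed the
# freeradius server certificate).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo-ca" \
    -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null

# If this fails against the CA file the supplicants carry, the clients
# will send exactly the "fatal unknown_ca" alert seen in the log.
openssl verify -CAfile "$tmp/ca.pem" "$tmp/ca.pem"

# Also worth checking on a real system: has the CA cert expired?
openssl x509 -in "$tmp/ca.pem" -noout -enddate
```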
[11:52] <koolhead17> got it working, unlike other versions i have to enable PAE/NX
[11:52] <koolhead17> :)
[12:00] <xperia> hi all. i need to install * FlexUnit 4: http://opensource.adobe.com/wiki/display/flexunit/FlexUnit on my ubuntu server to test some stuff out. does anybody know how to do this best ?
[12:09] <Tixos> hey, global DNS propagation can take 1 month?
[12:10] <koolhead17> Tixos: few hours
[12:10] <Tixos> 'can take 1 month'
[12:10] <Tixos> or longer
[12:11] <Tixos> as far as im aware max TTL can be 1 month?
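[Editor's note: "propagation" is bounded by the record's TTL, since resolvers may serve a cached answer for at most that long after a change. These `dig` queries show the TTL actually being served; they need network access, so this is a sketch only, with `example.com` standing in for the zone in question.]

```shell
# Second column of the answer is the remaining TTL in seconds as seen by
# your resolver's cache:
dig +noall +answer example.com A
# Query an authoritative server directly to see the full configured TTL:
dig +noall +answer @a.iana-servers.net example.com A
```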
[12:23] <chilli0> hello, for some reason my headless server keeps on d/cing from the network the time frame of this changes. could be like 2h - 2 days. anyone know how to figure it out?
[12:27] <jamespage> rbasak, I stuck a backtrace on bug 932628
[12:28] <jamespage> rbasak, segfault occurs when "    if (stat("/dev/ummunotify", &st) == 0) {" is called
[12:29] <rbasak> Interesting. That shouldn't make fakeroot segfault, AFAIK.
[12:29] <jamespage> rbasak, fairly easy to test :-)
[12:30] <rbasak> Yeah I tried that
[12:30] <rbasak> $ fakeroot stat /dev/ummunotify
[12:30] <rbasak> stat: cannot stat `/dev/ummunotify': No such file or directory
[12:30] <rbasak> I'll do it properly in half an hour when I should have a dev environment again :)
[12:30] <rbasak> I wonder if it's caused by something stepping over fakeroot's memory?
[12:34] <jamespage> rbasak, might be
[12:35] <rbasak> IIRC, fakeroot keeps pointers to the real functions persistently
[12:35] <jamespage> I see
[12:36] <rbasak> No it can't be that, it's calling dlsym, and calloc is segfaulting.
[12:36] <rbasak> I think it's heap corruption. Debugging this could be interesting
[12:37] <jamespage> well at least we have a few more pointers and a deterministic test
[12:37] <rbasak> "a few more pointers" I see what you did there :-)
[12:43] <jamespage> rbasak, :-)
[12:47] <Daviey> *groan*
[12:55] <jamespage> rbasak, I think this might be related to http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=531522
[12:57] <jamespage> hmm - almost definitely - the ordering of the checks is incorrect
[12:57] <jamespage> it should check for FAKEROOT first
[12:57] <jamespage> lemme try a patch
[12:58] <rbasak> reading
[13:00] <rbasak> "It seems to me that OpenMPI is at fault for doing crafty things like
[13:00] <rbasak> "stat()" in the __malloc_initialize_hook()."
[13:00] <jamespage> rbasak, 1.5.x introduced that stat BEFORE the check for FAKEROOT
[13:01] <jamespage> I've re-ordered the checks - building now
[13:01] <rbasak> Sorry jamespage, I'm a little behind you!
[13:02]  * jamespage slows down
[13:02] <jamespage> so if you read the code around line 746
[13:02] <jamespage> (about 20 lines down)
[13:02] <jamespage> it discusses running under FAKEROOT and makes allowances for it
[13:03] <jamespage> this was OK in 1.4.x but 1.5.x introduced the stat on /dev/XX prior to the check for FAKEROOT
[13:03] <jamespage> reading the comments this won't work (which is exactly what we see)
[13:05] <jamespage> basically it's an upstream regression of this fix - https://svn.open-mpi.org/trac/ompi/changeset/21493
[13:06] <rbasak> OK, I agree
[13:06] <jamespage> 4 cores + SSD burning the build now :-)
[13:10] <jamespage> rbasak, that appears to work!
[13:10] <jamespage> w00t
[13:11] <jamespage> jamespage@hendrix:~/src/precise/openmpi$ fakeroot mpic++
[13:11] <jamespage> g++: fatal error: no input files
[13:11] <jamespage> compilation terminated.
[13:11] <chilli0> Hey i'm back, i may have fixed it but not sure till it goes down i guess. but now i want to get my VPN working correctly, this is my error. http://pastebin.com/mfw4zZE7
[13:20] <jamespage> rbasak, w00t yorrick builds!
[13:20] <Daviey> adam_g: When you are alive for the day, any idea why precise-openstack-essex-deploy is failing?
[13:21] <rbasak> jamespage: awesome!
[13:21] <rbasak> jamespage: thanks, you did in a day what would have taken me a week. Probably because I would never have found that debian bug and had to rediscover it all.
[13:21] <jamespage> rbasak, let me just gather my thoughts and I'll push a branch somewhere
[13:22] <jamespage> rbasak, hey - no problem!
[13:24] <jamespage> rbasak, so now that we have something that works we need to make that decision as to whether to transition the archive or have an openmpi1.5 package in universe
[13:25] <rbasak> jamespage: yep - I'm not sure about the process here. Transitioning the archive would seem cleaner - in particular these are universe packages that we have no test stories for, so if we don't, will they ever be transitioned?
[13:25] <jamespage> rbasak, agreed; they either transition or get removed from the archive.
[13:26] <jamespage> this is the disadvantage of doing it before debian
[13:26] <rbasak> I suppose that they'd be transitioned when Debian does their transition, at which point we'd need to have deltas to manage the upgrade
[13:27] <rbasak> I'm in favour of doing the transition then - if there are issues, then they can be fixed as people report them.
[13:27] <rbasak> I think it's unlikely that we'd get much community testing before release anyway.
[13:28] <rbasak> s/much/any/
[13:35] <jamespage> rbasak, OK so we can validate that the packages that need to transition build OK - thats normal
[13:35] <jamespage> plus1 maintenance team might be able to help with that
[13:36] <jamespage> rbasak, I just pushed a branch containing the release from debian experimental plus my fix - its linked to https://bugs.launchpad.net/ubuntu/+source/openmpi/+bug/932628
[13:36] <jamespage> rbasak, going to get some lunch
[13:37] <rbasak> ok
[13:48]  * koolhead17 has precious running     // ..\\
[13:52] <zul> Daviey: can you cowboy your fix for python-novaclient into the openstack-ubuntu-testing branch so we can catch anything else that your patch might fix/break please
[13:53] <Daviey> zul: hang on, have i missed the workflow?
[13:53] <zul> Daviey: i dont think we have a set workflow really
[13:54] <zul> Daviey: but if we can get instant gratification...why not :)
[13:54] <Daviey> zul: i should never need to touch ~openstack-ubuntu-testing, should i?
[13:54] <Daviey> (or you, or anyone)
[13:55] <zul> Daviey: well if you to test your patch right away and you are not a member of ubuntu-server-dev *cough* adam_g *cough* then you have to
[13:58] <Daviey> zul: no, no, no
[13:58] <Daviey> zul: adam_g needs his changes sponsored in, until he is going to apply for upload access.
[13:59] <zul> Daviey: right but i dont want him blockin on me
[13:59] <Daviey> zul: or any other ~ubuntu-server-dev or core-dev.
[13:59] <zul> Daviey: right
[14:00] <Daviey> zul: please, let's keep ~openstack-ubuntu-testing gated.
[14:00] <zul> Daviey: no arguement from me
[14:00] <zul> Daviey: can you add the patch to the ubuntu-server-dev branches then?
[14:01] <Daviey> zul: why not?
[14:01] <Daviey> zul: Hmm
[14:01] <Daviey> zul: Why not just wait for it to land?
[14:01] <zul> Daviey: so we can shake out any other bugs?
[14:01] <Daviey> it's not a blocker, is it?
[14:02] <zul> it isnt but we can be a good downstream if we can shake out any other bugs we find
[14:02] <Daviey> zul: isn't that what future precommit is for?
[14:04] <koolhead17> woahh, next_server and server is also automated in installation for cobbler in precious
[14:05] <zul> Daviey:  sure but sometimes its good to be one step ahead and pro-active
[14:10] <Daviey> jamespage / adam_g / zul: I just heard the plan to refactor tarball.sh.. What is this about?
[14:11] <jamespage> Daviey: it's becoming too rigid and complex in its current form
[14:11] <jamespage> I wanted to re-use the function for the pre-commit testing for stable but as it stands that's asking a bit too much of it
[14:11] <Daviey> jamespage: right, but is python the right fit for this?
[14:12] <jamespage> Daviey: its as good as any - especially as the meta data is not that trivial
[14:13] <Daviey> jamespage: ISTM that we'll be using it to either wrap shell, or added complexity of python libraries when the shell form is easier?
[14:15] <jamespage> Daviey: to be honest I've not looked at it in that much detail - zul and I discuss how the scripts should work, not how they would be implemented
[14:16] <jamespage> and what data we wanted in the meta-data to support both the build process and configuring jobs in jenkins
[14:17] <Daviey> jamespage: ah, zul said it was being done in python.
[14:17] <jamespage> a lot of what we do is already written in python in the lab - the deployer and the jenkins configurator, for example
[14:17] <zul> and since the jenkins uses config.yaml to configure the jenkins job, so in  my mind it made sense to use yaml and python
[14:18] <Daviey> zul: *THAT* is justification :)
[14:19]  * zul just finished his morning caffine fix :P
[14:19] <Daviey> zul: heh, although - i'd rather we get tempest in use first .. before undertaking this.
[14:20] <jamespage> Daviey: fine but we can't deliver pre-commit stable testing until we do the refactoring
[14:20] <Daviey> jamespage: really?
[14:21] <jamespage> thats one of the primary drivers
[14:21] <jamespage> one/is
[14:21] <jamespage> I started to update tarball.sh for pre-commit testing
[14:21] <jamespage> and the if/case clauses for determining what base version numbers should look like was looking stupid
[14:21] <jamespage> so I stopped
[14:22] <Daviey> jamespage: perhaps i'm missing the complexity, but can't we just fork tarball.sh until the new thing is ready?
[14:23] <jamespage> Daviey: I think you are over estimating the complexity of what we are proposing in terms of refactoring
[14:23] <jamespage> I'd rather spend time doing it so its a bit more maintainable than trying to hack something together based on a fork of tarball.sh
[14:24] <jamespage> although my time/attention is elsewhere at the moment anyway
[14:24] <rbasak> zul: do we need a test for that in CI? That get_console_output returns something?
[14:24] <zul> rbasak: yeah
[14:24] <Daviey> rbasak: that belongs in tempest, i'd say.
[14:25] <zul> Daviey: a lot of stuff belongs in tempest ;)
[14:25] <rbasak> How do we need to track stuff that needs to go in there? A bug task?
[14:25] <Daviey> zul: and tempest belongs in the CI :)
[14:26] <zul> Daviey: ack
[14:27] <Daviey> rbasak: I think it's not something that is going to fall off the plate :)
[14:37] <zul> jamespage: is there anything blocking us from getting tempest working on the openstack-ci stuff?
[14:38] <jamespage> zul: not sure TBH - adam_g would be best positioned to answer that question
[14:38] <jamespage> sorry
[14:39] <zul> jamespage: no worries just thinking out loud
[14:39] <jamespage> zul: ack
[14:47] <Daviey> Hey, anyone want to help validate 10.04.4 point release?
[14:47] <Daviey> one test outstanding!
[14:51] <zul> people still use lucid? ;)
[15:17] <brendan0powers> jamespage: Hi, I decided to just make a snapshot of the repository, and then point the watch file to the directory where I store it
[15:18] <brendan0powers> For the next release, I will probably generate a release tarball, and move the packaging to its own repository
[15:19] <rbasak> Is there any reason why libguestfs/guestfish isn't packaged in Ubuntu? Or is it just that I can't find it? I thought nova did something like what libguestfs does?
[15:20] <zul> it is
[15:20] <zul> rbasak: apt-cache search libguestfs
[15:21] <rbasak> Ah. I'm still on Oneiric. Thanks.
[15:21] <rbasak> I feel an upgrade is imminent :)
[15:22]  * zul is still running oneiric on desktop
[15:24] <jamespage> zul, rbasak: chickens
[15:25]  * zul has been called out
[15:25] <zul> jamespage: i run it on the server though
[15:25] <jamespage> brendan0powers, so that process needs to work for anyone - not just you
[15:25] <rbasak> I haven't got round to it yet :)
[15:26] <brendan0powers> jamespage: The snapshots are stored at http://www.resara.org/release-snapshots/, and I updated the debian/watch file
[15:26] <jamespage> brendan0powers, right - so that can be parsed by uscan using a watch file
[15:27] <brendan0powers> jamespage: right, although it doesn't download the archive now unless I force it to
[15:27] <jamespage> if you add a get-orig-source target to debian rules then you can make it download and rename the tar.gz to the correct name
[15:27] <jamespage> brendan0powers, it would be helpful if I could see the watch file now
[15:27] <brendan0powers> Is it OK to just run uscan --force-download from the rules file
[15:29] <brendan0powers> jamespage: https://bitbucket.org/resara/resara-server/src/06a5b06de1a2/rds/packages/precise/debian/watch
[15:30] <brendan0powers> jamespage: or you can clone the repo hg clone https://bitbucket.org/resara/resara-server and look in the rds/packages/precise folder
[15:31] <jamespage> get-orig-source:
[15:31] <jamespage>     uscan --download-version $(DEB_UPSTREAM_VERSION) --force-download --rename
[15:31] <brendan0powers> ok, so is get-orig-source called by the build system automatically?
[15:43] <brendan0powers> jamespage: where does the DEB_UPSTREAM_VERSION variable come from?
[15:43] <jamespage> brendan0powers, DEB_UPSTREAM_VERSION=$(shell dpkg-parsechangelog | sed -rne 's,^Version: ([^+]+).*,\1,p')
[15:43] <jamespage> brendan0powers, some do - depends on how you build it
[15:44] <brendan0powers> Ah, ok, so it's not provided by the build system
[15:54] <jamespage> brendan0powers, ./debian/rules get-orig-source can of course just be run
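[Editor's note: pulling jamespage's pieces together, the debian/rules fragment he is describing might look like this. The version-extraction line is the one he pasted verbatim; `get-orig-source` as an optional, manually invoked target is the Debian Policy convention, which matches his "some do - depends on how you build it" and "can of course just be run".]

```make
# debian/rules (fragment) - fetch and rename the upstream tarball
DEB_UPSTREAM_VERSION = $(shell dpkg-parsechangelog | sed -rne 's,^Version: ([^+]+).*,\1,p')

get-orig-source:
	uscan --download-version $(DEB_UPSTREAM_VERSION) --force-download --rename
```

Run it by hand with `./debian/rules get-orig-source`; `--rename` gives the downloaded file the `<package>_<version>.orig.tar.gz` name dpkg expects.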
[15:55] <brendan0powers> jamespage: right, it's almost working, I need to fix up the snapshot a bit
[16:07] <brendan0powers> jamespage: Ok, so now I have a debian/ directory with an watch file, and a way to get the original source
[16:07] <jamespage> brendan0powers, great!
[16:07] <brendan0powers> But, when you clone the repository, the precise/ folder only contains the debian/ directory, and nothing else
[16:08] <brendan0powers> jamespage: should the source also be in that directory, or should I provide a way to extract the orig.tar.gz into the precise/ folder?
[16:08] <Daviey> jamespage: not sure i've seen, https://jenkins.qa.ubuntu.com/view/Precise%20OpenStack%20Testing/job/precise-openstack-essex-deploy/18036/console , before?
[16:08] <jamespage> brendan0powers, it's not required
[16:09] <jamespage> i.e. its fine to have a packaging only branch if that makes sense
[16:09] <brendan0powers> Ok, I get a bunch of warnings about ignored deleted files when I build the source package
[16:09] <brendan0powers> The source package builds properly though
[16:09] <brendan0powers> I assume that's a quilt thing?
[16:11] <jamespage> hrm - hard to say without seeing it
[16:12] <brendan0powers> Ok, I'm about to update the bug report with my changes
[16:16] <jamespage> brendan0powers, OK - I'm out for the next day or so - I'll review on Friday if no-one else picks up in the interim
[16:17] <brendan0powers> Ok, thanks
[16:24] <brendan0powers> jamespage: Ok, all the changes to the repository have been pushed, and I just updated the bug report
[16:25] <brendan0powers> jamespage: Thanks for all your help so far
[16:57] <Daviey> utlemming: not in #ubuntu-devel?
[17:02] <SpamapS> zul: have the tests run w/ RabbitMQ 2.7.1 yet?
[17:02] <SpamapS> zul: I want to update erlang as well.
[17:02] <zul> SpamapS: yeah they have been running with the openstack-ci the past couple of runs
[17:06] <zul> SpamapS: no problems so far
[17:09] <SpamapS> zul: sweet
[17:10] <SpamapS> hallyn: bump.. I just filed a MIR on ceph, so you should be able to add ceph support to qemu-kvm now
[17:11] <hallyn> SpamapS: filed?  or filled?
[17:11] <SpamapS> hallyn: filed
[17:12] <SpamapS> hallyn: hrm.. does it actually have to be in main for you to link to it?
[17:12] <SpamapS> I just realized that..
[17:12] <hallyn> :)
[17:15] <kklimonda> tjaalton: the comment about versioning for java libraries was my fault - haven't read the package correctly
[17:16] <kklimonda> tjaalton: I've started playing with ipa-server-install and it's failing spectacularly, as expected - done some hacking, and got it to configure ntp and tomcat6 instance, but it's going to take a lot of work to make it work - and some discussion with the upstream on how to patch distribution-specific things in a sane way
[17:21] <smoser> rbasak, https://help.ubuntu.com/community/UEC/Images#Ubuntu_Cloud_Guest_images_on_Local_Hypervisor_.28Maverick.29
[17:21] <rbasak> thanks!
[17:21] <smoser> you should be able to read that and provide cloud-init with the data you want.
[17:21] <smoser> wait
[17:21] <smoser> not that
[17:22] <smoser> https://help.ubuntu.com/community/UEC/Images#Ubuntu_Cloud_Guest_images_on_Local_Hypervisor_Natty_onward
[17:22] <smoser> the code that makes the iso is in that 'make-iso' and 'user-data' is the user-data that you want to inject.
[17:24] <rbasak> thanks
[17:29] <SpamapS> Ooo, mysql cluster 7.2 released
[17:30] <SpamapS> Perhaps we can resurrect it for precise
[17:30]  * SpamapS goes off to look for its public bug tracker.. ... 
[17:33] <jeffrey_> hi, has anyone installed ubuntu server on a computer with a AR8152 chipset NIC?
[17:48] <adam_g> Daviey: half the cobbler profiles were set to netboot disabled, so juju was only getting half the machines it needed. ive got no idea how that would have happened, reenabled them. ill keep an eye on it
[17:52] <Daviey> adam_g: golly.
[17:52] <Daviey> thanks.
[17:53] <Daviey> adam_g: How did you debug that?
[17:55] <adam_g> Daviey: https://jenkins.qa.ubuntu.com/view/Precise%20OpenStack%20Testing/job/precise-openstack-essex-deploy/18036/console  toward the end of the juju debug, youll see some services have 'Machine: Pending' which means the provider never returned a machine, those should all be populated with host names
[17:56] <Daviey> adam_g: right, but how did you determine it was a cobbler issue - rather than just 'machine gone away | failed to bootstrap'
[17:58] <adam_g> Daviey: it doesnt bootstrap, theres always a bootstrap node up. if that 'gone away', there would have been an error and little debug output. machines are shutdown and booted every deploy, so the others shouldn't go away unexpectedly
[17:59] <koolhead17> adam_g: cobbler system?
[18:01] <adam_g> koolhead17: yeah
[18:01] <koolhead17> adam_g: hope i will get it working in precious tomorrow. have wasted 3 days already :(
[18:01] <stgraber> hallyn: you were up pretty late last night ;) (just saw the comments in the qemu bug)
[18:03]  * koolhead17 looks at Daviey conversation at #openstack :P
[18:03] <zul> adam_g: if you missed my email new keystone snapshot in my keystonelight ppa
[18:04] <hallyn> yeah.  let's hope upstream takes the bisect and runs with it
[18:05] <Daviey> adam_g: right, sorry - overloaded the term
[18:17] <ivoks> just a heads up; --public_interface should be 'bridge' interface, not eth0|eth1
[18:17] <ivoks> lots of (official) docs seems to have an error there
[18:23] <smoser> utlemming, http://paste.ubuntu.com/843365/ is new i think (at least to me)
[18:24] <smoser> 'metrics' xml in the metadata service
[18:25] <utlemming> smoser: I believe so.
[18:25] <utlemming> smoser: Also, it looks like the m1.large got a bit snappier
[18:25] <smoser> its also interesting to me that they've just shoved it into an older api date
[18:25] <smoser>   http://instance-data/
[18:26] <smoser> shows latest field in there is 2011-05-01
[18:26] <smoser> i'm *certain* i would have seen that metric stuff since then.
[18:31] <adam_g> zul: note - python-greenlet python-eventlet python-passlib are needed for that KSL package, just to get keystone-all to attempt to start up (still with errors, of course :)
[18:32] <zul> adam_g: lovely :)
[18:34] <adam_g> zul: im going to work on that today and put all the packaging work in a branch somewhere in lp:~openstack-ubuntu-testing i suggest we move the testing PPA there, and trigger per-commit builds like we're doing with everything else
[18:35] <zul> adam_g: agreed
[18:49] <smoser> rbasak, you were right. cloud-init is buggy with regard to string 'template' for manage_etc_hosts
[18:49] <rbasak> smoser: thanks for looking!
[18:50] <rbasak> smoser: I had tried 'yes' as well as I didn't quite follow which semantics I needed. It seemed to me that both would work in my case (no further cloning), but neither did.
[18:50] <smoser> True would have worked.
[18:51] <smoser> after next upload, this will do what you want:
[18:51] <smoser>  #cloud-config
[18:51] <smoser>  manage_etc_hosts: template
[18:51] <smoser>  fqdn: superman.brickies.net
[18:51] <smoser> (assuming of course that you want 'superman.brickies.net' as your hostname, which just makes sense)
[19:01]  * Corey waves at jeffrubic and smoser 
[19:01] <jeffrubic> smoser: we'd like to triage: https://bugs.launchpad.net/cloud-init/+bug/927795
[19:01] <jeffrubic> Corey is the debian maintainer of salt
[19:04] <smoser> ah. hey, jeffrubic
[19:04] <smoser> fix-committed.
[19:04] <jeffrubic> we still need to address issue (b)
[19:04] <jeffrubic> we == me
[19:04] <smoser> right.
[19:04] <jeffrubic> and (a) for that matter, but it's easy
[19:05] <smoser> i fixed 'a'
[19:06] <jeffrubic> cool, thanks
[19:07] <jeffrubic> the debian package isn't currently supporting upstart yet, but I've got the script available: https://gist.github.com/1617054
[19:08] <hallyn> zul: i'm goign to be doing a libvirt upload.  got anything to queue up?
[19:09] <zul> hallyn: nothing here
[19:11] <Corey> smoser: What does upstart change in the debian/ubuntu package?
[19:12] <hallyn> zul: oh, if you have a sec, did you have any objections to my proposed change for bug 475327?  (in comment 6)
[19:12] <zul> hallyn: none yet :)
[19:13] <smoser> Corey, nothing really.
[19:14] <smoser> just this:
[19:14] <smoser> $ sudo service ssh start; echo $?
[19:14] <smoser> start: Job is already running: ssh
[19:14] <smoser> 1
[19:14] <hallyn> silly me.  i was thinking that was in libvirt until i went to try and make the change :)
[19:14] <smoser> the typical expectation of a debian package is that if you install it, it starts the service.
[19:14] <hallyn> qemu is gonna have to wait until i figure out how to fix its libc-induced FTBFS
[19:15] <smoser> so your install would have started the 'salt-minion' service presumably
[19:15] <smoser> and then the check_call attempt to start it will exit non-zero
[19:15] <smoser> at least it would if the package ever changed to being upstartified.
[19:15] <jeffrubic> after copying the above script to /etc/init
[19:15] <smoser> that make sense?
[19:16] <jeffrubic> from where is check_call invoked?
[19:19] <smoser> jeffrubic, cloudinit/CloudConfig/cc_salt_minion.py
[19:19] <smoser>      subprocess.check_call(['service', 'salt-minion', 'start'])
[19:19] <smoser> i'm sorry if i'm not being clear.
[19:19] <jeffrubic> ok, wasn't sure it it needed to be in the packaging too
[19:20] <smoser> jeffrubic, well to behave more like a normal debian package, your salt package should ensure that it is running
[19:20] <smoser> after install
[19:20] <smoser> i'm not sure if yours does or not
[19:20] <jeffrubic> Corey's bailiwick
[19:20] <smoser> or if your 'start' script would exit success if already running
[19:21] <jeffrubic> the deb people are pretty strict, so I'd assume the non-upstart stuff is proper
[19:36] <smoser> jeffrubic, i'm not sure if there is really strict convention on that
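[Editor's note: smoser's point, that `check_call(['service', 'salt-minion', 'start'])` fails if upstart reports "Job is already running" with exit status 1, can be reproduced with any failing command. The demo below uses `sys.executable` as a portable stand-in for the `service` invocation; the tolerant handling shown is one possible workaround, not what cloud-init actually does.]

```python
import subprocess
import sys

# Stand-in for `service salt-minion start` when upstart answers
# "start: Job is already running: salt-minion" and exits 1.
already_running = [sys.executable, "-c", "raise SystemExit(1)"]

# check_call raises CalledProcessError on ANY non-zero exit...
try:
    subprocess.check_call(already_running)
    outcome = "started"
except subprocess.CalledProcessError:
    # ...so a caller that only wants "the service is running" has to
    # treat this case as success rather than a failure.
    outcome = "already running (treated as ok)"

print(outcome)
```

This is why an upstartified package that auto-starts the service on install would make the naive `check_call` in cc_salt_minion.py blow up.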
[20:01] <adam_g> SpamapS: anything funky going on ATM with the mysql-server-5.5. packages in precise?
[20:04] <henkjan> adam_g: there is some discussion on the ubuntu-server mailing list to replace mysql-server with mariadb
[20:04] <SpamapS> adam_g: yes
[20:04] <SpamapS> adam_g: seeing crashes?
[20:04] <SpamapS> adam_g: I think I need to switch back to building with gcc 4.5
[20:05] <henkjan> adam_g: but no decision has been made on that afaik
[20:05] <adam_g> SpamapS: no, i can't install it: http://paste.ubuntu.com/843460/
[20:07] <SpamapS> adam_g: looks like the archive just picked up mysql-server-5.5 but not mysql-common
[20:07] <SpamapS> adam_g: its possible thats because the i386 build failed.. which may be where the _all packages come from
[20:08] <SpamapS> adam_g: I'm uploading a version that builds with gcc 4.5 again
[20:09] <adam_g> SpamapS: yea. mysql-common is still at 5.5.17-4ubuntu6
[20:17] <SpamapS> adam_g: 5.5.20-0ubuntu2 uploaded.. it will take a while to build/test.. :-/
[20:24] <adam_g> SpamapS: thanks, ill keep an eye on it
[20:30] <SpamapS> adam_g: specifically you need this one to succeed: https://launchpad.net/ubuntu/+source/mysql-5.5/5.5.20-0ubuntu2/+build/3215214
[20:50] <adam_g> rbasak: around?
[20:51] <Ariel88> Hi
[20:54] <adam_g> zul: ping
[20:54] <zul> adam_g: pong
[20:56] <adam_g> zul: the libvirt console pipe patch we have in precise. how was that generated?
[20:56] <zul> adam_g: its been forward ported from oneiric
[20:56] <adam_g> zul: it seems to be missing some things that explain bug #929780 and bug #932787
[20:56] <zul> adam_g: im in the middle of refreshing it again
[20:56] <adam_g> zul: oh, sweet
[20:57] <adam_g> zul: who ported it?
[20:57] <zul> me
[21:19] <SpamapS> adam_g: amd64 built fine.. i386 should complete shortly
[21:24] <adam_g> SpamapS: thanks
[21:43] <SorlaK> hello everyone
[21:52] <SorlaK> someone can tell how do i "see" the config in the ldap build in  with the 10.04 version
[21:58] <SpamapS> adam_g: built
[21:59] <SpamapS> SorlaK: http://launchpad.net/ubuntu/lucid/+source/openldap
[21:59] <kklimonda> SorlaK: launchpad.net keeps logs of all builds
[21:59] <SpamapS> SorlaK: you should see in there the list of versions available
[21:59] <kklimonda> ah, direct link always better :)
[21:59] <SpamapS> SorlaK: pick the version, then dig down into your architecture, and you'll see the build log
[21:59] <SpamapS> SorlaK: you can also download the source package with 'apt-get source openldap'
[22:01] <adam_g> SpamapS: saw that, thanks
[22:03] <SorlaK> sorry look that i dint explain my self good enought, my bas english is my second leanguage and is rusty
[22:04] <SorlaK> ......ok.......... really rusty
[22:06] <SorlaK> what i mean was, in previous versions of ldap there was the slapd.conf file, from there i was able to see which schemas, modules and overlays were handling the ldap
[22:07] <SorlaK> but since it now uses a dynamic conf i have no idea how i can find this info
[22:09] <kklimonda> SorlaK: it's still in /etc/ldap
[22:10] <kklimonda> SorlaK:  it's just the files stored there are part of the LDAP database and should be edited with ldap tools
[22:10] <kklimonda> SorlaK: but you can still view them, and even edit them by hand if you shut slapd first
[22:11] <SorlaK> using cn=config? you said?
[22:11] <kklimonda> yes
[22:11] <kklimonda> you can read more about it for example here: http://www.zytrax.com/books/ldap/ch6/slapd-config.html
[22:12] <kklimonda> and your ldap browser/editor should be able to access, and let you configure it.
[22:15] <SorlaK> ok thanks i will give a try
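[Editor's note: kklimonda's pointer in command form. These need a local slapd configured with the cn=config backend (the default on 10.04's olcDatabase layout), so they are shown as a sketch rather than something runnable here.]

```shell
# Dump the dynamic configuration (the slapd.conf replacement) over the
# local unix socket, authenticating as the system root via SASL EXTERNAL:
sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config
# Loaded schemas live under cn=schema,cn=config; list just their DNs:
sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=schema,cn=config dn
# Changes go in as LDIF via ldapmodify against the same base, rather
# than by editing the files under /etc/ldap/slapd.d by hand.
```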
[23:05] <rbasak> adam_g: pong
[23:09] <adam_g> rbasak: was wondering about the console fifo patch we have in precise/nova
[23:10] <adam_g> rbasak: but i think its being sorted out
[23:11] <tjaalton> kklimonda: yep
[23:14] <tjaalton> äppä
[23:14] <tjaalton> ä
[23:14] <tjaalton> oops
[23:14] <kklimonda> tjaalton: I did add some fixes to pki-core but I can't commit them after all - my acl gives me only read rights on other projects
[23:15] <tjaalton> kklimonda: so you're not on collab-maint?
[23:17] <kklimonda> tjaalton: apparently not, I've always assumed that it's enough to get access to one project to have it for all, but I must have misread something
[23:18] <tjaalton> kklimonda: you could create a personal repo where i can pull from, or just apply for c-m :)
[23:19] <tjaalton> it's just paperwork
[23:19] <kklimonda> tjaalton: yeah, I should apply for c-m anyway - I want to keep one of my packages there anyway
[23:19] <tjaalton> hmm so a lot to upload tomorrow
[23:20] <tjaalton> oh well, rammstein was worth it
[23:21] <kklimonda> oh? :)
[23:21] <kklimonda> I guess rammstein is always worth it :)
[23:22] <tjaalton> yeah just arrived home from the show, too tired to work on the packages tonight
[23:23] <kklimonda> tjaalton: are you a DM/DD, or who did you ask to sponsor your collab-maint application who's working on freeipa in debian?
[23:23] <tjaalton> kklimonda: email the diff to pki-core and i'll integrate it before uploading
[23:24] <kklimonda> ok
[23:24] <tjaalton> kklimonda: nope, i guess you can apply from alioth pages somewhere, can't remember
[23:25] <tjaalton> there's noone else working on these packages (anymore) :)
[23:26] <kklimonda> tjaalton: wait, I'll just upload branch to github - should be easier then sending patches
[23:26] <tjaalton> yep
[23:28] <tjaalton> anyway, too late for me now, i'll check those tomorrow
[23:28] <kklimonda> tjaalton: https://github.com/kklimonda/pki-core-maint there are 4 patches now - first three are enough to install freeipa-server if I remember correctly, and the last one adds dependency on some perl lib required for setup to work.
[23:28] <kklimonda> sure, good night :)
[23:28] <kklimonda> It's also time for me
[23:28] <tjaalton> yep, night!
[23:58] <undecim> My server always hangs on "Unpacking x11-common" and the dpkg process ignores even SIGHUP. This doesn't happen with any other package
[23:59] <undecim> Where is the dpkg .deb cache?
[23:59] <kklimonda>  /var/cache/apt/archives/