[00:58] <SpamapS> roaksoax: FYI, regarding maas and rabbitmq.. we should fix MaaS to not fail if RMQ is not running yet, rather than try to coordinate w/ upstart. They won't always live on the same box, so it's not really viable to believe we can control the bootup order of machines in a distributed setting.
[00:59] <lifeless> SpamapS: +1
[01:02] <SpamapS> this problem exists in tons of services.. but stuff we are writing *now* should not repeat that mistake. :)
[01:10] <twb> boot order isn't deterministic even on a single host, when using upstart
[01:20] <SpamapS> twb: its not supposed to be. Things that can start in parallel, should.
[01:22] <SpamapS> twb: only "plumbing" should need ordering really.
[01:22] <twb> just sayin
[02:04] <roaksoax> SpamapS: agreed. However, this is not really an issue of maas failing; rather, it's an issue of not being able to successfully install maas because rabbitmq stalls the installer. This workaround just allows us to install maas successfully from the installer.
[02:05] <roaksoax> SpamapS: and this is the only way Daviey and I could figure out to do so
[03:16] <Smaug> hey all, how do I find out under what user my apache process is running?
[03:17] <twb> It's www-data unless you've messed up
[03:17] <twb> you would find out by looking at ps output or pgrep and /proc
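A minimal sketch of twb's ps//proc approach. Since no apache2 process is assumed to be running here, the demo inspects the current shell's own PID; for Apache you would substitute `pid=$(pgrep -o apache2)`.

```shell
# Demo of twb's suggestion on the current shell's PID ($$); for Apache,
# use pid=$(pgrep -o apache2) instead (oldest matching process).
pid=$$
owner=$(ps -o user= -p "$pid" | tr -d ' ')            # user name via ps
uid=$(awk '/^Uid:/ {print $2}' "/proc/$pid/status")   # real UID via /proc
echo "process $pid runs as $owner (uid $uid)"
```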
[03:18] <myhrlin> or by checking apache's config file
[03:20] <Smaug> thanks
[03:20] <Smaug> twb
[03:20] <twb> myhrlin: well if you trust config files ;-)
[03:20] <twb> Could be config was changed since apache was last started
[03:21] <myhrlin> ah, hopefully you wouldn't have any untrusted users on the machine to do that and not restart the daemon
[03:21] <twb> fsvo untrusted users = idiot coworker sysadmins, but yeah
[03:22] <myhrlin> yeah I wouldn't trust them
[03:23] <twb> Fucking cups and its stupid conflation of conffile and state file
[03:23] <twb> http://paste.debian.net/161199/ my etckeeper logs are FULL of that crap
[03:46] <Smaug> quick check - any dangers on giving www-data write access to a folder containing a website?
[03:47] <twb> Yes
[03:47] <twb> Tell your web app author to get a goddamn clue
[03:47] <twb> Also probably better to discuss this on #httpd
[03:52] <qman__> yeah, if the httpd has write access to any directories which contain scripts (or could contain scripts), then when vulnerabilities are found in the website code, attackers will invariably use them to upload nasty things
[03:53] <twb> Especially if you enable scripts ;-)
[03:53]  * twb comes from the "static HTML or GTFO" school
[04:04] <Smaug> hmm
[04:04] <Smaug> ty
[04:04] <dork> np dragon
[05:22] <SpamapS> roaksoax: yeah that makes sense. My main point is simply that it is better solved upstream by maas itself.
[05:22] <SpamapS> roaksoax: the mistake is in the way maas did things.. you're just working around that.
[06:16] <JayWalker_> I changed some things on my server (none of which should have caused this) and suddenly I'm getting 403 errors from apache on ALL my host names even with the file/folder permissions set to 777. wat do?
[06:38] <RoyK> gd mrnng
[07:23] <acidflash> hello all,
[07:23] <acidflash> i am having problems with processes blocking for more than 120 seconds
[07:24] <acidflash> i have a raid card, and 10 hdd's installed in jbod, each disk is an array (using it as a sata expansion card), and jfs
[07:24] <acidflash> during rsync, after all the memory (16 GB) fills up
[07:27] <acidflash> the rsync will go into D state, and when i dmesg, i get the following -> http://pastebin.com/ZYqehBxa
[07:35] <acidflash> i've googled long and hard, and have come to no conclusion,
[07:35] <acidflash> kernel is 3.0.X
[07:36] <twb> acidflash: do you have swap?
[07:36] <acidflash> twb: yes
[07:36] <twb> It's probably swap thrashing if memory is full
[07:37] <twb> 2.6 *sucks* at swap IME
[07:37] <twb> Dunno about 3.x
[07:37] <acidflash> take a look at pastebin its dmesg
[07:37] <twb> Those aren't helpful without more context
[07:38] <twb> You say you have jfs and JBOD SATA disks -- is there anything in between (mdadm, LVM, ...) ?
[07:38] <acidflash> yes, sorry, there is LVM
[07:38] <acidflash> no mdadm
[07:39] <twb> so each SATA HDD is a PV and they're all in one VG, and you have a jfs on top of that, spanning PVs?
[07:39] <acidflash> yes sir
[07:39] <twb> OK.  You realize your MTBF is pretty fucked in that layout, right?
[07:40] <acidflash> why would it be? its just a sata expansion, it shouldnt be
[07:40] <twb> Because the failure of any one disk will lose your entire array
[07:40] <acidflash> each disk is a separate array, and then put into lvm
[07:40] <twb> You've effectively got an unstriped raid0
[07:40] <acidflash> twb, each disk is a separate array
[07:40] <twb> Right so you have zero redundancy
[07:40] <twb> zero parity
[07:41] <acidflash> there is a way to replace a disk in lvm, only losing data on that disk
[07:41] <acidflash> these files are not mission critical
[07:41] <twb> OK, so long as you realize that
[07:41] <acidflash> its just a bunch of videos cached from youtube
[07:41] <twb> And jfs would probably be pretty pissed off about having a 2TB chunk of its blocks zeroed, obviously
[07:41] <acidflash> yes, i realise there is no redundancy
[07:41] <twb> Okey dokey
[07:42] <twb> How are you calling rsync?
[07:42] <acidflash> truth is i haven't tested with jfs; ext4 is not too shabby about it
[07:42] <twb> And how big are the source and destination dirs
[07:43] <acidflash> rsync -avHP --ignore-existing --exclude '1oBrGpbCGqs' -e ssh root@XX.XX.XX.XX:/videos/youtube /videos/
[07:43] <acidflash> thats how i am calling rsync,
[07:43] <acidflash> source dir is about 8.8TB
[07:43] <acidflash> destination DIR has upper limit of 28 TB
[07:44] <twb> Do you need -H?  That probably pisses it off.
[07:44] <acidflash> no, not necessarily
[07:44] <acidflash> would be nice to have though
[07:44] <acidflash> i have done 8.3 TB with it
[07:44] <acidflash> if i stop now, will it affect anything negatively?
[07:44] <twb> as in interrupt rsync?
[07:44] <acidflash> because the remaining data will be in the same dir
[07:45] <twb> I wouldn't think so
[07:45] <acidflash> no, not interrupt, i need to recall rsync
[07:45] <acidflash> i mean data integrity
[07:45] <twb> I don't follow
[07:45] <acidflash> H = hard link
[07:45] <acidflash> 8.3 TB of data with hard links
[07:45] <acidflash> the remaining 600 Gigs not hard-linked
[07:45] <acidflash> would that be a problem ?
[07:45] <twb> Not using -H just means if foo.c and bar.c are hard-linked on the source, they won't be hard-linked on the destination
[07:45] <acidflash> mmmmmmmmmmmmm
[07:45] <acidflash> ok
[07:46] <twb> If you don't use links extensively, it shouldn't be a big deal
[07:46] <acidflash> i dont
[07:46] <twb> You can always relink them post-facto with perforate's finddup -l
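What -H actually preserves, in miniature (a sketch using throwaway temp files): two hard-linked names share one inode, so the link count is 2; without -H the destination just gets two independent copies, which perforate's finddup -l can later re-merge.

```shell
# Hard links in miniature: foo.c and bar.c are two names for one inode.
# rsync -H recreates that relationship; plain rsync copies two files.
d=$(mktemp -d)
echo 'int x;' > "$d/foo.c"
ln "$d/foo.c" "$d/bar.c"        # hard link, not a symlink
links=$(stat -c %h "$d/foo.c")  # link count on the shared inode
echo "link count: $links"
```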
[07:46] <acidflash> ok ill try without -H
[07:46] <twb> Other than that I can't think what else would be giving you grief
[07:46] <acidflash> so the swap is causing the problem you think?
[07:47] <twb> swap thrash just means the system will hang instead of killing off naughty procs
[07:47] <twb> Look at free -m or free -g during the issue and if swap is being used that's a tip off
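twb's check, sketched by reading /proc/meminfo directly (the same numbers free reports, in kB): a nonzero and climbing swap-used figure during the stall is the tip-off.

```shell
# Swap usage from /proc/meminfo (kB), the same figures `free` shows;
# watch this during the rsync stall -- climbing usage suggests thrashing.
swap_total=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
swap_free=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
swap_used=$(( swap_total - swap_free ))
echo "swap used: ${swap_used} kB of ${swap_total} kB"
```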
[07:48] <twb> btw next time I would suggest doing raid0 w/striping in mdadm rather than using lvm for this use case
[07:49] <acidflash> why would you recommend that over lvm
[07:49] <acidflash> for performance?
[07:49] <twb> because that's what it's for
[07:49] <twb> atm if you do synchronous writes, they'll all go to one disk and the rest will idle
[07:50] <twb> (Unless you've explicitly striped in LVM, anyway)
[07:50] <twb> Look at iostat and see if all your writes are bunched into one or two disks
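A sketch of the two striping options twb mentions. The device and VG names (/dev/sd[b-k], vg_videos) are placeholders, and both commands are destructive, so this is illustration only, not something to paste at a live system.

```shell
# DESTRUCTIVE sketch -- placeholder device/VG names, do not run as-is.

# (a) twb's suggestion: one striped raid0 across all ten disks via mdadm:
mdadm --create /dev/md0 --level=0 --raid-devices=10 /dev/sd[b-k]
mkfs.jfs /dev/md0

# (b) or stay with LVM but stripe the LV explicitly across the ten PVs
#     (-i = stripe count, -I = stripe size in KiB):
lvcreate -i 10 -I 64 -l 100%FREE -n videos vg_videos
```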
[07:50] <acidflash> nah, they are spread across many
[07:50] <twb> hum.
[07:50] <acidflash> there is a nice util called saidar
[07:50] <twb> I find that surprising, but whatever
[07:50] <acidflash> shows me disk, network, cpu, io etc.
[07:51] <twb> cute
[07:52] <twb> Ah, free supports -h in recent versions.
[07:53] <acidflash> there isn't much stress in reads on these disks, and i need them to just be one large chunk. that's why i avoided raid0: if 1 fails, all fails. with lvm, i just replace the failed hdd with an empty one, put the UUID of the old hdd on it, fsck, zero the extents that hdd held, and go on like nothing happened
[07:53] <acidflash> only losing data on that disk
[07:53] <acidflash> raid1 would be nice, but i need double the storage
[07:53] <twb> acidflash: you should be able to do that with raid0 as well
[07:54] <twb> I haven't done it myself because usually if I need volatile storage I just put in a shitload of RAM
[07:55] <acidflash> twb: then obviously raid0 is a better choice, but i've never actually done it or come across someone doing that with raid0; i was unaware you could
[07:55] <twb> mdadm is pretty flexible
[07:55] <twb> obviously you should test it first; ICBW :-)
[07:56] <acidflash> yeah, ill test it on a small storage
[07:57] <acidflash> im doing it now without -H
[07:57] <acidflash> see if it hangs,
[07:57] <twb> if that doesn't work try turning swap off entirely (temporarily at least) with swapoff -a
[07:57] <acidflash> from what i read on google, heavy io with rsync on any type of raid blocks for whatever reason, and it hasn't been solved (in any of the articles from 2011)
[07:57] <acidflash> aha
[07:57] <acidflash> ok
[07:58] <twb> That might just be me being an anti-swap bigot
[07:58] <acidflash> well its worth a try
[07:58] <acidflash> just to finish rsync
[07:59] <acidflash> i probably wont be having these problems once i start serving from it
[07:59] <acidflash> average io is not more than 45 mb/s read
[07:59] <twb> The other thing you could do is just use tar or something and be super lazy
[07:59] <acidflash> 15 mb/s write
[07:59] <clarezoe> Hi, I'm trying to add cgi support to apach but every time I open localhost, the browser asks me to download. I'm following the doc http://httpd.apache.org/docs/2.2/howto/cgi.html and here is my httpd.conf http://paste.ubuntu.com/903495/ . Please help, thanks!
[07:59] <acidflash> but rsync is doing close to 350-400 mb/s
[07:59] <twb> tar over nc instead of ssh I mean
[07:59] <acidflash> mm
[07:59] <twb> Or nfs, or whatever
[08:00] <twb> There are lots of ways to just shove shit from one place to another
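twb's tar-over-nc idea, sketched. The hosts and port in the comments are hypothetical; the live part demos the same tar pipeline locally, without the network hop.

```shell
# twb's suggestion, roughly (hypothetical host/port -- sketch only):
#   receiver:  nc -l -p 9000 | tar -C /videos -xf -
#   sender:    tar -C /videos/youtube -cf - . | nc dest.example.com 9000
# The same pipeline demoed locally, minus nc:
src=$(mktemp -d); dst=$(mktemp -d)
echo 'cached video' > "$src/clip.flv"
tar -C "$src" -cf - . | tar -C "$dst" -xf -
cat "$dst/clip.flv"
```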
[08:00] <acidflash> yeah
[08:00] <acidflash> whats your preferred raid card?
[08:00] <acidflash> areca?
[08:06] <twb> Uh, no.  mdadm.
[08:06] <twb> Hardware raid is an expensive, buggy, unreliable pain in the arse.
[08:07] <acidflash> what if you need lots of ports, what do you do
[08:07] <acidflash> ie: i need at least 20 sata
[08:07] <acidflash> other then the 7 on board
[08:07] <twb> You get a $10 SATA to PCIe bridge and put it in jbod mode
[08:07] <acidflash> any _good_ pci-x to sata only has 2 on it
[08:08] <acidflash> 2 x 6 pci-x = 12
[08:08] <acidflash> still 8 short
[08:08] <twb> pcix?  That's not dead yet?
[08:08] <acidflash> pci-express
[08:08] <twb> That's pcie
[08:08] <acidflash> depends where you are ;)
[08:08] <twb> If you are putting more than 6 disks in a machine you probably need to consider getting a fancy-pants enterprise case or mobo, or a NAS or a SAN
[08:08] <acidflash> correct terminology is probably pci-e
[08:09] <twb> pcix was a competing standard
[08:09] <acidflash> i can use SAN or NAS for my caching systems
[08:09] <acidflash> it needs to write to local disks,
[08:09] <acidflash> it cant write to network storages
[08:10] <twb> Shrug.  I'm just giving you my opinions.  You don't have to follow them.
[08:11] <twb> I'd have to run the numbers, but I expect the PCIe or QPI backplane of a workstation mobo can't sustain a whole lot more than 6 or 8 disks
[08:11] <acidflash> bandwidth wise it can
[08:11] <acidflash> 66 Mhz is plenty
[08:12] <twb> That's a clock cycle rate, not bandwidth
[08:12] <acidflash> its bus speed for port
[08:12] <smb> Daviey, Morning Sir! Can I get you to have a look at bug 882540? I think it would be ready for some sponsorship, sir! ;-)
[08:12] <acidflash> you can calculate bandwidth from it, cant you
[08:13] <acidflash> oh i forgot to mention, these disks are 3 TB GPT
[08:13] <acidflash> twb: does that change anything?
[08:13] <twb> 66MHz appears to be a PCI clock.  PCI-e v3.0 is rated at 1GB/s per lane.
[08:13] <acidflash> 33 Mhz = pci
[08:13] <twb> acidflash: well, it means you'll get some write amplification if your blocks aren't aligned.
[08:13] <acidflash> PCI-E 2 is 66 mhz
[08:14] <twb> acidflash: ref. https://en.wikipedia.org/wiki/PCIE and https://en.wikipedia.org/wiki/PCI
[08:14] <Daviey> smb: looking
[08:14] <Daviey> (morning smb, btw o.)
[08:14] <twb> "As a point of reference, a PCI-X (133 MHz 64-bit) device and PCIe device at 4-lanes (×4), Gen1 speed have roughly the same peak transfer rate in a single-direction: 1064 MB/sec."
[08:15] <acidflash> yes i saw that
[08:15] <acidflash> but PCI-X @ 133 Mhz is definitely not correct
[08:15] <acidflash> if by PCI-X they mean the old PCI
[08:16] <acidflash> its plausible for express
[08:16] <smb> Daviey, The problem itself is probably valid back to oneiric (at least I think I remember reports that had 3.0 kernels in them), but I have not yet prepared a debdiff for that...
[08:16] <twb> No, as I said PCI-X was a competitor to PCIe and it is now obsolete.
[08:16] <twb> https://en.wikipedia.org/wiki/PCI-X
[08:16] <acidflash> ahhhhhh
[08:16] <acidflash> ok
[08:16] <acidflash> yes very possible then
[08:17] <Daviey> smb: looks good!  Whilst not /required/, it's good practice to use dep-3 patch headers (tagging).. Have you come across it before, http://dep.debian.net/deps/dep3/ ?
[08:17] <twb> Backplane on a Z68 (for example) appears to be "DMI" now, not PCIe or QPI
[08:17] <smb> Daviey, Not yet, will have to look at the documentation
[08:17] <twb> "The original implementation provides 10 Gbps each direction (using a x4 link). DMI 2.0 (introduced in 2011) doubles the transfer rate to 20 Gbps with a x4 link."
[08:18] <acidflash> yeah i can believe it
[08:18] <Daviey> smb: skip directly to the end, there are some examples, slightly higher up there is detailed explanation.. if you wanted to do that, i'd be happier.. but i'm not going to make it a requirement.
[08:18] <acidflash> especially with ocz pushing their HSDL's
[08:18] <twb> acidflash: that's between the north and southbridges
[08:18] <acidflash> if you put a bunch of ssd's in raid0, they are doing almost 2TB/s
[08:18] <acidflash> sorry
[08:18] <twb> acidflash: i.e. between the CPU and the peripherals
[08:18] <acidflash> 2 GB/s
[08:18] <smb> Daviey, I am using a format like we have for the kernel right now (to have s-o-bs and references to bugs and origin)
[08:19] <twb> Given that SATA can do a theoretical 6Gbps, that means you can have up to three disks before you bottleneck at the backplane
[08:20] <acidflash> thats a good point
[08:20] <twb> Of course the heads can't pull data off spinning metal at 6Gbps, but you see my point that you at least need to run these numbers rather than just shoving a shitload of disks in a case and hoping for the best
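Running twb's numbers from just above (DMI 2.0 at ~20 Gb/s, SATA III at a theoretical 6 Gb/s per disk) as trivial shell arithmetic:

```shell
# Back-of-envelope: how many SATA III disks can saturate a DMI 2.0 link?
dmi_gbps=20     # DMI 2.0, x4 link (figure quoted earlier in the channel)
sata_gbps=6     # SATA III theoretical per-disk ceiling
disks=$(( dmi_gbps / sata_gbps ))
echo "disks before the backplane bottlenecks: $disks"
```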
[08:20] <Daviey> smb: Yep.. so i can upload as is, but if you want to change to dep3, i'll hold out.
[08:20] <acidflash> twb: I normally dont use the onboard sata for more than the system and logs
[08:21] <acidflash> normal throughput is probably not even 2Gb/s
[08:21] <acidflash> but the load is on the PCI-Express cards
[08:21] <acidflash> thats a different story
[08:21] <acidflash> I dont think the backplane applies to them
[08:21] <acidflash> otherwise its not possible to push 20 Gb/s with 4 OCZ's Raid0 striped
[08:21] <acidflash> and its been doen
[08:22] <smb> Daviey, Seems more or less I already got the information in, just not exactly in the right format. But if you are ok with it, I think for now I would prefer to keep it that way and try to comply better for future changes.
[08:22] <acidflash> done*
[08:22] <twb> acidflash: uh, if you plug a pcie card in, that goes disk --sata--> pcie card --pcie--> southbridge --dmi--> northbridge --> cpu
[08:22] <acidflash> then something is off with the numbers :)
[08:22] <twb> Probably
[08:22] <acidflash> not my numbers
[08:22] <twb> Or the OCZ guy was lying, or he didn't use a 4 lane DMI backplane
[08:22] <acidflash> read about OCZ-Vertex 3 Max IOPS
[08:23] <acidflash> mm
[08:23] <acidflash> yes thats possible also
[08:23] <acidflash> not 4 lane
[08:23] <twb> Enterprise gear, like blade racks, will have different backplanes
[08:23] <acidflash> yes thats 100% true
[08:23] <twb> acidflash: 4 lanes on the mobo between north and south bridge, not pcie x4
[08:23] <acidflash> yes i understand
[08:26] <rbasak> Daviey: ping
[08:29] <Daviey> rbasak: I have a call starting in 30 seconds, after that.. Fancy a chat?
[08:29] <rbasak> Daviey: OK
[09:03] <acidflash> tried rsync without -H and still same problem
[09:04] <acidflash> is here no solution to this?
[09:04] <acidflash> there*
[09:04] <lynxman> morning o/
[09:32] <clarezoe> hi, when I open 127.0.0.1 the script works but the browser asks me to download if I open localhost. anyone can help? Thanks
[10:58] <larsemil> is there some nice lvm manager made in curses?
[11:04] <acidflash> ok this is EXTREMELY annoying
[12:21] <mrrothhcloud_> I want a easy webplatform, for my consulting website, should I use webpress
[12:21] <mrrothhcloud_> wordpress
[12:21] <mrrothhcloud_> I am hosting on an ubuntu server. if I go with wordpress, should I use the package in the repository, or should I choose another platform altogether?
[13:25] <lynxman> roaksoax: ping
[13:36] <smoser> Daviey, are we thinking today's cloud images should be tested ?
[13:36] <smoser> for beta-2
[13:39] <Daviey> smoser: There isn't anything i'm aware of that makes a change to cloud-images packages pending.
[13:40] <roaksoax> lynxman pong
[13:41] <lynxman> roaksoax: morning sir, I have a couple questions for you orchestra related
[13:41] <lynxman> roaksoax: I got the new precise profile in place (thanks!) and while trying to juju bootstrap the machine I marked as net bootable reinstalls properly but doesn't install zookeeper or anything extra
[13:42] <lynxman> roaksoax: http://pastebin.ubuntu.com/903894/
[13:42] <lynxman> roaksoax: have you found this issue before?
[13:43] <smoser> jamespage, that means to you, can we run a full test?
[13:43] <lynxman> roaksoax: juju is 0.5+bzr401-1juju1~oneiric1
[13:43] <smoser> of the 20120328
[13:44] <roaksoax> lynxman: let me see
[13:45] <zul> Daviey: ping im going to replace the console patch that we carry with the new one
[13:46] <Daviey> zul: I assume you'll put it through CI, and make sure it DTRT
[13:47] <zul> Daviey: yep
[13:48] <roaksoax> lynxman: that's done by cloud-init. are you sure it is running on boot?
[13:48] <lynxman> roaksoax: it is afaict
[13:48] <lynxman> roaksoax: just PMed you credentials if you fancy a look
[13:48] <roaksoax> lynxman: so check the cloud init logs
[13:49] <lynxman> roaksoax: I reckon cloud-init running is attached to the profile right?
[13:54] <jamespage> smoser: I can kick that off now
[13:54] <smoser> thank you, mr. page.
[13:55] <jamespage> smoser: 20120328 to confirm todays image?
[13:56] <smoser> yeah.
[13:56] <jamespage> OK running now
[13:59] <smoser> hallyn, i responded to your performance mail
[13:59] <smoser> and then had one more thought.
[14:00] <smoser> i guess i'd like to see what we're doing by default in ubuntu with cache=
[14:00] <smoser> as the kvm man page says: "Some block drivers perform badly with cache=writethrough, most notably, qcow2."
[14:01] <smoser> if we're doing that combination by default.. maybe we at least want to know if qed would make a difference. i probably wouldn't advocate for changing from qcow as the default at this point, but it's a good data point.
[14:11] <zul> Daviey/smoser: nnnnnghhh
[14:13] <zul> smoser: http://paste.ubuntu.com/903947/
[14:43] <smoser> hallyn,
[14:43] <smoser> http://paste.ubuntu.com/903859/
[14:43] <smoser> oops
[14:48] <smoser> hallyn, http://paste.ubuntu.com/904000/
[14:49] <smoser> basically 512 (the default bs for dd) is what is completely sucking
[14:49] <gary_poster> hallyn, hi.  We didn't really think apport info was necessary for bug 959352 but we just added it anyway because we didn't see any action on it.  Do you happen to know any behind-the-scenes information on the kernel side of it?
[14:51] <smoser> hallyn, then, when going to file in root disk, http://paste.ubuntu.com/904004/ (including sync) we still get good speed.
[15:26] <smoser> hggdh, i suspect bug https://bugs.launchpad.net/ubuntu/+source/fence-agents/+bug/961232
[15:26] <smoser> is fairly important to you
[15:26] <smoser> ?
[15:27] <hggdh> smoser: sort of: I think it is just mixing the commands & responses received (on the many different ssh sessions to the PDU)
[15:28] <hggdh> if this is the case, then I do not really care, I can live with the excess output
[15:28] <smoser> wait. what?
[15:28] <smoser> is it rebooting the systems or not.
[15:29] <hggdh> at this point in time, I am commanding reboot on 4 systems, via 4 different jenkins jobs
[15:29] <hggdh> each job deals with ONE and ONLY ONE machine
[15:29] <hggdh> but the output shows the commands and responses of ALL machines
[15:30] <hggdh> and the systems do get rebooted
[15:33] <hggdh> smoser: my itch is (apart, of course, from seeing more than I should see): is this just the PDU's crappy code mixing the output, or, somehow, is fence_cdu commanding all to reboot when called with just one?
[15:33] <hggdh> I frankly think the latter option quite farfetched. But this is software :-)
[15:33] <smoser> hggdh, right.
[15:34] <smoser> that should be easily testable, though, hggdh
[15:34] <smoser> if you issue a reboot of one system, and all reboot ....
[15:34] <smoser> then we need to fix that
[15:34] <smoser> :)
[15:34] <hggdh> yeah. I intend to check on it as soon as beta2 ends
[15:35] <hggdh> when I opened the bug I had just had a WTF moment seeing the output, and decided to get it recorded ASAP
[15:45] <jamespage> utlemming, did you just kickoff a precise ec2 test run
[15:45] <jamespage> ?
[15:45] <utlemming> jamespage: yes
[15:45] <utlemming> did I mess something up?
[15:46] <jamespage> utlemming, hrm - no its running fine
[15:46] <jamespage> smoser got me to press the button about two hours ago - https://jenkins.qa.ubuntu.com/view/ec2%20AMI%20Testing/view/Overview/job/precise-server-ec2/6/
[15:46] <jamespage> **\0/** all green!
[15:47] <utlemming> can you cancel the run? or should we just let it run through?
[15:47]  * jamespage thinks bout that one
[15:47] <jamespage> utlemming, lets just let it run
[15:48] <jamespage> utlemming, if its less green I'll delete it
[15:48] <utlemming> k
[15:49] <jamespage> smoser, Daviey: https://jenkins.qa.ubuntu.com/view/ec2%20AMI%20Testing/view/Overview/job/precise-server-ec2/6/  all looking good
[15:49] <jamespage> thats the first entirely successful run we have ever had on a full test
[15:50] <smoser> jamespage, but then utlemming had to run it again ?
[15:50] <smoser> is that what i see above?
[15:50] <smoser> way to ruin a good result, u
[15:50] <smoser> utlemming,
[15:50] <smoser> :)
[15:50] <jamespage> smoser, like I said - I'll delete the second set of results if they are less green :-)
[15:51] <utlemming> jamespage: lets delete the second run if they fail too :)
[15:51] <utlemming> then I don't ruin smoser's happy day
[16:14] <koolhead17> hi all
[16:31] <robbiew> arosales: SpamapS: looks like https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-juju-charm-testing needs some DONE or POSTPONE love ;)
[16:33] <arosales> SpamapS: should we postpone the charms and other providers except canonistack and EC2?
[16:33] <koolhead17> Essex/Precise screencast https://vimeo.com/39299140
[16:33] <koolhead17> looks interesting without audio though
[16:35] <level15> hi: what is your suggested way of backing up your KVM virtual machines?
[16:35] <SpamapS> indeed, I'll postpone some stuff
[16:58] <arosales> SpamapS: thanks for adding the updates to Juju charm testing. If its ok with you I am also going to mark deploy framework against ec2 and canonistack as "INPROGRESS"
[16:59] <SpamapS> arosales: I don't really think it is in progress ??
[17:00] <arosales> SpamapS: Is that what m_3 is working on?
[17:00] <arosales> or is that more of the implement work item?
[17:00] <SpamapS> m_3: are you working on getting the test suite running with an ec2 and canonistack config?
[17:00] <SpamapS> arosales: they can certainly be decoupled
[17:09] <arosales> SpamapS: ok.
[17:10]  * arosales will wait to see what m_3's current status is.
[17:28] <m_3> SpamapS: yes
[17:28] <hallyn> smoser: interesting
[17:28] <hallyn> gary_poster: I don't know of any behind-the-scenes action, sorry.  Oh, though apw was going to try merging the latest upstream
[17:29] <hallyn> (of overlayfs)
[17:29] <m_3> arosales SpamapS: I'm working to get the current charmtesting framework working against ec2 and canonistack
[17:33] <arosales> m_3: SpamapS: would it be reasonable to mark deployment against ec2 and canonistack as in progress @ https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-juju-charm-testing  then?
[17:35] <m_3> arosales: just updated that
[17:36] <gary_poster> ack thanks hallyn
[17:42] <arosales> m_3: thanks
[17:44] <m_3> arosales: np
[17:45] <hallyn> gary_poster: the shame is this is horribly interesting to me, i'd love to dig into exactly what's happening;  but i need to focus on libvirt's stability
[17:47] <JayWalker_> Is there a quick and easy way i can set apache back to default configuration?
[17:50] <hallyn> jjohansen: terribly sorry..   but jamespage saw a reoccurance of bug 925024
[17:50] <roaksoax> SpamapS: howdy! So I was wondering whether you think it's best to do this: initctl emit --no-wait rabbitmq-server-running or [ -n "$UPSTART_JOB" ] && initctl emit --no-wait rabbitmq-server-running
[17:50] <roaksoax> SpamapS: in rabbitmq's init script
[17:51] <smoser> adam_g, if you have an unmodified ubuntu precise openstack installation running, could you pastebin 'virsh dumpxml instance-id' and ps -axww | grep kvm ?
[17:51] <SpamapS> roaksoax: that's the right way to do it if you want to coordinate on a single node.
[17:51] <smoser> i'm just curious to see what all we're setting, and too lazy to do it myself.
[17:51] <adam_g> smoser: yeah, one sec
[17:52] <SpamapS> roaksoax: rather, just calling initctl without a [ -n ]
[17:52] <roaksoax> SpamapS: cool, thanks
[17:52] <adam_g> smoser: oh actually, what do you mean unmodified? i just updated to trunk at home, i can get you that from CI lab probably, but its running a version thats different than precise atm
[17:53] <smoser> well, give me what you can. and guess if it is different.
[17:53] <SpamapS> roaksoax: should also open a task upstream w/ maas to make it wait for the rabbitmq server rather than failing if its not available though. :)
[17:53] <smoser> ie, is the CI lab likely to have changed the libvirt xml ? and kvm?
[17:53] <SpamapS> roaksoax: not to beat on a dead horse. ;)
[17:53] <adam_g> smoser: heres xml: http://paste.ubuntu.com/904274/
[17:53] <gary_poster> hallyn, completely understand.
[17:54] <adam_g> smoser: http://paste.ubuntu.com/904276/ kvm proc
[17:54] <smoser> carp, hallyn, you see that ^ ?
[17:54] <smoser> cache=none on the root disk.
[17:54] <roaksoax> SpamapS: heh, yeah, well the issue is not really maas not waiting for rabbitmq; rather, the creation of user/vhost/permissions fails on CD installation
[17:55] <roaksoax> SpamapS: since this is done in maas postinst, then on the installer things fail, while on a normal apt-get it works
[17:55] <adam_g> smoser: oh, actually, on that node i am running 2012.1~rc1-0ubuntu2
[17:56] <adam_g> smoser: yeah, looks like all disks are hard-coded as cache='none in libvirt.xml.template
[17:57] <smoser> adam_g, actually, i dont see cache= at all in the libvirt
[17:57] <smoser> in the xml
[17:57] <smoser> meaning libvirt mus be doing that by default
[17:57] <smoser> or i'm missing it
[17:58] <SpamapS> roaksoax: I see.. definitely makes sense then.
[17:58] <adam_g> smoser: <driver name='qemu' type='qcow2' cache='none'/> ?
[17:59] <SpamapS> roaksoax: shouldn't the udeb depend on rabbitmq though?
[17:59] <SpamapS> roaksoax: then it would be guaranteed to be configured before.
[18:00] <roaksoax> SpamapS: it's not a udeb, it's the package being installed in-target
[18:00] <roaksoax> SpamapS: but since rabbitmq is not running on installer time, then it does not create the stuff needed
[18:01] <smoser> adam_g, what file is that in ?
[18:01] <smoser> i dont see it in nova source
[18:01] <SpamapS> roaksoax: you're allowed to 'invoke-rc.d rabbitmq-server start' in a postinst...
[18:01] <SpamapS> roaksoax: though this seems more complicated than that
[18:02] <adam_g> smoser: i pulled that from the pastebin
[18:02] <SpamapS> roaksoax: can maas use a remote rabbitmq? If so, is there a debconf question?
[18:02] <Daviey> SpamapS: invoke-rc.d in the installer won't work directly because of policy.d
[18:02] <adam_g> smoser: http://paste.ubuntu.com/904288/
[18:02] <adam_g> smoser: /usr/share/pyshared/nova/virt/libvirt.xml.template
[18:02] <SpamapS> Daviey: *uggh*
[18:03] <adam_g> smoser: apparently users can provide their own template to be used instead of the default, though ive not done that
[18:03] <SpamapS> what a freakin mess
[18:03] <SpamapS> Daviey: how then is maas started to create the user?
[18:03] <Daviey> SpamapS: it just needs more gaffa tape
[18:03] <smoser> adam_g, not any longer.
[18:03] <smoser> not on trunk
[18:03] <smoser> that file is gone
[18:03] <roaksoax> Daviey: https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/913464
[18:03] <Daviey> SpamapS: invoke-rc.d --force or /etc/init.d/ override policy.d :)
[18:03] <Daviey> ^^ gaffa tape.
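The "gaffa tape" Daviey is describing, as a hypothetical postinst fragment (a sketch, not the actual maas postinst): inside a d-i in-target chroot, policy-rc.d answers 101 and turns plain invoke-rc.d into a no-op, so the broker is forced up before the user/vhost setup runs. $MAAS_PASSWORD is a placeholder.

```shell
# Sketch of the installer workaround (hypothetical fragment): policy-rc.d
# blocks plain invoke-rc.d in-target, so fall back to --force, then do
# the user/vhost/permissions setup that was failing in the installer.
invoke-rc.d rabbitmq-server start \
    || invoke-rc.d --force rabbitmq-server start
rabbitmqctl add_user maas "$MAAS_PASSWORD"
rabbitmqctl add_vhost maas
rabbitmqctl set_permissions -p maas maas '.*' '.*' '.*'
```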
[18:04] <roaksoax> Daviey: see above patch, I think that fixes our issues
[18:04] <smoser> https://review.openstack.org/#change,5621
[18:05] <adam_g> smoser: yeah, i know about that. but i thought we were talking about essex? :)
[18:06] <smoser> oh, thats not essex.
[18:06] <smoser> ok.
[18:06] <smoser> thats fine.
[18:06] <roaksoax> Daviey: ah no, that is already merged
[18:07] <SpamapS> Daviey: indeed....
[18:07]  * SpamapS backs away slowly
[18:12] <smoser> well, hallyn adam_g it looks like cache=none has been default for a while.
[18:12] <smoser> https://review.openstack.org/#change,5769
[18:12] <smoser> (it was before that too)
[18:13] <hallyn> cache is for wussies
[18:14] <smoser> ah.
[18:14] <smoser> for some reason i always had in my head that none == unsafe
[18:14] <smoser> or at least ~ unsafe
[18:17] <zul> adam_g: still around?
[18:17] <adam_g> zul: yeah
[18:18] <zul> adam_g: so im doing some work on the nova packaging branch. im going to make it fail if the test suites fail (it needs a fix in python-netaddr which i will upload tomorrow), and i swapped out the console patch with the new one as well
[18:18] <adam_g> zul: what is the status of the new  patch?
[18:19] <zul> adam_g: fails some pep8 tests right now but it has been reviewed with asking a bigger file size to check
[18:20] <adam_g> zul: hmm looking for the gerrit proposal
[18:20] <zul> adam_g: https://review.openstack.org/#change,5873
[18:21] <JayWalker_> I hosed my apache2 install. I reinstalled all the apache2 packages and it's back to default config, but still doesn't work. It cant see anything in /var/www even with permissions wide open. What can i do?
[18:22] <adam_g> zul: oh, thats cool libvirt upstream is looking to solve this
[18:22] <zul> adam_g: da
[18:23] <adam_g> zul: id really prefer we wait to get some more 1+'s from upstream before carrying it. can we run it in proposed and do some heavy testing on it first?
[18:23] <zul> adam_g: yeah its already in the proposed tree
[18:23] <adam_g> zul: ok, lemme run it through. in theory we should be able to dump gigabytes to the console logs with no issues
[18:24] <adam_g> zul: did that swift test suite fix get uploaded? id like to enable keystone's tests if so
[18:24] <zul> adam_g: not til the beta freeze is off
[18:26] <Daviey> adam_g: where is libvirt tracking it?
[18:27] <adam_g> Daviey: not sure, its mentioned in zul's proposal by someone from (i assume) libvirt
[18:27] <adam_g> Daviey: https://review.openstack.org/#change,5873
[18:29] <zul> Daviey: since its totally libvirts fault
[18:31] <zul> adam_g: also im starting to look at out-of-tree patches that might not get into final
[18:32] <urthmover> is it possible to install 10.04 from a 11.04 minimal install disk?
[18:32] <urthmover> I have an apple xserve and 10.04 does not boot from the disk, but the 11.04 does.  I need to test out 10.04 though.  Can this be done?
[18:36] <KM0201> urthmover: i highly doubt it.
[18:42] <Daviey> zul: is it, or kvm's fault?
[18:42] <zul> Daviey: libvirt
[18:50] <urthmover> ok KM0201 I'll take your word for it
[18:51] <KM0201> urthmover: i fail to see how you think it would be possible..
[20:28] <hallyn> aw crud.  kvm-spice -vga qxl is not working for me (at least with precise alternate iso)
[21:10] <hallyn> sigh
[21:12] <adam_g> zul: will all of these pass once that upload of yours goes in? https://jenkins.qa.ubuntu.com/view/Precise%20OpenStack%20Testing/job/precise-openstack-essex-nova-trunk/669/console
[21:13] <zul> Status Code: 404
[21:13] <zul> it should
[21:14] <adam_g> zul: hmm. did you disable the tests or something again? nova just finally built successfully
[21:14] <zul> adam_g: i disabled the tests
[21:14] <adam_g> ok
[21:14] <zul> adam_g: ill re-enable them after the netaddr stuff is uploaded tomorrow
[21:14] <adam_g> k
[22:06] <hggdh> folks, I will need help with bug 967815
[22:42] <Nicolas_Leonida2> hi, what mail server should I install for mail() to work in php?
[22:43] <azneita> Nicolas_Leonida2, how about postfix
[22:47] <Nicolas_Leonida2> default ubuntu server doesn't come with smtp service installed?
[22:49] <hnsz> where do i put my wpa_supplicant.conf?
[23:00] <kklimonda> I'm having problems mounting nfs4 share from lucid server to precise client. It looks like bug http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=622146 - does anyone have idea if that's fixed in lucid? changelog seems to say "no"
[23:25] <kklimonda> yup, seems related.. oh well, I'll have to see if I can backport the patches