[06:58] <bkerensa> SpamapS: I want to know what you're doing!
[12:03] <hazmat> rafa wins :-)
[12:46] <m_3> mornin
[12:56] <hazmat> m_3, ping
[13:02] <m_3> hazmat: yo
[13:02] <m_3> what's up?
[13:05] <m_3> saw proof-errors... we should talk about how to incorporate other test results too
[13:06] <m_3> we were talking about the browser showing green/red results for each charm... perhaps broken down by test
[13:36] <hazmat> m_3, back
[13:37] <hazmat> m_3, so wanted to ask about getting revision numbers into the job info
[13:37] <hazmat> m_3, and also which is the canonical url i should be using for jenkins instances
[13:37] <hazmat> m_3, to date i've just been using the one off your subdomain
[13:37] <hazmat> m_3, i was also wondering how much work it would be to just offer a jenkins publishing endpoint
[13:38] <hazmat> m_3, the testing infrastructure seems a bit broken though which is also of some concern
[13:38] <hazmat> i wanted to brainstorm/talk with you about alternatives
[13:39] <hazmat> just to be clear for now i'm fine with jenkins testing, but i'd like to discuss what's on your plate as regards it this cycle, and the overall feature set we're aiming for
[13:49] <gary_poster> juju folks, we're seeing a problem that is a big deal for us.  As of sometime late last week, we are seeing machines (not services/charms) that never move from the pending state.  We initially saw this late Friday afternoon and figured it was transient or something to do with our charm, but now we are seeing it again and repeatedly.  Here's an example of my status (see line 21): http://pastebin.ubuntu.com/1035478/ .  Here's
[13:49] <gary_poster> an example of benji's status (see line 8).  benji started his with the fairly innocuous http://pastebin.ubuntu.com/1035483/ while I start with http://pastebin.ubuntu.com/1035462/ . Something suspicious is that this seems to happen with our slave charm and not with our master charm--but the charm shouldn't be able to affect this stage of things, right?  I can't even use `juju ssh 2` (I get "2012-06-11 09:35:41,881 ERROR None
[13:49] <gary_poster> object cannot be quoted")
[13:49] <m_3> hazmat: sounds good... lemme finish breakfast and take taylor to work.  ping you in an hour or so... g+ might be easier
[13:51] <benji> another bit of data on the above: the EC2 console does not show the machine as running
[13:51] <gary_poster> ah, yeah, thanks benji
[13:51] <benji> (or even existing for that matter)
[13:52] <gary_poster> huh
[13:52] <hazmat> m_3, sounds good
[13:53] <gary_poster> benji, it is shown as existing in mine, but stopped.  "State Transition Reason: Client.InstanceInitiatedShutdown: Instance initiated shutdown")
[13:53] <hazmat> benji, can you pastebin the provisioning agent log
[13:53] <hazmat> er. gary_poster
[13:53] <gary_poster> :-) sure
[13:53] <hazmat> gary_poster, which version are you running?
[13:53] <hazmat> ppa or distro?
[13:54] <benji> hazmat: sure; I assume that is somewhere on machine 0.  What is the path?
[13:54] <gary_poster> hazmat ppa  0.5+bzr540-1juju5~precise1
[13:54] <hazmat> benji, /var/log/juju
[13:54] <hazmat> benji, it's provisioning-agent.log
[13:55] <gary_poster> hazmat, http://pastebin.ubuntu.com/1035503/
[13:56] <benji> hazmat: ooh, it looks interesting: http://paste.ubuntu.com/1035504/
[13:56] <mgz> ha.
[13:56] <hazmat> gary_poster, thanks
[13:56] <benji> I /think/ this is because I had an existing, but stopped instance.
[13:56] <hazmat> benji, yup
[13:57] <hazmat> if it was a previous juju instance that you had stopped, i.e. managed behind juju's back.. then this makes sense
[13:57] <mgz> hazmat: related question, currently juju leaves stopped machines alone on destroy-environment
[13:57] <mgz> this seems... bad to me.
[13:57] <hazmat> mgz, agreed :-)
[13:57] <gary_poster> hazmat, ah!  yes, we had both done a poweroff that we thought was in an lxc but turned out to be in the host :-P
[13:57] <hazmat> mgz, there are additional issues with the security group interaction
[13:57] <gary_poster> thanks hazmat
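[Editor's note: for anyone hitting the same symptom — a stopped (not terminated) EC2 instance left over from an in-instance poweroff wedged new machines in "pending". One hedged way to spot such leftovers with the EC2 tooling of the era, assuming EC2 credentials are exported in the environment:]

```shell
# A stopped instance still occupies its juju security group and can
# confuse the provisioning agent; look for any in a "stopped" state
# and terminate them before re-bootstrapping.
euca-describe-instances | grep -i stopped
```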
[13:58] <hazmat> mgz, so instead of killing the security group when it exists already, we should just clear it out
[13:58] <mgz> yeah, I'm trying to resolve some of these teardown issues right now
[13:58] <hazmat> mgz, this also saves from the mostly pointless wait at destroy
[13:58] <hazmat> mgz, since that almost never succeeds
[13:58] <mgz> right, that's also painful.
[13:59] <hazmat> mgz, so instead, on teardown just kill the instances; on bootstrap, clear the rules out of sec groups as we allocate machines in them
[13:59] <hazmat> mgz, we can leave sec group cleanup in its entirety to a jitsu tool, or opportunistic provisioning agent level thing
[14:00] <hazmat> mgz, the ostack branch looks like it's shaping up... any chance i can start reviews on it
[14:00] <mgz> so, I've got a neat way around that for openstack, but seems like a good plan for ec2
[14:00] <mgz> right, I'm nearly there with all the moving parts so should be reviewable this week
[14:01] <mgz> some things can land later.
[14:01] <hazmat> mgz, re way around... what's that?
[14:02] <mgz> you can just disassociate a security group from a server in openstack
[14:02] <hazmat> oh.. nice
[14:02] <SpamapS> bkerensa: huh? You asked what I'm doing?
[14:02] <mgz> that avoids the hole where opening ports in a pre-existing sec group means you need to be sure you killed any previously associated machines
[14:02] <hazmat> mgz, are you testing on hpcloud btw?
[14:03] <mgz> (not that this firewalling is really security critical)
[14:03] <mgz> hazmat: not yet with this branch
[14:03] <hazmat> mgz, well considering the per machine group, any pre-existing associated machines are really user error imo or juju not properly handling stopped instances (which also amounts to the same given the lack of support for manual machine management atm)
[14:04] <hazmat> ie. we shouldn't care about pre-existing sec groups per se
[14:04] <mgz> right.
[14:22] <m_3> hazmat: back
[14:25] <m_3> SpamapS: promulquestion if you're still around...
[14:26] <m_3> should we make promulgate _change_ the owner of a branch or require a separate push to a ~charmers branch?
[14:26] <hazmat> m_3, invite out
[16:12] <SpamapS> m_3: around now, wassup?
[16:16] <m_3> SpamapS: hey
[16:16] <m_3> SpamapS: just wondering if we should make promulgate do the extra work of pushing to ~charmers first for you
[16:17] <m_3> SpamapS: and if so, should it just change the ownership of the lp branch or would it require another push
[16:18] <m_3> right now I'm just barfing if it's not ~charmers (and --owner-branch isn't set)
[16:19] <SpamapS> m_3: you can't "just change ownership" of a branch.
[16:20] <m_3> SpamapS: ok... well that answers that then :)
[16:20] <SpamapS> m_3: so are you saying, just have promulgate look at metadata.yaml, and if no explicit branch destination is set, have it just try to push to lp:~charmers/charms/<current-series>/<name>/trunk ?
[16:21] <SpamapS> m_3: I'd be fine with that
[16:21] <SpamapS> m_3: bzr will protect us from doing anything super stupid in that case
[16:21] <m_3> yeah, essentially
[16:21] <SpamapS> m_3: should be *loud* about what we're doing
[16:21] <m_3> it seems like lots of extra work to push to ~charmers every time
[16:21] <SpamapS> promulgate is currently way too quiet
[16:21] <SpamapS> m_3: automation for the win!
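[Editor's note: the manual workflow promulgate would be automating is roughly the following sketch; the charm name `foo` and series `precise` are made-up placeholders, and the exact promulgate invocation may differ by charm-tools version:]

```shell
# Push the charm branch into the ~charmers namespace first...
bzr push lp:~charmers/charms/precise/foo/trunk
# ...then, from the branch directory, promulgate it so the
# store/browser pick it up as the official branch.
charm promulgate
```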
[16:22] <m_3> SpamapS: cool one thing at a time... I'll mp the restriction to only promulgate ~charmers branches unless explicitly excepted first
[16:22] <SpamapS> m_3: werd
[16:22] <m_3> SpamapS: thanks man
[16:35] <hazmat> m_3, what sort of charms are you pushing that shouldn't be viewable in the browser?
[16:50] <m_3> hazmat: promultest for instance
[16:52] <m_3> hazmat: and looking at creating launchpadlib-based tests that might temporarily stick stuff in locations like .../charms/<series>/<name>/trunk...
[16:54] <m_3> hazmat: it'd probably be best if they don't get cached in the charm browser / store
[16:54] <hazmat> m_3, funky..
[16:54] <hazmat> m_3, if they're publishing junk, why are they publishing it ;-)
[16:54] <hazmat> m_3, seems like it should be kept local then
[16:54] <m_3> hazmat: testing the publishing mechanism itself
[16:54] <hazmat> m_3, i'm not opposed to the idea incidentally, just trying to understand if it's necessary.
[16:54] <m_3> i.e., tests for 'charm promulgate'
[16:55] <hazmat> m_3, so how about a .no_store file in the root of the charm
[16:55] <hazmat> niemeyer, ^
[16:56] <hazmat> m_3, or you do want it in the store, but not the browser?
[16:56] <m_3> no biggie... sort of a one-off thing... note those almost always eventually turn into bigger things :)
[16:56] <m_3> hazmat: in this case, I want it in neither the store nor browser cache
[16:57] <SpamapS> m_3: just let it be in the store
[16:57] <SpamapS> m_3: "This charm is not useful. It is only for testing the charm store itself."
[16:58] <m_3> SpamapS: ok
[16:58] <SpamapS> Don't make it special at all
[16:58] <SpamapS> at < 100 charms it might be found.. but when we get to the many hundreds and then thousands of charms.. nobody will ever see it
[16:58] <m_3> hazmat: I'll kill that bug
[17:00] <hazmat> except when it pops up in search..
[17:00] <hazmat> m_3, no.. i think it's a good idea still to have that capability..
[17:01] <hazmat> m_3, pls leave the bug, i'll hit up on the next charm browser round
[17:01] <m_3> hazmat: ok... cool, thanks
[17:03]  * SpamapS wonders if it's worth our time is all
[18:02] <m_3> ok, enough charm-tools for now... back to charmtester
[18:25] <FunnyLookinHat> Here's the question of the day - can I use a charm to image a machine ( i.e. real hardware ) ?  I can't think of a way other than just re-purposing the charm's config code.
[18:26] <FunnyLookinHat> I think that would essentially be rewriting juju more than rewriting charms - yes?
[18:28] <marcoceppi_> FunnyLookinHat: Sounds like you're looking for MaaS?
[18:28] <FunnyLookinHat> yes ?
[18:28] <FunnyLookinHat> The real question is - I've "heard" a lot about maas - but I haven't seen a way to implement it yet - am I missing something?
[18:29]  * FunnyLookinHat googles.
[18:29] <marcoceppi_> FunnyLookinHat: So MaaS is a way to coordinate and manage bare metal, it's also a provider for Juju. So Juju talks to MaaS to deploy charms to bare metal (among other things) https://wiki.ubuntu.com/ServerTeam/MAAS
[18:30] <FunnyLookinHat> Right right - let's say I'm creating a zimbra charm to help install/setup zimbra mail servers that will be used for totally different clients though
[18:30] <FunnyLookinHat> and they will just plug the server in at their place after I've config'd it
[18:31] <FunnyLookinHat> Does this still provide a valid use-case?  or would I be better off taking the charm for zimbra and just hacking it into a setup script that I run after imaging the server.
[18:32] <marcoceppi_> hum, I mean you _can_ use Juju, but it sounds like you'd be better just creating a setup script
[18:33] <FunnyLookinHat> marcoceppi_, yeah agreed.
[18:33] <FunnyLookinHat> hey congrats on the ultrabook from UDS btw
[18:33] <FunnyLookinHat> Did you end up submitting the git charm ?
[18:36] <marcoceppi_> FunnyLookinHat: I did gitolite and gluster. Gitlab proved to be too difficult in the amount of time I had. So polishing those up now for the store
[18:37] <FunnyLookinHat> Ah ok - cool  :D
[18:37] <marcoceppi_> Now that we deploy gitlab in production it's easier to finish the charm
[18:48] <SpamapS> FunnyLookinHat: btw you could definitely use MaaS+juju to create those machines..
[18:49] <SpamapS> FunnyLookinHat: you'd just want to remove juju before shipping them out.
[18:49] <FunnyLookinHat> SpamapS, you think so?  The machines won't be connected to the MaaS/Juju server after the initial image.
[18:49] <FunnyLookinHat> Ah.
[18:49] <SpamapS> FunnyLookinHat: probably enough to just 'juju destroy-service customerfoo-zimbra' before shutting it down
[18:50] <SpamapS> FunnyLookinHat: it's not what juju is specifically designed for, but it should work
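[Editor's note: a rough sketch of the flow SpamapS describes; the environment and service names are hypothetical, and the final purge step is a guess at what "remove juju before shipping" would involve:]

```shell
# Provision the box through the maas environment...
juju deploy -e maas zimbra customerfoo-zimbra
# ...configure and verify the mail server, then stop juju managing it:
juju destroy-service -e maas customerfoo-zimbra
# Finally, on the machine itself, purge the agent bits before shipping:
#   sudo apt-get purge juju
```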
[18:50] <SpamapS> FunnyLookinHat: (the real question is why are you selling them a mail server instead of a whole cloud. ;)
[18:50] <SpamapS> because now that they have mail, they'll want a status.net microblog and a wiki to distill the billions of words of email that nobody has time to read ;)
[18:52] <FunnyLookinHat> hahaha
[18:52] <FunnyLookinHat> I'm just trying to speed up my own process.
[18:52] <FunnyLookinHat> Seems silly to essentially "copy" a charm and not use juju
[19:12] <SpamapS> m_3: https://code.launchpad.net/~mark-mims/charm-tools/fix-list-commands/+merge/109697 .. dude, just commit that
[19:16] <m_3> SpamapS: ok
[19:19] <m_3> done
[19:26] <FunnyLookinHat> quick bash help anyone?  I need to make a be the substr of b - from length of TEST + 2 - instead of just test... but +2 evals as a string obviously: a=${b:${#TEST}};
[19:26] <FunnyLookinHat> i.e. a=${b:${#TEST}+2};
[19:29] <FunnyLookinHat> ah - never mind: a=${b:$((${#TEST}+2))};
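[Editor's note: the fix works because wrapping the offset in `$(( ))` forces arithmetic evaluation of `${#TEST}+2` before the substring expansion. A minimal sketch with made-up example values:]

```shell
#!/bin/bash
# Take the substring of b starting at (length of TEST) + 2.
TEST="prefix"            # example values, not from the channel log
b="prefix??payload"
a=${b:$((${#TEST}+2))}   # offset = 6 + 2 = 8
echo "$a"                # prints "payload"
```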
[21:38] <negronjl> m_3: ping
[21:43] <hazmat> FunnyLookinHat, re zimbra to me it really depends on whether you plan on taking it multi-node
[21:44] <hazmat> SpamapS,  a JBOM provider :-)
[21:54] <m_3> negronjl: yo
[21:54] <negronjl> m_3: I just followed the instructions ( README ) for MongoDB ( precise ) and, provided that I follow them, they work on the replica-set
[21:55] <negronjl> m_3:  Would you give me more details on how to reproduce ?
[21:55] <m_3> negronjl: oh cool... yeah, I was just triaging a few
[21:55] <m_3> didn't look too carefully
[21:56] <m_3> we have a lot of bugs that aren't getting caught in the review process
[21:56] <negronjl> m_3:  no worries ...  marking bug as Invalid
[21:56] <m_3> I hate to add them to the review queue though... maybe just 'triage general charm bugs' as a reviewer task
[21:57] <m_3> dude... review-queue rocks btw... how did we ever get along without it?
[21:57] <negronjl> m_3:  lol ... it does if I say so myself :)
[22:00] <FunnyLookinHat> hazmat, single node
[22:06]  * SpamapS re-uploads Juju to Debian after it was rejected for debian/copyright niggles :-P
[23:05] <roy-feldman> Does anybody have a few minutes about Juju with Maas?
[23:06] <roy-feldman> I have read all of the docs I can find online and I still have some issues. BTW, I am using KVM for most of my MaaS nodes I want to use with Juju.
[23:24] <marcoceppi_> roy-feldman: I can try to help
[23:27] <roy-feldman> Thanks
[23:27] <roy-feldman> For starters, have you used KVM with MaaS?
[23:28] <marcoceppi_> I have briefly, I ended up having quite a few problems with it though :)
[23:29] <roy-feldman> Is there a better hypervisor in your opinion for Maas + Juju at this point?
[23:29] <roy-feldman> I don't have enough physical nodes to do anything very interesting with MaaS
[23:30] <bkerensa> SpamapS: http://www.omgubuntu.co.uk/2012/06/ubuntu-12-10-development-update-1 <-- your interview
[23:32] <marcoceppi_> roy-feldman: I've tried VirtualBox, which wasn't any more fun to run. So far the only hypervisor I've had any real-life usage out of is Xen
[23:32] <roy-feldman> But have you used it successfully with MaaS?
[23:33] <roy-feldman> Searching the forums, the one person on the Juju team that seems to use KVM is Jorge.
[23:34] <roy-feldman> What problems did you run into with KVM?
[23:34] <marcoceppi_> roy-feldman: I couldn't get them to properly PXE boot, and when I finally got around that they wouldn't register with the maas pxe-boot server
[23:35] <marcoceppi_> so there were a lot of networking foul-ups
[23:35] <roy-feldman> I have had similar problems
[23:35] <SpamapS> bkerensa: nice! thanks!
[23:35] <roy-feldman> Following the instructions, I was able to PXE boot a KVM
[23:35] <roy-feldman> Configured for Wake up lan
[23:36] <roy-feldman> It would find the MaaS server and get the "initial" image and shutdown.
[23:36] <roy-feldman> The MaaS interface would then show that it was ready.
[23:37] <roy-feldman> However, when I would try to perform a juju bootstrap, I would get the following error
[23:37] <marcoceppi_> Is your MaaS server running in a KVM or is it on another box (or the host machine)
[23:37] <roy-feldman> ERROR Invalid host for SSH forwarding: ssh: Could not resolve hostname node-525400edd759.local:
[23:37] <marcoceppi_> roy-feldman: that's a DNS issue
[23:38] <roy-feldman> I have installed maas-dhcp and set it correctly, as far as I can tell.
[23:38] <marcoceppi_> it should be pretty easy to overcome, if you've installed maas-dhcp on the maas machine, then set /etc/resolve on your juju machine to point to the maas server
[23:38] <roy-feldman> I have gone over the Maas setup instruction several times
[23:39] <marcoceppi_> juju machine, being the machine you're running juju from
[23:39] <marcoceppi_> MaaS is kind of designed to take over a network, so it's supposed to act as your internal DNS server
[23:39] <roy-feldman> what would the directive look like in /etc/resolve to do that
[23:40] <marcoceppi_> roy-feldman:  /etc/resolve.conf *
[23:40] <roy-feldman> BTW, MaaS is not my gateway, instead I pointed it at my router in the maas-dhcp setup
[23:40] <roy-feldman> Would that be a problem?
[23:40] <marcoceppi_> It's nameserver IP_ADDRESS - I would recommend commenting out (#) your current entries and placing the values to the MaaS (you may need to restart networking)
[23:40] <marcoceppi_> not sure, that's better suited for the #maas room
[23:41] <marcoceppi_> It *shouldn't* be a problem
[23:42] <roy-feldman> Ok, I will go to the maas channel, but are you saying the gateway setting is not a problem?
[23:42] <marcoceppi_> AFAIK, it's not
[23:43] <roy-feldman> And maas will refer to my gateway for external address resolution after /etc/resolve is configured correctly?
[23:44] <marcoceppi_> roy-feldman: it should
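[Editor's note: putting marcoceppi_'s DNS advice together, the edit on the machine you run juju from would look something like this — the MaaS server IP below is a placeholder:]

```
# /etc/resolv.conf on the juju client machine
# nameserver 10.0.0.1          <- original entry, commented out
nameserver 192.168.1.10        # MaaS server (running maas-dhcp/DNS)
```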
[23:44] <roy-feldman> One last question
[23:44] <marcoceppi_> shoot
[23:44] <roy-feldman> Once I have Maas properly configured
[23:45] <roy-feldman> I can deploy charms to maas registered machines, whether they are currently running or not?
[23:45] <roy-feldman> Can I ...
[23:45] <marcoceppi_> roy-feldman: correct, MaaS will ping it with a WOL call to turn the machine on, then provision it
[23:46] <roy-feldman> great
[23:46] <roy-feldman> Is the maas channel open to outsiders?
[23:46] <roy-feldman> Some ubuntu channels are members only
[23:47] <marcoceppi_> roy-feldman: it's open
[23:47] <roy-feldman> great
[23:47] <marcoceppi_> If after you get that all sorted, juju doesn't work, feel free to pop back in here
[23:47] <roy-feldman> Ok... one more clarification of your answer, please
[23:48] <roy-feldman> When you said comment out my current entries, did you mean my entries in /etc/reslove?
[23:49] <roy-feldman> resolve
[23:49] <marcoceppi_> yeah, the ones in /etc/resolv.conf
[23:50] <roy-feldman> What did you mean by " /etc/resolve.conf *"
[23:50] <roy-feldman> ?
[23:51] <marcoceppi_> the file is /etc/resolv.conf - mis-spelled it several times :)
[23:51] <roy-feldman> Ok, thanks a lot!
[23:51] <roy-feldman> Wish me luck
[23:52] <roy-feldman> When I finally succeed I will write up a little cookbook for Maas + KVM, I think it is sorely needed.
[23:52] <roy-feldman> Looking at the Maas forums, I am not the only person having problems with Maas
[23:53] <roy-feldman> I do have a quick Juju specific question
[23:53] <marcoceppi_> sure, go for it
[23:54] <roy-feldman> Is there a juju ppa repo I should be using for juju on 12.04, or am I better off sticking with normal updates?
[23:54] <SpamapS> roy-feldman: the PPA is extremely stable (for now) but may get volatile later.
[23:54] <roy-feldman> Assuming that all I am doing is charm development?
[23:55] <SpamapS> roy-feldman: there are a few updates pending in precise-proposed ..
[23:55] <SpamapS> which reminds me I need to get those back on track. :-P
[23:55] <SpamapS> we really need a 'juju-origin: ppa://....' so we can have a more stable PPA than the one that builds from trunk. :-P
[23:56] <roy-feldman> In the meantime, are there any major advantages for a charm developer to be using the current ppa repo?
[23:57] <roy-feldman> Also, would it be helpful to the project if I used the ppa repo?