[01:11] <hazmat> SpamapS, i'm tired of watching the pile too
[01:12] <hazmat> imbrandon, zk mem isn't a practical limitation
[01:12]  * hazmat digs salt
[01:14] <hazmat> imbrandon, can you run ansible on a single node?
[01:15] <hazmat> ie something for a hook to use?
[01:27] <imbrandon> hazmat: erm, if you kinda bend it, very likely, i'll try it out here in just a few and see if i run into anything
[02:22] <hazmat> SpamapS, i take it back, hp does support internal group access rules, just wasn't enabled in the provider, doing some last testing work, but it appears we have a flawless victory for hpcloud
[02:22] <hazmat> rackspace is going to require a bit more refactoring, worth putting off till post trunk merge
[02:24] <hazmat> hmm.. maybe not re rackz.. tbd
[02:59] <hazmat> SpamapS, imbrandon if you want to try it out.. its at lp:~hazmat/juju/openstack_provider
[02:59] <hazmat> should be an OOTB experience
[03:17] <imbrandon> sweet
[03:18] <imbrandon> kk yea ill fire er up in half a sec, was just finishing up some stuff with sis
[03:18] <imbrandon> been a loooooooong day
[03:18] <imbrandon> :)
[03:19] <imbrandon> hazmat: do i need to modify my env.y for any of the changes ?
[03:19] <imbrandon> ( that you can think of )
[03:20] <hazmat> imbrandon, nope
[03:20] <imbrandon> sweet, snagging now
[03:26] <imbrandon> heya hazmat / SpamapS : while i'm thinking of it can someone please bump the ver in setup.py to the proper version too with this release ? heh
[03:27] <hazmat> it would be nice to go 0.6
[03:28] <imbrandon> :)
[03:29] <imbrandon> hazmat: i did this a few days ago
[03:29] <imbrandon> https://code.launchpad.net/~imbrandon/juju/fix-setuppy-version/+merge/114207
[03:30] <hazmat> imbrandon, i saw it
[03:30] <imbrandon> if you wanna use it, if not i'm fine as long as it gets filed in, i was thinking that in the debian packaging then the __version__ could be changed by the debian/rules on build
[03:30] <hazmat> imbrandon, i thought i could finish up the upgrade stuff, which gets the python packaging up to snuff for reals..
[03:30] <hazmat> and uses an embedded version.txt file
[03:31] <imbrandon> thats cool too either way, i just did that cuz i ran across it in PEP 8 where it said to use module.__version__
[03:32] <imbrandon> but honestly i dont care either way, the txt would work for tools like you said and the __version__ could even just feed off that and in fact the debian/rules could feed the version.txt on build too
[03:32] <imbrandon> depending on how elaborate you wanted to get :)
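The version.txt scheme hazmat describes (with imbrandon's suggestion of `__version__` feeding off it) could be wired up roughly like this; the helper name and file layout are assumptions for illustration, not juju's actual code:

```python
# Hypothetical sketch of an embedded version.txt feeding __version__.
# Assumes version.txt sits next to the package's __init__.py; juju's
# real layout may differ.
import os

def read_version(package_dir):
    """Return the version string stored in the package's version.txt."""
    path = os.path.join(package_dir, "version.txt")
    with open(path) as f:
        return f.read().strip()

# In the package's __init__.py one could then write:
# __version__ = read_version(os.path.dirname(__file__))
```

This also matches imbrandon's note that debian/rules could simply rewrite version.txt at build time without touching any Python source.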
[03:32] <hazmat> yeah.. that bit is already solid for lp:~hazmat/juju/core-upgrade
[03:33] <hazmat> but unless its extracted its perhaps better to go with something simple, as
[03:33] <imbrandon> rockin, i'll un req that MP then, really was just mostly poking learning etc anyhow
[03:33] <hazmat> i'm spread thin
[03:34] <imbrandon> i hear ya there :) heh if there are bite sized stuff ever i'm happy to help when/where i can :)
[03:35] <imbrandon> looks like its all
[03:35] <imbrandon> working great too
[03:35] <imbrandon> gonna full teardown and redeploy ( just got junk there now anyhow, the mirrors are manual for the moment )
[03:36] <imbrandon> but i want to get those charmed too sometime soon-ish
[03:36] <imbrandon> then tossing up a mirror for any env , even vpc ones would work for all providers etc
[03:36] <imbrandon> in one command :)
[03:36] <imbrandon> ( /me still don't like that it takes 70 minutes to update the HPCloud mirrors per run and only 3 to 5 on AWS or @home )
[03:41] <imbrandon> hazmat: is that https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/ 35357 number tied to my acct ? ( was gonna prep some doc additions for HPCloud in a bit )
[03:42] <imbrandon> or is that just the port for everyone
[04:02] <imbrandon> hazmat: sweet, so when i deployed a new env and then destroyed it , it actually cleaned up all the junk left from the broken juju too
[04:03] <imbrandon> nice :)
[04:12] <hazmat> imbrandon, that's the port for everyone
[04:13] <imbrandon> sweet and i found my keyboard issue, was dead easy
[04:13] <imbrandon> there is a eng(US) and a eng(MACINTOSH)
[04:13] <imbrandon> just had to pick the right darn one :)
[04:14] <imbrandon> no more pasting wrong things AND i dont lose ctl+c to stop a process like you do swapping ctl+cmd
[04:14] <imbrandon> :)
[04:15] <imbrandon> well you dont lose it but then its cmd+ctl, i think this is the first time in gnome/unity ive gotten it to fully work ( OSX, Windows and KDE all default to the mac way)
[04:25] <hazmat> imbrandon, that sounds pretty nice re mirror charm
[04:25]  * hazmat destabilises the branch to ensure proper essex support
[04:25] <hazmat> momentarily
[05:14] <hazmat> cool all good now, should work with essex  and diablo
[05:15] <imbrandon> nice
[10:49] <hazmat> mgz, greetings
[10:52] <mgz> hazmat: hey
[10:53] <mgz> so, merging the current state sounds good, should I file bugs for the remaining bits and pieces?
[10:55] <hazmat> mgz sounds good, actually its probably better to do that first
[10:57] <hazmat> mgz nothing that's a show blocker that i saw?
[10:58] <mgz> nope, a couple of things we really do want before release though
[10:58] <hazmat> mgz most  of the todos there are pretty minor
[10:58] <mgz> cert checking and some concurrency management
[10:59] <mgz> the rest is all just incremental improvements
[11:00] <hazmat> mgz the cert checking it would be nice to have on by default
[11:00] <mgz> right, it's not hard either as the tough bit is available in txaws
[11:00] <hazmat> i'm not sure what you mean by concurrency management
[11:01] <mgz> atm there's nothing stopping vast numbers of parallel http api requests
[11:01] <hazmat> if there's multiple provisioners, there's external management of the concurrency around any given machine.
[11:01] <mgz> so, try to shutdown 200 machines, twisted will attempt to make 200 tcp connections
[11:02] <hazmat> if there's multiple provisioners, there's external management of the concurrency around any given machine.
[11:04] <hazmat> mgz, hmm.. true, although that's the only O(n) ops semantic op, that's actually rather lame in general (lack of multi node deletion in ostack), that's a juju client op, batch and iter  would be nice there.
[11:05] <hazmat> but not a blocker
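mgz's concern (destroy 200 machines and twisted opens 200 TCP connections at once) is typically handled by bounding the number of in-flight requests with a semaphore. A minimal sketch, shown with asyncio for brevity; juju's twisted code of the era would use `DeferredSemaphore` for the same effect, and the `limit` value here is an arbitrary assumption:

```python
# Sketch: cap concurrent API calls with a semaphore so shutting down
# N machines never opens more than `limit` connections at a time.
import asyncio

async def shutdown_all(machine_ids, shutdown_one, limit=10):
    """Shut down many machines with at most `limit` calls in flight."""
    sem = asyncio.Semaphore(limit)

    async def bounded(mid):
        async with sem:             # blocks while `limit` calls are active
            return await shutdown_one(mid)

    return await asyncio.gather(*(bounded(m) for m in machine_ids))
```

This keeps the batch O(n) overall, per hazmat's point, but smooths the burst that worried mgz.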
[11:07] <mgz> anyway, will file bugs for those bits and look over your branch again
[11:07] <hazmat> mgz, cool
[11:08] <hazmat> mgz, for that shutdown case, it would be nice to kill the zk nodes last in a separate op, so that if the op fails, the env is still interactable.
[11:09] <hazmat> bbiab, off to drop off the kids
[11:11] <mgz> okay, just going through the diff
[11:11] <mgz> +            # Add internal group access
[11:11] <mgz> +            yield self.nova.add_security_group_rule(
[11:11] <mgz> +                parent_group_id=sg['id'], group_id=sg['id'],
[11:12] <mgz> ...this was in the EC2 provider, and I didn't understand the purpose there, and I still don't
[11:12] <hazmat> mgz, without it in a restrictive env, units a and b can't talk to each other
[11:12] <mgz> this used to flat out not work in openstack, as a bug fix they permitted it, but it... still does nothing no?
[11:13] <mgz> it's permitting a rule to access itself, which is a tautology
[11:13] <hazmat> mgz, it does what's intended
[11:13] <mgz> it is itself
[11:13] <hazmat> in hpcloud
[11:13] <hazmat> mgz, its permitting nodes in the group to access each other
[11:13] <mgz> what's the actual effect?
[11:15] <hazmat> its permitting a group.. not a rule to access itself. without it mysql and wordpress can't talk to each other over the mysql port

[11:16] <_mup_> Bug #965674: openstack api create security group rule doesn't allow self-referential group <OpenStack Compute (nova):Fix Released by vishvananda> < https://launchpad.net/bugs/965674 >
[11:17] <mgz> as I understand it, security group rules come in two forms:
[11:17] <mgz> 1) allows access to ports in a range for any instance with the group
[11:19] <mgz> hmm...
[11:20] <mgz> so 2) is allows access to... everything? for any instance in given group to the parent group?
[11:21] <mgz> so the intention of the rule is to allow any machine managed by juju to access any other machine managed by juju?
[11:21] <hazmat> mgz within the same env yes
[11:22] <hazmat> expose/open/close-port govern external access
[11:22] <mgz> without the mysql charm needing to say "allow access to this port on the local network", only charms that need a publicly accessible port
[11:22] <mgz> okay, I get it.
[11:22] <mgz> thanks.
[11:27] <mgz> so, it would make some sense to add that to the per-machine group rather than the per-environment one I think
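The rule being discussed can be sketched as the payload such a call would send: `parent_group_id == group_id` is exactly the self-referential case hazmat describes, where members of one group may reach each other. Keyword names mirror the quoted `add_security_group_rule` call; the default protocol and port range below are illustrative assumptions, not juju's actual values:

```python
# Illustrative body for a self-referential security group rule:
# traffic is allowed when both source and destination instances are
# members of the same group (intra-environment access).
def self_referential_rule(group_id, ip_protocol="tcp",
                          from_port=1, to_port=65535):
    """Build a rule allowing a security group to reach itself."""
    return {"security_group_rule": {
        "parent_group_id": group_id,   # group the rule is attached to
        "group_id": group_id,          # traffic source: the same group
        "ip_protocol": ip_protocol,
        "from_port": from_port,
        "to_port": to_port,
    }}
```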
[12:27] <_mup_> Bug #1026102 was filed: Openstack provider does not validate https certs <juju:New> < https://launchpad.net/bugs/1026102 >
[12:32] <_mup_> Bug #1026103 was filed: Openstack provider does not use constraints <juju:New> < https://launchpad.net/bugs/1026103 >
[12:37] <_mup_> Bug #1026107 was filed: Openstack provider does not limit http api connections <juju:New> < https://launchpad.net/bugs/1026107 >
[12:40] <_mup_> Bug #1026108 was filed: Openstack provider client does not raise exceptions with full details <juju:New> < https://launchpad.net/bugs/1026108 >
[12:42] <mgz> okay, after lunch I merge your branch hazmat, fix the test failures and push
[13:26] <hazmat> mgz, sounds good
[13:27] <hazmat> mgz re per machine group instead of environment group.. i don't see why
[14:22] <hazmat> mgz, can you ping me when its ready, i'd like to get this merged today
[14:22] <mgz> sure.
[14:22] <hazmat> mgz, there are a couple of oscon demos tomorrow which people have expressed interest in using juju on hp for.
[14:22] <hazmat> fwiw
[14:23] <mgz> there does seem to be a fair bit of interest in exactly that
[14:28] <hazmat> mgz, did you ever get an account setup with hpcloud?
[14:30] <mgz> I didn't hear back about whether we got the billing thing sorted
[14:30] <mgz> sent a query after you mentioned it
[14:30] <mgz> a rackspace one would be good too.
[14:31] <mgz> did you get trystack working? zul posted a thing about an arm zone coming on line there.
[14:50] <SpamapS> hazmat: \o/
[14:51] <hazmat> mgz, no re trystack.. i limited myself to 3 openstack clouds..
[14:51] <hazmat> so many snowflakes
[14:52] <hazmat> mgz, re hpcloud afaik its sorted, i'll double check
[14:52] <SpamapS> trystack is at least pure upstream
[14:52] <hazmat> SpamapS, so is rackspace
[14:52] <hazmat> well mostly
[14:53] <hazmat> its actually tracking trunk, about 3w behind, is what i recall
[14:53] <marcoceppi> imbrandon: is the environments.yaml file in ~/.juju/ or somewhere else?
[14:53] <hazmat> mgz, your hp account should be good to go
[14:53] <SpamapS> hazmat: so its only special because they hide some services?
[14:54] <hazmat> SpamapS, its not special in that regard, that is the state of art for folsom as well
[14:54] <hazmat> SpamapS, network and volumes are in separate services
[14:55] <hazmat> SpamapS, technically the existing sec group and floating ip stuff isn't in core either per se, its implemented via api extensions.
[14:56] <hazmat> mgz arm and juju still require something intelish for the zk nodes, afaicr java on arm is still a boondoggle
[14:59] <m_31> testing from subway
[14:59] <Nippo> I setup subway irc. but it is not working
[15:00] <Nippo> if anybody knows please help me .
[15:00] <Nippo> thanks advance
[15:00] <marcoceppi> m_31: eat fresh
[15:01] <SpamapS> m_31: howdy subway!
[15:02] <SpamapS> Nippo: m_31 can help :)
[15:02] <Nippo> Thanks SpamapS, :-)
[15:02] <Nippo> I will ask m_31
[15:05] <m_3> Nippo: hi, there were two complications I came across... first is to see what port the app is coming up on
[15:05] <m_3> Nippo: ssh to the subway/0 unit and 'netstat -lnp'
[15:05] <m_3> Nippo: then make sure you're doing a 'juju expose'
[15:05] <m_3> (if you're not using the local lxc provider)
[15:06] <SpamapS> m_3: subway doesn't always use 80?
[15:06] <Nippo> m_3 : Please give me easy steps or URL to install subway irc :-)
[15:07] <Nippo> spamaps : I am using port 3000
[15:09] <SpamapS> m_3: hey how did the talk go?
[15:16] <m_3> SpamapS: good... Jorge's part was _awesome_!  I'd really like to make improvements to my demo part
[15:16] <m_3> tweaking... ;)
[15:16] <m_3> preparing for the next one tomorrow
[15:17] <m_3> Nippo: see if juju status shows the ports exposed
[15:18] <m_3> Nippo: then 'juju ssh subway/0' (assuming the 0... look at the status output to find out for sure)... once you're on the machine, then `sudo netstat -lnp | less` and see where "node" is listening
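m_3's "see where node is listening" step amounts to scanning `netstat -lnp` output for the program's local address. A small sketch of that parsing, assuming the standard `netstat -lnp` column layout (Local Address in the fourth column, PID/Program name in the last):

```python
# Find the ports a named program is listening on, given the output of
# `sudo netstat -lnp`. Assumes standard netstat column layout.
def listening_ports(netstat_output, program):
    """Return listening ports for `program`, e.g. "node"."""
    ports = []
    for line in netstat_output.splitlines():
        # -p output ends each socket line with "PID/program", e.g. "1234/node"
        if line.startswith(("tcp", "udp")) and line.rstrip().endswith("/" + program):
            local = line.split()[3]                 # e.g. "0.0.0.0:3000"
            ports.append(int(local.rsplit(":", 1)[1]))
    return ports
```

For Nippo's case this would report 3000, the port subway serves on, which then needs a matching `juju expose`.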
[15:18] <mgz> hazmat: have pushed what should be a reasonable state
[15:19] <Nippo> m_3 : Yes, it's listening. i launched subway, but multiple users are not working
[15:20] <Nippo> i follow this link  https://github.com/thedjpetersen/subway
[15:20] <hazmat> mgz, nice, i'll take a look
[15:21] <mgz> basically just alters the tests for the behaviour changes to sec groups and shutdown and a couple of trivial things
[15:21] <mgz> other things (removal of some dead code and so on) can probably wait
[19:26] <hazmat> SpamapS, any dead chickens you want me to sacrifice b4 landing ostack?
[19:35] <hazmat> mgz, one minor diff fwiw, i'm just applying during merge.. http://paste.ubuntu.com/1098899/
[19:39] <mgz> hazmat: right, you changed that back to the old form so that test update wasn't needed any more
[19:39] <mgz> I swear that branch has broken and fixed that particular test about four times
[19:40] <hazmat> mgz, indeed it has ;-)
[19:40] <hazmat> mgz, looks good though
[19:51] <iamfuzz> is there any way to tell juju to import another ssh key?
[19:51] <iamfuzz> I'm patching things to try to get it working with eucalyptus but when i bootstrap, the instance comes up but I'm unable to ssh into it
[20:10] <SpamapS> iamfuzz: the ssh key is installed using cloud-init
[20:10] <SpamapS> iamfuzz: so if its failing to be installed, thats because cloud-init is failing
[20:10] <SpamapS> iamfuzz: check console output maybe?
[20:10] <iamfuzz> SpamapS, ouch, that's no fun
[20:10] <iamfuzz> I'll give it a go but that rarely works
[20:11] <SpamapS> iamfuzz: cloud-init, or console output?
[20:11] <iamfuzz> SpamapS, console output
[20:11] <SpamapS> iamfuzz: thats disappointing. :-/
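For reference, the ssh key SpamapS mentions is delivered through cloud-init user-data; a minimal cloud-config fragment that installs keys for the default user looks roughly like this (the key string is a placeholder, not a real key):

```yaml
#cloud-config
# Sketch only: cloud-init installs these keys for the default user at
# first boot. Replace the placeholder with a real public key.
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2E... user@example
```

If cloud-init never runs (or networking never comes up, as iamfuzz sees next), the key is never installed, which is why the console output is the first place to look.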
[20:11] <iamfuzz> looks like networking isn't even coming up on the instance, can't even ping it.  Or does juju not enable ICMP by default?
[20:11] <SpamapS> iamfuzz: juju just boots an image
[20:12] <iamfuzz> (note that when i launch the same emi manually, everything is kosher)
[20:12] <SpamapS> I don't think it does any specific authorizing other than SSH
[20:24] <lifeless> you can check the access groups it creates
[20:24] <lifeless> euca-describe-access-groups or whatever
[20:28] <mgz> just euca-describe-groups and compare with names listed for instance under euca-describe-instances
[20:28] <mgz> ping and ssh should work for all juju created instances if they're operational
[20:29] <hazmat> mgz, tis done
[20:29] <mgz> hazmat: yeay!
[20:29] <hazmat> iamfuzz, it doesn't re icmp afaicr
[20:30] <hazmat> it sets up tcp / udp for internal group access, and ext access for ssh only by default
[20:30] <iamfuzz> hazmat, I tried again and am able to ssh in manually now but juju status and juju ssh hang
[20:30] <iamfuzz> investigating
[20:32] <mgz> hazmat: I swear icmp used to be enabled, was that removed from the ec2 provider?
[20:33] <mgz> ...maybe just misremember
[20:38] <iamfuzz> hazmat, http://pastebin.com/PakTGnwj
[20:39] <iamfuzz> hazmat, machine 0 should be the instance I'm on, no?  Any idea what could cause this error?
[20:40] <lifeless> mgz: I could swear it was enabled too
[20:44] <hazmat> iamfuzz, do you have console output for that instance?
[20:45] <iamfuzz> hazmat, figured it out - juju-admin call failed
[20:45] <hazmat> iamfuzz, it looks like the initialize somehow failed
[20:45] <iamfuzz> hazmat, ran it manually and now status works
[20:45] <hazmat> iamfuzz, cool
[20:45] <iamfuzz> baby steps!
[20:45] <hazmat> iamfuzz, cloud init logs in /var/log should have the original error from the admin initialize cmd
[20:51] <SpamapS> hah.. I love when I hit a public IP from a server I had yesterday, and its now something totally different
[20:51] <SpamapS> http://ec2-54-245-6-168.us-west-2.compute.amazonaws.com/
[20:52] <hazmat> hmm.. looks like the stack tests depend on a local ssh key
[20:54] <SpamapS> stack tess?
[20:54] <SpamapS> tests even
[20:56] <hazmat> SpamapS, i triggered the ppa build
[20:56] <hazmat> but it failed as the ostack provider tests need a key setup.. working on a fix
[20:59] <SpamapS> heh
[20:59] <SpamapS> the ec2 tests had that problem for a long time too
[21:17] <_mup_> juju/trunk r558 committed by kapil@canonical.com
[21:17] <_mup_> [trivial] fix openstack provider tests to not require an ssh key
[21:17] <hazmat> fixed and committed
[21:17] <hazmat> rebuilding ppa
[21:28] <SpamapS> ok who broke charm promulgate?
[21:28] <SpamapS> $ charm promulgate
[21:28] <SpamapS> W: metadata name (gunicorn) must match directory name (.) exactly for local deployment.
[21:35] <SpamapS> lifeless: earlier this week, did you ask who was on review duty?
[21:38] <lifeless> nope
[21:38] <lifeless> I asked how to move my charm forward :)
[21:40] <SpamapS> lifeless: I don't see your charm on the review queue.. Need to subscribe charmers to the bug and make sure it is New or Fix Committed for us to see it in there.
[21:40] <lifeless> it was
[21:40] <lifeless> will look in a sec
[21:43] <SpamapS> Ring the bell twice everybody.. python-django and gunicorn promulgated!
[21:43] <SpamapS> avoine: ^^ \o/
[21:43] <avoine> yay!
[21:44] <SpamapS> negronjl: now just Riak, then.. you.
[21:44] <negronjl> SpamapS: 'cause you still hate me :)
[21:46] <SpamapS> just you now, Riak is Incomplete
[21:46] <SpamapS> oh look at the time...
[21:46] <SpamapS> ;)
[21:49] <iamfuzz> SpamapS, when would be the last safe date to submit patches for juju to get it working against walrus again?
[21:59] <SpamapS> iamfuzz: there's no "freeze" scheduled at this point
[21:59] <iamfuzz> SpamapS, would beta freeze be a safe target?
[21:59] <SpamapS> iamfuzz: oh for quantal? I'd aim a bit sooner than that
[22:00] <iamfuzz> SpamapS, week before feature freeze? (16th of august)
[22:00] <SpamapS> iamfuzz: actually no thats fine, it won't be in main, so we can upload it right up to the end
[22:00] <iamfuzz> likely won't take that long, just handing off a bug here in-house and he wants a date
[22:00] <iamfuzz> good deal, I'll tell him feature freeze, with enough time for review, so August 16thish
[22:01] <SpamapS> I'd love to add Eucalyptus to our charm tester targets
[22:01] <iamfuzz> we would as well
[22:01] <iamfuzz> also asking in-house if we can setup a "public" cloud for you guys to run tests against
[22:01] <SpamapS> like, if you could give us a URL to hit and capacity to run 3 or 4 at once.. we'd put it up on our jenkins.
[22:02] <iamfuzz> shouldn't be a prob
[22:02] <iamfuzz> much appreciated
[22:02] <iamfuzz> we're just in the midst of a datacenter move so it may be a bit
[22:02] <SpamapS> right now I think we only run against ec2 itself and the local provider
[22:04] <SpamapS> iamfuzz: sure. Just ping me or m_3 when you want to chat about it.
[22:04] <iamfuzz> SpamapS, will do, thanks man
[22:07] <lifeless> SpamapS:  https://bugs.launchpad.net/charms/+bug/1017796 was the charm
[22:07] <_mup_> Bug #1017796: Charm needed: opentsdb <Juju Charms Collection:New> < https://launchpad.net/bugs/1017796 >
[22:08] <SpamapS> hazmat: any idea on the latest build fail for the PPA?
[22:24] <SpamapS> lifeless: i re-subscribed charmers
[22:24] <lifeless> SpamapS: ah
[22:25] <lifeless> also, relatedly, is there anything I can do to get my uses-ip patch rolling again? It seemed to just get rejected and sidelined into a bug somewhere...
[22:26] <SpamapS> lifeless: the fallback method i proposed seems preferred
[22:27] <SpamapS> lifeless: in case u missed it tho, openstack native support landed...
[22:27] <lifeless> in precise ?
[22:28] <SpamapS> hah no
[22:28] <lifeless> :)
[22:28] <SpamapS> its worth putting in backports tho...
[22:29] <hazmat> SpamapS, sigh no
[22:29] <SpamapS> problem being there is no way to tell spun up nodes to use backports
[22:29] <hazmat> SpamapS, they're all different
[22:29] <hazmat> SpamapS, perhaps a missing inlineCallbacks..
[22:29] <hazmat> i'm able to run trunk tests locally without issue
[22:30] <SpamapS> hazmat: it works in my local sbuild
[22:30] <SpamapS> i *hate* buildd specific problems :(
[22:31] <hazmat> SpamapS, even worse when every different rel build is reporting something different
[22:32] <SpamapS> hazmat: i am retrying just precise
[22:33] <hazmat> SpamapS, k
[22:33] <hazmat> SpamapS, one of the oneiric failures is odd
[22:33] <SpamapS> hazmat: have seen io probs lead to weird fails in the past
[22:34] <SpamapS> Failure: twisted.internet.defer.TimeoutError: <juju.control.tests.test_status.StatusTest testMethod=test_subordinate_status_output> (test_subordinate_status_output) still running at 10.0 secs
[22:41] <Daviey> hazmat: considering "juju-origin: ppa" is already supported, i'd imagine "juju-origin: backports" is a trivial change.. and a reasonable SRU.. <-- SpamapS, comment?
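Daviey's point refers to the existing `juju-origin` option in environments.yaml; a fragment might look like this ("ppa" is confirmed in the log above, "backports" is only his proposed addition, and the other keys are illustrative):

```yaml
# environments.yaml fragment (illustrative). "juju-origin: ppa" exists
# today; "backports" would be the new, currently unsupported value.
environments:
  sample:
    type: ec2
    juju-origin: ppa
```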
[22:43] <Daviey> (although, adding additional providers as SRU also seems reasonable IMO)
[23:02] <SpamapS> Daviey: yes its something we could do
[23:03] <SpamapS> Daviey: and the additional provider, I agree with that too
[23:17] <SpamapS> Daviey: but I think it would require TC approval, since it does not fit with SRU guidelines.
[23:24] <SpamapS> hazmat: so I think it was some kind of weirdness on the buildds
[23:25] <SpamapS> negronjl: so I've been thinking more about the haproxy service_name thing
[23:26] <SpamapS> mthaddon`: ^^
[23:26] <SpamapS> I'm still not sold on a generic thing like that.
[23:28] <SpamapS> negronjl: can you perhaps show me what exactly is intended?
[23:29] <SpamapS> Like, a fully filled out config and the web app that is being balanced?
[23:42] <Daviey> SpamapS: Well, do what you feel.. but the TC seemed to be pretty clear that the SRU team can handle one-off's quite nicely themselves
[23:54] <SpamapS> Daviey: one-off's of applying the actual SRU policy... this would not even come close.
[23:55] <SpamapS> Daviey: actually I take that back
[23:55] <SpamapS> Daviey: we can SRU stuff that enables access to remote services that did not exist when the release was made.
[23:55] <SpamapS> Like in days past empathy would add an IM network and that was ok to SRU
[23:57] <Daviey> SpamapS: Generally, mdz made it quite clear that the SRU team can handle this stuff and "only escalates to the tech board in exceptional circumstances".