[01:11] SpamapS, i'm tired of watching the pile too
[01:12] imbrandon, zk mem isn't a practical limitation
[01:12] * hazmat digs salt
[01:14] imbrandon, can you run ansible on a single node?
[01:15] ie something for a hook to use?
[01:27] hazmat: erm, if you kinda bent it very likely, i'll try it out here in just a few and see if i run into anything
[02:22] SpamapS, i take it back, hp does support internal group access rules, just wasn't enabled in the provider, doing some last testing work, but it appears we have a flawless victory for hpcloud
[02:22] rackspace is going to require a bit more refactoring, worth putting off till post trunk merge
[02:24] hmm.. maybe not re rackz.. tbd
[02:59] SpamapS, imbrandon if you want to try it out.. its at lp:~hazmat/juju/openstack_provider
[02:59] should be an OOTB experience
[03:17] sweet
[03:18] kk yea ill fire er up in half a sec, was just finishing up some stuff with sis
[03:18] been a loooooooong day
[03:18] :)
[03:19] hazmat: do i need to modify my env.y for any of the changes ?
[03:19] ( that you can think of )
[03:20] imbrandon, nope
[03:20] sweet, snagging now
[03:26] heya hazmat / SpamapS : while i'm thinking of it can someone please bump the ver in setup.py to the proper version too with this release ? heh
[03:27] it would be nice to go 0.6
[03:28] :)
[03:29] hazmat: i did this a few days ago
[03:29] https://code.launchpad.net/~imbrandon/juju/fix-setuppy-version/+merge/114207
[03:30] imbrandon, i saw it
[03:30] if you wanna use it, if not i'm fine as long as it gets filed in, i was thinking that in the debian packaging the __version__ could be changed by debian/rules on build
[03:30] imbrandon, i thought i could finish up the upgrade stuff, which gets the python packaging up to snuff for reals..
[03:30] and uses an embedded version.txt file
[03:31] thats cool too either way, i just did that cuz i ran across the bit in PEP 8 where it said to use module.__version__
[03:32] but honestly i dont care either way, the txt would work for tools like you said and the __version__ could even just feed off that, and in fact debian/rules could feed the version.txt on build too
[03:32] depending on how elaborate you wanted to get :)
[03:32] yeah.. that bit is already solid for lp:~hazmat/juju/core-upgrade
[03:33] but unless its extracted its perhaps better to go with something simple, as
[03:33] rockin, i'll un-req that MP then, really was just mostly poking / learning etc anyhow
[03:33] i'm spread thin
[03:34] i hear ya there :) heh, if there is ever bite-sized stuff i'm happy to help when/where i can :)
[03:35] looks like its all
[03:35] working great too
[03:35] gonna full teardown and redeploy ( just got junk there now anyhow , the mirrors are manual for the moment )
[03:36] but i want to get those charmed too sometime soon-ish
[03:36] then tossing up a mirror for any env , even vpc ones would work for all providers etc
[03:36] in one command :)
[03:37] ( /me still dont like that it takes 70 minutes to update the HPCloud mirrors per run and only 3 to 5 on AWS or @home )
[03:41] hazmat: is that https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/ 35357 number tied to my acct ?
( was gonna prep some doc additions for HPCloud in a bit )
[03:42] or is that just the port for everyone
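For reference on the setup.py / version.txt exchange above: the idea is that debian/rules (or whatever release tooling) writes a version.txt next to the package at build time, and the module's __version__ simply reads it, keeping the module.__version__ convention imbrandon mentions. A minimal sketch of that pattern, assuming a layout and helper that are illustrative only, not the actual core-upgrade branch:

```python
# Hypothetical juju/__init__.py sketch of the version.txt idea discussed above.
# debian/rules (or release tooling) writes version.txt at build time; the
# module reads it so tools can keep importing juju.__version__.
import os


def _read_version():
    path = os.path.join(os.path.dirname(__file__), "version.txt")
    try:
        with open(path) as f:
            return f.read().strip()
    except IOError:
        # Running from a raw checkout with no generated file yet.
        return "0.6.0.dev0"


__version__ = _read_version()
```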
[04:02] hazmat: sweet, so when i deployed a new env and then destroyed it , it actually cleaned up all the junk left from the broken juju too
[04:03] nice :)
[04:12] imbrandon, that's the port for everyone
[04:13] sweet, and i found my keyboard issue, was dead easy
[04:13] there is an eng(US) and an eng(MACINTOSH)
[04:13] just had to pick the right darn one :)
[04:14] no more pasting wrong things AND i dont lose ctl+c to stop a process like you do swapping ctl+cmd
[04:14] :)
[04:15] well you dont lose it but then its cmd+ctl, i think this is the first time in gnome/unity ive gotten it to fully work ( OS X , Windows and KDE all default to the mac way )
[04:25] imbrandon, that sounds pretty nice re mirror charm
[04:25] * hazmat destabilises the branch to ensure proper essex support
[04:25] momentarily
[05:14] cool, all good now, should work with essex and diablo
[05:15] nice
=== bgupta_ is now known as bgupta
=== gmb` is now known as gmb
=== zyga is now known as zyga-afk
[10:49] mgz, greetings
=== zyga-afk is now known as zyga
[10:52] hazmat: hey
[10:53] so, merging the current state sounds good, should I file bugs for the remaining bits and pieces?
[10:55] mgz sounds good, actually its probably better to do that first
[10:57] mgz nothing that's a show blocker that i saw?
[10:58] nope, a couple of things we really do want before release though
[10:58] mgz most of the todos there are pretty minor
[10:58] cert checking and some concurrency management
[10:59] the rest is all just incremental improvements
[11:00] mgz the cert checking it would be nice to have on by default
[11:00] right, it's not hard either as the tough bit is available in txaws
[11:00] i'm not sure what you mean by concurrency management
[11:01] atm there's nothing stopping vast numbers of parallel http api requests
[11:01] if there's multiple provisioners, there's external management of the concurrency around any given machine.
[11:01] so, try to shutdown 200 machines, twisted will attempt to make 200 tcp connections
[11:04] mgz, hmm.. true, although that's the only O(n) ops semantic op, that's actually rather lame in general (lack of multi node deletion in ostack), that's a juju client op, batch and iter would be nice there.
[11:05] but not a blocker
[11:07] anyway, will file bugs for those bits and look over your branch again
[11:07] mgz, cool
[11:08] mgz, for that shutdown case, it would be nice to kill the zk nodes last in a separate op, so that if the op fails, the env is still interactable.
[11:09] bbiab, off to drop off the kids
[11:11] okay, just going through the diff
[11:11] + # Add internal group access
[11:11] + yield self.nova.add_security_group_rule(
[11:11] + parent_group_id=sg['id'], group_id=sg['id'],
[11:12] ...this was in the EC2 provider, and I didn't understand the purpose there, and I still don't
[11:12] mgz, without it in a restrictive env, units a and b can't talk to each other
[11:12] this used to flat out not work in openstack, as a bug fix they permitted it, but it... still does nothing, no?
[11:13] it's permitting a rule to access itself, which is a tautology
[11:13] mgz, it does what's intended
[11:13] it is itself
[11:13] in hpcloud
[11:13] mgz, its permitting nodes in the group to access each other
[11:13] what's the actual effect?
[11:15] its permitting a group.. not a rule, to access itself.
without it mysql and wordpress can't talk to each other over the mysql port
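The snippet mgz quotes from the diff cuts off mid-call. Filled out as a standalone sketch of what's being described — one self-referential rule on the environment's security group so units in the group can reach each other, while expose/open-port keeps governing external access — it looks roughly like this. Everything beyond the two group ids (the protocol, port range, and helper name) is an assumption, not the branch's exact code:

```python
from twisted.internet.defer import inlineCallbacks


@inlineCallbacks
def allow_internal_group_access(nova, sg):
    """Sketch of the rule under discussion: the environment security group
    grants access to itself, so any unit in the group can reach any other
    (e.g. wordpress -> mysql).  `nova` and `sg` are as in the quoted diff;
    the protocol/port arguments here are assumptions."""
    for protocol in ("tcp", "udp"):
        yield nova.add_security_group_rule(
            parent_group_id=sg['id'],   # rule is attached to the env group...
            group_id=sg['id'],          # ...and grants access to that same group
            ip_protocol=protocol,
            from_port=1, to_port=65535)
```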
[11:16] <_mup_> Bug #965674: openstack api create security group rule doesn't allow self-referential group < https://launchpad.net/bugs/965674 >
[11:17] as I understand it, security group rules come in two forms:
[11:17] 1) allows access to ports in a range for any instance with the group
[11:19] hmm...
[11:20] so 2) is allows access to... everything? for any instance in given group to the parent group?
[11:21] so the intention of the rule is to allow any machine managed by juju to access any other machine managed by juju?
[11:21] mgz within the same env yes
[11:22] expose/open/close-port govern external access
[11:22] without the mysql charm needing to say "allow access to this port on the local network", only charms that need a publicly accessible port
[11:22] okay, I get it.
[11:22] thanks.
[11:27] so, it would make some sense to add that to the per-machine group rather than the per-environment one I think
[12:27] <_mup_> Bug #1026102 was filed: Openstack provider does not validate https certs < https://launchpad.net/bugs/1026102 >
[12:32] <_mup_> Bug #1026103 was filed: Openstack provider does not use constraints < https://launchpad.net/bugs/1026103 >
[12:37] <_mup_> Bug #1026107 was filed: Openstack provider does not limit http api connections < https://launchpad.net/bugs/1026107 >
[12:40] <_mup_> Bug #1026108 was filed: Openstack provider client does not raise exceptions with full details < https://launchpad.net/bugs/1026108 >
[12:42] okay, after lunch I'll merge your branch hazmat, fix the test failures and push
[13:26] mgz, sounds good
[13:27] mgz re per-machine group instead of environment group.. i don't see why
[14:22] mgz, can you ping me when its ready, i'd like to get this merged today
[14:22] sure.
[14:22] mgz, there are a couple of oscon demos tomorrow which people have expressed interest in using juju on hp for.
[14:22] fwiw
[14:23] there does seem to be a fair bit of interest in exactly that
[14:28] mgz, did you ever get an account setup with hpcloud?
[14:30] I didn't hear back about whether we got the billing thing sorted
[14:30] sent a query after you mentioned it
[14:30] a rackspace one would be good too.
[14:31] did you get trystack working? zul posted a thing about an arm zone coming on line there.
[14:50] hazmat: \o/
[14:51] mgz, no re trystack.. i limited myself to 3 openstack clouds..
[14:51] so many snowflakes
[14:52] mgz, re hpcloud afaik its sorted, i'll double check
[14:52] trystack is at least pure upstream
[14:52] SpamapS, so is rackspace
[14:52] well mostly
[14:53] its actually tracking trunk by about 3w is what i recall
[14:53] imbrandon: is the environments.yaml file in ~/.juju/ or somewhere else?
[14:53] mgz, your hp account should be good to go
[14:53] hazmat: so its only special because they hide some services?
[14:54] SpamapS, its not special in that regard, that is the state of the art for folsom as well
[14:54] SpamapS, network and volumes are in separate services
[14:55] SpamapS, technically the existing sec group and floating ip stuff isn't in core either per se, its implemented via api extensions.
[14:56] mgz arm and juju still requires something intel-ish for the zk nodes, afaicr java on arm is still a boondoggle
[14:59] testing from subway
[14:59] I setup subway irc. but it is not working
[15:00] if anybody knows please help me .
[15:00] thanks in advance
[15:00] m_31: eat fresh
[15:01] m_31: howdy subway!
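Circling back to bug #1026107 filed above (nothing limits parallel HTTP API requests, so shutting down 200 machines opens 200 connections at once): the usual Twisted remedy is to funnel the client calls through a DeferredSemaphore. A rough sketch under that assumption — the client method name and the limit of 10 are made up for illustration, not the provider's actual code:

```python
from twisted.internet.defer import DeferredSemaphore, gatherResults

# Cap the number of API calls in flight; 10 is an arbitrary example value.
_api_limit = DeferredSemaphore(10)


def shutdown_machines(client, instance_ids):
    """Terminate many instances without opening one connection per machine
    all at once.  `client.terminate_instance` is a hypothetical async
    provider call standing in for the real client method."""
    deferreds = [
        _api_limit.run(client.terminate_instance, instance_id)
        for instance_id in instance_ids]
    return gatherResults(deferreds)
```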
[15:02] Nippo: m_31 can help :)
[15:02] Thanks SpamapS, :-)
[15:02] I will ask m_31
[15:05] Nippo: hi, there were two complications I came across... first is to see what port the app is coming up on
[15:05] Nippo: ssh to the subway/0 unit and 'netstat -lnp'
[15:05] Nippo: then make sure you're doing a 'juju expose'
[15:05] (if you're not using the local lxc provider)
[15:06] m_3: subway doesn't always use 80?
[15:06] m_3 : Please give me easy steps or a URL to install subway irc :-)
[15:07] spamaps : I am using port 3000
[15:09] m_3: hey how did the talk go?
[15:16] SpamapS: good... Jorge's part was _awesome_! I'd really like to make improvements to my demo part
[15:16] tweaking... ;)
[15:16] preparing for the next one tomorrow
[15:17] Nippo: see if juju status shows the ports exposed
[15:18] Nippo: then 'juju ssh subway/0' (assuming the 0... look at the status output to find out for sure)... once you're on the machine, then `sudo netstat -lnp | less` and see where "node" is listening
[15:18] hazmat: have pushed what should be a reasonable state
[15:19] m_3 : Yes, it's listening. i launched the subway, but multiple users are not working
[15:20] i followed this link https://github.com/thedjpetersen/subway
[15:20] mgz, nice, i'll take a look
[15:21] basically just alters the tests for the behaviour changes to sec groups and shutdown and a couple of trivial things
[15:21] other things (removal of some dead code and so on) can probably wait
=== salgado is now known as salgado-lunch
=== Furao_ is now known as Furao
=== salgado-lunch is now known as salgado
[19:26] SpamapS, any dead chickens you want me to sacrifice b4 landing ostack?
[19:35] mgz, one minor diff fwiw, i'm just applying during merge.. http://paste.ubuntu.com/1098899/
=== three18ti_ is now known as three18ti
[19:39] hazmat: right, you changed that back to the old form so that test update wasn't needed any more
[19:39] I swear that branch has broken and fixed that particular test about four times
[19:40] mgz, indeed it has ;-)
[19:40] mgz, looks good though
=== lucian_ is now known as lucian
[19:51] is there any way to tell juju to import another ssh key?
[19:51] I'm patching things to try to get it working with eucalyptus but when i bootstrap, the instance comes up but I'm unable to ssh into it
[20:10] iamfuzz: the ssh key is installed using cloud-init
[20:10] iamfuzz: so if its failing to be installed, thats because cloud-init is failing
[20:10] iamfuzz: check console output maybe?
[20:10] SpamapS, ouch, that's no fun
[20:10] I'll give it a go but that rarely works
[20:11] iamfuzz: cloud-init, or console output?
[20:11] SpamapS, console output
[20:11] iamfuzz: thats disappointing. :-/
[20:11] looks like networking isnt even coming up on the instance, can't even ping it. Or does juju not enable ICMP by default?
[20:11] iamfuzz: juju just boots an image
[20:12] (note that when i launch the same emi manually, everything is kosher)
[20:12] I don't think it does any specific authorizing other than SSH
[20:24] you can check the access groups it creates
[20:24] euca-describe-access-groups or whatever
[20:28] just euca-describe-groups and compare with names listed for instance under euca-describe-instances
[20:28] ping and ssh should work for all juju created instances if they're operational
[20:29] mgz, tis done
[20:29] hazmat: yeay!
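On SpamapS's point above that the ssh key is installed by cloud-init rather than by juju directly: the provider passes a #cloud-config user-data document when it boots the instance, and cloud-init writes the key into the default user's authorized_keys on first boot. A minimal illustration of that mechanism, assuming a hypothetical helper — this is not juju's actual user-data generation, which does considerably more (package installs, agent setup, etc.):

```python
import yaml


def make_user_data(public_key):
    """Hypothetical sketch: embed the admin's public key in cloud-config
    so cloud-init installs it into authorized_keys at first boot."""
    config = {
        "ssh_authorized_keys": [public_key],
    }
    return "#cloud-config\n" + yaml.safe_dump(config)


# Example: this string is what would be handed to the instance as user-data.
print(make_user_data("ssh-rsa AAAAB3Nza... someone@laptop"))
```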
[20:29] iamfuzz, it doesn't re icmp afaicr
[20:30] it sets up tcp / udp for internal group access, and ext access for ssh only by default
[20:30] hazmat, I tried again and am able to ssh in manually now but juju status and juju ssh hang
[20:30] investigating
[20:32] hazmat: I swear icmp used to be enabled, was that removed from the ec2 provider?
[20:33] ...maybe just misremember
[20:38] hazmat, http://pastebin.com/PakTGnwj
[20:39] hazmat, machine 0 should be the instance I'm on, no? Any idea what could cause this error?
[20:40] mgz: I could swear it was enabled too
[20:44] iamfuzz, do you have console output for that instance?
[20:45] hazmat, figured it out - juju-admin call failed
[20:45] iamfuzz, it looks like the initialize somehow failed
[20:45] hazmat, ran it manually and now status works
[20:45] iamfuzz, cool
[20:45] baby steps!
[20:45] iamfuzz, cloud-init logs in /var/log should have the original error from the admin initialize cmd
=== Bryanste- is now known as Bryanstein
[20:51] hah.. I love when I hit a public IP from a server I had yesterday, and its now something totally different
[20:51] http://ec2-54-245-6-168.us-west-2.compute.amazonaws.com/
[20:52] hmm.. looks like the stack tests depend on a local ssh key
[20:54] stack tess?
[20:54] tests even
[20:56] SpamapS, i triggered the ppa build
[20:56] but it failed as the ostack provider tests need a key setup.. working on a fix
[20:59] heh
[20:59] the ec2 tests had that problem for a long time too
[21:17] <_mup_> juju/trunk r558 committed by kapil@canonical.com
[21:17] <_mup_> [trivial] fix openstack provider tests to not require an ssh key
[21:17] fixed and committed
[21:17] rebuilding ppa
[21:28] ok who broke charm promulgate?
[21:28] $ charm promulgate
[21:28] W: metadata name (gunicorn) must match directory name (.) exactly for local deployment.
[21:35] lifeless: earlier this week, did you ask who was on review duty?
[21:38] nope
[21:38] I asked how to move my charm forward :)
[21:40] lifeless: I don't see your charm on the review queue.. Need to subscribe charmers to the bug and make sure it is New or Fix Committed for us to see it in there.
[21:40] it was
[21:40] will look in a sec
[21:43] Ring the bell twice everybody.. python-django and gunicorn promulgated!
[21:43] avoine: ^^ \o/
[21:43] yay!
[21:44] negronjl: now just Riak, then.. you.
[21:44] SpamapS: 'cause you still hate me :)
[21:46] just you now, Riak is Incomplete
[21:46] oh look at the time...
[21:46] ;)
[21:49] SpamapS, when would be the last safe date to submit patches for juju to get it working against walrus again?
[21:59] iamfuzz: there's no "freeze" scheduled at this point
[21:59] SpamapS, would beta freeze be a safe target?
[21:59] iamfuzz: oh for quantal? I'd aim a bit sooner than that
[22:00] SpamapS, week before feature freeze? (16th of august)
[22:00] iamfuzz: actually no thats fine, it won't be in main, so we can upload it right up to the end
[22:00] likely won't take that long, just handing off a bug here in-house and he wants a date
[22:00] good deal, I'll tell him feature freeze, with enough time for review, so August 16th-ish
[22:01] I'd love to add Eucalyptus to our charm tester targets
[22:01] we would as well
[22:01] also asking in-house if we can setup a "public" cloud for you guys to run tests against
[22:01] like, if you could give us a URL to hit and capacity to run 3 or 4 at once.. we'd put it up on our jenkins.
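Regarding r558 above ("fix openstack provider tests to not require an ssh key"): the commit itself isn't shown in the log, but the general shape of such a fix is to stub out the key lookup in the test harness instead of reading the developer's ~/.ssh. A sketch of that pattern using trial's self.patch — the lookup class and method names are hypothetical stand-ins, not the code r558 actually touched:

```python
from twisted.trial import unittest


class FakeKeyLookup(object):
    """Stand-in for whatever helper normally reads ~/.ssh/*.pub;
    the name and shape are hypothetical."""

    def get_public_key(self):
        raise IOError("no ssh key available on this builder")


class ProviderKeyTest(unittest.TestCase):
    """Sketch: stub the key lookup so the suite doesn't depend on the
    developer's (or the buildd's) local ssh keys."""

    def setUp(self):
        # trial's TestCase.patch restores the original attribute on cleanup.
        self.patch(FakeKeyLookup, "get_public_key",
                   lambda self: "ssh-rsa AAAAB3Nza... juju-test@example")

    def test_key_lookup_is_stubbed(self):
        self.assertEqual(
            FakeKeyLookup().get_public_key().split()[0], "ssh-rsa")
```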
[22:02] shouldn't be a prob
[22:02] much appreciated
[22:02] we're just in the midst of a datacenter move so it may be a bit
[22:02] right now I think we only run against ec2 itself and the local provider
[22:04] iamfuzz: sure. Just ping me or m_3 when you want to chat about it.
[22:04] SpamapS, will do, thanks man
[22:07] SpamapS: https://bugs.launchpad.net/charms/+bug/1017796 was the charm
[22:07] <_mup_> Bug #1017796: Charm needed: opentsdb < https://launchpad.net/bugs/1017796 >
[22:08] hazmat: any idea on the latest build fail for the PPA?
[22:24] lifeless: i re-subscribed charmers
[22:24] SpamapS: ah
[22:25] also, relatedly, is there anything I can do to get my uses-ip patch rolling again? It seemed to just get rejected and sidelined into a bug somewhere...
[22:26] lifeless: the fallback method i proposed seems preferred
[22:27] lifeless: in case u missed it tho, openstack native support landed...
[22:27] in precise ?
[22:28] hah no
[22:28] :)
[22:28] its worth putting in backports tho...
[22:29] SpamapS, sigh no
[22:29] problem being there is no way to tell spun up nodes to use backports
[22:29] SpamapS, they're all different
[22:29] SpamapS, perhaps a missing inline callbacks..
[22:29] i'm able to run trunk tests locally without issue
[22:30] hazmat: it works in my local sbuild
[22:30] i *hate* buildd specific problems :(
[22:31] SpamapS, even worse when every different rel build is reporting something different
[22:32] hazmat: i am retrying just precise
[22:33] SpamapS, k
[22:33] SpamapS, one of the oneiric failures is odd
[22:33] hazmat: have seen io probs lead to weird fails in the past
[22:34] Failure: twisted.internet.defer.TimeoutError: (test_subordinate_status_output) still running at 10.0 secs
[22:41] hazmat: considering "juju-origin: ppa" is already supported, i'd imagine "juju-origin: backports" is a trivial change.. and a reasonable SRU.. <-- SpamapS, comment?
[22:43] (although, adding additional providers as SRU also seems reasonable IMO)
[23:02] Daviey: yes its something we could do
[23:03] Daviey: and the additional provider, I agree with that too
[23:17] Daviey: but I think it would require TC approval, since it does not fit with SRU guidelines.
[23:24] hazmat: so I think it was some kind of weirdness on the buildds
[23:25] negronjl: so I've been thinking more about the haproxy service_name thing
[23:26] mthaddon`: ^^
[23:26] I'm still not sold on a generic thing like that.
[23:28] negronjl: can you perhaps show me what exactly is intended?
[23:29] Like, a fully filled out config and the web app that is being balanced?
[23:42] SpamapS: Well, do what you feel.. but the TC seemed to be pretty clear that the SRU team can handle one-off's quite nicely themselves
[23:54] Daviey: one-off's of applying the actual SRU policy... this would not even come close.
[23:55] Daviey: actually I take that back
[23:55] Daviey: we can SRU stuff that enables access to remote services that did not exist when the release was made.
[23:55] Like in days past empathy would add an IM network and that was ok to SRU
[23:57] SpamapS: Generally, mdz made it quite clear that the SRU team can handle this stuff and "only escalates to the tech board in exceptional circumstances".
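The TimeoutError pasted above ("still running at 10.0 secs") is trial's per-test timeout firing rather than the code under test raising: twisted.trial aborts a deferred-returning test once its timeout attribute elapses (apparently 10 seconds in this suite, judging by the message). A small self-contained illustration of that mechanism — the test body itself is hypothetical, not the failing juju test:

```python
from twisted.internet import defer, reactor
from twisted.trial import unittest


class TimeoutIllustration(unittest.TestCase):
    # trial errors out any test still running after `timeout` seconds with
    # twisted.internet.defer.TimeoutError -- the failure seen in the buildd log.
    timeout = 10

    def test_deferred_must_finish_in_time(self):
        # A hypothetical async test: the returned Deferred fires after 1s,
        # comfortably inside the 10s limit; on an overloaded builder the same
        # pattern can blow past the limit and produce the error quoted above.
        d = defer.Deferred()
        reactor.callLater(1, d.callback, None)
        return d
```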