[03:50] <stub> Can anyone confirm that if I run relation-list in a peer relation hook, that it might be listing dying units?
[11:42] <noodles775> mthaddon: Not sure if you review charmhelpers, but as per our conversation yesterday, here's an update for the salt support which doesn't install a daemon: https://code.launchpad.net/~michael.nelson/charm-helpers/depend-on-salt-common-only/+merge/170298
[11:42] <mthaddon> noodles775: I saw the comment in the RT, thanks. I'll leave the review to wedgwood and others for charm-helpers though
[11:43] <noodles775> Cool.
[12:38] <MarkDude> Is all of Juju's main code under Affero v1?
[12:46] <marcoceppi> MarkDude: yes, juju is licensed agpl
[12:49] <MarkDude> v1? not v3? marcoceppi ?
[12:50] <marcoceppi> Oh, I'm not sure the version exactly
[12:50] <MarkDude> Was that the license from the start
[12:50] <marcoceppi> juju-core is AGPLv3
[12:50] <marcoceppi> so is juju0.7
[12:51]  * MarkDude can email on it. I have been wanting to put juju in Fedora- and have seen v1 be an issue
[12:51] <MarkDude> Yay,
[12:51] <MarkDude> much easier, I can proceed now, I *might* need a clarification letter at some point so I could put it in the regular repos
[12:52] <MarkDude> A few of the other parts are not compatible with all the legal restrictions on my side.
[12:53] <MarkDude> But as far as the main part- I feel juju- used in concert with Kickstart (scripts for the Anaconda installer) has awesome possibilities
[13:00] <maxired> Hi everybody !
[13:00] <marcoceppi> Hi maxired
[13:01] <maxired> I'm trying to use "juju-gui", from cs:~juju-gui/precise/juju-gui-68, and I'm stuck on "Connecting to the Juju environment"
[13:01] <maxired> I'm not sure how to check that the juju API is running, any idea?
[13:07] <jcastro> MarkDude: you don't want that
[13:07] <jcastro> you want to use cloud init
[13:07] <jcastro> it's in fedora
[13:08] <MarkDude> Agreed, the cli client looks easy to integrate
[13:09] <MarkDude> the rest of the licenses appear to work, and I have a list of Juju items ALREADY available for Fedora
[13:09] <jcastro> how's the golang stack in fedora?
[13:09] <MarkDude> +1
[13:09]  * MarkDude expects no issues there
[13:09] <MarkDude> Well, no major ones; possibly small fixes needed
[13:10]  * MarkDude started notes- and mostly saw Affero v1 as an issue- v3 means a bit of work on my side- but it should be fine
[13:10] <marcoceppi> maxired: what version of juju are you using?
[13:11]  * MarkDude was reviewing the code. Looks like there is a really nice base to build off of
[13:11]  * MarkDude has a not-so-secret plan of linking (in idea only) juju charms and Kickstart
[13:11] <maxired> marcoceppi : I was precisely looking at this.
[13:12] <maxired> i'm on 0.7 from ppa:juju/pkgs on the master node
[13:12] <marcoceppi> MarkDude: charms are licensed separately from juju, you'd need to check each charm's license if that was the case ;)
[13:13] <maxired> but looks like juju 0.7 from the default repository has been installed by MAAS
[13:13] <maxired> I'll try to change this
[13:13] <marcoceppi> Admittedly it's been a while since I've deployed the gui with 0.7
[13:14] <ehg> hey - is there an OS X build of go-juju anywhere?
[13:14] <marcoceppi> ehg: not for go-juju, not yet at least
[13:14] <ehg> marcoceppi: ah, would it compile?
[13:14] <MarkDude> The charms themselves are not as interesting as what Juju could do for helping make a dev environment, or a few other things more technical
[13:15] <MarkDude> Bonus being we get to keep the awesome part of getting folks to dev- on the easy route
[13:15] <marcoceppi> ehg: I don't see why not. I don't think there's anything platform dependent, but I'm not a juju-core developer nor do I own a mac to verify
[13:15] <ehg> cool, i'll get our CEO to try :)
[13:15] <marcoceppi> ehg: Let us know how it works out!
[13:15] <MarkDude> Juju is full of win. Charms made elsewhere may not work for various reasons - technical, license etc
[13:16] <MarkDude> Roll out on installs of servers is a possibility at this point
[13:17]  * MarkDude is hoping to get a kickstart to create Juju- and go from there
[13:17] <MarkDude> Ty jcastro , marcoceppi , everyone
[13:18] <MarkDude> Back to study, do I send my "formal letter of intent" (yay Juju rocks) to jcastro or someone else?
[13:18] <jcastro> a letter for what?
[13:18] <jcastro> I don't know where you hang out man, but we're ubuntu, you don't need a letter, just go do awesome stuff. :p
[13:19] <maxired> MarkDude : I was wrong, i'm also running juju from ppa:juju/pkgs on the execution machine
[13:19] <maxired> marcoceppi : I was wrong, i'm also running juju from ppa:juju/pkgs on the execution machine
[13:20] <AskUbuntu> Ubuntu 12.04 MaaS and JUJU's proxy issues | http://askubuntu.com/q/310153
[13:20] <marcoceppi> maxired: Well, that all _should_ work. There was a rudimentary api added to 0.7 for the gui
[13:20]  * marcoceppi checks
[13:21] <MarkDude> I know jcastro but despite the awesomeness of Fedora, every so often I need to do formal shit
[13:22] <jcastro> ok let me know if you need anything
[13:22] <MarkDude> So I can forward to legal, and such- CYA on my side
[13:22]  * MarkDude hates having to use his title, but has to in the formal letter
[13:23]  * MarkDude really really wants to have Distros focus on our common themes- and juju has great possibilities
[13:24] <MarkDude> Ubuntu helps all of FOSS imho. and providing some concrete examples can help stop some sniping from at least a few troll types
[13:25] <MarkDude> Oh I should have an agreeable packager to help with this, so I can see about getting some momentum. Keep kickin ass Juju folks :D
[13:25] <jcastro> hey we do have some RPM packaging somewhere, but it's old and for pyju
[13:25] <jcastro> so not useful probably, just thought I'd mention it
[13:26] <marcoceppi> jcastro: I'd rather pyjuju not end up in some rpm distro somewhere :P
[13:26] <jcastro> hah yeah, we can't even get rid of it in ubuntu
[13:26] <jcastro> it's gone from saucy though!
[13:27] <maxired> I got a process with " /usr/bin/python -m juju.agents.api --nodaemon --port 8080 --logfile /var/log/juju/api-agent.log --session-file /var/run/juju/api-agent.zksession --secure --keys /etc/ssl/juju-gui" , but not listening on 8080
[13:27] <gary_poster> maxired, hi.  I'm one of the GUI people.  We test against pyJuju 0.7 at least a couple times a day, but not against MAAS.  We've had similar reports to this, but we have resolved everything we know about previously.
[13:27] <gary_poster> ok that sounds promising/interesting.  this is on the GUI charm, I assume?
[13:28] <MarkDude> Where are they located?
[13:28]  * MarkDude is aware that pyju is the OLD method
[13:29] <MarkDude> Well we would look at pyju for reference, not to include :D
[13:29] <gary_poster> maxired, first question I'd have, if you've verified that nothing is listening on the gui charm on port 8080, is what you see in that logfile (/var/log/juju/api-agent.log)
[13:31] <maxired> gary_poster : thanks for helping. yep, nothing actually listening on that port.
[13:31] <maxired> I'm currently trying again, because there was previously an nginx on it
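A quick way to do the check gary_poster suggests, on the GUI unit (8080 is the port assumed earlier in the conversation; `netstat -lnt` is the common way to list listening TCP sockets on 12.04-era hosts, with `ss -lnt` as the newer equivalent):

```shell
# Does anything have TCP 8080 open? $listeners is empty when nothing does.
listeners=$(netstat -lnt 2>/dev/null | grep ':8080 ' || true)
if [ -n "$listeners" ]; then
    echo "api agent is listening on 8080"
else
    echo "nothing listening on 8080 - check /var/log/juju/api-agent.log"
fi
```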
[13:31] <maarten__> When I do juju bootstrap, the bootstrap server does not have my openstack keypair and I can not log into it. What do I have to put to include the canonistack keypair?
[13:32] <gary_poster> ok maxired.  there's a (closed) bug for this, which has had some conversation.  I don't think it has anything interesting but will go find it.
[13:33] <gary_poster> maxired, https://bugs.launchpad.net/juju-gui/+bug/1180095 <shrug>
[13:33] <_mup_> Bug #1180095: GUI charm may have difficulties working with Juju on MAAS <juju-gui:Invalid> <https://launchpad.net/bugs/1180095>
[13:35] <maxired> gary_poster : thanks
[13:39] <marcoceppi> maarten__: You can add SSH keys to your environments.yaml file: http://askubuntu.com/q/205170/41
[13:40] <maarten__> thanks
[13:40] <marcoceppi> With that format, you'd just put one public key per line
[13:40] <marcoceppi> (if you had multiples)
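The environments.yaml snippet marcoceppi describes might look something like this sketch - the `authorized-keys` option follows the linked askubuntu answer, while the environment name, provider settings, and keys are all placeholders:

```yaml
environments:
  canonistack:
    type: openstack          # assumed provider for canonistack
    # ...other provider settings...
    # one public key per line, if you have multiples
    authorized-keys: |
      ssh-rsa AAAA...first-key... you@laptop
      ssh-rsa AAAA...second-key... teammate@desktop
```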
[13:40] <maxired> gary_poster : after a new deployment, the agent is listening, but I'm still stuck on "Connecting to the Juju environment"
[13:41] <maxired> do you know how I can get logs from haproxy?
[13:42] <gary_poster> maxired, I don't.  looking.
[13:45] <gary_poster> maxired, benji has graciously offered to help
[13:46] <gary_poster> he'll be around presently
[13:46] <benji> hi, maxired; I'm reading the backlog to get caught up
[13:46] <maxired> hi benji ;) thanks for helping
[13:48] <maxired> benji : looks like it's working right now
[13:49] <maxired> the first connection to the websocket took a really long time
[13:49] <maxired> but I could log in to the web UI ;)
[13:49] <benji> my work here is done ;)
[13:51] <maxired> benji , gary_poster and marcoceppi, thanks for helping
[13:51] <marcoceppi> maxired: glad you got it working!
[13:51] <gary_poster> maxired, benji, lol, excellent
[13:51] <maxired> I'll check in the bug tracker, but looks like my problem was a destroyed instance of 'wordpress' which didn't remove the nginx
[13:52] <gary_poster> maxired, on the same box to which you deployed the GUI later, I'm guessing?
[13:52] <maxired> yep
[13:54] <gary_poster> ok, yeah.  FWIW, effectively fixed in Juju Core AIUI, maxired.
[13:56] <maxired> Good news: I don't have to reproduce it to be sure about that in order to file a bug ;)
[13:58] <gary_poster> :-)
[13:59] <maxired> any link to the commit/bug ?
[13:59] <gary_poster> looking
[14:00] <gary_poster> maxired, https://bugs.launchpad.net/juju/+bug/872264
[14:00] <_mup_> Bug #872264: stop hook does not fire when units removed from service <goju-resolved> <juju:Confirmed> <juju-core:Fix Released by fwereade> <https://launchpad.net/bugs/872264>
[14:02] <gary_poster> (I believe you destroyed the service, but I also am 90% sure that the issue is the same)
[14:02] <fwereade> maxired, gary_poster: we do not currently implement an uninstall hook, which is what I'd expect would actually remove such things
[14:03] <fwereade> maxired, gary_poster: we are currently working on containerization, which would let you *really* remove things, rather than just trusting to the effect of the putative uninstall hook
[14:03] <maxired> gary_poster : thanks, i was looking for something more specific
[14:03] <gary_poster> fwereade, when you destroy a service, don't you do a lot more drastic clean up than pyJuju did?  Or am I just thinking of containerization discussions?
[14:03] <gary_poster> maxired, ok, that's all I've got.  :-)
[14:04] <fwereade> gary_poster, we can do a lot more stuff in the DB; and we can actually guarantee that stop hooks will be run; but we don't go any further than that
[14:04] <maxired> fwereade : I guess you are right. Stopping might not be enough if things, for example, start again at reboot ;)
[14:04] <fwereade> maxired, I would hope that a well-implemented stop hook would prevent the service from autostarting ;)
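A stop hook along the lines fwereade suggests might look like this sketch. "myservice" is a placeholder name, and the commands are echoed rather than executed here so the snippet runs outside a real unit; the point is that a well-implemented `hooks/stop` disables autostart as well as stopping the daemon:

```shell
# hooks/stop sketch (illustrative): stop the daemon AND disable its init
# script so it does not come back on reboot - stopping alone is not enough.
run() { echo "+ $*"; }   # on a real unit, replace with: run() { "$@"; }

SERVICE=myservice                      # hypothetical service name
run service "$SERVICE" stop            # stop the running daemon
run update-rc.d "$SERVICE" disable     # keep it from starting at boot
```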
[14:05] <gary_poster> fwereade, gotcha, thanks.
[14:17] <jcastro> jamespage: m_3: have either of you checked out that merge proposal from patrick hetu yet?
[14:18] <jamespage> jcastro, remind me again - which one is that?
[14:18] <jcastro> https://code.launchpad.net/~patrick-hetu/charms/precise/gunicorn/python-rewrite/+merge/167088
[14:18] <jamespage> stub, you are terrifying me with all of the postgresql MP's you have in flight
[14:18] <jcastro> rick_h: hey so sinzui isn't around so you're the closest, tldr, some merge proposals aren't getting in the charm review queue
[14:19] <rick_h> jcastro: yea, there's a bug for that. Sec let me find it.
[14:19] <rick_h> https://bugs.launchpad.net/charmworld/+bug/1191823 jcastro
[14:19] <_mup_> Bug #1191823: Merge proposals are missing from the review queue <charmers> <charmworld:Triaged> <https://launchpad.net/bugs/1191823>
[14:19] <rick_h> "But ~charmers isn't requested to review this proposal, only ~mark-mims."
[14:20] <jcastro> OOOHHHHH
[14:20] <rick_h> jcastro: so not a lot we can do atm about it.
[14:20] <jcastro> so this is a workflow bug
[14:20] <rick_h> jcastro: yea, at least for now until we can get time to check out a better way to generate the list
[14:20] <jcastro> ok so you know how we can "reset" a merge proposal to a non assigned to mims state?
[14:21] <rick_h> jcastro: moved to #juju-gui since aaron is there and he looked into it.
[14:21] <jamespage> jcastro, just fixed that
[14:22] <jcastro> right so basically after the first reviewer reviews it, and they ping pong, we need a way to stick it back into the pool instead of assigning it to the first guy
[14:22] <jcastro> jamespage: oh awesome, so we just need to tell charmers when you're done with your first pass to do what you just did
[14:23] <jcastro> jamespage: what did you do? :)
[14:23] <jamespage> jcastro, requested 'charmers' review the MP
[14:23] <jamespage> 'Request another review'
[14:25] <jcastro> ack, found the docs, thanks, I'll send out a reminder and probably put this in newdocs
[14:25] <stub> jamespage: Big one coming up now that I understand -broken/-departed hooks and can rewrite the horrible replication stuff
[14:26] <jamespage> stub, do you want to consolidate into a single merge proposal?
[14:27] <stub> jamespage: The ones in the queue will get the charm in the store actually working again
[14:27] <stub> jamespage: You actually prefer big MPs to smaller bite sized ones?
[14:28] <jamespage> stub, no - I just wanted to understand if what's in the queue is all valid
[14:28] <jamespage> time order right?
[14:29] <stub> yeah - I use a bzr pipeline so they each depends on the previous
[14:30] <stub> https://code.launchpad.net/~stub/charms/precise/postgresql/charm-helpers/+merge/169487 is likely rubber stamp, a separate MP to avoid noise in actual work
[14:30] <stub> https://code.launchpad.net/~stub/charms/precise/postgresql-psql/devel/+merge/169851 is teensy
[14:31] <stub> https://code.launchpad.net/~stub/charms/precise/postgresql/bug-1190141-client-access-fail/+merge/169928 is also teensy and fixes the worst bug atm
[14:32] <stub> yeah, nothing to be scared of yet ;)
[14:32] <fwereade> rvba, ping
[14:33] <jamespage> stub, there are two others in the queue
[14:33] <jamespage> https://code.launchpad.net/~stub/charms/precise/postgresql/bug-1084263-fix-broken-hooks/+merge/169486
[14:34] <jamespage> no - just that one actually
[14:34] <stub> https://code.launchpad.net/~stub/charms/precise/postgresql/bug-1084263-fix-broken-hooks/+merge/169486 may give you a wtf moment. I think I based my implementation on a misunderstanding. But the approach works.
[14:34] <stub> There is a test infrastructure one too
[14:34] <stub> https://code.launchpad.net/~stub/charms/precise/postgresql/tests/+merge/169866
[14:34] <stub> But nothing scary in there
[14:35] <jamespage> stub, OK - I'm completely confused
[14:35] <stub> :)
[14:35] <jamespage> stub, which one should I be reviewing first? charm-helpers?
[14:35] <stub> I'm not helping? ;)
[14:35] <stub> Yes, charm-helpers
[14:35] <jamespage> Ok - lemme start there then
[14:36] <stub> That is just upstream code I've copied in from lp:charm-helpers
[14:36] <jamespage> stub, yeah - I see - you sure you want to include the entire project? and not just the python package?
[14:38] <stub> jamespage: I don't know if it matters. Whatever best practice is considered.
[14:38] <jamespage> stub, not sure we have one just yet
[14:39] <stub> Pulling the whole thing gets the license etc.
[14:39] <jamespage> I know for the openstack charms we have been taking the approach of pulling in the bits we want
[14:40] <jamespage> stub,  see http://bazaar.launchpad.net/~gandelman-a/+junk/charm-helpers-sync/files
[14:40] <stub> I worry that will drift... people making their own little hacks. The PG charm already contains a hacked-upon copy of charm-helpers' ancestor.
[14:41] <stub> How does that helper help with charm-store charms?
[14:41] <stub> oh, I think I see
[14:42] <jamespage> all charm-helpers-sync does is provide a standardized way to describe which bits you want from lp:charm-helpers
[14:42] <jamespage> and syncs them into your charm
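The sync config jamespage describes might look something like this sketch - the file name, keys, and module names are assumptions based on how he describes the tool, not its confirmed format:

```yaml
# charm-helpers.yaml (name assumed): tells charm-helpers-sync which
# pieces of lp:charm-helpers to copy into the charm
branch: lp:charm-helpers
destination: hooks/charmhelpers
include:
  - core          # hypothetical module names
  - fetch
```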
[14:43] <jamespage> stub, I'll push adam_g to get that included in charm-helpers itself
[14:43] <jamespage> adam_g_, nudge ^^
[14:43] <stub> Right. So I don't see any reason not to switch to that. I'll go with whatever mechanism is preferred.
[14:45] <jamespage> stub, I'd prefer to see that rather than a sync of the entire branch
[14:45] <jamespage> wedgwood, any opinions on the above?
[14:48] <stub> A point for the whole tree is it is easier to merge fixes back, perhaps.
[14:48] <stub> Nah, ignore that
[14:50]  * wedgwood reads backscroll
[14:53] <wedgwood> jamespage: I've been doing my best to structure charm-helpers in a way that it can be included in pieces. +1
[14:54] <wedgwood> I envision it being packaged that way too. The cli, for example, might be a separate deb
[14:59] <stub> jamespage: Pushed that update anyway. Just running the test again (soon that will be tests, plural)
[15:07] <_mup_> Bug #1192598 was filed: pyjuju provision-agent hangs and fails to provision instances <juju:New> <https://launchpad.net/bugs/1192598>
[15:19] <jamespage> wedgwood, ack
[15:24] <jamespage> stub, guess I should review lp:~stub/charm-helpers/bug-1191002-local-unit-relation-data first as that is where you are pulling charm-helpers from
[15:25] <stub> jamespage: turtles, all the way down
[15:25] <jamespage> stub, coolio
[15:29] <jamespage> stub, that branch also adds the data that the local unit has set on a specific relation right?
[15:29] <jamespage> (thats the bit I did not even know you could do until last week)
[15:32] <stub> jamespage: Correct
[15:32] <jamespage> stub, OK - that works for me - do you want to rebase using lp:charm-helpers now?
[15:32] <jamespage> I think that would be good hygiene
[15:32] <stub> It has landed? Sure.
[15:34] <stub> jamespage: And done
[15:55] <jcastro> charm call in 5 minutes!
[15:55] <marcoceppi> jcastro: do you have the URL?
[15:56] <jcastro> setting it up now
[15:56] <jcastro> hey can one of you prep the etherpad for this week while I fire this up?
[15:57] <jcastro> https://plus.google.com/hangouts/_/9d5f2933f12c69245f97d157a4c372d4d61318bd?authuser=0&hl=en
[15:57] <jcastro> if you want to participate
[15:57] <jcastro> http://ubuntuonair.com if you want to just listen in
[15:58] <arosales> jcastro, I can work on the etherpad
[16:05] <marcoceppi> http://pad.ubuntu.com/7mf2jvKXNa
[16:54] <jamespage> stub, how worried are you about backwards compat with py-juju?
[17:04] <stub> jamespage: I will be worried, at least to the extent of migrating our existing systems
[17:05] <jamespage> stub, destroy-machines in the tests is a juju-core-ism - pyjuju has terminate machines instead
[17:08] <stub> right, ta. Might as well change that.
[18:14] <stub> When I have a standard require/provides relation between client_service and server_service, is it possible for a unit in server_service to retrieve relation data set by other server_service units? From the server side, relation-list only appears to list client_service units and relation-get with the other unit listed explicitly seems to be failing.
[18:16] <marcoceppi> stub: I believe that's what peer relations are for
[18:16] <stub> This is relation state... it doesn't fit the peer relation
[18:17] <marcoceppi> stub: Units of the same service don't talk to each other (normally) when it's service -> client relation
[18:17] <stub> One of the server_service units is a master, and creates a user and password for the clients and publishes it to the relation with the client
[18:18] <stub> The other server_service units ideally will republish the same information
[18:19] <stub> I'm confused as I think this worked before. It might be racy in some way though.
[18:20]  * stub ponders about using the peer relation as a communication channel
[18:30] <wedgwood> stub: you want relation-ids
[18:30] <wedgwood> you'll need the right ID to go along with the unit
[18:32] <wedgwood> e.g. (service a) <- myrel:0 <- database -> myrel:1 -> (service b)
[18:33] <stub> hmm... maybe I'm using the wrong relation id. That would explain why it used to work.
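wedgwood's point - you need the right relation id as well as the unit name - translates to something like this sketch. The hook tools are stubbed with shell functions so it runs outside a hook context (the real tools are `relation-ids` and `relation-get`, on PATH inside a hook; underscores here only keep plain sh happy), and names like `db` and `server-service/0` are made up:

```shell
# Stubs standing in for juju's hook tools, purely for illustration.
relation_ids() { echo "db:0"; }     # pretend one relation exists on "db"
relation_get() { echo "s3cret"; }   # pretend the master published this

# 1. find the id of the relation we share with the remote service
rid=$(relation_ids db | head -n 1)

# 2. read a key another unit set on that specific relation:
#    relation-get -r <relation-id> <key> <unit>
password=$(relation_get -r "$rid" password server-service/0)
echo "client password: $password"
```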
[18:54] <MarkDude> Where were the RPM charms? I have someone reviewing now
[18:54]  * MarkDude knows they were done the old way with Pyju
[19:00] <jcastro> https://github.com/jujutools/rpm-juju
[19:30] <MarkDude> Ty