=== txwikinger2 is now known as txwikinger
=== mbarnett` is now known as mbarnett
=== defunctzombie is now known as defunctzombie_zz
[07:05] Can anyone help with an lxc environment issue? I can bootstrap fine, but 'juju ssh 0' is prompting for a password. I have no problems with openstack.
[07:41] stub: try `juju ssh posgresql/0` instead... machine-id's a little wonky with the local provider
[07:42] ok, maybe with a 't' in there :)
[07:44] just `ssh ubuntu@10.0.3.xxx` should work ok too... just not using the machine-id
[07:46] m_3: So I can't use the unit name, because deploy is leaving the unit stuck in a 'pending' state. juju status isn't giving me an ip address either - public-address: null
[07:46] So my guess would be the lxc container's network is borked, except that commands like 'juju status' and 'juju debug-log' are working...
[07:50] hmm.... debug-log showing me nasty encoding failures, and 2.7 python paths. Which leads me to an open bug about lxc using python3 and exploding because juju is python2 and has PYTHONPATH set.
[07:51] So I think I'm on openstack until that is resolved ;)
[07:51] stub: yeah, iirc raring lxc is still unhappy with juju
[07:51] So lemme just set up a precise instance using lxc.... and.... umm....
[07:51] think the python2/3 issue isn't there in other series (just was working with the precise local provider this morning)
[07:52] Does lxc inside lxc actually work, or does the universe implode?
[07:52] yikes... haven't tried that one... I do lxc in kvm
[07:53] actually have a precise ec2 instance spun up where I was working with precise lxc just now
[07:53] You people with your fast ping times...
[07:53] in a novotel atm :)
[07:54] stub: please lemme know if lxc is inceptable
[07:55] that's probably a big ole rabbit hole though
[07:55] Yeah, I don't think I can justify wasting time on it ;)
[07:55] ack
[08:34] m_3: Do you recall if the PostgreSQL charm always required the database name to be set from the client end?
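(A runnable sketch of the direct-ssh workaround m_3 suggests at 07:44: pull the unit's address out of `juju status` yourself and ssh straight in, skipping the machine-id. The here-doc below stands in for real `juju status` output, and the `10.0.3.15` address is invented for the demo.)

```shell
#!/bin/sh
# Extract a unit's public-address from juju status output and ssh in
# directly. The sample YAML is a stand-in for `juju status`; in real
# use you would pipe the actual command.
set -e
status_yaml=$(cat <<'EOF'
services:
  postgresql:
    units:
      postgresql/0:
        public-address: 10.0.3.15
EOF
)
# Grab the first public-address field (real use: juju status | awk ...)
addr=$(printf '%s\n' "$status_yaml" | awk '/public-address:/ {print $2; exit}')
echo "$addr"
# then: ssh ubuntu@"$addr"
```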
http://jujucharms.com/charms/oneiric/oops-tools/hooks/db-relation-changed seems to expect it to be set by the server
[08:43] stub: yes, iirc, the sequence is that the postgresql service creates a db w/ creds during relation-joined
[08:43] then relation-sets that for the other side to use
[08:43] it creates the db name based on the remote service name
[08:43] that way multiple units of the same service get the same db
[08:44] m_3: So that would mean there is a bug in http://jujucharms.com/charms/precise/postgresql/hooks/hooks.py around line 1760?
[08:44] lemme look
[08:45] oic.
[08:45] What is the syntax to set the database name from the command line then? juju set ... ?
[08:45] yes
[08:46] looking to see if there's a config param for that
[08:46] there doesn't look to be one
[08:47] and there shouldn't
[08:47] whoops, I thought that was working on there
[08:47] the sequence with a client is:
[08:48] So my understanding: there is a bug in the postgresql charm. At the moment it aborts if the database name isn't set on the relation. Instead, it is supposed to create a database based on the remote service name and inform the client.
[08:48] juju deploy postgresql && juju deploy wordpress && juju add-relation wordpress postgresql
[08:48] yes, that sounds right
[08:48] so wordpress calls db-relation-joined which should be a no-op
[08:49] postgresql calls db-relation-joined which creates a database named "wordpress" and makes up user creds
[08:49] does `relation-set` with all that info
[08:49] wordpress calls db-relation-changed
[08:50] if postgresql has sent the info, it uses it... otherwise, exits cleanly and waits to be called again
[08:50] postgresql calls db-relation-changed which should be a no-ip
[08:51] s/no-ip/no-op/ (that's my dns provider so's in muscle mem)
[08:51] dang, I didn't realize that's been broken
[08:52] I'm writing my first client charm (so I can write tests) and was wondering why things were odd.
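(The server-side half of the sequence m_3 describes could look roughly like the db-relation-joined sketch below. `JUJU_REMOTE_UNIT` is the environment variable juju really sets in hooks; `relation_set`/`unit_get` are stand-ins for juju's `relation-set`/`unit-get` tools so the sketch runs outside a hook environment, and the createdb step is elided — none of this is the charm's actual code.)

```shell
#!/bin/sh
# Illustrative db-relation-joined for a postgresql-like charm: name the
# database after the remote *service* and make up the creds server-side.
set -e
# Stand-ins for juju's relation-set / unit-get hook tools:
relation_set() { echo "(would run) relation-set $*"; }
unit_get()     { echo "10.0.3.1"; }   # placeholder private-address

# juju sets JUJU_REMOTE_UNIT (e.g. "wordpress/0") inside real hooks
: "${JUJU_REMOTE_UNIT:=wordpress/0}"
database=${JUJU_REMOTE_UNIT%%/*}   # service name => all units share one db
user=$database
password=$(head -c 9 /dev/urandom | od -An -tx1 | tr -d ' \n')

# (the real charm would idempotently createuser/createdb here)

relation_set database="$database" user="$user" \
             password="$password" host="$(unit_get private-address)"
```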
[08:52] the postgresql db-relation-joined hook needs to not _create_ a new db if it's already been created... it just sends the auth info back down the pipe
[08:53] stub: that way multiple client units get the same database within the postgresql service
[08:53] right. Do you want me to fix the charm or do you want to repair it?
[08:53] so if you want multiple clients talking to _different_ dbs... then the clients should have different service names
[08:53] juju deploy wordpress blog1 && juju deploy wordpress blog2
[08:54] then when those're related, we'd have two different dbs in pg
[08:54] stub: It'd probably be better for you to do it
[08:54] but I'm happy to if you'd rather
[08:55] I'll give it a go. I need it working so my charm client works ;)
[08:55] thanks... wow, sorry I didn't realize in review that that process was borked
[08:57] m_3: I suspect there is a charm somewhere relying on this behavior - specifying the database name to use
[08:57] oh? :(
[08:57] we'll have to have a look at pgbouncer and friends too
[08:58] hmmm.... wonder if it's worth trying to support either behavior
[08:58] I think we can have both behaviors, although we may end up with a spurious database if postgresql-charm creates a db then client-charm sets the database name
[08:58] if we moved pg's creation logic into relation-changed
[09:00] there's a bit of a problem there... ambiguity on what it means when the client is "silent"
[09:00] does it mean that the client hasn't finished coming up yet?
[09:00] or that it wants you to pick the db name for it?
[09:00] http://jujucharms.com/charms/precise/oops-tools/hooks/db-relation-joined
[09:01] crap
[09:01] Cause there will be several different services needing to share the database.
[09:02] right...
that's still totally cool
[09:02] based on service_name
[09:02] new services get new dbs
[09:02] Different services with different names needing to share the database (because they need to share data)
[09:02] oh oh, gotcha
[09:03] It is either that, or we require messing around with pgbouncer proxies and aliases and that seems more painful.
[09:03] lemme look at oops... maybe we just need to support both behaviours
[09:03] So I'll see if I can support both uses.
[09:03] the kicker is that we'll probably need to make mysql do the same thing
[09:04] mysql does the 'db decides'
[09:04] Nah, mysql is for toy services ;)
[09:04] haha
[09:04] well I'm also wondering if we've copied that paradigm for mongodb, cassandra, etc
[09:05] important to have them consistent
[09:05] but supporting both behaviors seems worth trying
[09:05] I don't know if mongodb or cassandra have a similar way of partitioning data like PostgreSQL and its separate databases.
[09:06] see if you can figure out how to differentiate "defaults" -vs- "relation-guard"
[09:06] yeah, don't know if they create default keyspaces or leave that up to the client
[09:07] relation-guard is what I call the "is the other side up yet? go on... otherwise exit 0"
[09:07] remember changed keeps getting called as long as either side keeps doing relation-set
[09:10] Right. So pg's relation-changed and relation-joined are the same. So in this hook, if the database name isn't provided we generate it. If the database doesn't exist, we create it.
[09:10] And extra points for cleanup - if we generated a database that is not used by any active relations, nuke it.
[09:11] But a spurious empty database does no harm really.
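(The "relation-guard" m_3 names above is a small pattern on the client side; a sketch, assuming a `database` relation setting as discussed. `relation_get` stands in for juju's real `relation-get` tool so this runs outside a hook environment; the hook body is illustrative, not any charm's actual code.)

```shell
#!/bin/sh
# Sketch of the relation-guard in a client charm's db-relation-changed:
# if the server hasn't relation-set the connection info yet, exit
# cleanly and wait -- changed fires again after every relation-set
# from the other side.
set -e
relation_get() { echo ""; }   # stand-in: simulate "server not ready yet"

db_relation_changed() {
    database=$(relation_get database)
    if [ -z "$database" ]; then
        # relation-guard: other side not up yet, go away quietly
        echo "waiting for database info"
        return 0
    fi
    echo "configuring app for db $database"
    # ...write the app's config using database/user/password here...
}

out=$(db_relation_changed)
echo "$out"
```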
[09:12] we probably need to remove the "joined"
[09:12] if it's reading anything from the other side, then that's really best done in changed exclusively
[09:12] joined really should never try to read (it's only fired once)
[09:14] either making it a no-op in code (exit 0) or just removing the hook (link) is equiv
[09:14] It needs a -joined hook to generate the default database in case the remote end doesn't want to specify it.
[09:14] So -joined generates a default database and -changed allows the client to override it.
[09:14] oh, ack
[09:15] yeah, that's probably workable... check the same "database" relation var
[09:15] cool... I'll let you proceed
[09:15] thanks man... nice catch
[09:20] stub, m_3: just noting we need test cases for multiple client services _and_ multiple units of the same... using defaults as well as overriding the db name
[09:21] ack
[09:50] This schema_user thing seems bogus too. A magic user is created with superpowers on the database, so every client using the charm gets superpowers on the database they connect to, destroying any security you bother to set up with the normal user.
[11:28] jamespage: I want review on mine and Mark's packaging updates to juju, should I just propose my branch for merging into raring? the catch is the history was kinda screwy so the mp diff will... not be useful
[11:29] mgz, yeah - that's OK
[11:29] or just pass me the branch
[12:39] jamespage: have done everything and proposed, see lp:~gz/ubuntu/raring/juju/0.7 and mp
=== defunctzombie_zz is now known as defunctzombie
[12:58] ugh... /me hates userpas
=== salgado is now known as salgado-brb
[13:04] m_3: ^you probably also want to have a look at that branch, to see if I brought your changes in correctly
[13:04] mgz, m_3: some comments/fixes required - I'll doc them in the MP
[13:04] jamespage: thanks!
[13:09] jamespage: mgz: cool...
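(The compromise being settled on — check the same "database" relation var, default to the remote service name if the client stayed silent — is small enough to sketch. As before, `relation_get` stands in for juju's `relation-get` tool, and the surrounding logic is illustrative rather than the charm's real implementation.)

```shell
#!/bin/sh
# Sketch: support both behaviors in the postgresql-side hook. If the
# client relation-set a "database" name, honor it; otherwise generate
# one from the remote service name, so defaults and overrides coexist.
set -e
relation_get() { echo ""; }   # stand-in: simulate a client that stayed silent

: "${JUJU_REMOTE_UNIT:=wordpress/0}"   # set by juju in real hooks
requested=$(relation_get database)
database=${requested:-${JUJU_REMOTE_UNIT%%/*}}   # override or default
echo "ensuring database: $database"
# ...idempotent createdb plus relation-set of the creds would follow...
```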
hoping there's a better way than sed to get the version from rules into the postinst/prerm files
[13:10] gah, and these test failures are new to me...
[13:10] m_3, ah - right
[13:10] that answers one of my questions
[13:10] not that that is a great way to do it
[13:11] mgz: yeah, I guess my use of 'DEB_BUILD_OPTIONS=nocheck' is a really bad habit
[13:12] this is just something in my env, these ones were passing earlier...
[13:12] not the most robust suite in the world
[13:12] ack
=== salgado-brb is now known as salgado
[13:19] okay, the branch is slightly borked...
[13:22] mgz, I had trouble getting the orig.tar.gz as well
[13:22] gah, quilt, I hate you
[13:22] comments in the MP
[13:22] https://code.launchpad.net/~gz/ubuntu/raring/juju/0.7/+merge/158088
[13:22] yeah, the rule for getting the tarball is borked... but it has been forever
[13:22] and I didn't want to fix it
[13:23] thanks, those comments are useful
[13:26] jamespage: yup, thanks
[13:27] m_3, mgz: if you do want to name the package juju-0.7 then we need to deal with transitions from juju -> juju-0.7
[13:27] the prerm 'rm' might not be necessary with '--compile' in the rules... purge wasn't cleaning out that directory for some reason
[13:31] jamespage: yeah, I guess I don't understand exactly how an upgrade would happen then... I thought it would remove old and install new, which should be fine with /usr/lib/juju-0.6 and /usr/lib/juju-0.7
[13:31] jamespage: the plan is to add a meta-package I think
[13:32] m_3, for the files yes they would get automatically upgraded; but the alternatives will break
[13:32] pushing a fix for the issue I noticed
[13:33] actually forget that - you use a prerm to remove the older version's alternatives
[13:33] so it's OK I think
[13:33] m_3: sorry - being dumb
[13:34] jamespage: dude, I'm just barely caught up on this one... running through upgrade scenarios
[13:46] mgz, m_3: did I say to merge the 0.7 changelog entries?
[13:52] jamespage: nope.
I can do that
[13:53] was a little confused as to the right thing, with ppas in the mix as well.
[13:53] mgz, ignore PPA changelog entries - only distro ones are relevant
[13:55] jamespage: I assume '$< >' was an accidental autocomplete and not something strange required by dh?
[13:56] m_3: that's makefile magic
[13:56] oh, nm... it's rhs and then redirect to file
[13:56] doh
[13:56] $< is the read-in dependency (in this case XX.in) and write out >
[13:56] right
[13:57] wasn't expecting the write out... my bad
[13:58] what a difference an '-i' makes :)
[14:00] have pushed the changelog fix
[14:01] m_3: are you working on the alternatives changes? you can either base off my branch, or I can pull in your parts when you're ready
[14:02] mgz: I based off of your branch... pushing them one at a time to ~mark-mims/ubuntu/raring/juju/0.7
[14:03] mgz: figure you can merge them and resubmit your branch
[14:04] jamespage: mgz: any ideas why this won't purge cleanly unless the 'rm -Rf /usr/lib/python2.7/dist-packages/juju' is there?
[14:05] m_3 what error do you get?
[14:05] it'll take a bit to reproduce, but essentially it was a handful of "won't remove directory because it's not empty"
[14:05] I sort of assumed it was .pyc files at the time, but that's not necessarily true
[14:06] I'm sure it's cause I'm overriding dh_auto_install with
[14:06] 'set -e && python setup.py install --root=debian/juju --install-layout=deb --install-scripts=/usr/lib/juju-$(VER)/bin --compile'
[14:07] distutils magic that I'm missing by calling it manually
[14:12] mgz: kicking off pbuilder to test the postinst/prerm changes
[14:20] mgz: quilt's failing on that zookeeper patch
[14:21] 0.6 still debuild's fine though
[14:22] mgz, m_3: you guys know about bzr merge-upstream right?
[14:22] jamespage: um.... no
[14:23] googling now
[14:24] :)
[14:24] I was tarring
[14:26] jamespage: when the branch isn't horribly broken, yes
[14:27] mgz, is the last 0.6 update in the branch good?
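(The '$< >' exchange above is about make's automatic variables: `$<` is the first prerequisite of the rule — the `.in` template — and `>` just redirects the result to the target. A runnable shell equivalent of that pattern, with an invented `@VER@` placeholder and file names that are illustrative rather than the packaging branch's actual contents:)

```shell
#!/bin/sh
# Shell equivalent of the debian/rules sed-template pattern: read a .in
# maintainer-script template, substitute the version, redirect to the
# generated file. In make this is roughly: sed "s/@VER@/$(VER)/g" $< > $@
set -e
VER=0.7
workdir=$(mktemp -d)
cat > "$workdir/postinst.in" <<'EOF'
update-alternatives --install /usr/bin/juju juju /usr/lib/juju-@VER@/bin/juju 10
EOF
sed -e "s/@VER@/$VER/g" "$workdir/postinst.in" > "$workdir/postinst"
cat "$workdir/postinst"
```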
[14:27] it should work for any future upstream merges... but I needed to do some hacks
[14:28] jamespage: nope. I could have based on the new minor 0.6.1 that I also fixed up, but really that's diverged
[14:29] mgz: so all changes _other_ than manpages are in... just having problems getting past the borking patch atm
[14:29] so they're untested
[14:37] m_3: what is breaking for you exactly atm? we should possibly just add some more test skips for now.
[14:39] mgz: http://paste.ubuntu.com/5695537/
[14:39] from `debuild -us -uc -S`
[14:41] ...internal error? let me try that locally. perhaps just refreshing the patch will help.
[14:42] I'll clear and re-pull
[14:43] damn that repo's big
[14:44] I've been using `bzr builddeb -- -nc -us -uc` which works +- failing tests
[14:46] crap... same thing with just your branch
[14:46] juju-0.6 does fine though
[14:46] ok, well lemme try with bzr bd
[14:54] no love
[14:54] got a call.. biab
[14:59] m_3: I get a failure on your branch, but not mine, will see if I can fix
[15:01] mgz: sorry, I don't really know what's going on with .pc
[15:01] I assume it's where it's applying the patches or something
[15:01] yeah, that's all quilt related
[15:04] m_3: what I get
[15:06] mgz: yikes
[15:06] m_3: try `quilt pop -a` then `quilt push -a`
[15:07] mgz: ok, so I've gotta get some other bits into the rules
[15:08] dang... did the pop and push and it's still failing
[15:51] m_3: so, with your commit of the .in files and one more fix, I get a built package: lp:~gz/ubuntu/raring/juju/mims
[15:52] mgz: oh, whew... that error was when the .in files were missing
[15:52] mgz: good, was scared by that error otherwise
[15:53] I'll fix that up and put it in my branch. what have we got left, the man pages?
[15:54] mgz: yeah, I wanted to test the .in changes
[15:54] mgz: and then figure out how to do the manpages
[15:55] no clue how to version them for alternatives though
[15:55] * m_3 can look at java or something
[15:57] mgz: I was wondering what vim's problem was with the syntax on rules... good catch
=== defunctzombie is now known as defunctzombie_zz
[15:58] m_3: those fixes are now brought across to lp:~gz/ubuntu/raring/juju/0.7
[15:59] mgz: oh, great... I'll rebase
[15:59] thanks!
[15:59] (cherrypicked rather than merged so the fileids stayed the same)
[16:00] ack
[16:08] mgz: argh... no love... the new branch is still doing the same thing for me
[16:09] I give up trying to build it
[16:09] lemme quit screwing with that and see if I can get manpages versioned
[16:12] you can just `rm -rf .pc` if that helps
[16:12] easy enough to get back the state with `bzr revert .pc` after if you're not actually touching the patches
[16:18] still same thing w/o the .pc file /me doesn't know wtf
[16:22] hm, I wonder if it's actually the dsc that's borked. try removing that and the -nc flag if you're still using it?
[16:23] yeah, I just cleaned everything up
[16:23] noted that I had some 0.6 stuff in the higher directory
[16:27] mgz: hey, should I cut the .pc stuff out of the upstream tarball?
[16:27] I'm just creating it from the repo directly w/ no changes
[16:28] it certainly shouldn't be in the upstream tarball
[16:28] ah... ok
[16:28] the upstream tarball is what you get from lp:juju/0.7 and bzr export
[16:29] * m_3 facepalm... sorry man
[16:30] it's okay, the tooling for this is slightly borked in the branch as jamespage mentioned earlier...
[16:30] now that makes sense as to why quilt wasn't happy
[16:30] it puts patches on _top_ of the upstream tarball
[16:31] I'll have another quick look at resolving that
=== salgado is now known as salgado-lunch
[16:33] mgz: whoohooo!
[16:33] that worked
[16:33] ok, wife's waiting on me for dinner...
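(The root cause found above — a `.pc` directory, quilt's applied-patch bookkeeping, shipped inside the upstream tarball — is a classic quilt confusion. A sketch of the cleanup sequence discussed; the `reset_quilt_state` helper and throwaway demo tree are invented for illustration, and on a bzr packaging branch you would `bzr revert .pc` afterwards if the branch versions it.)

```shell
#!/bin/sh
# Sketch: recover a source tree whose quilt state is stale by removing
# .pc and re-applying the series from scratch (quilt push -a).
set -e
reset_quilt_state() {
    # $1 = top of the unpacked source tree
    ( cd "$1"
      rm -rf .pc                 # drop stale applied-patch bookkeeping
      # on a bzr packaging branch: bzr revert .pc  (to restore it)
      if command -v quilt >/dev/null 2>&1 && [ -d patches ]; then
          quilt push -a || true  # re-apply the whole series
      fi )
}

demo=$(mktemp -d)                # throwaway tree with a stale .pc dir
mkdir -p "$demo/.pc"
reset_quilt_state "$demo"
[ -d "$demo/.pc" ] || echo "stale .pc removed"
```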
I'll test out alternatives changes, figure out how to get manpages in
[16:34] have a branch for you in the morning?
[16:34] robbiew: aren't you in Austin? I hear it's the next city to get Google Fiber ;)
[16:34] yeeeeeep!
[16:34] m_3: sounds great
[16:34] mgz: might be able to get it sooner... lemme see how long dinner goes
[16:35] whew
[16:35] now I can build again :)
[16:35] ok, I'm out for a bit
[16:35] robbiew: I'm telling ya now, go to the http://google.com/fiber page and put your address in for pre-registration, you'll be glad you did … it filled up here FAST
[16:36] heh
[16:37] robbiew: yup, here is the exact link, and Austin is now listed … https://fiber.google.com/cities/#header=check
=== defunctzombie_zz is now known as defunctzombie
[16:38] imbrandon_: already did
[16:38] :)
[16:40] m_3: fixed the get-orig-source rule
=== defunctzombie is now known as defunctzombie_zz
[16:52] jcastro, arosales: i've got some questions about the Charm Store but need to get off of the hangout and run to lunch. i'll ping you later today. thanks!
[16:52] ok
[16:53] mattgriffin: ok, thanks for joining us.
=== defunctzombie_zz is now known as defunctzombie
=== salgado-lunch is now known as salgado
[17:36] % juju publish cs:~niemeyer/precise/build-mongo
[17:36] cs:~niemeyer/precise/build-mongo-0
[17:36] !
=== BradCrittenden is now known as bac
[19:00] jcastro: thanks for sending out the charm meeting minutes, nice summary
[19:02] yeah it turned out not as crappy as I thought
[19:03] I'll make them more interesting once I get used to it
[19:52] anyone here played with the openstack packages?
I'm a bit confused on the 'quantum' networking stuff
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
[21:10] jcastro, arosales: ping … got some questions about the Charm Store. have a sec?
[21:36] mattgriffin: hello, sure
[21:37] mattgriffin: sorry for the delayed response. I was grabbing some lunch.
[21:41] arosales: np. are there any plans to support some kind of a feedback loop for Charm usage - either quantitative or qualitative - back to the author?
[21:42] mattgriffin: besides a bug report, a method for continuous feedback
[21:43] mattgriffin: there is https://juju.ubuntu.com/docs/charm-quality.html that we are working on - a "rating" for each charm
[21:43] but that isn't really continuous feedback, and mostly from the LP charmers group and not from users.
[21:44] mattgriffin: aside from bugs, were you thinking of comments on the charm page, or a forum, or something else?
[21:45] mattgriffin: also the metadata.yaml does have the author's contact info, at least for charms that have maintainers . . .
[21:45] mattgriffin: in terms of quantitative feedback we are also working on displaying the charm "usage" or download statistics from the charm store.
[21:46] arosales: yeah… feedback could be interest measured by downloads (not really possible), ratings from Charm Store managers (your link), +1's/likes by users, qualitative feedback/praise/reviews by users, and an easy way to submit bugs
[21:47] arosales: so the purpose would be to help Charm browsers make better decisions and also help Charm authors with feedback on interest in their Charm and ideas to make it better
[21:48] arosales: ah… download stats … very interesting
[21:50] arosales: when might the quantitative feedback measures be available?
[21:50] arosales: stats i mean
[21:51] mattgriffin: hopefully by 13.04, so soon
[21:51] mattgriffin: I like your ideas on feedback
[21:51] shows in the charm page
[21:51] what ways did you envision, "feedback/praise/reviews by users"
[21:51] comments? forums?
[21:52] s/shows/shown/
[21:52] arosales: yeah. something similar to consumer app store feedback - short and sweet … and maybe easily tweeted.
[21:53] mattgriffin: ya so we need some more social networking integration into the charm page.
[21:53] arosales: perhaps be able to leverage some of the reviews infrastructure (or learnings) from USC
[21:54] * arosales makes note for a blueprint for "adding social networking to Charm Store charm pages."
[21:54] mattgriffin: good point on USC
[21:56] mattgriffin: aside from download stats, any other quantitative information you would be interested in?
[21:57] * mattgriffin thinks
[21:59] arosales: some indication of the age or update frequency might be helpful in selecting a charm… i don't know … might be reaching on that one
[22:00] mattgriffin: perhaps recent commits, and age in the charm store
[22:02] arosales: :) cool
[22:07] mattgriffin: just some ideas on information that is there, but may be interesting to users . .
[22:07] mattgriffin: good stuff, I think there are some good ideas there for a blueprint.
[22:09] :)
[22:25] mattgriffin: please add any ideas that come up into https://blueprints.launchpad.net/ubuntu/+spec/servercloud-1305-juju-charmstore-feedback-loops
[22:26] mattgriffin: would be good to virtually see you at http://summit.ubuntu.com/uds-1305/ too
[23:48] arosales: cool. will do