=== ahs3` is now known as ahs3
[03:29] m_3, gluecon looks awesome
[03:29] m_3, pls do a write up
[03:29] * hazmat is wishing he had gone
[03:31] hazmat: yup will do... it was great
[03:31] wish they would've accepted my talks...
[03:32] m_3, "RT “@adrianco: If your instance AMIs depend on an external repo to configure post boot you have a big SPOF.” <- Bingo.."
[03:32] hopefully that'll be different next year though... juju came up in a couple of presentations... and I talked to folks a lot
[03:33] m_3, right on
[03:33] s3 repos hopefully ftw
[03:33] that guy's talk rocked!
[03:34] they manage lots of static amis though... they're quite ripe for something like juju
[03:34] and it was nice to realize we'd been just playing with more scale than they usually use for one app
[03:35] (most of their load is really in big CDNs... not on ec2 proper)
[03:40] m_3, did you see derek richardson's talk?
[03:41] er.. collison
[03:54] * hazmat googles for all the slides he can find
[05:07] guys, is there an indirect way to deploy joomla through juju?
[05:11] JoseeAntonioR: My cousin might bring me some Brazilian Pisco ;)
[05:12] bkerensa: what the... I think there's no brazilian pisco!
[05:12] :D
[05:13] we shall see
[06:41] JoseeAntonioR: somebody wrote a joomla charm once.. it's been a while though
=== popey_ is now known as popey
=== elmo_ is now known as elmo
[13:35] SpamapS: happen to know a list of deps for charm-tools?
finally getting it ported to osx today (or trying) and seems to be missing a few things; works overall, but e.g. proof gets this
[13:35] bholtsclaw@ares:~/Projects/local/charms/precise/newrelic$ charm proof
[13:35] abort: Unknown script hl
[13:35] usage: charm [ --help|-h ] [ script_name ]
[13:35] script choices are:
[13:35] /usr/bin/charm: line 14: exec: list_commands: not found
[13:36] most of it looked to be bash when i perused the source, but i didn't inspect everything
[13:38] ohh nevermind
[13:39] it's fskin bsd's readlink is broken
[13:39] and thus OSX's
[13:39] i always forget that
[13:39] * imbrandon patches
[14:27] imbrandon: the readlink thing can be solved by installing GNU coreutils IIRC
[14:27] imbrandon: we need to get rid of the readlinks I think
[14:28] imbrandon: I was thinking I'd switch charm-tools to work like juju-jitsu, with autotools setting the dirs, rather than trying to be all clever :)
[14:32] yea, that would rock, there is a bit more too but i'm on the phone
[14:32] one sec
[14:32] and yea i installed coreutils via brew
[14:32] to semi solve it
=== mbarnett` is now known as mbarnett
[14:58] anyhow yea, sed is different etc too but gnused is installed via brew too
[14:59] problem is without autotools it's a PITA to switch everything cuz the gnu coreutils are prefixed with g on osx
[14:59] so ginstall gsed etc
[14:59] you CAN put them in the path before the BSD ones but it breaks other things
[14:59] sooo i'd rather not do that, esp in a package i plan to distribute
[14:59] :)
[15:00] * imbrandon is trying to think of the best way, but i think a ./configure that has some if/else for the platform will likely be the best
[15:00] then i dont have to maintain a patch, and it can be "upstreamed" except for the install formula like juju
[15:02] ideally i've been working on a way to upstream it ALL if possible but in a way that the current ubuntu devs / maintainers dont have to care it is or really even realize it
[15:02] etc
[15:02] still a
WIP :)
[15:08] oh hey SpamapS, thought of something new. with the Author spec just now going in, would it be wise to differentiate the Author and Maintainer the way git (and i assume others like bzr, but i do not know) separates the author and committer for commits (e.g. a sponsored commit) in the metadata? i think that would allow for something akin to what we're wanting to do with per-charm-commit access to be a little clearer too that way, kinda like the diffre
[15:08] not 10000% sold on it, but figured now would be the time to bring it up rather than later
[15:13] hrm, ok totally separate train of thought here, back on my coreutils and other gnu/bsd tools "issue": how far fetched do you think it would be to have the juju and related tools, on install, create a chroot env or something like the python virtualenv that puts the gnuutils and the many other things i've come across so far that would be ok to hack on MY machine but not good for long term or wide use, so a terminal could be fired up on osx and then
[15:14] sounds complicated but i think it would be a lot less complicated than the current hacks i'm contemplating and would work on other distros too with very little care for their current setup that way
[15:15] because really, in every context juju cares about, OSX is just a BSD* distro
[15:37] Author spec?
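Back on the BSD-vs-GNU readlink problem from earlier: a minimal sketch of the kind of portable wrapper being discussed (the function name is illustrative, not part of charm-tools; Homebrew's coreutils installs the GNU tools with a `g` prefix, so GNU readlink shows up as `greadlink` on OS X):

```shell
# Hypothetical wrapper for the BSD/GNU readlink split discussed above.
# BSD readlink (OS X) lacks -f; Homebrew coreutils ships GNU readlink
# as greadlink. The function name is illustrative.
resolve_path() {
    if command -v greadlink >/dev/null 2>&1; then
        # GNU readlink from Homebrew coreutils
        greadlink -f "$1"
    elif readlink -f "$1" >/dev/null 2>&1; then
        # Native readlink already supports -f (Linux)
        readlink -f "$1"
    else
        # Last resort: canonicalize the directory with pwd -P
        (cd "$(dirname "$1")" && printf '%s/%s\n' "$(pwd -P)" "$(basename "$1")")
    fi
}
```

An autotools `./configure` check, as suggested below, would pick the right tool once at build time instead of probing at runtime like this sketch does.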
[15:38] err the maintainer field
[15:38] got myself mixed up
[15:40] i'm not sure if a separate author spec is adding significant value
[15:40] yea
[15:40] not really a whole spec
[15:40] just mean while we're adding the maintainer field to the metadata it might be a good idea to add author also
[15:41] as to have a clear separation
[15:42] and really was just a passing thought, it might not be all that great, but it is done in other things like RCS and debian packaging so not too far fetched
[15:42] but figured now instead of a month from now would be the time to think about it
[15:42] I think, for the most part (and quite some time) the author will be the maintainer
[15:43] likely so for 99% of them
[15:43] was more so as to not have to change it later and go through a change again, since we're making one similar change now
[15:43] tis all
[15:43] but yea you're likely correct
[15:44] btw heya marcoceppi, hear you had some of that nasty bug too, hope ya feelin better
[15:44] much
[15:44] :)
[15:45] i got it pretty bad for a few days but i dont think near as bad as you from what jorge said
[15:45] btw that tillo mashup thing is slick, nice job :)
[15:45] twillo*
[15:47] trello?
[15:47] :)
[15:47] oh yea twillo is the sms thing
[15:47] i always mix them up
[15:48] marcoceppi: yeah, I agree, your mashup is cool
[15:48] strapello?
[15:48] yup
[15:48] yea
[15:48] thanks :) need to push up some caching so Jorge's page doesn't take three years to load
[15:48] hahaha
[15:51] today's goal for me is to finish wrangling the kickstrap ubuntu theme into a sphinx theme and base django app for the ubuntu-community-themes pkg. i'm like 99% done, well i am done with the django one, sphinx close but it's a weird templating lang
[15:53] hazmat: latest plan on fixing the oneiric stacked branches was to rename the base (precise) branches back, then fix the stacking, then rename the precise branches again. This sounded like a complete clusterquack to me... I'm gonna explore other options.
nothing new from the lp team, they were mostly focused on lp changes to prevent problems in the future. I'll chase clint down and brainstorm once there's more caffeine in the mix.
[15:54] SpamapS: please budget a little time to discuss ^^ later today
[15:57] m_3, hmm.. yeah.. that seems like a reasonable solution, cluster quacky indeed.. but we need to restore access to fix.. er break.. and then rename.
[15:58] that's all kinds of retarded i agree
[15:58] * hazmat dons a PC hat
[15:58] no offense intended
[15:59] m_3: I think thats actually an ok plan.
[16:00] Its just a result of the weird distro model which we've inherited. We won't repeat the problems with quantal
[16:01] this weekend then
[16:01] SpamapS, you mean do the invalid restack, to only temporarily make the branches inaccessible prior to renaming..
[16:02] maybe we can even arrange to have a lp superpower person available too
[16:02] the ideal solution james_w pointed out was stacking on the id
[16:02] er. the branch id
[16:02] so it doesn't die on rename
[16:02] hazmat: bugs are in to fix that going forward
[16:02] but it won't help now
[16:02] well... maybe it would
[16:03] there is a way to avoid breaking all the precise branches while doing it
[16:03] but it's more work
[16:03] oh?
[16:03] james_w: do tell
[16:03] hazmat: yes, that would be good. Another good solution would be to fix the charm store/charmworld to not care about the branch name for official branches
[16:03] the stacking could be fixed directly, but someone would have to work out the branches they were supposed to stack on, and find out the ids of them
[16:04] james_w: oh... that doesn't seem too bad then
[16:04] james_w: is the ID not accessible via LP API/ssh+bzr?
[16:04] SpamapS, hmm, maybe it is
[16:04] james_w: and a _whole_ lot safer
[16:04] i.e., can be done right now
[16:04] not directly that I know of, but it's possible it's available
[16:05] m_3, its not something we can use right now
[16:05] m_3, even if it was available
[16:05] what would renaming the precise branches break?
[16:06] james_w: access via the lp:charms/precise aliases... as well as access via the store (juju deploy mysql)
[16:06] james_w, long term nothing, temporarily it would break anything looking for charms, and existing branches that want to pull/push
[16:06] er.. on the push side it might get uglier
[16:06] if somebody pushes in the middle of this op..
[16:06] james_w, m_3, it won't break the store
[16:07] scheduling it over the weekend is just a crapshoot
[16:07] huh?
[16:07] the store assembles zipfiles for download
[16:07] sure it will... the store is looking for
[16:07] it doesn't serve directly from bzr
[16:07] oh, you mean the store has it cached
[16:07] ok
[16:07] the store won't see anything new, but it's not going to delete all the charms is it?
[16:07] it polls bzr to look for new revs
[16:07] james_w, no the store and charm browser never delete things
=== wrtp is now known as rogpeppe
[16:07] and it would presumably pick up again when they were renamed back
[16:07] the lp:charms/precise aliases won't break
[16:08] hmm.. actually the charm browser will.. on the assumption that the upstream branch changed.. but it will fetch them again when they're back in place..
the metadata would remain though
[16:08] and modern bzr uses those I believe, so most pulling and pushing would still work I think
[16:08] well maybe I'm just being a panty-waist then
[16:08] * m_3 puts on _his_ PC hat
[16:09] not renaming would be better
[16:09] but likely needs help from someone with lots of LP internals knowledge, and probably a web-ops too as it will probably require some db/fs surgery
[16:10] the rename is API doable
[16:10] as is the stacking change
[16:10] so we'd have, what, 15 minutes of exposure?
[16:10] I say do it
[16:10] yup
[16:11] its not ideal as a long term solution for other rels, but it works for restoration
[16:11] ok, I'll test it out on a single branch and then go from there if that works
[16:11] SpamapS, the stacking change can be done over the API?
[16:11] james_w, so if we wanted to do this in the future without running branch-distros.. could we just push the branches and then set some api flag to mark the series active?
[16:12] ie.. bypass most of branch-distros
[16:12] hazmat, yeah, you could
[16:12] you can't manipulate the series, but that's disjoint anyway
[16:13] once the series is created, you can push the branches, then use the API to make the official branch links
[16:13] but it's a one-line fix to branch-distro to avoid this
[16:14] it doesn't get the new branch names "right" but I understood that was going to be changed in the charm store anyway
[16:16] james_w: well not API, but we can bzr reconfigure and push, right?
[16:16] SpamapS, I don't know if that will change it to stack on the id rather than the name
[16:16] we can experiment
[16:16] SpamapS: that's my understanding of the current plan... still needs to be tested
[16:16] Oh I was thinking of just changing the stacked on name.
[16:17] there are a couple of scripts we can run to definitely fix it up from the LP side while the precise branches are renamed
[16:17] we rename the base, then bzr reconf, then push that back up, then rename the base back to trunk
[16:17] the ID is a nice idea, but if I can't get it, I'd rather have a solution for today and then fix the net distro branching.
[16:17] you can't reconf to a name that doesn't exist
[16:17] hmm
[16:17] s/net distro/next distro/
[16:17] how about this
[16:17] right, so we rename /trunk back to /precise (the base branches)
[16:17] 1. branch from the correct name to the wrong name for the branches for precise
[16:17] james_w: heh right, so we rename, branch, rename, reconf, push
[16:17] 2. done
[16:17] _then_ we should be able to reconfigure
[16:18] james_w: Ahh so just fake it basically?
[16:18] * SpamapS tries that
[16:18] SpamapS, it's perfectly valid from bzr's point of view, just a bit messy
[16:18] might confuse a few people
[16:19] it could be cleaned up, but there would be working branches with no interruption
[16:19] oh, so you mean we create new /precise's instead of renaming from /trunk?
[16:19] * SpamapS tries it w/ hadoop-slave
[16:19] cool
[16:19] m_3, yeah, they will stack on trunk, which will make for slower access to the oneiric branches, but it should fix them
[16:19] since that one is borked in precise anyway
[16:20] works
[16:20] james_w: I knew we gave you a laptop for a reson.
[16:20] reason even
[16:20] so we still bzr reconfigure those branches too?
[16:20] somebody give me a new keyboard ugh
[16:20] heh
[16:20] m_3, that would be good
[16:20] or just leave the fake-out in place?
[16:20] lp:charms/oneiric/hadoop-slave works
[16:20] whoohoo!
[16:21] well that was a lot easier than we were making it :)
[16:21] I don't know if it will stack on the id at that point, so no-one is allowed to rename the precise branches again
[16:21] all I did is push lp:charms/precise/hadoop-slave -> lp:~charmers/charms/precise/hadoop-slave/precise
[16:21] right
[16:21] bzr branch -d lp:charms/precise/$charm lp:~charmers/charms/precise/$charm/precise
[16:21] but we really should bzr reconf, then we can remove the extra /precise branch
[16:21] someone in ~charmers can put that in a for charm in $charms loop
[16:22] m_3, +1
[16:22] james_w: does that pull/push automatically or does it happen all on the other side?
[16:22] stacked on: /~charmers/charms/precise/hadoop-slave/precise
[16:22] still stacked on the name :)
[16:22] SpamapS, it's client side, it's just easier to type :-)
[16:22] james_w: gotchya. Doing that now
[16:23] err, I probably got the parameters the wrong way round
[16:23] bzr branch lp:charms/precise/$charm lp:~charmers/charms/precise/$charm/precise
[16:23] or whatever works, you get the idea :-)
[16:23] bzr reconfigure --stacked-on lp:charms/precise/$charm lp:charms/oneiric/$charm is the next step I think
[16:24] james_w: yeah the -d was wrong
[16:24] Ok thats running
[16:24] the oneiric branches should be resurrecting one by one :)
[16:24] then deleting all the /precise branches
[16:24] m_3: can you verify alice-irc works?
[16:25] which can be scripted, but last I looked it wasn't directly available from Launchpadlib
[16:25] james_w: or maybe --stacked-on lp:~charmers/charms/oneiric/$charm/trunk?
[16:25] SpamapS: checking
[16:25] m_3, yeah, that's probably better, you're right
[16:26] * james_w -> lunch
[16:27] james_w: thanks!
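The recovery steps being pieced together above can be sketched as one loop. The branch paths are the ones quoted in the conversation; the `BZR` override is my own addition so the commands can be previewed (`BZR="echo bzr"`) before touching Launchpad, and none of this has been run against a real Launchpad:

```shell
# Sketch of the restack recovery discussed above. With BZR="echo bzr"
# the commands are printed instead of executed (a dry run).
BZR="${BZR:-bzr}"
restack_charm() {
    charm="$1"
    # 1. Recreate a 'precise' base branch under the name the oneiric
    #    branches were (wrongly) stacked on
    $BZR branch "lp:charms/precise/$charm" \
        "lp:~charmers/charms/precise/$charm/precise"
    # 2. Repoint the oneiric branch at the real precise trunk
    $BZR reconfigure \
        --stacked-on "lp:~charmers/charms/precise/$charm/trunk" \
        "lp:~charmers/charms/oneiric/$charm/trunk"
}
```

Someone in ~charmers could then call `restack_charm` in a `for charm in $charms` loop, as suggested in the conversation.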
[16:27] np
[16:28] SpamapS: can branch lp:charms/oneiric/alice-irc, which shows parent_location = bzr+ssh://bazaar.launchpad.net/+branch/charms/oneiric/alice-irc/
[16:29] but don't know how to see what that's stacked on
[16:30] m_3, bzr info lp:charms/oneiric/alice-irc should show it
[16:33] m_3: Stacking is unchanged at this point
[16:33] ok, as expected, still stacked on /precise
[16:33] right
[16:34] its slow going
[16:34] I'm up to hadoop-mapreduce
[16:35] 'morning all
[16:35] is there a hdfs charm yet? i've seen a nice addon for that, that adds an external REST api to do crud ops on it
[16:35] moins
[16:36] imbrandon: well really the hadoop charm is deploying HDFS
[16:36] so how do we tell http://paste.ubuntu.com/1006653/ to stack upon itself?
[16:37] m_3: I think we want it to stack on precise
[16:38] m_3: just.. the precise we have.. not the precise we renamed away from. :)
[16:38] ohhh...
[16:39] SpamapS: nice, i may poke that rest api later then and see if it will make a good sub charm for hadoop
[16:39] ok, so what happens in q? we change everything to stack on top of ....../trunk for quantal?
[16:39] so the base is following series going forward?
[16:39] anything different for LTS?
[16:40] m_3: I'm not sure. We need to think about this. We haven't really given thought to how we'll do updates.
[16:41] m_3: My thinking is we'll just evaluate changes to make sure they won't break existing deployments, and then just push them into the branches.
[16:41] SpamapS / m_3: this is where i said the drupal branch model would work better
[16:41] let me see if i can find a good infographic and real example and show yall. cuz i'll never sell/explain it right
[16:42] and its really just a minor change but significant one
[16:42] imbrandon: yeah, we need good ideas
[16:42] otherwise we'll put our brains on it..
[16:42] and then.. well..
thats going to get ugly ;)
[16:42] lol
[16:43] well its one of the things that came from drupal with their "not invented here" mentality that actually is better than most anywhere else
[16:43] unlike most of the stuff thats half baked
[16:43] at this point I'm leaning towards let's backport changes manually and not use any _help_ that stacking might give us for this process
[16:44] tl;dr its basically flip the charmname and the series in the url (that also means the series is a branch)
[16:44] tried bzr reconfigure --stacked-on lp:~charmers/charms/precise/alice-irc/trunk lp:~charmers/charms/oneiric/alice-irc/trunk
[16:44] but there is lots of good reasons why
[16:44] bzr: ERROR: Lock not held: LockableFiles(lock, bzr+ssh://bazaar.launchpad.net/~charmers/charms/oneiric/alice-irc/trunk/.bzr/repository/)
[16:44] maybe
[16:44] i need to do it locally then push
[16:45] soo all the charms become like this lp:~charmers/charms/alice-irc/precise, and trunk is the current DEV series, like right now Q, but each series gets a branch of the charm not a LP series
[16:45] make sense?
[16:46] nope, still could not acquire remote lock when trying to push the changes... locally restacked though
[16:46] might need lp for that
[16:47] http://paste.ubuntu.com/1006670/ looks like exactly what we want
[16:47] thats how the few k modules on drupal.org do their thing, each one has http://code.drupal.org///
[16:47] m_3: stacking doesn't really help you. Its just an efficiency play for Launchpad IIRC
[16:48] so like drupal.org/views/views/7.x
[16:48] nice
[16:49] imbrandon: so there's a branch per charm, and the one people push to (lp:charms/foo) is the current dev series?
[16:49] imbrandon: thats *exactly* how it works now.
[16:49] alias bbranch='bzr branch --dont-you-dare-stack-anything-ever'
[16:49] imbrandon: sorry, there's a branch per charm, per series
[16:49] m_3: don't crush LP just because you don't like stacking. :)
[16:50] * m_3 is just a hater :)...
need more coffe
[16:50] e
[16:50] SpamapS: http://drupal.org/node/1015226
[16:51] SpamapS: no, reversed from now
[16:51] SpamapS: where the series is set by the charm as a branch
[16:51] imbrandon: but its conceptually identical
[16:51] SpamapS: yes but fixes the problems like this
[16:51] seriously though, I'm talking out of ignorance and need to take more time to learn the tool
[16:51] that link is their policy
[16:51] SpamapS: http://drupal.org/node/1015226
[16:52] imbrandon: the only problem we have now is that the LP tools and the charm store don't jive
[16:52] take a gander real fast
[16:52] SpamapS: right, but there are other small nuances that make it nicer too
[16:52] like being able to share merges easier
[16:52] between series
[16:52] You can do that w/ the current system too
[16:53] SpamapS: so when your run is done, we should get #lp to unlock the series so we can push up the restacking changes, then remove the extra /precise branches
[16:53] sure, i dont mean you cant, just a nicer workflow that happens to fix this series lp issue too
[16:53] is what i meant
[16:53] worth a look at least
[16:53] Like, how hard is it really to type 'bzr merge lp:charms/precise/foo' to get the latest goodness from precise in your branch?
[16:53] What you want is git
[16:53] we've talked about this
[16:53] its up for discussion
[16:54] well yea but it could be done here too
[16:54] when i thought about it later
[16:54] manual backport by the maintainers is cool for now
[16:54] whether I type bzr merge lp:charms/foo/precise or lp:charms/precise/foo .. matters not. Right?
[16:54] As long as they share ancestry
[16:54] so it goes smoothly
[16:54] sure sure, i am not trying to say the way we have is totally or even in any way broken
[16:54] then we can automate it
[16:55] just that there is "this other way thats cool and not tooo much of a change for some added stuff"
[16:55] I'm suggesting that their policy, and ours, are not really different at all.
[16:55] they arent
[16:55] Theirs is just optimized for git
[16:55] thus why i said a small change but really overall it is significant
[16:55] well the way they use named branches its really a lot like bzr too
[16:56] as it WAS based on cvs
[16:56] hehe
[16:56] or feature branches, however you call them, same thing
[16:56] anyhow it boils down to one main thing: trunk or master or whatever is never the branch that is published from
[16:57] ever
[16:57] thus declaratively saying in the branch what its for
[16:57] not just what series on LP its pushed to
[16:57] thus would work on any hosting service
[16:58] and in our situation that would make branchname === series name
[16:58] m_3: Ok, I'm going to bzr reconfigure all the stacking now in oneiric, MMkay?
[16:59] I think what james_w put out should work without having it locally
[16:59] imbrandon: branchname is basically immaterial in the current system. The fact that the charm store chose 'trunk' was arbitrary. Its really "the series+charm official branch"
[16:59] we just gotta get the series open for pushing
[16:59] SpamapS: ok, i'll stop now, but please do read over that link a little sometime even in passing, just cuz you know i dont explain things the best either on irc
[16:59] SpamapS: i fully realize that
[16:59] SpamapS: i'm saying about how it COULD be, and the good things it brings if done that way
[16:59] SpamapS: i.e., `bzr reconfigure --stacked-on lp:~charmers/charms/precise/alice-irc/trunk lp:~charmers/charms/oneiric/alice-irc/trunk`
[16:59] like i said i dont mean its broken now
[16:59] imbrandon: so whats the advantage of swapping series and charm name in the path, which is all I see.
[17:00] SpamapS: ive been trying to explain that but you're equating it to whats there now instead of looking over all what it would mean i think
[17:00] and yes thats technically all it would be
[17:00] Because to me, there is a psychological benefit to grouping all of the charms together as a single entity per series, something that we make sure all works together.
[17:01] well you still have that
[17:01] you dont lose it
[17:01] what do I gain? I see nothing in this policy that changes anything we're doing
[17:01] you just may need a different view to see it
[17:02] you gain the control in the charm branch itself over the series AND you lose the headache of LP not jiving
[17:02] think about it like this
[17:02] when you fix a bug in juju core
[17:02] you push it to lp as lp:juju/some-named-fix
[17:03] because that has value in the name, same thing for the series instead of trunk becoming rolling
[17:03] and then the headaches now
[17:03] we have charm/series-aka-branch
[17:03] not trunk
[17:04] and again i dont mean the way we have is broken, just that there really is value in the other small change
[17:07] but yea i'm terrible about explaining concepts on irc, so glance over the link as you have time and i'll slap a quick image in gimp together that will i think illustrate it better too, as its not an urgent thing, it could be done anytime in the next 6 months before q releases and still have the same impact, no need to rush it
[17:08] that way i can present it to the list too, hopefully in a coherent manner
[17:11] imbrandon: no, I push the fix to lp:~clint-fewbar/juju/some-named-fix
[17:13] imbrandon: and with charms, I push to lp:~clint-fewbar/charms/precise/foocharm/some-named-fix as well..
and then propose for merging into lp:charms/precise/foocharm
[17:13] imbrandon: and then if that fix is good, and introduces no regressions/etc, and the branches have not diverged, it can be directly pushed to lp:charms/oneiric/foocharm
[17:14] imbrandon: if they have diverged, at worst, I must bzr merge it into oneiric.. a workflow that I do not love the mechanics of, but works fine in terms of how straightforward it is
[17:18] umm you just said the same thing i thought i just had
[17:19] but if thats not how i said it thats exactly how i mean, yes, thats the current flow and is exactly right and exactly shows what i mean with the other but ... yea, i need to get better at abstract --> real value conversion articulation
[17:20] Sure, a picture may help. :)
[17:20] james_w: when I tried using bzr reconfigure on one of the branches I got this: If your kernel is 2.6.30+, you don't need to follow this guide anymore as the kernel includes a proper driver.
[17:20] dohhh
[17:20] james_w: take-2 .. when I tried using bzr reconfigure on one of the branches, I got THIS: bzr: ERROR: Lock not held: LockableFiles(lock, bzr+ssh://bazaar.launchpad.net/~charmers/charms/oneiric/hadoop-slave/trunk/.bzr/repository/)
[17:25] SpamapS, heh, I was going to say that was a very unexpected message from bzr!
[17:25] SpamapS, would you run again using "bzr -Derror -Dlock" please?
[17:26] it should spit out a traceback, and the last stanza of your ~/.bzr.log should have some information on the locks that are being taken and released
[17:27] james_w: http://paste.ubuntu.com/1006739/
[17:28] SpamapS, try "bzr break-lock bzr+ssh://bazaar.launchpad.net/~charmers/charms/oneiric/hadoop-slave/trunk/" and then try again
[17:30] Break held by clint-fewbar@bazaar.launchpad.net on crowberry (process #1853), acquired 11 minutes, 15 seconds ago?
([y]es, [n]o):
[17:30] Broke lock bzr+ssh://bazaar.launchpad.net/~charmers/charms/oneiric/hadoop-slave/trunk/.bzr/branch/lock
[17:45] james_w: lock broken, same issue
[17:46] SpamapS, do you have a traceback?
[17:46] http://paste.ubuntu.com/1006767/
[17:46] james_w: ^^
[17:47] james_w: almost like its a deadlock or something
[17:48] SpamapS, if you break-lock again and do the bzr info, does it show the old or new stacked location?
[17:54] SpamapS, ah, it may work better if you call "reconfigure" with the sftp:// URL rather than the lp: or bzr+ssh: one
[17:59] Any good examples of how people are integrating juju and chef? My google-fu isn't working for me right now.
[18:02] Urocyon: not yet... we've got tasks for charms that call chef-solo recipes, and a subordinate charm that might be able to register a service-unit as a node in a role with a chef server
[18:03] Urocyon: also probably a charm to help deploy chef-server
[18:03] Urocyon: it should be easy to call recipes from within charm hooks... the trick is splitting up the logic into "installation" logic -vs- "relation" logic
[18:04] I don't see any chef related charms in jujucharms.com/charms/precise; is there someplace else I should look for those?
[18:05] Urocyon: no, they don't exist yet... was just mentioning the todos :)
[18:07] Urocyon: we have tons of ideas, but very little time. Are you interested in experimenting?
[18:07] yes, please help!
[18:09] i've just started getting my feet wet here. But I have some time today to experiment.
[18:11] Urocyon: you can see somewhat similar things done for puppet... lp:~michael.nelson/charms/oneiric/apache-django-wsgi/trunk, and lp:charms/puppetmaster
[18:12] Urocyon: do you have an existing chef system that you're interested in hooking up to juju?
[18:14] james_w: weird, sftp does seem to have worked.
[18:15] james_w: though I think stacking is now borked
[18:15] ha!
nice
[18:15] SpamapS, it's a bzr bug then, it's missing some required handling for smart servers
[18:15] sftp://clint-fewbar@bazaar.launchpad.net/~charmers/charms/oneiric/hadoop-slave/trunk/ is now stacked on ../../../precise/hadoop-slave/trunk
[18:15] double bug!
[18:15] but now I can't 'bzr info -v lp:charms/oneiric/hadoop-slave
[18:15] actually, that's correct right?
[18:16] relative path for the ~charmers user
[18:16] hadoop-slave may be a bad example
[18:16] since we sort of half unpromulgated it
[18:16] bzr info lp:~charmers/charms/oneiric/hadoop-slave/trunk looks fine though
[18:17] m_3: hm
[18:17] so, the other ones don't stack on ../../..
[18:17] and I can branch it from there, so it's behaving ok
[18:18] they stack on an explicit /~...
[18:18] m_3: can you branch from lp:charms/oneiric/hadoop-slave?
[18:18] bzr: ERROR: Not a branch: "bzr+ssh://bazaar.launchpad.net/+branch/precise/hadoop-slave/trunk/".
[18:18] I get that because the stacking is relative
[18:18] instead of absolute
[18:19] nope... no love from lp:charms/oneiric/hadoop-slave
[18:19] yeah thats borken
[18:20] others look to be stacked on /~charme...
[18:20] but I'm having problems with alice-irc...
[18:20] bzr info lp:~charmers/charms/oneiric/alice-irc/trunk looks fine (abs path on base)
[18:21] but bzr info lp:charms/oneiric/alice-irc is borked
[18:21] * m_3 more confused now
[18:21] * SpamapS considers just going --unstacked
[18:21] oh!
[18:21] didn't know that existed
[18:21] of course
[18:21] Yeah
[18:21] its simpler
[18:21] for this particular conundrum
[18:22] plus like a hundred million
[18:22] eventually we'll have to make it work stacked
[18:22] but Yeah, let me try that
[18:22] I'd ask why, but...
[18:23] m_3: it will be important when we have 1000+ charms with 1000+ revs each
[18:24] m_3: but we'll probably be using git by then. :-P
[18:24] ;)
[18:24] * m_3 no comment
[18:24] * SpamapS hates relying on solutions that don't exist yet
[18:24] I know!
[18:24] alright
[18:24] unstacked works
[18:24] F it, I'm unstacking all oneiric branches
[18:25] and then we can delete the extraneous 'precise' branches
[18:25] ok
[18:25] and then we can actually *MOVE ON WITH OUR LIVES*
[18:25] hrm, we need to fix charm list to show the non-dev-focus branches as aliases, that would be cool
[18:26] oh, lp:charms/oneiric/?
[18:26] yeah
[18:26] does it matter?
[18:26] yeah
[18:26] so I can say "list all the oneiric charms"
[18:26] right now I can't really
[18:26] ugly multiple grep pipes
[18:26] but yeah
[18:27] * m_3 likes charm list a lot
[18:27] even the grep pipes aren't "official"
[18:27] since in theory we could have a branch owned by ~charmers that has been unpromulgated
[18:28] (which at this point is only ceremonial since the charm store does not delete.. but still)
[18:28] oh, I see
[18:28] sftp://clint-fewbar@bazaar.launchpad.net/~charmers/charms/oneiric/appflower/trunk/ is now not stacked
[18:28] m_3: try appflower
[18:29] yikes... no info from the alias
[18:29] wtf
[18:29] long-form gave info fine
[18:29] branching now
[18:30] perhaps have to repromulgate?! that looks really weird
[18:30] parent branch: bzr+ssh://bazaar.launchpad.net/~4-ja-d/charms/oneiric/appflower/trunk/
[18:30] branched fine from both
[18:30] SpamapS: but you could if you just said list all oneiric charms in lp:charm/ with an oneiric branch :)
[18:30] * imbrandon stops
[18:31] SpamapS: the orig author is still in the set of branches (not saying stack)... jorge was in alice
[18:31] imbrandon: you're smarter than that. :)
[18:31] heh
[18:31] m_3: ah, so maybe that branch got deleted?
[18:32] hmmm...
lemme dig
[18:32] m_3: I don't think info works on any aliases
[18:32] bzr: ERROR: Parent not accessible given base "bzr+ssh://bazaar.launchpad.net/+branch/charms/precise/alice-irc/" and relative path "../../../../../~jorge/charms/oneiric/alice-irc/trunk/"
[18:33] works perfectly well on lp:charms/hadoop
[18:33] yeah
[18:33] but I think thats only by mistake
[18:33] m_3: I think thats just because the relative path matches
[18:33] sorry, got distracted there for a minute.
[18:33] SpamapS: yeah, I'm getting the same results for other oneiric aliases
[18:34] Urocyon: np... nature of the medium :)
[18:34] SpamapS: I guess it doesn't matter if we can branch
[18:34] actually, bzr info lp:charms/oneiric/jenkins looks fine
[18:35] charm list | awk -F/ '/lp:~charmers\/charms\/oneiric/ {print $4}'
[18:35] best I can do for a "list of oneiric charms"
[18:35] Yes, I have a chef server running currently. I was looking at juju as a way to provide some orchestration for my deployments... like making sure servers came up in a particular order, or that existing servers could react automatically to say a new webnode coming on line, or to integrate a new database server into an existing cluster.
[18:35] SpamapS: good enough
[18:36] Urocyon: sounds like a perfect integration case
[18:37] Urocyon: not nec the easiest _start_, but...
[18:38] Urocyon: ordering is hard, and juju can't really help you with that..
[18:38] Urocyon: juju can help you make your servers wait for their dependencies, so they in effect end up in the right "order"
[18:39] that seems sufficient, and maybe even preferable to specifying order.
[18:39] Urocyon: Right, it usually is. :)
[18:40] right... ordering between different services, no prob... within the units of a single service is hard
[18:41] Urocyon: simplest thing you could probably do would be to have charms which feed data into databags and run chef in response to the changes in configurations.
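The "list of oneiric charms" one-liner quoted above can be generalized to any series. A small sketch (the function name is illustrative; it assumes `charm list` emits `lp:~charmers/charms/<series>/<charm>/trunk` style URLs on stdin, as in the log):

```shell
# Generalized form of the awk one-liner above: print charm names for a
# given series from `charm list`-style output read on stdin.
# Uses index() rather than a regex to avoid escaping the '~' and '/'s.
list_series() {
    awk -F/ -v series="$1" \
        'index($0, "lp:~charmers/charms/" series "/") {print $4}'
}
```

Usage would be `charm list | list_series oneiric`. As noted in the log, this is still unofficial: it only sees branches owned by ~charmers and can't tell promulgated from unpromulgated ones.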
[18:41] Urocyon: one thing that may be difficult to get over is that juju wants to provision everything [18:41] m_3 I thought maybe juju's various hooks and event handling could help there. When x happens, then add role y to server Z .. sort of thing. [18:42] IMO, juju should decouple its provisioning parts from its event handling parts.. but thats a large discussion. :) [18:42] Urocyon: yeah, it's easier to add a juju machine to chef, than the other way around atm [18:43] Urocyon: so perhaps start with the specific case of adding a new db to an existing app [18:43] m_3: FYI, unstacking progressing on all branches [18:43] I was looking previously at cloudformation, which has a similar limitation. [18:43] SpamapS: nice... you are a god among mere mortals [18:43] * SpamapS deflects that comment to james_w [18:44] Urocyon: the reason for the provisioning "limitation" is that there is great power in knowing that you will have a clean box at the start of things. :) [18:44] you're just jealous of his new laptop [18:44] pretty much [18:44] carbon fiber > aluminum [18:45] Urocyon: eventually I think we'll have a mode for juju where you just install it on existing servers and they phone back home and start doing their orchestration duties. [18:45] yeah, I was pretty sad they didn't have extras for the judges [18:45] m_3: I'm just tired of being a walking billboard for Apple. ;) [18:46] But damn.. System76 and ZaReason make, frankly, f'ugly laptops [18:46] SpamapS: that might not be too hard to get rigged up... enroll an existing machine with juju [18:46] m_3: we might even be able to do it w/ jitsu :) [18:46] m_3: just convince a machine it is running the local provider.. ;) [18:47] I'm tactile, so the aluminum just feels good [18:47] ha!
[18:47] m_3: or maybe orchestra/maas [18:47] hmmmm [18:47] m_3: and then just plug in machine ids using our handy dandy zk admin privileges [18:47] jimbaker: ^^ get on it [18:47] really though, what's in the way of installing juju packages and passing it credentials in databag [18:48] m_3: haha.. "credentials" [18:48] * m_3 sigh [18:48] meaning, the address of ZK? ;) [18:48] yes, that [18:48] m_3: don't worry, we'll have ACLs soon [18:48] I think [18:49] zk's gotta get a new node put in, but jitsu can do that really [18:49] Urocyon: fun project :) [18:50] SpamapS: what sort of battery life are you getting nowadays on your little one? [18:50] I really was wishing I had that the past couple of days... we were short on juice at gluecon [18:55] m_3: 5.5hrs usually [18:55] SpamapS, unprovider sounds like a good idea to me [18:55] m_3: sometimes only 5 if I watch too many videos :) [18:55] wow [18:56] m_3: yeah, precise added the "deep" sleeping where it basically turns everything off if all I'm doing is staring at the screen for a few seconds [18:56] deep C6 or whatever its called [18:56] oh nice [18:56] which realistically, we all do quite a bit of [18:56] was that a tweak or out of the box [18:57] its standard [18:57] * m_3 looks the other way and whistles [18:57] the only tweak I did was choose Unity 2D [18:57] seriously :) [18:57] my mbp13's still a hog... less than half that time === Furao_ is now known as Furao [19:00] Where is the code for existing charms stored? [19:02] Urocyon: jujucharms.com has browsable links to code.launchpad.net/charms [19:03] Urocyon: sorry, the lp:charms/hadoop repo urls are actually "launchpad" shortcuts [19:04] we just get used to throwing "lp:" around in here without warning or explanation [19:05] Urocyon: the charms are stored/developed in bzr branches on launchpad. You can also use the built in "charm store" by just typing 'juju deploy foo' ..
but there are issues with doing that in production (you can't change the charm for instance) [19:07] apt-get install charm-tools && charm getall && grab coffee [19:08] I haven't ever done any work with launchpad, but I'm guessing that's a bit like github, and that I should fork the charms I need and deploy using those? [19:08] Urocyon: in short, yes [19:10] m_3: ugh.. charm getall sucks man. [19:10] <-- author of charm getall [19:11] afaik, the biggest difference between launchpad and github is that lp has a lot of test/build-related stuff to make sure ubuntu packages are all green [19:12] otherwise, you can pretty much assume common features [19:12] Urocyon: btw, github.com/charms [19:12] Urocyon: we'd rather that you use the charms as-is, and feed back any changes you see necessary so that all can benefit [19:13] Urocyon: unlike chef, juju is designed for encapsulation so you don't have to fork (though you can of course) [19:13] m_3: I'm still not convinced that having a github mirror is a good thing. Just confuses people when they're ready to start doing real charm store dev. [19:14] m_3: think how hard its going to be for somebody who forks them on github to actually get that change back in to LP [19:14] SpamapS: true, and we haven't had that much call from users to have them hosted on github [19:15] SpamapS: pull requests were gonna be manual until we decided if it was worth it === spladug is now known as SteveNewhouse [19:15] I'm happy to just turn them into redirects whenever people think we should === SteveNewhouse is now known as spladug [19:16] happy with that being a ~charmers decision... and not just mine [19:16] sure, just trying to get the swing of launchpad here. [19:16] Urocyon: oh right... we're just off on a tangent :) [19:18] *DOH* .. just noticed that txzookeeper's packaging states MIT/X/Expat but txzookeeper is LGPL3 [19:18] * SpamapS is undaunted by the Debian NEW rejection.
[19:19] when contributing, is there an official way your hooks should be written? i.e. write hooks in bash. [19:20] Urocyon: nope [19:21] Urocyon: thats intentional. Write them how you want. :) [19:22] wow unstacking is *slow* [19:22] Just realized its still going.. rabbitmq-server [19:25] I saw something about cycles in stacks being a problem... hope we haven't created any of them [19:27] m_3: the problem with unstacking is that now each branch gets bigger for launchpad to deal with. But thats ok in this instance where everything is just all screwed up [19:30] SpamapS: hmmm, just getting my feet wet, I see what you mean about decoupling provision and event handling... [19:31] Urocyon: yeah, it comes from the goal being to take advantage of the cloud's magic ability to give you lots of shiny clean servers. :) [19:31] Urocyon: but there's value in being able to use juju on dirty old servers too. :) [19:32] * SpamapS wonders if that is a euphemism for waiters in tiki lounges [19:32] * SpamapS heads to lunch [19:33] SpamapS: well I enjoy taking advantage of the magic shiny clean servers, I just am doing it with chef at the moment. Most of the charms it seems overlap with chef though - with a focus on configuring a service. [19:33] I have chef to do that. I just want to better manage relationships and add on event handling. [19:35] SpamapS: just seems like I'm not getting as much use out of the existing charms that way. [19:44] SpamapS, yeah.. it basically has to copy the history back [19:56] SpamapS / m_3 : wait , did i miss something , i thought the plan was to move in the other direction ? [19:57] SpamapS, ugh.. i thought i'd fixed that .. [21:42] imbrandon: whats the other direction? [21:43] hazmat: you thought you had fixed what?
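As noted above, hooks have no required language: they are plain executables, and juju only looks at the exit status. A minimal bash sketch of a config-changed hook (the charm and config key are hypothetical; `config-get` is stubbed here because juju only provides it inside a real hook environment):

```shell
#!/bin/bash
# Minimal sketch of a charm config-changed hook (hypothetical charm).
# Any language works as long as the file is executable.
set -eu  # a non-zero exit marks the hook run as failed

# Stub for illustration only: in a real hook, juju puts config-get
# (and relation-get, relation-ids, etc.) on the PATH.
config-get() { echo 8080; }

port=$(config-get port)
echo "would reconfigure the service to listen on port ${port}"
```

The same structure works in Python, Ruby, or anything else, since the hook tools are ordinary commands invoked from the hook's environment.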
[21:43] SpamapS, the licensing stuff in txzk [21:43] hazmat: oneiric branches should all be fixed now [21:43] SpamapS, sweet [21:44] hazmat: Yeah I think you fixed it in trunk, but I didn't fix it in the packaging [21:44] hazmat: so it failed NEW review in Debian. :-P [21:44] I'll fix it. Should have double checked anyway [21:50] SpamapS, james_w, m_3 thanks for fixing those branches [21:54] hazmat: hey did you say you have sup working with ruby 1.9 ? [21:55] SpamapS, yup [21:55] hazmat: I had to make some.. evil symlinks to get it to start up [21:56] SpamapS, i just run it from git .. with a -Ilib bin/sup and alias [21:56] hazmat: Ok so "run it as crack" [21:56] good idea :) [21:56] hazmat: I'm running from an old git snapshot that I patched to work with gpgme2 [21:56] SpamapS, well i'm on stable branch.. so not quite [21:57] SpamapS, yeah.. that integration seems a little borked still to me [21:57] i've got the gem installed, but its not detecting it.. haven't really investigated [21:57] hazmat: it is. Its just useful for reading encrypted and verifying signed messages. Sending, I still just pipe to clearsign [21:57] SpamapS, by crack you mean running mostly unmaintained stuff like sup ;-) [21:57] hazmat: gpgme2 has an entirely different API and sup has never been updated to use it [21:57] gotcha [21:58] hazmat: I keep meaning to try heliotrope [21:58] SpamapS, i'm going to bite the bullet and do gmail [21:58] i like sup.. but having my email on all my devices and easy to check from anywhere is a huge win still [21:59] hazmat: having *access* on all your devices you mean. ;) [21:59] SpamapS, fair enough [22:00] hazmat: I have to agree though. The web interface I do have is basically total crap. [22:01] SpamapS, only downside is loss of gpg [22:01] I thought about setting up heliotrope the other day, but it looked a bit too rough [22:03] e.g. I didn't see a mention of authentication [22:03] Perhaps the sup guy just gave up and started using gmail too? 
;) [22:03] he doesn't appear to have committed for a month :-) [22:04] or maybe its just done ;-) [22:15] Software that is "done" is so boring ;) [22:17] software is never done === andreas__ is now known as ahasenack [22:19] wooot! txaws in Debian unstable. :) [22:19] sweet [22:22] is there any way to inspect relation settings similar to the config settings accessed with juju get service? something like juju relation-get service1 service2 ? [22:22] i know you can see them being set in debug-log, but not aware of a more on-demand kind of way to inspect... [22:28] nathwill: yes you can inspect all of them [22:29] nathwill: you'll need to tell relation-get where to find them, and which context you mean [22:29] nathwill: so you need to pass -s /path/to/agent/socket and -r relation:id [22:29] nathwill: you can get the relation:id from the 'relation-ids' command [22:30] SpamapS, this is inside a hook? [22:30] nathwill: if you're already inside a hook, you don't need to pass -s [22:30] right. ok... [22:30] nathwill: and if you're in a *relation* hook, then you don't even need to pass -r [22:30] um, how can i find the socket? [22:31] nathwill: lsof works :) [22:31] python 4597 root 11u unix 0xffff880065374000 0t0 10917 /var/lib/juju/units/vivo-1/.juju.hookcli.sock [22:31] haha, ok. sugarless solutions work :) [22:32] hmm, dunno how to set JUJU_CLIENT_ID actually [22:32] nathwill, an alternative is to use jitsu run-as-hook [22:32] nathwill: you can also use 'jitsu run-as-hook' [22:33] JUJUJITSUJINX [22:33] ;) [22:33] say that 5 times fast :) [22:33] jitsu you say... i've heard about it, guess i'll have to go fiddle w/ it [22:33] heh [22:33] SpamapS, i gave up just saying juju-jitsu [22:34] hrm looks like 0.6 never made it to the PPA [22:34] nathwill, jitsu run-as-hook is good for inspecting settings.
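The lsof trick above can be scripted; a sketch that extracts the hook socket path (the sample line mirrors the lsof output pasted in the session, and the final relation-get invocation is shown only as a comment since it needs a live juju unit):

```shell
# Sketch: find the juju hook CLI socket so relation-get can be pointed
# at it with -s from outside a hook. Sample line copied from lsof output.
sample='python 4597 root 11u unix 0xffff880065374000 0t0 10917 /var/lib/juju/units/vivo-1/.juju.hookcli.sock'

# The socket path is the last whitespace-separated field.
sock=$(printf '%s\n' "$sample" | awk '/\.juju\.hookcli\.sock/ {print $NF}')
echo "$sock"

# On a live unit you would feed lsof into the same awk, then run
# something like (relation id from relation-ids; unit name is an example):
#   relation-get -s "$sock" -r db:0 - mysql/0
```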
so you can do stuff like $ jitsu run-as-hook mysql/0 relation-get -r db:0 - [22:35] and if you had a service unit of mysql/0 with a relation id of db:0, you would get those settings [22:35] ahh right, they need to update launchpad so I can use '{latest-tag}' in bzr recipes [22:35] k, thanks folks. i'm trying to ascertain the best way of setting a password for a default user in owncloud. [22:36] owncloud doesn't separate the db config from the default admin config like WP [22:36] you could do $ jitsu run-as-hook mysql/0 relation-ids db to get a relation id; juju status of course to get the service unit you wish to inspect; etc [22:36] where again i'm just using some standard examples like db for a relation name [22:36] so there's no automagical way to hide the db business if trying to use remote mysql host... i've got some ideas, just want to look at what the users are going to have to go through to get the pwd out of the settings [22:37] nathwill, yes, the flip side is that this stuff is in ZK [22:38] i'm almost thinking a config option is best, and forget the autogenerating bit [22:38] nathwill: this is a common enough problem, I think we need to solve this in juju core. type: password would be cool, where it just generates a random password on the first deploy, and then you can 'config get' to see it [22:39] SpamapS, that would be amazing. [22:39] nathwill: another option is to just put it somewhere that the admin can access it through juju ssh [22:40] yeah, i thought about writing it to a tightly perm'd file... it just feels kludgy to do that type of thing [22:40] I think juju needs more ways of feeding data back from charm->admin though [22:40] debug-log is a really awful way of doing it [22:40] maybe a config-set option to create config parameters during hook execution?
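The autogenerated-password idea discussed here can be sketched in a few lines of hook shell. This uses od rather than base64 so the length is deterministic; where the charm stashes the result (a root-only file, per the "tightly perm'd file" suggestion above) is its own choice, and the path shown is hypothetical:

```shell
# Sketch: generate a random admin password inside a hook, as discussed
# for the owncloud case. 16 random bytes -> 32 hex characters.
pass=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "generated a ${#pass}-character password"

# A charm might then write it where only root (via juju ssh) can read it:
#   install -m 0600 /dev/null /var/lib/owncloud/admin-pass  # hypothetical path
#   printf '%s\n' "$pass" > /var/lib/owncloud/admin-pass
```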
[22:40] config-set has come up a lot [22:40] and its my favorite option [22:41] because then charms can generate passwords that they know will work with the intended software [22:41] * nathwill nods [22:55] as long as any hook-set values are namespaced away from service config, sounds good [22:56] SpamapS, nathwill its basically 'constant' as i recall [22:56] have a good weekend folks [22:56] ditto hazmat [22:57] aye you too hazmat [23:08] nathwill: someday you will be a jedi :) [23:10] nice, drizzle + nginx natively talking to it + nodejs taking the json output and turning it pretty with express templates [23:10] i think i might have found a reason to give up php [23:12] oh dear. another one drank the koolaid [23:12] heh the only new thing in there that i havent been using is express templates ( well and drizzle but right now its the same as mysql in my setup ) [23:14] i just like the fact that nginx talks directly to drizzle/mysql, i dont mean proxying to it, i mean talks to it, then spits out csv/json/etc [23:14] :) [23:14] oh and does it in a non-blocking way so the eventloop keeps going , meaning 10k+ connections and not blink [23:17] i was referring to the nodejs koolaid. the other business sounds cool [23:20] ahh well js koolaid maybe, node though is fskin FAST only thing i've seen close to it is nginx and it still cant keep up [23:22] nathwill: trust me i was right with you about js/nodejs/jsp crap until about ohhhhhh 2 months ago and did something with a co-worker by accident , then i had to give it a real look again, its a totally diff landscape than 3 years ago even let alone 12 or 15 when i wrote off JS [23:22] :) [23:23] imbrandon: i hope you're right. i fear what the js monkeys will do to the web when they have control of the backend.
my experience to date has confirmed this; though i admit i have not investigated very deeply [23:23] i dont think its a good app platform, but its a good async utility / api one if each bit is kept self contained [23:24] imbrandon, that makes sense. [23:25] oh and did i say fast [23:25] lol [23:26] i was pulling sql rows with select * limit 0, 1000 earlier then spit it out as json and ran siege on it just for an unscientific test [23:26] and my cpu gave out before i could make it cause errors in the requests [23:27] and that was just with like 5 minutes and no optimizations etc etc [23:28] curious to see how the disk io is on it tho [23:28] something to try out the next few days [23:29] but i'm fairly sure all my php REST stuff is going to be replaced with small node apps over the next few months the way it looks [23:29] on the back end at least [23:30] pair that with nginx rev proxying all those apps from diff ports into one interface and microcache it , might be onto something big there :) [23:33] SpamapS: you realize that nginx will even act as a mongo or mysql proxy , you basically just list the upstreams the same way we did the php-fpms [23:33] this thing is starting to become a hammer and everyone's got nails i need to pound on [23:34] i'm starting to think nginx is a tcp proxy first and httpd second [23:39] we use nginx for proxying imap. works pretty well [23:58] imbrandon: it doesn't really grok the mysql protocol tho [23:59] imbrandon: also, Drizzle has an HTTP+JSON plugin, so you don't really need nginx to translate
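The "nginx talks directly to drizzle/mysql" setup described above relies on third-party modules, not stock nginx. A rough config sketch, assuming nginx was built with the ngx_drizzle and rds-json modules (directive names and options should be checked against those modules' documentation; credentials, database, and query are placeholders):

```nginx
# Rough sketch: nginx querying drizzle/mysql directly and emitting JSON.
# Requires the third-party ngx_drizzle and rds-json modules.
http {
    upstream dbpool {
        # protocol=mysql lets ngx_drizzle speak to a stock mysql server
        drizzle_server 127.0.0.1:3306 user=web password=secret
                       dbname=test protocol=mysql;
    }

    server {
        listen 8080;

        location /rows {
            drizzle_query "select * from t limit 1000";
            drizzle_pass dbpool;
            rds_json on;   # render the result set as JSON
        }
    }
}
```

This matches the non-blocking behavior described in the log: the query runs on nginx's event loop rather than tying up a worker per connection.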