[00:01] sinzui: just got back from vet, haven't looked at log yet. does the above scrollback mean your ppc64 tools issue is solved? [00:02] wallyworld, almost. The tools not found issue was because juju was swallowing a 403 [00:02] well that sucks, the error needs to be propagated [00:03] wallyworld, I need to get creative with /etc/hosts to get access to download the actual tool. the lab shouldn't need access to the tools [00:04] wallyworld, This is a cases where streams.canonical.com is the correct site to get tools., but ubuntu didn't build tools to put there [00:05] ok [01:32] axw: morning. standup? [01:32] wallyworld: morning, otp with waigani [01:33] ok, ping me when you're free [01:42] wallyworld: free now [01:43] wallyworld: sorry, for stealing axw, didn't realise it was your standup [01:57] wallyworld: I need to get a proper webcam... I get that weird crackly feedback, but only on G+ for some reason [01:58] G+ maxes out my cpu also [01:58] so it's not my favourite [02:08] wallyworld: https://bugs.launchpad.net/juju-core/+bug/1317197/comments/14 [02:08] <_mup_> Bug #1317197: juju deployed services to lxc containers stuck in pending [02:08] I hope that's clear enough [02:09] yeah, looks good, thank you [02:10] we do the the status stuff at some point cause if things take a while to complete that's useful [02:10] yes absolutely [02:11] just didn't want ryan to think we were doing something that we weren't [02:26] wallyworld: in order to speed up downloading of cloud images is there a way to specify a root image tarball to lxc-create via juju? [02:26] its only 175M so not a huge deal but times like tonight its slow for some reason [02:26] not that i am aware of directly. maybe you could put the tarball in the lxc cache dir [02:27] wallyworld: ahh good idea [02:27] the same place where lxc would put it [02:27] can use juju scp or something [02:27] not sure of exact details [02:27] wallyworld: ill mess around with it and let you know [02:27] ok [02:29] wallyworld: it is so much faster already btw [02:29] great :-) [02:29] demos going well? [02:30] yea there is a contest going on in getting an openstack cloud deployed in under 30 minutes [02:30] i think i can make that happen with this cloning [02:30] all in kvm's/lxcs [02:30] the quickest install is 33 minutes i think [02:31] are you deploying stuff to the kvm instance? [02:31] of just to the lxc containers? 
[02:31] yea ive got like 8 kvm's running [02:31] 1 kvm with nested lxcs for cloud controller and other bits [02:31] 1 for compute [02:31] 3 for swift, and 3 for swift-proxy [02:32] so all charms are deployed to lxc [02:32] so 8 kvms and 7 lxcs [02:32] swift, swift-proxy, nova-compute are on separate kvms [02:32] but everything else is on lxcs nested within 1 kvm [02:33] there's a bug whereby if you deploy charms to kvm *and* start lxc containers within that kvm host it could fail [02:33] but sounds like you are not affected [02:33] i manually bind the lxc bridge to eth0 in the kvm [02:33] with some juju scp magic [02:34] and restart the kvm node before deploying the lxcs [02:34] ok [02:34] wallyworld: otherwise https://bugs.launchpad.net/juju-core/+bug/1304530 [02:34] <_mup_> Bug #1304530: nested lxc's within a kvm machine are not accessible [02:35] that issue happens [02:35] yeah, that's being worked on this cycle [02:35] ok cool [02:35] not a blocker since i can work around it [02:35] good :-) [02:35] yea i think if the nested lxc's would honor the network-bridge option in the environments file [02:35] that could work [02:36] but im sure there is a better approach [02:37] i'm not sure tbh, but at this point any approach that works is good i reckon [02:37] davecheney: do you have a list of the tests that currently fail on power? [02:38] thumper: two secs [02:38] didn't I include that in my mail ? [02:38] wallyworld: im sure people will want vlan support within those lxc's maybe [02:38] understandable [02:39] though it probably only matters with local provider [02:39] since maas does vlan now [02:42] davecheney: no, there was a link to the ppc64el bugs in juju-core, but no list of currently failing tests [02:42] davecheney: what does SGTM mean in a review comment? Sounds Good To Me or Silently Giggling To Myself? :) [02:42] menn0: the frist [02:43] davecheney: thumper: I figured. Next question, does the landing bot treat SGTM the same as LGTM? [02:43] I don't think so [02:43] ok thanks for clarifying [02:44] menn0: sounds good to me [02:44] I saw davecheney use it and wasn't sure if it has special meaning to the landing bot [02:44] davecheney: also, any idea why juju/errors tests fail with gccgo? [02:44] davecheney: I can't see it myself [02:45] davecheney: given that I had the ones in arrar passing with gccgo [02:47] on da phnoe [02:48] kk [02:49] menn0: the sgtnm/lgtm thing comes from code review [02:49] lgtm (case insensitive) trigger the green success line in code review [02:49] thumper: we want to introduce a universally used base test suite to kill outgoing http egress, protect the user's home real dir, encapsulate loggingsuite etc - do you like the name "BaseSuite"? [02:49] even if you say [02:49] there is no way in hell i would lgtm this in a millino years [02:49] wallyworld: sounds fine [02:49] it's still a thimbs up [02:49] so, sgtm is a way of saying i like it, but i'm not prepared to put my name to it [02:50] thumper: i'll take a look a tthe errors tests now [02:50] i have trusty and gccgo [02:50] davecheney: to avoid the juju/errors package [02:50] you could just start with juju/errgo [02:50] davecheney: makes sense. thanks. [02:50] which also fails with gccgo [02:50] get the foundations right first I guess [02:52] thumper: got time for a hangout [02:52] yup [02:53] i think i'm confused about which error packge is broken [02:53] :) [02:53] this being able to hangout with your boss thing is sorta neat [02:53] ... 
only taken 2 years [02:54] davecheney: https://plus.google.com/hangouts/_/g36ayyi6golhypoflszins4leia?hl=en [03:03] wallyworld: if you haven't thought about this already: the thread on signature wrapping reminds me that we should create a git pre-push hook, a la lbox.check [03:04] wallyworld: also, would be nice to run the checks in the lander bot, if we don't already [03:04] agree with lander bot checks, hadn't thought of a pre push hook [03:04] sounds like a good idea [03:06] i was deferring thinking about it till next week when hopefully there'll be some progress on the jenkins side of things [03:08] wallyworld: we do that https://github.com/juju/juju-gui/blob/develop/HACKING.rst#hooks and then in our CI lander side http://ci.jujugui.org:8080/job/juju-gui-merge/ [03:09] rick_h_: awesome, thanks, we'll look to do something similar [03:09] wallyworld: yea, let me know if you want a run through the lander stuff we setup when you get that far https://github.com/juju/jenkins-github-lander [03:10] rick_h_: will do. curtis was going to start looking at setting up juju-core's jenkins next week [03:10] wallyworld: cool [03:10] i already have a migrated repo to test with [03:10] very nice [03:10] github/wallyworld/core [03:11] when you get the migration script let me know. We want to move more stuff over but we didn't have it collapse commits so our history is messy [03:11] has a pending pull request to merge and everything :-) [03:11] :) [03:11] i have the migration steps which i follow, and i think i collapsed history [03:11] cool maybe I can see if sinzui's team has the bandwidth to subordinate the lander code for jenkins [03:11] wallyworld: cool, yea when I use bzr fast import it didn't collapse so lots of old commits of "fix lint" [03:12] rick_h_: i used these steps http://flexion.org/posts/2012-10-migrating-bzr-to-git.html [03:13] rick_h_: as a rule, i hate how git workflow seems to be to rebase out history [03:13] throws away information [03:13] bzr's model is much nicer [03:13] yea, but it's usually useless information ime [03:13] hide it by default, but show it if needed [03:13] but I can appreciate both sides [03:13] wallyworld: can you please update gomaasapi on the bot? [03:13] sure [03:14] rick_h_: is there a way to see old comments in GH after you rebase? [03:14] line comments [03:14] wallyworld: can you see https://bazaar.launchpad.net/~rharding/juju-gui-lander/trunk/files [03:14] axw: it leaves the comments, but marks them as "xxx commented on an outdated branch" [03:15] hmm, when I have done PRs the comments always disappear after I rebase and push -f [03:15] rick_h_: yep [03:16] wallyworld: so might be useful to sinzui as well I don't know. That's the scripted setup we used to migrate [03:16] wallyworld: the git upload, the jenkins deploy/initial install/etc [03:16] rick_h_: ok, ta. i've sent curtis the link previously [03:16] wallyworld: there's some api creds and such in there which is why it's private [03:16] ah ok [03:16] well, i think so [03:17] that doesn't look right [03:17] bzr-git and git-bzr can preserve commit histoyr [03:17] I symlink my .bazaar/ignore to .gitignore [03:18] axw: https://github.com/juju/juju-gui/pull/303 (hatched commented on an outdated diff 2 days ago) [03:18] sinzui: i think the issue is we need to rebase to get a single revision for merging? 
[03:18] axw: and click on the '2 days ago' to get the comment in with a snippet [03:18] rick_h_: cool, thanks [03:19] aaaaaaaaaaand, that's lunch [03:19] axw: gomaasapi on rev 50 now [03:19] wallyworld: thanks [03:20] wallyworld, Yeah, git is like multi-inheritance diamond parents. Just fucked. bazaar was smart in that respect. the left-hand ancestry is special and we can traverse that to learn the direct commits and choose to go deeper later [03:20] wallyworld, rebase is essentially a hack to fix a logging problem [03:20] heh, every commit is in teh reflog, just how easy can you get at it ;P [03:20] * wallyworld will miss bzr :-( [03:22] wallyworld, since juju likes informative commit messages with links, it is essentially at odds with most projects that use git. They want all the details squashed to a single line [03:23] yeah, i hate that :-( [03:24] huh? I disagree with that [03:25] wallyworld, We might have a policy to never delete the branches from our public repos. They have all the commits. The merge bot will create a squashed commit using the pull request subject and description [03:26] might be all we can do [03:26] the merge bot would include the link to the source branch with preserved history [03:26] No one needs to rebase [03:26] maybe auto push to a second remote to track all commits? [03:27] it'll be a bit nuts to have all branches ever in github [03:27] and we force users to fork so they can keep their branches in their own fork if they want [03:27] so that the main repo looks clean [03:28] https://github.com/juju/juju-gui vs https://github.com/mitechie/juju-gui [03:30] rick_h_, I assumed that developers would be using the triangle trade arrangement. The juju/juju just has stables a devels with the squashed revs. The commit message will have a link to the developer's repo with the squashed revs [03:31] But we could have juju/juju-deep which is where the merge bot puts a copy of the unsquaded branch [03:33] juju/juju? [03:33] are we not having juju/core? [03:34] not, it's juju/core [03:34] s/not/no [03:34] sgtm [03:34] juju/sgtm [03:34] lol [03:34] otherwise we'd have "github.com/juju/juju/juju" [03:34] yeah, no [03:34] we decided at the sprint it would be core [03:34] \o/ [03:45] hmm... [03:45] I notice that the .jenv file has a user and password entry at the top of it [03:45] but both are empty strings [03:45] why? [03:46] * thumper steps away for a bit as heaps of calls later tonight [03:46] * thumper needs to put on dinner === axw is now known as axw-lunch [04:59] can anyone confirm whether it's possible to use API and state port numbers other than the defaults? [05:01] I know they can't be changed once an env is bootstrap but could someone use a different value at bootstrap time [05:06] menn0: yes I have done this in the past [05:06] it's necessary when using the local provider with multiple users on the same host [05:06] ok cool. [05:07] I was reviewing a change to "juju restore" and noticed it generates bash scripts with hardcoded port numbers. [05:07] I guess my review comments are valid concerns then :) [05:07] cool [05:07] thanks [05:08] was just about to review that. now I have to stop procrastinating... :) [05:10] perrito666: the journal thing is a parameter we have to pass on the Session/Write request. [05:10] fwereade: ^^ [05:12] wallyworld: fwereade: IIRC the specifics of why we fallback was because we had the "released" vs "daily" streams, and it was searching the "daily" and not finding any "released" items and then stopping. 
And when we discussed it, it seemed better to keep searching. [05:13] jam: and also, people adding their own images etc as overrides [05:14] i'm ok with it, william seems twitchy about it [05:16] wallyworld: pre-checks ala lbox propose, we *can* but if it is things like "go fmt is sad" I'd rather just have the box run go fmt and clean things up than stall someone to tell them to go run a command and propose again. [05:17] i don't mind either way tbh [05:17] wallyworld: you don't have to rebase, and I'd rather we didn't have single commit for review [05:18] git does have "git log --follow-first-parent" [05:19] my git foo is poor, i'm just going by what curtis etc said at the sprint. curtis will be doing the landing side of things pretty much so we can liaise with hin to et the model right [05:19] I certainly don't have final say in any of this, but while I don't think we need all branches on master, I'd really rather we didn't rebase to land [05:20] i think it was being canvased to allow a single rev to be easily cherry picked etc [05:20] like we do with bzr with backports etc [05:20] wallyworld: for cherrypicking you can do "merge ABCD ~ ABCD^" IIIRC [05:20] wallyworld: which is the same meaning as "bzr merge -c" [05:20] just different syntax [05:20] $REV^ is the parent of REV [05:21] aka, before:REV or REV-1 to bzr users [05:21] i'm totally ok with whatever folks who know git recommend [05:21] wallyworld: well in the git world you can still do things in different ways, I'd like to push back on how the gui guys decide [05:21] decided [05:21] because I think you can use git in a form that allows easier exposure of specific detail [05:22] i don't like how git rebase discards history [05:22] Though, TBH, if rebase still refers to the old commits in a sensible fashion, that might be ok, but I don't quite see how it works [05:22] if we can work aornd it, great [05:22] vs giving you the detail only if you have the magic revs that aren't pulled in by default [05:22] i just want to it work like bzr :-) [05:23] so for dev workflow, i hope we just push up the raw branch, and let the landing bot sort out how to deal with the final merge [05:24] so we need to talk to curtis et al about the rules for that [05:24] wallyworld: interestingly you can "git rebase --exec $RUNMYTESTSUITE" which is nice [05:25] what does that do? rebase and then run the tests? [05:28] wallyworld: I believe it runs the test after each rebased commit [05:29] ok [05:31] wallyworld: conceptually I'm hesitant about throwing away history. Though having someone actually spend time to make sure there is logical meaning to commits is nice, having a policy of "throw away all the stuff you were thinking about at the time and only give me a summary" isn't great either. [05:32] Often when spelunking I'd really like to understand why someone changed *this exact line* rather that "this was changed as part of doing foo" [05:32] jam: i'm +1 on that, hence i dislike rebase [05:35] wallyworld: jam: I get what you guys are saying. There are definite downsides to overuse of rebase but that's more a policy/process thing IMHO. [05:35] menn0: gui's policy is that every commit gets squashed when merging to master [05:35] which I disagree with [05:36] i guess they view commit history as noise? [05:36] wallyworld: jam: one thing I'm finding with bzr is that I'm committing less often because I know it's much harder to go back and tidy things up. 
[05:36] wallyworld: by default "git log" shows everything in time-sorted order [05:36] it doesn't have the indented logging that bzr has [05:37] which is poor :-( [05:37] so you don't have "here is a summary commit, and here are the commits that made up that summary commit" [05:37] exactly [05:37] git log output has tons of configuration options [05:37] wallyworld: yes, but we paid *a fuckton* of CPU for it that lots us a lot of hearts and we weren't quite good enough at explaining why [05:37] it is actually quite expensive to compute [05:37] menn0: but not that one [05:37] it does have --follow-first-parent [05:37] brb [05:37] which will get you "bzr log -n1" [05:37] and it does have topological sorting [05:38] but topo doesn't help when 2 people are doing concurrent work [05:38] neither one is "before" the other [05:38] the sort bzr uses is "include a commit after the last commit that didn't have it, and before the one that does" [05:39] but doing the "what commit doesn't have this" is expensive to compute [05:39] O(n) ? or O(what?) [05:39] fwereade: https://codereview.appspot.com/98260043 [05:39] * menn0 is back from dealing with vomiting child... [05:41] wallyworld: so to be able to compute lots of them fast, the algorithm isn't bad if you just walk the whole history, as you can push on a stack everything you've seen so far. So the one we use is O(N=all_history) [05:41] to do it by walking just a bit of the graph [05:41] it easily gets into N^2 or N^3 or thereabouts because of needing to do backtracking [05:41] so N is smaller, but the order is higher [05:42] ok [05:42] wallyworld: so I still love it, and for histories the size of Juju it is actually still a wonderful view of the world (IMO) [05:42] but for things like the kernel/mozilla/etc it made things slow [05:43] and, of course, it is off by default in bzr, too with "bzr log -n1" being the default [05:43] fwereade, bodie_ I made some changes after the discussion today and put in a new MR. I didn't implement the Unit cleanup yet. Some prerequisites still arent in. [05:44] jam, wallyworld: would it help if we followed a different policy than the GUI team in terms of squashing commits. it's not a given that we do that is it? [05:44] wallyworld: "time bzr log -n1 -r -10..-1" vs "time bzr log -n0 -r -10..-1" is 85ms vs 250ms on juju today, which is noticeable [05:44] menn0: no it isn't a given, but that's why I'm bringing it up [05:44] we can decide how *we* want to do it [05:44] menn0: we *are* switching to git, that has been enforced on us [05:45] :-( [05:45] in my experience that's not at all standard practice [05:45] menn0: but the details are what we are hashing out, and I'm just participating in the discusison [05:45] with git projects [05:46] jam: i was going to experiment a bit once the test landing bot on my test repo got set up next week [05:46] on projects that use git that i've worked on, people might do some tidying up with rebase before pushing but rarely down to just one commit. [05:47] menn0: so I think some cleaning up can be nice, but often I've wished people would have committed *more* detail because they aren't explaining the actual steps, just the global chunks [05:47] it's usually about making things more coherent and easier to follow for other developers (and yourself in 2 weeks) [05:47] jam: I get that. 
[05:48] different people use different approaches when committing and rebase makes it easier for people to hide detail [05:49] perhaps the best we can do is educate each other as part of the review process [05:50] if you feel like someone didn't break up their commits into small enough pieces then call them on it [05:50] menn0: there seem to be projects in git that review your merge by the commits you did, so you have to clean it up into logical steps. Which can be good, but you can also lose some of the "why is it this way". [05:50] menn0: I really like the default of looking at the global thing rather than the individual bits, we can have multiple branches for landing something that needs evolution [05:50] I'm not sure about the github default view, though. [05:51] Does it default to the global commit change or the individual commits? [05:52] menn0: so if merge --squash or rebase had a way to still reference the old commits and just not show them by default in log, that would be ideal for me, but I haven't seen a tool that really separates the two [05:52] jam: yeah I've never heard of such a thing [05:53] menn0: so looms was bzr's evolution towards that, and bzr-pipelines is a good approximation of it. Where you have several branches and you are iterating among them. [05:53] with git once you squash revs those old revs are effectively gone [05:53] And the tips of each branch are the "patch for review" while the individual commits are still there. [05:53] jam: yeah i've read a little about looms [05:53] and then in bzr when you merge a commit, by default you only see the merge commit, rather than the details, but the details are all there. [05:54] menn0: and certainly things like stacked-git, or quilt, etc are also similar approaches. [05:55] though again, most of those are explicitly squashing the intermediates down to the final patch [05:57] menn0: like I think stacked git is actually a great workflow, but I wish it actually kept the intermediates as an alternate view. [05:58] i've just been looking at the stacked git site and yeah it doesn't look like it helps with keeping the intermediate commits [05:58] menn0: looms/stacked-git/etc have lots of nice process built in for managing multiple branches that you want to eventually apply all together, but still keep things in logical chunks [05:58] like what I'm working on now for the API Versioning has about 4 logical steps [05:59] start doing the internals, start exposing it on the server, start consuming it in the client, etc. [05:59] it is great to separate those out and still sync between them [05:59] jam: yep that makes sense [05:59] bzr pipelines lets you create a "pipeline" aka stack of branches, and then switch to the first one, and "pump" the changes through the pipeline. [06:00] jam: in the past I've done that kind of thing manually but it gets tricky when you get beyond a handful of concurrent branches. [06:00] jam: a tool to help would sometimes be nice [06:01] menn0: yeah, I don't know the state-of-the-art with git here. I think bzr-pipelines are used the most on our team, and got the most love and attention.
Looms had a potentially better data model, but were a bit bigger picture (you could merge looms at the conceptual loom level), and didn't actually get as much polish [06:01] the idea being that you could collaborate on the set of patches that are being tacked on [06:02] which is stuff that OS people do [06:02] because they track UPSTREAM and have their local (deb) patches applied, and need to sync from what Debian is doing to how Ubuntu is doing, etc. [06:02] wallyworld: are you fixing the TestConstraintsValidator network isolation? if not I will [06:02] jam: yeah I've maintained a linux kernel fork for a product before. I know the problem space [06:03] axw: not explicitly but it will be done as part of the overall network isolation work [06:03] so yeah, i'll do it [06:04] wallyworld: ok. it is calling out to cloud-images.ubuntu.com [06:04] wallyworld: seems to be only provider/maas [06:04] axw: i was leaving it till i go the framework in place to see that it failed :-) [06:05] wallyworld: I just ran the tests with "unshare", as described in #1233601 [06:05] <_mup_> Bug #1233601: juju test suite should be isolated and do 0 outbound network requests [06:05] that was the only test that failed [06:05] wallyworld, jam: potentually useful for the basics: http://agateau.com/talks/2010/git-for-bzr-users_uds-natty/git-for-bzr-users.pdf [06:05] wallyworld: eh sorry, only package. 3 tests failed in maas [06:05] menn0: mercurial had "mq" or mercurial queues to work in the same process space, but it seems stacked-git is the only similar one for git [06:06] ok, i thought there was one for store also [06:06] wallyworld: *shrug*, did not fail in my run [06:07] axw: you know the one i mean? it shows up in failure logs but i haven't looked in detail yet [06:08] anyway I've got to go... see you'll tomorrow [06:09] you all even [06:09] wallyworld: I forget. I've seen store fail in the past, but not atm [06:09] later menn0 [06:10] axw: i can't recall the specifics right atm. it's in the logs somewhere [06:11] wallyworld: how did you end up doing the conversion? 
[06:11] jam: http://flexion.org/posts/2012-10-migrating-bzr-to-git.html [06:11] seemed to work fine [06:12] htp://github/wallyworld/core [06:13] i didn't rename any authors though [06:18] wallyworld: sounds about like how I'd do it [06:19] jam: good :-) must be right then :-) [06:19] it did work nicely [06:19] An interesting alias: ll = !git --no-pager log --first-parent HEAD~10..HEAD --reverse --pretty=short [06:20] matches pretty close to my default bzr output, though doesn't have *any* datestamp [06:20] a good start i guess [06:21] I personally find dates like: Mon May 12 09:28:05 2014 +1000 to be a bit too complete [06:21] vs just ISO dates [06:21] yep [06:21] wallyworld: git log --date=iso [06:21] :) [06:26] once small hurdle solved [06:26] so many more to go [06:49] git log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit [06:49] is quite pretty [06:50] niiiiiiiiice [06:53] I still like --first-parent logs, but having a terminal graph is fun [06:55] git --no-pager log --first-parent --reverse --pretty=tformat:"%C(yellow)%h%Creset%<|(25) %an %ai%n %B" -10 [06:55] is the closest I could get to my "bzr log --short --forward -r-10..-1" alias [06:55] I couldn't find a way to indent all lines of the body, though *maybe* %w() [07:07] wallyworld: ll = !git --no-pager log --first-parent --reverse --pretty=\"tformat:%C(yellow)%h%Creset%<(25) %an %ai%n%w(72,8,8)%s%+b%n\" -10 [07:08] that looks really close to "bzr log --short" though the date is full instead of just YYMMDD, and it doesn't have [merge] markers [07:11] found it, instead of "%ai" for iso date, use "%ad" and --date=short [07:12] [alias] [07:12] ll = !git --no-pager log --first-parent --reverse --pretty=\"tformat:%C(yellow)%h%Creset%<(30) %an %ad%n%w(72,8,8)%s%+b%n\" -10 --date=short [07:15] wallyworld: so the reason people like squashing commits is because 'git log' and 'giggle' and 'gitk' look nowhere near as nice as "bzr qlog" on the same data. [07:16] well that's not a goof excuse :-( [07:16] you can tell what commits are "mainline" because tarmac puts "[r=" in the header, but it only takes a few commits before all of those are lost [07:16] fix the tool, don't throw away data [07:16] wallyworld: if you're tool can't show you good history, you clean up your history first [07:16] that's arse about [07:17] wallyworld: well, your preaching to the choir here, but if you try to look at the juju-core history with anything I've found it is a bit hard to figure out what is going on [07:17] outside of about 3 steps [07:18] and if you do "gitk --first-parent" it shows you just the mainline [07:18] but for some reason, it wants half of the merges to be from the right, and half from the left [07:18] jam i get a syntax error with git --no-pager log --first-parent --reverse --pretty=\"tformat:%C(yellow)%h%Creset%<(30) %an %ad%n%w(72,8,8)%s%+b%n\" -10 --date=short [07:18] wallyworld: if you are pasting it into shell, you need to remove the "\" [07:18] sigh [07:18] that is the escape for putting it in ~/.gitconfig [07:18] yes [07:19] wallyworld: it took me a while to find the right escape point for gitconfig [07:19] i didn't look too closely clearly [07:19] because without escaping " it treats the | as a shel lchar [07:19] well the < [07:19] hey i like that latest log output [07:20] i just wish the revs nums were monotonically increasing [07:20] ah, ffs, I figured out how log decides which commit comes first [07:20] alphabetical sort of the hash... 
[07:20] i can't fathom the hashes [07:20] why not date? [07:20] wallyworld: consider hashes arbitrary handles that you just have to copy & paste [07:20] sure [07:20] but not user friendly [07:20] wallyworld: sort of the hashes is because in the git world, everyone is "equal" there is no "mainline" [07:21] so me merging you is equivalent to you merging me [07:21] AIUI [07:21] which is a design flaw to me [07:21] not how most projects work, except maybe the kernel [07:21] wallyworld: anyway, that is why 'gitk' et al can't really do mainline, because they don't think about it, and they take the most arbitrary thing (the hash) and use it for sorting [07:22] and yet we've been mandated to use it [07:22] it is a *stable* sort, I can give them that [07:22] * wallyworld wishes we could use the best tool for the job [07:23] wallyworld: so there are certainly things that show git's had more man-hours put into it. Like the configurability of log is better than what bzr gives you [07:24] we'll get stockholm syndrome eventually [07:24] hope not :-) [07:24] you can pry bzr from my cold, dead hands [07:27] wallyworld: you can always polish up your bzr-git and have it both ways [07:27] true, i need to look into that [07:28] i also need to figure out how to make working with github nice [07:28] from the cmd line etc [07:28] wallyworld: I believe working with github is "clone core into your own namespace, and then push and pull from there" === vladk|offline is now known as vladk [07:29] sure, but all the bzr setup to allieviate typing i need to figure out - parent location based on repo etc etc [07:29] used named branches (don't commit directly to master), push those names back to github, merge from named branches to trunk. [07:29] plus it keeps asking me for username and password all the time [07:29] wallyworld: git's "origin" policy works decent here, I think you have to set it up 1 per "repo" but then it should continue to work for all branches. [07:29] yeah, that stuff i need to read [07:30] haven't found time yet [07:30] i need a "github for bzr/launchpad refugees" document [07:30] morning [07:34] wallyworld: if you set up an SSH key with github, you then push to the magic "git@github.com:wallyworld/core" and it will detect who you are from your SSH key fingerprint [07:35] note that "user@host:/path" is git's way of declaring an SSH path [07:35] vs bzr's bzr+ssh://... [07:35] jam: ah, i've been pushing to just github.com/... i think [07:36] i'm also hoping to just git push and it knows where to push to like bzr does with lp [07:36] i don't like typing the full remote location each time [07:37] wallyworld: you create "remotes" for that [07:37] essentially aliases for a repo location [07:37] rightio, that's the github 101 stuff i haven't read yet [07:51] wallyworld: https://help.github.com/articles/error-permission-denied-publickey is how to set up your ssh key stuff, and the default from github is to use "https://" which require username & password, but if you look at the "clone from here" there is a link for getting the SSH address. [07:52] great, thanks [07:52] some weekend reading [07:52] by default I believe "git clone $SOMETHING" will set that something as an origin so "git push" will push back to it [07:52] wallyworld: I think working out this stuff/documenting it is all part of actually getting the team moved over [07:52] yes indeed [07:54] * wallyworld -> dinner [08:02] http://www.seamicro.com/node/313 "Up to 64 8-core processors with 64GB dram each" and up to 4TB of RAM for the 10U box. 
[08:02] that's a big box [08:05] jam: nice, indeed. but also a nice power consumption. may also use it as heating system. :) [08:06] TheMue: apparently it is about 2x as efficient as regular boxes of that size (6 racks of 40kW for normal machines gets replaced with 2 racks of 20kW) [08:06] so it is definitely power-dense [08:06] but maybe slightly more efficient per kW than othres. [08:06] at least, the numbers they give put it as 1.5x more kW/rack space, but 3x the compute power [08:07] jam: yeah, it’s fantastic. espcially when thinking back to the good old Sun E10K times. [08:07] jam: we once ran three of those boxes, at that time absolutely top (around 2000). [08:09] TheMue: http://en.wikipedia.org/wiki/Sun_Enterprise#Enterprise_10000 quite a machine in 2000 [08:11] jam: absolutely. our computer room has been under the roof of our building and they had to deliver it with a large crane. :D [08:21] * TheMue thinks about quad-core mobiles or notebooks (with 8 hyperthreads and 16 gb ram) today compared to the good old Z80 he started with [08:23] * axw punches peergrouper tests in the face [08:27] morning all [08:51] morning voidspace === vladk is now known as vladk|offline [09:53] morning [09:59] perrito666: morning [10:09] morning [10:09] wwitzel3: morning [10:15] wwitzel3: did you pull out your tests yesterday, or just not push them? [10:15] voidspace: never pushed them, never got them working [10:15] wwitzel3: the changes look pretty complete in terms of replacing the EnvironConfig watcher with the Rsyslog watcher [10:16] wwitzel3: :-/ [10:16] voidspace: I will push them now [10:16] wwitzel3: cool [10:18] wwitzel3: want to pair again? Shall I drive this time? [10:19] voidspace: yep, that seemed to work well yesterday [10:20] wwitzel3: yeah, when we finally got a screen resolution I could see :-) [10:20] wwitzel3: it meant that for the first hour or two you were doing all the work and describing it to me as we went [10:20] sorry about that [10:20] wwitzel3: my branch landed by the way [10:21] voidspace: ahh, good [10:21] yep [10:21] right, so coffee update then hangout [10:21] voidspace: sounds good, doing the same === vladk|offline is now known as vladk [10:49] jam: hangout? [12:57] wwitzel3: nearly 2pm here, so I'm going on lunch [12:57] see you back soon :-) [12:59] voidspace: ok, sounds good [13:10] fwereade / mgz -- working through some of this Action stuff in my head. I think I got a much better picture of what we need to do! :) [13:10] bodie_, cool, we can have another chat about it later if you think that'd be useful? [13:12] fwereade, absolutely. I was looking at the gojsonschema stuff. should we be assuming that the args for an Action will be defined as a json-schema file? [13:13] instead of the "actions.yaml" I was basing my thoughts on -- because if so, our work is literally cut out for us [13:13] or does the schema define what a legit actions.json might look like? [13:13] i.e. action, description, args, types [13:18] bodie_, so ISTM that we unmarshal the contents of actions.yaml, and call NewJsonSchemaDocument with the params definition for each action; and then we call Validate on that doc, passing in the unmarshalled params we got from the user [13:18] bodie_, I perceived you as talking about json *files* which I think is a slight derail [13:19] won't the UI be generating json-schema? 
[13:19] the UI will be using the json schema document you send us over the api [13:19] that is, the charm definition GUI [13:19] and we'll construct a UI, do client side validation, but juju must do the same [13:19] to jump in as I follow along :) [13:19] bodie_, it will be using a json-schema document to generate the ui, and we will be using that same json-schema document to validate what they send to us, yeah [13:20] I see, but the charm will define the action spec using yaml? [13:20] the GUI might wish to ask you what actions are available, what is the schema doc for a specific charm/action. [13:20] bodie_, but the representation in the charm of that json-schema document will just be a bunch of maps/lists/values under a particular key in the yaml [13:20] I thought there was some JSON-schema builder which was desired to be used to actually put the charm's actions spec together [13:20] bodie_, we unmarshal that stuff and then pass the resulting interface{} into the gojsonschema funcs [13:21] sorry, fwereade / rick_h_, talking about charm manufacture, not UX [13:22] bodie_, that hadn't really crossed my mind -- a simple json-schema can be pretty simple, though, according to my reading of it [13:22] bodie_, certainly no worse than a charm config schema [13:22] right, it's very human-readable [13:22] bodie_, but with more scope for complexity if it's warranted [13:22] yea, didn't realize there was talk of a charm building UI/tool. I think that's something of a follow up though to actually having actions work [13:22] rick_h_, +1 to that [13:22] :) [13:23] well, hopefully that will be coming soon now that I have the conceptual framework in place much more clearly [13:23] bodie_, +1 also to that :) [13:23] sorry, didn't mean that to come across as snarky, just mean a UI tool to help you build a charm shouldn't influence how to write a charm without. [13:23] imo [13:24] right, the question was more whether json-schema is going to replace YAML [13:24] since that would obviously make validation / parsing really simple [13:24] perhaps I'm not listening lol, rereading [13:25] since yaml is a superset of json, my understanding is that we'll update charm proof to validate actions yaml files to be valid json files. [13:25] charms will write yaml [13:26] and what they are internally once pulled into juju doesn't matter (so using json would be ok?) [13:26] especially since the GUI will be requesting json anyway [13:26] but asking charm devs to do part XX in json and part YY in yaml isn't a great experience for them [13:26] Gotcha, so really the ideal interface takes either-or [13:27] rick_h_, yeah, exactly -- although really the only parts that *have* to be valid json are the subsections of those files that will actually be piped into gojsonschema [13:27] rick_h_, ie the params definitions [13:27] fwereade: true, I was under the actions.yaml perspective so not pondered the nested structure idea yet [13:27] rick_h_, (I would be surprised if actions.yaml *actually* ended up containing anything that wasn't *representable* as json though) [13:27] fwereade: but definitely, we can scope the issue down easily enough. 
[13:28] rick_h_, I wouldn't expect the gui to have to care about that fwiw [13:28] :) well I think my work is cut out for me [13:28] fwereade: right, just that yaml allows some form of variables and such if I recall, which wouldn't dump to pure json correctly [13:28] rick_h_, ask us about the charm over the api, we'll tell you the schemas for the actions [13:28] rick_h_, yeah [13:28] rick_h_, I'm not taking it off the table, just saying I'd be surprised [13:28] fwereade: right, that's cool. Easy to work with. [13:28] rick_h_, cool [13:29] rick_h_, what you said about the json-schema sent over the API [13:29] delivery truck, have to run for a few [13:29] let me know when you're back :) [13:30] never mind, seems simple enough. just need to add a "get-schema" api endpoint [13:31] fwereade, the question in my mind is still whether to simply store the params and spec as json or as an interface map [13:31] bodie_, let me turn that round: what's the benefit of storing as json? [13:32] heh :) [13:32] makes it a simple gojsonschema call internally [13:32] whereas storing as k/v makes it easy to send args to the charm [13:33] is that something like what you're thinking [13:33] ?* [13:33] bodie_, ISTM that the gojsonschema funcs actually work with interface{}, and supply convenience functions to turn json text into them [13:33] ah [13:33] didn't realize that [13:34] bodie_, maybe I'm looking at the wrong gojsonschema, it's not immediately apparent which of xeipuuv's and tolsen's is better, if there's any difference at all [13:36] fwereade, it appears xeipuuv's is much more maintained -- tolsen's is a fork of it from 8 months ago [13:36] hm, it was actually merged into xeipuuv's a couple of months ago [13:37] maybe I'm looking at the wrong numbers here [13:37] https://github.com/tolsen/gojsonschema/network [13:38] so, probably xeipuuv's is our best bet :) [13:39] sorry back [13:40] bodie_, I think we'd likely want to vendor it anyway, I presume the license is tolerable [13:40] bodie_, but maybe we just stick with godeps [13:41] hi rick_h_, no worries. Just wanted to reflect what you said about a get-schema API endpoint to make sure I'm on the same page :) [13:42] fwereade, it doesn't appear to be licensed at all. under Github's TOS, I believe that means if you fork it, it's yours :P under the most liberal interpretation, afaik [13:42] bodie_, ha, fun :/ [13:43] bodie_: I think we should file an issue to get a license in place and suggest we're willing to add one in a pull request that's of a version we'd like to see :) [13:43] perfect [13:43] rick_h_, sgtm; bodie_, would you handle that please? [13:43] explicit ftw, shoot, maybe even start with a pull request that's bsd or MIT and see if it gets the discussion going [13:53] https://github.com/xeipuuv/gojsonschema/issues/16 -- is this good? [13:53] it's an active repo and there are about 2 issues open [13:53] didn't see your suggestion of opening the MR myself [13:54] but, that of course would've been a good idea ;) and I can still go ahead and do that if you'd like [13:54] bodie_: oh, you'd have to fork it and then do a pull request against it? [13:55] correct [13:55] oic, yea. That's cool, just might use pull request vs MR as MR is launchpad centric terminology [13:55] but nitpicking language there [13:55] he he. just caught that myself [13:58] I wasn't sure if I ought to mention what project it's for [14:03] bodie_: if it comes up it comes up. 
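To make the flow above concrete (unmarshal the params definition from actions.yaml into plain Go values, then validate the user-supplied args against them), here is a minimal Go sketch against the xeipuuv/gojsonschema package discussed above. It uses the newer loader-style calls (NewGoLoader/Validate) rather than the NewJsonSchemaDocument form mentioned in the chat, and the schema and args literals are invented for illustration rather than taken from any real charm.

package main

import (
	"fmt"

	"github.com/xeipuuv/gojsonschema"
)

func main() {
	// Stand-in for the "params" block of one action in actions.yaml,
	// already unmarshalled into plain Go maps (values are illustrative).
	schema := map[string]interface{}{
		"type": "object",
		"properties": map[string]interface{}{
			"outfile":           map[string]interface{}{"type": "string"},
			"compression-level": map[string]interface{}{"type": "integer", "minimum": 0, "maximum": 9},
		},
		"required": []interface{}{"outfile"},
	}
	// Stand-in for what the user passed when invoking the action.
	args := map[string]interface{}{"outfile": "backup.tgz", "compression-level": 5}

	result, err := gojsonschema.Validate(
		gojsonschema.NewGoLoader(schema), gojsonschema.NewGoLoader(args))
	if err != nil {
		fmt.Println("schema error:", err) // the schema itself was malformed
		return
	}
	if !result.Valid() {
		for _, e := range result.Errors() {
			fmt.Println("invalid args:", e) // report each validation failure
		}
		return
	}
	fmt.Println("args are valid")
}

Feeding it args that omit "outfile", or pass a string for "compression-level", flips result.Valid() to false, which is the same check the GUI would do client-side and juju would repeat server-side.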
[14:03] voidspace: standup [14:14] fwereade: (bodie_) I re-submitted the MR after addressing the issues in the review [14:14] fwereade: I didn't start working on the Unit cleanup though... I figured that could be the next MR? [14:16] jcw4, perfect :) I just broke several MR's out of that monolithic one from earlier [14:16] bodie_: nice [14:17] so, each of those should be pretty clean and easy to get in, we might even want to consolidate a couple of them [14:18] jcw4, dammit, sorry, I started looking this morning and something distracted me [14:19] fwereade: no worries... I know you have a lot of demands :) I appreciate whatever you're able to help us with [14:19] jcw4, I'd rather like it if you did fix up the cleanup, I don't *think* it'd grow the CL too far, but feel free to start working on it in another branch -- you can always merge them into one to land [14:19] fwereade: Yeah. I'll start that this morning [14:19] jcw4, (or ofc demonstrate that I'm crazy, and it does/would grow it too far ;)) [14:19] fwereade: hehe will do [14:55] fwereade, were you saying an Actions spec ought to be an interface{} or a map[string] interface{} ? [14:56] fwereade: ping [14:56] fwereade: we're implementing WatchForRsyslogChanges [14:56] void [14:56] voidspace: i'm around if fwereade isn't [14:57] cool [14:57] jam1: fwereade: what we *really* want is a "MultiEntityWatcher" [14:57] that will watch two mongo collections and notify the client when *either* of them change [14:57] fwereade: le sigh… I can't find AllWatcher in a test suite to validate my changes... [14:57] is that a) something that already exists or b) a feasible approach we just need to do it or c) a dumb idea [14:58] voidspace: I don't know that we've written the code, but AIUI NotifyWatchers can be trivially collapsed [14:58] NotifyWatcher is client side, right? [14:58] voidspace: it is both sides [14:58] We'd rather collapse the EntityWatcher on the server [14:58] ah, ok [14:58] we'll have a go at it then [14:59] voidspace: MuxNotifyWatcher ? [14:59] we just didn't want to burn time on it if it was an approach that was never going to work and we should just use two of them [14:59] Hi Devs. If you look at juju-ci, don't panic. HP is very ill. IS knows about it and are in contact with HP. [14:59] voidspace: well IIRC the discussion from fwereade was that we really should treat the items in environconfig as immutable, in which case you don't need to watch them anymore. [14:59] jam1: ah, so *just* watch APIHostPorts [14:59] sinzui: thanks for the heads up, IIRC azure was having some troubles [15:00] jam1: that's certainly easier... [15:00] sinzui: was that just temporary? [15:00] jam1, azure often resolves itself over 4 hours [15:00] voidspace: right I'm hesitant to just change the code that we've already had working [15:00] but fwereade seemed pretty confident that we can't just change things like ports and ca certs [15:01] voidspace: so as long as we have some sort of implementation of a WatchForRsyslogStuff [15:01] then we can make it trigger with whatever logic we converge on [15:01] jam1 This HP outage is cause by bad archives or networks. Apt is crippled. This happen in March when I did the 1.17.4 release I think [15:01] jam1: we have that done. It's currently just doing WatchForEnvironConfigChanges under the covers [15:01] voidspace: certainly you can *start* with just APIHostPorts [15:02] sinzui: did we ever get trusty in HP ? 
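A minimal sketch of the "MuxNotifyWatcher" idea floated above ([14:57]-[14:59]): collapse two notify channels into one, coalescing bursts into a single pending notification. The watcher is reduced here to a bare receive channel; the real juju-core NotifyWatcher interface (Changes/Stop/Err) is richer, so treat this as the shape of the idea rather than project code.

package watcher

// muxNotify forwards a notification whenever either input channel fires, and
// stops once both inputs are closed. Output is coalesced: at most one
// notification is queued at a time, mirroring notify-watcher semantics.
func muxNotify(a, b <-chan struct{}) <-chan struct{} {
	out := make(chan struct{}, 1)
	go func() {
		defer close(out)
		for a != nil || b != nil {
			select {
			case _, ok := <-a:
				if !ok {
					a = nil // input closed; stop selecting on it
					continue
				}
			case _, ok := <-b:
				if !ok {
					b = nil
					continue
				}
			}
			select {
			case out <- struct{}{}: // queue one pending notification
			default: // one is already pending; drop the extra
			}
		}
	}()
	return out
}

Each source firing queues at most one pending tick, so a consumer that rewrites rsyslog.conf on every tick cannot miss a change but also will not do redundant back-to-back rewrites.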
[15:02] jam1: we can switch it to APIHostPorts and check that everything still works *plus* the parts that weren't working before [15:02] and if everything works then great [15:02] jam1 it is listed :) [15:02] awesome [15:02] k, I thought for a while it wasn't available, but yeah, ISTR HP having archive routing issues in the past [15:04] * jam1 is sad, I can delete the AllWatcher which GUI critically depends on and nothing in juju-core would notice (at least that I can find) [15:06] jam1: well, the GUI is an important part of Juju, obviously. Is there a way we could support them differently without the AllWatcher? [15:07] pubhubsubpubhub! [15:09] natefinch: well I can just migrate the code over there, it isn't like I'm getting rid of it. [15:09] the *bug* is that we don't actually test that we don't break them all the time [15:10] If I mess up the permissions, or accidentally don't expose the API, nothing notices [15:10] we have some client code [15:10] but it isn't exercised (AFAICT [15:11] rick_h_: do you do regular testing with dev-releases or trunk ? [15:11] jam1: no, it's something we brought up with sinzui about getting into the CI/QA steps [15:12] wallyworld_: if you want to write some extra tests bug #1319441 [15:12] <_mup_> Bug #1319441: AllWatcher API is untested in juju-core [15:12] rick_h_: k. this is mostly about "I should be able to refactor code and know that everything works" [15:13] jam1: agreed, it's a hole we hit with quickstart leading to 14.04 and we brought up in vegas to address [15:13] but AllWatcher just "exists" as an API. It looks like the underlying "megawatcher" code is reasonably tested, just not that we can actually get anything from it via the api [15:13] rick_h_: is it possible to drive the GUI from a script? [15:14] rick_h_: I guess you have quite a bit of JS tests written [15:14] jam1: we do have a few functional tests using selenium we run to drive the gui in a couple of ways [15:14] jam1. rick_h_ and I agreed that QA needs to test the next juju against quickstart and gui. Run their tests suites that exercise juju too [15:17] perrito666, regarding the ha restore test. Do I expect to see just a master (users need to rerun ensure-ha) after the restore completes? [15:18] sinzui: yes [15:18] thank you. [15:21] sinzui: btw, I found that in "restore_present_state" server you are masking possible juju-restore runtime errors [15:22] oh, how do I fix that? [15:23] well, you try to match the instance name to destroy the instance, if there is no match you raise a generic exception with a string, you might want to append the output from err in that string [15:28] thank you perrito666 [15:30] sinzui: np [15:30] when is TestStartStop called in a suite (how does it differ from SetUp)? [15:31] voidspace: it is just a test [15:31] ah... [15:32] that explains why changing it isn't affecting the other tests :-) [15:32] jam1: d'oh, thanks [15:32] voidspace: Ithink anything starting with Test* is just atest, the SetUpSuite et al start differently [15:32] yep, shoulda twigged [15:41] :q [15:42] irssi is the worst vim ever [15:42] :) [15:43] :w [15:43] :w! [15:43] :qa! [15:45] wwitzel3: you should have seen my week trying to switch to emacs, I have never corrupted so many files with ESC^i and :w in my life [15:50] <_benoit_> Outscale has a S3 endpoint in fact! so juju should work easily \°/ [15:56] _benoit_, sweet! [16:02] fwereade: about cleaning up the unit.... 
1) I assume the cleanup should go in func (u *Unit) Remove() [16:03] fwereade: and 2) the same predicate used by the watcher to find Actions specific to the unit can be used to find and remove the Actions queued for the unit being Removed [16:03] jcw4, look in cleanup.go, I think [16:03] ok [16:03] fwereade: thx [16:04] jcw4, I think you need to look at service.removeUnitOps to find the stuff that'd definitely run when a unit is actually removed [16:04] jcw4, do we have an opinion about running actions on units after someone's destroyed them, though? [16:05] fwereade: good question [16:05] fwereade: I'll review the draft doc to see if it's mentioned [16:05] jcw4, because we might be in a position to actually mark them all failed at that point, I'm right now writing a cleanup op that's queued when someone calls Destroy on a unit [16:05] jcw4, pretty sure it's not [16:05] jcw4, btw, I'm afraid I need to be off pretty sharpish now, otherwise I'd pop in and talk to you [16:06] jcw4, if I'm around later I will try to get you a fresh review [16:06] fwereade: tx! [16:12] reasonably trivial code review: https://codereview.appspot.com/98250044 === jam2 is now known as jam1 [16:16] <_benoit_> fwereade: http://paste.debian.net/99505/ how can I bypass this lord of the ring check ? [16:33] natefinch: ping [16:34] natefinch: worker/rsyslog/rsyslog_test.go TestNamespace - do you know anything about this test? [16:34] natefinch: being our resident rsyslog expert [16:36] is anyone having problems with amazon taking forever to initialize machines? [16:36] jam1: sooo... we have some of our tests that call the following and expect rsyslog to be restarted [16:37] s.APIState.Client().EnvironmentSet(map[string]interface{}{"rsyslog-ca-cert": coretesting.CACert}) [16:37] jam1: I can put a workaround in the tests, but is it possible that the rsyslog cert will be set *after* APIHostPorts have changed? [16:38] jam1: in our manual testing everything appears to work for all the relevant scenarios [16:38] non-HA, HA, deploy unit *then* ensure-availability [16:38] voidspace: so the argument from fwereade was that rsyslog-ca-cert could be treated as immutable, and if we can consider it thusly, then the above test is invalid [16:39] voidspace: rsyslog-ca-cert has to be set up on machine-0 before anything happens [16:39] jam1: immutable except for *initial setting*, but would that *not* be done through the api [16:39] right [16:39] so in which case I will modify the test [16:39] the only bit you have to worry about is if something comes up right away before bootstrap actually sets up the cert [16:39] that's my specific question [16:39] do I have to worry about that [16:40] voidspace: so I'd rather get fwereade to confirm, but that is my understanding from our conversation [16:40] it should be considered immutable-once-set [16:40] though it would be best if we actually enforced that. 
[16:40] right [16:40] I'm not entirely sure why it is in environconfig [16:40] heh [16:40] (which lets user's touch it) other than we may not have a grab-bag for stuff we want to create and keep track of [16:41] it was probably just somewhere convenient [16:41] it certainly seems like *in practise* the cert is already set when we write out the initial rsyslog conf [16:42] but I'd like to be certain that *must* be the case, otherwise we'll have intermittent problems [16:42] and if we can't be certain then we need to watch environconfig changes at least until the cert is set [16:42] which is extra code [16:43] voidspace: just got back [16:44] natefinch: hey, hi [16:44] natefinch: see the above conversation with jam1 from where I pinged you [16:44] natefinch: we're now only triggering rsyslog conf to be rewritten, and rsyslog to be restarted, when the set of APIHostPorts changes [16:45] natefinch: because we consider the rsyslog stuff in environconfig (cert and port) to be immutable [16:45] voidspace: sounds good [16:45] natefinch: the only danger is that this information is "somehow" set *after* the config has been written the first time [16:45] I'd like to assume that's impossible - and it certainly seems to work in practise [16:46] voidspace: so my personal view is I'd rather have the ability to change it and not use it than not allow it at all (especilaly if we don't enforce that it is immutable). However, since it is easier to implement and fwereade liked it being immutable we can go with it [16:46] jam1: it's easy to change [16:47] jam1: on the other hand EnvironConfig changes for a lot of irrelevant things, so the new way restarts rsyslog a lot less [16:47] jam1, voidspace: my preference is for whatever makes the code simpler. It sounds like assuming it's immutable will make the code simpler [16:48] I don't want us to write and maintain code that only .001% of people will actually use [16:48] hell, I don't want to write and maintain code that only 1% of people will actually use :) [16:49] at some point 1% becomes a lot of people [16:50] Don't care, that means 99% is always way way more people [16:50] more complexity and more features will just slow us down for the features the 99% want [16:53] jam1, I'm not even necessarily *against* having it mutable, but if we do we should be careful to make it sane; and we haven't, and nor have we scheduled mutability, so we should just save ourselves hassle by making it immutable [16:53] jam1, and it's easy, and simple, and maybe one day there will be a compelling case for changing it, and that will be fine too [16:55] * fwereade is done for the day (or at least for now) anyway [16:55] fwereade: night, have a good evening [16:58] fwereade: night [17:00] hmmm... my actual error now is a permissions error [17:03] natefinch: should all api endpoints do a permissions check? [17:04] natefinch: our initial implementation of WatchRsyslogChanges included this check [17:04] if api.authorizer.AuthOwner(agent.Tag) [17:04] which caused a test to fail elsewhere [17:04] I wonder if it's needed [17:05] brb [17:05] voidspace: it seems like watching shouldn't need permissions, since you're not changing anything, but I'm actually not sure what the policy is. 
[17:05] natefinch: it looks to me like the WatchAPIHostPorts itself doesn't have this check [17:06] voidspace: that sounds like a good enough precedent to me [17:06] natefinch: and as this is *essentially* what WatchRsyslogChanges is doing it shouldn't need extra permissions I don't think [17:06] heh, cool [17:06] we probably just copied the permissions check from somewhere else anyway... [17:24] voidspace: natefinch: all API methods do permission checking [17:25] voidspace: especially when it was exposing the raw EnvironmentConfig before, but even now, it is only for use by Agents [17:25] probably you just had a wrong Authorizer set up [17:25] natefinch: voidspace: WatchAPIHostPorts is special, because *everyone* needs to watch that [17:25] but Clients shouldn't have WatchRsyslogChanges exposed to them. [17:28] ah, dammit [17:28] at least I've confirmed the error even if my fix is wrong :-) [17:29] although it seems like this new check must be more draconian than the previous one [17:29] jam1: thanks for the clarification. I guess I sort of assume that read-only stuff doesn't really need authentication because.... who cares? But good to know the policy anyway. [17:30] natefinch: well readonly of your Environment credentials would be a bad thing, right? [17:30] jam1: yes, true [17:31] natefinch: so while I could see "does security really matter on *this* function", it is more of a "lets not think about it, and just restrict everything" [17:31] jam1: fair enough [17:31] because I can agree, why would leaking rsyslog needs changing be a security hole [17:32] jam1: but then again, you never can tell where you might leak information in an indirect way that gives an attacker just enough advantage [17:34] jam1: so the correct permissions check is [17:35] authorizer.AuthMachineAgent() ? [17:35] voidspace: the UnitAgent doesn't run rsyslog, right? [17:35] then AuthMachineAgent [17:36] * perrito666 wishes lbox didn 't h a n g t h e w h o lemachinewhilerunning [17:37] perrito666: get a better machine ;) lbox does do a lot of stuff, and Linux (Unity? not sure the culprit) does have the unfortunate bug where if you're hammering the CPU, it tends to freeze up everything. [17:37] so, the test now passes... [17:37] with that change in place [17:37] time to run all the tests and go jogging [17:38] natefinch: perhaps a less mobile processor, but it is hard to go back from an extremely lightweight machine === circ-user-UuLvN is now known as mramm [17:44] hey all! [17:44] hey mramm [17:44] ODS demo yesterday was #*(%ing amazing [17:44] great work everybody [17:45] mramm: yeah it was. I watched it this morning. Damn awesome. [17:46] natefinch: how hard would it be for you to do a sprint for a few days in a couple of weeks? (I know it is both short notice, and close to the last juju sprint temporaly speaking) [17:46] natefinch: and am I remembering right that you/your squad have Windows and CentOS on your work list for this cycle? [17:47] mramm: yeah, we have non-ubuntu workloads [17:47] mramm: as for a sprint, if it's within commuting distance (like at the Lexington, MA office), then it's cool. Going elsewhere and my wife might kill me. [17:48] well, let me talk to the cloudbase guys [17:50] and see if I can get them to come back to the US [17:51] It's too late (I think) to keep them here after ODS [17:55] who has the resources work on their schedule? [17:56] hey, did we add a fsmount type of rpc_bindfs wrt bind mounting local filesystems on local provider? 
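Roughly the shape of the permission gate settled on above at [17:34]-[17:35] (machine agents only, via AuthMachineAgent), with simplified stand-in types rather than the actual juju-core apiserver facade types:

package rsyslogapi

import "errors"

// Stand-ins for the real apiserver plumbing; names and shapes are assumptions.
type Authorizer interface{ AuthMachineAgent() bool }
type Watcher interface{ Changes() <-chan struct{} }
type Resources interface{ Register(Watcher) string }
type Backend interface{ WatchAPIHostPorts() Watcher }

var errPerm = errors.New("permission denied")

type RsyslogAPI struct {
	st         Backend
	resources  Resources
	authorizer Authorizer
}

// WatchForRsyslogChanges rejects callers that are not machine agents, then
// registers a watcher over the API host/port addresses and returns its id.
func (api *RsyslogAPI) WatchForRsyslogChanges() (string, error) {
	if !api.authorizer.AuthMachineAgent() {
		return "", errPerm
	}
	return api.resources.Register(api.st.WatchAPIHostPorts()), nil
}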
[17:58] mramm: Onyx (Tim) has resources, looks like [17:58] mramm: I think that's thumper's team === vladk is now known as vladk|offline [18:10] The HP archive issue is resolved. Juju is confirmed to be fine [18:10] http://juju-ci.vapour.ws:8080/ [18:10] ^ if we fix the i386 and ppc unit tests, every test would have passed [18:11] sinzui: nice! [18:17] wwitzel3: I fixed the auth issues [18:17] wwitzel3: but there's a bunch of failing tests in state/api/rsyslog/rsyslog_test.go [18:18] no, state/apiserver/rsyslog/rsyslog_test.go [18:18] wwitzel3: they look like the same failure we've fixed before ("No state server machines found") [18:18] wwitzel3: but I have to EOD [18:19] wwitzel3: all my changes are pushedc [18:19] *pushed [18:19] wwitzel3: so have fun :-) [18:19] g'night everyone [18:19] EOD [18:20] night voidspace [18:20] nite vds [18:20] agh [18:21] nick autocomplete should work after they left :) [18:21] heh [18:29] natefinch: did we add a fsmount type of rpc_bindfs wrt bind mounting local filesystems on local provider? [18:29] i ran into a corner case with apparmor not having the correct profile defition that blocked me from spinning up lxc containers this morning - working with ty helped me get it sorted. [18:29] *Definition [18:33] lazyPower: interesting... we haven't twiddled with the local provider code very much recently. I don't believe we've changed anything about the filesystem, or anything like that... though I know there's been some lxc/apparmor problems in the past [18:34] natefinch: http://paste.ubuntu.com/7463805/ [18:34] this is what i found - and it only happens when deploying local charms. if i specify CS charms i don't see the behavior. [18:35] if we made some alterations to lxc without getting the apparmor profile updated, i was requested to forward on the bug so they can SRU the apparmor fixes. just let me know either way and i'll put in the due diligence. [18:37] lazyPower: hmm... I really doubt we mucked with lxc lately.... can you try just spinning up a container manually and make sure it works without juju? [18:38] natefinch: i only encountered the issue when using juju to deploy a local charm. otherwise it had no issues. [18:38] i've created lxc containers while debugging, that didn't exhibit the issue [18:39] lazyPower: weird. Ok. There must be something we're doing with the local charms that is making apparmor mad.... if you can make a bug for it, that would be great. Obviously it's something we'll need to fix ASAP [18:39] ack. I'll add what i've got - which isn't much. but i can comment out the apparmor patch and reproduce. [18:45] natefinch: https://bugs.launchpad.net/juju-core/+bug/1319525 [18:45] <_mup_> Bug #1319525: juju-local LXC containers hang due to App Armor Denial of rpc_fsbind request with local charms [18:46] lazyPower: awesome, thanks [18:47] Thanks for taking a look. I hope it's just a corner case that I've found - and not something that's out there in the wild.
[18:51] if anyone has literally 2 seconds https://codereview.appspot.com/97370044 === vladk|offline is now known as vladk [19:17] neat, I may finally get to use the Windows named pipe package that I wrote for Gustavo during(ish) my interview === andreas__ is now known as ahasenack [21:04] morning all === vladk is now known as vladk|offline [21:20] morning all :) [21:35] waigani: howdy [21:35] menn0: iirc, agent.conf from the old machine is put in place and services restarted on those ports [21:35] menn0: the old agent.conf is ignored [21:41] perrito666: ok, so the agent.conf that awk is manipulating isn't the one from the backup? [21:46] * menn0 pulls branch to have a closer look [21:46] menn0: it is the one from the backup [21:47] what it currently does is it untars the backup over the fresh mashine [21:47] machine* [21:47] and then makes some surgery there [21:47] :p [21:47] yep I get that [21:48] but if somehow the ports in the .jenv on the host that's running "juju restore" don't match what's in the backup then the user will never be able to connect to the env after it's bootstrapped [21:48] I know this should really never happen [21:49] sorry, I meant after it's restored, not after it's bootstrapped [21:49] mmm, you mean if the ports change between the backup and the new machine? [21:50] I'm thinking: backup is taken, someone monkeys with the ports, then restore is done [21:51] shouldn't happen I know [21:51] menn0: mm, true, that is a story I am really not so happy to support for the moment [21:51] fair enough. [21:51] you have your LGTM anyway :) [21:51] but, really not that far from where I am [21:51] so I think I can give your idea a shot until sinzui finishes the test :) [21:52] just refusing to restore with an error should be enough, I would have thought [21:52] especially given that we currently don't support changing the API or state server ports once an env is bootstrapped [21:53] if the ports don't match between the .jenv and the backup then someone has done something they shouldn't have (or we have a bug) [22:04] thumper, is this bug in juju-core, or does it belong in ubuntu/trusty/+source/lxc https://bugs.launchpad.net/bugs/1319525 [22:04] <_mup_> Bug #1319525: juju-local LXC containers hang due to App Armor Denial of rpc_fsbind request with local charms [22:04] * thumper looks [22:10] sinzui: I'm unclear, but most likely lxc [22:10] sinzui: I asked lazyPower in the bug if he knows what caused the problems with the wordpress charm [22:11] thumper: it's any local charm. if i pull seafile and deploy it locally [22:11] it has the same denial message [22:11] i ran into this working the rev queue [22:11] ah... [22:11] I don't know the local charm area very well at all [22:12] I don't think it should trigger anything with lxc [22:12] sinzui: it could be either juju or lxc... [22:12] Is this issue different from https://bugs.launchpad.net/juju-core/+bug/1305280 [22:12] <_mup_> Bug #1305280: apparmor get_cgroup fails when creating lxc with juju local provider [22:12] it appears to be some kind of fs mounting voodoo, because the error it threw was rpc_bindfs [22:12] which immediately made me think of that feature we talked about in vegas about bind mounting the local fs, but i didn't know if we did anything with that as of yet. [22:13] sinzui: the second bug there appears to be nested lxc failing [22:13] which is expected [22:13] lazyPower: we don't yet [22:16] thumper, what makes you think the cgroup bug is lxc in lxc? [22:17] because that is the failure you get if you try?
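Going back to the restore discussion between perrito666 and menn0 earlier in this stretch: a small sketch in Go, with made-up types rather than the actual restore code, of "just refusing to restore with an error" when the ports recorded in the local .jenv don't match the ones in the backed-up agent.conf. The port values in main are illustrative only.

    package main

    import "fmt"

    // connInfo is a stand-in for the port-bearing part of both the .jenv on the
    // machine running "juju restore" and the agent.conf inside the backup.
    type connInfo struct {
        APIPort   int
        StatePort int
    }

    // checkRestorePorts refuses the restore up front when the ports disagree,
    // since changing them after bootstrap isn't supported and a mismatch would
    // leave the restored environment unreachable.
    func checkRestorePorts(local, backup connInfo) error {
        if local.APIPort != backup.APIPort || local.StatePort != backup.StatePort {
            return fmt.Errorf(
                "refusing to restore: ports in .jenv (api=%d state=%d) do not match backup (api=%d state=%d)",
                local.APIPort, local.StatePort, backup.APIPort, backup.StatePort)
        }
        return nil
    }

    func main() {
        local := connInfo{APIPort: 17070, StatePort: 37017}
        backup := connInfo{APIPort: 17070, StatePort: 37018} // someone monkeyed with the ports after the backup
        if err := checkRestorePorts(local, backup); err != nil {
            fmt.Println(err)
        }
    }

Doing the check before any surgery on the untarred backup matches menn0's point: since the scenario shouldn't happen, an early, explicit error is cheaper than supporting port changes during restore.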
[22:17] * thumper looks at the bug [22:18] sinzui: ok, could just be apparmor on ARM [22:18] don't think it is us [22:19] thumper, I am sure it isn't us. I just want to get someone other than us to look at the issue [22:30] rick_h_: you there? [22:30] * thumper calculates time zones [22:30] thumper: kinda [22:30] dinner time? [22:30] thumper: 6:30pm [22:30] rick_h_: got time for a 10 min chat? [22:31] yea, dinner is in the oven, what's up? [22:31] sure thing [22:31] cheers [22:31] rick_h_: https://plus.google.com/hangouts/_/gsuud6kuvi6fowv6i3dfnpwsbqa?hl=en [23:01] lol [23:01] thumper: all good, let me know if you want to chat more, I want to make sure we get it right [23:02] rick_h_: I'll start documenting what we need and keep you in sync [23:02] we are done for today [23:05] thumper: k, thanks for the chat [23:27] thumper: how dare you make me think about stuff :P [23:38] rick_h_: np... more coming