[00:05] <menn0> perrito666: hi
[00:12] <jcw4> thumper: one more revision on http://reviews.vapour.ws/r/127/.  Addressed all your feedback.
[00:14] <thumper> jcw4: ack
[00:33] <jcw4> thanks thumper
[00:48] <davecheney> thumper: do you have 2 minutes for the ppc64 postgame ?
[00:48] <davecheney> make that 10
[00:48] <thumper> davecheney: sure, in a few minutes? just want to propose this branch
[00:49] <davecheney> kk
[00:49] <davecheney> i'll jump in the standup hangout
[00:49] <davecheney> i'll see you there when you're ready
[00:56] <thumper> menn0,davecheney: http://reviews.vapour.ws/r/133/diff/
[00:57] <davecheney> thumper: worst, description, ever
[01:19] <davecheney> thumper: LGTM, i added some nits
[01:19] <davecheney> you can ignore them if they are inconvenient
[01:19] <thumper> ta
[01:19] <menn0> thumper: I'm still looking at other reviews
[01:20] <davecheney> thumper: command looks good
[01:20] <davecheney> i like the little touches like sorting unknown values in the error message
[01:20] <davecheney> and printing a single return argument differently
[01:20] <davecheney> this was clearly a labour of love
[01:21] <davecheney> shit, winton-09 is completely screwed
[01:21] <davecheney> i'm going to try a chroot to get back to a normal trusty install
[01:23] <menn0> cmars: I'm done reviewing the login PR (got interrupted by a furniture delivery, sorry)
[01:24] <menn0> cmars: looks pretty good except for what happened to the "maintenance in progress" handling which is now broken I think.
[01:26] <thumper> davecheney: still needs a ship it :)
[01:26] <thumper> davecheney: yeah, I like to care how it feels to run the command
[01:27] <menn0> thumper: I'm looking at the command now
[01:31]  * thumper frowns
[01:31] <thumper> why isn't rb updating?
[01:31] <thumper> menn0: is `rbt post -u` the right thing?
[01:33] <menn0> thumper: it often doesn't. I think it just uses the hash of the topmost rev on your branch to find the review to update
[01:33] <menn0> if you've rebased or added more commits I don't think it works
[01:33] <menn0> use -r instead
[01:38] <davecheney> thumper: rbt 0r NNN
[01:38] <davecheney> thumper: rbt -r NNN
[01:38] <davecheney> otherwise you get a new review
[01:38] <thumper> yeah, I think I got a new review
[01:39] <wwitzel3> menn0: that script worked great btw, was able to replicate the errors, still don't have a fix yet, but trying to get there.
[01:39] <menn0> wwitzel3: well at least you can replicate it. that's half the battle :)
[01:41] <wwitzel3> menn0: at this point it was way less than half the battle, lol ;)
[01:41] <menn0> wwitzel3: :)
[01:41] <menn0> thumper: I have lots of feedback on that PR... still going
[01:41] <thumper> menn0: really?
[01:41] <menn0> thumper: yep
[01:41] <thumper> geez
[01:42] <thumper> menn0: not sure what your review mentor is going to think :-)
[01:42] <menn0> thumper: :)
[01:43] <menn0> thumper: my issues are not huge but I think the code can be simplified a bit
[01:52] <menn0> thumper: done
[01:52] <thumper> ta
[01:54] <thumper> menn0: one reason I decided not to start with an example showing everything is because the output is really long
[01:54] <thumper> the cacert takes up approx 25 lines
[01:55] <menn0> thumper: ok fair enough. Could you shorten the cacert output in the example?
[01:55] <thumper> sure... how?
[01:55] <thumper> ... maybe
[01:56] <menn0> thumper: yeah or "start ... end"
[01:56] <menn0> substituting in what the start and end actually look like
[01:57]  * thumper nods
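The truncation menn0 suggests can be sketched as a tiny helper — a hypothetical `shortenCert` that keeps only the first and last lines of a long PEM block, matching the "start ... end" idea:

```go
package main

import (
	"fmt"
	"strings"
)

// shortenCert keeps the first and last lines of a long multi-line
// cert block, replacing the ~25 lines in the middle with an ellipsis.
func shortenCert(pem string) string {
	lines := strings.Split(strings.TrimSpace(pem), "\n")
	if len(lines) <= 2 {
		return pem
	}
	return lines[0] + "\n...\n" + lines[len(lines)-1]
}

func main() {
	cert := "-----BEGIN CERTIFICATE-----\nMIIBbase64data\nmorebase64data\n-----END CERTIFICATE-----"
	fmt.Println(shortenCert(cert))
}
```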
[03:41] <davecheney> thumper: sniffing for getpagesize(3) didn't work
[03:41] <davecheney> it turns out that libgo already asks the os for the page size when _allocating_ heap via mmap(2)
[03:41] <davecheney> it just doesn't use the page size when returning memory via madvise(2) :(
[03:41] <thumper> damn
[03:42] <davecheney> so, there goes the easy route
[03:43] <davecheney> thumper: it might be easier to write a program that compiles under gccgo
[03:43] <davecheney> and can introspect its own runtime
[03:43] <davecheney> 'cos it's not really juju that's busted
[03:43] <davecheney> it's the libgo
[03:43] <davecheney> ie, write a test program that exercises libgo
[03:43] <davecheney> but that sounds counter productive
[03:44] <thumper> are you able to ask it the right questions?
[03:44] <davecheney> ie, if you have to install something to test if you have a shitty libgo
[03:44] <davecheney> then why not make the program depend on the _good_ libgo
[03:44] <davecheney> and by installing it
[03:44] <davecheney> fix the problem
[03:44] <thumper> :)
[03:44] <davecheney> the test program would aggravate the libgo bug
[03:44] <davecheney> allocate lots of random structs full of pointers
[03:44] <davecheney> then aggressively try to free memory to agitate the scavenger
[03:45] <davecheney> cute exercise
[03:45] <davecheney> but probably tangential to giving the charmers a tool they can use
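davecheney's proposed exerciser could look roughly like this — a minimal sketch with invented names and sizes, which allocates pointer-heavy structs and then forces memory back to the OS, the path where a buggy libgo ignores the real page size in madvise(2):

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// node is a pointer-heavy struct, the kind that keeps the GC busy.
type node struct {
	next *node
	data *[128]byte
}

// churn builds a linked list of n pointer-laden structs, keeping them
// all live until it returns, and reports how many it allocated.
func churn(n int) int {
	var head *node
	count := 0
	for i := 0; i < n; i++ {
		head = &node{next: head, data: new([128]byte)}
		count++
	}
	return count
}

func main() {
	// Allocate lots of structs full of pointers...
	fmt.Println("allocated:", churn(100000))
	// ...then aggressively hand memory back to the OS, exercising the
	// scavenger's madvise(2) path where the libgo bug lives.
	debug.FreeOSMemory()
}
```

In a real probe for the bug, the interesting part is whether FreeOSMemory crashes or misbehaves on the broken libgo; a healthy runtime just returns normally.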
[03:48] <thumper> :)
[03:48]  * davecheney goes to make lunch
[04:06] <cmars> calling it a night y'all. menn0, thanks for the review, let me know how it looks now if you get the chance.
[04:07] <menn0> cmars: ok good night. I'll try to have another look before I EOD
[04:22] <axw> ericsnow: you tested backup downloads with a validating TLS client? last time I checked, the certs we use didn't support it
[04:24] <ericsnow> axw: I'm pretty sure the trick is to explicitly set the root CA
[04:25] <ericsnow> axw: Michael Foord sorted it out right before he handed backups over to me
[04:25] <ericsnow> axw: it's certainly conceivable that I've missed something though :)
[04:26] <axw> hmm, thought I tested that. there was another error about IP SANs being missing, IIRC
[04:26] <ericsnow> axw: I'll look into that first thing tomorrow
[04:27] <axw> thanks
[04:27] <ericsnow> axw: thanks for bringing it up :)
[04:27] <axw> it matters for backups more than it does tools... it may matter for charms soon, though
[04:27] <axw> nps
[04:28] <ericsnow> axw: FWIW, I see value in consolidating the HTTP request code that tools, charms, and backups have in common at some point in the future
[04:29] <axw> ericsnow: agreed
[04:29] <axw> it grew a bit organically, needs some refactoring
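The "explicitly set the root CA" trick ericsnow mentions is the standard Go pattern: load the server's CA certificate into an x509.CertPool and set it as RootCAs on the client's tls.Config, rather than relying on the system roots. A minimal sketch (the PEM input here is a placeholder for the real CA cert):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
)

// tlsConfigWithCA builds a client TLS config that validates the server
// against the given CA certificate instead of the system roots.
func tlsConfigWithCA(caCertPEM []byte) (*tls.Config, error) {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caCertPEM) {
		return nil, errors.New("invalid CA certificate PEM")
	}
	return &tls.Config{RootCAs: pool}, nil
}

func main() {
	// Garbage PEM yields an error; with the real CA cert the returned
	// config would go into an http.Transport for the download client.
	_, err := tlsConfigWithCA([]byte("not a cert"))
	fmt.Println(err)
}
```

axw's IP SAN error is a separate issue: when dialing by IP address, the certificate must list that IP in its subject alternative names, or the client must set tls.Config.ServerName to a hostname the certificate does carry.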
[04:29] <ericsnow> axw: I dabbled with it a month or two ago and revisited it a tiny bit today
[04:32] <ericsnow> anyway, I'm EOD
[04:32] <axw> ericsnow: night, see you in Brussels
[04:32] <axw> (I'm leaving tonight)
[04:34] <thumper> OMG, trying SO hard not to fix everything in this file...
[04:34]  * thumper makes a note to come back later
[05:14] <davecheney> thumper: did the best i could on a straightforward way to detect bad libgos
[05:14] <davecheney> given it was time boxed and we're probably going to fix it with the dpkg hammer
[05:14] <thumper> ok, cool
[05:15] <davecheney> i've also been able to demonstrate that all shipping versions of trusty are broken out of the box
[05:15] <davecheney> which blows
[05:25] <thumper> :-(
[05:40] <thumper> (╯°□°)╯︵ ┻━┻
[05:40] <thumper> just some casual flipping
[05:40] <thumper> why didn't I complain more before...
[05:40]  * thumper blames himself
[05:40] <wwitzel3> ┬─┬﻿ ノ( ゜-゜ノ)
[05:40] <thumper> (╯°□°)╯︵ ┻━┻
[05:41] <wwitzel3> (╯°□°)╯︵ ┻━┻ ︵ ╯(°□° ╯)
[05:42] <thumper> :)
[05:42] <thumper> (╯°Д°）╯︵ /(.□ . \)
[05:51]  * thumper EODs
[05:51] <thumper> night all
[05:52] <wwitzel3> o/ thumper
[07:35] <TheMue> morning
[09:11] <dimitern> jam, hey
[09:12] <dimitern> jam, it seems while we're trying to reach a consensus on how agent apis need to change, others are still adding stuff to the uniter facade :)
[09:13] <dimitern> jam, I might as well land my changes and at least finish with port ranges for now
[09:13] <jam> dimitern: well, I was hoping people would actually join the conversation, lack of objection means we need to do the versioning work
[09:14] <dimitern> jam, you mean enforce it somehow?
[09:14] <jam> dimitern: I mean actually bump the version
[09:14] <dimitern> jam, as it is now?
[09:16] <jam> dimitern: as in we need to split it up and start a new version for new content. At least, that is what I recommended and nobody said "lets not do that"
[09:17] <dimitern> jam, ok, I might as well do it, yeah
[09:19] <fwereade> dimitern, first thing that springs to mind on the uniter.Machine stuff is -- do we really need to use the remote-object model here? just an AllMachinePorts (or something) method would seem to be all we really need here
[09:20] <fwereade> dimitern, the only reason we have this *Unit style is because we didn't want to rewrite *everything* -- but I don't see a big win in writing new code in the same style
[09:22] <dimitern> fwereade, ok, I can agree with this
[09:22] <dimitern> fwereade, I was even thinking of having AllMachinePorts taking unit tags and returning all ports for each given unit tag's assigned machine
[09:28] <dimitern> fwereade, the more painful issue is what to do about api versioning? I was going to refactor the first PR so it has UniterBaseAPI (having GetOwnerTag which is replaced by ServiceOwner in V1), UniterAPIV0 (with everything but the new methods - AssignedMachine and AllMachinePorts), and UniterAPIV1 { UniterAPIV0 } (having the new calls)
[09:29] <dimitern> basically what I did for the FirewallerAPI before the ports migration upgrade step
[09:30] <fwereade> dimitern, yeah, I think we really should be doing that, annoying though it may be
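The UniterAPIV1 { UniterAPIV0 } layout dimitern describes is plain Go struct embedding: V1 inherits every V0 method and only adds the new calls. A toy sketch — method names are taken from the discussion, but the signatures and return values are invented:

```go
package main

import "fmt"

// UniterAPIV0 carries everything but the new methods.
type UniterAPIV0 struct{}

func (UniterAPIV0) GetOwnerTag() string { return "service-owner-v0" }

// UniterAPIV1 embeds V0, so it keeps all the V0 methods and adds the
// new calls (AllMachinePorts here, as discussed).
type UniterAPIV1 struct {
	UniterAPIV0
}

func (UniterAPIV1) AllMachinePorts() []string { return []string{"80-90/tcp"} }

func main() {
	v1 := UniterAPIV1{}
	fmt.Println(v1.GetOwnerTag())     // inherited from V0
	fmt.Println(v1.AllMachinePorts()) // new in V1
}
```

The facade registry then maps version 0 to UniterAPIV0 and version 1 to UniterAPIV1, so old clients keep working unchanged.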
[09:32] <dimitern> fwereade, ok, I'm on with it already; and re AllMachinePorts taking unit tags? With this, I won't need to add AssignedMachine separately as well
[09:32] <fwereade> dimitern, that works for me, I think, but: gsamfira, are you doing an AssignedMachine call for reboot?
[09:33] <fwereade> dimitern, gsamfira: because finding out what machine you're on can just be one call when we bring up a uniter -- that's not going to change for the lifetime of the process, *whatever* happens ;)
[09:34] <dimitern> fwereade, well I'm calling it in NewHookContext, but I guess I could do that at uniter startup once
[09:35] <gsamfira> fwereade: yep, doing the same. If I can skip it, it would be great
[09:36] <fwereade> dimitern, gsamfira: ok, as long as one of you does it I'm easy :)
[09:36] <fwereade> dimitern, if you're already on that path it might be easiest for you?
[09:36] <dimitern> gsamfira, fwereade, yeah, I've just seen the AssignedMachine PR, well if mine lands before yours you can just use it :)
[09:37] <dimitern> and v.v.
[09:38] <dimitern> fwereade, right, so AllMachinePorts will take machine tags, as it is now, but it will be a top-level method
[09:38] <fwereade> dimitern, perfect
[09:39] <fwereade> dimitern, and would you make sure it includes per-relation ports? it's fine to have -1 for everything for now, but I'd prefer not to have to change the api when we introduce them
[09:40] <dimitern> fwereade, it does not even include network tags now, just unit tags and port ranges lists
[09:41] <dimitern> fwereade, per-relation ports are not in state yet, but when we later introduce them we can just add a few fields to the result
[09:42] <dimitern> fwereade, the point of having AMP() is to have a cache of all ports, regardless of network or relation, so we can verify against them each time open(close)-port is called
[09:43] <dimitern> although.. hmm.. yeah this only works because all ports are on the same network now
[09:44] <fwereade> dimitern, yeah, exactly
[09:44] <dimitern> fwereade, how about we return both []slice{UnitTag, NetworkTag, and PortRange} as result, and when per-relation ports are introduced, we add another api call to get the network of a relation endpoint?
[09:45] <fwereade> dimitern, but don't we need to know the *relation* for which a port has been opened? and to be able to infer the network from the relation, not vice versa?
[09:45] <dimitern> fwereade, I don't think it's the other way around
[09:46] <dimitern> fwereade, port conflicts occur on networks, the relation is just a way to determine the network to use
[09:47] <dimitern> fwereade, ah, so RelationNetwork() call can be added in the future, but right now I don't know for which relation a port was opened
[09:48] <dimitern> fwereade, perhaps I can add a RelationTag in addition to the UnitTag and PortRange to the result, but just leave it blank for now and put a comment
[09:48] <fwereade> dimitern, right, but opening the same port on two different relations is *not* necessarily a conflict, is it?
[09:48] <dimitern> fwereade, only if it's the same network
[09:49] <fwereade> dimitern, no, even if it's the same network it's not a conflict
[09:49] <fwereade> dimitern, I think we can trust a charm to handle its own potential collisions
[09:49] <fwereade> dimitern, but we need to track it to know when we *actually* need to open/close a port
[09:49] <fwereade> dimitern, consider mysql serving db over two different relations
[09:50] <fwereade> dimitern, we'll want to close the ports on one relation when that relation dies
[09:50] <fwereade> dimitern, but actually keep the ports open on that *network*
[09:50] <fwereade> dimitern, because there's another relation that wants them open
[09:50] <dimitern> fwereade, yeah, but if these 2 relations are on the same network it is a conflict to try open-port 80-90/tcp
[09:51] <dimitern> fwereade, what I don't get is how is that not a conflict
[09:52] <fwereade> dimitern, but it can't be, can it?
[09:52] <fwereade> dimitern, consider the mysql charm
[09:52] <fwereade> dimitern, if they conflict, how can you have more than one relation with its server endpoint?
[09:53] <dimitern> fwereade, wait, I'm trying to follow, but I still don't quite get it..
[09:53] <dimitern> fwereade, say mysql:db and mysql:cluster are both bound to juju-public; mysql listens on 80-90/tcp for db and tries to do the same for cluster
[09:54] <dimitern> fwereade, or you're saying in this case we let the charm handle this and trust it not to try opening 80-90/tcp for both relations?
[09:54] <fwereade> dimitern, I'd been thinking more about having two separate relations with db -- but, yeah
[09:55] <fwereade> dimitern, no, I'm saying that the charm must be allowed to open those ports for both its relations
[09:55] <fwereade> dimitern, and that we open a port on a given network when there's one or more relations on that network that have declared they use that port
[09:55] <fwereade> dimitern, and close it only when all relations on that network have closed it
[09:56] <dimitern> fwereade, *now* I get it :)
[09:56] <fwereade> dimitern, sorry unclear :)
[09:56]  * fwereade lunch
[09:56] <dimitern> fwereade, cheers!
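fwereade's rule — open a port on a network when one or more relations declare it, close it only when the last of them closes it — is effectively reference counting. A sketch under those assumptions (all type and method names here are invented for illustration):

```go
package main

import "fmt"

// portKey identifies a port range on a particular network.
type portKey struct {
	network string
	ports   string // e.g. "80-90/tcp"
}

// portTracker records which relations hold each port range open; the
// range is actually open on the network while the holder set is non-empty.
type portTracker struct {
	holders map[portKey]map[string]bool // key -> relation ids
}

func newPortTracker() *portTracker {
	return &portTracker{holders: make(map[portKey]map[string]bool)}
}

// Open records that relation rel wants the ports open; it reports
// whether the network port range actually needs opening now.
func (t *portTracker) Open(network, ports, rel string) bool {
	k := portKey{network, ports}
	first := len(t.holders[k]) == 0
	if t.holders[k] == nil {
		t.holders[k] = make(map[string]bool)
	}
	t.holders[k][rel] = true
	return first
}

// Close records that rel no longer needs the ports; it reports whether
// the network port range should actually be closed now.
func (t *portTracker) Close(network, ports, rel string) bool {
	k := portKey{network, ports}
	delete(t.holders[k], rel)
	return len(t.holders[k]) == 0
}

func main() {
	pt := newPortTracker()
	fmt.Println(pt.Open("juju-public", "80-90/tcp", "db"))       // true: first holder, really open it
	fmt.Println(pt.Open("juju-public", "80-90/tcp", "cluster"))  // false: already open on this network
	fmt.Println(pt.Close("juju-public", "80-90/tcp", "db"))      // false: cluster still holds it
	fmt.Println(pt.Close("juju-public", "80-90/tcp", "cluster")) // true: last holder gone, really close it
}
```

This matches the mysql example: when the db relation dies its close is a no-op at the network level, because the cluster relation still holds the range open.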
[09:58] <TheMue> dimitern: got a few seconds for me?
[10:00] <dimitern> TheMue, sure
[10:00]  * TheMue fights with a race condition since yesterday
[10:00] <dimitern> TheMue, oh, what's the issue?
[10:00] <TheMue> dimitern: a test, this one https://github.com/TheMue/juju/blob/networker-mode-based-on-agent-api/cmd/jujud/machine_test.go#L1042
[10:02] <TheMue> dimitern: IMHO the problem lies in the way a.Run() runs in the background. first I had the patching and the channel declaration outside the loop, but then the tests failed even more often
[10:03] <dimitern> TheMue, looking
[10:03] <TheMue> dimitern: thanks
[10:05] <TheMue> dimitern: hoped that selecting on the unbuffered modeCh would be enough to ensure that only one of the tests runs at a time
[10:05] <dimitern> TheMue, ISTM the default: case inside the patched func might be the problem
[10:06] <TheMue> dimitern: aaargh, sure, "hey, I cannot send, no problem, will still continue"
[10:07] <TheMue> dimitern: will check, thanks
[10:08] <jam> TheMue: if you are going to do something like that, can you include a "<-time.After(testing.LongWait)" instead of default ?
[10:08] <jam> the idea is that a test should fail rather than get hung
[10:08] <jam> (it should be case <-time.After(...): c.Fatalf())
[10:08] <TheMue> jam: yep, like for the receiving below
[10:08] <jam> yeah
[10:10] <dimitern> TheMue, apart from that I have a few comments re naming and logging
[10:11] <TheMue> dimitern: always happy about feedback
[10:11] <dimitern> TheMue, i.e. please don't include test #%d: in the description, do it in the loop; s/descr/about/; s/disable/managedNetworking/ ?
[10:13] <TheMue> dimitern: ok
[10:15] <mattyw> umm, did review board just die?
[10:16] <mattyw> nope - was just slooooow
[10:48] <TheMue> dimitern, jam: no talk today? ;)
[12:14] <fwereade> dimitern, just looking at that second review, I think what we discussed this morning applies re what we have to store -- even if we're doing everything with the implicit expose relation with id -1, I think the data changes follow quite naturally
[12:14] <fwereade> dimitern, does it need a deeper look now, or can it wait until we've got that stuff in place?
[12:15] <fwereade> dimitern, (not on the model side yet, I know)
[12:16] <dimitern> fwereade, I'm almost done with the first PR changes - api versioning, and I'll propose it soon, but the second PR should be mostly the same (except for a few changes), so I'd appreciate if you look into it
[12:16] <fwereade> dimitern, ok, cool
[12:18] <TheMue> dimitern: btw, the waitStopped() usage has been the solution
[12:20] <dimitern> TheMue, sweet!
[13:19] <mattyw> ericsnow, ping?
[13:24] <perrito666> mattyw: he usually arrives in the next half an hour
[13:24] <mattyw> perrito666, ok - it's not urgent
[13:24] <mattyw> perrito666, as you're normally happy to listen to my insanity....
[13:25] <mattyw> perrito666, I think there's a problem with the review board clock: http://reviews.vapour.ws/r/138/
[13:25] <mattyw> perrito666, look at the time the review was published and the time for the review comments
[13:29] <perrito666> mattyw: my account -> settings -> my timezone
[13:29] <mattyw> perrito666, got it
[13:58] <perrito666> hey, has anyone installed 14.10?
[13:58] <perrito666> I am tempted to break my machine before traveling :p
[14:04] <TheMue> ping *
[14:04] <perrito666> TheMue: I am not sure what to answer to that
[14:04] <TheMue> anyone with a fresh checkout of master able to do a test for me?
[14:04] <TheMue> perrito666: you answered, so caught you *lol*
[14:05] <TheMue> perrito666: has been a channel trap
[14:05] <ericsnow> perrito666, wwitzel3: standup?
[14:05]  * perrito666 deflects TheMue with "I dont have a fresh copy" :p
[14:05] <perrito666> a couple of days old
[14:06]  * TheMue grumbles and thinks about a new tactic
[14:06] <TheMue> perrito666: just a run of go test in .../juju/worker
[14:07] <TheMue> perrito666: I've got failing tests there and they don't look as if they have anything to do with my changes. so I switched to a master branch and still have the error.
[14:10] <perrito666> TheMue: upgrading
[14:10] <TheMue> perrito666: thx
[14:11] <perrito666> TheMue: running
[14:12] <jcw4> TheMue: what is the error?
[14:14] <dimitern> jam, TheMue, fwereade, if you're still around, please take a look at the updated, versioned uniter API http://reviews.vapour.ws/r/123/diff/2/
[14:14] <fwereade> dimitern, cheers
[14:15] <TheMue> jcw4: multiple tests expect a string "setup"
[14:15] <TheMue> jcw4: oh, ic, may have to do with actions
[14:15] <jcw4> TheMue: running now
[14:17] <jcw4> TheMue: I often get errors in the peergrouper tests that seem to be only on my machine...
[14:18] <perrito666> TheMue: FAIL: filter_test.go:293: FilterSuite.TestConfigEvents
[14:19] <perrito666> TheMue: that is master with deps up to date
[14:19] <TheMue> perrito666: oh, this error is unknown to me, strange
[14:19] <jcw4> perrito666: that one is suspiciously close to actions stuff
[14:19] <jcw4> I was seeing it too, but only sometimes, and it didn't happen on the build server
[14:19] <jcw4> perrito666: are you using go 1.3.x
[14:19] <jcw4> ?
[14:20] <TheMue> perrito666: mine are in notifyWorkerSuite and stringsWorkerSuite
[14:21] <jcw4> TheMue: can you pastebin the output?
[14:21] <TheMue> jcw4: it's always s.actor.CheckActions(c, "setup") that fails here
[14:21] <jcw4> TheMue: hmm; that isn't related to *our* Actions AFAIK
[14:22] <TheMue> jcw4: http://paste.ubuntu.com/8472971/
[14:22] <TheMue> jcw4: and it seems to be flaky, this time three fails, but also have seen four fails in other runs
[14:23] <jcw4> TheMue: are you using go 1.3.x too?
[14:23] <jcw4> I've seen tests behave more flaky in 1.3.x than the official 1.2.x that the build server uses
[14:24] <jcw4> other than that my only test failures are in workerSuite.TestSetMembersErrorIsNotFatal
[14:24] <jcw4> which I get often
[14:25] <dimitern> fwereade, btw the diff is split in 2 http://reviews.vapour.ws/r/123/diff/2/ - second page (in case you were wondering like me :)
[14:25] <TheMue> oh, 1.2.1, will update
[14:25] <dimitern> fwereade, oops I meant http://reviews.vapour.ws/r/123/diff/2/?page=2
[14:25] <jcw4> TheMue: I think 1.2.1 is the official version that's supported
[14:29] <natefinch> morning everyone
[14:31] <jcw4> hi natefinch :)
[14:33] <dimitern> morning natefinch
[14:33] <bodie_> morning
[14:37] <TheMue> o/
[14:46] <dimitern> fwereade, re opening and closing all the pending ports in one API call.. I'd rather leave this as is and do a follow-up later that introduces a FinalizeHookContext API call to do all changes in one call
[14:46] <dimitern> it will be easier once the versioning code lands
[14:57] <TheMue> jcw4: same failures with 1.3.3
[14:57] <jcw4> TheMue: that's really strange
[14:57] <TheMue> indeed
[14:58] <jcw4> I don't see an obvious connection, but sometimes I have half a dozen borked mongo instances running after the tests, and if I killall of them and re-run my tests run fine
[15:01] <alexisb> TheMue, I am on the hangout and ready when you are
[15:02] <TheMue> alexisb: ouch, missed it, omw
[15:02] <alexisb> no rush TheMue
[15:53] <ericsnow> could I get a second pair of eyes on http://reviews.vapour.ws/r/132/ (katco was kind enough to review it first)
[15:58] <katco> ericsnow: i am enjoying reading your code. very nice. can't claim i understand all of it, but it's readable :)
[15:58] <ericsnow> katco: achievement unlocked
[15:58] <jcw4> katco: +1, I reviewed and liked the code - didn't have any feedback to give other than nice.
[15:59] <katco> ericsnow: lol
[15:59] <jcw4> ericsnow: nice :)
[16:33] <bodie_> rick_h_, you around?
[16:34] <bodie_> we're nailing down actions api stuff and thinking forward to the golden spike
[16:49] <rick_h_> bodie_: how goes?
[16:52] <bodie_> rick_h_, goes well, I think we're pretty close to (or already have?) a first rendition API landed ( jcw4? ) and just need to hash out a few more details in the doc
[16:52] <bodie_> rick_h_, I'm just thinking about the sprint and organizing the next steps
[16:52] <rick_h_> bodie_: sure thing
[16:53] <jcw4> yep, the API has landed
[16:53] <jcw4> it's not implemented yet, but the outline is there
[16:54] <rick_h_> cool
[16:54] <jcw4> rick_h_: the one question I had...
[16:54] <bodie_> rick_h_, TheMue was saying maybe we could arrange a meeting of spirits next week, so I got thinking about paving the way to a demo
[16:54] <jcw4> It makes sense for information about available actions to be exposed through the CharmInfo
[16:54] <rick_h_> bodie_: you guys at the sprint or calling in?
[16:54] <jcw4> but IIRC that is all in YAML format
[16:55] <bodie_> we'll be remote
[16:55] <jcw4> rick_h_: we don't have any information for calling in
[16:55] <rick_h_> jcw4: right, but remember we need json for the gui.
[16:55] <jcw4> rick_h_: that was my question
[16:55] <rick_h_> jcw4: bodie_ ok, just curious what you're looking at setting up
[16:56] <rick_h_> jcw4: yea, so we don't currently have a JS yaml parser and json is the way to go for most things and we're still looking at trying to use jsonschema for the validation/ui part
[16:56] <jcw4> rick_h_: we thought about a special purpose bridge between the charminfo that exposes the actions json info through the actions API as JSON
[16:57] <jcw4> rick_h_: I'm not clear though: when you consume Juju API calls on the front end, does it come in as JSON anyway?  Sounds like not?
[16:57] <rick_h_> so right now we do it over the websocket and get json payloads in there I believe?
[16:58] <jcw4> thats what I was thinking
[16:58] <jcw4> so wouldn't that automatically convert the charminfo stuff to the json format you need?
[16:58] <jcw4> or am I missing a piece ?
[16:58] <rick_h_> jcw4: no idea, I'd have to look at the core side to see how it's turning that data into json over the websocket
[16:59] <jcw4> rick_h_: I'd like to dip my toe in the front end code myself
[16:59] <rick_h_> jcw4: ok, np.
[16:59] <jcw4> maybe I'll hit you up with some questions as I attempt to get started
[16:59] <rick_h_> jcw4: if you want we can setup a hangout sometime and I can show you where our client code is, how we debug over the websocket, etc
[16:59] <jcw4> that would be great
[17:00] <rick_h_> jcw4: I don't know where that lives on the -core side. frankban and Makyo have more of an idea but hopefully easy to find
[17:00] <rick_h_> jcw4: and in theory if you've got a build of juju you could test it out and load the gui/talk to it and get a feel for it
[17:00] <bodie_> I believe the web client connects to the API seamlessly, so it must just marshal things as json
[17:00] <jcw4> rick_h_: I think I can figure out the -core side.  Testing out the comms is what I want to attempt
[17:00] <bodie_> i.e., the websocket stuff is already dealt with
[17:01] <jcw4> rick_h_: the main repo is https://github.com/juju/juju-gui right?
[17:01] <rick_h_> jcw4: correct
[17:01] <rick_h_> jcw4: https://github.com/juju/juju-gui/blob/develop/app/store/env/go.js is our client
[17:02] <jcw4> cool
[17:02] <bodie_> rick_h_, what I have in my head is something really minimal: create a new action with some parameters, see what comes back.  we could spec up the calls for that, I think
[17:02] <rick_h_> jcw4: bodie_ and chrome can let you watch all the frames come in/out of the websocket
[17:02] <rick_h_> bodie_: sounds good to me
[17:02] <bodie_> ooo, nice
[17:02] <bodie_> rick_h_, perhaps using jeremy's json-schema ui builder
[17:03] <bodie_> anyway, these are all just ideas as yet -- just thinking about where to take it from here :)
[17:03] <rick_h_> bodie_: well you can try that out. Our big hurdle is that we need to write an integration between jeremy's stuff and our databinding layer we use for monitoring for changes to the environment
[17:03] <rick_h_> bodie_: happy to help move it forward with a proof of concept. Let me know what you need for me to help
[17:04] <rick_h_> bodie_: my plan next week is to ask for some time next cycle to add the actions feature to the gui in the next cycle of ubuntu work
[17:05] <bodie_> rick_h_, okay, awesome.
[20:24] <thumper> morning
[20:25] <perrito666> thumper: morning
[20:25] <urulama> morning, thumper :)
[21:10] <thumper> katco: aargh... I had forgotten to publish my updates. thanks for the review, should have already addressed issues you raised, but I'll take another look
[21:11] <perrito666> thumper: shame, do you not see the green bar reminding you to submit? green is the color of urgent and pending things, you should have seen it
[21:11] <katco> thumper: doh! no worries, good to read through code :)
[21:11] <thumper> ha... didn't look at it after doing the rbt command line stuff
[21:11] <katco> i do that sometimes lol
[22:47] <davecheney> alexisb: ping
[22:48] <alexisb> crap davecheney sorry
[22:48] <alexisb> was on another call and didn't get my calendar ping
[22:48] <alexisb> hopping over
[22:48] <davecheney> s'ok