/srv/irclogs.ubuntu.com/2013/06/07/#juju-dev.txt

fwereade__wallyworld, I may be blithering00:00
wallyworldwe can always optimise if the approach we decide on becomes an issue00:00
fwereade__exactly00:01
wallyworldi guess i was taking the approach of don't create something until needed00:01
wallyworldsince for many/most cases it won't be needed00:01
wallyworldso we are loading up the db since we don't want to deal with potentially dirty docs00:02
fwereade__yeah, but then -- the different field sets of the various documents actually have very distinct usage and observation patterns that lend themselves well to implementation as distinct types00:02
wallyworldoh, i want to use a distinct type - machineContainer00:02
fwereade__wallyworld, absolutely00:02
wallyworldbut i was willing to pay the cost of dealing with potential staleness issues if it meant not having to create unnecessary ones00:03
fwereade__wallyworld, it is perfectly possible to create it lazily, but it's more hassle00:03
fwereade__wallyworld, you just need to do an existence query, and write a transaction based on that result, with an assertion that the condition still holds00:03
fwereade__wallyworld, if it fails, try again, at some point barf with ErrExcessiveContention00:04
fwereade__wallyworld, kinda tedious00:04
wallyworldyeah. would be nice to bake that logic into the framework00:05
wallyworldso optimistic locking becomes easy to use00:05
fwereade__wallyworld, (you also need to assert that the machine's in a state that allows for the desired operation -- eg, you can't add containers when it's not Alive)00:06
fwereade__wallyworld, yeah, I have some ideas floating around in that direction00:06
wallyworldi'm finding in general we are doing a lot of (repeated) boilerplate in juju :-(00:07
fwereade__wallyworld, I think there's fertile ground for consolidation of some of that00:08
fwereade__wallyworld, in a number of places in state00:08
wallyworldyes indeed00:08
fwereade__wallyworld, fwiw the reason machine ids ended up as strings was, er, at least partially informed by my unreasoning rage at having to copy and paste the EntityWatcher code to handle ints instead of strings00:09
wallyworldlol00:10
wallyworldif only go had generics00:10
fwereade__wallyworld, I came across the two things, they did different stuff, I spent a couple of hours analyzing their behaviour in enough detail to determine that the macroscopic effect of the two approaches was in fact equivalent00:10
fwereade__wallyworld, if I get a moment, SettingsWatcher and EnvironConfigWatcher would be candidates for replacement with EntityWatcher as well00:12
wallyworldsounds good00:12
fwereade__wallyworld, but I should sleep00:13
wallyworldyes you should. thanks for the input. appreciated00:13
fwereade__wallyworld, holiday tomorrow for me, enjoy your weekend :)00:13
wallyworldyou too. i have a holiday on monday :-)00:14
fwereade__cool, enjoy00:14
fwereade__gn00:14
wallyworldhight00:14
wallyworldnight00:14
=== thumper-afk is now known as thumper
=== amithkk_ is now known as amithkk
thumpergrrrr02:42
thumperffs02:42
* thumper sighs03:18
thumperworking out how to enfixorate this test03:18
thumperha03:42
thumpercracked it03:42
thumperwallyworld: ping04:12
wallyworldhi04:12
thumperwallyworld: got time for EOW debrief?04:12
wallyworldsure04:12
thumperhttps://plus.google.com/hangouts/_/d0771f5fc18ee2fb1740aefa78b11db732ce9419?hl=en04:13
=== tasdomas_afk is now known as tasdomas
rogpeppemornin' all07:49
fwereade__TheMue, I'm on holiday, but... why are you adding an explicit TxnRevno field that you're not using?09:12
dimiternTheMue: reviewed with some comments09:16
TheMuefwereade__: Otherwise the watcher package doesn't work, it relies on it.09:16
fwereade__dimitern, fwiw the convention is that watchers always send an initial event representing "initial" state09:17
fwereade__dimitern, this applies to the single-entity watchers as much as the ones that send actual state09:17
dimiternfwereade__: ah, ok, wasn't sure this is always the case09:17
fwereade__TheMue, oh really?09:18
TheMuefwereade__: yes, that's why I had chosen a different approach first09:19
dimiternTheMue: TxnRevno is always available on any document in a mgo collection, implicitly09:19
dimiternTheMue: the same as the "_id" field09:20
fwereade__TheMue, dimitern: well, to be explicit, on any doc used by mgo/txn09:20
fwereade__TheMue, dimitern: it is *written* by that package09:20
dimiternyep09:20
TheMuefwereade__: but found it in machine, service, unit09:20
dimiternTheMue: you only need to define it on the document when you need to read it for some reason - like the presence pingers do09:21
fwereade__TheMue, dimitern: and is exported in *some* of our entity documents, for use in single-entity watches09:21
fwereade__TheMue, dimitern: it might also be exported in certain cases to do coarse-grained optimistic locking, in which you assert the precise state of a full document against which you plan to make changes09:21
TheMuefwereade__, dimitern: ah, good to know, will remove it. that led me in the wrong direction *hmpf*09:22
fwereade__TheMue, dimitern: but we have 0 cases currently in play where that is necessary09:22
TheMuefwereade__: because of that, WatchCollection() is indeed a great helper.09:22
TheMuedimitern: btw, i'm almost hoarse (it's getting better), we yesterday won and are now in the finals09:24
dimiternfwereade__: do I need to update the mongo package used by tests, because now in trunk I get this error: http://paste.ubuntu.com/5741365/ (i know that jam and davecheney fixed a test that relied on some retry/delay behavior that changed in mongo or maybe mgo?)09:24
TheMuedimitern: thx for your review. i forgot to destroy varnish too, to test the event09:26
dimiternTheMue: np09:27
fwereade__TheMue, would be good to test coalescing09:36
fwereade__TheMue, destroy varnish and call Cleanup, and check that only one new change is sent after that09:37
fwereade__TheMue, for example09:37
fwereade__dimitern, hmm, yes, I think just update mgo09:37
dimiternfwereade__: ah, cool09:38
dimiternfwereade__: I was just looking to file a bug :)09:38
TheMuefwereade__: yes, will do, thx for the review09:38
fwereade__TheMue, also maybe good to check the Stop behaviour, to observe the channel close, for completeness' sake09:38
TheMuefwereade__: yup, will add it too.09:41
dimiternanother trunk test failure after updating mgo: http://paste.ubuntu.com/5741417/09:51
dimiterni think somebody complained about it on the list09:52
dimiternI filed a bug 1188549 for it and sent mail to the list10:11
dimitern_mup_: wtf?10:11
dimiternhttps://bugs.launchpad.net/juju-core/+bug/118854910:11
dimiternfwereade__: i think we should revert 125710:16
dimiternfwereade__: due to the test failure above, unless it's just me who's seeing it10:17
fwereade__dimitern, +110:17
dimiternTheMue: can you pull trunk tip and run all tests on it please?10:17
dimiternTheMue: to see if you can reproduce the bug above?10:18
dimiternmgz: hey10:18
fwereade__dimitern, hmm, passes for me10:20
dimiternfwereade__: ha.. so it might be only on my machine somehow10:20
dimiternTheMue: please confirm ^^10:20
fwereade__dimitern, oh wait sorry10:20
fwereade__dimitern, nope, I agree, it's broken10:21
fwereade__dimitern, +1 on reverting it, give me 1 min to peer quickly at the code10:21
dimiternfwereade__: sure10:21
TheMuedimitern: oh, sorry, had been on a different screen. one moment, will test10:22
mgzdimitern: hey!10:22
dimiternmgz: so it turned out machine.WatchUnits is already implemented, which simplifies the deployer refactoring task10:25
mgzexcellent10:25
dimiternmgz: just compiling a short plan of steps for what to do about it10:26
fwereade__dimitern, hey, looks kinda like crack to me10:28
fwereade__dimitern, revert away10:28
dimiternfwereade__: ok, will do10:28
dimiternfwereade__: one question about the deployer - you meant to remove it from the unit agent and leave it only in the machine agent, right? becasue i vaguely recall mentioning removing it from the uniter instead10:29
fwereade__dimitern, I meant agent10:30
dimiternfwereade__: yeah, ok10:30
fwereade__dimitern, but there's some special-case shutdown code in the uniter that we can also drop10:30
dimiternfwereade__: i'll look into it10:31
fwereade__dimitern, because exiting the process will no longer take down the relevant Deployer10:31
dimiternfwereade__: ah, sure we can't miss it then, once it's removed from UA10:32
fwereade__dimitern, and I *think* that the changes all go well together: switch the deployer type on machine agent, and delete a bunch of unit agent and uniter code at the same time10:32
dimiternfwereade__: and in the first step, instead of passing a UnitsWatcher, we can just pass a machine id and call st.Machine(id).WatchUnits inside the loop, right?10:33
fwereade__dimitern, that sgtm I think10:34
dimiternfwereade__: cool10:34
TheMuedimitern: some problem here10:38
TheMuedimitern: ouch, not some, same10:39
TheMuedimitern: typo10:39
dimiternTheMue: yeah, will revert it shortly10:39
dimiternTheMue: thanks for checking10:39
TheMuedimitern: yw10:39
dimiternmgz, fwereade__: http://paste.ubuntu.com/5741545/ - some short stepwise plan for the deployer (each step is roughly a CL)10:44
fwereade__dimitern, (1) and (2) must land in the same CL10:45
fwereade__dimitern, otherwise you'll end up with double-deployed subordinate unit agents10:45
fwereade__dimitern, but the uniter shutdown code cleanup can come after, to keep things small and stepwise10:46
fwereade__dimitern, you'll need WatchUnits over that facade as well10:47
mgzhm, is the state between 2 and 3 working?10:48
fwereade__mgz, yes, because 1/2 are changes to state, and to the existing code, without hitting the API at all10:48
mgz...that's not really the right question10:48
mgzwhy is it safe to remove the deployer task from the unit agent, what is happening instead?10:49
fwereade__dimitern, something like `WatchMachineUnits([]MachineId) ([]struct{watcherid, error}, error)`10:50
fwereade__mgz, Machine.WatchUnits reports changes to all a machine's units: both its principals and their subordinates10:51
dimiternfwereade__: oh, sure10:51
mgzah, I see10:51
fwereade__mgz, so the necessary synchronization is that we must remove the unit agent deployer at the same time we change deployer to always watch a machine's complete complement of units10:51
mgzso the previous change makes the machine agent responsible for both, which is why 1+2 need to be done together10:51
fwereade__mgz, hence 1/2 having to go together10:51
fwereade__mgz, exactly10:52
dimiternmgz: dumb question about bzr reverts again - do I do bzr pull on trunk, then create a branch as usual, and do "bzr merge . -r1256..1257" if I need to revert only 1257?10:52
mgz-r1257..125610:52
fwereade__mgz, there's a bit of side work, in uniter/ModeShutdown() (or something like that) that causes the uniter to delay its own death until its subordinates are removed10:52
dimiternmgz: oh, I keep forgetting the order and it always seems weird10:53
mgzmerging the *reverse* of the change, hence the bigger rev first10:53
fwereade__mgz, that can be removed safely once the other is done, but won't hurt anything if it persists after the switch10:53
fwereade__mgz, the effect of removing it will be that the principal unit agent dies a bit earlier, once it has no further duties to perform10:54
fwereade__mgz, and the uniter becomes a smidgen simpler10:54
danilos_whoops, forgot the topic10:58
=== danilos_ is now known as danilos
=== ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: danilos | Bugs: 8 Critical, 71 High - https://bugs.launchpad.net/juju-core/
dimiternfwereade__, mgz: updated the plan http://paste.ubuntu.com/5741571/10:59
fwereade__dimitern, uniter shutdown code can and should be a separate step11:00
fwereade__dimitern, and 3 can be expanded a little into two11:01
fwereade__dimitern, writing a clone of the deployer and unit testing it against api rather than state is a perfectly good step 311:01
dimiternfwereade__: but if that shutdown code relies on having a deployer and it's gone, won't that be a problem?11:01
fwereade__dimitern, step 4 can be dropping the state version and integrating the api version11:01
fwereade__dimitern, its duties are taken over by the deployer in the machine agent11:02
fwereade__dimitern, so a relevant deployer is always available11:02
fwereade__dimitern, it's just running in a different process now11:02
dimiternfwereade__: hmm, ok - i haven't looked at the code, sorry11:03
fwereade__dimitern, it's ModeTerminating11:04
fwereade__dimitern, the whole loop dickery can be straight-up dropped, and we can return ErrTerminateAgent directly11:04
fwereade__dimitern, (once we've set StatusStopped)11:04
mgzI'd like to tackle the testing deployer clone with some pointers11:05
fwereade__mgz, really sorry, I have to go out to lunch now *but* if you ask questions now I might be able to drive-by them11:06
fwereade__mgz, will try to do a quick pass on anything you manage to type in the next 10 mins ;p11:06
fwereade__bbs11:06
mgzno problem, I'm sure dimitern will happily boss me around11:06
fwereade__(after that I'll actually be at lunch)11:06
dimiternmgz: am i that bad? :)11:12
mgzyou're that good!11:12
dimitern:)11:18
dimiternfwereade__, mgz: so this is the (somewhat) final plan http://paste.ubuntu.com/5741603/11:19
dimiternmgz: I'd appreciate an LGTM on this https://codereview.appspot.com/10046047/11:19
mgzlooking at both11:19
dimiternmgz: step 3 might need to be split in 2 parts still - apiserver implementation and client api implementation11:22
fwereade__dimitern, mgz, +1 to that11:22
fwereade__dimitern, mgz: and I'm comfortable with following a general approach of unit tests for the server, integration tests for the client11:23
dimiternfwereade__: +100 on that11:24
fwereade__dimitern, long-term we can get smarter -- more cheap client unit tests, fewer heavyweight integration tests -- but for now it's a sane convention11:25
mgzsure11:26
* fwereade__ really lunching now11:26
dimiterndanilos, mgz: are we g+-ing today or mumble-ing for standup?11:30
danilosdimitern, hanging out I suppose :)11:31
dimiterndanilos: can you send me a link to your CL again please?12:02
danilosdimitern, sure, https://codereview.appspot.com/9876043/12:03
dimiterndanilos: thanks12:03
danilosdimitern, I suspect yours is https://codereview.appspot.com/9937045/?12:03
dimiternfwereade__: in case you're still around at some point, take a look at the https://codereview.appspot.com/9937045/ watchers stuff12:04
dimiterndanilos: yep12:04
dimiternTheMue: you have a second review12:25
dimiterndanilos: reviewed as well12:35
dimiternmgz: so how do you feel about the api stuff lately? does it make sense, and does it help to understand what'll be needed for the deployer?12:37
danilosdimitern, thanks12:37
danilosdimitern, you've got a review as well, so perfect timing :)12:37
mgzyeah, it's much clearer to me now12:37
dimiterndanilos: thanks!12:38
dimiternmgz: only the watcher stuff is a bit muddy, that's why I really wanted to finish it off at both server and client side for the machiner, so it can be used as a model, but not sure if I'd manage12:39
ahasenackhi, do you guys have any tricks to the problem of having several versions of a local charm in the same directory? Like12:40
ahasenack~/charms/precise/{postgresql,postgresql-fix-1234}12:40
dimiterndanilos: blast! that's not right at all, I should've looked12:40
ahasenackjuju won't know which one to deploy, as both are called "postgresql", and the directory name is irrelevant12:40
danilosdimitern, the logging changes?12:40
dimiterndanilos: something went wrong with my long pipeline and trunk merges12:40
dimiterndanilos: yep12:40
dimiterndanilos: i'll fix it and repropose, sorry - the bits that were weird should be there :)12:41
danilosdimitern, right, I assumed as much, but wanted to put it out there just in case12:41
danilosdimitern, heh, cool12:41
fwereade__ahasenack, if they have different revisions, you can use those12:42
fwereade__ahasenack, otherwise, you could change the charm name in the metadata to distinguish them more clearly12:42
fwereade__ahasenack, otherwise you'll get one of the ones in the local repo with the highest revision12:43
ahasenackfwereade__: if I have 10k directories under ~/charms/precise, juju will look into */metadata.yaml, right?12:43
fwereade__ahasenack, yeah12:43
ahasenackfwereade__: ok12:43
fwereade__ahasenack, the local repo has always been a bit questionable, it's hard to fit it into a model that matches the charm store and we didn't really do it very nicely12:44
ahasenackthe thing with changing the charm name is dangerous to forget, you have to eventually revert that before proposing12:44
ahasenackand revert revert when fixing up a review comment, etc12:44
ahasenackok12:44
fwereade__ahasenack, yeah, understood -- I'm very open to suggestions that help us improve the experience12:44
ahasenackusing the directory name would be my first suggestion, there can only be one charm inside it, use whatever you find in *that* metadata.yaml12:45
ahasenackbut sure, there are ramifications I don't see12:45
ahasenackit's how I tried to use it the first time, and fell on my face :)12:45
rogpeppelunch13:06
fwereade__ahasenack, that does sound like a more naturally comprehensible model, indeed, but the main problem is that the charm metadata name is expected to match the charm url name, and charms are requested from the repo by charm url13:18
ahasenackthat is the root of the problem13:19
fwereade__ahasenack, it may be the case that we could transparently fake something up, but I feel the bump-revision business is already somewhat hokey, and adding further layers of hokiness feels potentially, er, suboptimal13:21
ahasenackfwereade__: on that topic, what do you think about making this error: "juju deploy --repository /foo/bar mycharm"13:22
ahasenacki.e., without the local: prefix before mycharm13:22
ahasenackI don't know what it will deploy, honestly. I think it would deploy mycharm from the store13:22
fwereade__ahasenack, yeah, it would, and I see how it's very reasonable to infer that of course a local repository is intended in that case13:23
fwereade__ahasenack, what's your opinion on `juju deploy --repository /foo/bar cs:somecharm`? if we make the `local:` inference in the case you describe, this one feels like it becomes ill-formed, because a local repo cannot contain cs: charms13:28
ahasenackfwereade__: that command should fail13:29
ahasenackfwereade__: I think --repository is all that is needed to infer where the charm is13:29
ahasenackfwereade__: what are the possible values for --repository?13:29
fwereade__ahasenack, always interpreted as a path at the moment13:29
fwereade__ahasenack, probably never used otherwise13:29
ahasenackfwereade__: are "juju deploy mycharm" and "juju deploy cs:mycharm" the same?13:30
fwereade__ahasenack, yeah13:31
ahasenackfwereade__: what about "juju deploy local:mycharm", what does it do?13:31
fwereade__ahasenack, the inference process is as follows:13:31
fwereade__ahasenack, infer a url from the supplied string, using schema cs: (if not specified) and series default-series (if not specified)13:32
fwereade__ahasenack, infer a repo from the schema and the repo path, take from $JUJU_REPOSITORY if --repository is not specified13:32
fwereade__ahasenack, ask that repo about the inferred url; if the url was unrevisioned, pick the latest revision reported by the repo and add it to the url13:33
fwereade__ahasenack, use the charm thus uniquely identified by that url13:33
fwereade__ahasenack, (uniquely... yes for the charm store, not so much for local repos)13:34
=== BradCrittenden is now known as bac
ahasenackfwereade__: sounds complicated13:34
ahasenackfwereade__: I'm confused why we have two possible "sources" in the same command line13:34
ahasenackfwereade__: --repository, and the prefix for the charm name13:34
ahasenackfwereade__: bzr doesn't have any of that, for example, and I think we were at some point following the bzr url conventions13:34
ahasenackit's bzr branch /foo /bar13:35
ahasenackbzr branch lp:foo13:35
ahasenackbzr branch lp:~oioi/foo/bar13:35
ahasenackand so on13:35
fwereade__ahasenack, yeah, it's a lot of effort just to make `juju deploy wordpress` work13:35
ahasenackworks with both local and remote13:35
ahasenackbut, now we have to worry about backwards compatibility13:35
ahasenackso that's why I suggested to make the invalid combinations an error13:36
fwereade__ahasenack, yeah, I am cautiously with you on this13:36
fwereade__ahasenack, can I ask you to mail the juju list about it, just in case anyone's using it in some important way, before we agree it's definitely a bug and can be changed as you suggest?13:37
ahasenacksure13:37
fwereade__ahasenack, (fwiw I presume you don't ever actually put bundled charms in your local repos?)13:38
fwereade__ahasenack, hell even if you do we can make it sane that way13:38
ahasenackfwereade__: what is a bundled charm?13:39
fwereade__ahasenack, a zip file basically -- just a serialized charm dir13:39
ahasenacknever heard of it, so no :)13:39
fwereade__ahasenack, the format you download charms from the store in vs the format you edit and develop them in13:39
ahasenackfwereade__: didn't even know there was a different way to grab charms other than bzr branch13:40
fwereade__ahasenack, there's not much point to it AFAICT but it is a public format13:41
fwereade__ahasenack, I think it's important to maintain the matching-names guarantee, though13:42
fwereade__ahasenack, so making a local repo that behaves no less insanely will be a little bit fiddly13:44
fwereade__ahasenack, almost certainly worth it though13:44
ahasenackfwereade__: I don't know, my personal feeling is that it's very restrictive and surprising13:45
ahasenackbut any change here could have far reaching consequences13:45
ahasenackso I would just start with making the obvious incorrect combinations error out13:45
ahasenackmaybe someone will jump in and explain what was the reasoning behind --repository and local:charm13:45
=== wedgwood_away is now known as wedgwood
dimiternfwereade__: can you have another look at https://codereview.appspot.com/9937045/ please?13:57
dimiternniemeyer: hey13:57
dimiternniemeyer: i think _mup_ has been misbehaving again and not responding to bug #13:58
dimiternrogpeppe: I don't want to get into another 3h argument about the watchers, but I think the idea about how watchers have to be implemented the same way in the server-side API is wrong13:59
rogpeppedimitern: how do you mean?13:59
dimiternrogpeppe: we do need to do coalescing at client side, so that we don't drop events14:00
dimiternrogpeppe: and as well as we do that, we can handle the initial events there14:00
rogpeppedimitern: have you got a link to the kanban hangout please? my browser is refusing to load google calendar14:01
dimiternrogpeppe: https://plus.google.com/hangouts/_/539f4239bf2fd8f454b789d64cd7307166bc908314:02
rogpeppedimitern: i think that initial events and client-side coalescing are two independent issues14:02
rogpeppedimitern: thanks14:02
niemeyerdimitern: It was down14:07
fwereade__rogpeppe, dimitern: fwiw I would probably accept rog's flip-flop proposal for the client-side watcher wrappers14:08
fwereade__rogpeppe, dimitern: we can build in coalescence later if the slightly-surprising event delivery burstiness is a problem for client code14:08
rogpeppefwereade__: +114:09
rogpeppefwereade__: that would make client-side watchers *much* simpler too14:09
fwereade__rogpeppe, dimitern: if we've done them all right it won't be14:09
dimiternfwereade__: what's the flip-flop stuff?14:09
fwereade__rogpeppe, dimitern: I'm just a little twitchy about it because, well, there's bound to be somewhere where someone has made unwarranted assumptions about watcher behaviour14:10
fwereade__dimitern, always have one of in and out be nil14:10
fwereade__dimitern, so always deliver events with exactly the grouping they're initially received with14:10
fwereade__dimitern, this should not matter14:11
fwereade__dimitern, and probably doesn't14:11
dimiternfwereade__: but how about the initial Changes call at server-side14:12
fwereade__dimitern, I'm comfortable with whatever convention we pick14:12
fwereade__dimitern, if the convention is that every server-side Watch method pulls the first event, and returns anything interesting in its results, and that every client-side watcher always deliver those results in the first event on its Changes channel, then I'm happy14:14
fwereade__dimitern, rogpeppe: since we've got the current Next behaviour we may as well try to make use of it -- some use cases, like uniter.Filter, still abuse it, but we can worry about that when it becomes a problem14:15
dimiternrogpeppe, fwereade__: how about not calling changes at server-side, because we cannot accommodate all watchers (i.e. the lifecycle one has non-empty changes)14:15
rogpeppedimitern: i'm not sure what you mean there14:16
fwereade__dimitern, the first event on a watcher is always delivered as soon as possible, and represents the base state relative to which change notifications will be delivered14:16
rogpeppefwereade__: i still think that sending the initial event back with the Watch request is a reasonable thing to do14:16
fwereade__rogpeppe, me too14:16
rogpeppes/the Watch request/the response to the Watch request/14:16
rogpeppefwereade__: oh, good14:17
rogpeppefwereade__: i thought that was considered a problem14:17
fwereade__rogpeppe, I just hadn't thought of it14:17
dimiternrogpeppe: we discussed client-side watchers to return struct{}{} instead of changes, and then after you're notified, you can get the actual changes from the api separately14:17
fwereade__rogpeppe, but it makes perfect sense, especially when we've got flip-flop client-side watchers14:17
rogpeppedimitern: let's not do that14:17
fwereade__rogpeppe, we don't even need to call Next until the initial result has actually been handed off to the watcher's client14:18
rogpeppedimitern: let's just do for {data := Next(); c <- data}14:18
rogpeppefwereade__: exactly14:18
dimiternrogpeppe: so then instead, we do proper coalescing on client-side14:18
fwereade__dimitern, depends on the watcher14:18
rogpeppedimitern: no need14:18
fwereade__dimitern, lists of documents changing state need to be lists of ids14:18
dimiternrogpeppe: you're arguing events won't be dropped?14:18
rogpeppedimitern: yes14:18
rogpeppedimitern: not with the above for loop14:19
dimiternrogpeppe: i don't see how you can guarantee that14:19
rogpeppedimitern: how can an event be dropped?14:19
dimiternrogpeppe: with the current client-side watchers you do14:19
fwereade__dimitern, we never read a new event to drop from the common watcher until we've delivered the previous event to the client14:19
rogpeppedimitern: yes, but they're being too clever14:19
rogpeppedimitern: we're gonna axe all that code14:19
rogpeppedimitern: and go *much* simpler14:19
dimiternrogpeppe: ok then14:19
rogpeppedimitern: the server side does coalescing already.14:20
dimiternrogpeppe: as long as we keep to the standards we agreed (please) about how server-side should have bulk ops and client-side only if needed14:20
rogpeppedimitern: i have a possible plan to add universal bulk ops to the rpc package14:20
fwereade__rogpeppe, that sounds pretty frickin' sweet to me14:21
rogpeppedimitern: that would mean that a) you'd only need to implement bulk ops when you could do so meaningfully and b) you'd get benefit from using bulk ops even if the server methods were implemented as singletons14:21
dimiternrogpeppe: well, as long as it provides the same protocol with less code and better readability, i'm +114:21
rogpeppefwereade__: pretty much14:21
rogpeppefwereade__: the idea is you'd send an rpc request with Ids: ["id1", "id2"] rather than Id: "id1"14:22
dimiternrogpeppe: i don't get 1) and 2)14:22
dimiternrogpeppe: seems too smart14:22
rogpeppefwereade__: then the rpc server would try to find a specific bulk request handler for the type, and fall back to single methods (concurrently) if there's none14:23
dimiternrogpeppe: i have to see it in other words :)14:23
rogpeppedimitern: i think it might work ok14:23
dimiternrogpeppe: where's the concurrency happening? server-side only?14:23
fwereade__rogpeppe, hmm, honestly, I think it's too speculative14:23
rogpeppedimitern: yes14:23
rogpeppefwereade__: it means quite a bit less work implementing server methods14:24
rogpeppefwereade__: for a relatively small piece of work in the rpc package14:24
rogpeppefwereade__: i *think* the trade-off is worth it14:24
dimiternrogpeppe: will it also involve splitting the rpc package in more manageable chunks, which could be unit-tested separately?14:25
rogpeppedimitern: maybe.14:26
rogpeppedimitern: the rpc package isn't exactly large though14:26
dimiternrogpeppe: I read that as "no" :)14:26
dimiternrogpeppe: but anyway, we can do that later, no worries14:26
rogpeppedimitern: what do you think is unmanageable about the rpc package?14:26
rogpeppedimitern: i think it's tested reasonably well, no?14:26
dimiternrogpeppe: i'm not getting back into the argument big vs. small packages14:26
dimitern:)14:26
rogpeppedimitern: it's only 800 lines of code14:26
fwereade__rogpeppe, I would love to see the authentication behaviour split off14:27
rogpeppefwereade__: that's not part of the rpc package14:27
fwereade__rogpeppe, but I don't wish to force the issue now14:27
dimiternrogpeppe: well, once we implement the separate login step in the authentication, which is not part of the rpc interface, we'll need to split parts of it related to marshalling to reuse them14:27
fwereade__rogpeppe, ofc, sorry, I don't have a quarrel with the rpc package itself14:27
rogpeppedimitern: i do actually have a CL somewhere that splits out the method introspection code so it can be used independently.14:28
dimiternrogpeppe: great to hear14:28
rogpeppedimitern: those bits are already in a separate package14:28
dimiternsorry bbiab14:28
rogpeppedimitern: the rpc package doesn't do any marshalling or unmarshalling14:28
rogpeppedimitern: rpc/jsoncodec does that14:28
fwereade__rogpeppe, anyway, please tell me a smidgen more: how are you planning to register the handlers?14:28
rogpeppefwereade__: which handlers?14:29
fwereade__rogpeppe,  for bulk vs single operations14:29
rogpeppefwereade__: my plan was to treat some names specially - for instance types with a "Vector" suffix.14:30
fwereade__rogpeppe, you are talking about some sort of magic which I am prepared to consider, possibly, to be worth the effort, but I'd like to understand exactly what's involved :)14:30
rogpeppefwereade__: then methods on those would have to conform to the signature foo(args someStruct) (ret []resultStruct, errs []error)14:30
rogpeppefwereade__: where the params and result struct would need to match the non-bulk form of the operation if that's provided14:31
fwereade__rogpeppe, ok, so, that's how we implement a bulk API method in terms of single-entity methods14:31
fwereade__rogpeppe, where do we get the implementation for the bulk form? what happens if both single and bulk exist?14:32
rogpeppefwereade__: the bulk API method doesn't need to call the single-entity methods14:32
rogpeppefwereade__: for operations on a single id, the single form is chosen by preference, falling back to bulk form. for operations on many ids, the bulk form is chosen by preference, falling back to the single form called concurrently on all the ids.14:33
rogpeppefwereade__: we might want to call the bulk form preferentially in all cases though14:33
fwereade__rogpeppe, ok, that sounds nice tbh, I can imagine optimized single-use variants being useful14:34
rogpeppefwereade__: i'm not sure. the logic in the rpc package is not hard though (a couple of map lookups)14:34
rogpeppefwereade__: cool14:34
fwereade__rogpeppe, but it lives or dies in my mind by clarity and comprehensibility of exactly what API methods will be provided by a given facade type14:34
rogpeppefwereade__: one problem i have with the current facade stuff is that it's only one type14:35
rogpeppefwereade__: but resources (watchers etc) don't fit within that.14:35
fwereade__rogpeppe, that one type is equivalent to State -- it's just a cut-down variant tuned to the client's needs14:35
rogpeppefwereade__: that one type doesn't include the watcher Next methods, for example14:36
rogpeppefwereade__: so it doesn't actually include the entire API surface area for the agent14:36
rogpeppefwereade__: one possibility i've been toying with is the idea of allowing an rpc server to dynamically change the current object being served.14:37
fwereade__rogpeppe, suggestion, half baked: Watcher facade, with Next(watcherId) and NextIds(watcherId) methods?14:37
fwereade__rogpeppe, such that everything authenticated can access that facade14:39
rogpeppefwereade__: not sure. i'd prefer to go more the direction of allowing a given agent to access a set of types rather than just one type14:39
fwereade__rogpeppe, the RegisterResource stuff on the server STM to be hinting in that direction -- there are common patterns that apply across all kinds of watcher14:40
rogpeppefwereade__: so the API served for the machine agent, say, might have just Machine and EntityWatcher14:40
rogpeppefwereade__: there's definitely something a little nicer lurking somewhere here14:40
fwereade__rogpeppe, yeah, something like that -- I'd been thinking of Watcher as a chunk of common business-level functionality14:41
rogpeppefwereade__: if you can dynamically change the object being served, then the first facade presented could have *only* the Admin type (with its Login method); when successfully logged in, we could choose a facade based on who's logged in and change to serving that14:41
rogpeppefwereade__: i can't immediately see any particular difficulty in doing that actually.14:42
fwereade__rogpeppe, ok, the *concept* has my cautious approval14:43
fwereade__rogpeppe, but my absolute current priority is to get us running non-manager machine agents with API connections only, and no access to state14:43
rogpeppefwereade__: agreed14:44
fwereade__rogpeppe, and I don't think that plan is a major win wrt that specific goal, even if it is in other areas14:45
rogpeppefwereade__: correct14:45
fwereade__rogpeppe, and don't take that Watcher facade idea as gospel, I think it may cause as many problems as it solves14:46
rogpeppefwereade__: do i take it then that you're ok with having a facade per authenticated entity rather than a facade per worker?14:46
rogpeppefwereade__: because that's what the above plan would need14:46
fwereade__rogpeppe, I don't think I said that14:47
rogpeppefwereade__: and i think it makes sense from a security point of view14:47
rogpeppefwereade__: if you were changing the facade based on login, why have a different facade for each worker?14:47
fwereade__rogpeppe, because I think it's a worthy goal to namespace the methods we expose according to the patterns in which we expect them to be used14:48
rogpeppefwereade__: even if it's loads of code duplication?14:49
fwereade__rogpeppe, this does not mean that there will never be facades that are so useful that practically everyone uses them14:49
rogpeppefwereade__: it just seems a weird way of making an API to me.14:49
rogpeppefwereade__: is there precedent for this kind of thing?14:50
rogpeppefwereade__: usually my goal with APIs is to make something as small and neat as possible so there's nice easily analysable surface area14:50
fwereade__rogpeppe, um, grouping code according to common purpose is not without precedent14:51
rogpeppefwereade__: i was talking about actual APIs, not just grouping code14:51
fwereade__rogpeppe, yes: by exposing the minimum information necessary to accomplish business-level tasks, you can analyse the impact of given chunks of the API in isolation14:51
rogpeppefwereade__: but from a security point of view, you're looking at the union of all the available facades, no?14:52
fwereade__rogpeppe, but I don't want Machines as a global concept, because different chunks of juju are interested in different, and not-always-overlapping, subsets of Machine functionality14:52
rogpeppefwereade__: i'm not suggesting Machines as a global concept here14:53
rogpeppefwereade__: i'm suggesting one Machine for the machine agent, and one for the unit agent (if it needs to look at a machine of course)14:53
rogpeppefwereade__: rather than one set of things for each worker14:54
rogpeppefwereade__: and one for the agent itself14:54
fwereade__rogpeppe, ok, but different tasks within the machine agent depend on different subsets of the functionality and do them for different reasons14:54
fwereade__rogpeppe, machiner has no use for instance id, for example14:56
rogpeppefwereade__: i agree. but i think unifying them into one thing at the server side will a) be much less code, b) not be too hard and c) make things easier to analyse from a security p.o.v.14:57
fwereade__rogpeppe, and if we do that it becomes a real hassle to migrate a task from one agent kind to another14:57
fwereade__rogpeppe, one conceptual entity that has facades of nicely-grouped methods hanging off it, fine14:58
fwereade__rogpeppe, are we in violent agreement here?14:58
rogpeppefwereade__: i'm not sure. i don't *think* so, though i would *really* like to be agreeing with someone for once. i feel like i've been disagreeing with everything recently, and i don't like that.15:00
rogpeppefwereade__: i take your point about it being a hassle to migrate a task from one agent to another15:01
fwereade__rogpeppe, yeah, I think it's the violent collision of two very different sets of experience15:01
rogpeppefwereade__: but i'm wondering how important is it that the security is so perfectly fine grained15:01
dimiternrogpeppe: the bulk ops signature has to be method(args someStructWithIds) (resultsAsArrayOfStructsWithErrorAndResultEach, error)15:02
dimiternrogpeppe: not "<rogpeppe> fwereade__: then methods on those would have to conform to the signature foo(args someStruct) (ret []resultStruct, errs []error)"15:02
rogpeppefwereade__: i'm thinking particularly of information propagation here. what possible harm could happen if the machiner knows its instance id, for example?15:02
rogpeppedimitern: why so?15:02
dimiternrogpeppe: we need errors and results for single ops in an array15:02
rogpeppedimitern: why so?15:03
dimiternrogpeppe: because it helps debugging and matching error to result15:03
rogpeppedimitern: if both slices have the same number of elements, i'm not sure i see the difficulty15:03
dimiternrogpeppe: no, i tried that initially, wasn't accepted15:03
dimiternrogpeppe: and i see why now15:03
fwereade__rogpeppe, (1) namespacing is useful for people as well and (2) I would prefer to adopt a general policy of not handing out information for which there is no demonstrated need15:03
dimiternrogpeppe: the debugging argument is a big one15:03
rogpeppedimitern: how is it easier to debug?15:04
dimiternrogpeppe: you have result+error for each op15:04
fwereade__dimitern, he's talking about some magic dispatch to automatically execute bulk requests as a bunch of single ops, which single ops would have that signature15:04
dimiternrogpeppe: that keeps related things together, rather than having them spread out15:04
rogpeppefwereade__: i'm concerned that at every step we're writing Big Software where we don't actually need to. we don't have to write all this code.15:04
fwereade__dimitern, it might be nice but I don't think it's justifiable in light of the current goals15:05
dimiternfwereade__: no, aiui it's about having separate errors and results15:05
fwereade__dimitern, ah, hmm, yes, I misread that line15:05
dimiternrogpeppe: writing code is not a bad thing15:05
dimitern:)15:05
rogpeppedimitern: i can see why the current scheme was chosen, but there are good reasons to choose separate slices if we're automating the bulk stuff15:05
rogpeppedimitern: yes it is :-)15:06
rogpeppedimitern: the more code the worse15:06
dimiternrogpeppe: writing good code is better than writing small amount of too-smart-to-reason-with-code15:06
rogpeppedimitern: i'd like to write a small amount of easy-to-reason-about code15:06
dimiternrogpeppe: but you're rejecting good points as irrelevant15:07
dimiternrogpeppe: for the sake of code simplification15:07
rogpeppedimitern: it's always a trade-off15:07
dimiternrogpeppe: some trade-offs i'm not willing to take15:07
dimitern:)15:07
rogpeppedimitern: perfection is the enemy of the good15:07
fwereade__rogpeppe, so, the magic technique for doing so, that many of us have found helpful in our careers to date, is to resist the temptation to lump unrelated concerns together15:07
fwereade__rogpeppe, frequently this results in more lines of code15:08
rogpeppefwereade__: it all depends on your ontology of concerns :-)15:08
dimiternrogpeppe: aiming for perfection and missing the bigger picture is a bad thing15:08
fwereade__rogpeppe, but a lower cognitive load when trying to reason about the system, because the properties of individual components can be isolated and reasoned about in isolation15:08
TheMuefwereade__: +1, imho readability, and thus also maintainability, is a more important factor than size (keeping in mind that this doesn't mean huge is more maintainable)15:09
fwereade__rogpeppe, I'd rather have, say, two 60-line types with one clear point of contact between them than one 100-line type15:09
dimiternrogpeppe: we don't just change the interface without need for the sake of simpler code, esp. wrt security related stuff15:09
=== tasdomas is now known as tasdomas_afk
rogpeppefwereade__: i would really like one API that asymptotes and rarely needs adjustment. what we're reaching towards is an API that is ever-growing, one piece for every new kind of client.15:09
fwereade__rogpeppe, one piece for every distinct function of a juju system, surely?15:10
dimiternrogpeppe: we're aiming towards specialized api that works for our needs, and grows with them, not something universally simple and practically complicated to use because of its simplicity15:11
fwereade__rogpeppe, our understanding of the best grouping will continue to evolve15:11
fwereade__rogpeppe, for now, I want to expose the absolute minimum necessary to perform a single task15:11
dimiternrogpeppe: ..and to secure, understand, take as a whole..15:11
dpb1`Hi all -- any way to get more data from this bootstrap failure: http://paste.ubuntu.com/5742182/?  What url it's trying?  what data it sees?15:12
fwereade__dpb1`, there is a mail describing it on the lists, from wallyworld15:13
dpb1`thx, I'll check15:14
fwereade__dpb1`, you need to generate a simplestreams representation of your custom image, with `juju image-metadata`15:14
fwereade__dpb1`, and upload it to your bucket15:14
fwereade__dpb1`, the output from image-metadata tells you what to do in more detail15:14
rogpeppesorry, was afk15:15
rogpeppecar insurance hassles15:15
dpb1`very interesting.  thanks fwereade__ I'll poke around with it, that gives me something to go on, thanks15:17
fwereade__rogpeppe, the goal is that we get what we need for machiner and, bam, it's done15:17
rogpeppefwereade__: i *thought* it was one piece for "the state", with appropriate guards to make sure that agents can't egregiously escape their bounds of responsibility15:18
fwereade__rogpeppe, then the other tasks, one afterthe other, and switching them over to use API connections as soon as that precise set of functionality has landed15:18
rogpeppefwereade__: my goal was we get "the state" and bam, it's done15:18
rogpeppefwereade__: currently, what facades are we planning?15:18
rogpeppefwereade__: perhaps we should actually design (in advance) what each facade should contain15:19
rogpeppefwereade__: machiner, uniter, upgrader, machine agent, unit agent, provisioner, firewaller.15:19
rogpeppefwereade__: resumer?15:19
rogpeppefwereade__: cleanuper?15:20
dimiternrogpeppe: I have a list of all state calls that are needed15:20
rogpeppedimitern: that's no longer sufficient15:20
TheMuerogpeppe: s/cleanuper/cleaner/15:21
dimiternrogpeppe: that's exactly what we need because they are partitioned by who uses them15:21
dimiternrogpeppe: in addition to all these you listed we need two more: bootstrap stuff and agent base15:21
rogpeppedimitern: bootstrap stuff doesn't count - there's no API at that point15:22
rogpeppedimitern: agent base?15:22
dimiternrogpeppe: stuff in cmd/jujud/agent.go15:24
rogpeppedimitern: i don't think that can have its own facade15:25
dimiternrogpeppe: essentially: entity.Tag(), entity.Set(Mongo)Password15:25
dimiternrogpeppe: probably not15:25
dimiternrogpeppe: but it needs to access these15:26
rogpeppedimitern: i think that has to count as part of the machine-agent and unit-agent facades15:26
dimiternrogpeppe: if we can move the code in openState that needs to set the passwords in both agents, yes15:26
rogpeppedimitern: i don't understand that15:27
dimiternrogpeppe: well, if in openState in agent.go we create either the MA or UA facade, then the code that sets the password can be generic (with interfaces) and still be in one place15:28
rogpeppedimitern: that's what i thought. no need to move the code then, no?15:28
dimiternrogpeppe: I think now it's like that, except for getting the specific facade instance for the agent15:28
dimiternrogpeppe: yeah, sorry, just thinking aloud15:29
rogpeppedimitern: the facade instance is got before calling openState15:29
dimiternrogpeppe: perfect15:29
dimiternrogpeppe: we just have to make sure both UA and MA facades provide these methods that openState needs15:30
rogpeppedimitern: yes15:30
dimiterni'm sorry it seems i won't be able to finish the machiner.Watch CL today15:32
dimiternmaybe someone can take it over?15:33
dimiterni need some rest - i slept like 3h since malta15:33
dimiternhappy weekend everyone and see you on the 26th15:34
fwereade__dimitern, can you plausibly hand over to mgz?15:34
fwereade__dimitern, happy weekend15:34
dimiternfwereade__: i'll write him a mail15:34
fwereade__dimitern, cheers15:34
fwereade__rogpeppe, dimitern: just bear in mind the possibility that there may be a completely common "agent" facade trying to get out15:35
fwereade__anyway I am not really working today, I tell myself15:36
* fwereade__ bows out for a bit15:36
TheMueCarmen calls me for BBQ, have a nice weekend.15:58
rogpepperight, eod reached. have a fine weekend everyone!18:02
dpb1Hi all -- http://paste.ubuntu.com/5742835/ <-- another openstack json unmarshalling problem.  Something easy to fix?  Where should I start looking?19:29
=== BradCrittenden is now known as bac____
fwereade__dpb1, I would search for "security_group" in launchpad.net/goose21:37
fwereade__dpb1, I would guess that it's a version of openstack that returns a different id type21:38
dpb1fwereade__: Hi, I have contacted mgz about it already.  I have a bug and proposed patch that gets it working.  It's a problem of grizzly with quantum.21:38
fwereade__dpb1, cool21:38
dpb1https://bugs.launchpad.net/goose/+bug/118881521:38
_mup_Bug #1188815: security group on quantum/grizzly is uuid, goose chokes on it <Go OpenStack Exchange:Confirmed> <juju-core:Confirmed> <https://launchpad.net/bugs/1188815>21:38
fwereade__danilos, ping22:42
=== wedgwood is now known as wedgwood_away
