=== arosales_ is now known as arosales
[02:00] wallyworld_: THE amount of things my pr breaks that I had not noticed :p
[02:01] perrito666: wot? be with you soon, in a meeting
[02:20] perrito666: so what's broken?
[04:05] axw: wallyworld_: PTAL http://reviews.vapour.ws/r/796/
[04:05] soon, knee deep in writing tests :-(
=== ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: None
[04:13] wallyworld_: thnx :D
[04:32] anastasiamac: reviewed
[04:32] axw: thnx :) will look
[04:37] axw: thnx for review!
[04:37] np
[04:37] axw: from a brief look at the comments it's not far off :)
[04:38] anastasiamac: yes I think so, a few key assumptions to fix but mostly looking good
[04:38] r u happy for me to change unit name/storage name pair to storage id?
[04:38] or... shall we allow show to be a version of list for a unit/storage pair?
[04:38] anastasiamac: not sure, it's not exactly a very friendly interface
[04:38] anastasiamac: I guess it does double up on list if that's the case
[04:39] anastasiamac: list should allow you to filter on unit+storage
[04:39] anastasiamac: maybe just have it take storage ID for now
[04:39] axw: k.... :D
[04:39] axw: id or tag?...
[04:40] anastasiamac: ID at the user level, tag at the API level
[04:40] i.e. convert user-input ID to a StorageTag
[04:40] and pass that to the backend
[04:40] axw: awesome! will do
[05:00] wallyworld_ or axw: tiny review please: http://reviews.vapour.ws/r/797/
[05:01] looking
[05:01] axw: this fixes something that bit me while writing tests around the new envWorkerManager
[05:03] menn0: reviewed
[05:04] axw: cheers (agree with the comment. will fix)
[05:04] cool, nps
[05:42] axw: made the changes...
[05:43] looking
[05:53] anastasiamac: replied
[05:53] axw: thnx :D
[05:55] axw: m so glad to know that I am sooo close :D
[05:55] :)
[05:55] axw: i'll address them now-ish... kids r coming home soon :(
[05:55] no worries
[07:06] axw: it's all very rushed as i have to disappear for a while, but here's the storage-get implementation. the uniter worker factory tests use jujuconnsuite so the implementation works end-to-end http://reviews.vapour.ws/r/798/
[07:07] wallyworld_: cool, I'm just trying to get storage-attached working atm, will take a look in a bit
[07:18] no hurry as i'm gone for a few hours
[07:38] axw: sorry to bug you again. another micro-review pls? http://reviews.vapour.ws/r/799/
[07:38] sure
[07:40] menn0: LGTM
[07:41] axw: tyvm
=== fuzzy_ is now known as Fuzai
[08:20] woo
[08:20] wallyworld_: 2015-01-23 08:19:15 INFO juju.worker.uniter.context runner.go:149 skipped "data-storage-attached" hook (not implemented)
[08:47] fwereade: can you please cast your eyes over http://reviews.vapour.ws/r/800/, specifically the changes in the worker/uniter package. I mostly want feedback on the refactoring of HookSource/HookSender
[08:48] axw, certainly
[08:50] axw, from the summary, that's almost completely awesome, because I was planning to extract them myself at some point :)
[08:50] axw, but I'll take a look at the details ;)
[08:50] fwereade: great, hopefully the implementation lives up to it ;)
[08:54] hey axw
[08:54] heya dimitern
[08:55] my new lxc.updateContainerConfig helper landed just now, please have a look when you can, as I designed it to be useful in general (not just networking) and it should also help with configuring storage settings for lxc
[08:56] dimitern: awesome, thanks. I'll take a look now
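A minimal sketch of the ID-to-tag conversion agreed at [04:40] above; the "name/number" ID format and the "storage-" tag prefix are assumptions for illustration, not the real names package API:

```go
package main

import (
	"fmt"
	"strings"
)

// parseStorageID validates a user-facing storage ID of the assumed
// form "<storagename>/<number>" (e.g. "data/0") and converts it to
// the tag form the API layer would use. Both the ID format and the
// "storage-" tag prefix are illustrative assumptions.
func parseStorageID(id string) (tag string, err error) {
	parts := strings.SplitN(id, "/", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", fmt.Errorf("invalid storage ID %q", id)
	}
	// This sketch derives the tag by joining the parts with "-",
	// mirroring how unit tags are derived from unit names.
	return "storage-" + parts[0] + "-" + parts[1], nil
}

func main() {
	tag, err := parseStorageID("data/0")
	if err != nil {
		panic(err)
	}
	fmt.Println(tag) // storage-data-0
}
```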
[08:56] axw, cheers
[08:57] axw, remind me, when exactly do the storage instances get put on the unit?
[08:58] axw, after the machine's set them up and everything's in place?
[08:58] fwereade: storage instances get created and assigned to a unit when the unit is created (the watcher logic is wrong atm)
[08:58] fwereade: it should be checking to see that the storage instances are provisioned
[08:59] morning
[08:59] ahoy
[09:00] TheMue, o/
[09:00] ;)
[09:00] hey TheMue, you've got a review btw :)
[09:00] axw, is the final intent that the unit only react once they are provisioned? or will the unit be responsible for any part of that?
[09:01] dimitern: ah, great, thx. will take a look
[09:01] TheMue, I haven't said :ship it: as I'd like to have another look when you implement the suggestions
[09:01] fwereade: the *uniter* will only react to provisioned ones, but there will be another worker in the unit agent that takes care of provisioning *some* storage instances (e.g. tmpfs)
[09:02] dimitern: sure
[09:02] axw, would it be reasonable to have that on the machine agent instead?
[09:02] fwereade: there will also be a storage provisioner on the state server (for IaaS volumes), and one on the machine agent (for disks)
[09:02] fwereade: not for filesystem-type storage instances I think, since they're owned by the unit
[09:03] axw, hm, I think something may have just crystallized for me
[09:03] axw, it feels like there's maybe a risk of repeating the mistakes we made with subordinates?
[09:03] fwereade: (btw, I'm not at all happy with the storage model atm, if you have suggestions I'm very happy to hear them)
[09:04] fwereade: I'm not sure I know what those mistakes are
[09:04] axw, so the big one is making subordinates specific to units, not to machines
[09:05] fwereade: ok. so, in fact, storage instances can be owned by services or units; this could be extended to machines
[09:05] * dimitern is sick of typing "juju" instead of "git"
[09:05] it would be nice to be able to do "juju commit -m "Changes after review"" or "juju checkout master"
[09:05] fwereade: in which case I can look at putting all provisioning on the machine agent
[09:05] axw, it means the relation scoping rules are weird, and if you have two colocated units of X with subordinate Y you *also* get two colocated Ys
[09:05] :)
[09:06] fwereade: for storage, I think that would be expected (placement is disabled for storage right now, though)
[09:07] axw, agreed on that front
[09:07] axw, ok, from another angle, I think my primary motivation
[09:08] axw, is that having more workers in the uniter makes it harder to unify the agents
[09:08] fwereade: yep, fair enough. I think I can deal with having it on the machine agent
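A minimal sketch of the watcher fix axw describes at [08:58]: only surface storage instances that have actually been provisioned. The type and field names here are hypothetical stand-ins for the state entities discussed:

```go
package main

import "fmt"

// storageInstance is a stand-in for the state entity discussed
// above; Provisioned is the flag the watcher logic should be
// checking before notifying the uniter.
type storageInstance struct {
	ID          string
	Provisioned bool
}

// filterProvisioned is a hypothetical version of the watcher filter:
// the uniter only reacts to storage instances that are provisioned,
// rather than to every instance assigned at unit creation time.
func filterProvisioned(changes []storageInstance) []string {
	var ready []string
	for _, si := range changes {
		if si.Provisioned {
			ready = append(ready, si.ID)
		}
	}
	return ready
}

func main() {
	fmt.Println(filterProvisioned([]storageInstance{
		{ID: "data/0", Provisioned: true},
		{ID: "logs/1", Provisioned: false},
	})) // [data/0]
}
```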
[09:09] axw, there's a strong dose of "the physical disks are definitely the machine's responsibility, and it's weird to have certain storage types handled elsewhere"
[09:10] axw, but to try to explore the original feeling
[09:10] axw, machines have units
[09:10] axw, units have subordinate units
[09:11] axw, all of those things run together on the same machine
[09:12] axw, so there's a mismatch
[09:12] axw, it led to a horrible watcher
[09:12] axw, because the machine wants to know about the complete set of units that need to be deployed
[09:13] axw, and we need to look all over the db to find them ;p
[09:13] ah, I see
[09:14] axw, I'm thinking along the lines of "creating a unit/service adds a record for appropriate storage, somewhere it'll be seen by a suitable provisioner"
[09:14] axw, similarly, adding storage later does the same
[09:14] axw, neither of these things needs to touch the unit/service docs
[09:14] fwereade: so we have a "storageinstances" collection that holds them
[09:15] fwereade: the docs have an "owner" field which is either a unit or service tag
[09:15] that's currently how the watcher filters
[09:16] fwereade: possibly I'm doing something dumb, but they do touch the unit/service to add a reference (see my latest email)
[09:16] axw, right, indeed, that is necessary
[09:16] axw, yeah, I've been thinking about that and have yet to usefully formulate a response, but part of that is why I'm talking to you now
[09:16] ok :)
[09:16] axw, so, exactly, it's the backreferences that make me uncomfortable
[09:17] axw, assume everything we have today but without the backreferences
[09:19] axw, but also associate storageinstances explicitly with the machine (and?, hmm, remote storage provider?)
[09:20] axw, such that in either case we can have a simple watcher that connects easily with the right storage provisioner worker
[09:20] fwereade: ok. so it's okay to destroy a unit that has active storage instances?
[09:21] axw, and then (analogously to how we set instancedata once a machine's provisioned) we can put the actual info the unit agent needs to know in another collection somewhere, such that it's simple for the unit agent to watch it and respond once it's there
[09:22] axw, and, yes, I think it is
[09:22] axw, so long as they do get cleaned up, it doesn't have to be the unit's responsibility
[09:23] axw, you could just have a cleanup step in state that finds all storage tagged with the removed unit and sets it to dying
[09:23] axw, in fact it's quite nice to have it on the machine for that reason as well
[09:23] fwereade: we *would* want to block machine destruction until that's done though, right?
[09:23] axw, machine destruction, yes, I think so
[09:24] ok, sounds reasonable
[09:24] axw, which would maybe imply backreferences on the machine
[09:24] axw, but I would also say, ideally, *not* on the machine *document*
[09:25] fwereade: right, I will try to avoid doing that
[09:25] axw, we need to be really careful about adding fields to those docs without carefully considering the impact on existing watchers
[09:26] axw, basically, given the current watcher implementation, it's usually sensible to group fields according to their read/write patterns
[09:26] fwereade: I'm going to have to rework the storage model a bit too. because of shared storage, we'll need to have a separate entity that models a unit's (machine's?) attachment to that storage instance
[09:26] likewise for volumes/block devices
[09:26] (i.e. for multi-attach volumes)
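For concreteness, a sketch of the document shapes discussed between [09:14] and [09:26]: a "storageinstances" collection whose docs carry an owner tag (with no backreferences on the unit/service docs), plus the separate per-unit attachment entity proposed for shared storage. All field names are illustrative assumptions, not the real schema:

```go
package main

import "fmt"

// storageInstanceDoc is a hypothetical shape for a document in the
// "storageinstances" collection discussed above: owned by a unit or
// service via the Owner tag, with Life set to Dying by a cleanup
// step when the owner is removed.
type storageInstanceDoc struct {
	ID    string `bson:"_id"`   // e.g. "data/0"
	Kind  string `bson:"kind"`  // "block" or "filesystem"
	Owner string `bson:"owner"` // unit or service tag, e.g. "unit-mysql-0"
	Life  string `bson:"life"`  // Alive/Dying/Dead
}

// storageAttachmentDoc models a single unit's (or machine's)
// attachment to a storage instance -- the separate entity proposed
// for shared/multi-attach storage. A provisioner fills in Location
// once the storage exists, analogous to instancedata for machines.
type storageAttachmentDoc struct {
	StorageID string `bson:"storageid"`
	Unit      string `bson:"unit"`
	Location  string `bson:"location,omitempty"` // e.g. mount point or device path
}

func main() {
	doc := storageInstanceDoc{ID: "data/0", Kind: "filesystem", Owner: "unit-mysql-0", Life: "Alive"}
	att := storageAttachmentDoc{StorageID: doc.ID, Unit: "mysql/0"}
	fmt.Printf("%+v\n%+v\n", doc, att)
}
```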
[09:27] axw, got you
[09:27] axw, and in fact
[09:28] axw, that document containing the instancedata-equivalent
[09:28] axw, which is what the unit needs to see and watch
[09:28] * axw nods
[09:28] axw, maps pretty much perfectly onto that entity
[09:29] axw, there may still be a question about shared storage
[09:29] axw, ah-ha, maybe
[09:30] ?
[09:30] axw, *provisioning* and *accessing* a given storage are fundamentally different responsibilities
[09:31] right
[09:31] axw, so all the provisioning happens on the machine agent
[09:33] axw, i.e. creation/destruction of the actual substrate
[09:34] axw, and once the storage exists, the minimum necessary info to access it is handed over to the unit agent
[09:37] fwereade: I think that sounds sensible. the machine agent will provision it and send that info to the API server, which will update the storage instance *attachment* doc with the location, etc., which the unit will then see
[09:37] axw, the unit agent is then responsible for turning it into the form the actual unit expects -- possibly even including things like "mount the device" and "create the filesystem"? -- but, crucially, this becomes synchronised with the unit agent, because we can do that work inside operations, and queue suitable hooks for afterwards
[09:38] axw, the other thing
[09:39] axw, is that the storage-watching seems like it's targeted at the dynamic storage model, and I think the first use case to satisfy is the "run all the storage hooks before the install hook" one
[09:40] axw, that's not to say it won't become useful
[09:40] fwereade: I was intending to do that next, because it's of less demo value :)
[09:41] fwereade: right now we just want to see the hooks firing when storage is assigned
[09:42] fwereade: so what I was going to do is, after the deploy operation has finished, but before the "install" or "upgrade-charm" hooks are run, wait for the required storage to be attached
[09:42] is that the right place to interpose?
[09:43] axw, yes, I think so
[09:44] axw, so instead of queueing an install hook post-deploy we instead record and run some attaching-storage mode
[09:45] axw, forgive dumb questions, haven't read it all yet
[09:45] axw, local persistent what's-attached storage?
[09:46] fwereade: attached storage is storage that's assigned to the unit and ready for use, i.e. the block device is visible and/or the filesystem is formatted and mounted
[09:46] axw, we need to bear in mind that we could get bounced at any time, and we'd really like to be able to pick up seamlessly
[09:46] axw, how do we know we've already run the appropriate storage-attached hook?
[09:47] axw, I don't think we want to attach all the storage every time we start the agent
[09:47] axw, it's bad enough doing that with config-changed ;)
[09:47] fwereade: we don't (yet), I will be adding local state persistence
[09:47] a la the relationer
[09:48] axw, cool
[09:48] axw, but it brings me to another thing
[09:48] axw, that storage interface in uniter basically looks good, and the correspondence with relations is good and sane
[09:49] axw, the trouble is that I'm pretty sure the relationers are in the wrong place
[09:49] axw, and that they need to be associated more closely with operation state, and not with the uniter itself
[09:51] axw, are you roughly familiar with any of the uniter/runner code?
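A rough sketch of the interposition point agreed at [09:42]-[09:43]: after the deploy operation, before the install or upgrade-charm hook, block until the required storage reports itself attached. The polling loop, timeout, and attached() predicate are illustrative stand-ins for a real watcher:

```go
package main

import (
	"fmt"
	"time"
)

// waitForStorageAttachments blocks until every required storage
// instance reports itself attached (assigned to the unit and ready
// for use, per [09:46]). A real implementation would watch state
// rather than poll; the timeout is illustrative only.
func waitForStorageAttachments(required []string, attached func(id string) bool) error {
	const timeout = 5 * time.Minute
	deadline := time.Now().Add(timeout)
	for {
		pending := 0
		for _, id := range required {
			if !attached(id) {
				pending++
			}
		}
		if pending == 0 {
			return nil // safe to run the install/upgrade-charm hook now
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%d storage attachments still pending", pending)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	ready := map[string]bool{"data/0": true}
	err := waitForStorageAttachments([]string{"data/0"}, func(id string) bool { return ready[id] })
	fmt.Println("err:", err)
}
```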
[09:51] fwereade: very roughly :)
[09:51] axw, there's a yucky callback into the uniter when we set up a hook context to find out what relation state currently looks like
[09:52] axw, but operations themselves are composed of methods that accept a state and return a new state
[09:52] axw, we could and should be feeding that info along an explicit path
[09:53] I see, so we're going uniter -> operation -> runner and then back to uniter via a callback?
[09:53] axw, so the state we pass into the operation methods includes what storage there is, what relations there are, etc etc
[09:53] axw, yeah exactly
[09:54] axw, I have done a reasonable job of splitting out the types that needed to exist
[09:54] axw, but now the borders are in place there's still a bunch of moving things to the right place
[09:54] fwereade: ok, I'll bear it in mind when getting to the context bit... which will be very soon
[09:55] axw, ofc, good point
[09:55] axw, yes, please make sure it's part of the state you make accessible via the operation executor
[09:56] axw, (and yes it's a *bit* weird that *that* is responsible for operation state persistence, feel free to massage it towards sanity)
[09:56] axw, and if you're doing that...
[09:57] axw, would you just move relations in the same way at the same time please?
[09:57] axw, you don't have to fix the callback, although it would be cool if you did
[09:57] axw, just, if you're moving one field please move the other, because the same forces apply to both
[09:57] axw, making sense?
[09:58] fwereade: nope. moving which field?
[09:58] sorry
[09:58] axw, uniter.relations
[09:58] axw, which I think is completely analogous to uniter.storage
[09:58] right
[09:58] axw, that storage info needs to get to the execution contexts
[09:58] axw, as does relation info
[09:59] axw, which currently uses a scary/evil callback
[09:59] axw, we should not repeat the relations mistake
[10:00] axw, we should make sure storage state is passed into operations in the same way the other local state is
[10:00] ok
[10:00] axw, (see the uniter/operation.Operation interface)
[10:00] axw, (and Executor which uses it)
[10:01] TheMue: dimitern: sorry guys - I'll be a few minutes late to standup
[10:01] axw, this likely implies changing something about the Operation/Executor interface (or possibly the operation.State type??)
[10:01] axw, whatever you do to it to add storage, please also do to add relations
[10:02] fwereade: ok, understood. I'll need to figure that bit out first, but I'll do both at the same time
[10:03] axw, you can keep the field on the uniter and the callback to it -- just make sure the relations do get to the runner context via an explicit chain of calls, and then it'll be easy to remove the callback and the uniter's reference to relations
[10:04] fwereade: ok. sorry for being dumb, but where would relations and storage live if not on uniter?
[10:04] modes feed them...
[10:05] axw, I think they're part of operation state
[10:05] axw, not sure you've had a chance to keep up with the most recent uniter stuff?
[10:06] axw, particularly the operation stuff
[10:06] axw, wait, you did review 760 I think
[10:06] axw, so the direction we're currently moving in is
[10:07] axw, strip uniter down as far as we can; it's got too many responsibilities, and the only one that's *clearly* the uniter's problem is creating (and maintaining) the other components that need to work together
[10:08] axw, the filter, the juju-run handler, the actual main loop
[10:09] fwereade: ok. so uniter will be responsible for taking things from the filter and passing them to the operation state
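A hedged reconstruction of the Operation/Executor split referenced at [10:00]: operations accept a state and return a new one, and the executor alone persists it. The names approximate the uniter/operation package described here but are not copied from it:

```go
package main

import "fmt"

// State stands in for the persisted uniter operation state; the
// real operation.State carries hook and charm details.
type State struct{ Step string }

// Operation approximates the shape described above: each method
// accepts the current state and returns a replacement state (or nil
// for "no change"); it never persists anything itself.
type Operation interface {
	Prepare(state State) (*State, error)
	Execute(state State) (*State, error)
	Commit(state State) (*State, error)
}

// executor drives an Operation through its steps and is the party
// responsible for writing out each new state -- the division of
// labour flagged in this discussion as awkward once relation and
// storage state must travel the same path.
type executor struct{ current State }

func (e *executor) run(op Operation) error {
	for _, step := range []func(State) (*State, error){op.Prepare, op.Execute, op.Commit} {
		next, err := step(e.current)
		if err != nil {
			return err
		}
		if next != nil {
			e.current = *next // a real executor would persist here
		}
	}
	return nil
}

type noop struct{}

func (noop) Prepare(s State) (*State, error) { s.Step = "prepared"; return &s, nil }
func (noop) Execute(s State) (*State, error) { s.Step = "executed"; return &s, nil }
func (noop) Commit(s State) (*State, error)  { s.Step = "committed"; return &s, nil }

func main() {
	e := &executor{}
	if err := e.run(noop{}); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", e.current)
}
```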
[10:09] operation state will encapsulate the relations and storage bits
[10:09] and ... will have a hook output channel?
[10:09] axw, to support this, the various tangled skeins of "what actually happens when X changes" are being extracted into operations
[10:10] axw, which will ideally operate purely against the state they're given, but are currently supported by alarming numbers of callbacks
[10:10] axw, which is still better than before because at least we can test them in some detail
[10:11] axw, I'm not sure about the output channel
[10:11] axw, did you see relation.Peeker? I think it was 761
[10:11] I didn't look at that one
[10:12] axw, so the core idea there is that relying on one big select with a bunch of inputs is not actually good enough to get any sane sort of hook ordering behaviour
[10:12] axw, so relation.Peeker is an alternative to a HookSender for using a HookSource
[10:13] axw, it has a Peeks channel, which has something to give you if the source is not empty
[10:13] axw, when you read from the peeks channel you get a Peek with a hook.Info, and Consume/Reject methods
[10:14] axw, you have to consume or reject the peek before it'll give you another, or read from the watcher again
[10:14] axw, but basically you can leave them to run in the background and only interrupt them when you know there's no higher-priority hook to run
[10:15] axw, so the "make it a HookSource" thing is great
[10:15] axw, but please make sure it works with Peeker as well as HookSender
[10:16] axw, all that CL has is the addition of the type, I don't use it yet
[10:16] fwereade: I did update relation/peeker.go, I just didn't look into what it does
[10:16] thanks for the explanation
[10:16] axw, perfect then :)
[10:17] axw, maybe non-obvious: this means I intend to do away with the whole starthooks/stophooks malarkey in modeabide
[10:18] axw, when we know about the relations we can just watch them forever and only check the queues when we need them
[10:18] fwereade: ahhh I see, so they'll just be running watchers (in a goroutine) all the time
[10:19] axw, yeah, I think it's much less complex
[10:19] yep
[10:19] I thought it was a bit awkward
[10:20] axw, since you're here and in this area though, another thing to be aware of and feel free to do if I haven't and it helps you
[10:20] axw, when passing storage state around
[10:21] axw, the current interface for Operation is wrong: it accepts a State and returns *State, error, and the executor is responsible for writing out the new state
[10:21] axw, this is already awkward to some degree and will not make sense at all with relations and storage
[10:22] axw, it ought to be accepting a State type that it can modify and then write if it wants to
[10:22] axw, because certain operations will absolutely want to change that state
[10:23] axw, attaching storage is absolutely something you want to record having done
[10:23] axw, as is recording relation hook state
[10:24] fwereade: how does the current interface not handle that?
[10:24] axw, magic callbacks into the Uniter! if you want a catalogue of my sins, see uniter/op_callbacks.go
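A standalone sketch of the relation.Peeker contract described at [10:12]-[10:14]: a Peeks channel that offers the head of a non-empty source, and a Peek that must be consumed or rejected before the next one is offered. This illustrates the idea; it is not the real relation package code:

```go
package main

import "fmt"

// HookInfo stands in for hook.Info in this sketch.
type HookInfo struct{ Kind string }

// Peek mirrors the described contract: the consumer must call
// exactly one of Consume or Reject before another Peek is offered.
type Peek struct {
	info HookInfo
	done chan bool // true = consume (pop), false = reject (leave queued)
}

func (p Peek) HookInfo() HookInfo { return p.info }
func (p Peek) Consume()           { p.done <- true }
func (p Peek) Reject()            { p.done <- false }

// peeker offers the head of the queue on its Peeks channel whenever
// the queue (the "source") is non-empty, so it can run in the
// background and be interrupted only when a higher-priority hook
// needs to run first.
func peeker(queue []HookInfo) <-chan Peek {
	peeks := make(chan Peek)
	go func() {
		defer close(peeks)
		for len(queue) > 0 {
			p := Peek{info: queue[0], done: make(chan bool)}
			peeks <- p
			if <-p.done {
				queue = queue[1:] // consumed: pop the head
			}
			// rejected: leave the head in place and offer it again
		}
	}()
	return peeks
}

func main() {
	peeks := peeker([]HookInfo{{"relation-joined"}, {"relation-changed"}})
	for p := range peeks {
		fmt.Println("running", p.HookInfo().Kind)
		p.Consume()
	}
}
```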
[10:25] axw, hopefully I will get that down to (almost) nothing over time
[10:25] axw, and it's already paid off, because now I can at least test what individual operations actually do in some detail
[10:25] fwereade: oh right, it modifies state but not the relation state
[10:25] axw, yeah
[10:25] axw, and the relation state modification happens in CommitHook iirc
[10:26] ok
[10:26] axw, as you have on the Storage interface
[10:26] yup
[10:26] axw, when it would be much nicer to have the state-to-modify passed in directly in both instances
[10:27] :q
[10:27] oops
[10:27] axw, anyway, sorry my code led you astray
[10:27] fwereade: no worries, thanks for the lesson :) this has been very informative
[10:28] axw, cool
[10:28] not sure how I'm going to do all that by Tuesday though... maybe not sleep
[10:33] axw, as always I am giving you high-level advice, we can't realise the vision all in one go
[10:33] :)
[10:34] axw, so, for example, so long as you do promise to fix it later
[10:34] axw, it would be very reasonable to start off by just tacking the storage state onto the existing operation.State
[10:34] fwereade: noted. I will probably have to use that line
[10:34] ok
[10:34] axw, because I think you *do* need persistence of what you've done
[10:35] axw, and you can at least keep using an existing mechanism (i.e. the executor writing it out for you)
[10:35] axw, (by you returning a state with whatever changes)
[10:35] * axw nods
[10:36] axw, fwiw, the other thing I need to do with uniter.relations
[10:37] axw, is to turn Update into an operation
[10:37] axw, so there will be storage ops, and relation ops -- neither of those will run hooks, but they will write out state that directly or indirectly queues other hooks
[10:38] axw, and uniter will hopefully shrink to almost nothing
[10:38] sorry, bbs
[10:38] ok
[10:45] axw, ok, back
[10:45] axw, so do you feel usefully guided? :)
[10:46] fwereade: mostly. I'm not really seeing how I can get rid of PrepareHook/CommitHook yet
[10:46] in the callbacks
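A sketch of the alternative fwereade argues for at [10:21]-[10:26]: hand the operation a state it can modify and persist directly, instead of returning *State for the executor to write. All names here are hypothetical:

```go
package main

import "fmt"

// opState is a stand-in for the uniter's persisted operation state,
// extended with the relation/storage bits the discussion wants fed
// along an explicit path instead of via callbacks.
type opState struct {
	RelationsJoined []int
	StorageAttached []string
}

// stateWriter is the hypothetical "State type that it can modify and
// then write if it wants to" from the discussion above.
type stateWriter struct {
	current opState
}

func (w *stateWriter) Modify(mutate func(*opState)) error {
	next := w.current
	mutate(&next)
	// A real implementation would persist atomically here before
	// exposing the new state.
	w.current = next
	return nil
}

func main() {
	w := &stateWriter{}
	// An attach-storage operation records what it did directly,
	// rather than returning a new state for an executor to write.
	_ = w.Modify(func(s *opState) {
		s.StorageAttached = append(s.StorageAttached, "data/0")
	})
	fmt.Printf("%+v\n", w.current)
}
```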
[10:46] fwereade: I mean, apart from coding relation/storage logic into the operations package
[10:47] axw, I think that's exactly what we need
[10:47] ok
[10:47] axw, we currently have deploy ops, and these are analogous, I think
[10:47] axw, I need to do a RelationsChanged operation soon
[10:48] axw, I still don't have the full picture because of the starthooks/stophooks thing
[10:48] I see, so I'd be adding a StorageAttached operation which queues a -storage-attached hook if it's committed
[10:48] axw, exactly
[10:49] axw, (what I might do with start/stop hooks is just make them run in every mode and shut down when the uniter shuts down)
[10:49] axw, and that feels sort of correct
[10:50] axw, in that it's essentially just another kind of operation filter
[10:50] axw, and we don't start/stop all that every time uniter state changes
[10:51] sounds logical
[10:51] axw, I'm wondering whether uniter is eventually going to collapse down into a single worker.Runner constructor :)
[10:52] fwereade: heh, I was thinking that earlier :)
[10:52] axw, I think there's enough of that stuff going on that it's what we need somewhere
[10:52] axw, it's a single non-trivial responsibility and it deserves some isolation
[10:53] axw, we'll get there one day :)
[11:29] * dimitern steps out for ~1h
[11:56] axw, reviewed in some detail, hope it's helpful
[12:11] morning
[12:31] thanks fwereade, about to eat, I'll take a look later
[12:48] axw: i was thinking the storage-attached hook might reasonably pass in multiple storage ids if several are attached before the hook fires
[13:06] axw: updated PR. plugged in bare minimum :D
[13:12] wallyworld_: what's the point though? if you can't guarantee that?
[13:12] anastasiamac: awesome
[13:13] axw: similar to bulk calls - sure, it may only be 1 most times, but why not allow for > 1 at minimal extra cost
[13:13] wallyworld_: I think it's more important to be consistent with other hooks. the most similar we have to compare to is relation hooks, and we don't combine hook calls for different relations
[13:14] ok, fair point, i'll change it. i'm not overly familiar with the design philosophy behind hooks
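Assuming the Operation shape sketched earlier, a hypothetical StorageAttached operation as proposed at [10:48]: it runs no hook itself, but on commit it records the attachment and queues the corresponding storage-attached hook (the hook name format is taken from the log line at [08:20]):

```go
package main

import "fmt"

// hookInfo and state are stand-ins; queuedHooks plays the role of
// "state that directly or indirectly queues other hooks" ([10:37]).
type hookInfo struct{ Kind, StorageID string }

type state struct {
	attached    []string
	queuedHooks []hookInfo
}

// storageAttached is a hypothetical operation: it runs no hook
// itself, but on Commit it records the attachment and queues the
// "<name>-storage-attached" hook for the uniter to run next.
type storageAttached struct{ storageID string }

func (op storageAttached) Commit(s state) (*state, error) {
	next := s
	next.attached = append(next.attached, op.storageID)
	next.queuedHooks = append(next.queuedHooks, hookInfo{
		Kind:      "data-storage-attached", // name format assumed from the [08:20] log line
		StorageID: op.storageID,
	})
	return &next, nil
}

func main() {
	s, _ := storageAttached{storageID: "data/0"}.Commit(state{})
	fmt.Printf("%+v\n", *s)
}
```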
[13:15] wallyworld_: I think this is something that would be good to get fwereade's input on, maybe when you're at the sprint
[13:16] will do, i'll make it one for now - that will also simplify the output and remove the need for parsing
[13:16] if we add an attribute key as you suggested
[13:16] wallyworld_, yeah, I think we'd prefer to go for granular and stay consistent
[13:17] fwereade: the cost is that it will translate into multiple api calls to get the instance details
[13:17] one api call per hook invocation
[13:17] wallyworld_, axw: bulk hooks are absolutely a good idea, but one we always held off from in the past (and as you know I favour a something-changed approach for charms that reach a certain threshold of complexity)
[13:18] wallyworld_, axw: so I'd prefer us to stay consistent with the existing approach for now and have one invocation per storage instance, as we have one invocation per remote unit
[13:18] ok, will do
[13:19] it's not per remote unit that i'm talking about - it's per storage instance
[13:19] a unit can require > 1 storage instances
[13:20] these may become attached and available on the host machine prior to the attached hook firing
[13:21] hence if we know of the many, we could pass them all into the hook for that unit, and a single api call made to get the storage instance details
[13:21] but, the preference is for one id per hook
[13:21] so i'll change it
[13:24] axw: RB shows 1 issue against my PR but all issues are resolved and 1 dropped :P
[13:24] axw: RB lies?... :D
[13:25] anastasiamac: *shrug* I am reviewing now
[13:25] axw: oh Awesome!! thnx :) didn't want to pull u from dina...
[13:25] nah, finished
[13:31] * dimitern is back
[13:32] anastasiamac: done. almost there, a couple more things (sorry)
[13:34] axw: thnx for review. will address it tomorrow..
[13:34] anastasiamac: no worries, get some sleep :)
[13:35] axw: sleep? what's that? :P
[13:47] fwereade: thanks for the comments. I'm looking into changing the storage stuff to an operation now, which I think will clean things up a bit in my code. if I do go ahead with that, I'll probably do a followup for the relation code when time's not so tight
[13:48] axw, yeah, you don't have to do that for relations -- it's just, if you're moving the one field, please send relations via the same path if and when it's ~trivial to do so
[14:01] I've noticed a couple of review comments in the last week or so about copyright headers - do we have a documented standard way to update copyright headers?
[14:02] 2013,2014,2015? 2013-2015? replace all with 2015? etc.
[14:03] jw4: new files get the current year
[14:03] perrito666: makes sense - how do we update existing files though?
[14:03] jw4: no clue
[14:04] I would say you are right n-m
[14:04] meh, vivid broke my skype
[14:05] jw4: first year of publication
[14:05] documented... no. there was a juju-dev thread on this a while ago
[14:06] axw: what is the first year of publication? the year the file was created, or the year a new function was added, etc. etc.
[14:07] axw: was there a conclusion on the mailing list? I'll have to go back and find it I guess
[14:07] jw4: good question. this is the point where I say IANAL ;)
[14:07] jw4: *I* just put the year the file was created
[14:07] others put the range
[14:08] some put all the years with commas, but I think that is discouraged
[14:08] axw: heh - I see. I only ask because I've noticed an uptick in review comments giving different guidance
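For reference, the header style juju Go files carried at the time, with the year chosen per the "first year of publication" rule axw describes (a new file created in 2015 gets 2015; the package name below is arbitrary):

```go
// Copyright 2015 Canonical Ltd.
// Licensed under the AGPLv3, see LICENCE file for details.

package storage
```

Whether an existing file should then get a range ("2013-2015") or a comma-separated list when modified is exactly the question this exchange leaves unresolved.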
[14:08] expected in January I guess
[14:14] perrito666, ericsnow: minor ux feature bug 1414021 for backup downloads
[14:14] Bug #1414021: Send size in download backup to allow for progress indicator
[14:16] hazmat: tx
[14:16] what version of juju is it that starts having the /environment/:uuid/endpoints .. trying to do some auto negotiation in the client
=== ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: 1414016
[14:19] dimitern, sorry, I think your last commit broke local provider lxc jobs for trusty, vivid, and utopic.
[14:20] dimitern, bug 1414016
[14:20] Bug #1414016: Local-provider lxc Failed to create lxc_container
[14:21] sinzui, oh boy, looking - thanks
[14:22] dimitern: thx for review
[14:25] sinzui, can I get some more info for that bug? /var/lib/juju/containers/* and /var/lib/juju/removed-containers/*, as well as /var/lib/lxc/juju-*/config and /var/lib/lxc/jenkins-*/config ?
[14:25] TheMue, np
[14:26] dimitern, not at the moment, 1.22-beta1 has started testing
[14:26] I will visit a machine and try to run what was left in place
[14:27] sinzui, also these lxc jobs should all be running with logging-config: juju.container=TRACE
[14:27] sinzui, ok, that'll work
[14:39] dimitern, I sent an email with the logs
[14:39] sinzui, thanks!
[14:40] sinzui, ok, I think I understood the problem and started to work on a fix
[14:40] thank you dimitern
[15:05] hmm.. the charms facade is enabled in trunk, and is returned in login info, but invoking any of the Charms facade methods gets 'Error': u'unknown object type "Charms"'
[15:06] anastasiamac, is the charms facade active? feel like i'm missing something
[15:21] perrito666, katco, voidspace: I need help triaging this bug. It isn't clear why juju failed or if the issue is in the cloud image or network. https://bugs.launchpad.net/juju-core/+bug/1413752
[15:21] Bug #1413752: Bootstrapping to Openstack Environment fails with "no instances found"
[16:14] sinzui, that bug looks like it may need to be looked at by the onyx team, who are gone for the weekend
[16:15] and hazmat, anastasiamac should also be on her saturday :)
[16:15] alexisb, :/
[16:15] sinzui, I can send tim and team a note
[16:15] alexisb, I will subscribe tim and ask him to give an opinion
[16:16] sinzui, and of course perrito666, katco, voidspace and team can feel free to chime in
[18:03] Is there anyone around who can help me with a problem with Review Board?
[18:22] ericsnow: ^^
[18:22] cherylj: I'd be glad to help :)
[18:23] thanks, ericsnow
[18:23] I just pushed some changes to address some review comments, but I'm not seeing that Review Board picked them up
[18:23] I see that it thinks there's another revision
[18:24] But it shows nothing in the diff between this and the previous revision
[18:24] http://reviews.vapour.ws/r/770/
[18:24] if I go to the PR on github, I see the correct changes
[18:24] \me takes a look
[18:25] cherylj: hm, that should have happened automatically I think, but you can probably do "rbt post -u" to force the issue
[18:25] Well, I saw that it created a new revision
[18:25] cherylj: did you refresh the reviewboard page? :)
[18:25] But it didn't have the changes
[18:25] Yes :)
[18:26] cherylj: ah, "revision" 4 is the same as 3, right?
[18:26] yes
[18:28] cherylj: this was the change? https://github.com/cherylj/juju/commit/b9404581ae177ab7fcd9d3a12505c97405e3cc28
[18:29] cherylj: hmm, as far as I can tell, the blank line you removed in your most recent commit is also gone in reviewboard
[18:29] ericsnow: yeah, i was thinking that maybe RB's diff is just incorrectly hiding that
[18:29] i.e. it doesn't show it if i toggle the whitespace flag on the diff page
[18:30] katco, cherylj: ah, right; I believe there is a setting in RB to ignore whitespace changes (or something like that)
[18:30] ericsnow: yeah, it's in the bottom right of the control panel on the diff page
[18:30] Hmm, the change was more than that. Let me see if I can find the commit
[18:31] cherylj: possibly? https://github.com/cherylj/juju/commit/03a14eff116a4edc28dcaa398c50141b090788f7
[18:31] katco: yeah, I noticed that too.
[18:31] katco: yes, that's the one
[18:31] cherylj: ok, so you just want to look at 2-4 then
[18:31] cherylj: it picked up that latest revision too, which is "3"
[18:31] i mean 4
[18:31] "3" is the one you're interested in
[18:32] katco, 3 doesn't have the changes from the last commit
[18:32] (the last one you linked to)
[18:33] cherylj: did you make the changes in rapid succession?
[18:33] those last 2 commits?
[18:33] Yes
[18:33] ericsnow: perhaps the github hook only picks up the last commit? and if you do many quick ones it won't pick up some?
[18:34] cherylj: you will want to install "review board tools"
[18:34] cherylj: and run rbt post -u
[18:34] katco: I'll check
[18:34] katco, thanks, I'll do that
[18:34] cherylj: https://www.reviewboard.org/downloads/rbtools/
[18:35] cd $GOPATH/src/github.com/juju/juju && rbt post -u
[18:36] katco: well, the hook didn't fail, so something else is afoot
[18:36] ericsnow: odd; can you tell if the hook grabbed all PRs, or only the last one?
[18:37] katco: it simply records the PR event (in this case "synchronize") with all its info
[18:38] ericsnow: but in this case there were multiple PR events, right?
[18:38] katco: the hook then (on the RB side) makes a GH API request to pull the current diff for the PR
[18:38] katco: each PR event would trigger the hook separately
[18:39] katco: which is what happened
[18:39] hrm. interesting.
[18:39] katco: and each time the hook did not fail
[18:39] ericsnow: well, the lesson is clear. don't be too productive. ;)
[18:40] cherylj: when did you push those changes?
[18:40] ericsnow, about an hour ago
[18:42] cherylj: okay, I see one "synchronize" event for you about an hour ago (not two or more)
[18:42] cherylj: so if you did two pushes in rapid succession then it may be that github merged those into one hook event
[18:43] cherylj: and it's conceivable that that situation resulted in some weird behavior on RB
[18:44] cherylj: could you try refreshing your branch somehow (to trigger the GH hook again)?
[18:47] ericsnow, just did, and just got a notification from RB... let me look at the diff
[18:48] ericsnow, and it worked!
[18:48] thanks ericsnow, katco
[18:48] cherylj: happy coding! ericsnow is the RB master :)
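A minimal sketch of the hook behaviour ericsnow describes at [18:37]-[18:38]: receive the GitHub pull_request event and, on a "synchronize" action, fetch the PR's current diff. The endpoint and handling are illustrative; "synchronize" and "diff_url" are real GitHub payload fields, but everything else here is an assumption, not the actual Review Board integration:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// prEvent captures the fields of a GitHub pull_request webhook
// payload used in this sketch. "synchronize" is the action GitHub
// sends when new commits are pushed to a PR; note that rapid pushes
// may be coalesced into a single event, as happened above.
type prEvent struct {
	Action      string `json:"action"`
	PullRequest struct {
		Number  int    `json:"number"`
		DiffURL string `json:"diff_url"`
	} `json:"pull_request"`
}

func handleHook(w http.ResponseWriter, r *http.Request) {
	var ev prEvent
	if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if ev.Action == "synchronize" {
		// Record the event, then pull the *current* diff for the
		// PR -- so one coalesced event still yields the latest
		// changes.
		log.Printf("PR #%d updated; fetching %s", ev.PullRequest.Number, ev.PullRequest.DiffURL)
	}
	fmt.Fprintln(w, "ok")
}

func main() {
	http.HandleFunc("/hook", handleHook)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```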
[18:48] cherylj: I'll chalk it up to github doing something weird that RB couldn't handle :)
[18:49] katco: I wouldn't go that far :P
[18:49] hehe :)
[18:49] ericsnow: hehe :)
[18:49] I just got a call from day care and my daughter is not feeling well :(
[18:49] so, off I go
[18:49] doh
[18:49] hope she feels better, cherylj
[18:49] me too, thanks katco
=== kadams54 is now known as kadams54-away
[19:22] kateco what is the flag for the one-line juju status output again?
[19:40] mbruzek1: whoops, sorry, your ping didn't register. you can do short, line, or oneline
[19:40] kateco this is not in juju help status
[19:41] mbruzek1: hm, what version? it is on mine
[19:41] right, EOW
[19:41] g'night all
[19:41] mbruzek1: (also, no e in katco if you want to ping me)
[19:41] oh, sorry
[19:41] mbruzek1: no worries! :)
[19:42] 1.20.14-trusty-amd64
[19:42] http://paste.ubuntu.com/9839141/
[19:43] https://github.com/juju/juju/blame/master/cmd/juju/status.go#L34-L35
[19:44] mbruzek1: i think it will be in 1.21, let me check
[19:45] mbruzek1: the "oneline" flavor is in 1.21
[19:45] mbruzek1: 1.22 will support all the flavors
[19:45] katco: Ok, I will have to update to get that. Thanks.
[19:46] mbruzek1: yeah, looks like it. sorry about that
[19:47] alexisb: ping
[19:57] sinzui: ping
[19:57] hi katco
[19:57] sinzui: hey, happy fri :)
[19:57] sinzui: just a heads up, it looks like all of the status-related work got missed in the release notes
[19:58] sinzui: i'm adding it in, but i thought i'd let you know in case it's a process issue
[19:58] katco, thank you. dimitern also gave me some revisions. Just update the gdoc. I also made a change for the official release next week
[19:59] sinzui: thanks, will do. just didn't know if there was a process issue that we might want to look at for next time. i honestly don't remember what all i worked on for v1.21
[19:59] katco, I read my history of merged branches.
[20:01] sinzui: weird, is it the status stuff not in there?
[20:02] katco, I see your changes that you demoed in Brussels.
[20:02] katco, http://pastebin.ubuntu.com/9839406/
[20:03] sinzui: ah ok, yup, that's it :)
[20:03] sinzui: in addition to the new filtering methods
[20:09] sinzui: thanks, this is my first involvement with our release notes, so i wasn't sure how this all worked. i've updated the document.
[20:28] katco, pong
[20:28] what's up?
[20:28] alexisb: hey, it was just a question about the release docs
[20:28] alexisb: i think sinzui got me straightened away (see backlog)
[20:29] ah, I see your conversation with sinzui below that, cool
[20:29] alexisb: ty for responding :) and happy friday to you!
[20:29] katco, cool, sorry for the delay, I just got back
[20:29] alexisb: no worries at all
[20:29] and happy friday to you too!
[20:31] alexisb, I am editing the 1.22-beta1 release notes. I thought MESS would be more complete for 1.22 https://docs.google.com/a/canonical.com/document/d/1IPmXCwujtq8zZs9mlfps7wGcBYYv4MLC_8w2bN2CetQ/edit
[20:31] alexisb, I am wondering if the info in this doc is stale
[20:32] sinzui, let me go look
[20:32] sinzui, that info is correct; lots of it is there but is hidden behind feature flags and/or not exposed to the user
[20:33] MESS is definitely not targeted for 1.22
[20:36] alexisb, rock. then once I finish these notes, I will continue with the release of 1.22-beta1, which is already queued for release
[20:36] sinzui, you da' man! thank you