=== kadams54 is now known as kadams54-away | ||
anastasiamac | axw: wallyworld: about cmd output | 01:25 |
anastasiamac | axw: wallyworld: m going to add status to list table :D | 01:25 |
axw | anastasiamac: thanks | 01:25 |
anastasiamac | axw: wallyworld: m going to rename test data to avoid confusion | 01:25 |
anastasiamac | axw: wallyworld: m thinking that if we want to differentiate btw unit/service in output, should do it in this pr... | 01:26 |
anastasiamac | axw: wallyworld: thoughts? | 01:26 |
axw | anastasiamac: IMO, we shouldn't care whether or not it's owned by a service or a unit, just whether it's attached to any units | 01:27 |
axw | anastasiamac: by "we", I mean the user | 01:27 |
anastasiamac | axw: wallyworld: k :D so besides adding status and renaming test data, is there anything else that u think should b addressed in this pr on output? :D | 01:28 |
axw | anastasiamac: a few things, I'm commenting now | 01:28 |
anastasiamac | axw: thx :D | 01:28 |
wallyworld | anastasiamac: agree with axw about service vs units fwiw | 01:29 |
anastasiamac | wallyworld: tyvm :D | 01:30 |
anastasiamac | wallyworld: good to know that this part of output is *kind of* done, axw comment pending :D | 01:30 |
axw | anastasiamac wallyworld: I'm free for a hangout whenever. I've commented on the diff | 01:32 |
wallyworld | i'll take a look | 01:32 |
anastasiamac | wallyworld: do u still want to discuss output? | 01:33 |
wallyworld | maybe we should, just so we're all on the same page | 01:33 |
wallyworld | we can go to the standup hangout | 01:34 |
wallyworld | axw: did you want to join us quickly? | 01:34 |
axw | wallyworld: omw | 01:35 |
axw | wallyworld: tomorrow I'll need to head out in the morning for a while, to sign the transfer of land | 01:46 |
wallyworld | sure, np | 01:46 |
axw | settlement is next week :o | 01:46 |
wallyworld | \o/ | 01:46 |
gsamfira | here's a nice treat for whomever is curious: http://paste.ubuntu.com/10607588/ | 01:55 |
gsamfira | http://paste.ubuntu.com/10607590/ | 01:55 |
thumper | ha | 02:00 |
thumper | interesting | 02:00 |
gsamfira | trying out a noop charm now | 02:02 |
gsamfira | see if it actually deploys and runs hooks | 02:02 |
gsamfira | http://paste.ubuntu.com/10597232/ <-- one other treat :) | 02:03 |
axw | gsamfira: nice :) | 02:08 |
axw | gsamfira: that's a state server on jessie? | 02:09 |
gsamfira | axw: yep :) | 02:09 |
axw | neato | 02:09 |
gsamfira | took longer to generate the jessie image for maas than to get juju to run on it | 02:09 |
axw | hehe :) | 02:09 |
gsamfira | has a few bugs, but they should be easy fixes :) | 02:11 |
thumper | wallyworld: got a few minutes? | 03:23 |
wallyworld | sure | 03:23 |
thumper | 1:1 hangout? | 03:23 |
wallyworld | yup | 03:24 |
axw | wallyworld: when you're free, PTAL: https://github.com/juju/juju/pull/1841 | 03:30 |
wallyworld | sure | 03:30 |
anastasiamac | axw: wallyworld: shall i filter out storage without a Unit from the output? | 03:33 |
axw | anastasiamac: I would prefer we leave it there until we have a flag | 03:33 |
anastasiamac | axw: k :D thnx! | 03:33 |
anastasiamac | axw: wallyworld: PR is cleaned up :D plz revisit.. m taking baby to the doc and will check l8r :D tyvm | 03:46 |
wallyworld | axw: what about AttachmentTag instead of EntityTag on params.MachineStorageId? | 03:56 |
axw | wallyworld: sorry, was afk. hmmm I guess so | 03:57 |
axw | yeah ok, will change | 03:58 |
wallyworld | just a suggestion | 03:58 |
wallyworld | seems a little more meaningful | 03:58 |
axw | wallyworld: I wasn't very happy with EntityTag, that seems slightly better | 03:58 |
axw | anastasiamac: if you use a string, then old clients will be able to see new status values. with an int, they'll get things they don't understand | 03:59 |
axw | anastasiamac: IOW, using a string means the client doesn't need to interpret the value. | 03:59 |
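A minimal sketch of the compatibility point above (the type and constant names here are hypothetical, not juju's actual status types): with a string-typed status, an old client can still display a value it has never seen, while an int enum degrades to an opaque number.

```go
package main

import "fmt"

// Hypothetical status encodings, for illustration only.
type StatusInt int

const (
	IntPending  StatusInt = iota // 0
	IntAttached                  // 1
	// A newer server might send 2 ("detaching"); an old client compiled
	// without that constant has no way to name it.
)

type StatusString string

const (
	StrPending  StatusString = "pending"
	StrAttached StatusString = "attached"
)

func main() {
	// Old client receiving an unknown int: only a meaningless number.
	fmt.Println(StatusInt(2)) // prints: 2

	// Old client receiving an unknown string: still human-readable.
	fmt.Println(StatusString("detaching")) // prints: detaching
}
```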
=== urulama is now known as urulama|kids | ||
wallyworld | axw: if you have time, here's an initial PR to start adding support for persistent volumes | 06:51 |
wallyworld | http://reviews.vapour.ws/r/1169/ | 06:51 |
axw | wallyworld: cool, looking | 06:52 |
=== urulama|kids is now known as urulama | ||
=== ashipika1 is now known as ashipika | ||
axw | wallyworld: reviewed | 07:06 |
wallyworld | ty | 07:07 |
dimitern | wallyworld, hey, thanks for landing my fix | 07:08 |
wallyworld | dimitern: np, still waiting on ci | 07:09 |
dimitern | yeah, we'll see | 07:10 |
jam | dimitern: I'll be there in just a sec, need to use the restroom | 07:34 |
dimitern | jam, sure, omw as well | 07:34 |
dimitern | wallyworld, build-revision was disabled so we might have waited the whole day for nothing - so I've enabled it and that will kick off all the rest | 07:37 |
jam | dimitern: I can't hear you at all | 07:38 |
dimitern | jam, i've rejoined | 07:39 |
wallyworld | dimitern: oh, ffs, i wonder why it was disabled. i saw CI jobs running during the day | 07:58 |
dimitern | wallyworld, yeah, so I can see the tests are running now | 08:06 |
wallyworld | axw: re persistent machine scoped volumes. you can get those if you hog smash units onto the same machine, and when the unit is destroyed, the volume remains. So i was taking persistent to pertain to the lifecycle of the unit. In most cases, that will match the lifecycle of the machine, but doesn't have to | 08:06 |
wallyworld | dimitern: so i must have been seeing a subset of the tests or something | 08:07 |
axw | wallyworld: by that definition, all storage is persistent? I'm pretty sure it's meant to be about whether or not it outlives the machine... | 08:08 |
dimitern | wallyworld, I guess so - the industrial and charm test jobs most likely | 08:08 |
axw | wallyworld: see "Data persistence" in the spec | 08:09 |
wallyworld | axw: fair enough, i was thinking it might be considered to be a bit limiting | 08:09 |
TheMue | morning o/ | 08:31 |
dimitern | TheMue, o/ | 08:34 |
wallyworld | axw: the reason i put persistent on volumeDoc was that access to volumeparams is no longer available when SetVolumeInfo is called. I take the point about being able to derive it from DeleteOnTermination, but are we always going to be able to do that with other providers? | 08:37 |
axw | wallyworld: in my current branch, I have code to transfer info from params to info when provisioned. I don't understand your question there - the provider *has* to be able to determine whether or not it just created a persistent volume | 08:38 |
axw | wallyworld: i.e. all providers will have to know whether or not they're creating persistent volumes. if they are, then they *must* set Persistent:true, if they are not, then they *must* set it to false | 08:39 |
axw | otherwise we'll end up with volumes floating around costing people $$ | 08:39 |
wallyworld | axw: yes, agreed. my point wasn't whether we could create persistent/non-persistent volumes, but whether we'd have a way post creation to query/access that info to pass on to SetVolumeInfo. But I guess we will always have the ability to do that | 08:41 |
axw | wallyworld: we call SetVolumeInfo with the information the storage provider returns from CreateVolumes | 08:41 |
axw | wallyworld: so CreateVolumes needs to record whether or not each volume it created was persistent | 08:41 |
wallyworld | i'll wait for your current branch to land before i finally re-propose this one | 08:41 |
wallyworld | ok, if that's in the contract, fair enough | 08:42 |
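A sketch of that contract (illustrative stand-in types, not juju's real storage provider interfaces): CreateVolumes records, per volume, whether what it created is persistent, so the result can be handed straight to SetVolumeInfo.

```go
package main

import "fmt"

// Illustrative stand-ins for the params/info types being discussed.
type VolumeParams struct {
	Name       string
	Persistent bool // requested durability
}

type VolumeInfo struct {
	VolumeID   string
	Persistent bool // what was actually created; the provider must set this
}

// CreateVolumes must report, for each volume it creates, whether that
// volume is persistent; otherwise volumes could be leaked after the
// machine is gone, costing people money.
func CreateVolumes(params []VolumeParams) []VolumeInfo {
	infos := make([]VolumeInfo, 0, len(params))
	for i, p := range params {
		infos = append(infos, VolumeInfo{
			VolumeID:   fmt.Sprintf("vol-%d", i), // provider-assigned id
			Persistent: p.Persistent,             // recorded at creation time
		})
	}
	return infos
}

func main() {
	infos := CreateVolumes([]VolumeParams{{Name: "db", Persistent: true}})
	fmt.Printf("%+v\n", infos[0]) // {VolumeID:vol-0 Persistent:true}
}
```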
axw | wallyworld: my next branch -- https://github.com/axw/juju/compare/watch-machine-storage...axw:storageprovisioner-api-attachments -- will propose after the other one lands | 09:26 |
axw | wallyworld: FYI, this commit copies bits between params/info: https://github.com/axw/juju/commit/196ad48a0ec4633f545979530e1e40d3e4529487 | 09:26 |
axw | wallyworld: so you can cherry-pick that if you want to repropose your branch in the mean time | 09:27 |
wallyworld | axw: sure, will look soon, hopefully trunk will be unblocked rsn | 09:27 |
axw | wallyworld: that branch isn't very interesting, just adding a pile of methods to storageprovisioner API for the worker changes | 09:27 |
axw | just an FYI | 09:28 |
axw | feel free to ignore until I propose it | 09:28 |
wallyworld | ty | 09:29 |
dimitern | dooferlad, voidspace, standup? | 10:00 |
perrito666 | morning | 10:03 |
TheMue | perrito666: heya o/ | 10:05 |
dimitern | voidspace, are you around? | 10:05 |
voidspace | dimitern: sorry guys - omw | 10:06 |
voidspace | got distracted | 10:06 |
dimitern | jamespage, jam, fwereade, hey guys, are we having the call now to discuss jamespage's trip to germany? | 10:28 |
dimitern | jam, fwereade, alexisb, me and jamespage are in the hangout now | 10:32 |
voidspace | dimitern: do you think I'm building my asserts correctly? | 10:32 |
voidspace | dimitern: Assert: append(isAliveDoc, unknownOrSame) | 10:32 |
voidspace | dimitern: I'm sure I saw that pattern elsewhere in our code | 10:33 |
* voidspace checking | 10:33 | |
dimitern | voidspace, I think so - looks fine on initial glance | 10:33 |
voidspace | dimitern: hmm... yes, we do exactly the same elsewhere | 10:33 |
voidspace | dimitern: e.g. state/machine.go | 10:33 |
voidspace | dimitern: Assert: append(isAliveDoc, bson.DocElem{"nonce", ""}), | 10:33 |
dimitern | voidspace, only txn.DocExists cannot be appended like this (and txn.DocMissing) | 10:35 |
jam | dimitern: I'm there | 10:35 |
voidspace | dimitern: thanks | 10:35 |
dimitern | voidspace, so when you check what caused ErrAborted in this case you'll need to also consider the doc was removed, in addition to life != alive, and state != unknown || same | 10:36 |
voidspace | dimitern: we already do that | 10:44 |
voidspace | dimitern: looking for errors.IsNotFound on Refresh | 10:44 |
dimitern | voidspace, I think the issue is around line 209 in SetState | 10:45 |
dimitern | voidspace, ErrAborted will always be the case you enter the if attempt > 0 block | 10:46 |
dimitern | voidspace, but on line 209 you're checking the error from Refresh | 10:46 |
dimitern | voidspace, I think instead you should check i.State() != AddressStateUnknown && i.State() != newState | 10:47 |
dimitern | voidspace, and the same applies to line 245 in AllocateTo - check i.State() != AddressStateUnknown instead | 10:49 |
voidspace | dimitern: isn't it the other way round, State() == AddressStateUnknown | 10:50 |
voidspace | ah, right | 10:50 |
voidspace | dimitern: yeah, I see | 10:50 |
dimitern | voidspace, :) yeah | 10:51 |
voidspace | dimitern: and now it works, so just need to add those missing tests... | 10:52 |
voidspace | dimitern: thanks | 10:52 |
dimitern | voidspace, awesome! | 10:52 |
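For reference, a compile-only sketch of the assert pattern discussed above, using gopkg.in/mgo.v2; the collection name, field names, and the encoding of Alive as 0 are assumptions, not the real juju state code.

```go
package main

import (
	"fmt"

	"gopkg.in/mgo.v2/bson"
	"gopkg.in/mgo.v2/txn"
)

// Base assert that extra field conditions are appended to, mirroring the
// state/machine.go line quoted above; 0 standing for Alive is an assumption.
var isAliveDoc = bson.D{{"life", 0}}

// setStateOps builds an op whose assert requires the doc to be alive AND
// to satisfy one extra field condition (e.g. state still "unknown").
func setStateOps(id string, extra bson.DocElem) []txn.Op {
	return []txn.Op{{
		C:      "ipaddresses", // illustrative collection name
		Id:     id,
		Assert: append(isAliveDoc, extra),
		Update: bson.D{{"$set", bson.D{{"state", "allocated"}}}},
	}}
}

func main() {
	ops := setStateOps("addr-0", bson.DocElem{Name: "state", Value: "unknown"})
	fmt.Printf("%+v\n", ops[0].Assert)
	// Caveats from the discussion: txn.DocExists / txn.DocMissing are
	// whole-document asserts and cannot be appended as a field condition,
	// and on txn.ErrAborted the caller (in the attempt > 0 branch) must
	// work out the cause: doc removed, life no longer alive, or the state
	// field neither unknown nor already the target value.
}
```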
mup | Bug #1432577 was opened: lxc containers on AWS can not be exposed <juju-core:New> <https://launchpad.net/bugs/1432577> | 10:57 |
dimitern | voidspace, I'm omw to our 1:1 | 11:02 |
voidspace | dimitern: cool | 11:02 |
voidspace | dimitern: branch updated with test | 11:02 |
voidspace | well, push in progress | 11:02 |
dimitern | voidspace, great! | 11:02 |
mup | Bug #1432577 changed: lxc containers on AWS can not be exposed <juju-core:New> <https://launchpad.net/bugs/1432577> | 11:09 |
mup | Bug #1432577 was opened: lxc containers on AWS can not be exposed <juju-core:New> <https://launchpad.net/bugs/1432577> | 11:18 |
=== kadams54 is now known as kadams54-away | ||
voidspace | dimitern: still can't land my branch, critical bug :-) | 12:30 |
voidspace | bug 1431888 | 12:30 |
mup | Bug #1431888: Juju cannot be deployed on a restricted network <ci> <deploy> <network> <regression> <juju-core:Fix Committed by dimitern> <juju-core 1.23:Fix Committed by dimitern> <https://launchpad.net/bugs/1431888> | 12:30 |
voidspace | dimitern: ah, it's fix committed | 12:30 |
* TheMue is at lunch | 12:31 | |
dimitern | voidspace, yeah, I know - I've set the job to retest with the fix, so it should be released soon I hope | 12:31 |
voidspace | dimitern: cool | 12:31 |
voidspace | dimitern: for testing the upgrade step defined in the state package I need some IP addresses that don't have a Life field | 12:36 |
voidspace | dimitern: is there a better way than manually constructing them as bson.D{...} and doing the insert? | 12:37 |
voidspace | dimitern: if I insert them using state then they get a Life field of course | 12:37 |
voidspace | dimitern: alternatively I can add them and then *remove* the Life field | 12:37 |
dimitern | voidspace, well, older versions of juju with added ip addresses will not have a life field | 12:38 |
dimitern | voidspace, it's fairly common to insert documents directly in state to setup a scenario | 12:38 |
dimitern | voidspace, as part of an upgrade step | 12:39 |
voidspace | dimitern: ok, manual insert it is | 12:40 |
dimitern | voidspace, (I mean to test a step - in reality, those ip addresses will already be in state) | 12:40 |
voidspace | of course | 12:40 |
voidspace | dimitern: and if I insert without an _id then mongo adds it for me? | 12:41 |
dimitern | voidspace, hmm.. depends - if it's ObjectId it will | 12:42 |
dimitern | voidspace, otherwise you need to set it manually | 12:43 |
voidspace | dimitern: ok, thanks | 12:43 |
voidspace | dimitern: found a good test in upgrades_test as a template | 12:43 |
dimitern | voidspace, sweet! | 12:43 |
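A sketch of that manual insert with mgo (the database and collection names, the _id format, and the fields are assumptions for illustration, and Dial assumes a local mongod; a real test would use the state test helpers):

```go
package main

import (
	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// Seed a pre-upgrade document by inserting it directly, bypassing the
// state API, so that it lacks the new "life" field the upgrade step adds.
func insertLegacyIPAddress(db *mgo.Database) error {
	return db.C("ipaddresses").Insert(bson.D{
		{"_id", "foo:0.1.2.3"}, // non-ObjectId _id: must be set manually
		{"value", "0.1.2.3"},
		{"state", ""},
		// deliberately no "life" field
	})
}

func main() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		panic(err)
	}
	defer session.Close()
	if err := insertLegacyIPAddress(session.DB("juju")); err != nil {
		panic(err)
	}
}
```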
mfoord | that was fun | 12:55 |
mfoord | a brief power cut | 12:55 |
mfoord | on which note | 12:57 |
* mfoord lurches to lunch | 12:57 | |
dimitern | FYI, master is unblocked, 1.23 not yet; I've re-queued all recent merges which bounced on master due to the blocker | 13:26 |
mfoord | dimitern: cool, re-queuing merge then | 13:27 |
mfoord | dimitern: ah, you've done it :-) | 13:27 |
mfoord | thanks | 13:27 |
dimitern | mfoord, :) | 13:27 |
* fwereade forgot he has to be out for a few hours | 13:27 | |
mup | Bug #1431888 changed: Juju cannot be deployed on a restricted network <ci> <deploy> <network> <regression> <juju-core:Fix Released by dimitern> <juju-core 1.23:Fix Committed by dimitern> <https://launchpad.net/bugs/1431888> | 13:31 |
dimitern | 1.23 is unblocked as well - I couldn't find any PRs that bounced to re-queue though - if you have any, feel free to re-queue them | 14:00 |
mup | Bug #1432652 was opened: upgrade_test.go failing on PPC64el <ci> <ppc64el> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1432652> | 14:01 |
mup | Bug #1432654 was opened: tracker_test.go failing on ppc64el <ci> <ppc64el> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1432654> | 14:01 |
* dimitern steps out for a while | 14:03 | |
abentley | dimitern, mfoord: It would be good if you waited until we get a bless before landing a bunch of new code that could well cause a curse. | 14:05 |
mup | Bug #1431444 changed: juju run results contain extraneous newline <juju-core:Invalid by cherylj> <https://launchpad.net/bugs/1431444> | 14:31 |
mup | Bug #1431685 changed: juju nova-compute charm not enabling live-migration via tcp with auth set to none <juju-core:Invalid> <nova-compute (Juju Charms Collection):New> <https://launchpad.net/bugs/1431685> | 14:31 |
mup | Bug #1432577 changed: lxc containers on AWS can not be exposed <juju-core:New> <https://launchpad.net/bugs/1432577> | 14:31 |
mfoord | abentley: if that's the general rule we need then trunk should be blocked waiting on a bless | 14:43 |
abentley | mfoord: Yes, that works for me. | 14:43 |
mfoord | :-) | 14:44 |
* perrito666 throws a couple of liters of holy water in the direction of jenkins | 14:45 |
abentley | perrito666: It's not looking good: http://juju-ci.vapour.ws/job/run-unit-tests-trusty-ppc64el/2557/console | 14:46 |
perrito666 | well that is what water will do to servers :p | 14:46 |
natefinch | wwitzel3: shouldn't the converter be running on the machine agent, not the unit agent? | 14:58 |
mfoord | dimitern: for you: https://medium.com/on-coding/programmer-passion-considered-harmful-5c5d4e3a9b28 | 15:03 |
wwitzel3 | natefinch: probably? it seems to restart just fine when the host ports are updated. | 15:06 |
wwitzel3 | natefinch: but I guess we won't be able to issue machine level updates in a unit level watcher. | 15:06 |
wwitzel3 | natefinch: I will push it up the stack | 15:06 |
natefinch | wwitzel3: cool | 15:08 |
=== kadams54 is now known as kadams54-away | ||
dimitern | mfoord, :) | 15:40 |
mfoord | dimitern: wait, does this mean the upgrade step should be in steps124 not steps123? | 15:41 |
dimitern | abentley, we did wait for the failing test in question to pass | 15:41 |
mfoord | dimitern: would like to know if you think this test is sufficient? | 15:41 |
mfoord | dimitern: https://github.com/voidspace/juju/compare/address-life...voidspace:address-life-upgrade | 15:41 |
mfoord | dimitern: TestIPAddressesLife I mean | 15:42 |
dimitern | mfoord, will have a look in a bit | 15:43 |
mfoord | dimitern: ok | 15:43 |
mfoord | dimitern: I'll do a proper PR for it, I think it's done - so long as the test is sufficient | 15:43 |
dimitern | mfoord, so the upgrade steps need to be in steps123 I think | 15:44 |
mfoord | cool, that's where it is | 15:45 |
mfoord | dimitern: in the task for the address lifecycle watcher you state: "add a state lifecycle watcher (which is a notify watcher) monitoring ipaddressesC Life changes." | 15:48 |
mfoord | dimitern: should it be a NotifyWatcher rather than a lifecycleWatcher | 15:48 |
mfoord | dimitern: a lifecycleWatcher is a commonWatcher not a NotifyWatcher AFAICS | 15:48 |
mfoord | dimitern: ah no, my mistake | 15:49 |
mfoord | dimitern: lifecycleWatcher also implements Changes | 15:49 |
dimitern | mfoord, it's the same thing - just implementation detail | 15:49 |
mfoord | making it a NotifyWatcher | 15:49 |
mfoord | yep, understood | 15:49 |
dimitern | mfoord, a notify watcher reports empty changes | 15:50 |
dimitern | mfoord, ok :) | 15:50 |
mfoord | dimitern: AFAICS lifecycleWatcher is currently unused... | 15:53 |
dimitern | mfoord, hmm.. ok so what's behind machine.Watch then? | 15:53 |
mfoord | where's that defined? not in state/machine.go | 15:54 |
mfoord | dimitern: it's an EntityWatcher | 15:55 |
mfoord | or entityWatcher rather | 15:55 |
mfoord | dimitern: do a grep for lifecycleWatcher in the codebase | 15:56 |
mfoord | I'm happy to use it, just sayin'... | 15:56 |
dimitern | mfoord, ok, then it might have changed since I knew that part of the code | 15:57 |
dimitern | mfoord, entity watcher it is then | 15:57 |
mfoord | dimitern: there is a lifecycleWatcher | 15:57 |
mfoord | dimitern: it should probably be deleted if it's unused | 15:57 |
dimitern | mfoord, no actually, wait a sec | 15:58 |
mfoord | I think lifecycleWatcher is what we want | 15:58 |
dimitern | mfoord, there is a whole bunch of lifecycleWatchers | 15:58 |
mfoord | entityWatcher watches for more than just lifecycle changes (I think) | 15:58 |
mfoord | are there? | 15:58 |
mfoord | there's a test that claims there are... | 15:58 |
dimitern | mfoord, yeah - check state/watcher.go | 15:58 |
mfoord | ah, newLifecycleWatcher | 15:59 |
mfoord | case difference, that's why my grep failed | 15:59 |
mfoord | :-) | 15:59 |
mfoord | fair enough | 15:59 |
dimitern | mfoord, yeah :) | 15:59 |
mfoord | just keeping you on your toes... | 15:59 |
dimitern | mfoord, :D | 15:59 |
dimitern | mfoord, an entity watcher would've worked, but it will fire for any changes in the collection, not just life values | 16:00 |
mfoord | yep | 16:01 |
dimitern | mfoord, so I'm looking at the diff for your upgrade step | 16:02 |
mfoord | dimitern: cool | 16:02 |
dimitern | mfoord, what immediately springs to mind is that we should only add a life field with value "alive" to addresses allocated to machines which themselves are still alive, otherwise - it should be "dead" | 16:02 |
mfoord | dimitern: causing us to release the dead addresses | 16:03 |
mfoord | dimitern: cool | 16:03 |
dimitern | mfoord, that's right | 16:03 |
dimitern | mfoord, and only one other issue I could see off hand - other steps usually have a test like TestXYZIdempotent - which runs the step twice to ensure it's ok | 16:04 |
mfoord | dimitern: ok | 16:04 |
mfoord | dimitern: however... | 16:04 |
mfoord | dimitern: if the watcher only gets notified about changes after the watcher starts (i.e. probably not during an upgrade) then the dead addresses will probably never be removed | 16:05 |
mfoord | dimitern: unless you want starting the watcher to check for already dead addresses | 16:05 |
mfoord | dimitern: which it can do | 16:05 |
dimitern | mfoord, well think about it this way - nothing runs before the upgrade is complete, then everything restarts; | 16:06 |
dimitern | mfoord, also, workers watching other entities' life cycle will get an automatic change when the watcher is started, so will go and fetch all dead ips in our case and try to release them | 16:07 |
dimitern | mfoord, therefore, it should all work out eventually I think | 16:09 |
mfoord | dimitern: "workers watching other entities' life cycle will get an automatic change when the watcher is started, so will go an fetch all dead ips in our case and try to release them" | 16:11 |
mfoord | dimitern: why will "other entities life cycle" watchers cause all dead ips to be fetched? | 16:12 |
mfoord | dimitern: why would other watchers cause ips to be fetched *at all* anyway, let alone dead ones | 16:12 |
dimitern | mfoord, :) ok, I started out speaking in general then moved to our specific case | 16:12 |
mfoord | dimitern: do you mean a lifecycleWatcher *is* notified of new dead entities when it starts? | 16:12 |
dimitern | mfoord, I mean our ips watcher will do the same as the other life watchers | 16:13 |
dimitern | mfoord, and the worker which uses the watcher likewise - just react when a change happens (our worker reacts by getting all Dead ips and releasing them one by one) | 16:14 |
mfoord | dimitern: ah, you mean "changes" includes *all dead entities* | 16:15 |
mfoord | dimitern: so the next change (i.e. the restart) will include them | 16:15 |
dimitern | mfoord, well technically the "changes" are always empty for notify watchers, they just signify "something has changed" | 16:15 |
mfoord | ok | 16:16 |
dimitern | mfoord, so you'll need to go fetch the actual docs to see what changed, in our case - we'll just fetch all Dead ips | 16:16 |
mfoord | right | 16:16 |
mfoord | I (wrongly) assumed that the notification would include the dead ips | 16:16 |
mfoord | so that ones that *start dead* would be missed | 16:16 |
mfoord | that's fine then | 16:16 |
dimitern | yep | 16:17 |
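Putting the behaviour just agreed on into a sketch (the interfaces are hypothetical stand-ins, not juju's real watcher/worker API): because a notify watcher delivers an initial event on startup and the handler re-fetches all Dead addresses on every event, addresses that were already dead before the worker started are still picked up.

```go
package main

import "fmt"

// Hypothetical interfaces standing in for juju's watcher/state types.
type NotifyWatcher interface {
	Changes() <-chan struct{} // events carry no payload: "something changed"
}

type addressState interface {
	DeadIPAddresses() ([]string, error)
	ReleaseIPAddress(addr string) error
}

// releaserLoop reacts to every notification, including the initial one
// sent when the watcher starts, by fetching ALL Dead addresses and
// releasing them, so nothing that was already dead at startup is missed.
func releaserLoop(w NotifyWatcher, st addressState, stop <-chan struct{}) error {
	for {
		select {
		case <-stop:
			return nil
		case <-w.Changes():
			dead, err := st.DeadIPAddresses()
			if err != nil {
				return err
			}
			for _, addr := range dead {
				if err := st.ReleaseIPAddress(addr); err != nil {
					return fmt.Errorf("releasing %s: %v", addr, err)
				}
			}
		}
	}
}

func main() {} // wiring into a worker runner omitted
```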
mfoord | dimitern: http://pyfound.blogspot.co.uk/2015/02/john-pinner.html | 16:18 |
mfoord | dimitern: lovely guy, I worked with him on EuroPython for two years (when it was in Birmingham) and PyCon UK since pretty much the start | 16:18 |
dimitern | mfoord, I see - he looks like a nice guy | 16:21 |
mfoord | he was :-/ | 16:22 |
dimitern | mfoord, those sort of occasions are never welcome or expected :/ | 16:22 |
mfoord | yeah, we hoped he'd make it to the next PyCon UK - but wasn't to be | 16:22 |
dimitern | was he sick for some time? | 16:25 |
mfoord | dimitern: he had cancer, but he was expected to last longer | 16:26 |
dimitern | mfoord, oh I see.. terrible | 16:27 |
abentley | dimitern: That's a start, but we still don't have something we can release. | 16:33 |
dimitern | mfoord, you've got a review on the upgrade step btw | 16:33 |
dimitern | abentley, was it because I enabled the job or for some other reason? | 16:34 |
abentley | dimitern: I don't know. What revision was it? | 16:35 |
mfoord | dimitern: thanks | 16:35 |
dimitern | abentley, 2448 was the last one I saw for 1.23, 2449 - for master | 16:35 |
dimitern | abentley, and for both of these I manually restarted the restricted network job, as commented on the bug | 16:36 |
abentley | dimitern: So you landed cbaacb83e10f7757362f06e11c392ad3388ddf23 and 33de6a0b87bb7db23749c9a8a4f1a17dbb72f014 ? | 16:36 |
dimitern | abentley, wallyworld landed the cbaacb8 for me | 16:37 |
dimitern | abentley, I forced the other one to land as it was fixing a regression around kvm containers not being addressable under maas | 16:37 |
abentley | dimitern: So functional-restricted-networks failed for 2448. Was that the test you were watching? | 16:38 |
redelmann | hi! | 16:40 |
redelmann | one simple question! | 16:40 |
dimitern | abentley, it did fail initially because the instance had termination protection on it | 16:40 |
redelmann | should "juju scp -r service/0:/path/to/dir/ ." work? | 16:40 |
redelmann | it says: error: flag provided but not defined: -r | 16:41 |
dimitern | abentley, after that I restarted it manually with the same rev 2448 - http://juju-ci.vapour.ws:8080/job/functional-restricted-network/1317/ | 16:41 |
abentley | dimitern: I don't see any runs that passed. just 1313, 1314, 1315. | 16:41 |
redelmann | sorry | 16:42 |
abentley | 1316 was against 2449. | 16:42 |
redelmann | i forget scp -- -r | 16:42 |
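(For the record, the working form redelmann is referring to is `juju scp -- -r service/0:/path/to/dir/ .`: the extra scp flags go after the `--` separator so juju's own flag parsing doesn't reject them with "flag provided but not defined".)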
dimitern | abentley, last two - 1316 and 1317 passed http://juju-ci.vapour.ws:8080/job/functional-restricted-network/ | 16:42 |
dimitern | abentley, yes, and 1317 was against 2448 | 16:43 |
abentley | dimitern: You can't run against a previous build-revision when the next build-revision has started. | 16:43 |
abentley | dimitern: The streams for 1317 were for 2449. | 16:44 |
abentley | dimitern: Because we overwrite the streams in the "publish-revision" step. | 16:44 |
dimitern | abentley, ok, I see | 16:44 |
dimitern | abentley, I wasn't going to do it anyway, jumping the line like this, but it was sitting there since yesterday | 16:45 |
dimitern | abentley, any idea why build-revision was disabled in the first place? | 16:46 |
dimitern | abentley, I suspect sinzui left it so I can connect to the ec2 instance to investigate the networking issue | 16:47 |
abentley | dimitern: That would make sense. | 16:47 |
dimitern | abentley, ok, so there was a bit of a stir with the reports due to my intervention, but I don't believe I did anything too bad - as the next rev is tested (already underway) things will fall into place | 16:49 |
abentley | dimitern: The rev currently being tested is master, not 1.23. | 16:51 |
dimitern | abentley, so then to trigger a re-test of 1.23 another change needs to land? | 16:52 |
abentley | No, we can do it manually. | 16:52 |
dimitern | abentley, ok, good | 16:52 |
dimitern | abentley, I hope that won't be too much of a trouble | 16:53 |
abentley | dimitern: But by default, juju-ci will want to test 772cb769e6277403f0f6ac6e41241a52d102badc | 16:53 |
abentley | dimitern: So I'll have to disable build-revision. | 16:54 |
dimitern | abentley, to get 1.23 ahead of master? | 16:54 |
abentley | dimitern: Yes, so that I have a window when it's not testing where I can start a manual re-test. | 16:55 |
dimitern | abentley, ok, that sounds good - i won't touch anything more today | 16:55 |
mup | Bug #1432759 was opened: Transient error on status while running deployments via quickstart <juju-core:New> <https://launchpad.net/bugs/1432759> | 18:01 |
perrito666 | ericsnow_: I might have found a small bug in http://reviews.vapour.ws/r/1172/ | 18:52 |
perrito666 | I added an issue | 18:52 |
ericsnow_ | perrito666: thanks | 18:52 |
mrpoor | hi | 19:00 |
natefinch | Hello :) | 19:03 |
=== kadams54_ is now known as kadams54-away | ||
* natefinch just got bit by that whole "This thing says it's State but it's really not State-state" | 19:21 | |
ericsnow_ | perrito666: thanks for catching that; I've updated the patch | 19:21 |
perrito666 | ericsnow_: cool :D | 19:22 |
perrito666 | natefinch: well it is A State, not The State | 19:22 |
davechen1y | natefinch: imagine me grinning, with that kind of evil, sadistic grin | 20:11 |
natefinch | davechen1y: do you have another kind of grin? | 20:11 |
davechen1y | sometimes it looks slightly less creepy | 20:12 |
* perrito666 was going to mock natefinch when he found himself between 3 different kinds of state | 20:12 | |
davechen1y | if it's dark | 20:12 |
davechen1y | and i'm not looking directly at you | 20:12 |
perrito666 | ok ok, we might need to rename a few states | 20:13 |
perrito666 | or make grep a lot smarter :p | 20:13 |
=== kadams54 is now known as kadams54-away | ||
natefinch | wwitzel3: gotta run, kids are going crazy. I'll try to get a test run on my code later... right now, enabling the worker I had made makes my local provider fail before it even creates the ~/.juju/local/ folder... which I would have hoped would be impossible, but evidently not. | 20:39 |
wwitzel3 | rgr | 20:48 |
thumper | niemeyer: ping | 20:55 |
ericsnow_ | could I get a review on http://reviews.vapour.ws/r/1173/ | 20:56 |
mattyw | thumper, as you're around I have a fairly low priority ping for you - if you're busy feel free to ignore | 21:05 |
mattyw | calling it a night all, nighty night | 21:16 |
=== kadams54-away is now known as kadams54 | ||
=== kadams54 is now known as kadams54-away | ||
perrito666 | menn0: there are errors displaying your last pr | 21:55 |
menn0 | perrito666: I know. it's a long running feature branch. all the changes have already been reviewed. you can ignore. I just need to get the merge commit in. | 21:55 |
menn0 | perrito666: while we're talking... | 21:56 |
menn0 | perrito666: if you run the state tests on current master do you see this too? : | 21:56 |
menn0 | $ go test ./state | 21:56 |
perrito666 | menn0: dont look at me, my reviews carry the weight of a feather | 21:56 |
menn0 | # github.com/juju/juju/state | 21:56 |
menn0 | state/unit.go:1787: assignment count mismatch: 2 = 1 | 21:56 |
menn0 | # github.com/juju/juju/state | 21:56 |
menn0 | state/unit.go:1787: assignment count mismatch: 2 = 1 | 21:56 |
menn0 | FAIL github.com/juju/juju/state [build failed] | 21:56 |
menn0 | perrito666: :) | 21:56 |
perrito666 | menn0: lemme look | 21:57 |
ericsnow_ | menn0: that cleared up for me when I ran godeps -u | 21:57 |
menn0 | ericsnow_: ok, let me try that. I have a hook which runs godeps automatically... | 21:57 |
menn0 | ericsnow_: ...which apparently didn't work. all good now. | 21:58 |
menn0 | perrito666: never mind. thanks anyway. | 21:58 |
ericsnow_ | menn0: cool :) | 21:58 |
perrito666 | menn0: ok, I was running them | 21:58 |
menn0 | perrito666: if they're running then you're not seeing the problem. the compile fails straight away. | 21:59 |
perrito666 | menn0: ok | 21:59 |
perrito666 | wallyworld: you make me feel so roman when you call me horatio :p | 22:00 |
perrito666 | wallyworld: ping me when you are around pls | 22:20 |
wallyworld | perrito666: in meeting, talk soon | 22:20 |
perrito666 | wallyworld: no hurry at all | 22:20 |
* perrito666 tries to get more upload bandwidth and discovers that his internet provider punishes customer loyalty | 22:45 |
davechen1y | in related news | 22:48 |
davechen1y | a googler i know has got himself locked out of his GCE account | 22:48 |
davechen1y | because his daemon is consuming all the request quota | 22:48 |
davechen1y | if you've used AWS | 22:48 |
davechen1y | you know that feel | 22:48 |
perrito666 | davechen1y: I have never been there, apparently I haven't used it enough | 22:50 |
perrito666 | also, what happened to your nickname | 22:51 |
davechen1y | internets | 22:52 |
wallyworld | perrito666: hi, free now | 22:54 |
perrito666 | wallyworld: segfault, now was not allocated | 22:55 |
perrito666 | sorry, that was a really bad joke | 22:55 |
wallyworld | groan | 22:55 |
davechen1y | yellow card | 22:57 |
perrito666 | oh cmon, really? I didn't even hit the guy | 22:58 |
davechen1y | let's go to the video replay | 23:00 |
davechen1y | 09:55 < perrito666> wallyworld: segfault, now was not allocated | 23:00 |
davechen1y | 09:55 < perrito666> sorry, that was a really bad joke | 23:00 |
davechen1y | 09:55 < wallyworld> groan | 23:00 |
davechen1y | the referees decision is final | 23:00 |
perrito666 | oh, he is just acting, Ill complain to the league | 23:01 |
ericsnow_ | davechen1y: lol | 23:02 |
perrito666 | ok so, apparently my ISP will raise my bill by 30% on average every 6 months if I stay for 1.5 more years | 23:05 |
perrito666 | is this a common practice over the world? | 23:06 |
menn0 | perrito666: not in my experience | 23:10 |
* perrito666 shops around all the possible ISPs and finds everyone has the same behavior, that is... mad | 23:11 |
perrito666 | so it is more convenient for me to actually drop the service every 6 months and get a new membership | 23:11 |
* perrito666 calls to drop the service and get a new connection | 23:12 | |
ericsnow_ | perrito666: PTAL http://reviews.vapour.ws/r/1173/ | 23:14 |
perrito666 | ericsnow_: going | 23:14 |
ericsnow_ | perrito666: thanks | 23:14 |
menn0 | perrito666: in my experience, if that kind of thing happens, you can usually get the deal that new subscribers get when you threaten to leave | 23:15 |
perrito666 | menn0: trying to, Ill call after ericsnow_ 's patch is reviewed | 23:16 |
ericsnow_ | wallyworld: I have a followup to your suggestion to comment individual constants: http://reviews.vapour.ws/r/1173/ | 23:22 |
wallyworld | ericsnow_: will look after meeting | 23:23 |
ericsnow_ | wallyworld: ta | 23:23 |
perrito666 | ericsnow_: lemme know if my comment makes sense | 23:27 |
ericsnow_ | perrito666: k | 23:27 |
perrito666 | I was a bit lazy to re-write the whole thing | 23:27 |
perrito666 | anyone have anything else to review? it seems I'll be on hold for a long time and the music is quite soothing, ideal for reviewing | 23:38 |
ericsnow_ | perrito666: I've updated that patch; see if it looks okay to you now | 23:39 |
perrito666 | ericsnow_: creative :) | 23:43 |
ericsnow_ | perrito666: hopefully easier to follow :) | 23:44 |
perrito666 | ericsnow_: there you go, fix then shipped | 23:52 |
ericsnow_ | perrito666: thanks | 23:53 |
axw_ | wallyworld: FYI, http://reviews.vapour.ws/r/1176/ -- would be good if you could take look later | 23:55 |
wallyworld | sure | 23:55 |
perrito666 | ericsnow_: btw, add please's and sorry's wherever it might apply, I am not being impolite, my eyes are just tired | 23:56 |
ericsnow_ | perrito666: :) | 23:56 |