/srv/irclogs.ubuntu.com/2015/10/22/#juju-dev.txt

wallyworldmenn0: sorry, free now00:15
menn0wallyworld_: sorry, I didn't see you01:12
wallyworld_np01:13
menn0wallyworld_: I was wanting to know about how tools are stored in the controller01:13
wallyworld_sure, maybe a hangout01:13
menn0wallyworld_: ok cool01:13
wallyworld_https://plus.google.com/hangouts/_/canonical.com/tanzanite-stand01:14
menn0wallyworld_: i'm there now01:15
wallyworld_me too01:15
wallyworld_hmmm01:15
thumperhere is a personal itch being scratched: http://reviews.vapour.ws/r/2970/02:47
thumpermenn0 ^^02:49
menn0thumper: i'm looking02:54
thumpermenn0: and the related juju branch http://reviews.vapour.ws/r/2971/02:56
menn0thumper: looks cool. so in the case of Juju there would be a single well known location for the aliases file that people could monkey with?02:56
thumpermenn0: and it works surprisingly well :)02:56
thumpermenn0: see the second one :)02:56
thumpermenn0: my testing file: http://paste.ubuntu.com/12891561/02:58
thumperwell, not testing, but the one I started hacking up02:58
menn0thumper: it's cool! I guess we can achieve almost the same thing with shell aliases but the level of integration is better with your PR02:59
thumpermenn0: I expect to create an aliases command at some stage...03:00
thumperlike I did with 'bzr alias'03:00
thumperto add and remove aliases from the file through the CLI03:00
thumperthe command could be automagically added by the supercommand if it has been registered with an alias file location03:01
menn0thumper: yep that would be nice03:02
thumperthe interesting bit would be attempting to keep some stability in the file structure03:02
thumperso we don't lose user comments or whitespace03:02
thumperthat'd be easy enough :)03:02
thumperbut later03:02
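The alias mechanism thumper describes, sketched as a bzr-style file loader. The "name = expansion" format and the ~/.juju/aliases path are assumptions for illustration only; they are not taken from the branches linked above.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// loadAliases reads a bzr-style aliases file in which each non-comment line
// has the form "name = expanded command". Blank lines and lines starting
// with '#' are skipped.
func loadAliases(path string) (map[string][]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	aliases := make(map[string][]string)
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		if len(parts) != 2 {
			continue
		}
		aliases[strings.TrimSpace(parts[0])] = strings.Fields(parts[1])
	}
	return aliases, scanner.Err()
}

func main() {
	// Hypothetical location; the real branch decides where the
	// supercommand's alias file lives.
	aliases, err := loadAliases(os.ExpandEnv("$HOME/.juju/aliases"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "no aliases loaded:", err)
		return
	}
	fmt.Println(aliases)
}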
rick_h_menn0: ping, got time to chat on these comments?03:03
menn0rick_h_: sure. I wanted to set up a call to get the spec sorted more quickly. now is good too though03:04
rick_h_menn0: cool03:04
rick_h_menn0: https://plus.google.com/hangouts/_/canonical.com/rick?authuser=1 *adjust authuser*03:05
* thumper is done04:40
thumperlaters04:40
wallyworldaxw: small one if you have a moment http://reviews.vapour.ws/r/2972/06:53
axwwallyworld: sure, soon as bootstrap stops flooding my connection06:55
wallyworldno rush06:55
axwwallyworld: why not just log it in APIHostPortsSetter.SetAPIHostPorts? rather than sending the results all the way back up only to be logged07:02
wallyworldyeah, probably should have07:04
wallyworldwas logging where the original was done07:04
wallyworldi'll change it07:04
axwwallyworld: thanks07:04
wallyworldaxw: well that's much simpler. sigh. so stupid07:16
axwwallyworld: cool :)07:16
axwlooking07:16
axwwallyworld: LGTM07:17
wallyworldta07:17
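The change axw suggests, as a minimal sketch: log the addresses at the point where they are applied rather than returning them up the call chain only to be logged. The types below are simplified stand-ins, not the real juju worker interfaces.

package main

import (
	"fmt"
	"log"
)

// APIHostPortsSetter is a simplified stand-in for the interface being
// discussed; the point is only where the logging happens.
type APIHostPortsSetter struct {
	set func(servers [][]string) error
}

// SetAPIHostPorts applies the addresses and logs them here, so callers no
// longer need the result handed back just for logging.
func (s APIHostPortsSetter) SetAPIHostPorts(servers [][]string) error {
	if err := s.set(servers); err != nil {
		return err
	}
	log.Printf("updated API host ports to %v", servers)
	return nil
}

func main() {
	setter := APIHostPortsSetter{set: func([][]string) error { return nil }}
	if err := setter.SetAPIHostPorts([][]string{{"10.0.0.1:17070"}}); err != nil {
		fmt.Println(err)
	}
}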
fwereadewaigani, ping07:53
fwereade(and menn0 if you're around as well)07:54
menn0fwereade: i happen to be around07:54
fwereadewaigani, menn0: I think it comes down to "why can't we destroy a system when it's hosting environments?"07:54
waiganifwereade: because we have a txn assert that doesn't allow this07:55
waiganifwereade: are you asking why we don't remove the assert?07:56
menn0fwereade: because if the system goes the API server goes and we don't want the API server to go before the other envs are gone07:56
fwereadewaigani, or at least make it conditional?07:56
fwereademenn0, that's the purpose of dying/dead07:56
fwereademenn0, you stay dying while you clean up07:56
menn0fwereade: i was guessing somewhat07:56
fwereademenn0, waigani: once the things that depend on you have gone away, you can become dead, and get cleaned up07:56
waiganifwereade: what if one of the environs fails to be cleaned up?07:57
waiganihow do we back out?07:57
fwereadewaigani, then the system stays dying, that env stays dying, and hopefully we report what's happening07:57
fwereadewaigani, we don't07:57
waiganiokay07:58
waiganifwereade: when you say make it conditional...07:58
fwereadewaigani, (am I missing something? what can/should we back out if one env won't die?)07:58
menn0fwereade: FTR i'm going to need some kind of environment mode for environment migration07:59
menn0fwereade: I was thinking we could use the same mode07:59
menn0field07:59
menn0(the field is currently called migration-mode in the spec though)07:59
fwereadewaigani, re conditional, I mean that we have two use cases -- destroy-if-empty and destroy-with-contents, if you like07:59
fwereadewaigani, it doesn't particularly faze me to have those two cases differ by one assert08:00
fwereademenn0, it feels to me like they're very different08:00
fwereademenn0, no argument against migration-mode08:00
waiganiright so we have two destroy paths, one asserting no envs, the other without the assert08:00
waiganifwereade: ^?08:00
menn0fwereade: agreed. although some of the modes will also need to block the same kinds of transactions.08:01
fwereadewaigani, yeah, I think one of them is set-dying-if-refcount-0 and the other is just set-dying08:01
waiganiokay. That will also solve the problem for me.08:02
waiganias to backing out, as long as we can handle the situation of several envs in different states of life with a dying system that's fine08:02
fwereademenn0, expand a bit please?08:03
waiganioh no wait08:03
menn0fwereade: one example is when an environment is being migrated out of a system, we want to block provisioning of resources for it08:04
menn0fwereade: I guess that could/will also be done at the API server level08:04
fwereademenn0, I almost think it has to be?08:05
waiganisorry, typing out loud. So my original concern was the same as menn0's above. That if we set the system to dying first, we could get zombie resources.08:05
fwereadewaigani, IMO not if we do it properly?08:05
fwereadewaigani, from my perspective "dying" literally means "the user asked us to destroy this"08:06
waiganifwereade: right, so a dying system should be considered just as reliable as an alive one, you just can't provision new resources?08:06
menn0fwereade: i'm definitely planning to lock down the API for the environment during migration.08:07
menn0fwereade: I had it in my head that we'd also do it at the txn level08:07
fwereadewaigani, yeah, certain changes to dying entities are no longer appropriate, but generally they should continue to function08:07
menn0fwereade: but that's probably overkill08:07
fwereademenn0, I *think* that if we've got a solid apiserver-level block then nothing else will be touching state and we're safe08:08
fwereademenn0, so long as we do a resume-all08:08
menn0fwereade: there's also the added protection of migrations aborting early if anything is provisioning when the migration is initiated08:09
fwereademenn0, right, I'm not strongly against some txn-level mechanism to protect migrations08:10
waiganifwereade: so just to be clear, we then don't care about the race in the case where we're destroying the system and any environs. I.e. someone adds an env as I destroy everything - that env also gets a bullet?08:11
menn0fwereade: by resume-all you mean a mgo/txn Runner.ResumeAll?08:11
fwereademenn0, yeah08:11
fwereadewaigani, I think so, yes08:11
fwereadewaigani, races where someone wins aren't such a worry, it's races that leave us inconsistent that keep me up at night08:12
waiganihmm.. right I see08:12
fwereadewaigani, eg an alive system that's quietly nuking all its tenants, or a dying system accepting new ones08:12
waiganilol, fair enough, I see the difference08:13
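The two destroy paths fwereade describes, sketched in mgo/txn terms. The collection, document id and field names are illustrative assumptions, not the real juju/state schema: destroy-if-empty adds one assert on a hosted-environment refcount, destroy-with-contents omits it.

package main

import (
	"gopkg.in/mgo.v2/bson"
	"gopkg.in/mgo.v2/txn"
)

// destroyControllerOps: destroy-if-empty asserts that no hosted environments
// remain before the controller environment is set to Dying, while
// destroy-with-contents simply sets it to Dying and relies on cleanup to
// drive the hosted environments to Dead first.
func destroyControllerOps(controllerUUID string, destroyHostedEnvs bool) []txn.Op {
	op := txn.Op{
		C:      "environments",
		Id:     controllerUUID,
		Update: bson.D{{"$set", bson.D{{"life", "dying"}}}},
	}
	if !destroyHostedEnvs {
		// set-dying-if-refcount-0: refuse while hosted environments exist.
		op.Assert = bson.D{{"hosted-env-count", 0}}
	}
	return []txn.Op{op}
}

func main() {
	_ = destroyControllerOps("some-controller-uuid", false)
}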
waiganiokay I'm happy, I can move forward with this. What should we do with the environ mode branch? Still useful for migration?08:14
menn0fwereade: just to be clear, you're suggesting a ResumeAll just after the API gets locked down?08:15
fwereademenn0, yes, I think we need that -- right?08:15
fwereadewaigani, might well be, menn0 will be able  to answer more clearly there08:15
menn0waigani: yep, please keep it. we can use much of it when adding the migration-mode field08:15
waiganiokay cool, will do08:16
menn0fwereade: yes I think so. just making sure we're on the same page.08:16
fwereademenn0, I *think* we'd want the ResumeAll, wouldn't we? migration is going to write some docs, and we don't want it picking up incomplete transactions from before08:16
fwereadecool08:16
menn0yep that makes sense08:17
menn0I hadn't considered that yet but it makes sense08:17
menn0otherwise something could start changing the env when we thought it was stable08:17
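And the ResumeAll step menn0 and fwereade agree on, as a minimal sketch assuming a local mongo session and guessed database/collection names; in juju the transaction runner comes from state.

package main

import (
	"log"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/txn"
)

// resumeAll flushes any partially-applied transactions. The idea: once the
// API server is locked down for a migration, a ResumeAll ensures no
// half-finished change surfaces later while the environment is assumed to be
// stable.
func resumeAll(session *mgo.Session) error {
	runner := txn.NewRunner(session.DB("juju").C("txns"))
	return runner.ResumeAll()
}

func main() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	if err := resumeAll(session); err != nil {
		log.Fatal(err)
	}
}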
menn0wallyworld: re the API address logging change08:20
menn0wallyworld: I noticed you did exactly the same thing which I thought of during dinner :)08:21
menn0wallyworld: just log the addresses inside SetAPIHostPorts08:21
menn0much simpler08:21
dooferladdimitern, jam, fwereade, TheMue, voidspace: hangout time!09:00
TheMuedooferlad: ouch, thx, omw09:01
dimiterndooferlad, omw09:01
jamomw09:01
voidspacedooferlad: omw09:03
dooferladdimitern, frobware, voidspace: hangout time!10:04
fwereadejam, also ^10:05
dimiternooh.. forgot about that one, omw10:05
voidspacefrobware: actually I might be able to reproduce the "machine agent never upgrades" problem - going from 1.20 to 1.24.610:31
voidspacefrobware: going to see if it really is reproducible (but with debug logging on)10:32
voidspaceand if it happens with 1.24.710:32
frobwarevoidspace, did you deploy different charms this time?10:43
voidspacefrobware: nope, not sure what was different10:43
frobwarevoidspace, timing related? <shrug>?10:44
voidspacefrobware: in the last deploy (currently bootstrapping again) I saw machine-0 flatline at 100% CPU constant10:44
voidspacefrobware: and machine-1 never upgraded the agent10:44
voidspacefrobware: possibly10:44
voidspace(possibly timing related I mean)10:44
voidspacefrobware: lots of errors in the logs, but nothing *useful*10:44
voidspacefrobware: so will repeat with debug logging on10:45
frobwarevoidspace, dmesg - anything interesting in there? oom killer, et al?10:45
voidspacefrobware: ParseError10:45
voidspaceah no10:45
voidspacedmesg10:45
voidspaceI read that as "debug"10:45
voidspacefrobware: didn't look, will check next time10:45
voidspacefrobware: if they have lots of units there will be more load on the API server, so some symptoms may be different10:46
frobwareright10:46
voidspacefrobware: ah, so this time the mysql/0 unit reports the newer version - but the *machine agent* is still reporting the older version11:07
voidspacefrobware: no 100% CPU usage this time11:07
voidspacefrobware: maybe it did happen before and I just missed it (seeing the unit agent with the upgraded version)11:08
voidspacefrobware: I'll spelunk the logs and see what I can work out11:08
alexisbdimitern, frobware pint13:13
alexisbpin13:13
alexisbping13:13
frobwarealexisb, I'll take your first offer. :)13:14
alexisblol13:15
alexisb6am and look where my mind goes, scary13:15
alexisbfrobware, can you and dimiter jump on a hangout?13:15
frobwarealexisb, sure13:15
alexisbI want to chat about hardware so we can get things rolling13:15
frobwarealexisb, ah I just started a doc on that13:16
alexisbhttps://plus.google.com/hangouts/_/canonical.com/andy-alexis13:16
dimiternalexisb, hey, in a call, bbiab13:19
mupBug #1508923 opened: Support for Azure Resource Groups <juju-core:New> <https://launchpad.net/bugs/1508923>13:29
alexisbdimitern, frobware and I chatted; you are all good, he has it handled13:31
dimiternalexisb, awesome!13:33
frobwaredooferlad, ping - can we HO for a bit regarding your current h/w13:34
dooferladfrobware: sure13:34
frobwaredooferlad, let's use the standup HO13:35
mupBug #1500760 changed: all juju subcommands need to respect -e env-name flag <network> <usability> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1500760>14:02
frobwarevoidspace, dimitern: see my recent doc invite regarding h/w - would appreciate if you could fill out your sections so that we can complete today and send to rick, et al.14:09
voidspacefrobware: ok14:45
frobwarevoidspace, thx14:45
=== perrito667 is now known as perrito666
=== Odd_Blok1 is now known as Odd_Bloke
=== jcsackett_ is now known as jcsackett
=== Ursinha_ is now known as Ursinha
natefinchfwereade: (or anyone else) anyone know why my worker would be dying with permission denied constantly?  Looks like it's the watcher returning an error on Next15:11
natefinch- http://pastebin.ubuntu.com/12894707/15:11
rogpeppejam: ping15:12
=== frobware_ is now known as frobware
rogpeppenatefinch: do you know what the status is re: merging feature branches?15:39
natefinchrogpeppe: no idea15:45
rogpeppenatefinch: any idea who might know?15:46
frobwarevoidspace, do you have additional h/w requirements - I noticed you left that section empty, so just double-checking...15:46
voidspacefrobware: well, we don't know yet do we - we haven't specced what we'll need for a sufficient dev environment15:47
frobwarevoidspace, james and dimitern are basing it on 4 machines, per the bundle spec15:47
voidspace"The openstack-base bundle indicates that 4 machines are required, each with two NICs and 2 disks however it is questionable whether developers will initially need to deploy the full bundle"15:48
voidspacefrobware: if we need the full spec, I'll need two NIC cards, 2 more disks, plus two more machines15:48
natefinchkatco: what's the process for merging feature branches?  rogpeppe is asking15:49
frobwarevoidspace, OK, maybe that needs more explanation. For networking I don't think we would need 2 disks.15:49
katconatefinch: $$merge$$15:49
voidspacefrobware: but we'll need the four machines?15:49
rogpeppekatco: so it's ok to merge a feature branch at any time, assuming it's blessed?15:49
voidspacefrobware: my requirements are basically identical to dooferlad15:49
katcorogpeppe: yes, as long as tip of branch is blessed15:50
voidspacefrobware: as my existing hardware is very similar in spec15:50
frobwarevoidspace, I say yes.15:50
rogpeppekatco: great!15:50
frobwarevoidspace, beware that is NUCs are not AMT, so you would need some kind of PDU to power on/off.15:50
frobwares/is/his15:50
voidspacefrobware: I have a PDU15:50
frobwarevoidspace, viola!15:51
voidspacefrobware: but the existing hardware table only had space for machines... :-)15:51
voidspacefrobware: so that's cool15:51
frobwarevoidspace, bleh15:51
voidspacefrobware: I assumed it was on purpose :-)15:51
frobwarevoidspace, feel free to add more characters :)15:51
frobwarevoidspace, spec and buy what we believe we will need to deliver15:52
frobwarevoidspace, thx for the update; this is a strawman proposal anyway - just wanted to get the ball rolling today15:54
voidspacefrobware: so far - one in three upgrades to 1.24.6 have succeeded15:54
voidspacefrobware: one in one upgrades to 1.24.7 have succeeded15:54
voidspacetrying again with 1.24.715:55
frobwarevoidspace, interesting15:55
voidspaceI also have debug logs from a failed one15:55
voidspace(by failed I mean that machine agent stayed on 1.20 - everything still *appeared* to work.)15:55
voidspaceyeah, weird15:55
voidspacefrobware: and will be tricky if it's actually a bug in 1.2015:56
frobwarevoidspace, ian mentioned that they had tried 1.24.7 in the RT ticket but I didn't see any mention of that explicitly in the bug15:56
voidspacefrobware: not in that original report definitely15:57
frobwarevoidspace, he mentioned it in passing, but again, worth confirming.15:57
voidspacefrobware: looking at the rt now15:58
voidspaceAh, Peter has tried 1.24.715:58
voidspacehe has logs15:58
voidspaceperrito666: ping15:59
perrito666voidspace: pong16:00
voidspaceperrito666: are you still looking at bug 150786716:00
mupBug #1507867: juju upgrade failures <canonical-bootstack> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1507867>16:00
voidspaceperrito666: the failed upgrade rt?16:00
perrito666voidspace: I am, I did forget to own it this am16:01
perrito666voidspace: do you have anything to add to it?16:01
voidspaceperrito666: cool16:01
voidspaceperrito666: not really, I can reproduce an issue - when I upgrade from 1.20 to 1.24 *most* of the time (but not always) the machine agent fails to upgrade version16:02
voidspaceperrito666: the unit agent reports the correct new version, but not the machine agent16:02
voidspaceperrito666: however, I can't reproduce the bug as described (missing address or corrupted db)16:02
voidspaceperrito666: this is with a deployed mongo unit and ignore-machine-addresses on16:03
perrito666voidspace: maybe you can help me a bit, from reading at the logs, It seems to me that the juju binary in use is in fact the old one16:03
voidspaceperrito666: it would be weird for the machine agent and unit agent to be from different binaries16:04
voidspacebut that's what status is reporting16:05
perrito666errors in the logs correspond to older versions than the one supposedly running16:05
perrito666voidspace: do you have that env running?16:05
voidspaceperrito666: no, my *current* env succeeded16:06
voidspaceperrito666: I'll redo it (takes about 15 - 20 mins) and *usually* fails16:06
voidspaceperrito666: I'll report back shortly16:06
perrito666voidspace: appreciated16:06
perrito666a ps faux will shed some light16:06
ericsnowkatco, natefinch, wwitzel3: ptal http://reviews.vapour.ws/r/2930/16:14
wwitzel3ericsnow: looking16:17
ericsnowwwitzel3: ta16:17
cheryljsinzui, mgz, have we done long jump upgrade tests from 1.18.* to 1.24.7?17:18
sinzuicherylj: no, it isn't possible, go to 1.20, then to 1.24.17:19
cheryljsinzui: is 1.18->1.22->1.24 ok?17:21
sinzuicherylj: I don't have my table about, but I don't think 1.18 will accept anything but 1.20.x17:22
sinzuicherylj: if I wasn't busy I would just replay the upgrade tests17:22
sinzuiupgrade steps17:22
cheryljsinzui: np, I can get with you a bit later on it17:23
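The upgrade gating sinzui describes, as a hypothetical check: 1.18 only accepts an upgrade to 1.20.x, so a direct jump to 1.22 or 1.24 is refused. The rule set below is an assumption based on this conversation, not juju's actual upgrade validation code.

package main

import "fmt"

type vers struct {
	major, minor int
}

// upgradeAllowed encodes the rule described above: a 1.18 agent only accepts
// an upgrade to 1.20.x, so reaching 1.24.x means stepping through 1.20.
func upgradeAllowed(from, to vers) bool {
	if from == (vers{1, 18}) {
		return to == vers{1, 20}
	}
	// Assume later series accept any forward jump within the same major.
	return to.major == from.major && to.minor >= from.minor
}

func main() {
	fmt.Println(upgradeAllowed(vers{1, 18}, vers{1, 24})) // false: must go via 1.20
	fmt.Println(upgradeAllowed(vers{1, 18}, vers{1, 20})) // true
	fmt.Println(upgradeAllowed(vers{1, 20}, vers{1, 24})) // true
}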
perrito666voidspace: any news?17:25
perrito666sinzui: I am a bit confused on why this bug https://bugs.launchpad.net/juju-core/+bug/1497301 is in the top list of 1.25 http://reports.vapour.ws/releases/top-issues?_charset_=UTF-8&__formid__=deform&previous_days=7&issue_count=20&update=update#1.2517:34
mupBug #1497301: mongodb3  SASL authentication failure <ci> <mongodb> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1497301>17:34
sinzuiperrito666: the bug happens so often in the single test we run that it dominates the count of bug frequency17:35
perrito666sinzui: but mongodb3?17:36
perrito666I have this sensation that I missed something17:36
sinzuiperrito666: http://reports.vapour.ws/releases/issue/55fc1a67749a5674698af639 shows every occurrence. It just happens all the time for the run-unit-tests-mongodb3 job, which core asked us to test17:36
perrito666this is on our feature branch?17:37
sinzuiperrito666:17:37
sinzuiperrito666: no. this test is run for every revision in every branch. The host we run the unit tests on has mongodb 317:38
perrito666ok, didn't know that :)17:38
sinzuicherylj: bad news. Juju doesn't support 1.25.0. I think the tests were written to keep the juju version in devel. I will report the bug in a few minutes17:39
sinzuicherylj: https://bugs.launchpad.net/juju-core/+bug/1509032 is super important for 1.25.017:45
mupBug #1509032: Juju doesn't support is own version of 1.25.0 <ci> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1509032>17:45
perrito666voidspace: I need to relocate, if you get to reproduce the error please get me more info :)17:46
mupBug #1509032 opened: Juju doesn't support is own version of 1.25.0 <ci> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1509032>17:47
cheryljsinzui: looking17:47
cheryljoh god, this is just a horribly wrong test17:50
cheryljsinzui:  was that the only failing test?17:52
mupBug #1509032 changed: Juju doesn't support is own version of 1.25.0 <ci> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1509032>17:53
sinzuicherylj: The bug points to 2 other status tests that failed in my three tries. I don't see a 1.25.0 connection to the failures17:54
mgzcherylj: see 5191 on jenkins github-merge-juju job17:54
cheryljk17:54
sinzuicherylj: mgz: http://ci-master.vapour.ws:8080/view/Juju%20Ecosystem/job/github-merge-juju/5192/consoleText is better because it isn't a victim of bad record mac17:55
cheryljah, thanks17:55
mgzanyway, looks like three real failures, the one in the bug and two status tests with hard-to-read mismatches :)17:56
cheryljokay, I'm going to handle the case of the state test failing.17:56
mupBug #1509032 opened: Juju doesn't support is own version of 1.25.0 <ci> <test-failure> <juju-core:Triaged by cherylj> <https://launchpad.net/bugs/1509032>17:56
cheryljcmars, katco, can you volunteer someone to look at the status failures here:  http://ci-master.vapour.ws:8080/view/Juju%20Ecosystem/job/github-merge-juju/5192/consoleText17:57
mupBug #1509032 changed: Juju doesn't support is own version of 1.25.0 <ci> <test-failure> <juju-core:Triaged by cherylj> <https://launchpad.net/bugs/1509032>17:59
cheryljwwitzel3, ericsnow, natefinch can one of you guys look at these failures?18:00
natefinchcherylj: I can look18:00
cheryljthanks, natefinch.  It's the status failures in this run:  http://ci-master.vapour.ws:8080/view/Juju%20Ecosystem/job/github-merge-juju/5192/consoleText18:00
mupBug #1509032 opened: Juju doesn't support is own version of 1.25.0 <ci> <test-failure> <juju-core:Triaged by cherylj> <https://launchpad.net/bugs/1509032>18:02
voidspaceperrito666: last couple of attempts to repro *failed* (i.e. the upgrade worked - failed to fail)18:21
voidspaceperrito666: and now I'm EOD and off out to Northants Geeks18:22
voidspaceperrito666: when I come back in I may try again as I can do it in front of the TV18:22
natefinchwho the hell writes tests that check 21 lines of textual output?18:24
natefinchobtained: <wall of text>  expected: <wall of text>18:25
natefinchthanks18:25
natefinchcherylj: forgot I have to watch the kids while my wife takes my daughter to a doctor's appointment, so I'll be mostly afk for an hour and a half or so.18:26
cheryljnatefinch: ok, had you found anything yet to hand off?18:28
cheryljomfg, it's a whitespace problem18:30
natefinchcherylj: sounds like you found more than I did18:48
natefinchcherylj: I was about to just diff the before and after on those statuses18:48
marcoceppihello18:48
marcoceppiI ran juju ensure-availability on AWS and now I can't connect to the API from the CLI18:48
natefinchcherylj: ahh yeah I see it... the number of spaces after 1.25.1 in the status.  amazing.18:49
natefinchcherylj: (still not really here ;)18:50
marcoceppihttp://paste.ubuntu.com/12896313/18:50
=== natefinch is now known as natefinch-afk
cheryljnatefinch-afk: no worries, I'm going to fix the status failures in the same patch.18:50
marcoceppidb, jujud-machine-0 are both running on machine 018:51
marcoceppiI'll just ask on the list.19:01
=== urulama_ is now known as urulama
cheryljI need a review so we can move to 1.25.0:  http://reviews.vapour.ws/r/2977/19:19
cheryljcmars, katco natefinch-afk wwitzel3 mgz ^^19:22
katcocherylj: lgtm19:24
sinzuicherylj: spaces broke the status test? ouch19:31
cheryljsinzui: yeah, awesome.19:32
cheryljsinzui: I sort of hacked it so that it won't break if the length of the current version changes again19:33
cheryljbut ideally, we'd fix that test.19:33
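A hypothetical version of the workaround cherylj describes: normalize whitespace before comparing, so the expected status text no longer depends on how wide the version string happens to be. This is not the actual change in r/2977.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// normalizeStatus collapses runs of spaces and trims trailing padding so a
// tabular status output can be compared without depending on column widths,
// which is what broke when the version string grew by one character.
func normalizeStatus(s string) string {
	ws := regexp.MustCompile(` +`)
	lines := strings.Split(s, "\n")
	for i, line := range lines {
		lines[i] = ws.ReplaceAllString(strings.TrimRight(line, " "), " ")
	}
	return strings.Join(lines, "\n")
}

func main() {
	got := "0  started  1.25.1   amd64"
	want := "0 started 1.25.1 amd64"
	fmt.Println(normalizeStatus(got) == want) // true
}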
cheryljperrito666: ping?19:53
perrito666cherylj: pong19:54
cheryljperrito666: good afternoon :)  Wanted to see if this was on your radar yet?  bug 149730119:54
mupBug #1497301: mongodb3  SASL authentication failure <ci> <mongodb> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1497301>19:54
perrito666cherylj: it sort of is, I kind of just learned that we are testing that19:56
perrito666I am not yet sure I understand what this is about19:56
cheryljperrito666: have you run into it yourself?19:56
perrito666sinzui: my branch still did not go through? :(20:01
sinzuicherylj: you got a bad record mac20:02
mgzI resubmitted20:02
cheryljI know :(20:02
mgzand chery was faster anyway20:02
cheryljI win!20:02
mgzthat particular flavour seems more common in the gating job of late20:03
sinzuiperrito666: I see your merge: https://github.com/juju/juju/commits/restore-fix - we are waiting for 1.25.0 to exist. We stopped CI so we could test it as soon as it existed20:03
mgzcherylj: we presumably do want to target that bug against master as well, as the dodgy tests want fixing there too?20:06
cheryljmgz: yes, I'm working on that now20:06
cheryljbut it wouldn't hit us until we move to 1.26.020:06
sinzuicherylj: Ci has started testing your revision20:27
cheryljsinzui: yay!20:28
cheryljsinzui: Is there any ballpark of when we can expect 1.25.0 in the ppa?20:28
sinzuiTomorrow cherylj20:28
* cherylj sad panda20:29
cheryljbut so it goes...20:29
sinzuicherylj: 3+ hours to test (and make release artifacts like real agents), then 3+ hours to get the base debs created in the secret PPA, then 1.5 hours to publish to the CPCs, then 1.5+ hours to publish to streams.canonical.com, then 1h to copy to the public PPA.20:31
cheryljsinzui: Oh I'm sure there's a lot that goes into it, it's just a shame that these bugs added to the delay20:32
sinzuicherylj: we have no control over Lp or Jerff, so we can only hope we get immediate service20:32
cheryljsinzui: what do you need from Jerff?  I can ask my office mate to help20:32
sinzuicherylj: We queue the job that makes the agents for streams.canonical.com. We expect it to deliver between 15 and 45 minutes past the hour. Sometimes it is many hours because the machine is busy20:33
cheryljsinzui: I can have Rob manually trigger the job, rather than waiting the hour to pick it up20:36
cheryljnot that it helps a *whole* lot20:36
sinzuicherylj: that is nice, I can ask in #cloudware too when it needs to happen quickly. Since the release process is a queue of steps building on each other, there is nothing to ask for now20:37
cheryljyeah20:37
mupBug #1509099 opened: juju does not error or warn when agent-stream is ignored <ci> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1509099>21:36
perrito666wallyworld: ping me when you are here22:14
mupBug #1509097 opened: Juju 1.24.6 -> 1.24.7, upgrade never finishes <kanban-cross-team> <landscape> <juju-core:New> <https://launchpad.net/bugs/1509097>22:21
alexisbwallyworld, ping22:45
wallyworldhey, just talking to horatio, give me 10?22:46
alexisbwallyworld, np, when you are free22:46
alexisbno rush22:46
cheryljmenn0: I got the syslog for that replicaset / EMPTYCONFIG bug if you want to take a look:  bug 141262122:49
mupBug #1412621: replica set EMPTYCONFIG MAAS bootstrap <adoption> <bootstrap> <charmers> <cpec> <cpp> <maas-provider> <mongodb> <oil> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1412621>22:49
menn0cherylj: i've got a few errands to run right now but i'll take a look today22:54
wallyworldalexisb: just finished but about to do standup, can you wait another 10 minutes or so?23:14
alexisbgeeze wallyworld23:14
alexisbjust keep pushing me off ;)23:15
alexisbyes I will still be here in 10 minutes23:15
alexisbbut my info may be useful for your standup23:15
wallyworldalexisb: oh, ok let's talk quickly now then23:15
