[00:00] thumper: bogdan is going to continue with it tomorrow but he's done for today
[00:00] :-(
[00:00] only machine 0 seems to upgrade and then others don't
[00:01] i'll get some reviews out of the way and then i'll take a look through the logs
[00:06] thumper, wallyworld: i've been chatting to bogdanteleaga
[00:06] so the issue with the upgrade thing
[00:07] the CI test isn't going to work because it's upgrading from 1.24.2
[00:07] and the 1.24.2 agent won't restart for the upgrade on windows
[00:07] pretty much
[00:07] the starting agent needs to have the fix
[00:08] bogdanteleaga: and they're the fixes you have in the PR that's up already?
[00:08] yep
[00:09] is there a workaround for users that have an older windows version?
[00:09] I guess we either merge that and wait for it to be released, or make a different job
[00:09] how do they upgrade?
[00:09] they have to manually restart the service I'm afraid
[00:09] the upgrade itself works
[00:10] so if they just restart the service, it works, but not if they don't?
[00:10] is this a regression from previous windows versions, or new?
[00:10] it doesn't know to start back up by itself
[00:10] it might be a regression since the new service package was introduced
[00:11] hmm
[00:11] not completely sure what we did before
[00:11] when was it introduced?
[00:11] let me check
[00:11] also, which was the first juju version where we supported windows?
[00:11] do those versions upgrade to the released versions?
[00:11] that's a key question
[00:12] thumper: menn0: bogdanteleaga: if we need to document a one-off manual workaround to get past this issue, i think that's ok
[00:12] afaik the first one was 1.21
[00:13] 1.23 has the old implementation
[00:14] I'd have to try it out to see if it works though
[00:14] the powershell code does not look promising
[00:14] (to create the services)
[00:24] bogdanteleaga is manually testing an upgrade from 1.24 with the proposed fixes to master
[00:24] thumper: ^^^
[00:25] if that works he's going to merge that into 1.24 so that at least 1.24.3 won't have this problem
[00:25] we won't have CI success until 1.24.3 is released
[00:25] ack
[00:26] wallyworld: recap since you were disconnected: bogdanteleaga is manually testing an upgrade from 1.24 with the proposed fixes. if that works he's going to merge that into 1.24 so that at least 1.24.3 won't have this problem. we won't have CI success until 1.24.3 is released
[00:27] ok ty
[00:27] and for older deploys, we restart the service manually
[00:34] all good
[00:34] I'll merge it in
[00:36] waigani: hey, did you see my ping before about the maas failed deployment bug?
[00:36] wallyworld: no sorry
[00:36] wallyworld: do you have a link?
[00:36] waigani_: can you take a look at 1.24 bug 1472711? it claims bug 1376246 may not quite be fixed
[00:36] Bug #1472711: MAAS node has "failed deployment", juju just says "pending"
[00:36] Bug #1376246: MAAS provider doesn't know about "Failed deployment" instance status
[00:36] wallyworld: sure
[00:36] tyvm
[02:13] thumper: howdy?
[02:13] rick_h_: I'm not around tomorrow
[02:13] thumper: k
[02:13] rick_h_: just letting you know as we normally have a call
[02:13] thumper: anything I should know?
[02:14] only that we have slow progress, but expect it in 1.25
[02:14] 1.25 feature freeze pushed out so we can hit it
[02:16] thumper: ok, did that stuff help with the jes/system/environment destroy stuff?
[02:17] rick_h_: yeah, rehashing now
[02:17] although I have a few questions
[02:17] got 5 minutes now?
[02:17] for a hangout?
[02:17] sure thing, just have to watch me eat dinner :) /me grabs headphones
[02:17] kk
[02:18] rick_h_: https://plus.google.com/hangouts/_/canonical.com/rick
[02:45] hey what's going on everyone?
[02:47] I'm looking for a way to seed a custom cloud-config to my juju-deployed machines...I can use maas post install scripts for machines managed by maas, but I need my custom cloud-config to run on containers too. Is there a "best practice" way to do this with juju??
[02:49] from what I can tell juju creates the cloud init for lxc services here: https://github.com/juju/juju/blob/master/cloudconfig/containerinit/container_userdata.go
[02:49] amongst other places
[02:50] I'm thinking there has to be a way to pass a cloud-config file as an extra parameter when deploying services ....possibly I'm just missing something....
[02:51] charmers:^^
[02:52] bdx: this is definitely a question for core, most charmers don't have this type of requirement
[02:52] bdx: may I ask what you're trying to customize?
[02:54] marcoceppi: ok, thanks. -- I'm trying to automate my puppet cloud-config getting onto containers deployed by juju
[02:54] bdx: why wouldn't you just have that wrapped in a charm?
[02:55] oooh ....like a secondary charm that I would deploy that was just puppet cloud-config?
[02:56] bdx: sure, you could have it be a subordinate for example
[02:56] marcoceppi: Oooh totally...that's a great idea!
[02:57] bdx: I'd have to know more about what this puppet stuff was doing and the goal, but charming is really the best way to interact with the machines juju gives you after deployment
[02:59] marcoceppi: totally....we have keys and users amongst other things that need to get onto each machine post deploy ... our infra is tightly tied in with a few puppet classes that are mandatory...plus our newrelic and papertrail stuff ...
[03:00] bdx: ah, yeah, so I would totally make a "my-management-stuff" subordinate that you just throw on everything. We also have papertrail and new-relic charms, but if those are already puppetized, just have that subordinate do all that for you
[03:02] marcoceppi: awesome. Do you know any hot docs for creating subordinate charms?
[03:02] bdx: it's just like a normal charm, except you set subordinate: true in the metadata.yaml and you need a relation in the new subordinate charm
[03:02] bdx: let me get you a link to the docs
[03:03] bdx: https://jujucharms.com/docs/stable/authors-subordinate-services and https://jujucharms.com/docs/stable/authors-implicit-relations
[03:03] bdx: here's an example metadata.yaml since those pages are...confusing
[03:03] marcoceppi: I have been scoping this too
[03:03] http://spamaps.org/files/charm-store-policy/drafts/subordinate-internals.html
[03:04] marcoceppi: nice! thanks!
[03:05] marcoceppi: that's awesome...that will be a perfect solution for what I'm trying to do here
[03:05] bdx: http://paste.ubuntu.com/11845656/
[03:06] that's all you need to make a subordinate charm, the rest is the same as a primary charm structure
[03:06] lines 8 and 10-12 are the special bits
[03:06] that's so simple
[03:06] beautiful
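For reference, a minimal sketch of the kind of subordinate-charm metadata.yaml being described above: the charm name comes from the conversation, while the relation name, summary, and description are placeholders, not the contents of the pasted file. The important pieces are "subordinate: true" and a required relation with container scope (typically on the implicit juju-info interface).

    name: my-management-stuff
    summary: Placeholder summary for the site-wide management subordinate
    description: >
      Subordinate charm that applies mandatory site configuration (puppet
      classes, users/keys, monitoring agents) to every unit it is related to.
    subordinate: true
    requires:
      host-system:             # placeholder relation name
        interface: juju-info   # implicit interface every charm provides
        scope: container       # deploys the subordinate alongside its principal
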
[03:07] bdx: so, as you deploy services, just do a juju add-relation and it'll attach to all units in that service, even as you scale out/in
[03:07] marcoceppi: nice...even if the service exists on a container already?
[03:08] bdx: regardless of the system the service is on
[03:08] bdx: if it's deployed by juju, on metal, a VM, or a container
[03:08] sick
[03:08] sweet
[03:09] that's perfect....I'm going to implement it now....I'll let you know how it goes
[03:09] bdx: awesome, I should be around for a few more hours if you have any questions
[03:09] marcoceppi: awesome. thanks again for the advice
[03:10] wallyworld, thumper: so 1.24 is still blocked and we won't know for sure whether that bug is fixed until 1.24.3 is released and the CI test starts using that
[03:11] wallyworld, thumper: should we remove the blocker tag?
[03:26] menn0: what bug is it blocked on?
[03:27] wallyworld: the windows upgrade one
[03:27] bug 1471332
[03:27] Bug #1471332: Upgrade fails on windows machines
[03:27] ah, i didn't realise that one was a blocker
[03:28] i think based on your explanation above we could remove the blocker tag
[03:28] wallyworld: cool. i'll do that and add some detail to the bug
[03:28] ty
[03:31] thumper: one-line review please: http://reviews.vapour.ws/r/2124/
[03:33] * thumper looks
[03:34] menn0: shipit
[03:34] 1.24 is now unblocked
[04:38] waigani: did you get a chance to look at bug 1472711? is it something that juju core needs to fix?
[04:38] Bug #1472711: MAAS node has "failed deployment", juju just says "pending"
[04:40] menn0: we can mark bug 1469199 as fix committed right?
[04:40] Bug #1469199: State server seems to have died
[04:41] on 1.24?
[04:45] wallyworld: not sure
[04:45] wallyworld: we've committed 2 possible fixes
[04:45] wallyworld: but they might not be it
[04:45] wallyworld: I can't repro it reliably
[04:45] is a stakeholder going to test?
[04:45] wallyworld: so I don't know the problem is fixed
[04:46] we could mark as fix committed so 1.24.3 can go out
[04:46] wallyworld: seems the issue hasn't happened for a while
[04:46] wallyworld: yeah, mark it as fix committed. the stakeholder said they would report back if the issue happens again (with 1.24.3 or otherwise)
[04:46] ok, ty
[04:46] master too?
[04:47] wallyworld: we can always reopen it and it's not like we can do anything more
[04:47] yep
[04:47] wallyworld: yep master too
[04:47] ok
[04:49] wallyworld: I just marked bug 1465115 as fix committed too. that just merged.
[04:49] Bug #1465115: api: data race in test
[04:50] wallyworld: it's next on my list. I'll just finish this and look at it now.
[04:53] ty
[04:53] waigani: add any comments to the bug so we can see what's happening at the release standup tomorrow
[04:54] wallyworld: okay, will do.
[04:54] thanks menn0
[04:54] * wallyworld relocating, afk for a bit
=== thumper is now known as thumper-afk
[08:17] axw, wallyworld: don't suppose either of you had a chance to look into the add-machine issue I mailed you about?
=== anthonyf is now known as Guest97002
[10:18] bogdanteleaga: thanks for landing the upgrade changes
[10:19] bogdanteleaga: do I understand from menno's comment that we can never automatically upgrade from any existing juju versions with windows units?
[10:46] mgz: I created a PR that I was expecting to get reviewed, it seems to have landed! Presumably without a test run (although tests pass of course...)
[10:46] mgz: https://github.com/juju/juju/pull/2750
[10:46] mgz: can you see what I did wrong? Presumably I pressed the wrong button somewhere...
[10:47] >_<
[10:48] mgz: hmm... I don't think I did, I think a test run was completed for a different PR and this one was merged
[10:49] mgz: I had a PR for this branch to which I added the magic $$merge$$ cookie - that PR has vanished
[10:49] mgz: https://github.com/voidspace/juju/tree/devices-master-merge-4
[10:56] well, that was a fun mystery
[10:56] mgz: hah, oops... :-)
[11:00] mgz: not with whatever is released now
[11:02] mgz: I'll ask somebody to try 1.23 later today, but I'm not having high hopes
[11:02] mgz: 1.24.x won't work until 1.24.3
[11:03] bogdanteleaga: CI happened to do a 1.22.7 run - that also failed
[11:04] so, I guess upgrades just won't work for existing windows envs
[11:04] bogdanteleaga: I think we need to document some manual steps
[11:04] even if the sanest thing really is to tear down the existing env and start again
[11:04] mgz: yeah, then it's almost certain to fail on 1.23 as well
[11:05] bogdanteleaga: I've changed the CI job to try to upgrade from your fixed 1.24 - so we'll see if the job passes like that
[11:05] mgz: for 1.24 it can be as easy as accessing the machine and restarting the service
[11:05] might be the same for the rest, but it needs to be tested
[11:06] if I can run that over winrm, that's also possible to script
[11:06] I don't see why not
[11:07] though I guess it needs to see the upgrade first, then do the restart
[11:18] mgz: you can probably just query the service
[11:18] mgz: and when it stops just start it back up
[11:18] mgz: but you need to do the same for the units, which will stop after you upgraded the machine agent
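For reference, a rough sketch of what that manual workaround might look like in a PowerShell session on the affected Windows machine (local, or remote over WinRM). The service name patterns jujud-machine-<n> and jujud-unit-<service>-<n> are assumptions here, not confirmed in this log; substitute whatever names the machine actually has.

    # Check the state of the juju agent services after the upgrade.
    Get-Service jujud-* | Format-Table Name, Status

    # Restart the machine agent first (replace the name with the machine's
    # actual agent service), then start any unit agents that stopped once
    # the machine agent was upgraded.
    Restart-Service -Name "jujud-machine-1"
    Get-Service jujud-unit-* | Where-Object { $_.Status -ne 'Running' } | Start-Service
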
=== liam_ is now known as Guest88779
[13:32] ericsnow or axw: could you please take a look at https://github.com/juju/names/pull/52 ? thanks!
[13:39] Bug #1473069 opened: Inconsistency of determining IP addresses in MAAS environment between hosts and LXC containers
[13:48] Bug #1473069 changed: Inconsistency of determining IP addresses in MAAS environment between hosts and LXC containers
[13:51] Bug #1473069 opened: Inconsistency of determining IP addresses in MAAS environment between hosts and LXC containers
=== kadams54 is now known as kadams54-away
[14:32] hey
[14:32] quick help
[14:32] what's the command to get the list of help for the commands you run in a hook env
[14:32] on a call
[14:32] in a pinch
[14:35] juju help-tool
[14:35] * marcoceppi high fives
[14:36] himself
[14:36] go marcoceppi go
[14:38] marcoceppi: also `juju run --unit some-unit/0 "close-port --help"` works
[14:38] for help on specific tools
=== kadams54 is now known as kadams54-away
[17:38] sinzui, ping
=== makyo_ is now known as Makyo
=== Makyo is now known as makyo_
[17:40] hi makyo_
[17:41] sinzui, working on setting up some nightly testing with Jenkins and am trying to get it to send emails to us for build status. Does QA have any experience with that?
[17:44] makyo_: no, we abandoned Jenkins sending emails. our emails are sent by a script that interprets the results
[17:45] sinzui, Ah, alright. None of the solutions I found in Jenkins land were all that good, yeah. Is that up on LP/GH somewhere?
[17:46] makyo_: I can suggest that if builds trigger other builds, or something watching the builds triggers a "results" job, you can collect everything for an email, and you have a choice of trying to use jenkins' infrastructure or adding your own
[17:46] makyo_: lp:ci-director does some of the emails
[17:47] sinzui, alright, thanks. Will give it a look.
[17:47] That sounds good.
[17:51] ericsnow: o/
[17:51] katco: \o
[17:52] ericsnow: how's the bug-fix coming?
[17:52] ericsnow: sorry i haven't been around
[17:52] katco: np
[17:52] katco: good
[17:52] katco: wish we had sorted all this out thursday/friday :(
[17:52] ericsnow: yeah =/
[17:53] makyo_: for any job, you can check email-notification. It includes the console log. So a "results" job that summarises other jobs can do what you want cheaply.
[17:53] ericsnow: we've just been demoing basic stuff, it's ok
[17:53] ericsnow: but was wondering if we'd have anything ready for today?
[17:53] * sinzui has set this up, but gets these emails from other jenkins
[17:53] katco: wwitzel3's working on it
[17:55] ericsnow: hm. last time i talked to him he said you were ;) maybe different bug?
[17:56] katco: I'm pretty sure I fixed everything I could yesterday
[17:56] sinzui, alright, that makes sense yeah
[17:56] katco: I've been helping wwitzel3 as much as I can this morning
=== kadams54 is now known as kadams54-away
=== makyo_ is now known as Makyo
[20:06] Bug #1473197 opened: openstack: juju failed to launch instance, remains pending forever
[20:21] Bug #1473197 changed: openstack: juju failed to launch instance, remains pending forever
[20:24] Bug #1473197 opened: openstack: juju failed to launch instance, remains pending forever
[20:54] Bug #1473209 opened: github.com/juju/juju/service/windows undefined: newConn
[20:57] wallyworld, ping me when you are in please
[20:59] alexisb: hi, just about to go into a meeting, i'll try and get you before the release standup
[21:10] alexisb: free now
[21:10] let's jump on our hangout
[21:31] wallyworld: do you have time to review http://reviews.vapour.ws/r/2135/
[21:31] sinzui: yes, will do
=== kadams54 is now known as kadams54-away
[21:55] sinzui: lgtm
[22:01] thank you wallyworld. this is an example armhf build log; the trusty 1.20.11 build has the same deps and looks similar: https://launchpadlibrarian.net/210434988/buildlog_ubuntu-trusty-armhf.juju-core_1.24.2-0ubuntu1~14.04.1~juju1_BUILDING.txt.gz
[22:02] ty, looking
[23:17] anastasiamac: standup?
[23:17] wallyworld: technical issues.. omw
[23:35] bdx: were you able to get the subordinate working?