=== Spads_ is now known as Spads
[00:25] ericsnow: ping?
[00:25] menn0: hey
[00:26] ericsnow: quick question... why is the vsphere provider set up to not build under gccgo or go 1.2?
[00:27] menn0: dependencies
[00:27] ericsnow: i'm reviewing the rackspace provider PR and the sshclient from vsphere has been moved and generalised to support it and the build tags came along
[00:27] menn0: (govmomi requires 1.3+)
[00:27] ericsnow: but the ssh client should work fine?
[00:27] menn0: yep
[00:27] ericsnow: ok, i'll mention that the build tag should go
[00:27] menn0: it's just the Go bindings to the vsphere API that were the problem
[00:28] ericsnow: thanks
[00:28] menn0: np
[00:28] ericsnow: actually, one more thing, the sshclient code assumes the host is Linux so won't work with windows hosts. is this ok/expected? (i know nothing about vsphere)
[00:29] menn0: for vsphere I expect it's okay
[00:29] ericsnow: any idea when it comes to rackspace? this PR is using the same code there.
[00:30] menn0: rackspace is openstack under the hood
[00:32] ericsnow: and openstack does appear to support windows hosts, but I don't know about rackspace
[00:32] ericsnow: i'l raise it as a possible issue. thanks.
[00:32] menn0: k, cool
[01:08] katco, ericsnow, wwitzel3: evening
[01:09] natefinch: o/
[01:09] natefinch: i'm in moonstone. eric and wayne are taking a break
[02:20] thumper: have you forward ported bug 1468581?
[02:20] Bug #1468581: juju bootstrap fails - Waiting for API to become available ERROR cannot get all blocks: EOF
[02:20] I think so...
[02:20] let me check
[02:20] axw: and bug 1474614
[02:20] Bug #1474614: rsyslog connections fail with certificate verification errors after upgrade to 1.24.2
[02:20] and there's a few from menno as well
[02:21] menn0 even
[02:21] wallyworld: from the bug, "Forward-porting fix to 1.25 is currently blocked on other changes to the rsyslog worker being forward ported (i.e. the change that the upgrade step is concerned with)."
[02:21] ah ok
[02:21] didn't read the bug :-)
[02:21] wallyworld: I'll look at doing that after I finish up what I'm working on
[02:22] just the milestone page
[02:22] ty
[02:22] we're aiming to cut 1.25 alpha 1 real soon
[02:22] hence the follow up
[02:22] wallyworld: i'm planning on doing the forward port for bug 1474195 this afternoon
[02:22] Bug #1474195: juju 1.24 memory leakage
[02:23] awesome tyvm
[02:23] wallyworld: actually, I lie. I have the PR ready for that but want Will's review (have emailed him)
[02:23] wallyworld: no
[02:23] wallyworld: i'm doing bug 1474606 this afternoon
[02:23] menn0: and the other 2 against you?
[02:23] Bug #1474606: $set updates may clear out the env-uuid field
[02:23] wallyworld: I must have forgotten
[02:23] bad me
[02:23] * wallyworld spanks thumper
[02:24] oi
[02:24] none of that
[02:24] wallyworld: bug 1474588 is nowhere near investigated let alone fixed
[02:24] Bug #1474588: Many hook failures after upgrade
[02:25] menn0: the othr one i was refering to in marked in progress bug 1457225
[02:26] wake up mup bug 1457225
[02:27] oh right, you answer that
[02:27] sorry
[02:51] wallyworld: mup knew that :)
[02:52] Bug #1476895 opened: ec2: auto-created EBS volumes are not tagged
[02:58] Bug #1476895 changed: ec2: auto-created EBS volumes are not tagged
[03:16] Bug #1476895 opened: ec2: auto-created EBS volumes are not tagged
[03:34] ericsnow: wwitzel3: if by any chance you're lurking around, natefinch and i are in moonstone
[03:37] menn0: ignore http://reviews.vapour.ws/r/2234/
[03:37] wallyworld: would you PTAL at https://github.com/go-goose/goose/pull/12
[03:38] sure
[03:38] thumper: happily ignoring :)
[03:39] axw: +1
[03:40] wallyworld: ta
[03:59] wallyworld: cherry picking the fixes for bug 1474606 is proving difficult because it depends the fixes for bug 1474195 (which i'm waiting on a review from will on)
[03:59] Bug #1474606: $set updates may clear out the env-uuid field
[03:59] Bug #1474195: juju 1.24 memory leakage
[03:59] wallyworld: can it wait until tomorrow?
[04:03] menn0: for both? can wait if necessary
[04:04] i ping william later to make sure he reviews
[04:13] wallyworld: thanks. the second one doesn't need will's review, just the one that's up.
[04:13] wallyworld: i've emailed him about it as well.
[04:18] ty
[04:25] Bug #1457797 changed: Juju bootstrap doesn't work behind proxy
[04:40] Bug #1457797 opened: Juju bootstrap doesn't work behind proxy
[04:46] Bug #1457797 changed: Juju bootstrap doesn't work behind proxy
[04:46] Bug #1476918 opened: juju switch doesn't work without an environments.yaml file
[04:59] wallyworld: another small review if you don't mind: https://github.com/go-amz/amz/pull/56
[04:59] sure
[05:18] jam: morning, would you have time soonish to talk about resources? in maybe 15 minutes when anastasia gets back from school pickup?
[05:40] wallyworld: sure, are you available now?
[05:41] jam: almost, just waiting to hear back from anastasia who is caught in traffic. may have to defer if it gets to close to my school pickup time
[05:41] k
[06:06] jam: you free now?
[06:06] wallyworld: sure
[06:06] https://plus.google.com/hangouts/_/canonical.com/tanzanite-stand
[06:09] * thumper looks around for urulama
[08:11] Bug #1476996 opened: Network communication failed during juju bootstrap
[08:47] Bug #1477010 opened: provider/openstack: volumes may not attach if instance takes a long time to provision
[09:00] fwereade: hangout?
[09:38] ocr: quick deps update http://reviews.vapour.ws/r/2238/ thanks!
[10:15] wallyworld, you still around?
[10:17] wallyworld, I'm wondering whether we need to read the whole previous status doc every time we set
[10:18] wallyworld, could we just write the same data to both the watched and the raw collections every time?
[10:18] wallyworld, (and fwiw I think it might be more reliable?)
[10:38] * fwereade was doing some python the other day and just spent *far* too long wondering why gofmt didn't like "def"
[10:41] lol
[10:44] wallyworld: https://bugs.launchpad.net/juju-core/+bug/1287949
[10:44] Bug #1287949: {image,tools}-metadata-url not usable w/ ec2 provider
[10:57] bogdanteleaga: by design, we currently only allow signed image metadata for ec2. i don't know the reason for that decision
[10:58] the the metadata were to be signed, it would work
[10:58] if
[10:58] yes, but as per the guy above me in the bug report
[10:59] there's no way of specifying a key
[10:59] so it has to be official?
[10:59] wallyworld, bogdanteleaga: nor do I, but it crosses my mind that we want to be sure that anything we think is the official source is signed with our key
[11:00] fwereade: that is true, but for ec2 it currently requires user specified image metadata to be signed
[11:00] definitely makes sense, but then there's no way of specifying custom images
[11:00] wallyworld, bogdanteleaga: I can't see any reason not to allow either additional keys or to allow unsigned -- so long as we check our source
[11:00] and i don't know why
[11:00] wallyworld, nor do I but I strongly suspect it was an all-we-had-time-for deal
[11:01] maybe because - can people add custom images to ec2?
[11:01] wallyworld, yes, and IMO that's a good thing
[11:01] maybe the reasoning was, well if you can't add cutom images....
[11:01] no, but there's no way of using a ec2 AMI that's not specified on cloud-images.ubuntu.com
[11:01] i.e. a windows ami
[11:01] this was all done prior to windows :-)
[11:02] so the code needs changing for windows
[11:02] yeah, I'm aware and I do understand
[11:02] wallyworld, bogdanteleaga: I strongly agree that is a bug and it's symptomatic of a wider one -- we have metadata generate-image, but a cloud configured to never work with it?
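[Editor's note: the policy being debated above — ec2 accepting only signed image metadata, versus openstack accepting unsigned — can be sketched as a filter over metadata sources. This is purely illustrative and not juju's actual simplestreams API; all type and function names here are invented:]

```go
package main

import "fmt"

// MetadataSource is a hypothetical stand-in for a simplestreams image
// metadata source; Signed says whether it carries a trusted signature.
type MetadataSource struct {
	URL    string
	Signed bool
}

// usableSources applies the policy from the discussion: when
// requireSigned is true, unsigned sources are dropped (the ec2
// behaviour at the time); when false, anything is accepted (the
// openstack behaviour bogdanteleaga proposes extending to ec2).
func usableSources(sources []MetadataSource, requireSigned bool) []MetadataSource {
	var out []MetadataSource
	for _, s := range sources {
		if requireSigned && !s.Signed {
			continue
		}
		out = append(out, s)
	}
	return out
}

func main() {
	srcs := []MetadataSource{
		{URL: "http://cloud-images.ubuntu.com/releases", Signed: true},
		{URL: "http://example.com/custom-images", Signed: false},
	}
	fmt.Println(len(usableSources(srcs, true)))  // signed-only drops the custom source
	fmt.Println(len(usableSources(srcs, false))) // allowing unsigned keeps both
}
```

The further step fwereade encourages — accepting user-supplied signing keys — would extend this policy with a configurable keyring rather than a single boolean.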
[11:02] was just signaling it's an actual bug and it's getting more important
[11:02] fwereade: ec2 was configured to only use signed metadatsa before genetate metatdata came along
[11:03] generate metadata was only for openstack
[11:03] for private clouds
[11:03] and I wasn't aware it was a design decision until now
[11:03] where there were no official images
[11:03] bogdanteleaga: it made sense at the time because there were no unofficial images possible
[11:03] wallyworld, heh, not in my mind, I always thought that the ability to specify custom images was important, just that the way we were originally doing it was badwrongevil
[11:04] fwereade: at the time, everything was based on simplestreams - that was our language
[11:04] and only openstack had the need for custom images
[11:05] and ec2 i think deliberately precluded them
[11:05] wallyworld, custom images are super-cool all over the place
[11:05] ok, so for a fix: do we want to allow unsigned or to allow specifying a key?
[11:05] wallyworld, if we *just* let people do this we can have the big data folks deploying an image with their JVM already in place
[11:05] wallyworld, and cut their deployment times like anything
[11:05] fwereade: i agree, but at the time it was impossible to have custom images for aws
[11:06] so why support something impossibe
[11:06] wallyworld, I am certain that I was building custom images for aws before even ensemble was a thing
[11:06] i'm guessing - maybe it was a policy deciison
[11:06] not sure now
[11:06] bogdanteleaga: allow unsigned i think
[11:07] to be consistent with ec2
[11:07] openstack
[11:07] i meant
[11:07] wallyworld: so I'm guessing ec2 is asking for signed only right? I was disabling it for everything for testing
[11:07] fwereade: to answer your other question - i'm not sure why whole status record written each time - histerical reaosns?
[11:07] bogdanteleaga: from memory there's a bool constant
[11:08] wallyworld, no, read each time, for copying into hysterical statues
[11:08] wallyworld, bogdanteleaga: it *would* IMO be best to allow specification of acceptable keys as well
[11:08] fwereade: that's a bigger change, but sure
[11:09] but do we really need that straight up?
[11:09] wallyworld, bogdanteleaga: and that feels like something that shouldn't take too much per-env configuration
[11:10] wallyworld, I am encouraging bogdanteleaga to do it, because apart from anything else I think security-conscious people *will* want to use signed simplestreams, and we're not going to sign the metadata for every OS we can deploy
[11:11] i agree, just ensuring it's the right time to do it
[11:11] how about I enable the unsigned one, and add a todo+bug report for doing signed with key
[11:11] wallyworld, bogdanteleaga: that sgtm
[11:11] that's what i was expecting to hapen
[11:11] wallyworld, bogdanteleaga: indeed
[11:12] bogdanteleaga, please talk it through with gsamfira_ though, I think it will be relevant if we expect Serious People to deploy a bunch of different OSs with juju
[11:13] wallyworld, anyway re status
[11:13] wallyworld, what I really want to do
[11:13] wallyworld, is have every setstatus method be
[11:14] go setHistoricalStatus(key, doc) // logs but ignore errors
[11:14] ...and then to just set txnal status as usual
[11:15] wallyworld, if I *just* do that, we'll miss one hostirical value around an upgrade
[11:15] fwereade: looking at the code, it seems that it is excluing the actual current status from the history?
[11:15] wallyworld, and if I take on an upgrade step as well I will feel I've strayed too far from "enable leadership"
[11:16] wallyworld, hmm, hadn't looked there
[11:16] wallyworld, any particular reason?
[11:16] wallyworld, seems surprising
[11:16] maybe "history" was taken as lieterally meaning in the past
[11:16] but yeah surprising
[11:16] wallyworld, latest know status is still part of history because it was set in the past
[11:17] fwereade: i can't see a reason not to make the change you suggest
[11:17] wallyworld, cool, thanks
[11:19] wallyworld, oh hell, status history is still txnal in 1.24
[11:19] wallyworld, shouldn't it be backported?
[11:19] damn, i'll ask horatio to backport - i thought it was done there
[11:20] wallyworld, no worries, thanks
[11:20] sorry
[11:20] wallyworld, that's the trouble with branches, so many exciting opportunities to miss stuff
[11:21] yeah
[11:21] and we branched 1.24 off too early
[11:21] way too early
[11:44] wallyworld, that too
=== natefinch is now known as natefinch-afk
[11:45] wallyworld, if you're still here: if I were to make SetStatus accept a StatusInfo, and apiserver responsible for setting the time, would that be a Bad Thing?
[11:46] wallyworld, I probably won't either way, am trying not to distract myself
[11:46] i'd rather apiserver not contain any business logic
[11:46] business logic should be in a separate service layer
[11:47] other services may call the status service to set the status
[11:47] and we'd want the status service to set the time
[11:48] wallyworld, so, apiserver->model->persistence, and the rules in model?
[11:48] those other services may be co-located, hence the network layer would be short circuited
[11:48] yes
[11:49] i'd characterise it as apiserver->business_services->domain_model->persistence
[11:49] business services sit on an enterprise bus
[11:50] well, that implies the model knowing about persistence rather than vice versa, but that's an aside
[11:50] they operate on a domain model, the model is independent of persistence
[11:50] right
[11:50] yea, the linear representation doesn't work
[11:50] and the enterprise bus would provide pubsub, rpc, service discovery etc
[11:51] wallyworld, so something akin to https://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html
[11:51] wallyworld, or not?
[11:51] looking
[11:51] wallyworld, because the representation as services doesn't *necessarily* fit
[11:52] yeah, my concept doesn't quite fit that particular model
[11:53] wallyworld, and my current background thoughts are directed towards extracting an in-memory model layer, and moving business-rule responsibilities into there from state (but state still referencing the model)
[11:53] wallyworld, the main driver for this is the uncommitted-state/transactionality stuff
[11:53] agree with the above, need to think about the bit in brackets
[11:54] by state, do you mean "persistence"?
[11:54] wallyworld, in which I contend that having a dependency-free model representation, with a *single* and authoritative implementation of the business rules, is critical for composing and checking sanity of large logical operations without reference to db state
[11:54] wallyworld, yeah
[11:55] wallyworld, although it's sorta tricky because it's still going to have a bunch of consistency concerns that are hard enough to deserve thought
[11:55] wallyworld, so, "state" as shorthand for "what's in state today, referencing business rules defined elsewhere"
[11:55] state without the business rules
[11:55] +1 for getting business rules right out of state
[11:56] that would be a big win
[11:56] for not that much effort in the big cheme of things
[11:56] ideally, persistence would be abstracted so it were pluggable
[11:56] that would help ensure a clean design
[11:57] wallyworld, well, if you have insights, I want you to share them, because... *all* our business rules are encoded in dynamically generated strings of transaction operations
[11:57] fwereade: that's an artifact of out current implementation, and sadly ties us to mongo, and sadly weaves separate concerns together
[11:57] wallyworld, and I need a sane and comprehensible model of the world, its changes, and the conditions those changes require -- that I can effectively render down into txn operations
[11:58] fwereade: we should be able to hand off model changes to a persistence layer, over a well defined interface boundary
[11:59] we should nut this out next week
[11:59] when i am less tired
[11:59] wallyworld, sgtm :)
[11:59] flagging a bit atm
[11:59] wallyworld, sorry to keep you up
[11:59] np, only 10pm but i'm tired
[11:59] brain needs rest
[12:00] hard to give decent answers and think deeply about complex issues
[12:00] it's best to just avoid it ;)
[12:26] morning
[12:31] heya perrito666
[12:41] fwereade: did I completely miss you guys on actions?
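[Editor's note: the layering debated above — apiserver calling into a model layer that owns the business rules, with persistence abstracted behind a well-defined interface so the model is independent of mongo — can be sketched in a few lines. All names below are illustrative, not juju's actual types:]

```go
package main

import "fmt"

// Persistence is the boundary the model layer depends on; a mongo-backed
// implementation (or anything else) plugs in behind it, keeping the
// model free of db concerns.
type Persistence interface {
	Save(kind, id string, doc interface{}) error
}

// memStore is a trivial in-memory Persistence, enough for the sketch.
type memStore struct{ docs map[string]interface{} }

func (m *memStore) Save(kind, id string, doc interface{}) error {
	m.docs[kind+"/"+id] = doc
	return nil
}

// Model owns the business rules: it validates a change, then hands it
// to whatever persistence it was constructed with.
type Model struct{ store Persistence }

func (m *Model) SetUnitStatus(unit, status string) error {
	if status == "" { // a business rule lives here, not in the store
		return fmt.Errorf("status must not be empty")
	}
	return m.store.Save("status", unit, status)
}

func main() {
	store := &memStore{docs: map[string]interface{}{}}
	model := &Model{store: store}
	fmt.Println(model.SetUnitStatus("wordpress/0", "active"))
	fmt.Println(model.SetUnitStatus("wordpress/0", ""))
}
```

The point of the interface boundary is exactly what the chat argues: the rules get a single authoritative home, and rendering changes down into mgo/txn operations becomes a persistence-layer detail rather than the encoding of the rules themselves.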
[12:41] jw4: ^^
[12:42] I realize I'm late coming out of the last meeting.
[12:42] jam, heyhey, jw4 is having mic troubles
[12:42] ah
[12:51] nick natefinch
=== natefinch-afk is now known as natefinch
[12:51] heh
[12:51] *lol*
[12:57] man, I *love* that we made workload processes use plugins and not hardcoded stuff from juju-core. It makes it trivial to produce specific fake technology plugins that mimic real ones without having to muddy juju-core codebase
[13:44] mattyw: bug 1477157
[13:44] Bug #1477157: Broken windows dependencies
[13:44] Bug #1477157 opened: Broken windows dependencies
[13:45] mgz, ah crap, ack, I'll fix it
[13:45] ta!
[13:52] mgz, https://github.com/juju/juju/pull/2860
[13:56] TheMue: This is what I have so far for CreateSpace: https://github.com/juju/juju/compare/net-cli...dooferlad:net-cli-api-spaces-create?expand=1
[13:57] mattyw: shipit
[13:57] dooferlad: one moment, will take a look in a few secs
[13:58] TheMue: I am not doing any param checking in CreateSpace on the client side since the CLI already does that. This is also the case in api/action/client.go, but I wanted to start that conversation early.
[13:59] dooferlad: spaces.go line 34 has a typo
[14:00] TheMue: wasn't expecting a full code review, I just want to discuss client side params checking.
[14:00] dooferlad: only seen it while flying over the code, eagle eyes *cough* *cough*
[14:02] dooferlad: what do you think, could it make sense to rename the params type now, from add to create? simply using the opportunity
[14:02] TheMue: sure
[14:03] dooferlad: and when adding the subnetIds as tags into the params, don't work with append but make it with the right length and use the index
[14:06] dooferlad: rest looks fine so far, I maybe would move testCreateSpace as a closure into TestCreateSpace.
[14:06] perrito666, did wallyworld mention backporting status-history txnality to 1.24?
[14:07] dooferlad: a very good example for the while testing is dimiterns instancepoller
[14:07] perrito666, should be simple, 1.24 has state.Database now
[14:07] fwereade: yup. already assigned me a card :)
[14:07] perrito666, cool
[14:07] perrito666, would you ping me when it's ready to review, I will at some stage become blocked on it
[14:07] fwereade: sure
[14:08] fwereade: want to tell me more?
[14:08] perrito666, cheers
[14:08] perrito666, I have to do gated service.SetStatus
[14:09] perrito666, which means I need to at least touch the status-history storage
[14:09] perrito666, and I would rather someone else had that change in place before I got involved in the details
[14:10] fwereade: ill do as fast as possible :)
[14:10] perrito666, oh, and if you're doing that, would you please make the updateStatusHistory signature match all the other similar funcs? (st, globalkey, doc)
[14:11] fwereade: k
[14:12] TheMue: http://reviews.vapour.ws/r/2241/
[14:22] dooferlad: reviewed
[14:22] TheMue: Thanks!
[14:23] dooferlad: yw
[15:30] mgz, can we close this yet? https://bugs.launchpad.net/juju-core/+bug/1477157
[15:30] Bug #1477157: Broken windows dependencies
[15:31] bogdanteleaga, yeah - totally forgot to do the GOOS=windows thing
[15:31] bogdanteleaga, copied and pasted an old script :(
[15:31] mattyw: really the process is let ci bless the branhc and close the bug itself
[15:32] mgz, ack
[15:34] mattyw: heh, I remember doing that a few times, but I got scared when I did git diff :)
[15:39] bogdanteleaga, I saw it in the diff, and got over excited by deps being removed
[15:49] mattyw: haha
[16:15] TheMue: could you take a look again at that review?
[16:26] okay, new run on 1.24 branch has started
[16:34] if you ever feel bad about a hack https://twitter.com/adrianchm/status/623485623538712576
[16:47] perrito666: lol, that's amazing
[17:24] Bug #1476996 changed: Network communication failed during juju bootstrap
[18:29] juju upgrade on windows just passed for the first time.
[18:29] * perrito666 cries
[18:48] ericsnow: ping
[18:49] wwitzel3: hey
[18:49] ericsnow: never mind, I'm going ot look at the board
[18:49] wwitzel3: k
[18:50] ericsnow: we have an issue with the charm in that without having destroy available .. we can't really do much other than add-relattion
[18:50] katco: ^
[18:51] wwitzel3: ah, because docker containers are static and need to be torn down anytime something changes?
[18:51] katco: right
[18:51] wwitzel3: you could add a basic proc-destroy script to the charm that calls juju-process-docker destroy...
[18:54] wwitzel3: in case you missed it, I didn't merge that --extends patch (we didn't get it quite right)
[18:54] wwitzel3: ericsnow: natefinch: wondering how critical it is to get that right just for the demo. we probably only need the container to spin up once, yeah?
[18:55] wwitzel3: ericsnow: natefinch: i.e. i don't know if it's worth the time to write a destroy script
[18:56] katco: well, the script would just be a one-liner call to the plugin
[18:56] I don't even have to make a script
[18:56] I can just call it directly from the charm
[18:56] which I just tested, it works
[18:56] wwitzel3: true
[18:57] wwitzel3: pft. over achiever
[18:58] I'm going a end to end of the latest charm now, no intervention, will see how it goes
[18:58] * katco crosses fingers
[19:00] katco: it worked here, so please give it a try
[19:00] ericsnow: how did we get the parseUpdates wrong? ..
[19:00] ericsnow: nvm I'll look at the reivew
[19:00] wwitzel3: cool, pull from gh?
[19:00] katco: yep
[19:01] wwitzel3: we weren't supposed to apply the updates to c.info
[19:01] wwitzel3: I left a review comment
[19:06] Bug #1477263 opened: Enabling allow-lxc-loop-mounts can cause error when destroying an environment
[19:16] ericsnow: so when I implement the changes you suggested, our test suite fails all over the place :(
[19:17] wwitzel3: :(
[19:17] wwitzel3: likely due to s.setMetadata calls
[19:18] wwitzel3: I can take over that patch if you need to work on the charm
[19:18] ericsnow: that would be great
[19:18] wwitzel3: k
[19:36] wwitzel3: hm. i think i'm doing something wrong
[19:36] wwitzel3: i deploy wordpress-wpm, i deploy mysql
[19:36] wwitzel3: i relate them
[19:36] wwitzel3: but wordpress-wpm is stuck in the install hook
[19:36] katco: the install can take a while .. since we pull the image down
[19:36] katco: how long has it been stuck?
[19:39] wwitzel3: most likely longer than it should be
[19:39] wwitzel3: like... 15m maybe? i dunno i'm hopping around a lot
[19:39] wwitzel3: actually more like 25 doing some timestamp math
[19:40] wwitzel3: what order do you do things in?
[19:40] katco: it shouldn't matter :)
[19:40] wwitzel3: very true :)
[19:40] katco: sounds like the install hook is getting hungup on the pull
[19:40] wwitzel3: debug-log just spams: unit-wordpress-wpm-0[2980]: 2015-07-22 19:40:44 INFO unit.wordpress-wpm/0.install logger.go:40
[19:41] wwitzel3: empty log message
[19:41] wwitzel3: so i'm not missing any steps? just need to relate the two, yeah?
[19:42] yep
[19:43] katco: I'm doing a fresh deploy, see if I get that same install hook hanging issue
[19:43] wwitzel3: k, i'm retrying as well
[19:48] wwitzel3: weird... docker.io is consuming most of the cpu
[19:48] wwitzel3: but the image isn't listed
[19:51] wwitzel3: any ideas why docker.io would be spinning?
never seen that
[20:02] katco: no idea, my install hook is spinning as well
[20:03] katco: I am wondering if there is just not enough time between installing the docker.io service and starting it and issuing the pull
[20:03] katco: going to try it with a sleep, see if that helps
[20:03] wwitzel3: doesn't apt-get install docker.io start the docker.io service already?
[20:03] wwitzel3: instead of a sleep, query the status of the service
[20:06] Bug #1477281 opened: machine#0 jujud using ~100% cpu, slow to update units state
[20:07] running into bug 1168154 too
[20:07] Bug #1168154: Destroying a service in error state fails silently
[20:08] ericsnow: process.Info.Id()'s comment states "ID composes a unique ID for the process" .... does that mean that we don't expect the process name to be unique?
[20:09] natefinch: potentially
[20:09] ericsnow: the reason I ask is that the status output as stated in the spec is using the process name as the key to a map, so if there's a collision.... badness
[20:09] natefinch: we had discussed support for launching multiple copies of a proc
[20:10] natefinch: I suppose we can cross the bridge later
[20:11] we seem to be doing a lot of "let's worry about that later" with this feature, which makes me worry about it now.
[20:12] natefinch: well, currently we don't support multiple copies of a proc (and we may never)
[20:13] ericsnow: fair enough
[20:13] ericsnow: I'm all good with YAGNI, so long as it's not YAGNIUNW (You ain't gonna need it ... until next week)
[20:14] natefinch: :)
[20:22] mgz: yay!
[20:22] perrito666: tears of joy? :p
[20:23] bogdanteleaga: ??
[20:24] juju upgrade on windows just passed for the first time.
[20:24] * perrito666 cries
[20:25] ah si
[20:25] yes
[20:26] katco: looks like there is an issue with the registry
[20:26] katco: I can't pull the wp container at all, atm
[20:26] wwitzel3: as in the official docker registry?
[20:26] katco: yeah
[20:26] wwitzel3: well that explains things lol.
do they have a status page or something?
[20:27] oh great, one more thing to break
[20:28] wwitzel3: https://status.docker.com/
[20:28] katco: nope, but after running it 4 or 5 times it started working :)
[20:28] 2015/07/22 20:19:45 Could not find repository on any of the indexed registries.
[20:28] Pulling repository wordpress
[20:28] 48e924db07d1: Pulling dependent layers
[20:28] so now it works, lol
[20:30] aaaand now it doesn't work again, wtf
[20:31] wwitzel3: i wonder if there's a route issue b/t ec2 <-> registry?
[20:34] katco: something weird for sure, it is working locally jsut fine
[20:34] wwitzel3: again, top shows docker.io is spinning
[20:35] wwitzel3: i'll try pulling it manually, but yeah something odd is happening
[20:36] wwitzel3: whoa: $ docker pull wordpress
[20:36] 2015/07/22 20:35:39 Post http:///var/run/docker.sock/images/create?fromImage=wordpress&tag=: dial unix /var/run/doc
[20:36] ker.sock: permission denied
[20:36] wwitzel3: does it have to be sudo pull?
[20:37] wwitzel3: says job is already started. log doesn't have anything interesting
[20:41] katco: you have to be in the group or you have to be sudo, yes
[20:45] wwitzel3: cycled docker.io service, now: Could not find repository on any of the indexed registries.
[20:45] [a0bad6af] -job pull(wordpress, latest) = ERR (1)
[20:48] katco: yeah, something odd is going on since it was working with out incident the dozen other times I did it today
[20:48] and yesterday
[21:00] Bug #1477293 opened: Bootstrap attempts to use a private network address instead of the public address
[21:03] Bug #1477293 changed: Bootstrap attempts to use a private network address instead of the public address
[21:04] wwitzel3: so i'm pulling this manually from the ec instance, and it's doing "stuff", but it keeps returning to a header "pulling fs layer"
[21:05] katco: yeah, it should do that for a while
[21:05] katco: and download little bits and fs layers
[21:05] wwitzel3: just seems to be taking a long, long time
[21:05] katco: yeah, the registry isn't fast
[21:05] katco: I've had it take 20 minuites or longer to get an image
[21:05] wwitzel3: k
[21:09] katco: once I get another copy of it downloaded successfully
[21:09] katco: I'll put it in the charm and we can just load it locally
[21:09] katco: so we won't have to worry about network for it
[21:09] wwitzel3: good idea. or i wonder if we have a docker repo charmed up :)
[21:09] Bug #1477293 opened: Bootstrap attempts to use a private network address instead of the public address
[21:19] katco: can I get an amen http://reviews.vapour.ws/r/2244/ ?
[21:20] perrito666: tal
[21:21] * perrito666 brain makes an odd connection and its taken back to the past to the zope template attribute language
[21:23] perrito666: did we address fwereade's concern in the todo there?
[21:23] yup, that is what the pr does :D removing all trace of txn in status history
[21:23] pure raw metal.. err mongo insertion :p
[21:25] perrito666: i recall him disagreeing with that approach pretty strongly. i can't remember if it was ever resolved.
hm
[21:26] katco: he did disagree with a previous patch, which led to uncovering an issue in envuuid automatic patching
[21:26] I recalled that pr and made this new one
[21:26] perrito666: well not just the patch, but the idea of doing "raw" mongo ops w/o txns
[21:27] katco: in this particular case it was his idea to completely remove, history works much like logs
[21:27] we push on the top and remove on the bottom
[21:28] perrito666: yeah i recall the counter-argument, i guess i never saw it get resolved
[21:28] perrito666: i have a meeting now, ian is in it, so i'll talk to him too
[21:28] fantastic :)
[21:33] perrito666: you're good! :D
[21:34] waigani or menn0, could one of you review http://reviews.vapour.ws/r/2167/?
[21:37] oh, blocked again, it was good while it lasted
[21:40] cmars: i'll take a look
[21:40] menn0, thanks!
[21:48] wwitzel3: http://reviews.vapour.ws/r/2245/
[21:50] bbl
[21:51] cmars: done. just a few little things.
[21:53] cmars: fwereade is right though: this will need to change soon when env destruction is done properly. But this is fine for now.
[21:54] katco: https://launchpad.net/juju-core/1.26 exists. I as does https://launchpad.net/juju-core/+milestone/1.26.0
[21:54] sinzui: ty sir!
[21:56] menn0, ok, we'll have to watch out for that .. any advance notice would be helpful, we're ready to make whatever changes necessary
[22:00] cmars: you guys might not need to worry about it. we just have to move when the last ditch metric sending happens to somewhere else.
[22:00] menn0, gotcha
[22:01] cmars: the plan is to have a state server worker that manages environment life cycle instead of doing env destruction synchronously in the api call
[22:02] cmars: b/c as it stands if anything goes wrong (e.g. machine destruction get stuck) the user has very little visibility
[22:02] cmars: we also need to shut down things like storage in a more controlled fashion
[22:03] cmars: all this will happen "soon".
we'll keep you in the loop but you don't need to worry too much about it.
[22:03] menn0, ack. cool stuff. the env lifecycle mgmt sounds especially interesting
[22:04] sinzui: i proposed a blueprint for 1.26... who does that approval go to?
[22:04] cmars: interesting/necessary whatever :)
[22:05] katco: I think mramm, the driver of the project
[22:05] sinzui: ah ok
[22:05] wallyworld: abentley my go1.4 juju on vivid is succeeding
[22:05] oh interesting
[22:06] can we drop the bug then :-)
[22:07] sinzui: what is feature-proc-mgmt in this context? https://bugs.launchpad.net/juju-core/feature-proc-mgmt
[22:09] katco: There is/was a branch in github that was tested and failed. We registered the series to report bugs against it
[22:10] sinzui: ah ok. i just created a blueprint... would it make more sense to report bugs against that? or is series considered a branch in lp parlance?
[22:11] katco: I wish the later part of your statement was 100% true. A series should be assocated with a branch. A line of development that changes will be made too. Lp doesn't enforce the branch
[22:12] sinzui: would it be an imposition to ask you to target bugs against the blueprint instead?
[22:13] katco: Thate is difficult to do
[22:13] sinzui: in an automated fashion, or just in general?
[22:13] katco: bugs can be linked ot a blueprint, but since the blueprint feasture is broken in many places, I cannot do it
[22:14] sinzui: oh, it's a permissions/lacking feature thing?
[22:14] also the blueprints cannot be managed by the team. You for example could make the milkestone and series, but blueprints don't have sane ownership
[22:15] sinzui: not sure what you mean? i am the starter, drafter, and assignee, and i can link bugs... do you mean no one BUT the starter can assign bugs?
[22:17] katco I am not writing software to use blueprints, you are free to link issues you want fixes in blueprints.
CI is not concerned with feature planning
[23:24] wallyworld: menno said he'd look at the go 1.3 on vivid issue
[23:44] sinzui: ping?
[23:45] sinzui, wallyworld: i'm trying to be sure what the aws-deploy-trusty-amd64-on-vivid-amd64 CI job does
[23:45] I think it means, deploy an EC2 env using trusty instances, with the client running on vivid
[23:45] sinzui, wallyworld, mgz: is that right?
[23:47] menn0: that's correct
[23:47] menn0: it's mostly just exercising the vivid client
[23:48] menn0: unrelated, I added some comments to the rackspace provider review
[23:48] mgz: ok cool. i only managed to look at some of it.
[23:49] mgz: i'm currently looking at bug 1477293
[23:49] Bug #1477293: Bootstrap fails to connect on vivid/go 1.3
[23:49] mgz: I can only see one CI run where bootstrap failed due to not being able to connect
[23:49] menn0: on the ssh bug, I feel like it's just as likely to be another manifestation of ssh config screwing up juju as anything else
[23:49] mgz: and it works for me
[23:49] mgz: the other failed runs with the same rev seem to be something else
[23:50] mgz: so i'm looking at that
[23:50] juju 1.24 makes ssh generally dodgy in the presence of any existing ssh setup
[23:50] mgz: what do you mean by "any existing ssh setup"?
[23:50] eg, I have ~/.ssh/config with stuff in it and an agent and several keys
[23:51] juju doesn't like me much.
[23:51] thumper: menn0: sorry, just got out of meeting, reading backscroll
[23:51] mgz: hmmm, I have that too and it works for me. but of course it might something specific in your config.
[23:51] menn0: the other failed runs at the same rev look like job setup debugging
[23:52] mgz: the bootstrap is failing: "Bootstrap failed, destroying environment"
[23:52] mgz: but i'm not sure why
[23:52] menn0: mgz: sinzui switched to go 1.4 and said it worked
[23:52] nah, he tried the same thing via osx which happens to use go 1.4 and it works
[23:52] Juju cannot bootstrap because no tools are available for your environment.
[23:52] if it is a go 1.3 issue, then i say we don't fix anything
[23:52] mgz: there it is
[23:53] mgz: oh, i thought we were going to use everything the exact same, but just switck go verson
[23:53] so the earlier CI failures were due to missing tools
[23:54] oh, he did also use go 1.4 on ubuntu, but wily
[23:54] but that's still proxuing through a different machine
[23:54] but we want a controlled experiement
[23:54] so it's hard to make it identical
[23:54] only changde one thing
[23:55] can't we switch go version on the vivid machine
[23:56] we could download and build go 1.4 on that machine and make the job use it maybe
[23:57] wwitzel3: /entrypoint.sh apach 19 minutes ago Exited (1)
[23:57] wwitzel3: 2015/07/22 23:36:38 Stopping proxy on tcp/[::]:8080 for tcp/172.17.0.2:80 (accept tcp [::]:8080: use of closed network connection)
[23:59] I think going through the workspace runner just tickles ssh issues
[23:51] menn0: the other failed runs at the same rev look like job setup debugging [23:52] mgz: the bootstrap is failing: "Bootstrap failed, destroying environment" [23:52] mgz: but i'm not sure why [23:52] menn0: mgz: sinzui switched to go 1.4 and said it worked [23:52] nah, he tried the same thing via osx which happens to use go 1.4 and it works [23:52] Juju cannot bootstrap because no tools are available for your environment. [23:52] if it is a go 1.3 issue, then i say we don't fix anything [23:52] mgz: there it is [23:53] mgz: oh, i thought we were going to use everything the exact same, but just switck go verson [23:53] so the earlier CI failures were due to missing tools [23:54] oh, he did also use go 1.4 on ubuntu, but wily [23:54] but that's still proxuing through a different machine [23:54] but we want a controlled experiement [23:54] so it's hard to make it identical [23:54] only changde one thing [23:55] can't we switch go version on the vivid machine [23:56] we could download and build go 1.4 on that machine and make the job use it maybe [23:57] wwitzel3: /entrypoint.sh apach 19 minutes ago Exited (1) [23:57] wwitzel3: 2015/07/22 23:36:38 Stopping proxy on tcp/[::]:8080 for tcp/172.17.0.2:80 (accept tcp [::]:8080: use of closed network connection) [23:59] I think going through the workspace runner just tickles ssh issues