[01:14] menn0: hey, you're not looking at the manual provider bug atm are you? [01:14] I can see the problem now - just added a comment to the bug [01:16] oh, I see natefinch has a PR up... [01:43] axw: I was going to see if reverting my 'manual provision with custom ssh key' branch fixes it. Should I keep on or have you got it? [01:44] waigani: I can repro in my env now, so leave it with me for now [01:44] waigani: I'm pretty sure it was failing before your change went in [01:44] will see tho [01:44] axw: okey dokey [02:01] waigani: nothing to do with your change, it's related to removal of storage [02:01] axw: okay, thanks [02:12] * thumper stares at this code trying to work out what is different [02:26] axw: nate's change went in [02:27] with it I can happily bootstrap using the manual provider [02:27] the CI test is still failing, but in a different way [02:27] the SSH key error that waigani was referring to [02:28] axw: to answer your original question, no I'm not looking at this bug any more [02:29] menn0: thanks, no problems; I found the issue [02:32] https://github.com/juju/juju/pull/504 <- review please someone [02:32] fixes CI blocking bug [02:34] axw: looking [02:41] menn0: agreed about using use-sshstorage. do you have any ideas of what else I can use there? [02:41] axw: not off the top of my head [02:42] axw: I'm concerned that if we change how SSH storage is used, this code is going to break [02:42] (e.g. if we stop using it altogether, or start using it for the bootstrap node) [02:43] menn0: I understand and agree, but at this point I don't think there's an alternative [02:43] probably we should have a provider independent way of determining whether you're running from inside the env [02:43] axw: I'll trust your judgment on that [02:44] something like use-sshstorage, but for this purpose [02:44] could we have some tests that ensure that SSH storage is used in the current way? [02:44] there are, I'm pretty sure [02:44] I just added one, too [02:45] i.e.
that verification is elided if use-sshstorage=false [02:45] I'm thinking something right next to these tests that emit a message if they fail to remind us that this code needs to be updated [02:45] I saw your test and that's obviously required [02:46] but I'm also wondering if it's possible to have something that checks useSSHStorage on a bootstrap node, and not and ensures it's what we expect here [02:47] if it fails then it should error with something like: "useSSHStorage semantics have changed. Please update manualEnviron.StateServerInstances" [02:47] maybe that's overkill [02:47] or too hard [02:47] but that's the kind of thing I'd aim for [02:48] menn0: there's also tests in provider_test.go that check that Prepare sets use-sshstorage, and Open doesn't. there should be one for Bootstrap too, I'll add one [02:48] ok sounds good [02:48] tho testing Bootstrap may be a PITA, will see... [02:50] axw: if it's going to be too hard then leave it [02:50] it's probably more important to get this fix in at this point [02:50] menn0: shouldn't take long I think, I'll see how I go [02:50] won't waste too much time on it [02:50] sweet [02:50] well you have my LGTM anyway [02:51] axw: just remembered... not sure if you need someone else's too. I'm a "junior reviewer". thumper? [02:52] * thumper sighs... [02:52] :) [02:52] I should sort that shit out [02:52] * thumper looks [02:52] thumper: at least this one is a small change === blackboxsw is now known as blackboxsw_away [03:19] * thumper needs to take kid to hockey [03:19] bbl [04:16] thanks axw [04:16] jcw4: nps [04:34] is there a publicly accessible repo with the funcitonal tests used by jenkins?
[04:34] s/funcitonal/functional/ [04:51] ah, I'm guessing it's https://code.launchpad.net/juju-ci-tools [05:51] if anyone has some time, I'd really appreciate a review: https://github.com/juju/utils/pull/16 https://github.com/juju/utils/pull/19 https://github.com/juju/juju/pull/462 https://github.com/juju/juju/pull/453 [05:51] morning all [05:51] :) [05:51] ericsnow: a little collection there! [05:51] ericsnow: morning [05:52] voidspace: noone wants to review them :( [05:52] ericsnow: hehe, let me get coffee and I'll take a look [05:52] voidspace: FYI, 55a9507 (drop direct mongo access) got reverted because it broke restore [05:53] ericsnow: restore needs direct mongo access? [05:53] that's horrible [05:53] voidspace: apparently [05:53] voidspace: for now (the new restore won't) [05:53] and with that, I'm going to bed! [05:54] ericsnow: goodnight! [06:03] morning [06:11] dimitern: morning [06:11] dimitern: so "shutting off direct db access" got reverted [06:12] dimitern: as it was this change that broke restore :-( [06:12] dimitern: I thought restore used ssh rather than direct mongo access, but it seems I'm wrong [06:28] voidspace, oh, bugger :( [06:29] voidspace, I think restore needs to be smarter [06:31] voidspace, and use ssh to run mongo commands remotely [06:31] dimitern: right [06:31] dimitern: but restore is being changed anyway, so the "new restore" will be smarter [06:31] but until then... [06:32] yeah.. === uru__ is now known as urulama [07:36] morning [07:43] morning TheMue [08:00] TheMue: morning [08:22] does anyone know the lxc-create magic invocation to get it to share home directory with the host? [08:30] voidspac_, why do you need this?
[08:31] dimitern: especially for nested lxc containers it makes experimenting simpler [08:31] dimitern: shared access to scripts / ssh keys etc [08:31] dimitern: only for experimentation [08:31] voidspac_, you can take a look at man lxc.container.conf - there is a way to specify additional mount points there; or just ask stgraber or hallyn in #server (@can) [08:31] dimitern: there's a u1 development wiki page that explains it somewhere, I'm looking now [08:32] it's how we used to do dev (inside an lxc container) [08:32] voidspac_, ah, nice [08:32] very useful, if you screw up your dev environment just blow it away and create a new one [08:34] voidspac_, take a look at https://wiki.debian.org/LXC - "Bind mounts inside the container" section [08:34] dimitern: thanks [08:40] dimitern: https://wiki.canonical.com/UbuntuOne/Developer/LXC [08:40] dimitern: sudo lxc-create -t ubuntu -n u1-precise -- -r precise -a i386 -b $USER [08:41] obviously modifying appropriately for trusty / amd64 [08:41] but it's the -b $USER [08:41] voidspac_, ah, even nicer, thanks! :) [08:41] then start the container as a daemon, ssh in and do your dev work there [08:42] voidspac_, so you can still mess up your /home from inside the container, but nothing else? [08:42] dimitern: right [08:42] dimitern: and you can have separate ppas and packages installed [08:46] * TheMue just wondered where his blocked PR is and then recognized that the bot merged it half an hour ago :) [08:47] dammit, I thought it was quiet -- irc client wasn't actually running :/ [08:48] *rofl* [08:48] morning fwereade [08:48] fwereade: I thought you were just hiding :) [08:49] axw, if I'd realised I was I would have been coding ;p [08:49] hello peoples [08:49] :p [08:49] * menn0 is back for more [08:49] menn0, everybody, heyhey :) [08:49] fwereade: morning :-) [08:49] menn0: morning [09:00] wallyworld: just checking: have you seen the issues that i raised on juju/blobstore? i was wondering what your thoughts were there. 
[09:29] rogpeppe: no, haven't seen them, have been focused on 1.20 issues and the sprint last week. i'm back at work tomorrow. do you have bug numbers? [09:31] wallyworld: https://github.com/juju/blobstore/issues [09:32] rogpeppe: oh, ok. we should be using launchpad for raising bugs [09:32] otherwise i don't see them [09:32] wallyworld: ah [09:32] and we can't track them into milestones [09:32] wallyworld: i thought it was more appropriate to raise the issues against the repo itself [09:33] wallyworld: but i see the milestone issue too [09:33] for sub repos, that may be a good point [09:33] but, yeah, it's sort of messed up using two tools [09:33] 9 issues :-( [09:34] i may not get to look in detail till a bit later this week [09:37] wallyworld: (there's no way of tracking external bugs in lp?) [09:37] wallyworld: if you starred juju/blobstore, i think you might get email messages about issues etc with it. (but maybe that's another setting) [09:38] wallyworld: some are harder to fix than others [09:38] rogpeppe: there is a way of importing bugs yes; it would need to be set up. but since we use lp for juju-core, i'm conflicted about introducing a separate tool for other things [09:39] i may have got emails buried in my inbox, will need to check - i didn't have filters set up [09:41] wallyworld: the other side of that coin is that if you're looking at a sub-repo, it makes sense to be able to trivially see all the bugs associated with it [09:41] wallyworld: but i'm happy to move the bugs to lp if you think that's better. [09:42] rogpeppe: yeah, i would prefer other affected people help make that call, not just me. i suspect the answer will be juju-core stays in lp and the sub repos are in github since they are just libraries and don't have a release schedule as such [09:49] Beret: hi, sorry i only just saw your ping in the back scroll (I've been off on leave for a few days).
that work hasn't started yet but i hope to have something done by end of week for Juju 1.21 alpha [10:47] hello folks. Anyone care to review: https://github.com/juju/juju/pull/499 ? :) [11:22] fwereade_, ping? [11:22] mattyw, pong [11:45] * dimitern lunches [12:40] voidspac_: r u there? [12:49] fwereade_, if you can't make the txn meeting.. i'd prefer we just push it one day or alternatively move it to earlier today (+ natefinch) [12:55] hazmat: with the toasca thing at 10, we'd have to meet like right now to make it in earlier today. [12:56] natefinch, yup [12:56] dimitern, ping? [13:07] mattyw, pong [13:10] hazmat, I'm about to go out now and *hope* I will be back by 5 [13:10] perrito666: yep [13:10] hazmat, and I haven't taken my swap day yet and was going to take it tomorrow [13:10] hazmat, natefinch: I would hope you can be somewhat productive without me? [13:10] voidspac_: I reverted a PR from you last night [13:10] perrito666: I saw [13:11] *grrr* [13:11] :-( [13:11] perrito666: restore still requires direct db access [13:11] so we can't close it off just yet [13:12] voidspac_: new restore does too, but I do accept ideas to change that [13:12] perrito666: ssh [13:12] perrito666: we want to close off direct db access [13:13] voidspac_: I am a bit curious, how is db access going to be done now? [13:13] perrito666: not externally [13:13] perrito666: db access externally should never be needed - all calls should go through the api [13:14] perrito666: and the state server wouldn't need the port to be open to connect to it on the same machine [13:14] perrito666: so if access to the db is needed an api endpoint should be created - or ssh used [13:16] voidspac_: wait, it means that the db would be listening locally? [13:16] as in localhost:FORMERSTATEPORT ? [13:16] perrito666: yes [13:16] perrito666: it already is [13:16] I believe [13:16] fwereade_, k, hopefully we'll see you at 5 then, have fun. 
[13:16] voidspac_: well your patch changes that [13:17] perrito666: I didn't believe so [13:17] voidspac_: since this stopped working: [13:17] mongo --ssl -u admin -p {{.AgentConfig.Credentials.OldPassword | shquote}} localhost:{{.AgentConfig.StatePort}}/admin --eval "$1" [13:17] :) [13:17] you might want to add a test for that [13:17] and that is run via ssh [13:17] perrito666: show me where my patch changes that? [13:17] into the machine [13:18] perrito666: in the code [13:18] ... [13:18] perrito666: it may just be that the template doesn't work now [13:18] voidspac_: possible [13:18] the port (and port binding) didn't change [13:18] I also thought of that [13:18] in which case that is much easier to fix I think [13:19] voidspac_: sorry I did not try to fix it more in depth, we really needed CI back [13:20] perrito666: no problem - so long as new restore is implemented not needing external db access [13:20] voidspac_: well if you can guarantee that localhost:StatePort works should be no problem [13:20] perrito666: cool === ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: None [13:23] woohoo .. None [13:23] perrito666, voidspac_: If StatePort is gone then it definitely makes sense that {{.AgentConfig.StatePort}} no longer works in the template. [13:23] ericsnow: it's not gone, it's just not opened externally [13:23] ericsnow: I am taking state port from the agent.config and it still is there [13:23] I don't believe I actually removed it from AgentConfig [13:24] trying to get back to the original PR as it's now closed [13:24] voidspac_: I wonder if it's a problem derived from the permission groups in ec2 [13:24] voidspac_, perrito666: ah [13:24] which would not make much sense but hey, you never know [13:24] voidspac_: https://github.com/juju/juju/pull/449/files [13:24] perrito666: weren't the failures on the HP cloud? [13:25] ericsnow: ah not sure actually [13:25] sinzui: ?
[13:26] perrito666, at this hour there are no critical bugs affecting juju devel or stable. This is the first time in months [13:27] sinzui: dont jinx it [13:27] sinzui: was the error happening in hp too? [13:27] rogpeppe: could you take another look at https://github.com/juju/utils/pull/16? [13:29] perrito666: from http://juju-ci.vapour.ws:8080/job/functional-backup-restore/1309/console: "https://region-a.geo-1.objects.hpcloudsvc.com/v1/..." [13:29] perrito666, I am not sure what the question is. restore has failed on both aws and hpcloud. it is currently testing on hpcloud. I changed it last week to see if the recent bug was different on Hp [13:30] sinzui: this could be so much easier if we could all read each other's thoughts [13:30] voidspac_: ok, it's not permissions, I have no clue then, I guess I'll have to find out [13:30] I'm trying to look at it as well [13:31] voidspac_: I think the issue is restore line 287 [13:33] perrito666: it's using the external address [13:33] perrito666: it's pinging mongo to wait for it to come up [13:33] that will never work [13:34] well, it used to work [13:34] yes I know [13:34] hehe [13:34] perrito666: so the restore probably succeeds - but then can't connect to mongo and thinks it has failed [13:35] voidspac_: sort of [13:35] the restore of state machine succeeds [13:35] but it fails when trying to update the other agents [13:35] bc it needs st.AllMachines [13:35] right, before updateAllMachines [13:35] exactly [13:35] does that need a new API endpoint then [13:35] which can be used instead of directly connecting to mongo [13:36] and the strategy can make repeated calls to that instead [13:36] don't we already know AllMachines?
[13:37] or it could run that code on the state server or execute a mongo query [13:37] voidspac_: we don't, we try to work that out from the recently restored db [13:38] perrito666: so restoreBootstrapMachine could run an extra command to get the info [13:38] using runViaSsh [13:38] and return the extra information [13:38] or we could add a new endpoint and use apiState [13:39] perrito666: which do you think would be better? [13:39] for the case of current restore implementation we can go with runViaSsh, for the new one I can do something prettier [13:40] perrito666: shall I do this - I have some spare cycles [13:40] We are one unittest run away from having a passing devel. The anxiety is too much [13:40] sinzui: :-) [13:41] :) [13:41] wallyworld: still there by chance? [13:41] voidspac_: please do, if I keep context switching I will never in my life finish the new restore implementation [13:41] perrito666: ok [13:41] perrito666: and for new restore you will take this into account? [13:41] I will [13:41] so I'm working in restore.go still [13:41] not the plugin [13:41] voidspac_: the plugin [13:41] (just checking) [13:41] ah... [13:42] no wait [13:42] restore.go is the plugin... [13:42] voidspac_: cmd/plugins/juju-restore/restore.go [13:42] voidspac_: makes it clearer? [13:42] :) [13:42] yep, that's where I've been looking [13:42] yup [13:42] thanks [13:46] OMG [13:47] marcoceppi: ? [13:47] wrong room, though it still applies, I got excited because the buildbot is unblocked [13:52] ericsnow: looking [13:53] rogpeppe: thanks! [14:04] ericsnow: reviewed [14:04] rogpeppe: much appreciated === Ursinha is now known as Ursinha-afk === Ursinha-afk is now known as Ursinha [14:24] when updating a library, should i specify the specific commit that fixes the issue, or the latest commit in a stable release? [14:24] sorry, in dependencies.tsv [14:25] it depends [14:25] generally... latest seems like a reasonable choice, as long as it doesn't break anything else.
[14:26] yeah [14:26] on the assumption that other bugs may have been fixed in the meantime, and no sense waiting until we hit them to include them in our build [14:26] this is for goyaml, so i am assuming the v1 branch is _relatively_ stable [14:26] yeah [14:26] ok cool, i'll grab latest :) [14:26] ty nate! :) [14:27] er one more question [14:27] i switched goyaml over to gopkg.in... i noticed logger isn't in the dependencies.tsv yet, but some gopkg.in packages are [14:28] wasn't gopkg.in designed to obviate godeps? [14:28] not exactly [14:28] they're somewhat orthogonal, though related [14:28] ah ok, i misunderstood its purpose then [14:30] you need godeps to ensure that you have a repeatable build. Even supposedly non-breaking changes on a stable branch by definition change behavior. For a release you need to make sure it's possible to recreate the exact same binary multiple times. Godeps does that [14:32] yeah [14:32] somewhat misunderstood what gopkg.in was designed to solve [14:33] can someone land this for me? it's already been reviewed/approved, just needs to be landed into trunk: https://code.launchpad.net/~cox-katherine-e/goamz/lp1319475.v4.signature.support === Ursinha is now known as Ursinha-afk [14:36] (i don't have permissions, or obviously i would do it myself :p) === Ursinha-afk is now known as Ursinha [14:41] katco: what happened to getting goamz moved to github? [14:41] natefinch: we haven't found a home for it yet. [14:42] and didn't want it to impede development any further. we have a customer waiting on this functionality. [14:42] katco: I guess we don't control http://github.com/go-amz huh?
[14:43] natefinch: no idea, but i would have thought wallyworld or niemeyer would have mentioned it :p [14:43] i am still not clear on why we don't have some sort of canonical repo that all code goes into [14:43] big C canonical [14:43] on github [14:44] katco: 'cause we don't control http://github.com/canonical either :) [14:44] is that preventing us from registering canonical-ltd, or canonical-code, or canonical-* lol [14:48] dimitern: did you mean the SupportNetworks capability? (which btw should be renamed to SupportsNetworks) [14:48] natefinch: Define "we"? :) [14:49] niemeyer: "Gustavo" :) [14:49] natefinch: Well, I registered go-amz, IIRC [14:50] I hoped so [14:50] but you never know in the wild west of internet name squatting [14:55] mgz: you around yet? [14:55] katco: just back from lunch now [14:55] mgz: ah ok, see pm's pls :) [14:55] hm, on irc? I may be being blind, don't see any [14:56] mgz: hrm yeah [14:57] mgz: sorry my mistake... window wasn't connected [15:01] natefinch, fwereade_ meeting time.. [15:04] perrito666: I don't have access either [15:04] anyone know *where* the October sprint is? [15:04] I don't think I can make it [15:04] voidspac_: I think only people on the cc list of the mail by sarah can see it [15:05] I already have that time booked off... [15:05] so natefinch would you tell us where it is?
[15:05] I asked yesterday, she said they don't know yet [15:05] voidspac_: if you can't be there we should reschedule :) [15:06] ericsnow: definitely [15:06] I land back in the UK on Sunday 5th October [15:10] ericsnow: actually, I could just fly from India to the sprint [15:10] I'd be away for two weeks then, but ah well [15:11] voidspac_: you'd already be packed ;) [15:11] ericsnow: well yes, but I'd need to pack for two weeks instead of one [15:11] but so be it I guess [15:11] voidspac_: conference T-shirts FTW [15:11] heh [15:11] voidspac_: the price of being a Python luminary :) [15:11] ericsnow: get them, wear them, throw them [15:12] TheMue, yes that one [15:12] voidspac_: :) [15:12] ericsnow: you should be with me... [15:12] voidspac_: when that kind of thing happens to me I pack two bags and leave one at home and swap upon arrival [15:12] TheMue, it'll go away soon anyway, as the new model kicks in [15:12] perrito666: I don't think I can do "land in uk then immediately fly out again" [15:12] perrito666: but fly straight from India might be doable [15:13] voidspac_: if someone is waiting for you in the airport with the spare bag you might [15:13] perrito666: heh, possible depending on flight times I guess :-) [15:15] natefinch: wwitzel3 standup? [15:18] dimitern: had been in a meeting, so answering now [15:19] TheMue, no worries, I was just replying to your earlier questions :) [15:20] natefinch, hazmat: here now if there's still worthwhile time? [15:21] dimitern: currently I also have no more questions, only wanted a confirmation ;) [15:24] TheMue, :) cheers [15:26] ericsnow: btw, I recommend "starring" docs you want to be able to find, so they're under the "Starred docs" in google drive [15:26] natefinch: good tip :) [15:27] ericsnow: took me a while to figure that out, after having trouble finding docs again....
it's like google drive specific bookmarks :) [15:29] yup, seems that people behind google docs never used a filesystem in their lives [15:45] it’s also no problem to move docs to their own folders as they are only virtual (like creating a symlink) [15:45] so it should be possible to access them via a google drive client on a phone or pc too [15:57] it looks like i might have to make some updates to some of our repositories under github.com/juju/* that are not under github.com/juju/juju... what's the workflow for that? fork/pr? [16:03] katco: yeah, same as juju/juju fork & pr [16:04] natefinch: k thanks [16:04] katco: not sure about the state of botness on those other repos, though [16:04] natefinch: this should be loads of fun. updating imports of goyaml, touches like 3 sub-repos [16:05] katco: at least there's no interdependence... no repo is passing an object from the yaml package to code from another repo... so they can be updated non-simultaneously [16:07] natefinch: at least there's that [16:17] ugh i have to backport all of these too [16:17] this is going to eat up my entire day :( [16:19] well... actually. maybe i should defer the switch to gopkg.in, since it looks like these libraries will just use whatever juju-core specifies in dependencies.tsv [16:19] and save that change for a non-backporting commit [16:22] fwereade_, are you around or busy? [16:22] katco: yeah, if there's nothing we need in the new yaml package for the old branches, I wouldn't bother backporting [16:23] mattyw, bit of both, am I behind on your reviews?
[16:23] fwereade_, not at all, I landed the periodic worker one as I added the test you asked for [16:23] natefinch: no, i need to backport, i'm just not going to switch to gopkg.in for this commit [16:23] fwereade_, but my metrics one I have a question [16:23] mattyw, sweet [16:23] mattyw, ah go on [16:23] natefinch: that way the sub repos should keep using launchpad.net/goyaml which juju-core should drive to the correct version [16:26] errr no wait, b/c that change is not in the launchpad version is it, so i'm looking at an import change regardless [16:29] katco: what's the change that you need in yaml? I thought the move to gopkg.in didn't have any significant functionality changes [16:29] natefinch: https://bugs.launchpad.net/juju-deployer/+bug/1243827 [16:29] Bug #1243827: juju is stripping underscore from options <Progress by cox-katherine-e> [16:30] natefinch: the move to gopkg.in was a side-effect of having to change the code already [16:30] Ladies and Gentlemen, CI has blessed Blessed: gitbranch:master:github.com/juju/juju 36fe5868 (Build #1699). Devel is regressions free after 49 days [16:30] woo! [16:31] woo hoo! [16:31] katco: ahh I see. Interesting. [16:32] sinzui, wow [16:32] wait, isn't 49 days about the length of time katco and ericsnow have been on the team.... ? ;) [16:32] lol [16:33] squirrel! [16:33] lol [16:34] sinzui, now the flood gates will be opened [16:37] alexisb, yes, I am prepared for new kinds of hate mail from CI tomorrow === Ursinha is now known as Ursinha-afk === Ursinha-afk is now known as Ursinha === Ursinha is now known as Ursinha-afk === jcw4 is now known as jcw4|away === Ursinha-afk is now known as Ursinha === perrito6` is now known as perrito666 [19:01] i have PRs to juju/(cmd|utils|charm) that need reviewing. 1-line import change. blocking cut if anyone wants to have a quick look.
=== Ursinha is now known as Ursinha-afk === Ursinha-afk is now known as Ursinha [19:35] cmars: can you try https://github.com/juju/juju/pull/495 with the new code? It gives much nicer error messages now. === tvansteenburgh1 is now known as tvansteenburgh [19:42] can anyone review the aforementioned changes? [19:43] katco: link me? [19:43] * natefinch is lazy [19:43] hey np, gladly :) [19:43] https://github.com/juju/utils/pull/21 [19:43] https://github.com/juju/charm/pull/39 [19:43] https://github.com/juju/cmd/pull/5 [19:44] katco: did you compile these? the package name changed from "goyaml" to "yaml" [19:45] katco: I know only because I just made the same change in another codebase, and realized it's not a one line change (unfortunately) [19:46] lol you are right, sorry. i did all these through scripting, so i forgot about that [19:46] sigh ok well good review haha [19:46] katco: heh np :) [19:48] * cmars takes another look [19:49] cmars: it'll only fail at the first line of differences, but it should be clear what's different, at least. [20:00] natefinch: have another look? all building now. still 1 line change :) [20:03] katco: ha. wondered if you'd go that route [20:05] katco: I vaguely disapprove of renaming the import just to avoid changing more lines of text, but I don't think it's a huge deal. [20:05] natefinch: actually, i don't think i know this: how do you utilize a package imported via gopkg.in? [20:05] would it be yaml.v1.Foo()? [20:07] katco: the import path and the package name are actually totally unrelated. by convention they are the same... but a package name cannot include punctuation (I believe the actual restriction is something like a unicode letter followed by any number of unicode letters, numbers, or underscores) [20:07] katco: the convention for gopkg.in is that the version is not part of the actual package name, so "yaml.v1" is package yaml [20:09] natefinch: ah so you just do import yaml "gopkg.in/yaml.v1"?
[20:14] katco: import "gopkg.in/yaml.v1" and then use it as yaml.Foo() [20:15] katco: you don't have to name the import, it gets named by what "package foo" says in the code, which in this case is "package yaml" [20:15] natefinch: huh? how does that resolve? it elides the .v1? [20:15] ohhh i see [20:15] katco: https://github.com/go-yaml/yaml/blob/v1/yaml.go#L7 [20:17] katco: that's what I mean by the import path and the package name not being related. You can put that code at https://github.com/natefinch/ABCD and import it as import "github.com/natefinch/ABCD" and you'd still refer to the package as yaml.Foo() [20:17] natefinch: gotcha, thanks [20:17] katco: this was actually one of the biggest complaints about the way gopkg.in does versioning - the last part of the URL is not the same as the package name [20:18] natefinch: yeah, i wonder if like gopkg.in/v1/yaml would have worked [20:19] katco: there's a couple problems with that - 1.) it sorts badly in the list of imports... so gopkg.in/v1/yaml might be far away from an import of gopkg.in/v2/yaml (the .v1 .v2 imports would sort to be right next to each other) [20:19] katco: 2.) it puts a /v2/ directory in your filesystem with a bunch of unrelated code in it, and again, the v1 code is far from the v2 code [20:19] natefinch: ah [20:20] natefinch: anyway, does this all look ok? [20:20] katco: sorry, tangent :) [20:20] natefinch: not a problem :) just trying to get this in for sinzui [20:23] katco: LGTM'd. [20:23] natefinch: thanks for your help today [20:25] katco: welcome === jcw4|away is now known as jcw4 [21:00] morning [21:07] morning thumper [21:07] alexisb: morning [21:34] morning thumper [21:34] o/ katco [21:35] (not intended just for thumper) i'm running into a strange kind of circular dependency b/c of gopkg.in. i'm trying to update v3 of juju/charm, which utilizes gopkg.in/juju/charm.v3 to reference itself. so it's referencing the wrong version of itself... if that makes sense? [21:36] huh? 
[21:36] am i doing something wrong? or should i hack this to get around it [21:36] what exactly are you doing? [21:36] alright, so i'm working with github.com/juju/charm [21:36] and all i'm trying to do is update some imports [21:36] AFAICT, if you have packages that use gopkg.in, then you need to be in that dir [21:37] yeah... [21:37] so work in the dir gopkg.in/juju/charm.v3 [21:37] so i should be making these changes w/in gopkg.in on my machine? [21:37] * thumper nods [21:37] I think so [21:37] that's what i was doing wrong then [21:42] thumper, heyhey [21:42] hi fwereade_ [21:43] thumper, how's the time difference? [21:43] terrible [21:43] you mean now? [21:43] or from germany? [21:43] thumper, from germany [21:44] thumper, I have a notion that we may disagree on the yuckiness of the Factory varargs [21:44] heh, yeah [21:45] thumper, I'm interested in counterarguments [21:46] I'd rather have ickiness in one place, the factory, than at every call site [21:46] I agree it is a little icky [21:46] but working around golang limitations [21:46] it was dave's idea [21:46] originally I had two methods [21:46] thumper, I think it was the *repeated* ickiness inside Factory that really put me off [21:47] for each type [21:47] but the ickiness there is limited in scope, and contained [21:47] vs. spreading it around all the places the factory is used [21:47] thumper, just to be clear, it's the nil that's yucky? [21:48] mostly, and the c [21:48] what I *want* is: factory.MakeUnit() [21:48] however [21:48] due to bugs in gocheck [21:48] we need the c [21:48] thumper, indicating "I don't care" in place of a set of explicit instructions [21:48] yes [21:49] I had earlier...
[21:49] factory.makeAnyUser() [21:49] and factory.MakeUser() [21:49] damn capitals [21:49] we joined those methods together [21:49] to avoid the nil [21:49] having: factory.MakeUser(c, nil) isn't obvious [21:50] factory.MakeUser(c) is slightly more so IMO [21:50] * thumper misses python [21:51] thumper, I know the feeling [21:52] thumper, but I'm not sure that even python's varargs aren't more trouble than they're worth [21:52] thumper, explicit is better than implicit [21:53] python has the advantage of explicit default args [21:53] fwereade_: IMO, nil isn't explicitly stating what you want, you have to go look up what nil is [21:53] whereas not having nil is being explicit :) [21:54] thumper, (python has default args with some really entertaining behaviour, but anyway) [21:54] sure... [21:54] thumper, I can read it just as easily as nil=>no preference, and that as a stronger statement than no statement at all [21:54] I have a gut reaction to blind params, especially with nil [21:55] I don't care strongly enough to fight for long [21:55] speaking of which, [21:56] the whole user sub(super)command is in question in the spec [21:56] which I'm losing the will to fight as well [21:56] suppose I should write something up [21:56] thumper, oh really? I do think that we do ourselves no favours by polluting the command namespace [21:57] * thumper nods, I'll CC you on the email about it [21:57] thumper, cheers [22:03] these are always funny. just received a panic from: // can't happen, these values have been validated a number of times [22:03] panic(err) [22:05] katco: for some value of funny (i.e. not funny) [22:05] :) [22:05] thumper: haha [22:31] katco: hi, how'd you get on with the deps update? [22:32] wallyworld: still working...
ran into a bunch of panics using the head of the yaml lib [22:32] oh :-( [22:32] i had to update 3 sub repos to use the latest version of goyaml [22:32] that's what took the longest [22:32] np, sounds like you poked a hornet's nest [22:32] i think i'm going to try and sit on the commit that fixed the reported issue and see what that does [22:34] wallyworld: i did get my ~20 day old change landed :) [22:34] that made katco very happy [22:35] this run seems to be going better with commit 1b9791953ba4027efaeb728c7355e542a203be5e [22:41] yeah almost done. i'm going to stick with this one and submit a PR after tests have passed [22:52] fwereade_: (on the off chance you're in a reasonable timezone for now) you still around? [22:58] moin [23:02] wallyworld: well, now i have test failures because of goamz. i'm guessing it's because we're mocking environments and not specifying a signer. can this wait until tomorrow? [23:02] katco: be with you soon, just in a meeting [23:02] thumper: standup? [23:02] sorry, on my way [23:03] wallyworld: ok [23:12] davecheney: you one of the reviewers today? [23:14] katco: yeah, it can wait. sorry that it turned out to be more problematic than first thought [23:14] wallyworld: no worries... we're almost there [23:14] yep :-) [23:14] wallyworld: just have to find out where these mocked regions are [23:15] ok [23:15] wallyworld: alright, way past my EOD. going to spend some time with my daughter before she has to go to bed :) [23:15] talk to you tomorrow! [23:15] will do, thanks for taking the extra time :-) [23:22] ericsnow: ok [23:22] i have calls for the next 2 hours [23:22] i'll take a look after that [23:28] davecheney: cool, thanks [23:29] davecheney: https://github.com/juju/utils/pull/19 https://github.com/juju/juju/pull/462 https://github.com/juju/juju/pull/453