=== bigjools_ is now known as bigjools
axwthumper: you got loggo released I take it?01:05
axwon github01:05
thumperthe github name?01:05
axwthumper: about the configstore/chown thing. I was going to make it so we couldn't use sudo/root at all; as it is now, if you use sudo you'll end up with ~/.juju/environments owned by root. do you think it's worth the trouble, or should people just learn not to do that?01:07
thumperI've been thinking a little about that01:07
thumperwhat if it is root?01:07
thumperwhat testing was doing:01:08
axwas in, not using sudo at all?01:08
thumpersudo su other01:08
thumperthen that shell does "juju bootstrap"01:08
thumperI'm not sure we should be that opinionated01:08
axwwell I think we'd have to check the current uid, and only do something if uid == 001:08
axwbut yeah... I'm starting to think it's just overly complicating things01:09
thumperI think it is safer to just leave it how it is now01:09
thumperand not add anti-features01:09
axwdo you think I should take out the bit in local that prevents root then?01:09
axwI think I may as well, if we're not doing it elsewhere01:10
axwthumper: by which I mean, the calls to ensureNotRoot in provider/local/environ.go01:10
thumperthat one I think may be worth keeping for now01:11
thumperto train people to do it right01:11
thumperotherwise people doing what they have always done will create broken systems01:11
wallyworldthumper: axw: do we need to discuss that critical destroy-env bug?01:12
axwyeah ok. if they were doing it before, then they probably have existing dirs anyway01:12
axwwallyworld: the what now? :)01:12
* wallyworld digs up the bug number01:13
thumperwallyworld: can you just chat with axw about it? I'm in the middle of something01:13
_mup_Bug #1272558: destroy-environment shutdown machines instead <ci> <destroy-machine> <intermittent-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1272558>01:13
wallyworldthis one has been causing the CI folks a lot of grief01:14
wallyworldi haven't looked closely just yet01:14
wallyworldtrying to get stuff done for tomorrow01:14
wallyworldthey were suspecting recent destroy env work01:14
wallyworldbut that's just hearsay01:14
axwI will see if I can reproduce01:15
wallyworldok ta :-)01:15
thumperaxw, wallyworld: now that we are four (when waigani is around), how do you feel about a daily standup around this time?01:28
thumperaim is for 5-10min max01:29
thumperbut gives us a way to touch base and know if anyone is blocked01:29
axwthumper: sgtm01:30
axwwhere is waigani anyway?01:30
wallyworldhis sheep died01:31
thumpershall we do a quick stand up now?01:32
axwthumper: in errgo, did you consider having an Annotator type and DefaultAnnotator var, rather than RecordLocationWithAnnotations/FilePathElements globals?01:50
axwthat is more "idiomatic Go"01:51
thumperbut I thought this could be something that you want enabled for debug builds01:51
thumperand disabled for production01:51
thumperhow do you do that in idiomatic Go?01:51
axwso, IMHO, you'd do that in the consumer. i.e. you have juju-core/errgo or something, which uses errgo/errgo but has its own configuration01:52
axwjuju-core/errors would be a good place for it actually.01:52
* thumper sucks in his cheeks01:52
thumperhowever then the runtime.Caller is all fucked up01:52
thumperI'll finish this up then we can bikeshed :-)01:53
axwno worries01:53
thumpergo has short circuit boolean evaluation right?01:55
thumperwallyworld: are the trees I need to update on the gobot in /home/tarmac/trees?02:58
wallyworldi think so yes02:59
wallyworldthere was another trees dir also but i can't recall without looking what it was for02:59
axwthumper: it just occurred to me that local provider no longer prevents units on machine 0, since it's using the common state init code03:14
axwI'll put a bug in for that...03:14
thumperyeah... I did mean to ask about that :)03:14
axwwallyworld: why can't people just set image-metadata-url to cloud-images.ubuntu.com/daily or whatever? rather than another attribute for the stream03:25
wallyworldbecause we don't want them having to know about cloud-images.u.com per se03:25
wallyworldthe stream name is used to form the product id03:26
axwah yeah03:26
wallyworldit's a bit messy03:26
waiganiaxw: hello :)05:39
axwwaigani: heya05:39
waiganiCan I ask you a Go question?05:39
axwyes go for it05:39
waiganiMy question is in the comment, just playing with channels to understand them better05:40
waiganiI get that the for loop blocks and waits for a new val to be added to the channel05:40
axwwaigani: the program is exiting before the 4th goroutine is executed05:41
waiganiyes but why?05:41
axwyou're not waiting for the goroutines to complete, so the main func just falls off the end05:41
axwone sec05:42
axwso "c" is an unbuffered chan, which means that the sender blocks until the receiver is ready. the background goroutine gets the value "4", but there's no guarantee it does anything with that value before the scheduler switches back to the main function and then exits05:43
axwdoes that make sense?05:43
waiganiI thought <-c within the for loop blocks and waits for any new vals added to the channel?05:44
axwyes it does. so the value was transferred. the goroutine has the value 4, it just didn't get as far as printing it05:45
axwwaigani: http://play.golang.org/p/9GYN3InLIH05:45
rogpeppe2axw: ping08:38
axwrogpeppe2: pong08:39
rogpeppe2axw: i've been looking at making the bootstrap stuff a bit better when things go wrong, and i'm wondering what you think a good approach is here08:40
rogpeppe2axw: in particular, when something goes wrong, we just see "Exit status 1" and no output from the failed commands08:40
rogpeppe2axw: (and then of course the instance is instantly shut down, so there's no way to find out what the issue might be)08:40
axwrogpeppe2: yeah I was planning to do something about that at some stage... ;)08:41
rogpeppe2axw: i'm thinking of saving all Stdout and Stderr and spitting it out when things fail08:41
axwrogpeppe2: probably tail cloud-init-output.log08:41
axwhmm or that I guess08:41
axwrogpeppe2: all stdout/stderr goes into /var/log/cloud-init-output.log, IIRC08:41
axwso you could just tail/cat that I think08:42
rogpeppe2axw: ah, that's useful to know - i thought that might not be happening any more08:42
rogpeppe2axw: but how many lines to tail?08:42
axwrogpeppe2: yeah... maybe just cat it on failure?08:42
rogpeppe2axw: the whole thing's going to be quite big, isn't it?08:43
axwyes I think it will get pretty big, with all the apt-get update output, and so on08:43
axwalternatively we could grab the file and write to disk, but that's only useful when calling from the CLI08:44
rogpeppe2axw: one possibility might be to try to work out which command failed and print the output from only that08:46
rogpeppe2axw: do we send commands one-by-one, or as a big bunch?08:46
axwrogpeppe2: no, as a single script08:47
axwthat would be a good default08:47
axwthere may be cases where command failure needs more context, but I can't think of any08:47
* axw thinks how to get there08:48
rogpeppe2axw: so, it would presumably be possible to print some distinctive text before each command, then print output from the last occurrence08:48
axwyes, that would work08:48
rogpeppe2axw: ok, i'll perhaps do that today08:49
rogpeppe2axw: (we've seen that problem quite a bit here)08:49
fwereadeaxw, if you're talking about the CA cert in https://codereview.appspot.com/56560043/ I think everyone *does* know it08:50
axwfwereade: oh? I thought it was a secret. it gets stripped out before we bootstrap...?08:50
fwereadeaxw, the private key does,for sure08:51
rogpeppe2axw: it shouldn't involve changing anything outside cloudinit/sshinit, right?08:51
fwereadeaxw, but agent conf always has the CA cert for the environment, and I'm pretty sure it's available over the api as well08:52
axwrogpeppe2: right, I think that's the most sensible place to do it, and it should be contained08:52
rogpeppe2axw: the CA cert is public info08:52
axwfwereade: makes sense.08:52
axwI had it in my mind we didn't publish either, but of course we have to08:53
rogpeppe2it would be good if the CA cert contained the environment UUID but that's for another day08:53
fwereaderogpeppe2: yeah, +100008:53
thumperhi rogpeppe208:53
fwereadetomorrow? ;p08:53
thumperrogpeppe2: are you in bluefin?08:53
rogpeppe2thumper: hi. yeah.08:54
=== rogpeppe2 is now known as rogpeppe
thumperrogpeppe: have you seen mramm around yet today?08:54
rogpeppethumper: i have08:54
thumperrogpeppe: could you tell him that I have a few questions for him and can he get on irc?08:54
rogpeppethumper: ok; i think he's in the scrum-of-scrums meeting right now, but will collar him after that08:55
thumperscrum of scrums?08:55
rogpeppethumper: his term08:55
thumpersounds dangerous08:55
rogpeppethumper: meeting of the team leads08:55
axwfwereade: re the ssh keys change, do you just mean you'd like to see an explicit "-i" for each of the default keys in ~/.ssh?08:55
fwereadeaxw, I'm wondering whether we should use explicit -i a bit more than we do, yeah08:56
axwfwereade: it's not necessary for the default keys, which is why I don't add it08:56
axwthey just get tried anyway08:56
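For illustration, the distinction being made (flags are standard OpenSSH; the ~/.juju/ssh path is the one axw mentions, and the key filename is a made-up example):

```shell
ssh ubuntu@10.0.3.1                             # default keys (~/.ssh/id_rsa etc.) tried implicitly
ssh -i ~/.juju/ssh/juju_id_rsa ubuntu@10.0.3.1  # explicit -i only needed for non-default keys
```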
fwereadeaxw, ok, yeah, if they have a .ssh then they probably do have ssh installed :)09:00
fwereadeaxw, cheers09:00
axwfwereade: ah you meant for using in go.crypto?09:01
fwereadeaxw, yeah, I was just wondering if there was any gap in our coverage as it were09:01
axwactually I did that at one point, but then realised that it wouldn't work when keys are encrypted (require a passphrase)09:01
axwwhich is (I hope!) most of the time09:01
rogpeppethumper: i saw what seemed like a local provisioner bug yesterday BTW09:02
fwereadeaxw, so one would indeed hope ;)09:02
axwpeople can drop/symlink whatever keys they want to use into ~/.juju/ssh though09:02
thumperrogpeppe: oh?09:02
rogpeppethumper: perhaps you might have an idea of what might be going on09:02
thumperI'm currently upgrading to trusty09:02
thumperto make sure they get ironed out quick smart09:02
rogpeppethumper: so, lxc-start was failing (for some as-yet-unknown reason)09:03
thumperI've seen a few weird ones09:03
rogpeppethumper: so we could see the instances, and each one with a state holding the lxc-start status09:04
rogpeppethumper: but we couldn't get the instances out of that state09:04
rogpeppethumper: one mo, i'll try to find the bug number09:04
rogpeppethumper: no bug reported yet, oh well09:05
* thumper shrugs09:05
thumperif you see it again, please file one09:06
thumperand assign it to me09:06
thumperI'll see it then09:06
rogpeppethumper: the output status is in agent-state and something like "lxc-start: cannot get init-pid-status"; the problem seems to be something to do with network bridge naming09:06
rogpeppethumper: but that wasn't the particular provisioner issue i was thinking about09:07
rogpeppethumper: here's the error that we saw:09:09
rogpeppemachine-3: 2014-01-28 15:21:12 ERROR juju.container.lxc lxc.go:129 container failed to start: error executing "lxc-start": command get_init_pid failed to receive response09:09
thumperrogpeppe: this is on maas?09:10
thumperis it still using precise?09:10
rogpeppethumper: might be trusty actually09:15
thumperdo we have trusty charms?09:16
rogpeppethumper: no, bootstrap node is trusty, unit node is precise09:16
thumperrogpeppe: is this local provider?09:17
rogpeppethumper: no09:17
rogpeppethumper: yes09:17
thumperI'll look tomorrow09:17
thumperand see if I can find anything09:17
rogpeppethumper: the particular issue i'm concerned about isn't actually that one09:19
thumperrogpeppe: stick it in an email to juju-dev or just me if you prefer09:20
rogpeppethumper: will do09:20
rogpeppethere's another bug i've just seen09:21
rogpeppejuju bootstrap doesn't appear to work with --upload-tools in 1.17.109:21
rogpeppeanyone got an idea of what might be happening here? https://pastebin.canonical.com/103759/09:22
thumperrogpeppe: mramm mentioned that it looks like the network bridge isn't coming up09:23
thumperrogpeppe: I wonder if the behaviour has changed09:23
thumperwe are kinda abusive in the maas provider09:23
thumperwhere we restart networking09:24
mrammthe other thing that we should work out09:24
thumperrobie basak suggested a different approach09:24
thumperwhere we go "ifdown eth0", make the mods, bridge eth0 with br0, then go "ifup br0"09:24
thumperand that would be better09:24
thumperbut I never got time to try it09:24
thumperrogpeppe: is that enough info?09:25
rogpeppethumper: not really. i had difficulty following the lxc container code, i'm afraid09:25
rogpeppethumper: a bit of a twisty maze09:25
thumperrogpeppe: this is not in the container code09:25
thumperbut inside the maas provider09:25
thumperwhere we create the cloud-init scripts09:25
rogpeppeah ok09:26
thumperas we create a different network bridge for the host09:26
thumperso the containers inside can get to the maas dhcp server09:26
* thumper is being called to head to the lounge09:26
thumpergood luck09:27
rogpeppethumper: ok09:27
rogpeppethumper: the ifup change was how they fixed it09:27
rogpeppethumper: (manually, of course)09:28
rogpeppefwereade: it looks like the simplestreams stuff is borked in 1.17.1. or *something* is anyway.09:29
rogpeppedoes anyone know if streams.canonical.com is supposed to have useful data these days?09:29
fwereaderogpeppe, worst-case sinzui and utlemming and I will sort it out when we're together this weekend, but I was hoping something would land sooner than that09:31
fwereaderogpeppe, looks to me like it still just completely lacks content09:32
rogpeppefwereade: currently we're unable to bootstrap at all09:32
fwereaderogpeppe, --source?09:32
rogpeppefwereade: hmm, i don't know about --source09:32
fwereaderogpeppe, it's in the 1.17.1 release notes09:32
mrammok, so we *need* to get this streams.canonical.com thing sorted out quickly09:33
mrammas in before we end up getting hammered for it in Cape Town :/09:34
mrammperhaps need is strong.  But it is bad for our users, bad for us, and will result in a bit of ungentle poking in SA.09:34
rogpeppefwereade: you mean --metadata-source, presumably?09:35
fwereaderogpeppe, ha, yes, sorry09:35
fwereademramm, so, the issue AIUI is that utlemming is currently crushed under the weight of paid public cloud work09:36
rogpeppefwereade: can you just provide a directory with the tools in?09:37
fwereademramm, I will follow up with arosales when he's online, it seemed on monday that we might be able to steal an hour's work for a quick solution but that hasn't happened09:37
rogpeppefwereade: or do you need the metadata there too?09:37
mrammthe alternative is to make juju not broken if it isn't there... :/09:38
mrammI know the right thing to to to move us forward09:38
fwereaderogpeppe, you need metadata09:38
fwereaderogpeppe,  If your workflow previously was to download the juju tools to a local09:38
fwereadedirectory, then bootstrap with the --source option to upload the tools to your09:38
fwereadeenvironment, you need to call “juju metadata generate-tools” per the previous09:38
fwereadeexample. See “juju help bootstrap” for more information.09:38
mrammfwereade: juju help bootstrap is nice!09:39
mrammfwereade: I also understand the pressure everybody is under09:39
mrammfwereade: so it's not a problem, just trying to troubleshoot next week09:40
fwereademramm, yeah, quite so09:40
rogpeppefwereade: juju help bootstrap doesn't say anything about generate-tools09:41
rogpeppefwereade:  and "juju metadata generate-tools --help" doesn't say anything about what's expected09:42
rogpeppefwereade: (i tried it on a directory containing a jujud binary, but it failed to find tools)09:44
fwereaderogpeppe, it's the same expected directory structure as juju-dist has always had09:45
fwereaderogpeppe, but yeah we should be more explicit about that09:45
rogpeppefwereade: i also tried with --metadata-source, with a directory containing jujud09:45
rogpeppefwereade: ahhh09:45
* rogpeppe tries to remember what format the directory was in09:45
fwereaderogpeppe, just put them in the tools subdir iirc09:46
rogpeppefwereade: no arch/series info?09:46
rogpeppefwereade: just tools/jujud ?09:46
hazmatrogpeppe, please capture some notes on what your doing.. and attach to https://bugs.launchpad.net/juju-core/+bug/126779509:46
rogpeppeit aaaallll broookeen09:47
fwereaderogpeppe, it's exactly how it's always been -- tgzs with names in the usual format09:48
rogpeppefwereade: ah, ok09:48
rogpeppefwereade: that's not very user friendly09:48
fwereaderogpeppe, sorry, tools/releases09:49
rogpeppefwereade: --upload-tools *should* work, right?09:49
hazmatrogpeppe,  i know.. that's why we all use upload-tools..  check the contents here (from that bug report) which has the generate-tools structure http://pastebin.ubuntu.com/6726391/09:49
hazmatyeah.. tools/releases/tarball.tgz should do it for generate-tools09:49
fwereaderogpeppe, you can just download the bits you need from juju-dist -- the fuckedness of streams.c.c is a distinct issue09:49
fwereaderogpeppe, and, yes, I thought yu *could* still use upload-tools, I thought this was about using the released tools, sorry09:50
fwereaderogpeppe, is *that* broken?09:51
rogpeppefwereade: it seems to be09:51
rogpeppefwereade: we can't bootstrap at *all*09:51
rogpeppefwereade: which is a major blocker currently09:51
rogpeppefwereade: see https://pastebin.canonical.com/103759/09:51
mrammshang says that he believes it was all working last night09:54
hazmatrogpeppe, this is on trunk?09:54
mrammjust not working today09:54
jamespagesinzui, please can you ensure that any PPA builds are using ~XXX suffixes otherwise the versions will conflict with distro09:54
rogpeppehazmat: this is on 1.17.1 as released09:55
fwereaderogpeppe, that is deeply bizarre, it seems to work here, let me dig09:55
mrammwe are not using EC2, and are on trusty09:56
hazmatrogpeppe, k.. testing.. with 1.17.1 tarball from https://launchpad.net/juju-core/+download09:56
mrammso there are some important differences09:56
fwereaderogpeppe, that "47927a59-4867-442f-8239-ae2d657f4475-tools/releases/juju-" bit is very weird09:56
rogpeppefwereade: yes09:57
hazmatfwiw.. bootstrap --upload-tools from (13.10) works with 1.17.1 tarball10:00
rogpeppefwereade: is it possible to use metadata generate-tools to generate metadata about tools in a local directory?10:00
hazmatrogpeppe, yes see that pastebin link i sent.. re private clouds  http://pastebin.ubuntu.com/6726391/10:02
fwereaderogpeppe, yes: grab the subset of the juju-dist bucket that you want/need, and point generate-tools at the base dir, then use --metadata-source10:03
rogpeppefwereade: i *think* i did that10:04
hazmatwith -d tools_dir10:04
rogpeppehazmat, fwereade: so what am i doing wrong here? http://paste.ubuntu.com/6836995/10:05
rogpeppejuju metadata generate-tools is currently hung on "finding tools"10:05
hazmatrogpeppe, can you run it with --debug10:06
hazmatand pastebin10:06
rogpeppehazmat: it's fetching all tools from somewhere10:07
rogpeppehazmat: http://paste.ubuntu.com/6837008/10:07
hazmatrogpeppe, it's hitting streams.canonical.com10:08
rogpeppehazmat: so is there a way i can tell it to generate metadata from my local directory?10:09
rogpeppehazmat: (from the tools in there)10:09
mrammhazmat: fwereade: thanks for helping with this -- it is a blocker for a couple of teams here in london10:09
hazmati thought  so.. but.. looking at it.. it looks like it just wants to store it..10:10
hazmaton the local dir, instead of using the local dir for tools, the pastebin docs are sent are confusing in that regard.10:10
rogpeppehazmat: that's how i interpret it too10:10
hazmatrogpeppe, i'm a bit more concerned with why --upload-tools is failing10:10
rogpeppehazmat: well, i want to know why normal bootstrap is failing too10:11
rogpeppehazmat: if generate-tools can get stuff from streams.canonical.com, why can't normal bootstrap?10:11
rogpeppewallyworld_: ping10:11
cgzthanks for the review axw10:13
fwereaderogpeppe, nobody can get anything from streams.c.c at the moment because there is no content :/10:14
rogpeppefwereade: so where's generate-tools getting all this from? http://paste.ubuntu.com/6837008/10:14
rogpeppefwereade: i don't *think* it's juju-dist, because juju-dist doesn't appear to have 1.17.1 in10:15
hazmatrogpeppe, 1.17.1 is  there in the testing/tools   @ https://juju-dist.s3.amazonaws.com/10:15
rogpeppeah, it does10:15
rogpeppei was looking in tools not tools/releases10:16
wallyworld_rogpeppe: hi10:16
* hazmat mramm don't think i'm helping.. but trying..10:17
rogpeppewallyworld_: do you know what might be happening here? https://pastebin.canonical.com/103759/10:17
* wallyworld_ looks10:17
* wallyworld_ hates 2fa10:17
rogpeppewallyworld_: oh, sorry, one mo10:18
hazmatrogpeppe, so it can do tool upload.. but you have to specify tools-metadata-url for the env to a local dir10:18
rogpeppewallyworld_: http://paste.ubuntu.com/6837045/10:18
hazmatfor the environ10:18
rogpeppehazmat: "it" ?10:19
rogpeppehazmat: juju metadata generate-tools ?10:19
hazmatrogpeppe, wallyworld_ re generating tools metadata for an environ with juju metadata generate-tools  from a local dir10:19
hazmatgoing through the src @ juju-core/cmd/plugins/juju-metadata/toolsmetadata.go10:19
wallyworld_rogpeppe: it looks like it didn't upload any tools in that pastebin10:20
wallyworld_rogpeppe: it should say "Uploading xxx kbytes"10:21
wallyworld_i also notice it found a jujud. i've only ever seen it have to compile jujud from source10:21
wallyworld_so maybe that other codepath is broken, not sure10:21
hazmati've always seen it use a pre-existing jujud binary..10:22
rogpeppewallyworld_: currently we can't bootstrap at all because of this issue10:22
wallyworld_i bootstrapped just on friday no worries10:22
wallyworld_on ec210:22
hazmatwallyworld_, i depend on that behavior fwiw.. re use jujud binary10:22
wallyworld_i don't think anything has changed in trunk since then?10:22
hazmatwallyworld_, they're having issues on trusty10:22
rogpeppewallyworld_: "found existing jujud" should mean that it can use that10:22
wallyworld_i was on trusty i think10:23
hazmatwallyworld_, i also just did a bootstrap today with the 1.17.1 tarball on ec2 .. worked okay (i'm not on trusty though))10:23
wallyworld_rogpeppe: sure. but i didn't really write that code so am not familiar with it. i can look though10:23
rogpeppewallyworld_: oh, sorry, i thought you did most of the simplestreams stuff10:23
wallyworld_rogpeppe:  i did10:23
hazmatwallyworld_, can juju metadata generate-tools  use a local directory to find tools?10:23
wallyworld_but the jujud vs compile from source is in a separate bit of code10:24
wallyworld_it's sort of independent10:24
wallyworld_hazmat: yes10:24
wallyworld_let me check the details10:24
wallyworld_hazmat: generate tools is just to produce the simplestreams metadata as you probably know10:24
hazmatwallyworld_, from the src it looks like tools-metadata-url for the env pointing to a local directory file:// ...10:25
hazmatwallyworld_, yes.. separate issues.. --upload-tools fail.. and no simplestream md makes normal bootstrap fail.10:25
wallyworld_once you have the tarballs and metadata locally, you can point bootstrap at that and it will upload that stuff to the cloud10:25
wallyworld_hazmat: lwt me check source - the tools-metadata-url should not be involved in generate-tools from a local dir10:26
wallyworld_hazmat: it should just be with the -d option10:26
wallyworld_juju metadata generate-tools -d blah10:27
wallyworld_where blah contains a releases dir with tarballs10:27
wallyworld_and the result is a streams dir with metadata next to the releases dir10:27
hazmatwallyworld_, that looks like where to place the md per its help description, not where to find the tools.. and it looks like it still goes to streams.canonical.com10:27
hazmatwallyworld_, ah.. so -d for both output  and input10:28
rogpeppeoh, tim's broken dependencies.tsv10:28
wallyworld_hazmat: when you say "find the tools" above, do you mean for bootstrap?10:28
wallyworld_cause that's different to generate-tools10:28
wallyworld_generate-tools is to take tarballs as input and produce the metadata so that the tarballs and metadata can be used for bootstrap10:29
wallyworld_bootstrap will take those and upload to cloud storage10:29
wallyworld_that is different and separate to using tools-metadata-url10:29
hazmatright.. that's what --metadata-source on bootstrap is for10:30
wallyworld_tools-metatdata-url is for when the tools and metadata have already been uploaded or exposed somewhere via a public url10:30
wallyworld_and yes, the generate-tools output is insitu10:30
wallyworld_cause then you have a dir tree that bootstrap can consume10:31
wallyworld_does that make sense?10:31
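Pulling wallyworld_'s explanation together, a hedged sketch of the workflow (the tools/releases layout follows the convention discussed in this log; exact paths and tarball names may vary by juju version):

```shell
# stage a tools tarball in the expected layout
mkdir -p ~/local-tools/tools/releases
cp juju-1.17.1-precise-amd64.tgz ~/local-tools/tools/releases/

# generate simplestreams metadata in-situ, next to the releases dir
juju metadata generate-tools -d ~/local-tools

# bootstrap from the local dir instead of streams.canonical.com
juju bootstrap --metadata-source ~/local-tools
```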
hazmatwallyworld_, i was ref'ing tools-metadata-url based on the src for juju-core/cmd/plugins/juju-metadata/toolsmetadata.go, which does this odd check on envtools.DefaultBaseURL, appending file:// to it10:31
wallyworld_hazmat: that is to allow people to just specify a dir name and it turns it into a url. it also allows people to forget to put the /tools suffix and it will deal with that robustly10:32
wallyworld_the envtools.DefaultBaseURL is overridden by the --metadata-source passed to bootstrap10:33
wallyworld_the --metadata-source hides streams.c.com so that bootstrap gets its data from the local dir10:33
hazmatwallyworld_, it makes sense.. except the part where its not working :-)10:33
wallyworld_which provider?10:34
hazmatwallyworld_, rog is using maas, i'm trying with ec210:34
wallyworld_ok. i used ec2 on friday10:34
hazmati think everyone in london this week is using maas10:34
wallyworld_i haven't tested maas. let me look at the CI dashboard10:35
hazmatwallyworld_,  just to be clear.. i mean generate-tools re ec2.. bootstraping with --upload-tools works okay for me (not on trusty).10:35
wallyworld_when i tested with ec2 on friday, i used --upload-tools10:35
wallyworld_so generate-tools fails?10:36
wallyworld_i'm on trusty now, can't recall if i tested when i was still on precise10:36
hazmatwallyworld_, for maas.. rogpeppe is getting a hang.. see bottom of this http://paste.ubuntu.com/6837008/10:36
rogpeppeapparently they can't bootstrap with 1.17.0 either now10:37
wallyworld_bugger. CI doesn't test maas it seems
wallyworld_i'm confused by that pastebin10:38
wallyworld_i didn't think juju-dist.s3.amazonaws.com was in the codebase anymore10:38
hazmatwallyworld_, i'm getting a different error.. with generate-tools http://paste.ubuntu.com/6837127/10:39
wallyworld_i'll check10:39
hazmatactually my error is just pointing to tools instead of .10:39
wallyworld_the dir for -d is the dir containing tools10:40
wallyworld_since it's then consistent with what's used for images10:40
wallyworld_so you can have a dir with tools and images metadata10:40
rogpeppeoh bugger, of course. godeps doesn't support git.10:40
hazmatwallyworld_, so  can the output of generate-tools be used to seed a private cloud for all users, if its placed in a 'magic' bucket?10:42
cgzrogpeppe: doesn't support how? just the update bit?10:43
wallyworld_that is the idea10:43
rogpeppecgz: no at all - i'm just doing it10:43
wallyworld_hazmat: with openstack, you can even put the url in keystone10:43
wallyworld_like we do for canonistack10:43
hazmatwallyworld_, and that bucket name varies?.. i'm wondering about maas atm.10:43
hazmatits just juju-dist everywhere ?10:44
wallyworld_hmmm. maas. i'm not 100% familiar with the conventions there as i've never run juju on maas10:44
wallyworld_um, not sure about juju-dist10:44
wallyworld_let me have a quick look10:44
jamwallyworld_, dimitern, rogpeppe, fwereade: standup?10:46
wallyworld_jam: sec, just helping on a support issue10:46
wallyworld_hazmat: the latest code looks like it just uses the same conventions as for other clouds - a tools dir in private storage for the cloud10:46
jamhazmat: unfortunately, AIUI MaaS doesn't have a shared "bucket/container" concept10:47
jamso you only have user-local stuff10:47
wallyworld_by private storage, i mean whatever env.Storage() returns10:47
jamthe Simplestreams stuff *does* support any-old HTTP location, which means you could host them somewhere to be shared.10:47
wallyworld_i think we require people using maas to --upload-tools right?10:48
wallyworld_if no public tools are available?10:48
wallyworld_hazmat: we're in a standup now. did you want to join after to ask questions?10:55
wallyworld_where the whole dev team is available?10:55
fwereaderogpeppe, any luck with tools-metadata-url or is that also problematic?10:58
hazmatwallyworld_, rogpeppe is the one who's blocked.. i'm mostly asking to help there and for future ref on others.11:03
rogpeppefwereade: i haven't tried that11:03
rogpeppefwereade: what should it be?11:03
rogpeppefwereade: some URL for juju-dist?11:04
wallyworld_hazmat: ok. rogpeppe can pop into the standup for help :-)11:04
rogpeppewallyworld_: will do11:04
wallyworld_hazmat: if you have further questions, let me know, maybe email me or whatever11:05
hazmatwallyworld_, will do, thanks11:05
rogpeppenatefinch: ha, sorry, there are no tests :-)11:17
natefinchrogpeppe: that's ok :)  I can write tests :)11:18
TheMuerogpeppe: heya, do you know the error "missing or bad WebSocket-Origin"?11:19
rogpeppeTheMue: sounds like you're not adding a websocket origin to the request11:20
rogpeppeTheMue: where is the error coming from?11:20
TheMuerogpeppe: it tells websocket.Dial, so should be the client11:21
rogpeppeTheMue: have you grepped for it in the source?11:21
TheMuerogpeppe: it's not in there, the whole line is "ERROR websocket.Dial wss://ec2-54-197-191-161.compute-1.amazonaws.com:17070/log?lines=10&entity=environment-3740322e-a899-4849-8fd2-a21c7e3b8d3a: missing or bad WebSocket-Origin"11:23
rogpeppeTheMue: have you grepped in the go source too?11:23
rogpeppeTheMue: and in the websocket source11:23
TheMuerogpeppe: not yet, will do11:23
rogpeppeTheMue: it's usually a good first step11:24
TheMuerogpeppe: origin killed, no stumbling over a certificate error *sigh*11:41
fwereademramm, so, how can we make noise about getting some damn resources so we can do CI on MAAS?11:46
bigjoolsfwereade: I suggested to Curtis that you share the maas lab11:46
natefinchfwereade: yeah, it's pretty pathetic that we aren't even doing CI with our own product11:46
mrammfwereade: I am making noise right now11:49
mrammfwereade: and will make some more this afternoon11:49
fwereademramm, tyvm11:49
mrammI also want to do CI on the orange box -- just because that's where we plan to do demos11:50
mrammif we are doing all our demos on a specific maas configuration with specific hardware -- that damn well better work11:50
mrammevery time11:50
mrammand the bundles we demo should get tested regularly11:50
rogpeppewallyworld_, fwereade: that doesn't seem to work: https://pastebin.canonical.com/103776/12:16
rogpeppewallyworld_: it's looking for index.json, which doesn't seem to be there12:17
rogpeppewallyworld_, fwereade: 2014-01-29 12:09:25 DEBUG juju.environs.simplestreams simplestreams.go:482 fetchData failed for "http://juju-dist.s3.amazonaws.com/streams/v1/index.sjson": cannot find URL "http://juju-dist.s3.amazonaws.com/streams/v1/index.sjson" not found12:17
fwereaderogpeppe, fuck, tools-url needs a trailing /tools12:18
rogpeppefwereade: yes, i just realised that12:18
fwereaderogpeppe, cool12:18
rogpeppefwereade, wallyworld_: that worked, thanks12:22
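The fragment they converge on, written out as an environments.yaml sketch — the key name and URL are as discussed above, the environment name and type are assumptions, and note the trailing /tools:

```yaml
environments:
  maas:
    type: maas
    tools-metadata-url: http://juju-dist.s3.amazonaws.com/tools
```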
rogpeppewallyworld_: time is thereby bought12:23
cgzmorning niemeyer12:27
niemeyercgz: Heya12:27
=== gary_poster|away is now known as gary_poster
gary_posterhey jamespage.  if you get a chance today, I'd like to get your thoughts about https://bugs.launchpad.net/ubuntu/+bug/1273865 (juju quickstart inclusion in Trusty).  Please let me know if you might have some availability.13:30
dimiternfwereade, https://codereview.appspot.com/53210044/  again? :)13:52
hazmatdimitern, fwereade how was the networking discussion?13:57
hazmatdimitern, i'm going through your network doc and adding some additional comments.13:57
dimiternhazmat, you mean about goamz?13:57
hazmatdimitern, yeah13:58
hazmatthere's like 20 forks adding various features on github..13:58
sinzuir2268 broke dependencies.tsv13:58
dimiternhazmat, we decided to extend goamz with vpc/networking support and put all new apis in the EC2 type, keeping the public API intact14:00
cgzsinzui: rogpeppe is aware, apparently godeps lacks git support, but is sprinting so don't think he's fixed it yet14:02
hazmatdimitern, there's one significant item that feels like it's missing: the current sec-group-per-machine need disappears.14:02
hazmatdimitern, since groups can be attached at runtime to machines14:03
vdsif I go install juju-core from LP trunk, from where do I get commands like relation-set?14:03
dimiternhazmat, that's not entirely related is it?14:04
dimiternhazmat, it's to do with the firewaller mostly14:04
hazmatdimitern, its an additional api for the set.. also why the EIP manipulation?14:04
jamespagegary_poster, looking14:04
sinzuiI would suggest reverting the change then. The project is not releasable or testable14:05
dimiternhazmat, i don't think we'll change how we handle security groups much, just add vpc support for them, at least for now14:05
rogpeppecgz: godeps does actually have git support,14:05
rogpeppecgz: but thumper broke dependencies.tsv unfortunately14:06
dimiternhazmat, the EIPs are needed to allocate/assign more than one public IP to a machine, and subsequently assign them to containers on that machine14:06
rogpeppesinzui: do you want to propose a fix, or shall i?14:06
hazmatdimitern, there seems to be some fundamental misunderstanding14:06
cgzrogpeppe: I thought it did... that's why I was confused earlier :)14:07
dimiternhazmat, oh? what is it?14:07
cgzif it's just thumper's normal spaces, I can fix that again14:07
sinzuirogpeppe, cgz, it has no revno14:07
dimiternsorry, bbiab14:07
hazmatdimitern, you don't need eips for a public ip on a vpc instance, they are a precious resource with small limits, that juju shouldn't be touching (ie like 5 per vpc) so also entirely inappropriate for that use. and the important aspect for containers is private ip addressing via additional veths and ips which isn't covered14:07
sinzuirogpeppe, cgz, The file is tab separated, but there is no revno for github.com/loggo/loggo14:08
rogpeppesinzui: it doesn't need a revno14:08
rogpeppesinzui: but it does need a trailing tab14:08
rogpeppesinzui: git doesn't have revnos14:08
sinzuiI have the file open, I will test and propose now. Thank your rogpeppe14:09
rogpeppesinzui: it should probably be a bit more lenient about the number of fields actually14:09
hazmatdimitern, nevermind i do see private addresses in the api changes, veth are also useful (esp cross subnet traffic) but perhaps not required14:09
hazmatdimitern, oh. and that's 5 vpc EIPs for an account not per vpc.14:10
sinzuirogpeppe, godeps is not happy once it can read the line:14:11
sinzui$ godeps -u dependencies.tsv14:11
sinzuigodeps: cannot parse "dependencies.tsv": cannot parse "github.com/loggo/loggo\tgit\t89458b4dc99692bc24efe9c2252d7587f8dc247b\t": unknown VCS kind "git"14:11
rogpeppesinzui: try go get -u launchpad.net/godeps14:11
jamespagegary_poster, do you publish releases of quickstart anywhere?14:11
rick_h_hazmat: what's the format for adding a kvm machine via the cli? I'm hunting for some docs but not finding any. Design is asking what to call creating a kvm instance and getting confused a bit in 'machine' 'bare metal' etc14:12
gary_posterjamespage: on call, in ppa, details here: https://bugs.launchpad.net/ubuntu/+bug/127386514:12
jamespagegary_poster, yes - I read that14:12
hazmatrick_h_, sub kvm for lxc.. ie juju deploy --to=kvm:1  ... its in juju deploy/add-unit --help14:12
sinzuisorry rogpeppe , godeps still reports the same error14:13
gary_posterso jamespage only ppa then14:13
jamespagegary_poster, how about releasing versions to pypi like hazmat does for juju-deployer?14:13
rick_h_hazmat: awesome thanks14:13
hazmatrick_h_, not the kvm, but the container syntax with lxc.. and then it's just s/lxc/kvm14:13
rogpeppesinzui: just to sanity check, what revision of godeps are you on?14:14
sinzui10, rogpeppe14:14
rogpeppesinzui: ah, 14 is the latest14:15
sinzuiwhoa, you're on 1414:15
rogpeppesinzui: try bzr pull14:15
sinzuiI did14:15
rogpeppesinzui: try removing the godeps directory and doing go get again14:16
sinzuirogpeppe, this is my problem, not CI, it always gets a fresh checkout of godeps14:16
dimiternhazmat, yeah, these limits will hit us at some point14:17
hazmatdimitern, some point? even a trivial environment would hit those.14:17
rogpeppesinzui: if CI gets godeps afresh, it should work ok14:18
hazmatit's not a good solution for the use case you gave of density.14:18
rogpeppesinzui: i just tried it and it seems to work (with the fixed .tsv file)14:19
dimiternhazmat, not necessarily - most services will happily work with private addresses, just the ones that need exposing will be an issue I think (after the limit is hit)14:19
sinzuirogpeppe, yep, It does. My branch is stuck because I was fighting with autobuilding in September.14:19
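The dependencies.tsv format issue above can be made concrete. Below is a hypothetical sketch (not godeps' actual code) of the more lenient line parser rogpeppe suggests: git entries identify revisions by hash only, so unlike bzr lines they have no revno, and the fourth field may be empty or absent, with a trailing tab left over.

```go
package main

import (
	"fmt"
	"strings"
)

// dep mirrors one line of dependencies.tsv:
//   project<TAB>vcs<TAB>revision-id<TAB>revno
// (type and field names are illustrative, not godeps' actual ones).
type dep struct {
	project, vcs, revid, revno string
}

// parseDepLine tolerates a trailing tab and a missing revno field,
// which is how git entries such as loggo's appear in the file.
func parseDepLine(line string) (dep, error) {
	fields := strings.Split(strings.TrimRight(line, "\t"), "\t")
	if len(fields) < 3 {
		return dep{}, fmt.Errorf("too few fields in %q", line)
	}
	d := dep{project: fields[0], vcs: fields[1], revid: fields[2]}
	switch d.vcs {
	case "bzr", "git", "hg":
	default:
		return dep{}, fmt.Errorf("unknown VCS kind %q", d.vcs)
	}
	// git has no sequential revnos, so the fourth field is optional.
	if len(fields) > 3 {
		d.revno = fields[3]
	}
	return d, nil
}

func main() {
	gitLine := "github.com/loggo/loggo\tgit\t89458b4dc99692bc24efe9c2252d7587f8dc247b\t"
	d, err := parseDepLine(gitLine)
	fmt.Println(d.vcs, d.revno == "", err) // prints: git true <nil>
}
```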
cgzif I bootstrap, it fails, I update the environments.yaml, and bootstrap again, it seems the ENV.jenv doesn't get updated14:19
cgzthat's not deliberate, right?14:19
hazmatdimitern, there seems to be a misunderstanding.. you don't need an eip for a public ip on a vpc instance, they are a precious resource with small limits, that juju shouldn't be touching (ie like 5 per vpc) so also entirely inappropriate for that use. and the important aspect for containers is private ip addressing via additional veths and ips which isn't covered14:19
rogpeppecgz: yes, you'll need to remove it14:20
cgzrogpeppe: that kinda sucks14:20
rogpeppecgz: we really need to fix that behaviour14:20
rogpeppecgz: agreed14:20
rogpeppecgz: if bootstrap fails, it should remove the .jenv file if it just created it14:20
cgzokay, bug 1247152 it seems14:22
dimiternhazmat, you need an EIP for each instance in a non-default VPC14:23
hazmatdimitern, no you don't14:23
dimiternhazmat, from the aws docs: We assign each instance in a nondefault VPC only a private IP address, unless you specifically request a public IP address during launch. To ensure that an instance in a nondefault VPC that has not been assigned a public IP address can communicate with the Internet, you must allocate an Elastic IP address for use with a VPC, and then associate that EIP with the elastic network interface (ENI) attached to the instance.14:24
hazmatdimitern, why do you think you need it?14:24
hazmatsee the unless part?14:24
dimiternhazmat, yes14:25
dimiternhazmat, but that won't be the case until you actually need a public address14:25
hazmatso if you launch with a public addr why do you need it?14:25
hazmatdimitern, you can't communicate to the internet from the instance without it14:25
sinzuirogpeppe, cgz : do either of you have a minute to review https://codereview.appspot.com/5826004414:26
hazmater. without at least a public address attached to the instance, eip, or nat instance setup14:26
rogpeppesinzui: will do14:26
dimiternhazmat, with a default vpc, you can - they use nat to relay private ip outgoing traffic to a public ip from the ec2-vpc pool, not your account14:27
hazmatdimitern, depending on more than one pub address per instance is basically broken, due to the low limits on eips (which are meant primarily not for pub addresses, but for static addresses)14:28
hazmatdimitern, default vpc is the same as launching with a public address, hence option 1 of the 3 i listed.14:28
hazmatand no it doesn't use nat.14:28
hazmatnats are for private subnets outbound traffic, default vpc, is basically public subnet with every instance getting a public address at launch by default.14:29
dimiternhazmat, "We assign each instance in a default VPC two IP addresses at launch: a private IP address and a public IP address that is mapped to the private IP address through network address translation (NAT)."14:29
hazmatwhich is transparent to the user, it's effectively ec2 internal impl.. the traditional nat in ec2 vpc is something very different14:30
dimiternhazmat, so for the default vpc we can still assign additional EIPs14:30
hazmatdimitern, ie.. this is nat for vpc.. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html14:31
hazmathmm.. bad link.14:32
hazmatdimitern, yes.. up to 5 max for a user's entire account, and then things break.14:33
dimiternhazmat, so what can we do? there's a form you can fill in to raise your limits I saw somewhere14:35
hazmatdimitern, yes.. and justify use.. i'm not sure "juju needs it" is going to be a good response.14:35
dimiternhazmat, "juju needs it" won't do, because it's per account14:36
dimiternhazmat, so users that need to deploy more than 5 machines/containers within a non-default vpc and all of them need EIPs, can be advised to file that form14:37
gary_posterjamespage: sorry, off call.  would putting it on PyPI help Ubuntu packaging?  To be clear, the story we want to advertise is something to the effect of "To get started with Juju, run `sudo apt-get install juju-quickstart && juju-quickstart [bundle name]` to immediately be walked through installation, configuration, and deploying your first full working solution on top of Juju`14:43
jamespagegary_poster, thats fine - but I'd prefer that we have something from juju-quickstart as an upstream that is 'released'14:45
jamespagerather than some arbitary snapshot that we agree on14:46
hazmatdimitern, and juju is going to handle that partial failure mode well.. and it's less than 5 if they're doing standard eip jobs (bastion hosts, nat instances, etc). designing something that breaks with real world use feels quite odd. amz is pretty stingy with eip allocations. but maybe it's okay.. this use of eips wouldn't fly with any aws vpc user i know, but they're all using elbs (public or private) for the entry points / expose endpoints.14:48
gary_posterjamespage: ah, ok.  makes sense.  we'll get that done ASAP and I'll update the bug?14:48
jamespagegary_poster, sounds good - I'll cut a package for the archive and upload sometime this week14:48
gary_posterawesome thanks!14:48
jamespageyour packaging branch generally looks good btw14:49
jamespagejust needs a few extras for the archive14:49
hazmatdimitern, the reality check is no public cloud hands out public addresses like candy.. neutron is exposed in rackspace and hpcloud, but you can't get additional pub ips there.14:49
dimiternhazmat, are you saying we can alleviate some of the issues by using elb?14:49
hazmatdimitern, i'm saying real world usage of aws, doesn't expose services directly, they front with elb.14:49
dimiternhazmat, right14:50
dimiternhazmat, how about the dense containerization story?14:50
dimiternhazmat, it seems such deployments are more suited for private clouds on hyperscale hardware + openstack more than on ec214:51
hazmatdimitern, even there what does expose mean?14:51
hazmatdimitern, a) firewaller not implemented.. b) a generic iptables firewall would be useful across all of them14:52
hazmatc) its not a public ip addr14:52
hazmatdimitern, the story for private clouds is basically the same as public .. if you subtract the delta on expose.. its all about private networking14:53
=== gary_poster is now known as gary_poster|away
=== gary_poster|away is now known as gary_poster
dimiternhazmat, well, you can still have a large deployment with 100s of hadoop nodes and only a handful of web servers that take tasks or display results14:54
hazmatdimitern, dense containerization isn't really about lots of public endpoints.. its about efficiency14:54
dimiternhazmat, and in that case, you'll need EIPs only for the web servers and a nat instance14:55
hazmatdimitern, but you don't need it for the web servers.. you already have a public ip there14:55
hazmatand you wouldn't have a nat instance created by juju..14:56
rogpeppeaxw: ping14:56
rogpeppeaxw: (i can be hopeful :-])14:56
cgzrogpeppe: he's almost certainly asleep :)14:56
axwam not :)14:57
dimiternhazmat, it seems juju needs to manage a nat instance anyway14:57
hazmatdimitern, why?14:57
axwrogpeppe: what's up?14:57
dimiternhazmat, and only assign EIPs to exposed machines14:57
hazmatdimitern, nat instances are for a very particular use case14:57
rogpeppeaxw: looking at sshinit.ConfigureScript14:57
dimiternhazmat, because all the other nodes will need net access14:57
hazmatdimitern, did you see my nat instance vpc link?14:57
dimiternhazmat, yes14:57
hazmatdimitern, that's why you can give them public ips when launching.14:57
hazmatdimitern, maintaining an ec2 nat instance is a pita.. and it's a huge spof without a bunch of hacky scripts.14:58
hazmatwell. it's not a pita. but eliminating the spof is14:58
dimiternhazmat, non-default vpc, only private addresses unless added at launch (which we need not do unless we know the instance will need it), a nat instance for all the other nodes + EIP for it14:58
hazmatdimitern, you just always create with one14:58
rogpeppeaxw: i'm looking at sshinit.ConfigureScript14:59
rogpeppeaxw: and wondering how the stderr logic could possibly work14:59
dimiternhazmat, and you hit the wall after the 5th instance14:59
hazmatdimitern, those are not EIPs14:59
rogpeppeaxw: first i wondered why there were two nested ()s14:59
hazmatthey're random public ips.. eips are static addresses associated with the account14:59
dimiternhazmat, which are those?14:59
rogpeppeaxw: then i saw that AFAICS there's nothing to separate stdout from stderr15:00
hazmatdimitern, if you launch a vpc instance with a pub addr it's not an eip allocation15:00
dimiternhazmat, in a non default vpc eips are the only way to have a public address, which is most likely our case15:00
hazmatdimitern, that's not true15:00
dimiternhazmat, it is15:00
axwrogpeppe: just a moment, context switching15:01
dimiternhazmat, according to the docs15:01
rogpeppeaxw: np15:01
hazmatdimitern, you're reading them wrong15:01
* hazmat sighs15:02
dimiternhazmat, "We don't automatically assign a public IP address to an instance that you launch in a nondefault subnet. Therefore, if you want an instance in a nondefault subnet to communicate with the Internet, you must either enable the public IP addressing feature during launch, or associate an Elastic IP address with the primary or any secondary private IP address assigned to the network interface for the instance."15:02
hazmatdimitern, OR15:03
hazmateither you spec pub at launch.. which is not an eip allocation15:03
hazmator you runtime attach an eip15:03
hazmatso launch it with a pub15:03
axwrogpeppe: stdout and stderr are combined in our usage of cloud-init15:03
rogpeppeaxw: yeah15:03
rogpeppeaxw: so that logic is unlikely to have been tested15:04
dimiternhazmat, ok, I think I see now, sorry15:04
rogpeppeaxw: AFAICS if you specify stderr, it will be lost.15:04
rogpeppeaxw: or rather, not redirected anywhere15:05
axwrogpeppe: it won't be lost, it'll go into /var/log/cloud-init-output.log15:05
axwsorry just a sec15:05
rogpeppeaxw: i don't think so15:05
rogpeppeaxw: because if stderr is specified, there appears to be nothing that actually redirects stderr15:05
dimiternhazmat, ah, unless there are more than one network interface attached to the instance15:06
rogpeppeaxw: i *think* what you want is (commands) >stdoutfile 2>stderrfile15:06
hazmatdimitern, none of this changes imo, that eip usage like this is a bad idea, trying to break one pub addr per instance is not going to be portable to any other public cloud, it's entailing a lot of work for a minute gain that is inherently limited in its scale, and won't work with any other cloud.15:06
axwrogpeppe: true, the SetOutput value would need to be mangled to get it to do the right thing15:06
rogpeppeaxw: whereas currently it's ((commands) > stdoutfile) > stderrfile)15:06
axwrogpeppe: well, the "> ..." bit is specified in cloudinit.Config.SetOutput15:07
axwbut it'd be awkward15:07
dimiternhazmat, from juju's perspective, we need to be able to assign multiple public addresses to an instance, regardless of cloud15:07
axwand you're right, we don't do it and it's not tested15:07
dimiternhazmat, in order to use these public addresses for containers running on that instance15:08
axwrogpeppe: do we need to separate stdout/stderr?15:08
rogpeppeaxw: not too awkward, except that i don't know if bash allows redirecting stderr to a pipe. maybe you can do: foo 2| bar15:08
rogpeppeaxw: not really15:08
rogpeppeaxw: we could just remove all the related logic15:08
dimiternhazmat, it seems the only way to achieve that on EC2 is using VPC+EIPs15:08
dimiternhazmat, do we agree so far?15:09
axwrogpeppe: if we did that, I'd prefer if we just removed it all the way up to and including juju-core/cloudinit15:09
hazmatdimitern, there are other techniques.. and how do you achieve that on hp cloud or rackspace, or google, or joyent or azure.15:09
hazmatquite simply... you don't15:09
axwto prevent someone using something that's not supported all the way down15:09
rogpeppeaxw: yeah, that's what i was thinking15:09
hazmatdimitern, fwereade would be nice to schedule a followup meeting.15:10
dimiternhazmat, on other clouds that support assigning multiple public ips to an instance, it will work just as well15:10
dimiternhazmat, with the relevant changes to the libraries for openstack, joyent, etc.15:10
dimiternhazmat, yes, a meeting will be nice15:10
axwrogpeppe: you can do "2>&1 |", but I don't know about piping without stdout...15:11
dimiternhazmat, but all this doesn't really change anything re goamz changes - we need these extra API calls anyway, no matter how we decide juju should use them15:11
hazmatdimitern, other clouds? there are no other public clouds that allow that.. i just listed a bunch, and none of them do. and amz does it at a very limited scale (and those addresses have primary uses for other purposes) and for private openstack, its not a public address.15:11
hazmatdimitern, fair enough15:12
hazmatdimitern, pls, pls move goamz to github15:12
hazmatdimitern, there are so many forks out there because its not on github15:12
dimiternhazmat, it's not up to me as you know :)15:12
hazmatniemeyer, , pls, pls move goamz to github15:12
axwrogpeppe: see bottom of http://www.tldp.org/LDP/abs/html/io-redirection.html15:12
rogpeppeaxw: you can do it AFAIR15:12
rogpeppeaxw: yeah, that's similar to the approach i'd use15:13
dimiternhazmat, ok, so by "public ip" I mean "accessible from outside the cloud somehow", regardless public or private15:14
rogpeppeaxw: but i think we can just forget it all15:14
axwrogpeppe: yep, +1 to deleting code15:15
niemeyerhazmat: With fwereade on a call.. biab15:15
hazmatdimitern, as another example of where that's moot.. one use case people do with vpc is to extend their org ip address space into the public cloud via directconnect or vpn connectivity, so the subnets ip ranges are from the org and traffic gets routed back through the org as egress. the whole org gets access though because it's all part of their network.15:16
axwsleepy time. good night folks15:18
dimiternhazmat, that's the vpn story - gateways on both sides of the vpc subnet15:18
hazmatright but what does expose mean there... vpns make everything 'accessible outside of the cloud somehow'15:20
hazmatwithin the org15:20
arosalesfwereade, good afternoon15:20
arosalesfwereade, current status on simple stream is sinzui is making the jenkins workflow that he will hand off to utlemming. utlemming has adjusted priorities to make this happen asap.15:21
fwereadearosales, <315:21
sinzuiarosales, fwereade I am testing the simplestreams job now15:21
dimiternhazmat, yes, and in this case private ips are in fact "public" (using the above definition) - but that's a good point - once we have networks in juju, that vpn network that spans from the org into the cloud will be set as a public network15:22
arosalessinzui, thanks I know you have been hitting it hard lately on QA, tools, reporting, bugs, and releases15:23
arosalessinzui, utlemming should be available once your testing is complete and you need to add the workflow to the build server15:23
arosalesmramm, ^15:23
mrammarosales: perfect15:24
mrammarosales: sinzui: thanks for jumping on this15:24
* arosales owes sinzui a drink of his choice next week :-)15:26
dimiternfwereade, review poke if you have 5m?15:26
=== Guest12754 is now known as wedgwood
niemeyermramm, fwereade: It actually crashed15:43
niemeyerGood timing15:43
fwereadedimitern, I have 5m now, on it15:43
dimiternfwereade, cheers!15:44
niemeyerMaybe they have speech detection for "take care"15:44
niemeyerhazmat: So, using github would be good, but as we discussed in the call, the way goamz is organized also encourages people to fork away15:45
niemeyerI'd like to solve that at some point15:46
gary_posterjamespage: https://pypi.python.org/pypi/juju-quickstart has a 1.0 and I added a comment to bug with link, FWIW.  Thanks again, and please let me know if we should do anything else.15:46
hazmatniemeyer, i think on github that people will push pull requests.. it's a much more common etiquette... and helps build into a central core for all amz svc usage when people expand usage.. but understood re goamz structure and svc mapping.. i just don't think that's the underlying issue15:47
hazmatmost of the forks are adding features to extant services (esp big ones like ec2), a few do new services (dynamodb) but that's not as common.. i think having a 'canonical' repo on github will become a magnet for both.15:48
niemeyerhazmat: I think it is, actually.. the API is gigantic, and people have to fork for every single field we didn't care to wrap15:48
hazmatniemeyer, so that's more fundamental.. i thought you meant more of a service split of the pkg.. but yeah.. addressing the fundamental of the api service would be good.15:49
hazmati know you're not a fan.. but i still think auto-gen from the api json would be a win.15:49
mrammhazmat: that does feel like a reasonable possibility, definitely a theory that is worth testing15:50
niemeyerhazmat: There are better ways to do it15:50
hazmatand could cover a param passing style that is extensible15:50
niemeyerhazmat: Auto-generation leads to a crappy and undocumented API15:50
mrammhazmat: niemeyer: I mean the github bit, not necessarily the auto-generation -- I have not thought that through at all.15:51
niemeyerhazmat: We can have an extensible and dynamic interface without that15:51
niemeyerhazmat: As we do in lpad, for example15:51
hazmatniemeyer, the aws autogen and gce autogen do include docs though.. agree though its not always the most idiomatic interface15:53
niemeyerhazmat: And they're usually poor docs, often making no sense for the wrapped interface15:53
hazmatits more about just getting solid coverage of the huge api in a single step15:53
niemeyerhazmat: If we were to auto-generate, we can as well just build a dynamic layer and point people to the upstream docs15:54
niemeyerWhich is what other libraries do15:54
natefinchhazmat, niemeyer:  if people are forking to add more to the API.... could we not just then encourage them to issue a pull request to put them back in the base repo?  Isn't that the whole point of open source collaboration?15:54
hazmatniemeyer, so would you be able to move goamz to github.com/juju/goamz ?15:54
niemeyernatefinch: Kind of.. I don't want to be the one in the front line of such an infinite number of pull requests adding tiny bits each15:55
hazmatniemeyer, or do you want to explore the extensible/dyn interface more first15:55
hazmatniemeyer, yeah.. boto had the same issue.. hence botocore based on autogen15:55
hazmatre infinite pull requests15:56
niemeyerhazmat: I shouldn't be the one moving that, if a decision is made15:57
niemeyerhazmat: I haven't been working on it, nor depend on it for anything I'm working on15:57
natefinchhazmat: pretty much anyone in core can move it15:58
hazmatniemeyer, gotcha. ok.15:58
niemeyerhazmat: It'd be wonderful if we could solve the API extensibility issue.. I have a path, but haven't had much of an incentive to stop what I'm doing to fix it15:58
niemeyerhazmat: Rather than being immediately helpful, this will actually create more work for other people15:58
hazmatnatefinch, i had asked dimitern about doing it as part of the networking work, and he wanted niemeyer's ok first i think15:58
niemeyerhazmat: We can talk next week15:59
hazmatniemeyer, sounds good15:59
niemeyerhazmat: Regarding moving, I'm happy either way15:59
niemeyerhazmat: We should have a clear understanding of why we're doing it, though15:59
niemeyerhazmat: "because we'll have pull requests" doesn't look like a good answer15:59
niemeyerhazmat: We have had pull requests now, and people seldom step up to care16:00
natefinchniemeyer: because sabdfl said so? ;)16:00
niemeyernatefinch: What?16:00
hazmatniemeyer, they do on gh and it's part of the social culture, and they're forking there; it's a trivial process for them to push it back.16:00
niemeyernatefinch: I have no idea about what you mean by that16:01
niemeyerhazmat: I mean we *have* pull requests16:01
niemeyerhazmat: and we *have* people in the mailing list asking questions16:01
niemeyerhazmat: and we don't have IMO a good maintainership story16:02
hazmatniemeyer, understood, i'm saying we'll likely get more.. and decrease the long lived extant forks that i see out there.16:02
niemeyerhazmat: Having *more* requests won't solve that issue :)16:02
hazmatthe maintainer story is an issue16:02
hazmatprobably the biggest, and a good informal topic for next week16:03
rogpeppeanyone know what revno 1.17.0 corresponded to?16:37
natefinchrogpeppe: Any reason not to use a RWMutex in SharedValue instead of a regular mutex?16:46
hazmatrogpeppe, 217316:52
hazmatrogpeppe,  $ bzr tags16:52
rogpeppenatefinch: could do perhaps, but it's an optimisation that's not really worth it17:11
rogpeppenatefinch: making it an RWMutex would make the code more complex17:12
rogpeppenatefinch: we don't hold the lock for any length of time, so the chances of contention are extremely slight, particularly in our use case17:12
natefinchrogpeppe: it barely makes the code more complex.  It adds zero lines, just instead of calling Lock in Get you call RLock.17:13
natefinchrogpeppe: I'm sure it doesn't matter for our purposes, just seemed like the right thing to use.17:14
rogpeppenatefinch: i generally think of RWMutex as an optimisation - it makes the code a little harder to reason about, and the gain in this case is zero17:14
rogpeppenatefinch: if clients were calling Get very frequently, it might be worth it, but most clients will call Getter17:16
natefinchrogpeppe: well, either way they're locking to check the value.  In fact... it might be better, because when the value changes and the Broadcast fires, every watcher is going to "simultaneously" try to relock and check the value17:18
natefinchrogpeppe: with an RWMutex they can all do it at the same time17:18
natefinchrogpeppe: I don't know at what level of watchers you'd get any perceptible benefit, but I also don't see the added complexity as being terribly large either.17:19
rogpeppenatefinch: fair enough; do it if you like. (you'll want to use RWMutex.RLocker)17:20
natefinchrogpeppe: yep17:20
rogpeppenatefinch: i can't get excited about it unless the frequency of changes is in the millions per second and there's more than one watcher, neither of which is true for us.17:20
natefinchrogpeppe: "for us"  :)  I'm hoping at some point we'll move all this generic code outside the juju walled garden and make them independent open source repos.17:21
rogpeppenatefinch: yeah. you know i have mixed feelings about that :-)17:22
natefinchrogpeppe: I know. :)17:23
natefinchrogpeppe: btw, that Cond.Wait magic sauce is pretty awesome17:25
rogpeppenatefinch: i took a little while to arrive at the particular idiom you see there, but it does work nicely, doesn't it?17:26
natefinchrogpeppe: very cool17:30
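For context, the pattern rogpeppe and natefinch are discussing can be sketched as follows. This is a standalone illustration under stated assumptions, not the actual juju SharedValue code (type and method names here are made up): a sync.Cond built on the RWMutex's RLocker lets many watchers block and then re-check the value concurrently after a Broadcast, while the setter takes the full write lock.

```go
package main

import (
	"fmt"
	"sync"
)

// SharedValue guards a value with an RWMutex and wakes waiters on
// change via a Cond. (Illustrative sketch, not the juju type.)
type SharedValue struct {
	mu   sync.RWMutex
	cond *sync.Cond
	val  int
}

func NewSharedValue(v int) *SharedValue {
	s := &SharedValue{val: v}
	// The Cond locks/unlocks only the read side, so after a
	// Broadcast all watchers can re-check the value concurrently.
	s.cond = sync.NewCond(s.mu.RLocker())
	return s
}

func (s *SharedValue) Get() int {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.val
}

func (s *SharedValue) Set(v int) {
	s.mu.Lock()
	s.val = v
	s.mu.Unlock()
	s.cond.Broadcast()
}

// Wait blocks until the value differs from old, then returns it.
// Cond.Wait atomically releases the read lock while parked, so Set
// can acquire the write lock and no wakeup is missed.
func (s *SharedValue) Wait(old int) int {
	s.mu.RLock()
	defer s.mu.RUnlock()
	for s.val == old {
		s.cond.Wait()
	}
	return s.val
}

func main() {
	s := NewSharedValue(1)
	go s.Set(2)
	fmt.Println(s.Wait(1)) // prints 2
}
```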
TheMuerogpeppe: got my breakthrough from command line to logging. especially filtering for multiple entities is nice.17:36
rogpeppeTheMue: yay!17:36
TheMuerogpeppe: just spent the whole afternoon finding a self-produced bug *grmblfxÜ17:37
rogpeppetrivial (one line, but critical) review anyone? https://codereview.appspot.com/56070044/17:40
rogpeppefwereade, fwereade, dimitern: ^17:40
natefinchrogpeppe: reviewed17:45
rogpeppenatefinch: thanks17:46
natefinchthumper: o/19:45
dimiternsmall review anyone? https://codereview.appspot.com/5817004520:02
natefinchdimitern: sure20:03
dimiternnatefinch, thanks!20:03
natefinchdimitern: could bootstrap-ssh-timeout be simply called bootstrap-timeout?  the fact that we're connecting with SSH doesn't really matter, right? This is just a generic bootstrap timeout, right?20:05
thumpermorning natefinch, dimitern20:06
dimiternnatefinch, well, all of these 3 are only used for waitSSH20:06
dimiternthumper, hey20:06
dimiternnatefinch, but I guess ssh is an inside detail20:07
natefinchdimitern: exactly my point. To a user, this is just the timeout on juju bootstrap.  Putting ssh on there is confusing.20:07
dimiternnatefinch, i'm not against renaming it :) just comment on it pls20:08
natefinchdimitern: I am :)  Just wanted to discuss live first, to make sure I was understanding it correctly.20:08
natefinchdimitern: there you go :)20:11
natefinchdimitern: nice to get that in.20:11
dimiternnatefinch, tyvm20:11
natefinchdimitern: welcome20:12
=== gary_poster is now known as gary_poster|away
thumpernatefinch: you're on trusty, right?21:41
thumpernatefinch: why do I get this...21:41
thumpertim@jake:~/go/src/code.google.com/p/go.tools/cmd/vet$ go install21:41
thumpergo install code.google.com/p/go.tools/cmd/vet: open /usr/lib/go/pkg/tool/linux_amd64/vet: permission denied21:41
thumperwhy is it trying to put it in /usr/lib?21:41
thumperdoesn't for juju21:41
thumperwhen I go make install there21:41
natefinchthumper: that happened to me21:42
natefinchthumper: I forget what I did to fix it though.....21:43
natefinchthumper: have you pulled and updated the code?21:46
natefinchthumper: I had to do that first.... now it go installs just fine.21:46
thumperno changes21:48
natefinchthumper: I did upgrade to go 1.2... not sure if that affects anything... I wouldn't think it would change where things are installed.21:49
thumperdo I need to set another GOPATH type var?21:49
natefinchthumper: just gopath is all you need21:49
thumperseems not21:49
thumpercan't use lbox21:50
thumperbecause it wants go vet21:50
thumpercan't install go vet because it is dumb21:50
* thumper will poke davecheney about it later, perhaps he knows21:50
* thumper goes to beat something up21:51
=== thumper is now known as thumper-afk
hatch1.17.1 on 12.04 after trying to bootstrap local without sudo I get the following error ERROR Get dial tcp connection refused22:16
hatchI thought that 1.17.1 removed the sudo requirement from local deploys?22:16
davecheneythumper-afk: might be simpler to remove that requirement from lbox.check22:33
davecheneythe debian packaging for 1.2 is AFU22:34
davecheneycombined with some poor decisions from upstream means you probably won't be able to make that work without building go from source22:34
mbruzekHi juju-dev.  I have an up to date trusty system and I can not use the local environment.23:07
mbruzekWhen I bootstrap local that runs but I can't get a juju status afterward23:07
wallyworld_mbruzek: thumper-afk is your best bet for local provider issues. i know there may have been some trusty issues, but am not across the detail23:35
mbruzekthanks wallyworld_  afk means away from keyboard right?23:35
wallyworld_yeah, he's not too far away23:35
wallyworld_he should be back soon23:35
wallyworld_we're still ironing out any remaining trusty issues. lxc changed between precise and trusty so we have some work to do23:36
mbruzekI understand, and have evaluated .deb packages for sinzui before23:37
mbruzekI just can't make any progress on my local and I would *really* like to.23:37
* mbruzek had it working 2 days ago, but decided to upgrade 23:37
davecheneyjuju is not happy this morning23:40
davecheneyis the environment bootstrapped ? or not23:40
wallyworld_davecheney: looks like something killed your instances? and now juju is confused23:43
wallyworld_cause it has the .jenv file and thinks it should be bootstrapped23:43
wallyworld_does the hp console show anything running?23:44
hazmatdavecheney, juju destroy-env..23:47
hazmati thought this bug got listed as fixed23:47
davecheneyhazmat: wallyworld_ yeah, removed the .jenv file and everything was fine23:47
davecheneyi guess the bug isn't fixed23:47
hazmatdavecheney,  https://bugs.launchpad.net/juju-core/+bug/117696123:48
hazmatnope.. it's not.. it got marked low.23:48
wallyworld_well that is less than optimal23:48
hazmatit's the one where you can't bootstrap or destroy.. and if you don't know to remove the jenv.. it kinda sucks.23:48
wallyworld_i did hear mumblings that we should fix that issue23:48
* wallyworld_ thinks it should be High23:48
wallyworld_i'll follow up with some folks and see if we can get that one sorted out23:49
wallyworld_it is a pretty poor user experience23:49
hazmatcool, thanks wallyworld_23:49
wallyworld_np. i'll even have a go myself once i clear my current work items if i can't get traction to get it sorted23:50
davecheneynot the best error message23:53

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!