[01:05] <axw> thumper: you got loggo released I take it?
[01:05] <axw> on github
[01:05] <thumper> the github name?
[01:05] <thumper> yes
[01:05] <axw> cool
[01:07] <axw> thumper: about the configstore/chown thing. I was going to make it so we couldn't use sudo/root at all; now if you use sudo you'll end up with ~/.juju/environments owned by root. do you think it's worth the trouble, or do people just have to learn not to do that?
[01:07] <thumper> I've been thinking a little about that
[01:07] <thumper> what if it is root?
[01:08] <thumper> or
[01:08] <thumper> what testing was doing:
[01:08] <axw> as in, not using sudo at all?
[01:08] <thumper> sudo su other
[01:08] <thumper> then that shell does "juju bootstrap"
[01:08] <thumper> I'm not sure we should be that opinionated
[01:08] <axw> well I think we'd have to check the current uid, and only do something if uid == 0
[01:09] <axw> but yeah... I'm starting to think it's just overly complicating things
[01:09] <thumper> I think it is safer to just leave it how it is now
[01:09] <thumper> and not add anti-features
[01:09] <axw> ok
[01:09] <axw> do you think I should take out the bit in local that prevents root then?
[01:10] <axw> I think I may as well, if we're not doing it elsewhere
[01:10] <axw> thumper: by which I mean, the calls to ensureNotRoot in provider/local/environ.go
[01:11] <thumper> that one I think may be worth keeping for now
[01:11] <thumper> to train people to do it right
[01:11] <thumper> otherwise people doing what they have always done will create broken systems
[01:12] <wallyworld> thumper: axw: do we need to discuss that critical destroy-env bug?
[01:12] <axw> yeah ok. if they were doing it before, then they probably have existing dirs anyway
[01:12] <axw> wallyworld: the what now? :)
[01:13]  * wallyworld digs up the bug number
[01:13] <thumper> wallyworld: can you just chat with axw about it? I'm in the middle of something
[01:13] <wallyworld> sure
[01:13] <wallyworld> https://bugs.launchpad.net/juju-core/+bug/1272558
[01:13] <_mup_> Bug #1272558: destroy-environment shutdown machines instead <ci> <destroy-machine> <intermittent-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1272558>
[01:14] <axw> looking
[01:14] <wallyworld> this one has been causing the CI folks a lot of grief
[01:14] <wallyworld> i haven't looked closely just yet
[01:14] <wallyworld> trying to get stuff done for tomorrow
[01:14] <axw> huh
[01:14] <wallyworld> they were suspecting recent destroy env work
[01:14] <axw> probably
[01:14] <wallyworld> but that's just hearsay
[01:15] <axw> I will see if I can reproduce
[01:15] <wallyworld> ok ta :-)
[01:28] <thumper> axw, wallyworld: now that we are four (when waigani is around), how do you feel about a daily standup around this time?
[01:28] <wallyworld> sure
[01:29] <thumper> aim is for 5-10min max
[01:29] <wallyworld> \o/
[01:29] <thumper> but gives us a way to touch base and know if anyone is blocked
[01:30] <axw> thumper: sgtm
[01:30] <axw> where is waigani anyway?
[01:31] <wallyworld> his sheep died
[01:32] <thumper> shall we do a quick stand up now?
[01:33] <axw> sure
[01:33] <wallyworld> ok
[01:33] <thumper> https://plus.google.com/hangouts/_/7ecpjir6j9gl1e7ttn38o7hq40?hl=en
[01:50] <axw> thumper: in errgo, did you consider having an Annotator type and DefaultAnnotator var, rather than RecordLocationWithAnnotations/FilePathElements globals?
[01:51] <thumper> briefly
[01:51] <axw> that is more "idiomatic Go"
[01:51] <thumper> but I thought this could be something that you want enabled for debug builds
[01:51] <thumper> and disabled for production
[01:51] <thumper> how do you do that in idiomatic Go?
[01:52] <axw> so, IMHO, you'd do that in the user. i.e. you have juju-core/errgo or something, which uses errgo/errgo but has its own configuration
[01:52] <axw> juju-core/errors would be a good place for it actually.
[01:52]  * thumper sucks in his cheeks
[01:52] <thumper> however then the runtime.Caller is all fucked up
[01:53] <thumper> I'll finish this up then we can bikeshed :-)
[01:53] <axw> no worries
[01:55] <thumper> go has short circuit boolean evaluation right?
[01:55] <axw> yes
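For the record, a minimal demonstration of Go's short-circuit evaluation (helper name is invented for illustration):

```go
package main

import "fmt"

var calls int

// sideEffect records each evaluation and returns true.
func sideEffect() bool {
	calls++
	return true
}

func main() {
	// && and || short-circuit: the right operand is evaluated only
	// when the left one doesn't already decide the result.
	_ = false && sideEffect() // right operand not evaluated
	_ = true || sideEffect()  // right operand not evaluated
	_ = true && sideEffect()  // right operand evaluated
	fmt.Println("sideEffect ran", calls, "time(s)")
}
```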
[02:58] <thumper> wallyworld: are the trees I need to update on the gobot in /home/tarmac/trees?
[02:59] <wallyworld> i think so yes
[02:59] <wallyworld> there was another trees dir also but i can't recall without looking what it was for
[03:14] <axw> thumper: it just occurred to me that local provider no longer prevents units on machine 0, since it's using the common state init code
[03:14] <axw> I'll put a bug in for that...
[03:14] <thumper> yeah... I did mean to ask about that :)
[03:25] <axw> wallyworld: why can't people just set image-metadata-url to cloud-images.ubuntu.com/daily or whatever? rather than another attribute for the stream
[03:25] <wallyworld> because we don't want them having to know about cloud-images.u.com per se
[03:25] <wallyworld> and
[03:26] <wallyworld> the stream name is used to form the product id
[03:26] <axw> ah yeah
[03:26] <wallyworld> it's a bit messy
[05:39] <waigani> axw: hello :)
[05:39] <axw> waigani: heya
[05:39] <waigani> Can I ask you a Go question?
[05:39] <axw> yes go for it
[05:39] <waigani> http://play.golang.org/p/W1UxbGdERJ
[05:40] <axw> looking
[05:40] <waigani> My question is in the comment, just playing with channels to understand them better
[05:40] <waigani> I get that the for loop blocks and waits for a new val to be added to the channel
[05:41] <axw> waigani: the program is exiting before the 4th goroutine is executed
[05:41] <waigani> yes but why?
[05:41] <axw> you're not waiting for the goroutines to complete, so the main func just falls off the end
[05:42] <axw> one sec
[05:42] <waigani> sure
[05:43] <axw> so "c" is an unbuffered chan, which means that the sender blocks until the receiver is ready. the background goroutine gets the value "4", but there's no guarantee it does anything with that value before the scheduler switches back to the main function and then exits
[05:43] <axw> does that make sense?
[05:44] <waigani> I thought <-c within the for loop blocks and waits for any new vals added to the channel?
[05:45] <axw> yes it does. so the value was transferred. the goroutine has the value 4, it just didn't get as far as printing it
[05:45] <axw> waigani: http://play.golang.org/p/9GYN3InLIH
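The usual fix — sketched below with sync.WaitGroup, though this is not necessarily what the linked playground does — is to make main wait for the receiving goroutine before exiting:

```go
package main

import (
	"fmt"
	"sync"
)

// run sends 1..4 down an unbuffered channel and, crucially, waits
// for the receiving goroutine to finish before returning.
func run() []int {
	c := make(chan int) // unbuffered: each send blocks until the receiver is ready
	var got []int
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		for v := range c { // blocks waiting for each value
			got = append(got, v)
		}
	}()
	for i := 1; i <= 4; i++ {
		c <- i
	}
	close(c)  // ends the range loop in the goroutine
	wg.Wait() // without this, main could exit before the goroutine prints 4
	return got
}

func main() {
	fmt.Println(run()) // all four values are received
}
```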
[08:38] <rogpeppe2> axw: ping
[08:39] <axw> rogpeppe2: pong
[08:40] <rogpeppe2> axw: i've been looking at making the bootstrap stuff a bit better when things go wrong, and i'm wondering what you think a good approach is here
[08:40] <rogpeppe2> axw: in particular, when something goes wrong, we just see "Exit status 1" and no output from the failed commands
[08:40] <rogpeppe2> axw: (and then of course the instance is instantly shut down, so there's no way to find out what the issue might be)
[08:41] <axw> rogpeppe2: yeah I was planning to do something about that at some stage... ;)
[08:41] <axw> umm
[08:41] <rogpeppe2> axw: i'm thinking of saving all Stdout and Stderr and spitting it out when things fail
[08:41] <axw> rogpeppe2: probably tail cloud-init-output.log
[08:41] <axw> hmm or that I guess
[08:41] <axw> rogpeppe2: all stdout/stderr goes into /var/log/cloud-init-output.log, IIRC
[08:42] <axw> so you could just tail/cat that I think
[08:42] <rogpeppe2> axw: ah, that's useful to know - i thought that might not be happening any more
[08:42] <rogpeppe2> axw: but how many lines to tail?
[08:42] <axw> rogpeppe2: yeah... maybe just cat it on failure?
[08:43] <axw> hmm
[08:43] <rogpeppe2> axw: the whole thing's going to be quite big, isn't it?
[08:43] <axw> yes I think it will get pretty big, with all the apt-get update output, and so on
[08:44] <axw> alternatively we could grab the file and write to disk, but that's only useful when calling from the CLI
[08:46] <rogpeppe2> axw: one possibility might be to try to work out which command failed and print the output from only that
[08:46] <rogpeppe2> axw: do we send commands one-by-one, or as a big bunch?
[08:47] <axw> rogpeppe2: no, as a single script
[08:47] <axw> that would be a good default
[08:47] <axw> there may be cases where command failure needs more context, but I can't think of any
[08:48]  * axw thinks how to get there
[08:48] <rogpeppe2> axw: so, it would presumably be possible to print some distinctive text before each command, then print output from the last occurrence
[08:48] <axw> yes, that would work
[08:49] <rogpeppe2> axw: ok, i'll perhaps do that today
[08:49] <axw> cool
[08:49] <rogpeppe2> axw: (we've seen that problem quite a bit here)
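A sketch of the marker idea (function names and the marker text are invented for illustration): prefix each command in the generated script with an echo of a distinctive line, then on failure show only the log after the last marker.

```go
package main

import (
	"fmt"
	"strings"
)

// marker is a distinctive prefix echoed before each command.
const marker = "#### JUJU-CMD: "

// wrap prefixes each command with an echo of the marker, so the
// combined cloud-init log can later be split per command.
func wrap(cmds []string) []string {
	var out []string
	for _, c := range cmds {
		out = append(out, fmt.Sprintf("echo '%s%s'", marker, c), c)
	}
	return out
}

// lastSection returns everything after the final marker in the log,
// i.e. the output of the command that failed.
func lastSection(log string) string {
	i := strings.LastIndex(log, marker)
	if i < 0 {
		return log
	}
	return log[i:]
}

func main() {
	log := marker + "apt-get update\nOK\n" + marker + "start jujud\nboom: exit 1\n"
	fmt.Print(lastSection(log))
}
```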
[08:50] <fwereade> axw, if you're talking about the CA cert in https://codereview.appspot.com/56560043/ I think everyone *does* know it
[08:50] <axw> fwereade: oh? I thought it was a secret. it gets stripped out before we bootstrap...?
[08:51] <fwereade> axw, the private key does,for sure
[08:51] <rogpeppe2> axw: it shouldn't involve changing anything outside cloudinit/sshinit, right?
[08:52] <fwereade> axw, but agent conf always has the CA cert for the environment, and I'm pretty sure it's available over the api as well
[08:52] <axw> rogpeppe2: right, I think that's the most sensible place to do it, and it should be contained
[08:52] <rogpeppe2> axw: the CA cert is public info
[08:52] <axw> fwereade: makes sense.
[08:53] <axw> I had it in my mind we didn't publish either, but of course we have to
[08:53] <rogpeppe2> it would be good if the CA cert contained the environment UUID but that's for another day
[08:53] <fwereade> rogpeppe2: yeah, +1000
[08:53] <thumper> hi rogpeppe2
[08:53] <fwereade> tomorrow? ;p
[08:53] <thumper> rogpeppe2: are you in bluefin?
[08:54] <rogpeppe2> thumper: hi. yeah.
[08:54] <thumper> rogpeppe: have you seen mramm around yet today?
[08:54] <rogpeppe> thumper: i have
[08:54] <thumper> rogpeppe: could you tell him that I have a few questions for him and can he get on irc?
[08:54] <thumper> :)
[08:55] <rogpeppe> thumper: ok; i think he's in the scrum-of-scrums meeting right now, but will collar him after that
[08:55] <thumper> scrum of scrums?
[08:55] <rogpeppe> thumper: his term
[08:55] <thumper> sounds dangerous
[08:55] <rogpeppe> thumper: meeting of the team leads
[08:55] <axw> fwereade: re the ssh keys change, do you just mean you'd like to see an explicit "-i" for each of the default keys in ~/.ssh?
[08:56] <fwereade> axw, I'm wondering whether we should use explicit -i a bit more than we do, yeah
[08:56] <axw> fwereade: it's not necessary for the default keys, which is why I don't add it
[08:56] <axw> they just get tried anyway
[09:00] <fwereade> axw, ok, yeah, if they have a .ssh then they probably do have ssh installed :)
[09:00] <fwereade> axw, cheers
[09:01] <axw> fwereade: ah you meant for using in go.crypto?
[09:01] <fwereade> axw, yeah, I was just wondering if there was any gap in our coverage as it were
[09:01] <axw> actually I did that at one point, but then realised that it wouldn't work when keys are encrypted (require a passphrase)
[09:01] <axw> which is (I hope!) most of the time
[09:02] <rogpeppe> thumper: i saw what seemed like a local provisioner bug yesterday BTW
[09:02] <fwereade> axw, so one would indeed hope ;)
[09:02] <axw> people can drop/symlink whatever keys they want to use into ~/.juju/ssh though
[09:02] <thumper> rogpeppe: oh?
[09:02] <rogpeppe> thumper: perhaps you might have an idea of what might be going on
[09:02] <thumper> I'm currently upgrading to trusty
[09:02] <thumper> to make sure they get ironed out quick smart
[09:03] <rogpeppe> thumper: so, lxc-start was failing (for some as-yet-unknown reason)
[09:03] <thumper> hmm..
[09:03] <thumper> I've seen a few weird ones
[09:04] <rogpeppe> thumper: so we could see the instances, and each one with a state holding the lxc-start status
[09:04] <rogpeppe> thumper: but we couldn't get the instances out of that state
[09:04] <thumper> ?!?
[09:04] <thumper> weird
[09:04] <rogpeppe> thumper: one mo, i'll try to find the bug number
[09:05] <rogpeppe> thumper: no bug reported yet, oh well
[09:05]  * thumper shrugs
[09:06] <thumper> if you see it again, please file one
[09:06] <thumper> and assign it to me
[09:06] <thumper> I'll see it then
[09:06] <rogpeppe> thumper: the output status is in agent-state and something like "lxc-start: cannot get init-pid-status"; the problem seems to be something to do with network bridge naming
[09:07] <rogpeppe> thumper: but that wasn't the particular provisioner issue i was thinking about
[09:07] <thumper> hmm...
[09:07] <thumper> ok
[09:09] <rogpeppe> thumper: here's the error that we saw:
[09:09] <rogpeppe> machine-3: 2014-01-28 15:21:12 ERROR juju.container.lxc lxc.go:129 container failed to start: error executing "lxc-start": command get_init_pid failed to receive response
[09:10] <thumper> rogpeppe: this is on maas?
[09:10] <thumper> is it still using precise?
[09:15] <rogpeppe> thumper: might be trusty actually
[09:16] <thumper> do we have trusty charms?
[09:16] <rogpeppe> thumper: no, bootstrap node is trusty, unit node is precise
[09:17] <thumper> rogpeppe: is this local provider?
[09:17] <rogpeppe> thumper: no
[09:17] <thumper> maas?
[09:17] <rogpeppe> thumper: yes
[09:17] <thumper> ok
[09:17] <thumper> I'll look tomorrow
[09:17] <thumper> and see if I can find anything
[09:19] <rogpeppe> thumper: the particular issue i'm concerned about isn't actually that one
[09:20] <thumper> rogpeppe: stick it in an email to juju-dev or just me if you prefer
[09:20] <rogpeppe> thumper: will do
[09:21] <rogpeppe> there's another bug i've just seen
[09:21] <rogpeppe> juju bootstrap doesn't appear to work with --upload-tools in 1.17.1
[09:22] <rogpeppe> anyone got an idea of what might be happening here? https://pastebin.canonical.com/103759/
[09:23] <thumper> rogpeppe: mramm mentioned that it looks like the network bridge isn't coming up
[09:23] <thumper> rogpeppe: I wonder if the behaviour has changed
[09:23] <thumper> we are kinda abusive in the maas provider
[09:24] <thumper> where we restart networking
[09:24] <mramm> the other thing that we should work out
[09:24] <thumper> robie basak suggested a different approach
[09:24] <thumper> where we go "ifdown eth0", make the mods, bridge eth0 with br0, then go "ifup br0"
[09:24] <thumper> and that would be better
[09:24] <thumper> but I never got time to try it
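Roughly, the suggested approach (a sketch of a Debian-style /etc/network/interfaces stanza; assumes eth0 is the host's primary interface): bridge eth0 into br0 and bring up only the bridge, rather than restarting networking wholesale.

```
# /etc/network/interfaces (sketch)
auto br0
iface br0 inet dhcp
    bridge_ports eth0

# then, instead of restarting networking:
#   ifdown eth0 && ifup br0
```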
[09:25] <thumper> rogpeppe: is that enough info?
[09:25] <rogpeppe> thumper: not really. i had difficulty following the lxc container code, i'm afraid
[09:25] <rogpeppe> thumper: a bit of a twisty maze
[09:25] <thumper> rogpeppe: this is not in the container code
[09:25] <thumper> but inside the maas provider
[09:25] <thumper> where we create the cloud-init scripts
[09:26] <rogpeppe> ah ok
[09:26] <thumper> as we create a different network bridge for the host
[09:26] <thumper> so the containers inside can get to the maas dhcp server
[09:26]  * thumper is being called to head to the lounge
[09:27] <thumper> good luck
[09:27] <rogpeppe> thumper: ok
[09:27] <rogpeppe> thumper: the ifup change was how they fixed it
[09:28] <rogpeppe> thumper: (manually, of course)
[09:29] <rogpeppe> fwereade: it looks like the simplestreams stuff is borked in 1.17.1. or *something* is anyway.
[09:29] <rogpeppe> does anyone know if streams.canonical.com is supposed to have useful data these days?
[09:31] <fwereade> rogpeppe, worst-case sinzui and utlemming and I will sort it out when we're together this weekend, but I was hoping something would land sooner than that
[09:32] <fwereade> rogpeppe, looks to me like it still just completely lacks content
[09:32] <rogpeppe> fwereade: currently we're unable to bootstrap at all
[09:32] <fwereade> rogpeppe, --source?
[09:32] <rogpeppe> fwereade: hmm, i don't know about --source
[09:32] <fwereade> rogpeppe, it's in the 1.17.1 release notes
[09:33] <mramm> ok, so we *need* to get this streams.canonical.com thing sorted out quickly
[09:34] <mramm> as in before we end up getting hammered for it in Cape Town :/
[09:34] <mramm> perhaps need is strong.  But it is bad for our users, bad for us, and will result in a bit of ungentle poking in SA.
[09:35] <rogpeppe> fwereade: you mean --metadata-source, presumably?
[09:35] <fwereade> rogpeppe, ha, yes, sorry
[09:36] <fwereade> mramm, so, the issue AIUI is that utlemming is currently crushed under the weight of paid public cloud work
[09:37] <rogpeppe> fwereade: can you just provide a directory with the tools in?
[09:37] <fwereade> mramm, I will follow up with arosales when he's online, it seemed on monday that we might be able to steal an hour's work for a quick solution but that hasn't happened
[09:37] <rogpeppe> fwereade: or do you need the metadata there too?
[09:37] <mramm> ok
[09:38] <mramm> the alternative is to make juju not broken if it isn't there... :/
[09:38] <mramm> I know the right thing to do to move us forward
[09:38] <mramm> but...
[09:38] <fwereade> rogpeppe, you need metadata
[09:38] <fwereade> rogpeppe,  If your workflow previously was to download the juju tools to a local
[09:38] <fwereade> directory, then bootstrap with the --source option to upload the tools to your
[09:38] <fwereade> environment, you need to call “juju metadata generate-tools” per the previous
[09:38] <fwereade> example. See “juju help bootstrap” for more information.
[09:39] <mramm> fwereade: juju help bootstrap is nice!
[09:39] <mramm> fwereade: I also understand the pressure everybody is under
[09:40] <mramm> fwereade: so it's not a problem, just trying to troubleshoot next week
[09:40] <fwereade> mramm, yeah, quite so
[09:41] <rogpeppe> fwereade: juju help bootstrap doesn't say anything about generate-tools
[09:42] <rogpeppe> fwereade:  and "juju metadata generate-tools --help" doesn't say anything about what's expected
[09:44] <rogpeppe> fwereade: (i tried it on a directory containing a jujud binary, but it failed to find tools
[09:44] <rogpeppe> )
[09:45] <fwereade> rogpeppe, it's the same expected directory structure as juju-dist has always had
[09:45] <fwereade> rogpeppe, but yeah we should be more explicit about that
[09:45] <rogpeppe> fwereade: i also tried with --metadata-source, with a directory containing jujud
[09:45] <rogpeppe> fwereade: ahhh
[09:45]  * rogpeppe tries to remember what format the directory was in
[09:46] <fwereade> rogpeppe, just put them in the tools subdir iirc
[09:46] <rogpeppe> fwereade: no arch/series info?
[09:46] <rogpeppe> fwereade: just tools/jujud ?
[09:46] <hazmat> rogpeppe, please capture some notes on what you're doing.. and attach to https://bugs.launchpad.net/juju-core/+bug/1267795
[09:47] <rogpeppe> it aaaallll broookeen
[09:48] <fwereade> rogpeppe, it's exactly how it's always been -- tgzs with names in the usual format
[09:48] <rogpeppe> fwereade: ah, ok
[09:48] <rogpeppe> fwereade: that's not very user friendly
[09:49] <fwereade> rogpeppe, sorry, tools/releases
[09:49] <rogpeppe> fwereade: --upload-tools *should* work, right?
[09:49] <hazmat> rogpeppe,  i know.. that's why we all use upload-tools..  check the contents here (from that bug report) which has the generate-tools structure http://pastebin.ubuntu.com/6726391/
[09:49] <hazmat> yeah.. tools/releases/tarball.tgz should do it for generate-tools
[09:49] <fwereade> rogpeppe, you can just download the bits you need from juju-dist -- the fuckedness of streams.c.c is a distinct issue
[09:50] <fwereade> rogpeppe, and, yes, I thought you *could* still use upload-tools, I thought this was about using the released tools, sorry
[09:51] <fwereade> rogpeppe, is *that* broken?
[09:51] <rogpeppe> fwereade: it seems to be
[09:51] <rogpeppe> fwereade: we can't bootstrap at *all*
[09:51] <rogpeppe> fwereade: which is a major blocker currently
[09:51] <rogpeppe> fwereade: see https://pastebin.canonical.com/103759/
[09:54] <mramm> shang says that he believes it was all working last night
[09:54] <hazmat> rogpeppe, this is on trunk?
[09:54] <mramm> just not working today
[09:54] <jamespage> sinzui, please can you ensure that any PPA builds are using ~XXX suffixes otherwise the versions will conflict with distro
[09:55] <rogpeppe> hazmat: this is on 1.17.1 as released
[09:55] <fwereade> rogpeppe, that is deeply bizarre, it seems to work here, let me dig
[09:56] <mramm> we are not using EC2, and are on trusty
[09:56] <hazmat> rogpeppe, k.. testing.. with 1.17.1 tarball from https://launchpad.net/juju-core/+download
[09:56] <mramm> so there are some important differences
[09:56] <fwereade> rogpeppe, that "47927a59-4867-442f-8239-ae2d657f4475-tools/releases/juju-" bit is very weird
[09:57] <rogpeppe> fwereade: yes
[10:00] <hazmat> fwiw.. bootstrap --upload-tools from (13.10) works with 1.17.1 tarball
[10:00] <rogpeppe> fwereade: is it possible to use metadata generate-tools to generate metadata about tools in a local directory?
[10:02] <hazmat> rogpeppe, yes see that pastebin link i sent.. re private clouds  http://pastebin.ubuntu.com/6726391/
[10:03] <fwereade> rogpeppe, yes: grab the subset of the juju-dist bucket that you want/need, and point generate-tools at the base dir, then use --metadata-source
[10:04] <rogpeppe> fwereade: i *think* i did that
[10:04] <hazmat> with -d tools_dir
[10:05] <rogpeppe> hazmat, fwereade: so what am i doing wrong here? http://paste.ubuntu.com/6836995/
[10:05] <rogpeppe> juju metadata generate-tools is currently hung on "finding tools"
[10:06] <rogpeppe> sigh
[10:06] <hazmat> rogpeppe, can you run it with --debug
[10:06] <hazmat> and pastebin
[10:07] <rogpeppe> hazmat: it's fetching all tools from somewhere
[10:07] <rogpeppe> hazmat: http://paste.ubuntu.com/6837008/
[10:08] <hazmat> rogpeppe, it's hitting streams.canonical.com
[10:09] <rogpeppe> hazmat: so is there a way i can tell it to generate metadata from my local directory?
[10:09] <rogpeppe> hazmat: (from the tools in there)
[10:09] <mramm> hazmat: fwereade: thanks for helping with this -- it is a blocker for a couple of teams here in london
[10:10] <hazmat> i thought  so.. but.. looking at it.. it looks like it just wants to store it..
[10:10] <hazmat> on the local dir, instead of using the local dir for tools; the pastebin docs i sent are confusing in that regard.
[10:10] <rogpeppe> hazmat: that's how i interpret it too
[10:10] <hazmat> rogpeppe, i'm a bit more concerned with why --upload-tools is failing
[10:11] <rogpeppe> hazmat: well, i want to know why normal bootstrap is failing too
[10:11] <rogpeppe> hazmat: if generate-tools can get stuff from streams.canonical.com, why can't normal bootstrap?
[10:11] <rogpeppe> wallyworld_: ping
[10:13] <cgz> thanks for the review axw
[10:14] <fwereade> rogpeppe, nobody can get anything from streams.c.c at the moment because there is no content :/
[10:14] <rogpeppe> fwereade: so where's generate-tools getting all this from? http://paste.ubuntu.com/6837008/
[10:15] <rogpeppe> fwereade: i don't *think* it's juju-dist, because juju-dist doesn't appear to have 1.17.1 in
[10:15] <hazmat> rogpeppe, 1.17.1 is  there in the testing/tools   @ https://juju-dist.s3.amazonaws.com/
[10:15] <rogpeppe> ah, it does
[10:16] <rogpeppe> i was looking in tools not tools/releases
[10:16] <wallyworld_> rogpeppe: hi
[10:17]  * hazmat mramm don't think i'm helping.. but trying..
[10:17] <hazmat> whoops
[10:17] <rogpeppe> wallyworld_: do you know what might be happening here? https://pastebin.canonical.com/103759/
[10:17]  * wallyworld_ looks
[10:17]  * wallyworld_ hates 2fa
[10:18] <rogpeppe> wallyworld_: oh, sorry, one mo
[10:18] <wallyworld_> ok
[10:18] <hazmat> rogpeppe, so it can do tool upload.. but you have to specify tools-metadata-url for the env to a local dir
[10:18] <rogpeppe> wallyworld_: http://paste.ubuntu.com/6837045/
[10:18] <hazmat> for the environ
[10:19] <rogpeppe> hazmat: "it" ?
[10:19] <rogpeppe> hazmat: juju metadata generate-tools ?
[10:19] <hazmat> rogpeppe, wallyworld_ re generating tools metadata for an environ with juju metadata generate-tools  from a local dir
[10:19] <hazmat> going through the src @ juju-core/cmd/plugins/juju-metadata/toolsmetadata.go
[10:20] <wallyworld_> rogpeppe: it looks like it didn't upload any tools in that pastebin
[10:21] <wallyworld_> rogpeppe: it should say "Uploading xxx kbytes"
[10:21] <wallyworld_> i also notice it found a jujud. i've only ever seen it have to compile jujud from source
[10:21] <wallyworld_> so maybe that other codepath is broken, not sure
[10:22] <hazmat> i've always seen it use a pre-existing jujud binary..
[10:22] <rogpeppe> wallyworld_: currently we can't bootstrap at all because of this issue
[10:22] <wallyworld_> i bootstrapped just on friday no worries
[10:22] <wallyworld_> on ec2
[10:22] <hazmat> wallyworld_, i depend on that behavior fwiw.. re use jujud binary
[10:22] <wallyworld_> i don't think anything has changed in trunk since then?
[10:22] <hazmat> wallyworld_, they're having issues on trusty
[10:22] <rogpeppe> wallyworld_: "found existing jujud" should mean that it can use that
[10:23] <wallyworld_> i was on trusty i think
[10:23] <hazmat> wallyworld_, i also just did a bootstrap today with the 1.17.1 tarball on ec2 .. worked okay (i'm not on trusty though))
[10:23] <wallyworld_> rogpeppe: sure. but i didn't really write that code so am not familiar with it. i can look though
[10:23] <rogpeppe> wallyworld_: oh, sorry, i thought you did most of the simplestreams stuff
[10:23] <wallyworld_> rogpeppe:  i did
[10:23] <hazmat> wallyworld_, can juju metadata generate-tools  use a local directory to find tools?
[10:24] <wallyworld_> but the jujud vs compile from source is in a separate bit of code
[10:24] <wallyworld_> it's sort of independent
[10:24] <wallyworld_> hazmat: yes
[10:24] <wallyworld_> let me check the details
[10:24] <wallyworld_> hazmat: generate tools is just to produce the simplestreams metadata as you probably know
[10:25] <hazmat> wallyworld_, from the src it looks like tools-metadata-url for the env pointing to a local directory file:// ...
[10:25] <hazmat> wallyworld_, yes.. separate issues.. --upload-tools fail.. and no simplestream md makes normal bootstrap fail.
[10:25] <wallyworld_> once you have the tarballs and metadata locally, you can point bootstrap at that and it will upload that stuff to the cloud
[10:26] <wallyworld_> hazmat: let me check source - the tools-metadata-url should not be involved in generate-tools from a local dir
[10:26] <wallyworld_> hazmat: it should just be with the -d option
[10:27] <wallyworld_> juju metadata generate-tools -d blah
[10:27] <wallyworld_> where blah contains a releases dir with tarballs
[10:27] <wallyworld_> and the result is a streams dir with metadata next to the releases dir
[10:27] <hazmat> wallyworld_, that looks like where to place the md per its help description, not where to find the tools.. and it looks like it still goes to streams.canonical.
[10:28] <hazmat> wallyworld_, ah.. so -d for both output  and input
[10:28] <rogpeppe> oh, tim's broken dependencies.tsv
[10:28] <wallyworld_> hazmat: when you say "find the tools" above, do you mean for bootstrap?
[10:28] <wallyworld_> cause that's different to generate-tools
[10:29] <wallyworld_> generate-tools is to take tarballs as input and produce the metadata so that the tarballs and metadata can be used for bootstrap
[10:29] <wallyworld_> bootstrap will take those and upload to cloud storage
[10:29] <wallyworld_> that is different and separate to using tools-metadata-url
[10:30] <hazmat> right.. that's what --metadata-source on bootstrap is for
[10:30] <wallyworld_> tools-metatdata-url is for when the tools and metadata have already been uploaded or exposed somewhere via a public url
[10:30] <wallyworld_> yes
[10:30] <wallyworld_> and yes, the generate-tools output is in situ
[10:31] <wallyworld_> cause then you have a dir tree that bootstrap can consume
[10:31] <wallyworld_> does that make sense?
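Putting the thread together, the layout looks like this (paths and the tarball name are illustrative; the structure matches the "releases dir with tarballs" and "streams dir next to releases" descriptions above):

```
local-tools/                        # juju metadata generate-tools -d local-tools
└── tools/
    ├── releases/
    │   └── juju-1.17.1-precise-amd64.tgz   # tgz names in the usual format
    └── streams/                    # written by generate-tools, next to releases/
        └── v1/ ...

# then: juju bootstrap --metadata-source local-tools
```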
[10:31] <hazmat> wallyworld_, i was ref'ing tools-metadata-url based on the src for juju-core/cmd/plugins/juju-metadata/toolsmetadata.go which does this odd check  envtools.DefaultBaseURL of that appending file://
[10:32] <wallyworld_> hazmat: that is to allow people to just specify a dir name and it turns it into a url. it also allows people to forget to put the /tools suffix and it will deal with that robustly
[10:33] <wallyworld_> the envtools.DefaultBaseURL is overridden by the --metadata-source from bootstrap
[10:33] <wallyworld_> the --metadata-source hides streams.c.com so that bootstrap gets its data from the local dir
[10:33] <hazmat> wallyworld_, it makes sense.. except the part where its not working :-)
[10:34] <wallyworld_> which provider?
[10:34] <hazmat> wallyworld_, rog is using maas, i'm trying with ec2
[10:34] <wallyworld_> ok. i used ec2 on friday
[10:34] <hazmat> i think everyone in london this week is using maas
[10:35] <wallyworld_> i haven't tested maas. let me look at the CI dashboard
[10:35] <hazmat> wallyworld_,  just to be clear.. i mean generate-tools re ec2.. bootstraping with --upload-tools works okay for me (not on trusty).
[10:35] <wallyworld_> when i tested with ec2 on friday, i used --upload-tools
[10:36] <wallyworld_> so generate-tools fails?
[10:36] <wallyworld_> i'm on trusty now, can't recall if i tested when i was still on precise
[10:36] <hazmat> wallyworld_, for maas.. rogpeppe is getting a hang.. see bottom of this http://paste.ubuntu.com/6837008/
[10:37] <rogpeppe> apparently they can't bootstrap with 1.17.0 either now
[10:37] <wallyworld_> bugger. CI doesn't test maas it seems http://162.213.35.54:8080/
[10:38] <wallyworld_> i'm confused by that pastebin
[10:38] <wallyworld_> i didn't think juju-dist.s3.amazonaws.com was in the codebase anymore
[10:39] <hazmat> wallyworld_, i'm getting a different error.. with generate-tools http://paste.ubuntu.com/6837127/
[10:39] <wallyworld_> i'll check
[10:39] <hazmat> actually my error is just pointing to tools instead of .
[10:39] <wallyworld_> yep
[10:40] <wallyworld_> the dir for -d is the dir containing tools
[10:40] <wallyworld_> since it's then consistent with what's used for images
[10:40] <wallyworld_> so you can have a dir with tools and images metadata
[10:40] <rogpeppe> oh bugger, of course. godeps doesn't support git.
[10:40] <wallyworld_> \o/
[10:42] <hazmat> wallyworld_, so  can the output of generate-tools be used to seed a private cloud for all users, if its placed in a 'magic' bucket?
[10:43] <cgz> rogpeppe: doesn't support how? just the update bit?
[10:43] <wallyworld_> that is the idea
[10:43] <rogpeppe> cgz: no at all - i'm just doing it
[10:43] <wallyworld_> hazmat: with openstack, you can even put the url in keystone
[10:43] <wallyworld_> like we do for canonistack
[10:43] <hazmat> wallyworld_, and that bucket name varies?.. i'm wondering about maas atm.
[10:43] <hazmat> wallyworld_,
[10:44] <hazmat> its just juju-dist everywhere ?
[10:44] <wallyworld_> hmmm. maas. i'm not 100% familiar with the conventions there as i've never run juju on maas
[10:44] <wallyworld_> um, not sure about juju-dist
[10:44] <wallyworld_> let me have a quick look
[10:46] <jam> wallyworld_, dimitern, rogpeppe, fwereade: standup?
[10:46] <wallyworld_> jam: sec, just helping on a support issue
[10:46] <wallyworld_> hazmat: the latest code looks like it just uses the same conventions as for other clouds - a tools dir in private storage for the cloud
[10:47] <jam> hazmat: unfortunately, AIUI MaaS doesn't have a shared "bucket/container" concept
[10:47] <jam> so you only have user-local stuff
[10:47] <wallyworld_> by private storage, i mean whatever env.Storage() returns
[10:47] <jam> the Simplestreams stuff *does* support any-old HTTP location, which means you could host them somewhere to be shared.
[10:48] <wallyworld_> i think we require people using maas to --upload-tools right?
[10:48] <wallyworld_> if no public tools are available?
[10:55] <wallyworld_> hazmat: we're in a standup now. did you want to join after to ask questions?
[10:55] <wallyworld_> where the whole dev team is available?
[10:58] <fwereade> rogpeppe, any luck with tools-metadata-url or is that also problematic?
[11:03] <hazmat> wallyworld_, rogpeppe is the one who's blocked.. i'm mostly asking to help there and for future ref on others.
[11:03] <rogpeppe> fwereade: i haven't tried that
[11:03] <rogpeppe> fwereade: what should it be?
[11:04] <rogpeppe> fwereade: some URL for juju-dist?
[11:04] <wallyworld_> hazmat: ok. rogpeppe can pop into the standup for help :-)
[11:04] <rogpeppe> wallyworld_: will do
[11:05] <wallyworld_> hazmat: if you have further questions, let me know, maybe email me or whatever
[11:05] <hazmat> wallyworld_, will do, thanks
[11:17] <rogpeppe> natefinch: ha, sorry, there are no tests :-)
[11:18] <natefinch> rogpeppe: that's ok :)  I can write tests :)
[11:19] <TheMue> rogpeppe: heya, do you know the error "missing or bad WebSocket-Origin"?
[11:20] <rogpeppe> TheMue: sounds like you're not adding a websocket origin to the request
[11:20] <rogpeppe> TheMue: where is the error coming from?
[11:21] <TheMue> rogpeppe: it tells websocket.Dial, so should be the client
[11:21] <rogpeppe> TheMue: have you grepped for it in the source?
[11:23] <TheMue> rogpeppe: it's not in there, the whole line is "ERROR websocket.Dial wss://ec2-54-197-191-161.compute-1.amazonaws.com:17070/log?lines=10&entity=environment-3740322e-a899-4849-8fd2-a21c7e3b8d3a: missing or bad WebSocket-Origin"
[11:23] <rogpeppe> TheMue: have you grepped in the go source too?
[11:23] <rogpeppe> TheMue: and in the websocket source
[11:23] <TheMue> rogpeppe: not yet, will do
[11:24] <rogpeppe> TheMue: it's usually a good first step
[11:41] <TheMue> rogpeppe: origin killed, now stumbling over a certificate error *sigh*
[11:46] <fwereade> mramm, so, how can we make noise about getting some damn resources so we can do CI on MAAS?
[11:46] <bigjools> fwereade: I suggested to Curtis that you share the maas lab
[11:46] <natefinch> fwereade: yeah, it's pretty pathetic that we aren't even doing CI with our own product
[11:49] <mramm> fwereade: I am making noise right now
[11:49] <mramm> fwereade: and will make some more this afternoon
[11:49] <fwereade> mramm, tyvm
[11:50] <mramm> I also want to do CI on the orange box -- just because that's where we plan to do demos
[11:50] <mramm> if we are doing all our demos on a specific maas configuration with specific hardware -- that damn well better work
[11:50] <mramm> every time
[11:50] <mramm> and the bundles we demo should get tested regularly
[11:50] <natefinch> yup
[12:16] <rogpeppe> wallyworld_, fwereade: that doesn't seem to work: https://pastebin.canonical.com/103776/
[12:17] <rogpeppe> wallyworld_: it's looking for index.json, which doesn't seem to be there
[12:17] <rogpeppe> wallyworld_, fwereade: 2014-01-29 12:09:25 DEBUG juju.environs.simplestreams simplestreams.go:482 fetchData failed for "http://juju-dist.s3.amazonaws.com/streams/v1/index.sjson": cannot find URL "http://juju-dist.s3.amazonaws.com/streams/v1/index.sjson" not found
[12:18] <fwereade> rogpeppe, fuck, tools-url needs trailing tools
[12:18] <rogpeppe> ah!
[12:18] <rogpeppe> fwereade: yes, i just realised that
[12:18] <fwereade> rogpeppe, cool
[12:22] <rogpeppe> fwereade, wallyworld_: that worked, thanks
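The fix discussed above amounts to appending a trailing `tools` path element, since simplestreams looks for `streams/v1/index.*` underneath the configured URL. A sketch of the relevant environments.yaml fragment (environment name hypothetical; bucket URL taken from the log above):

```yaml
environments:
  myenv:          # hypothetical environment name
    type: ec2
    # Wrong - simplestreams would then look for
    #   http://juju-dist.s3.amazonaws.com/streams/v1/index.sjson
    # tools-metadata-url: http://juju-dist.s3.amazonaws.com
    # Right - note the trailing "tools" path element:
    tools-metadata-url: http://juju-dist.s3.amazonaws.com/tools
```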
[12:22] <wallyworld_> \o/
[12:23] <rogpeppe> wallyworld_: time is thereby bought
[12:23] <wallyworld_> whooohooo
[12:27] <cgz> morning niemeyer
[12:27] <niemeyer> cgz: Heya
[13:30] <gary_poster> hey jamespage.  if you get a chance today, I'd like to get your thoughts about https://bugs.launchpad.net/ubuntu/+bug/1273865 (juju quickstart inclusion in Trusty).  Please let me know if you might have some availability.
[13:52] <dimitern> fwereade, https://codereview.appspot.com/53210044/  again? :)
[13:57] <hazmat> dimitern, fwereade how was the networking discussion?
[13:57] <hazmat> dimitern, i'm going through your network doc and adding some additional comments.
[13:57] <dimitern> hazmat, you mean about goamz?
[13:58] <hazmat> dimitern, yeah
[13:58] <hazmat> there's like 20 forks adding various features on github..
[13:58] <sinzui> r2268 broke dependencies.tsv
[13:59] <sinzui> http://162.213.35.54:8080/job/prepare-new-version/946/console
[14:00] <dimitern> hazmat, we decided to extend goamz with vpc/networking support and put all new apis in the EC2 type, keeping the public API intact
[14:02] <cgz> sinzui: rogpeppe is aware, apparently godeps lacks git support, but is sprinting so don't think he's fixed it yet
[14:02] <hazmat> dimitern, there's one significant item that feels like it's missing, the current sec group per machine need disappears.
[14:03] <hazmat> dimitern, since groups can be attached at runtime to machines
[14:03] <vds> if I go install juju-core from LP trunk, from where do I get commands like relation-set?
[14:04] <dimitern> hazmat, that's not entirely related is it?
[14:04] <dimitern> hazmat, it's to do with the firewaller mostly
[14:04] <hazmat> dimitern, its an additional api for the set.. also why the EIP manipulation?
[14:04] <jamespage> gary_poster, looking
[14:05] <sinzui> I would suggest reverting the change then. The project is not releasable or testable
[14:05] <dimitern> hazmat, i don't think we'll change how we handle security groups much, just add vpc support for them, at least for now
[14:05] <rogpeppe> cgz: godeps does actually have git support,
[14:06] <rogpeppe> cgz: but thumper broke dependencies.tsv unfortunately
[14:06] <dimitern> hazmat, the EIPs are needed to allocate/assign more than one public IP to a machine, and subsequently assign them to containers on that machine
[14:06] <rogpeppe> sinzui: do you want to propose a fix, or shall i?
[14:06] <hazmat> dimitern, there seems to be some fundamental misunderstanding
[14:07] <cgz> rogpeppe: I thought it did... that's why I was confused earlier :)
[14:07] <dimitern> hazmat, oh? what is it?
[14:07] <cgz> if it's just thumper's normal spaces, I can fix that again
[14:07] <sinzui> rogpeppe, cgz, it has no revno
[14:07] <dimitern> sorry, bbiab
[14:07] <hazmat> dimitern, you don't need eips for a public ip on a vpc instance, they are a precious resource with small limits, that juju shouldn't be touching (ie like 5 per vpc) so also entirely inappropriate for that use. and the important aspect for containers is private ip addressing via additional veths and ips which isn't covered
[14:08] <sinzui> rogpeppe, cgz, The file is tab separated, but there is no revno for github.com/loggo/loggo
[14:08] <rogpeppe> sinzui: it doesn't need a revno
[14:08] <rogpeppe> sinzui: but it does need a trailing tab
[14:08] <sinzui> ah
[14:08] <rogpeppe> sinzui: git doesn't have revnos
[14:09] <sinzui> I have the file open, I will test and propose now. Thank you rogpeppe
[14:09] <rogpeppe> sinzui: it should probably be a bit more lenient about the number of fields actually
[14:09] <hazmat> dimitern, nevermind i do see private addresses in the api changes, veth are also useful (esp cross subnet traffic) but perhaps not required
[14:10] <hazmat> dimitern, oh. and that's 5 vpc EIPs for an account not per vpc.
[14:11] <sinzui> rogpeppe, godeps is not happy once it can read the line:
[14:11] <sinzui> $ godeps -u dependencies.tsv
[14:11] <sinzui> godeps: cannot parse "dependencies.tsv": cannot parse "github.com/loggo/loggo\tgit\t89458b4dc99692bc24efe9c2252d7587f8dc247b\t": unknown VCS kind "git"
[14:11] <rogpeppe> sinzui: try go get -u launchpad.net/godeps
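The dependencies.tsv format at issue is tab-separated project / vcs / revision-id / revno fields, where git entries have no revno but (on older parsers) still need the trailing tab. A lenient parser along the lines rogpeppe suggests (a hypothetical sketch, not godeps' actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// dep is one line of a dependencies.tsv file: project, VCS kind,
// revision id, and (for bzr) a revno. Git entries have no revno.
type dep struct {
	project, vcs, revid, revno string
}

// parseLine is deliberately lenient about the number of fields, as
// suggested above: it needs at least project/vcs/revid and tolerates
// a missing or empty trailing revno column.
func parseLine(line string) (dep, error) {
	fields := strings.Split(strings.TrimRight(line, "\n"), "\t")
	if len(fields) < 3 {
		return dep{}, fmt.Errorf("too few fields in %q", line)
	}
	d := dep{project: fields[0], vcs: fields[1], revid: fields[2]}
	if len(fields) > 3 {
		d.revno = fields[3]
	}
	return d, nil
}

func main() {
	// The loggo line from the log: git, so no revno after the trailing tab.
	d, err := parseLine("github.com/loggo/loggo\tgit\t89458b4dc99692bc24efe9c2252d7587f8dc247b\t")
	fmt.Println(d.vcs, d.revno == "", err == nil) // git true true
}
```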
[14:11] <jamespage> gary_poster, do you publish releases of quickstart anywhere?
[14:12] <rick_h_> hazmat: what's the format for adding a kvm machine via the cli? I'm hunting for some docs but not finding any. Design is asking what to call creating a kvm instance and getting confused a bit in 'machine' 'bare metal' etc
[14:12] <gary_poster> jamespage: on call, in ppa, details here: https://bugs.launchpad.net/ubuntu/+bug/1273865
[14:12] <jamespage> gary_poster, yes - I read that
[14:12] <hazmat> rick_h_, sub kvm for lxc.. ie juju deploy --to=kvm:1  ... its in juju deploy/add-unit --help
[14:13] <sinzui> sorry rogpeppe , godeps still reports the same error
[14:13] <gary_poster> so jamespage only ppa then
[14:13] <jamespage> gary_poster, how about releasing versions to pypi like hazmat does for juju-deployer?
[14:13] <rick_h_> hazmat: awesome thanks
[14:13] <hazmat> rick_h_, not the kvm, but the container syntax with lxc.. and then it's just s/lxc/kvm
[14:14] <rogpeppe> sinzui: just to sanity check, what revision of godeps are you on?
[14:14] <sinzui> 10, rogpeppe
[14:15] <rogpeppe> sinzui: ah, 14 is the latest
[14:15] <sinzui> whoa, you're on 14
[14:15] <rogpeppe> sinzui: try bzr pull
[14:15] <sinzui> I did
[14:16] <rogpeppe> sinzui: try removing the godeps directory and doing go get again
[14:16] <sinzui> rogpeppe, this is my problem, not CI, it always gets a fresh checkout of godeps
[14:17] <dimitern> hazmat, yeah, these limits will hit us at some point
[14:17] <hazmat> dimitern, some point? even a trivial environment would hit those.
[14:18] <rogpeppe> sinzui: if CI gets godeps fresh, it should work ok
[14:18] <hazmat> its not a good solution for the use case you gave of density.
[14:19] <rogpeppe> sinzui: i just tried it and it seems to work (with the fixed .tsv file)
[14:19] <dimitern> hazmat, not necessarily - most services will happily work with private addresses, just the ones that need exposing will be an issue I think (after the limit is hit)
[14:19] <sinzui> rogpeppe, yep, It does. My branch is stuck because I was fighting with autobuilding in September.
[14:19] <cgz> if I bootstrap, it fails, I update the environments.yaml, and bootstrap again, it seems the ENV.jenv doesn't get updated
[14:19] <cgz> that's not deliberate, right?
[14:19] <hazmat> dimitern, there's seems to be a misunderstanding.. you don't need an eip for a vpc instance to attach the public net
[14:20] <rogpeppe> cgz: yes, you'll need to remove it
[14:20] <cgz> rogpeppe: that kinda sucks
[14:20] <rogpeppe> cgz: we really need to fix that behaviour
[14:20] <rogpeppe> cgz: agreed
[14:20] <rogpeppe> cgz: if bootstrap fails, it should remove the .jenv file if it just created it
[14:22] <cgz> okay, bug 1247152 it seems
[14:23] <dimitern> hazmat, you need an EIP for each instance in a non-default VPC
[14:23] <hazmat> dimitern, no you don't
[14:24] <dimitern> hazmat, from the aws docs: We assign each instance in a nondefault VPC only a private IP address, unless you specifically request a public IP address during launch. To ensure that an instance in a nondefault VPC that has not been assigned a public IP address can communicate with the Internet, you must allocate an Elastic IP address for use with a VPC, and then associate that EIP with the elastic network interface (ENI) attached to the instance.
[14:24] <hazmat> dimitern, why do you think you need it?
[14:24] <hazmat> see the unless part?
[14:25] <dimitern> hazmat, yes
[14:25] <dimitern> hazmat, but that won't be the case until you actually need a public address
[14:25] <hazmat> so if you launch with a public addr why do you need it?
[14:25] <hazmat> dimitern, you can't communicate to the internet from the instance without it
[14:26] <sinzui> rogpeppe, cgz : do either of you have a minute to review https://codereview.appspot.com/58260044
[14:26] <hazmat> er. without at least a public address attached to the instance, eip, or nat instance setup
[14:26] <rogpeppe> sinzui: will do
[14:27] <cgz> it's a race
[14:27] <dimitern> hazmat, with a default vpc, you can - they use nat to relay private ip outgoing traffic to a public ip from the ec2-vpc pool, not your account
[14:28] <hazmat> dimitern, depending on more than one pub address per instance is basically broken, due to the low limits on eips (which are meant primarily not for pub addresses, but for static addresses)
[14:28] <hazmat> dimitern, default vpc is the same as launching with a public address, hence option 1 of the 3 i listed.
[14:28] <hazmat> and no it doesn't use nat.
[14:29] <hazmat> nats are for private subnets outbound traffic, default vpc, is basically public subnet with every instance getting a public address at launch by default.
[14:29] <dimitern> hazmat, "We assign each instance in a default VPC two IP addresses at launch: a private IP address and a public IP address that is mapped to the private IP address through network address translation (NAT)."
[14:30] <hazmat> which is transparent to the user, its effectively ec2 internal impl.. the traditional nat in ec2 vpc is something very different
[14:30] <dimitern> hazmat, so for the default vpc we can still assign additional EIPs
[14:31] <hazmat> dimitern, ie.. this is nat for vpc.. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html
[14:32] <hazmat> hmm.. bad link.
[14:33] <hazmat> dimitern, yes.. up to 5 max for a user's entire account, and then things break.
[14:35] <dimitern> hazmat, so what can we do? there's a form you can fill in to raise your limits I saw somewhere
[14:35] <hazmat> dimitern, yes.. and justify use.. i'm not sure 'juju needs it' is going to be a good response.
[14:36] <dimitern> hazmat, "juju needs it" won't do, because it's per account
[14:37] <dimitern> hazmat, so users that need to deploy more than 5 machines/containers within a non-default vpc and all of them need EIPs, can be advised to file that form
[14:43] <gary_poster> jamespage: sorry, off call.  would putting it on PyPI help Ubuntu packaging?  To be clear, the story we want to advertise is something to the effect of "To get started with Juju, run `sudo apt-get install juju-quickstart && juju-quickstart [bundle name]` to immediately be walked through installation, configuration, and deploying your first full working solution on top of Juju`
[14:45] <jamespage> gary_poster, that's fine - but I'd prefer that we have something from juju-quickstart as an upstream that is 'released'
[14:46] <jamespage> rather than some arbitrary snapshot that we agree on
[14:48] <hazmat> dimitern, and juju is going to handle that partial failure mode well.. and it's less than 5 if they're doing standard eip jobs (bastion hosts, nat instances, etc) designing something that breaks with real world use feels quite odd. amz is pretty stingy with eip allocations. but maybe it's okay.. this use of eips wouldn't fly with any aws vpc user i know, but they're all using elbs (public or private) for the entry points / expose endpoints.
[14:48] <gary_poster> jamespage: ah, ok.  makes sense.  we'll get that done ASAP and I'll update the bug?
[14:48] <jamespage> gary_poster, sounds good - I'll cut a package for the archive and upload sometime this week
[14:48] <gary_poster> awesome thanks!
[14:49] <jamespage> your packaging branch generally looks good btw
[14:49] <gary_poster> great
[14:49] <jamespage> just needs a few extras for the archive
[14:49] <hazmat> dimitern, the reality check is no public cloud hands out public addresses like candy.. neutron is exposed in rackspace and hpcloud, but you can't get additional pub ips there.
[14:49] <dimitern> hazmat, are you saying we can alleviate some of the issues by using elb?
[14:49] <hazmat> dimitern, i'm saying real world usage of aws, doesn't expose services directly, they front with elb.
[14:50] <dimitern> hazmat, right
[14:50] <dimitern> hazmat, how about the dense containerization story?
[14:51] <dimitern> hazmat, it seems such deployments are more suited for private clouds on hyperscale hardware + openstack more than on ec2
[14:51] <hazmat> dimitern, even there what does expose mean?
[14:52] <hazmat> dimitern, a) firewaller not implemented.. b) an iptables firewall would be generic and useful across all of them
[14:52] <hazmat> c) its not a public ip addr
[14:53] <hazmat> dimitern, the story for private clouds is basically the same as public .. if you subtract the delta on expose.. its all about private networking
[14:54] <dimitern> hazmat, well, you can still have a large deployment with 100s of hadoop nodes and only a handful of web servers that take tasks or display results
[14:54] <hazmat> dimitern, dense containerization isn't really about lots of public endpoints.. its about efficiency
[14:55] <dimitern> hazmat, and in that case, you'll need EIPs only for the web servers and a nat instance
[14:55] <hazmat> dimitern, but you don't need it for the web servers.. you already have a public ip there
[14:56] <hazmat> and you wouldn't have a nat instance created by juju..
[14:56] <rogpeppe> axw: ping
[14:56] <rogpeppe> axw: (i can be hopeful :-])
[14:56] <cgz> rogpeppe: he's almost certainly asleep :)
[14:57] <axw> am not :)
[14:57] <dimitern> hazmat, it seems juju needs to manage a nat instance anyway
[14:57] <hazmat> dimitern, why?
[14:57] <axw> rogpeppe: what's up?
[14:57] <cgz> waaaat
[14:57] <dimitern> hazmat, and only assign EIPs to exposed machines
[14:57] <hazmat> dimitern, nat instances are for a very particular use case
[14:57] <rogpeppe> axw: looking at sshinit.ConfigureScript
[14:57] <dimitern> hazmat, because all the other nodes will need net access
[14:57] <hazmat> dimitern, did you see my nat instance vpc link?
[14:57] <dimitern> hazmat, yes
[14:57] <hazmat> dimitern, that's why you can give them public ips when launching.
[14:58] <hazmat> dimitern, maintaining an ec2 nat instance is a pita.. and it's a huge spof without a bunch of hacky scripts.
[14:58] <hazmat> well. its not a pita. but eliminating the spof is
[14:58] <dimitern> hazmat, non-default vpc, only private addresses unless added at launch (which we need not do unless we know the instance will need it), a nat instance for all the other nodes + EIP for it
[14:58] <hazmat> dimitern, you just always create with one
[14:59] <rogpeppe> axw: i'm looking at sshinit.ConfigureScript
[14:59] <rogpeppe> axw: and wondering how the stderr logic could possibly work
[14:59] <dimitern> hazmat, and you hit the wall after the 5th instance
[14:59] <hazmat> dimitern, those are not EIPs
[14:59] <rogpeppe> axw: first i wondered why there were two nested ()s
[14:59] <hazmat> they're random public ips.. eips are static addresses associated with the account
[14:59] <dimitern> hazmat, which are those?
[15:00] <rogpeppe> axw: then i saw that AFAICS there's nothing to separate stdout from stderr
[15:00] <hazmat> dimitern, if you launch a vpc instance with a pub addr its .not an eip allocation
[15:00] <dimitern> hazmat, in a non default vpc eips are the only way to have a public address, which is most likely our case
[15:00] <hazmat> dimitern, that's not true
[15:00] <dimitern> hazmat, it is
[15:01] <axw> rogpeppe: just a moment, context switching
[15:01] <dimitern> hazmat, according to the docs
[15:01] <rogpeppe> axw: np
[15:01] <hazmat> dimitern, you're reading them wrong
[15:02]  * hazmat sighs
[15:02] <dimitern> hazmat, "We don't automatically assign a public IP address to an instance that you launch in a nondefault subnet. Therefore, if you want an instance in a nondefault subnet to communicate with the Internet, you must either enable the public IP addressing feature during launch, or associate an Elastic IP address with the primary or any secondary private IP address assigned to the network interface for the instance."
[15:03] <hazmat> dimitern, OR
[15:03] <hazmat> either you spec pub at launch.. which is not an eip allocation
[15:03] <hazmat> or you runtime attach an eip
[15:03] <hazmat> so launch it with a pub
[15:03] <axw> rogpeppe: stdout and stderr are combined in our usage of cloud-init
[15:03] <rogpeppe> axw: yeah
[15:04] <rogpeppe> axw: so that logic is unlikely to have been tested
[15:04] <dimitern> hazmat, ok, I think I see now, sorry
[15:04] <rogpeppe> axw: AFAICS if you specify stderr, it will be lost.
[15:05] <rogpeppe> axw: or rather, not redirected anywhere
[15:05] <axw> rogpeppe: it won't be lost, it'll go into /var/log/cloud-init-output.log
[15:05] <axw> sorry just a sec
[15:05] <rogpeppe> axw: i don't think so
[15:05] <rogpeppe> axw: because if stderr is specified, there appears to be nothing that actually redirects stderr
[15:06] <dimitern> hazmat, ah, unless there are more than one network interface attached to the instance
[15:06] <rogpeppe> axw: i *think* what you want is (commands) >stdoutfile 2>stderrfile
[15:06] <hazmat> dimitern, none of this changes imo, that eip usage like this is a bad idea, trying to break one pub addr per instance is not going to be portable to any other public cloud, it's entailing a lot of work for a minute gain that is inherently limited in its scale, and won't work with any other cloud.
[15:06] <axw> rogpeppe: true, the SetOutput value would need to be mangled to get it to do the right thing
[15:06] <rogpeppe> axw: whereas currently it's ((commands) > stdoutfile) > stderrfile)
[15:07] <axw> rogpeppe: well, the "> ..." bit is specified in cloudinit.Config.SetOutput
[15:07] <axw> but it'd be awkward
[15:07] <dimitern> hazmat, from juju's perspective, we need to be able to assign multiple public addresses to an instance, regardless of cloud
[15:07] <axw> and you're right, we don't do it and it's not tested
[15:08] <dimitern> hazmat, in order to use these public addresses for containers running on that instance
[15:08] <axw> rogpeppe: do we need to separate stdout/stderr?
[15:08] <rogpeppe> axw: not too awkward, except that i don't know if bash allows redirecting stderr to a pipe. maybe you can do: foo 2| bar
[15:08] <rogpeppe> axw: not really
[15:08] <rogpeppe> axw: we could just remove all the related logic
[15:08] <dimitern> hazmat, it seems the only way to achieve that on EC2 is using VPC+EIPs
[15:09] <dimitern> hazmat, do we agree so far?
[15:09] <axw> rogpeppe: if we did that, I'd prefer if we just removed it all the way up to and including juju-core/cloudinit
[15:09] <hazmat> dimitern, there are other techniques.. and how do you achieve that on hp cloud or rackspace, or google, or joyent or azure.
[15:09] <hazmat> quite simply... you don't
[15:09] <axw> to prevent someone using something that's not supported all the way down
[15:09] <rogpeppe> axw: yeah, that's what i was thinking
[15:10] <hazmat> dimitern, fwereade would be nice to schedule a followup meeting.
[15:10] <dimitern> hazmat, on other clouds that support assigning multiple public ips to an instance, it will work just as well
[15:10] <dimitern> hazmat, with the relevant changes to the libraries for openstack, joyent, etc.
[15:10] <dimitern> hazmat, yes, a meeting will be nice
[15:11] <axw> rogpeppe: you can do "2>&1 |", but I don't know about piping without stdout...
[15:11] <dimitern> hazmat, but all this doesn't really change anything re goamz changes - we need these extra API calls anyway, no matter how we decide juju should use them
[15:11] <hazmat> dimitern, other clouds? there are no other public clouds that allow that.. i just listed a bunch, and none of them do. and amz does it at a very limited scale (and those addresses have primary uses for other purposes) and for private openstack, its not a public address.
[15:12] <hazmat> dimitern, fair enough
[15:12] <hazmat> dimitern, pls, pls move goamz to github
[15:12] <hazmat> dimitern, there are so many forks out there because its not on github
[15:12] <dimitern> hazmat, it's not up to me as you know :)
[15:12] <hazmat> niemeyer, , pls, pls move goamz to github
[15:12] <axw> rogpeppe: see bottom of http://www.tldp.org/LDP/abs/html/io-redirection.html
[15:12] <rogpeppe> axw: you can do it AFAIR
[15:13] <rogpeppe> axw: yeah, that's similar to the approach i'd use
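The difference the two are discussing is easy to demonstrate in bash (a standalone sketch, not the actual sshinit/cloudinit code): nesting the redirections just re-redirects stdout, while the single-group form rogpeppe suggests captures each stream separately.

```shell
demo() { echo out; echo err 1>&2; }

# Broken form (what the chat says the code effectively generates):
# both redirections apply to stdout, so stderr is never captured.
( ( demo ) > /tmp/o1 ) > /tmp/e1 2>/dev/null
cat /tmp/e1            # empty: stderr went to /dev/null, not the file

# Suggested form: one group, separate files for each stream.
( demo ) > /tmp/o2 2> /tmp/e2
cat /tmp/o2            # out
cat /tmp/e2            # err
```

(For piping rather than files, `2>&1 |` merges the streams as axw notes; there is no plain "pipe stderr only" operator, so stderr-only piping needs a swap like `3>&1 1>&2 2>&3`.)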
[15:14] <dimitern> hazmat, ok, so by "public ip" I mean "accessible from outside the cloud somehow", regardless public or private
[15:14] <rogpeppe> axw: but i think we can just forget it all
[15:15] <axw> rogpeppe: yep, +1 to deleting code
[15:15] <niemeyer> hazmat: With fwereade on a call.. biab
[15:16] <hazmat> dimitern, as another example of where that's moot.. one use case people do with vpc is to extend their org ip address space into the public cloud via directconnect or vpn connectivity, so the subnets' ip ranges are from the org and traffic gets routed back through the org as egress. the whole org gets access though because it's all part of their network.
[15:18] <axw> sleepy time. good night folks
[15:18] <dimitern> hazmat, that's the vpn story - gateways on both sides of the vpc subnet
[15:20] <hazmat> right but what does expose mean there... vpns make everything 'accessible outside of the cloud somehow'
[15:20] <hazmat> within the org
[15:20] <arosales> fwereade, good afternoon
[15:21] <arosales> fwereade, current status on simplestreams is: sinzui is making the jenkins workflow that he will hand off to utlemming. utlemming has adjusted priorities to make this happen asap.
[15:21] <fwereade> arosales, <3
[15:21] <sinzui> arosales, fwereade I am testing the simplestreams job now
[15:22] <dimitern> hazmat, yes, and in this case private ips are in fact "public" (using the above definition) - but that's a good point - once we have networks in juju, that vpn network that spans from the org into the cloud will be set as a public network
[15:23] <arosales> sinzui, thanks I know you have been hitting it hard lately on QA, tools, reporting, bugs, and releases
[15:23] <arosales> sinzui, utlemming should be available once your testing is complete and you need to add the workflow to the build server
[15:23] <arosales> mramm, ^
[15:24] <mramm> arosales: perfect
[15:24] <mramm> arosales: sinzui: thanks for jumping on this
[15:26]  * arosales owes sinzui a drink of his choosing next week :-)
[15:26] <dimitern> fwereade, review poke if you have 5m?
[15:43] <niemeyer> Wow
[15:43] <niemeyer> mramm, fwereade: It actually crashed
[15:43] <niemeyer> :)
[15:43] <niemeyer> Good timing
[15:43] <fwereade> haha
[15:43] <fwereade> dimitern, I have 5m now, on it
[15:44] <dimitern> fwereade, cheers!
[15:44] <niemeyer> Maybe they have speech detection for "take care"
[15:45] <fwereade> lol
[15:45] <niemeyer> hazmat: So, using github would be good, but as we discussed in the call, the way goamz is organized also encourages people to fork away
[15:46] <niemeyer> I'd like to solve that at some point
[15:46] <gary_poster> jamespage: https://pypi.python.org/pypi/juju-quickstart has a 1.0 and I added a comment to bug with link, FWIW.  Thanks again, and please let me know if we should do anything else.
[15:47] <hazmat> niemeyer, i think on github that people will push pull requests.. its a much more common etiquette... and helps build into a central core for all amz svc usage when people expand usage.. but understood re goamz structure and svc mapping.. i just don't think that's the underlying issue
[15:48] <hazmat> most of the forks are adding features to extant services (esp big ones like ec2), a few do new services (dynamodb) but that's not as common.. i think having a 'canonical' repo on github will become a magnet for both.
[15:48] <niemeyer> hazmat: I think it is, actually.. the API is gigantic, and people have to fork for every single field we didn't care to wrap
[15:48] <hazmat> true
[15:49] <hazmat> niemeyer, so that's more fundamental.. i thought you meant more of a service split of the pkg.. but yeah.. addressing the fundamental of the api service would be good.
[15:49] <hazmat> i know you're not a fan.. but i still think auto-gen from the api json would be a win.
[15:50] <mramm> hazmat: that does feel like a reasonable possibility, definitely a theory that is worth testing
[15:50] <niemeyer> hazmat: There are better ways to do it
[15:50] <hazmat> and could cover a param passing style that is extensible
[15:50] <niemeyer> hazmat: Auto-generation leads to a crappy and undocumented API
[15:51] <mramm> hazmat: niemeyer: I mean the github bit, not necessarily the auto-generation -- I have not thought that through at all.
[15:51] <niemeyer> hazmat: We can have an extensible and dynamic interface without that
[15:51] <hazmat> true
[15:51] <niemeyer> hazmat: As we do in lpad, for example
[15:53] <hazmat> niemeyer, the aws autogen and gce autogen do include docs though.. agree though its not always the most idiomatic interface
[15:53] <niemeyer> hazmat: And they're usually poor docs, often making no sense for the wrapped interface
[15:53] <hazmat> its more about just getting solid coverage of the huge api in a single step
[15:54] <niemeyer> hazmat: If we were to auto-generate, we can as well just build a dynamic layer and point people to the upstream docs
[15:54] <niemeyer> Which is what other libraries do
[15:54] <natefinch> hazmat, niemeyer:  if people are forking to add more to the API.... could we not just then encourage them to issue a pull request to put them back in the base repo?  Isn't that the whole point of open source collaboration?
[15:54] <hazmat> niemeyer, so would you be able to move goamz to github.com/juju/goamz ?
[15:55] <niemeyer> natefinch: Kind of.. I don't want to be the one in the front line of such an infinite number of pull requests adding tiny bits each
[15:55] <hazmat> niemeyer, or do you want to explore the extensible/dyn interface more first
[15:55] <hazmat> niemeyer, yeah.. boto had the same issue.. hence botocore based on autogen
[15:56] <hazmat> re infinite pull requests
[15:57] <niemeyer> hazmat: I shouldn't be the one moving that, if a decision is made
[15:57] <niemeyer> hazmat: I haven't been working on it, nor depend on it for anything I'm working on
[15:58] <natefinch> hazmat: pretty much anyone in core can move it
[15:58] <hazmat> niemeyer, gotcha. ok.
[15:58] <niemeyer> hazmat: It'd be wonderful if we could solve the API extensibility issue.. I have a path, but haven't had much of an incentive to stop what I'm doing to fix it
[15:58] <niemeyer> hazmat: Rather than being immediately helpful, this will actually create more work for other people
[15:58] <hazmat> natefinch, i had asked dimitern about doing it as part of the networking work, and he wanted niemeyer's ok first i think
[15:59] <niemeyer> hazmat: We can talk next week
[15:59] <hazmat> niemeyer, sounds good
[15:59] <niemeyer> hazmat: Regarding moving, I'm happy either way
[15:59] <niemeyer> hazmat: We should have a clear understanding of why we're doing it, though
[15:59] <niemeyer> hazmat: "because we'll have pull requests" doesn't look like a good answer
[16:00] <niemeyer> hazmat: We have had pull requests now, and people seldom step up to care
[16:00] <natefinch> niemeyer: because sabdfl said so? ;)
[16:00] <niemeyer> natefinch: What?
[16:00] <hazmat> niemeyer, they do on gh and it's part of the social culture, and they're forking there, it's a trivial process for them to push it back.
[16:01] <niemeyer> natefinch: I have no idea about what you mean by that
[16:01] <niemeyer> hazmat: I mean we *have* pull requests
[16:01] <niemeyer> hazmat: and we *have* people in the mailing list asking questions
[16:02] <niemeyer> hazmat: and we don't have IMO a good maintainership story
[16:02] <hazmat> niemeyer, understood, i'm saying we'll likely get more.. and decrease the long lived extant forks that i see out there.
[16:02] <niemeyer> hazmat: Having *more* requests won't solve that issue :)
[16:02] <hazmat> the maintainer story is an issue
[16:03] <hazmat> probably the biggest, and a good informal topic for next week
[16:37] <rogpeppe> anyone know what revno 1.17.0 corresponded to?
[16:46] <natefinch> rogpeppe: Any reason not to use a RWMutex in SharedValue instead of a regular mutex?
[16:52] <hazmat> rogpeppe, 2173
[16:52] <hazmat> rogpeppe,  $ bzr tags
[17:11] <rogpeppe> natefinch: could do perhaps, but it's an optimisation that's not really worth it
[17:12] <rogpeppe> natefinch: making it an RWMutex would make the code more complex
[17:12] <rogpeppe> natefinch: we don't hold the lock for any length of time, so the chance of contention is extremely slight, particularly in our use case
[17:13] <natefinch> rogpeppe: it barely makes the code more complex.  It adds zero lines, just instead of calling Lock in Get you call RLock.
[17:14] <natefinch> rogpeppe: I'm sure it doesn't matter for our purposes, just seemed like the right thing to use.
[17:14] <rogpeppe> natefinch: i generally think of RWMutex as an optimisation - it makes the code a little harder to reason about, and the gain in this case is zero
[17:16] <rogpeppe> natefinch: if clients were calling Get very frequently, it might be worth it, but most clients will call Getter
[17:18] <natefinch> rogpeppe: well, either way they're locking to check the value.  In fact... it might be better, because when the value changes and the Broadcast fires, every watcher is going to "simultaneously" try to relock and check the value
[17:18] <natefinch> rogpeppe: with an RWMutex they can all do it at the same time
[17:19] <natefinch> rogpeppe: I don't know at what level of watchers you'd get any perceptible benefit, but I also don't see the added complexity as being terribly large either.
[17:20] <rogpeppe> natefinch: fair enough; do it if you like. (you'll want to use RWMutex.RLocker)
[17:20] <natefinch> rogpeppe: yep
[17:20] <rogpeppe> natefinch: i can't get excited about it unless the frequency of changes is in the millions per second and there's more than one watcher, neither of which is true for us.
[17:21] <natefinch> rogpeppe: "for us"  :)  I'm hoping at some point we'll move all this generic code outside the juju walled garden and make them independent open source repos.
[17:22] <rogpeppe> natefinch: yeah. you know i have mixed feelings about that :-)
[17:23] <natefinch> rogpeppe: I know. :)
[17:25] <natefinch> rogpeppe: btw, that Cond.Wait magic sauce is pretty awesome
[17:26] <rogpeppe> natefinch: i took a little while to arrive at the particular idiom you see there, but it does work nicely, doesn't it?
[17:30] <natefinch> rogpeppe: very cool
[17:36] <TheMue> rogpeppe: got my breakthrough from command line to logging. especially filtering for multiple entities is nice.
[17:36] <rogpeppe> TheMue: yay!
[17:37] <TheMue> rogpeppe: only used the whole afternoon finding a self-produced bug *grmblfx*
[17:40] <rogpeppe> trivial (one line, but critical) review anyone? https://codereview.appspot.com/56070044/
[17:40] <rogpeppe> fwereade, fwereade, dimitern: ^
[17:45] <natefinch> rogpeppe: reviewed
[17:46] <rogpeppe> natefinch: thanks
[19:45] <natefinch> thumper: o/
[20:02] <dimitern> small review anyone? https://codereview.appspot.com/58170045
[20:03] <natefinch> dimitern: sure
[20:03] <dimitern> natefinch, thanks!
[20:05] <natefinch> dimitern: could bootstrap-ssh-timeout be simply called bootstrap-timeout?  the fact that we're connecting with SSH doesn't really matter, right? This is just a generic bootstrap timeout, right?
[20:06] <thumper> morning natefinch, dimitern
[20:06] <dimitern> natefinch, well, all of these 3 are only used for waitSSH
[20:06] <dimitern> thumper, hey
[20:07] <dimitern> natefinch, but I guess ssh is an internal detail
[20:07] <natefinch> dimitern: exactly my point. To a user, this is just the timeout on juju bootstrap.  Putting ssh on there is confusing.
[20:08] <dimitern> natefinch, i'm not against renaming it :) just comment on it pls
[20:08] <natefinch> dimitern: I am :)  Just wanted to discuss live first, to make sure I was understanding it correctly.
[20:11] <natefinch> dimitern: there you go :)
[20:11] <natefinch> dimitern: nice to get that in.
[20:11] <dimitern> natefinch, tyvm
[20:12] <natefinch> dimitern: welcome
[21:41] <thumper> WTH
[21:41] <thumper> natefinch: you're on trusty, right?
[21:41] <thumper> natefinch: why do I get this...
[21:41] <thumper> tim@jake:~/go/src/code.google.com/p/go.tools/cmd/vet$ go install
[21:41] <thumper> go install code.google.com/p/go.tools/cmd/vet: open /usr/lib/go/pkg/tool/linux_amd64/vet: permission denied
[21:41] <thumper> why is it trying to put it in /usr/lib?
[21:41] <thumper> doesn't for juju
[21:41] <thumper> when I go make install there
[21:42] <natefinch> thumper: that happened to me
[21:43] <natefinch> thumper: I forget what I did to fix it though.....
[21:46] <natefinch> thumper: have you pulled and updated the code?
[21:46] <natefinch> thumper: I had to do that first.... now it go installs just fine.
[21:48] <thumper> hmm
[21:48] <thumper> no changes
[21:49] <natefinch> thumper: I did upgrade to go 1.2... not sure if that affects anything... I wouldn't think it would change where things are installed.
[21:49] <thumper> do I need to set another GOPATH type var?
[21:49] <natefinch> thumper: just gopath is all you need
[21:49] <thumper> seems not
[21:50] <thumper> ffs
[21:50] <thumper> can't use lbox
[21:50] <thumper> because it wants go vet
[21:50] <thumper> can't install go vet because it is dumb
[21:50]  * thumper will poke davecheney about it later, perhaps he knows
[21:51]  * thumper goes to beat something up
[22:16] <hatch> 1.17.1 on 12.04 after trying to bootstrap local without sudo I get the following error ERROR Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused
[22:16] <hatch> I thought that 1.17.1 removed the sudo requirement from local deploys?
[22:33] <davecheney> thumper-afk: might be simpler to remove that requirement from lbox.check
[22:34] <davecheney> the Debian packaging for 1.2 is AFU
[22:34] <davecheney> combined with some poor decisions from upstream means you probably won't be able to make that work without building go from source
[23:07] <mbruzek> Hi juju-dev.  I have an up to date trusty system and I can not use the local environment.
[23:07] <mbruzek> When I bootstrap local that runs but I can't get a juju status afterward
[23:35] <wallyworld_> mbruzek: thumper-afk is your best bet for local provider issues. i know there may have been some trusty issues, but am not across the detail
[23:35] <mbruzek> thanks wallyworld_  afk means away from keyboard right?
[23:35] <wallyworld_> yeah, he's not too far away
[23:35] <wallyworld_> he should be back soon
[23:35] <mbruzek> thanks
[23:36] <wallyworld_> we're still ironing out any remaining trusty issues. lxc changed between precise and trusty so we have some work to do
[23:37] <mbruzek> I understand, and have evaluated .deb packages for sinzui before
[23:37] <mbruzek> I just can't make any progress on my local and I would *really* like to.
[23:37]  * mbruzek had it working 2 days ago, but decided to upgrade 
[23:40] <davecheney> http://paste.ubuntu.com/6840848/
[23:40] <davecheney> juju is not happy this morning
[23:40] <davecheney> is the environment bootstrapped, or not?
[23:43] <wallyworld_> davecheney: looks like something killed your instances? and now juju is confused
[23:43] <wallyworld_> cause it has the .jenv file and thinks it should be bootstrapped
[23:44] <wallyworld_> does the hp console show anything running?
[23:47] <hazmat> davecheney, juju destroy-env..
[23:47] <hazmat> i thought this bug got listed as fixed
[23:47] <davecheney> hazmat: wallyworld_ yeah, removed the .jenv file and everything was fine
[23:47] <davecheney> i guess the bug isn't fixed
[23:48] <hazmat> davecheney,  https://bugs.launchpad.net/juju-core/+bug/1176961
[23:48] <hazmat> nope.. it's not.. it got marked low.
[23:48] <wallyworld_> well that is less than optimal
[23:48] <hazmat> it's the one where you can't bootstrap or destroy.. and if you don't know to remove the .jenv, it kinda sucks.
[23:48] <wallyworld_> i did hear mumblings that we should fix that issue
[23:48]  * wallyworld_ thinks it should be High
[23:49] <wallyworld_> i'll follow up with some folks and see if we can get that one sorted out
[23:49] <wallyworld_> it is a pretty poor user experience
[23:49] <hazmat> cool, thanks wallyworld_
[23:50] <wallyworld_> np. i'll even have a go myself once i clear my current work items if i can't get traction to get it sorted
[23:52] <davecheney> sweet
[23:52] <davecheney> thanks
[23:53] <davecheney> http://paste.ubuntu.com/6840900/
[23:53] <davecheney> not the best error message