[01:19]  * thumper does a little scream
[01:52] <thumper> wallyworld_: ping
[01:52] <wallyworld_> hi
[01:52] <thumper> wallyworld_: will the world explode if I return nil for hardware characteristics in start instance?
[01:52] <wallyworld_> no, but it would be nice to have
[01:53] <wallyworld_> i think nil is ok, but not sure if an empty struct is preferred
[01:53] <wallyworld_> let me check
[01:53] <thumper> kk
[01:54] <wallyworld_> nil is ok
[01:56] <thumper> wallyworld_: ok, ta
[02:53] <thumper> hmm...
[03:05] <thumper> stabby
[03:05] <thumper> stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby stabby v
[03:06]  * bigjools sends calm thoughts across The Ditch
[03:24]  * thumper scratches his head
[03:25] <thumper> jam1: btw, your lxc upgrade check breaks all containers
[03:25] <thumper> jam1: we need to be smarter about them
[03:25] <thumper> jam1: I'm still working out how
[03:34] <thumper> more stabbies
[03:34] <thumper> wtf...
[03:42] <thumper> wallyworld_: we need public address and private address on state.Machine ASAP
[03:42] <thumper> this is driving me crazy
[03:55]  * thumper wonders wtf is going on here
[04:19]  * thumper thinks
[05:38] <wallyworld_> thumper: sorry, i didn't see your message before. i'll talk to martin tonight
[05:38] <thumper> wallyworld_: sure, I felt I was all alone again
[05:39] <wallyworld_> sometimes i don't notice the irc window blink
[05:39] <thumper> :)
[05:39] <wallyworld_> maybe i need two monitors
[05:39] <thumper> I'm this |..| close to getting the local provider working
[05:39] <wallyworld_> awesome
[05:39] <thumper> I found a major(ish) problem though
[05:39] <thumper> which I'm simmering on
[05:40] <wallyworld_> which is?
[05:40] <thumper> I need the storage provider to be running on machine-0 all the time
[05:40] <thumper> and to refactor how the environment gets created
[05:41] <thumper> I want to talk to fwereade about it when he starts
[05:41] <thumper> to get his thoughts on the best way to do this
[05:41] <wallyworld_> by all the time, you mean it is kicked off when juju bootstrap is run?
[05:42] <thumper> yes, and stays running
[05:42] <thumper> so we'd have: mongod, machine-agent, storage provider
[05:42] <thumper> to deploy a charm
[05:42] <thumper> the uniter accesses the storage of the environment
[05:42] <thumper> which isn't listening :)
[05:43] <thumper> or at least, isn't listening in the right place at the right time for the uniter
[05:44] <wallyworld_> yay
[05:44] <wallyworld_> i guess it was all different in py juju
[05:44] <thumper> probably
[08:04] <rogpeppe> morning all!
[08:11] <TheMue> rogpeppe: heya, how have your holidays been?
[08:11] <jam1> hi rogpeppe, good to have you back
[08:11] <rogpeppe> TheMue: great thanks!
[08:12] <rogpeppe> TheMue: we had excellent weather in Norway
[08:12] <rogpeppe> jam: thanks
[08:13] <rogpeppe> i've barely glanced at what's been going on while i've been away. any particularly significant mail threads/changes of direction  i should be aware of?
[08:13] <TheMue> rogpeppe: sounds good. and Norway itself?
[08:13] <rogpeppe> TheMue: really good. expensive though!
[08:14] <rogpeppe> TheMue: particularly alcohol
[08:14] <TheMue> rogpeppe: oh, didn't know. but I remember they have high taxes there.
[08:14] <rogpeppe> TheMue: yeah. beer is about 8-12 euros / 50cl
[08:15] <TheMue> rogpeppe: if it's only alcohol it doesn't hurt you. you surely never drink. :D
[08:15] <rogpeppe> TheMue: surely not
[08:15] <rogpeppe> TheMue: :-)
[08:15] <TheMue> rogpeppe: ouch, that's really expensive
[08:15] <rogpeppe> TheMue: we bought our full quota from the duty free on the way in
[08:16] <rogpeppe> TheMue: 6 bottles of wine and 8 bottles of beer :-)
[08:16] <TheMue> rogpeppe: clever
[08:16] <rogpeppe> TheMue: mandatory :-)
[08:16] <TheMue> rogpeppe: hehe, indeed
[08:18] <TheMue> rogpeppe: regarding the project everything went on fine. the first week was a bit slower, as you, Dimiter, and (for two days) I were off
[08:18] <TheMue> rogpeppe: but last week with more speed again
[08:23] <TheMue> rogpeppe: Tim and I found a bad behavior in the http package. a request takes a Reader, but asserts it to a ReadCloser and calls Close() later. this is used in the local storage, so the Seek(0, 0) when writing the fake tools fails.
[08:25] <rogpeppe> TheMue: interesting
[08:25] <rogpeppe> TheMue: i can't see that that behaviour is documented
[08:25] <rogpeppe> TheMue: it's trivial to work around though
[08:28] <TheMue> rogpeppe: Tim is working on it. he stumbled upon this behavior.
[08:28] <rogpeppe> TheMue: all you need to do is pass in ioutil.NopCloser(r) to NewRequest
[08:30] <rogpeppe> TheMue: presumably this is only actually a problem when the tools writing fails and is retried?
[08:30]  * rogpeppe goes to get some sun cream
[08:34] <TheMue> rogpeppe: the writing of the first tool file works, but the second file fails because the file is closed.
[09:03]  * TheMue is afk for a moment, bringing his daughter to the railway station (first single longer ride)
[09:04] <dimitern> rogpeppe: hey, welcome back!
[09:04] <rogpeppe> dimitern: yo!
[09:05] <dimitern> rogpeppe: good holiday?
[09:05] <rogpeppe> dimitern: great thanks
[09:05] <rogpeppe> dimitern: two holidays really :-)
[09:05] <dimitern> roh
[09:05] <dimitern> rogpeppe: awesome :)
[09:05] <rogpeppe> dimitern: both in norway, but the first week we rented a cabin in the south; the second we spent in oslo
[09:06] <dimitern> rogpeppe: cool and refreshing weather i presume?
[09:06] <rogpeppe> dimitern: pretty warm, particularly the second week. temp around 22-25
[09:06] <rogpeppe> dimitern: first week cooler, but we were on the sea
[09:06] <dimitern> rogpeppe: oh, compared to 32-34 here is nice
[09:06] <rogpeppe> dimitern: we were v lucky
[09:07] <rogpeppe> dimitern: yeah, perfect
[09:07] <rogpeppe> dimitern: and come back to find the garden has gone mental!
[09:07] <rogpeppe> dimitern: (in a good way)
[09:07] <dimitern> rogpeppe: :)
[10:02] <jam> rogpeppe: I think the big focus last week was that upgrading 1.10 => 1.11.2 showed lots of things broken. I think William and I have managed to track them down, I have a couple patches up for review
[10:03] <jam> We're looking to get a 1.11.3 out this week, with the hopes of it being considered a stable 1.12 with an upgrade path from 1.10
[10:03] <rogpeppe> jam: i was looking at one of your patches and trying to get my head around the underlying issues
[10:03] <jam> rogpeppe: If you need to ask questions, I can give more context
[10:04] <rogpeppe> jam: in particular i was trying to work out what permutations of versions/upgrading would cause the issues to be seen
[10:04] <rogpeppe> jam: let me go back to the CL and have another look.
[10:04] <fwereade> rogpeppe, heyhey, welcome back!
[10:04] <rogpeppe> fwereade: yo!
[10:05] <jam> rogpeppe: bootsrap with 1.10 client starting 1.10 server, upgrade using 1.11.2 (or trunk --upload-tools)
[10:05] <jam> stuff breaks
[10:05] <jam> fwereade: o/
[10:05] <fwereade> jam, heyhey
[10:05] <rogpeppe> jam: ah, yes, i saw that issue, but didn't get to the bottom of it
[10:06] <jam> fwereade:  so the good and bad bits, the good is that I have: https://code.launchpad.net/~jameinel/juju-core/api-set-creds-1199915/+merge/174620 up for review wrt bug 1199915
[10:06] <jam> the bad is that the existing bug actually means only machine-0 sort-of-works after upgrade
[10:06] <jam> because all other workers wait for the main thread to get the list of jobs from the api
[10:06] <jam> but the api can't be connected to because we didn't set the agent's password
[10:07] <jam> bug #1199915
[10:07] <jam> where did the bot go?
[10:07] <jam> _mup_: bug #1199915
[10:07] <rogpeppe> jam: launchpad says it's in the middle of an update
[10:08] <fwereade> jam, heh, LP doesn't want to talk to me
[10:08] <rogpeppe> jam: i was just trying to look at that bug
[10:08] <rogpeppe> jam: how much of these changes are temporary cruft that should go away eventually?
[10:09] <jam> rogpeppe: we generally need to come up with an answer for how we handle changing "stuff" when upgrading. Some of this will last as long as we want to be able to upgrade from 1.10
[10:09] <jam> which may be "short", depending on how long we support 1.10
[10:09] <rogpeppe> jam: yes, i wondered that.
[10:10] <rogpeppe> jam: i think we should make sure that we have comments in the code that indicate when the workaround might be removed
[10:10] <jam> rogpeppe: but generally, I don't think we've done much engineering around "binary 2.X needs these packages and this configuration that 2.X-1 didn't need, how do we configure it after upgrade" ?
[10:11] <rogpeppe> jam: yeah, that's definitely something we need to think about
[10:11] <rogpeppe> jam: packages are possibly easier than config
[10:13] <jam> rogpeppe: so *right* now agents have both a DB connection and an API one, so they can poke the DB connection directly to get their creds updated inside the db. At some point when we cut off direct DB access, upgrading from 1.10 will be infeasible (I think)
[10:14] <jam> rogpeppe: lp seems to be back up
[10:14] <jam> fwereade: ^^
[10:14] <rogpeppe> jam: we should perhaps make it so that the upgraded version sets the API passwords appropriately so that people can get there with an intermediate upgrade
[10:14] <jam> rogpeppe: which is what my patch does
[10:15] <rogpeppe> jam: ah, cool
[10:15] <jam> so you could do 1.10 => 1.12 => version-that-doesn't-have-direct-db-access
[10:15] <fwereade> jam, I'm fine saying that we only cut off direct db access for 2.0, and we only allow upgrades to 2.0 from 1.x versions that have done the requisiste juggling
[10:16] <fwereade> jam, what you said basically :)
[10:18] <jam> oddly enough on my system, "make simplify" is faster than "go fmt ./..." even though the former is passing the '-s' flag. I wonder if 'go fmt' searches through the code more slowly than 'find *.go'
[10:18] <jam> (1.7s vs 2.1s)
[10:25] <rogpeppe> jam: i think that might be because of an outstanding issue with the go command's package matching
[10:26] <rogpeppe> jam: try find . -name '*.go' | xargs gofmt -w
[10:26] <rogpeppe> jam: i *thought* the issue had been fixed tho
[10:27] <jam> 1.56s
[10:28] <rogpeppe> jam: the problem was that the go command scans all directories in GOPATH anyway, even if there's a fixed prefix
[11:03] <fwereade> frankban, dimitern: responded to https://codereview.appspot.com/11003044/
[11:03] <fwereade> frankban, dimitern: LGTM, but I may be missing some nuance of dimitern's thoughts
[11:04] <dimitern> fwereade, frankban: will take a look again
[11:06] <frankban> thanks fwereade and dimitern!
[11:18] <fwereade> jtv, jam: responded to https://codereview.appspot.com/11234043/
[11:27] <jam> fwereade: sure, you review all the easy ones first :)
[11:28] <fwereade> jam, apart from wallyworld's pipeline which I'm saving for after the meeting, I'm going purely by age, I think
[11:28] <jam> fwereade, rogpeppe, dimitern, mgz, wallyworld: standup in 3 min: https://plus.google.com/hangouts/_/f497381ca4d154890227b3b35a85a985b894b471
[11:28] <fwereade> jam, I'm on your scary one now :)
[11:28] <jam> fwereade: doesn't mean you didn't get to easy ones first
[11:28] <dimitern> frankban, fwereade: LGTM
[11:54] <rogpeppe> jam: gimme 3 minutes while i sort a couple of things out
[11:55] <jam> rogpeppe: np
[11:58] <jam> fwereade: +1 on the separate provider section, I also worry how we get a provisioner running in each provider (or does the provisioner just run
[11:58] <rogpeppe> jam: back
[11:58] <jam> at the state server node, and just knows how to set stuff on each provisioner.)
[11:58] <jam> rogpeppe: starting
[11:59] <fwereade> jam, I am very consciously not thinking about how we fix the code for this, it's still percolating
[11:59] <fwereade> jam, I think it'll be one provisioner per provider though
[12:14] <frankban> dimitern: thanks
[12:15] <dimitern> frankban: yw
[12:18] <fwereade> jam, I think I just understood what you were saying better
[12:20] <fwereade> jam, we'll want the manager node(s) running all the provider provisioners because I Do Not Want to spread manager nodes across clouds until someone can demonstrate that it really is sane & safe to do so
[12:20] <jam> fwereade: right
[12:20] <jam> having to have an instance running on each cloud you want to provision on isn't great
[12:20] <jam> (an instance running the jujud provisioner code)
[12:21] <fwereade> jam, can't get away with it forever, though, keeping all manager nodes in one cloud renders the whole env susceptible to failures of that cloud
[12:21] <fwereade> jam, not one to fix this year, though
[12:22]  * fwereade lunch
[12:30] <rogpeppe> TheMue: just finished meeting with jam. i've got lunch now. i'll ping you after that, if that's ok.
[12:43] <TheMue> rogpeppe: it's fine with me, yes
[13:47] <rogpeppe> TheMue: https://plus.google.com/hangouts/_/a969ef2b58bd85872816e26fa3574a46ef3cb428
[13:54] <fwereade> jam, https://codereview.appspot.com/11137044/ reviewed
[13:55] <jam> fwereade: so the SetPassword before setting the Config thing. We are pulling the password out of the existing conf, so we can do it again
[13:56] <jam> If we write-to-disk first, then we can't tell we need to SetPassword
[15:46] <fwereade> if anyone's wondering where dimitern is, he's had a power cut for a while now
[15:46] <mgz> how frustrating
[15:47] <mgz> I take it he's got phone and computer, just no internet?
[15:52] <fwereade> I presume he has at least some power in his battery, and yeah, he texted me
[16:15] <hazmat> what's the recommended way of getting mongodb for dev?
[16:15] <hazmat> and testing
[16:15] <hazmat> per https://bugs.launchpad.net/juju-core/+bug/1175493 the package and ppa versions don't work
[16:15] <_mup_> Bug #1175493: Tests fail using the mongo package from raring, or ppa <juju-core:Triaged> <https://launchpad.net/bugs/1175493>
[16:16] <hazmat> nm.. the link is in the readme
[16:38] <Daviey> mgz: Is this expected? http://pb.daviey.com/wIxM/
[16:39] <mgz> Daviey: yup
[16:39] <mgz> to use --upload-tools implies you're compiling locally and want to use your own build stuff
[16:40] <mgz> arguably we want a nicer error/to hide that flag for general users
[16:40] <mgz> see bug 1135564
[16:40] <_mup_> Bug #1135564: juju bootstrap --upload-tools fails, missing dependency <juju-core:Triaged> <https://launchpad.net/bugs/1135564>
[16:40] <Daviey> mgz: http://pb.daviey.com/wP3I/
[16:41] <Daviey> right now, i feel fubared
[16:41] <mgz> Daviey: you need to follow lp:juju-core README if you actually want to do that
[16:42] <mgz> involves getting all the source and building it
[16:42] <mgz> probably what you really want is not that flag, but to just specify our latest release binaries?
[16:45] <mgz> Daviey: what exactly are you trying to do?
[16:47] <hazmat> Daviey, unless you're testing a branch/trunk.. this is probably what you want: $ juju sync-tools
[16:56] <mgz> fwereade: can you cast an eye over codereview.appspot.com/11284044 to see if you have any issues with the suggested model
[16:57] <fwereade> mgz, sorry, just going out; I've opened a tab with it and will give it proper attention later today
[16:57] <mgz> ta
[16:58] <sidnei> mramm: bug #1201503
[16:58] <_mup_> Bug #1201503: Add disk constraint <juju-core:New> <https://launchpad.net/bugs/1201503>
[16:58] <sidnei> which i believe elmo was poking  you about
[17:00] <rogpeppe> fwereade: ping
[17:00] <rogpeppe> fwereade: oops sorry, just saw you went out
[17:43] <rogpeppe> right, that's me for the day
[17:43] <rogpeppe> see y'all tomorrow
[18:10] <hazmat> are there any known issues that cause the test suite to just hang and spin on cpu
[18:10] <hazmat> i've tried with mongo 2.2.4 from raring, and from the s3 dist of mongo 2.2.0
[20:52] <thumper> morning
[21:14] <thumper> mramm: did you want to chat at some stage?
[21:15] <mramm> thumper: sure
[21:15] <thumper> mramm: like... when?
[21:16] <mramm> I'm free now
[21:16] <mramm> and in about 2 hours
[21:17] <thumper> now is good
[21:21] <sidnei> hey thumper, i was looking at getting my feet wet with some juju-core and implement a fix for bug #1201503 but wanted some confirmation that it does make sense in the first place
[21:22] <_mup_> Bug #1201503: Add disk constraint <juju-core:New> <https://launchpad.net/bugs/1201503>
[21:52] <thumper> sidnei: hmm...
[21:52] <thumper> sidnei: I'm not entirely sure, do we have access to the amount of disk?
[21:52] <thumper> sidnei: also
[21:52] <thumper> sidnei: there are some future plans for dealing with mounting "special" block devices on cloud images
[21:52] <thumper> as this is provided by some cloud providers
[21:52] <thumper> but out of scope for right now
[21:54] <sidnei> thumper: nova flavor-list shows disk and ephemeral disk: http://paste.ubuntu.com/5878931/ in prodstack there's some custom flavors with eg 2 CPUs and 10, 20, 50, 100 of disk, as if multiple variations of m1.small with only different disk size
[21:54] <sidnei> s/m1.small/m1.medium
[21:55] <thumper> sidnei: well, if you think it makes sense, and likely that someone wants to do that, then sure, maybe
[21:55] <thumper> sidnei: you may want to run it by fwereade
[21:55] <sidnei> thumper: when i said it wasn't supported i got a 'dude, seriously' from elmo, so yeah. :)
[21:55] <thumper> heh
[21:56] <thumper> I'll take that as a "it will be used" type thing
[21:56] <thumper> you should write that in the bug :)
[21:56] <sidnei> it's somewhat of a workaround to https://bugs.launchpad.net/juju-core/+bug/1183831 since if instance-type was supported we'd just use that
[21:56] <_mup_> Bug #1183831: ec2 constraints missing <juju-core:Triaged> <https://launchpad.net/bugs/1183831>
[21:57] <sidnei> i guess workaround is a bad way to put it
[22:07] <fwereade> thumper, sidnei: I've been having some very interesting conversations that may be able to let us do an end-run around the problems with provider-specific constraints, and it seems that sabdfl is very keen on the direction; so while it remains a little up in the air right now, it is more likely than not that we will see movement on that front soon
[22:08] <thumper> fwereade: yeah, just reading your doc
[22:08] <fwereade> thumper, sidnei: a disk constraint is not a bad idea in itself necessarily but we'd need to be quite clear on exactly what it is and is not
[22:08] <fwereade> thumper, sidnei: and I worry about possible interactions with future plans for storage
[22:09] <thumper> yeah, the future storage thing was something I was a bit worried about too
[22:09] <sidnei> yeah, me too. it is somewhat of a blocker though for moving to juju-core, since we don't have instance-type as a forward path either.
[22:09] <fwereade> thumper, sidnei: so my inclination is to discourage and take this as an even-stronger vote for instance-type/flavor constraints
[22:10] <fwereade> sidnei, but if you had them, it would be moot, or at least very low priority?
[22:10] <sidnei> either one of them would be fine, instance-type or disk constraint
[22:11] <sidnei> instance-type would be the less friction one since we wouldn't need to change existing scripts
[22:11] <fwereade> sidnei, all the better then, I'd rather have that too
[22:13] <fwereade> sidnei, can we sync up again on weds evening, at which point relevant plans and schedule discussions will have progressed another couple of steps?
[22:13] <sidnei> sure
[22:13] <sidnei> since i haven't seen the proposal, maybe im talking bullshit but would it make sense to prefix provider-specific constraints with a provider tag? as in ec2:instance-type or openstack:flavor and so on? and if you try to use those in a provider that doesn't support them they would be ignored?
[22:14] <fwereade> sidnei, openstack:flavor is IMO a big no, that was the main problem -- but canonistack:flavor seems just fine
[22:14] <fwereade> sidnei, openstack:flavor looks way more portable than it actually is
[22:15] <sidnei> because one can define custom flavors?
[22:15] <fwereade> sidnei, but prodstack:flavor, or hp:flavor, or whatever, *do* make sense and are also *clearly* not globally portable
[22:15] <fwereade> sidnei, yeah, the vocabularies may have enough overlap to be tempting but trying to apply them too broadly will hurt like hell ;)
[22:17] <fwereade> sidnei, and hopefully in practice we'll be able to infer provider anyway, so it should actually just be "instance-type" or "flavor" in many many cases
[22:17] <sidnei> im thinking in terms of 'openstack:flavor' is a filter that gets passed through to the openstack provider without vocabulary validation; there's no 'hp' or 'prodstack' provider, and neither 'hp' or 'prodstack' are values you specify in the environments config
[23:06] <arosales> quick 'juju init' question regarding the boiler template
[23:18] <davecheney> arosales: shoot
[23:21] <arosales> the boiler template has the following for hpcloud:
[23:21] <arosales>     # Usually set via the env variable AWS_SECRET_ACCESS_KEY, but can be specified here
[23:21] <arosales> should that be HP_SECRET_ACCESS_KEY?
[23:21]  * arosales not sure if that was intentional or not, thus thought I would ping here.
[23:24] <davecheney> arosales: probably shouldn't be anything, I don't think there is such a var for HP
[23:24] <davecheney> it's probably copy pasta
[23:24] <arosales> I thought you guys did land keys for hp .. .
[23:24]  * arosales checks env.yaml
[23:25] <davecheney> arosales: yes, but they wouldn't be called HP_*
[23:25] <davecheney> we don't have a HP provider, we have an openstack provider
[23:26] <arosales> ah, ok.  Well that was just a comment in the boiler template
[23:26] <arosales> the actual yaml keys were
[23:26] <arosales> secret-key: <secret>
[23:26] <arosales> access-key: <secret>
[23:27] <arosales> basically the stanza http://pastebin.ubuntu.com/5879146/
[23:28] <davecheney> hmm, i see what you mean
[23:28] <davecheney> neither is correct
[23:31] <arosales> marteen was trying to make it easier to get started with juju and was trying to script editing the env file and I found the keys for hp and aws to be the same.
[23:31] <arosales> davecheney, ok sounds like I should file a bug for follow up.
[23:34] <arosales> davecheney, https://bugs.launchpad.net/juju-core/+bug/1201628
[23:34] <_mup_> Bug #1201628: HP Cloud Boiler-plate (juju init) template has AWS info <juju-core:New> <https://launchpad.net/bugs/1201628>
[23:35] <davecheney> arosales: ta
[23:35] <wallyworld> arosales: those aws keys do work for openstack
[23:36] <wallyworld> although the openstack ones work too
[23:36] <arosales> wallyworld, I am guessing the key auth does work for openstack (ie hpcloud)
[23:36]  * arosales was just wondering about the AWS comment.
[23:36] <wallyworld> key auth does work now, or is supposed to
[23:36] <wallyworld> i haven't tested it personally
[23:36] <arosales> I think I have seen it work
[23:37] <arosales> bug is out there for triage accordingly
[23:37]  * arosales grabs some dinner
[23:37] <arosales> wallyworld, davecheney thanks for the help.
[23:37] <wallyworld> arosales: np. the AWS key is supported for ease of migration between ec2 and openstack
[23:38] <wallyworld> it looks for openstack specific env vars first
[23:38]  * thumper beats the machine over the head with the local provider stick
[23:38] <wallyworld> but uses the aws ones if there
[23:44] <thumper> hmm...