[00:02] <davecheney> rick_h_: it's *always* mongodb
[00:03] <waigani> I'm hitting an error that I can't track down. Here is my wip: https://codereview.appspot.com/76670043
[00:03] <waigani> error: launchpad.net/juju-core/testing/testbase.PatchEnvPathPrepend(0): not defined
[00:09] <davecheney> wallyworld: launchpad.net/juju-core/testing/testbase.PatchEnvPathPrepend(0)
[00:09] <davecheney> ^ still importing the lp version
[00:28] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1293269
[00:28] <_mup_> Bug #1293269: juju fails to destroy environment due to missing configuration key <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1293269>
[00:29] <davecheney> great, now destroy environment doesn't work
[00:32] <thumper> o/ davecheney
[00:32] <thumper> I have a branch for the local provider lxc settings
[00:32] <thumper> davecheney: ah, I know what you did there
[00:32] <davecheney> thumper: thanks
[00:32] <thumper> davecheney: that was all me...
[00:33] <thumper> they used to be omit, then I changed the default to "", and now they are omit again
[00:33] <thumper> pretty sure they were only omit for releases
[00:33] <davecheney> :(
[00:33] <thumper> dev envs are likely screwed
[00:34] <davecheney> here is my environment.yaml
[00:34] <davecheney>     local:
[00:34] <davecheney>         type: local
[00:34] <davecheney> <<EOF
[00:34] <thumper> yeah...
[00:34] <thumper> you will need to do the manual thing
[00:35] <thumper> although, I have a bug on my plate to fix destroy environment
[00:35] <thumper> however due to the early dying
[00:35] <thumper> won't help this case
[00:37] <davecheney> fudge
[00:37] <davecheney> thumper: has this screwed 1.17.4 -> 1.17.5 upgrades ?
[00:38] <thumper> davecheney: no, those are fine
[00:38] <thumper> the setting was changed and changed back in between releases
[00:39] <davecheney> right
[00:39] <davecheney> cool
[00:39] <davecheney> sorta
[00:40] <thumper> yeah...
[00:40] <thumper> kinda
[00:40]  * davecheney remains alarmed, but not alert
[00:41] <bodie_> rick_h_, davecheney thanks :) I'm on the packaged Mongo for Ubuntu 14.04 -- tried building 2.4.9 from source but I got tons of build errors for some reason.  bleh
[00:42] <davecheney> bodie_: that is the worst of all choices
[00:42] <davecheney> sudo apt-get install juju-mongodb
[00:42] <bodie_> oy
[00:42] <bodie_> ok
[00:42] <bodie_> much appreciated
[00:42] <davecheney> this should have already happened for you
[00:42] <davecheney> this package is a dep of juju-local
[00:42] <davecheney> is this a documentation bug ?
[00:44] <bodie_> Maybe it didn't go in because I already had Mongo
[00:44] <bodie_> but, I don't think Go would fetch a package, would it?  Just source, I thought
[00:45] <bodie_> I'm still just getting my head wrapped around the whole thing, I think I may have been told the packaged Mongo was suitable
[00:45] <davecheney> not suitable
[00:45] <davecheney> mandatory :)
[00:45] <davecheney> this smells like a bug
[00:46] <davecheney> juju should detect that you had /usr/bin/mongodb
[00:46] <davecheney> where we require
[00:46] <davecheney> /usr/lib/juju/bin/mongodb [sic]
[00:46] <davecheney> if you uninstall mongo
[00:46] <bodie_> hm
[00:46] <davecheney> and run a juju command it *should* give you reliable advice
[00:47] <davecheney> thumper: well fuck, how do I delete that environment then ?
[00:47] <davecheney> just table flip it ?
[00:47] <thumper> yeah...
[00:47] <thumper> lxc-destroy all the machines
[00:47] <thumper> stop the services
[00:47] <thumper> and delete the upstart entries
[00:47] <thumper> delete the rsyslog entry
[00:47] <davecheney> right-o
[00:47] <thumper> and delete everything from /var/lib/juju/containers
[00:50] <bodie_> I think I have some kinda conflict with a mongo already sitting on my system, but I just now removed everything mongo using apt... maybe the one I'd built from source
[00:50] <bodie_> yeah
[00:51] <bodie_> okay, removed that and now it's suggesting I install mongodb-server
[00:51] <bodie_> but you're saying I need to use juju-mongodb
[00:51] <davecheney> bodie_: this is absolutely a bug
[00:54] <bodie_> yay, I accomplished something
[00:54] <wallyworld> davecheney: you recently updated the ec2 provider to add ppc and arm arches to tools lookup constraints, right?
[00:56] <davecheney> wallyworld: only ppc
[00:56] <davecheney> someone else is to blame for arm
[00:56] <davecheney> arm64
[00:56] <wallyworld> davecheney: because we have a list of hard coded aws instances types (m1.small, etc) which are recorded as supporting i386 and amd64 only
[00:56] <davecheney> wallyworld: i know
[00:57] <wallyworld> hence image matching will fail
[00:57] <davecheney> this was raised in review and I believe those lines were removed
[00:57] <wallyworld> which lines?
[00:57] <wallyworld> the tools lookup arches?
[00:57] <davecheney> the ones in the ec2 and azure provider from memory
[00:58] <wallyworld> so
[00:58] <wallyworld> 	return &simplestreams.MetadataLookupParams{
[00:58] <wallyworld> 		Series:        e.ecfg().DefaultSeries(),
[00:58] <wallyworld> 		Region:        region,
[00:58] <wallyworld> 		Endpoint:      ec2Region.EC2Endpoint,
[00:58] <wallyworld> 		Architectures: []string{"amd64", "i386", "arm", "arm64", "ppc64"},
[00:58] <wallyworld> 	}, nil
[00:58] <davecheney> guess I didn't get all of them
[00:58] <wallyworld> the Architectures above should just be i386, amd64 right?
[00:58] <davecheney> not sure about arm
[00:58] <wallyworld> no problem
[00:58] <davecheney> i'd say no
[00:58] <davecheney> but I think arm via stgrabers' qemu ami is possible
[00:59] <davecheney> being realistic, just 386 and amd64
[00:59] <wallyworld> ok. so long as we are consistent across the board
[00:59] <wallyworld> that's what i'm aiming for
[00:59] <wallyworld> then we can add in the extra arches
[01:00] <wallyworld> i just got a little confused when reading the code
[01:00] <bodie_> do I need to submit a bug report or can I simply lbox propose my fix?
[01:00] <davecheney> bodie_: if you can fix it yourself, go for it
[01:03] <bodie_> so it HAS to be juju-mongodb?  I thought it just had to be a version with SSL support
[01:03] <davecheney> bodie_: they are one and the same
[01:03] <davecheney> especially if you are on precise
[01:04] <bodie_> hmm
[01:04] <bodie_> not in my case I guess since I'm using trusty tahr
[01:04] <bodie_> it installed 2.7.0-pre
[01:05] <davecheney> bodie_: oh bodie_ the problem is so many turtles deeper
[01:05] <bodie_> I like turtles... but not this many turtles
[01:06] <bodie_> was really confusing trying to make my tests pass -- I knew it had something to do with the Mongo version but beyond that it gets really murky
[01:06] <bodie_> and then having to wrangle build issues on top of that was making me insane
[01:07] <bodie_> I think GCC 4.8.2 treats a certain thing as a warning that isn't supposed to be treated as a warning
[01:07] <bodie_> do you know if the Vagrant box is suitable for dev?
[01:07] <davecheney> bodie_: is the README in the project incorrect ?
[01:10] <bodie_> ah, the make install-dependencies bit?  I hadn't tried that, I think I stopped reading when I saw the directions for installing the binaries and assumed the README was about that
[01:11] <bodie_> the juju build was failing due to an incompatibility with the latest gwacl, but someone told me about the "known working revisions" doc
[01:21] <bodie_> anyway, thanks for the assist :)
[01:26] <waigani> the "PatchEnvPathPrepend(0): not defined" error I was hitting was resolved by rebuilding the pkg dir
[01:27] <waigani> which means the branch is now ready for review: https://codereview.appspot.com/76670043
[02:22] <davecheney> thumper: good news
[02:22] <thumper> davecheney: yes?
[02:22] <davecheney> building static tools means we don't need to install more pkgs into the environment
[02:30] <wallyworld__> axw: how far away is this?
[02:30] <wallyworld__> 	// TODO(axw) 2014-02-11 #pending-review
[02:30] <wallyworld__> 	//     Embed state.Prechecker, and introduce an EnvironBase
[02:30] <wallyworld__> 	//     that embeds a no-op prechecker implementation.
[02:30] <axw> wallyworld__: oops, that was reviewed and should've had an issue added...
[02:31] <axw> but the EnvironBase thing may not happen, as the implementation ended up changing
[02:31] <axw> wallyworld__: why?
[02:31] <wallyworld__> axw: i want to introduce some more common, shared functionality across all providers
[02:31] <wallyworld__> in this case SupportedArchitectures()
[02:32] <axw> wallyworld__: okay. I have no immediate plan to add it, so feel free to do it
[02:32] <wallyworld__> i'll just do a new interface i think
[02:32] <wallyworld__> Go loves single method interfaces :-/
[02:34] <wallyworld__> just gotta think of a name, and no I don't want to use Archer cause there's no arrows to be seen anywhere
[02:34] <axw> why do you need another interface?
[02:35] <axw> adding it to Environ seems appropriate
[02:35] <wallyworld__> guess so
[02:36] <wallyworld__> i think though having an EnvironCapability interface would be good
[02:36] <wallyworld__> we can add more methods to it as needed
[02:36] <wallyworld__> eg is block storage supported
[02:37] <wallyworld__> and easier to stub out for testing
[02:38] <axw> sounds reasonable
[02:45] <thumper> wallyworld__:  https://codereview.appspot.com/74660044
[02:46] <wallyworld__> thumper: https://codereview.appspot.com/76710043/
[02:46] <wallyworld__> :-)
[02:46] <thumper> np
[02:50] <thumper> wallyworld__: I have a parent teacher interview, but will review ASAP on return
[02:50] <wallyworld__> ok, no hurry
[02:50] <wallyworld__> have fun
[02:51] <thumper> wallyworld__: why not utils.arch ?
[02:51] <thumper> then arch.AMD64
[02:51] <thumper> or something
[02:51] <wallyworld__> thought about that, yeah can do that
[02:51] <thumper> I'll keep looking first
[02:51] <wallyworld__> just didn't want to introduce such a small package
[02:51] <thumper> just wondering out loud at the first thing I saw
[02:51] <thumper> ok, let me keep looking first
[02:52] <wallyworld__> sure, but i think you may have a valid point cause i almost did it that way
[02:56] <davecheney> 2014-03-17 02:54:32 INFO juju.worker.uniter uniter.go:474 running "config-changed" hook
[02:56] <davecheney> 2014-03-17 02:54:32 ERROR juju.worker.uniter uniter.go:480 hook failed: fork/exec /var/lib/juju/agents/unit-u0-0/charm/hooks/config-changed
[02:56] <davecheney> just hit this with 1.17.5.1
[03:01] <bodie_> latest test output....
[03:01] <bodie_> http://paste.ubuntu.com/7105935/
[03:01] <bodie_> going to bed, will look over this in the morning. any input welcome
[03:01] <bodie_> o/
[03:03] <davecheney> bodie_: sorry mate, still the wrong mongodb
[03:04] <davecheney> or more specifically
[03:04] <davecheney> the test suite tries to bring up a mongodb, but fails
[03:04] <davecheney> most of those tests don't consider the case that mongo fails to start
[03:04] <davecheney> and so panic during tear down
[03:09] <davecheney> well fuck
[03:09] <davecheney> i need to log a bug about charm-helpers-sh being missing
[03:09] <davecheney> but i can't confirm if it works on precise because I've hit another bug
[03:40] <thumper> davecheney: is that hook failure one where the hook doesn't exist?
[03:41] <thumper> davecheney: the code paths should catch that, and I was horribly confused...
[03:41] <davecheney> thumper: that is correct
[03:41] <davecheney> it shouldn't happen
[03:41] <thumper> I don't understand how we get that error
[03:42] <davecheney> axw: was looking into it
[03:42] <davecheney> i think
[03:42] <thumper> we explicitly catch that error
[03:42] <thumper> and I see that explicit catch happening locally
[03:42] <thumper> and on ec2
[03:42] <thumper> so I'm dumbstruck
[03:44] <axw> I was looking at what?
[03:44] <axw> nope.. don't think so
[03:46] <axw> oh yeah I remember this...
[03:55] <davecheney>  /insert jarring chord
[03:57] <jam> morning thumper, I'm in the hangout whenever you're around
[03:58] <wallyworld__> thumper: so can we now drop JUJU_TESTING_LXC_FORCE_SLOW?
[03:59] <thumper> wallyworld__: yes
[03:59] <thumper> jam: ok
[04:00] <wallyworld__> thumper: wanna do that then as part of your work?
[04:00] <thumper> wallyworld__: yeah
[04:00] <wallyworld__> ok, i'll re-review then
[04:15] <axw> davecheney: so the uniter code normally works because exec.Command internally uses LookPath
[04:15] <axw> LookPath is the one that returns an *exec.Error
[04:15] <axw> is it not working with gccgo?
[04:17] <davecheney> axw: this was on ec2
[04:17] <davecheney> with gc
[04:17] <axw> huh
[04:17] <davecheney> this is not a ppc64el specific issue
[04:31] <thumper> wallyworld__: https://codereview.appspot.com/74660044/
[04:31] <wallyworld__> ok
[04:41] <axw> davecheney: you're on Go tip aren't you? (or > 1.2 at any rate)
[04:41] <axw> davecheney: behaviour of os/exec.Command has changed in 1.3
[04:41] <davecheney> axw: is that a question or a statement ?
[04:42] <axw> first is a question, second is a statement
[04:42] <davecheney> then yes, and ok
[04:42] <axw> the statement leads to my question, I'm backwards like that
[04:42] <axw> ok
[04:42] <davecheney> but others have hit this problem using released tools
[04:42] <axw> mmk, nfi why it would fail otherwise
[06:05] <jam> axw: LGTM on https://code.launchpad.net/~axwalk/juju-core/lp1293310-non-existent-hooks/+merge/211247
[06:06] <axw> jam: thanks
[08:11] <rogpeppe3> mornin' all
[08:13] <axw> morning rogpeppe3
[08:13] <rogpeppe3> axw: hiya
[08:14] <axw> rogpeppe3: I made the changes to https://codereview.appspot.com/75990043/, but thought I'd wait until you had a glance in case you weren't happy about the new package
[08:14] <rogpeppe3> axw: looking
[08:19] <rogpeppe3> axw: tiny thing: i wonder if the test should use /bin/sh rather than /bin/bash, as i think that might be what cloud-init is using.
[08:19] <rogpeppe3> axw: (i may be wrong there though)
[08:20] <axw> rogpeppe3: the synchronous bootstrap bit is always bash
[08:21] <rogpeppe3> axw: ok, and the DumpFileOnError code is only ever run in that context?
[08:21] <axw> rogpeppe3: and add-machine ssh, but that's also bash
[08:22] <rogpeppe3> axw: LGTM
[08:22] <axw> thanks
[08:39]  * jam => lunch
[08:56] <dimitern> wallyworld_, are you around?
[09:02] <jam> morning dimitern
[09:02] <dimitern> jam, morning
[09:02] <jam> dimitern: I missed you earlier
[09:02] <jam> rogpeppe3: 1:1?
[09:03] <rogpeppe3> jam: ah yes!
[09:03] <dimitern> jam, i'm sorry i trusted my nexus ubuntu touch to sound the alarm but it didn't and i read later that it's not implemented yet
[09:27] <wallyworld_> dimitern: hi
[09:27] <vladk> dimitern: hi
[09:36] <dimitern> hey vladk
[09:36] <dimitern> wallyworld_, I have a question about upgrades
[09:36] <wallyworld_> sure
[09:36] <dimitern> wallyworld_, we need to create some dirs with root permissions
[09:36] <wallyworld_> ok. where?
[09:36] <dimitern> wallyworld_, specifically, when upgrading rsyslog config and logdir
[09:37] <dimitern> wallyworld_, what's the best way to do that? i've seen provider/local calling exec.Command("sudo", "/bin/bash", "-s") and attaching stdout and stderr
[09:38] <wallyworld_> dimitern: the  machine agent runs as root, right?
[09:38] <dimitern> wallyworld_, or maybe generate a script and pass it on stdin (i.e. mkdir -p <logdir>)
[09:38] <dimitern> wallyworld_, so you're saying we should be able to create /var/log/juju-<namespace> in an upgrade step?
[09:38] <wallyworld_> i think so
[09:38] <dimitern> ok, i'll try that
[09:39] <wallyworld_> it's just an educated guess
[09:39] <wallyworld_> but would be the simplest i think
[09:39] <wallyworld_> cause an upgrade step is easy enough to write, and is only executed the once when needed to get to 1.18
[09:40] <dimitern> wallyworld_, yeah, i was testing my fix for bug 1291400 and found out some errors due to the logdir in var/log not getting created and hence rsyslog cannot create its certs there
[09:40] <_mup_> Bug #1291400: migrate 1.16 agent config to 1.18 properly (DataDir, Jobs, LogDir) <regression> <upgrade-juju> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1291400>
[09:42] <wallyworld_> dimitern: good luck, let me know if it works. my only question is if the owner of the dir would be root but i think so since machine agent runs as root
[09:42] <dimitern> wallyworld_, it has to be a specific user/group i think, i'll check
[09:43] <wallyworld_> maybe, not sure ottomh
[09:43]  * dimitern really hates how we don't chown local provider logs to the user so I don't have to do sudo less ...
[09:43] <wallyworld_> even so, the process will have the privileges to do it
[09:43] <wallyworld_> dimitern: i agree. file a bug :-)
[09:44] <dimitern> will do :)
[09:44] <wwitzel3> hola
[09:48] <rogpeppe3> hi all
[09:48] <rogpeppe3> voidspace: ping
[09:48] <wwitzel3> hey rogpeppe3
[09:49] <rogpeppe3> wwitzel3: yo!
[09:49] <wwitzel3> rogpeppe3: you have a good weekend?
[09:49] <rogpeppe3> wwitzel3: an excellent weekend, thanks
[09:49] <rogpeppe3> wwitzel3: we actually had some sun yesterday, and even better, managed to go out and take advantage of it
[09:50] <rogpeppe3> wwitzel3: you?
[09:50] <wwitzel3> rogpeppe3: that's great :)
[09:51] <wwitzel3> rogpeppe3: yeah, it was good, nothing exciting, but nice and relaxing
[09:53] <voidspace> rogpeppe3, morning
[09:53] <rogpeppe3> voidspace: hiya
[09:53] <voidspace> rogpeppe3, hi :-)
[09:53] <voidspace> rogpeppe3, sorry about Friday
[09:54] <rogpeppe3> voidspace: how's your configuration working now?
[09:54] <voidspace> rogpeppe3, well, mostly good
[09:54] <voidspace> rogpeppe3, :-)
[09:54] <voidspace> rogpeppe3, I can't get the USB monitor working with Ubuntu and I think I'm stuck on a maximum of three monitors
[09:54] <voidspace> rogpeppe3, but other than that, good
[09:55] <voidspace> rogpeppe3, I still need to move the drives to the new machine
[09:55] <rogpeppe3> voidspace: only three monitors, how can you survive? :-)
[09:55] <voidspace> heh
[09:55] <voidspace> it is hard
[09:55] <voidspace> anyway
[09:55] <voidspace> rogpeppe3, shall I pair with you today?
[09:55] <rogpeppe3> voidspace: i was just about to suggest that, yes
[09:55] <voidspace> great
[09:56] <rogpeppe3> voidspace: whenever you're ready
[09:56] <voidspace> rogpeppe3, I'm pretty much ready
[09:56] <voidspace> rogpeppe3, hangout for audio?
[09:57] <rogpeppe> voidspace: let's start with that
[09:58]  * rogpeppe tries to work out how to create a hangout
[09:58] <voidspace> rogpeppe: shall we use the juju-core one, or start our own?
[09:59] <rogpeppe> voidspace: let's just use the juju-core one
[09:59] <rogpeppe> voidspace: https://plus.google.com/hangouts/_/canonical.com/juju-core
[10:08] <mgz> jam: are you around?
[10:09] <jam> mgz: I am, I just need to do a quick "get some coffee/tea" run, and then I'll be happy to chat
[10:09] <jam> mumble or G+ ?
[10:09] <mgz> sure thing
[10:09] <mgz> lets try mumble
[10:16] <jam> mgz: no love....
[10:16] <jam> I'll try again
[10:17] <jam> mgz: so I'm guessing this is Dubai's anti-VOIP software going into effect.
[10:17] <mgz> we'll have to make a new hangout
[10:17] <mgz> cunning
[10:17] <jam> At least, it is what it sounds like when I try to use Skype-out
[10:17] <jam> mgz: there is a hangout associated with our 1:1 calendar event
[10:17] <jam> can you get to that?
[10:17] <mgz> is the canonical mumble server not over ssl?
[10:18] <mgz> lets see
[10:19] <rogpeppe> mgz: have you got the link to the instructions for reconfiguring the gobot, please?
[10:20] <jam> rogpeppe: https://lists.ubuntu.com/archives/juju-dev/2014-March/002182.html
[10:21] <rogpeppe> jam: thanks!
[10:21] <jam> rogpeppe: he's in G+ with me, and it's easier to paste via me, then copy it to IRC :)
[10:27] <rogpeppe> mgz: is there a reason that the verify_command in the tarmac configuration doesn't do a godeps -u ?
[10:28] <mgz> because it would also need go get -u
[10:28] <jam> rogpeppe: because half the time that would just break because you have to fetch first?
[10:28] <jam> and go get -u can overwrite the current branch that we are trying to test?
[10:28] <rogpeppe> jam: in that case, it *should* break, i think
[10:29] <rogpeppe> jam: otherwise we're testing with the wrong deps
[10:30] <rogpeppe> jam: currently i'm trying to help michael get his Instances aggregation branch in, and there's no easy way to do it
[11:06] <voidspace> rogpeppe, https://pastebin.canonical.com/106567/
[11:06] <rogpeppe> voidspace: yeah, just fixed it, i think
[11:08] <voidspace> rogpeppe, cool, thanks
[11:08] <voidspace> rogpeppe, nope :-)
[11:08] <rogpeppe> voidspace: indeed
[11:10] <mgz> now the hook borked
[11:16] <voidspace> rogpeppe, I'm grabbing coffee
[11:16] <rogpeppe> voidspace: k
[11:32] <perrito666> hi everyone
[11:40] <rogpeppe> fwereade: just encountered an interesting error with juju resolved
[11:41]  * fwereade peers nervously at rogpeppe
[11:41] <fwereade> hi again perrito666 :)
[11:41] <rogpeppe> fwereade: it's not too bad actually
[11:42] <rogpeppe> fwereade: i did "juju resolved", and the hook was still in error state
[11:42] <rogpeppe> fwereade: i did it again, and it said "ERROR cannot set resolved mode for unit "tarmac/0": already resolved"
[11:43] <rogpeppe> fwereade: but that was because a hook was still running
[11:44] <rogpeppe> fwereade: i wonder if the uniter should set the unit status out of error state the moment it starts to try rerunning the hook
[11:45] <fwereade> rogpeppe, it *was* designed that way but I'm trying to remember why
[11:45] <rogpeppe> fwereade: BTW the first time i did "juju resolved --retry"
[11:46] <rogpeppe> fwereade: it certainly felt a bit weird when it happened: "why can't i resolve this hook that's in error state?")
[11:46] <fwereade> rogpeppe, IIRC the idea is that we make sure we actually try to handle the resolved before we mark it ready to try again
[11:46] <rogpeppe> fwereade: i think that's the bit that's not so intuitive
[11:46] <voidspace> fwereade, I've been exchanging emails with  Erik Naslund :-)
[11:47] <fwereade> rogpeppe, the failure mode of setting resolved early is that we could not actually respond, but claim we did
[11:47] <rogpeppe> fwereade: because the user has no visibility into when hooks are actually running
[11:47] <fwereade> rogpeppe, I *think* that's worse
[11:47] <fwereade> voidspace, ah cool -- how's he doing?
[11:47] <fwereade> voidspace, say hi :)
[11:48] <voidspace> fwereade, I will do
[11:48] <voidspace> fwereade, he is using mock at his startup and sent an email saying thanks
[11:48] <fwereade> voidspace, excellent, I'm pretty sure I introduced him to it :)
[12:04] <ahasenack> ng
[12:05] <wwitzel3> natefinch: ping
[12:07] <natefinch> wwitzel3: howdy
[12:08] <wwitzel3> natefinch: quick hangout?
[12:08] <natefinch> sure
[12:10] <rogpeppe> x
[12:13] <bodie_> is there a standard for which version of ubuntu this should work best on?  davecheney says I'm still using the wrongodb, but this is definitely the one make install-dependencies set me up with
[12:14] <jam> bodie_: most testing is done on Precise, we tried to enable Trusty correctly, but then ended up regressing support elsewhere, so we have a patch in progress to restore proper support for Trusty's local provider
[12:15] <bodie_> makes sense
[12:15] <bodie_> is there anything I can do to help there?
[12:15] <bodie_> maybe I'll try the Vagrant setup in the meantime
[12:16] <axw> fwereade: heya. did you want to review https://codereview.appspot.com/70190050/ before Jesse lands it?
[12:16] <axw> it's a big 'un
[12:17] <axw> looks (nearly) good to me, but just checking if I should provision a LGTM on your review
[12:19] <axw> rogpeppe: yeah, I found that (resolved not changing error state immediately) a bit unintuitive too. some sort of feedback would be nice
[12:25] <fwereade> axw, if I haven't looked at it by eod I think we can go ahead
[12:25] <bodie_> just downloaded the 639MB vagrant image in almost exactly a minute... love this fios
[12:25] <axw> fwereade: okey dokey
[12:25] <fwereade> it's had several rounds and you were making smart comments last time I saw
[12:26] <fwereade> axw, you can represent me :)
[12:26] <axw> I'll do my best
[12:26] <fwereade> axw, btw I have been trying to assimilate all the awesome azure changes, couple of quick questions
[12:27] <fwereade> axw, 1 (possibly trivially working, probably deserves minor reflection) subordinates in fancy-azure mode
[12:27] <fwereade> axw, I suspect they will all work fine, but if you haven't considered them explicitly please do some thinking
[12:28] <axw> I haven't, and I think they'll work fine, but yes I will have a think about it
[12:28] <fwereade> axw, 2 (maybe more of a sticking point) can we change the startinstance params so that it's expressed in terms of instances to avoid, rather than leaking the concept of "principals" into environs?
[12:29] <fwereade> axw, I haven't figured out whether that screws everything up wrt AddMachine though
[12:30] <axw> sorry, net cut out
[12:31] <axw> fwereade: that makes sense for ec2, but not for azure I think?
[12:32] <fwereade> axw, my thought for azure was that we could just put the machine into the same cs/as as all supplied instances (in fancy-azure mode) or ignore (in placement mode)
[12:32] <axw> i.e. with AZ you want to avoid other machines in the AZ, but in azure you want to stick them all in the same Cloud Service
[12:32] <axw> true
[12:32] <fwereade> axw, yeah, but in azure we can do so iff the instances to avoid all match
[12:33] <fwereade> axw, otherwise we can't distribute for reliability but whether that's an issue depends on the setting I think
[12:33] <fwereade> axw, thank god we decided not to allow racing provisioners though
[12:33] <axw> :)
[12:33] <axw> indeed
[12:35] <fwereade> axw, there might be some horrible hole in my logic but I'd like us to try quite hard to keep state concepts out of environ implementations
[12:35] <fwereade> axw, the jobs/info dependencies make me grumpy but they don't feel too fundamental, this rather does
[12:36] <axw> fwereade: yeah, I'll have to have a think about how it'll work
[12:36] <fwereade> axw, tyvm
[12:36] <axw> It'll mean recording more information specifically for provisioning
[12:36] <fwereade> axw, expand please?
[12:37] <axw> fwereade: instances to avoid is determined by the provider policy, right? that needs to happen as we're doing AddMachine if we're going to support clean machine assignment for ec2
[12:38] <axw> then it needs to be picked back up by the provisioner to pass to StartInstance
[12:39] <fwereade> axw, I think we can just always assume instances-to-avoid == instances-running-a-unit-of-this-service
[12:39] <fwereade> axw, provider is free to best-fit that
[12:39] <fwereade> axw, and in the case of manual placement it's completely overridden
[12:40] <fwereade> axw, ah doh I see
[12:40] <fwereade> axw, or maybe not
[12:40] <axw> fwereade: I think that might work, need to let it marinate
[12:41] <fwereade> axw, it's more info we need at provisioning time, but I think we can just get away with a quick scan of the units on that machine -- if any -- and an API call to find out what other instances are also running those units
[12:41] <fwereade> axw, (prinicpals only, I think, but you already know that)
[12:42] <axw> yup. so basically what I was going to do, but hiding the principals from StartInstance
[12:42] <fwereade> axw, (also consider how/whether we can/should handle principals running on hosted machines -- I'd accept a TODO-figure-it-out there because it won't become a real concern until we start implementing zone distribution inside the other providers)
[12:42] <fwereade> axw, yeah, exactly
[12:43] <fwereade> axw, it's reimplementation but not fundamentally rearchitecting :)
[12:43] <axw> on hosted machines...?
[12:44] <axw> fwereade: what do you mean by "hosted machines"?
[12:44] <fwereade> axw, 2/lxc/3
[12:45] <axw> ah
[12:45] <axw> yep, I only thought about it in so far as it's not really supported in Azure :)
[12:48] <fwereade> axw, I'm not quite decided what the Right Thing is yet, in an ideal world we'd be supplying some sort of weighting information, i.e. it matters much more that we're not near instance X than Y, because the unit running on X is not well-distributed yet but the one on Y is
[12:48] <fwereade> axw, but that's off in best-is-enemy-of-good territory
[12:48] <fwereade> axw, so sticking with the simplest thing might be wise
[12:49] <fwereade> gaah need to do something quickly before meeting at 2, gtg
[12:49] <axw> fwereade: later, thanks for the chat
[13:23] <rogpeppe> voidspace: lp:~rogpeppe/juju-core/515-state-rename-api-addresses
[13:27] <voidspace> mgz, ping
[13:27] <voidspace> mgz, I'm starting again on a new machine
[13:28] <voidspace> mgz, so I need the bzr magic incantations to create the local branches
[13:28] <voidspace> mgz, and yet again I don't remember them :-)
[13:29] <voidspace> nor indeed where you put the instructions
[13:31] <mgz> voidspace: yeah, I said I should write them down
[13:32] <mgz> voidspace: `bzr switch -b trunk; bzr pull --remember lp:juju-core; bzr switch -b feature_branch`
[13:32] <voidspace> mgz, wonderful, thanks
[13:32] <voidspace> I will email those to myself
[13:32] <voidspace> it's just the initialisation I can't remember
[13:32] <voidspace> using it is fine :-)
[13:33] <voidspace> https://pastebin.canonical.com/106571/
[13:33] <voidspace> rogpeppe, ^^
[13:37] <wwitzel3> the logging calls in state/open.go where do those get logged to? logger.Infof
[13:38] <rogpeppe> wwitzel3: i'm not sure i understand the question
[13:38] <rogpeppe> wwitzel3: do you mean "where do the log messages end up?"
[13:39] <wwitzel3> there are logger.Infof calls in state/open.go, Open() where do the string values passed to Infof end up?
[13:41] <wwitzel3> what file would I tail to see them?
[13:45] <wwitzel3> cloud-init-output.log has them
[13:55] <bodie_> Is there a virtualbox / vagrant image specifically for core dev?
[13:55] <bodie_> i.e., Go installed, etc
[13:55] <rogpeppe> trivial method-renaming review anyone? https://codereview.appspot.com/76890043
[13:55] <bodie_> if not I'm going to host one
[13:55] <rogpeppe> wwitzel3: it depends on the context
[13:55] <rogpeppe> wwitzel3: is this when running a real environment?
[13:56]  * rogpeppe goes for lunch
[13:56] <wwitzel3> rogpeppe: I found it on the machine being bootstrapped in cloud-init-output.log
[14:00] <bodie_> does anyone know an eta on fixing the azure provider?
[14:01] <bodie_> er, rather the compat with the updated provider
[14:01] <bodie_> ... with the updated driver*
[14:01] <bodie_> sigh
[14:02] <mgz> bodie_: use godeps to flip back to the comaptible revision
[14:02] <bodie_> I was just using bzr
[14:02] <mgz> axw has some juju-core branches in progress but they're not yet landed
[14:02] <bodie_> is there a fancy simple go way?
[14:02] <mgz> and this is why we have godeps :0
[14:02] <mgz> rogpeppe: ^you got a potted simple way?
[14:03] <axw> godeps -u dependencies.tsv
[14:03] <bodie_> hehe.. potted.  fun working with people who have variations in their syntax ;)
[14:03] <bodie_> thanks axw
[14:03] <axw> nps
[14:04] <mgz> should be something like: `go get launchpad.net/godeps/...; go install launchpad.net/godeps/...; godeps -u dependencies.tsv`
[14:04] <mgz> from inside juju-core dir
[14:04] <axw> go install is not necessary if you do go get
[14:05] <bodie_> go install rebuilds, right?
[14:05] <bodie_> *double-checks note to self: "stop asking dumb questions"*
[14:07] <axw> bodie_: go install = go build and install the result to $GOPATH/(pkg|bin)
[14:10] <mgz> go get installs? that seems obnoxious, no wonder I generally avoid it
[14:12] <bodie_> heh
[14:12] <bodie_> I feel like the go tool is trying to do too much and not doing enough at the same time
[14:12] <bodie_> it should either do very little, or do things properly
[14:19] <natefinch> wwitzel3: ready to start up again?
[14:25] <natefinch> bodie_: the go tool does the right thing... we're doing slightly the wrong thing right now.   *usually* trunk just builds.  I actually think it's a mistake for us to ever have trunk not build.... but others may disagree.
[14:28] <bodie_> heh
[14:31] <jamespage> sinzui, good morning
[14:31] <sinzui> hi jamespage
[14:32] <jamespage> sinzui, so I was poking at 1.17.5 for the trusty upload this morning and wanted to make a time/risk decision that I wasn't in possession of the full facts to make
[14:32]  * sinzui nods
[14:32] <jamespage> sinzui, and that really boils down to when 1.18.x will be released; I don't really want to upload 1.17.5 (and revert the juju-mongodb change I put in for .4) if its only really going to be there for a few days
[14:33] <sinzui> jamespage, understood
[14:34] <wwitzel3> natefinch: yep, ready
[14:34] <sinzui> jamespage, My only angst relates to ppc64 and arm64.
[14:34] <jamespage> sinzui, in the context of 1.18?
[14:35] <sinzui> jamespage, I can ask utlemming to release streams.canonical.com, but without those arches among the tools, they won't be tested
[14:35] <sinzui> jamespage, but to your point...they need to know about juju-mongodb
[14:36] <jamespage> sinzui, right
[14:36] <sinzui> jamespage, I still think we can see 1.18.0 this week.
[14:36] <jamespage> sinzui, so will 1.18.0 use juju-mongodb across all archs if available? this is fairly critical for the MIR
[14:36] <sinzui> or I release again when I see juju-mongodb restored
[14:36]  * jamespage scrubs fairly from that sentence
[14:37] <sinzui> jamespage, yes, juju-db will be used when it is available
[14:37] <jamespage> sinzui, great
[14:37] <jamespage> sinzui, sorry - juju-db or juju-mongodb?
[14:37] <jamespage> (just to make sure we all know which one)
[14:38] <sinzui> jamespage, sorry. I tried to type less. I mean the latter
[14:38]  * jamespage breathes again
[14:38] <jamespage> I know that was discussed as a rename
[14:42] <bodie_> so I'm on the verge of wiping my workstation and installing 12.04 (using 14.04 now) -- should I wait for this change to go in and just use a VM for now or something?
[14:42] <bodie_> tests seem to be breaking
[14:42] <bodie_> gustavo said it's mongo
[14:42] <bodie_> and I am using juju-mongodb
[14:43] <natefinch> bodie_: 14.04 works fine.  I'm on 14.04 and so are several other devs.
[14:43] <natefinch> bodie_: does mongod --help have the ssl section in the help now?
[14:43] <bodie_> yeah
[14:44] <natefinch> bodie_: can you pastebin the failing tests?  I know you have before, but I want to see if there are differences now that you have the right mongo
[14:45] <bodie_> http://paste.ubuntu.com/7108482/
[14:45] <bodie_> feeling a tiny bit hopeless at this point, it's been 4 or 5 solid days of struggle
[14:45] <bodie_> mongod isn't in my path
[14:46] <bodie_> it's in /usr/lib/juju/bin/mongod
[14:46] <natefinch> bodie_: the code I believe right now expects mongod to be in /usr/bin/
[14:46] <bodie_> it's these tiny little gotchas that make this impossible for people to join in on
[14:47] <bodie_> ok, not impossible
[14:47] <bodie_> frustratingly unobvious
[14:47] <bodie_> this is what I got from using make install-dependencies
[14:47] <natefinch> bodie_: that actually is probably the problem.... we have a bug with the current code about the juju-mongodb.   I know it's terrible, but try moving mongod to /usr/bin
[14:48] <bodie_> can I just symlink it?
[14:48] <natefinch> bodie_: possibly, but you may need to actually remove the juju/bin one.... try symlinking first.
[14:48] <natefinch> bodie_: and yes, we need a better onboarding process, with better documentation etc.
[14:49] <bodie_> I keep thinking it's just me being an idiot
[14:49] <bodie_> was really hoping to get some code in by friday at least
[14:49] <bodie_> thanks for the assistance
[14:50] <bodie_> see, gustavo said it was the wrong mongo version
[14:50] <bodie_> I don't know how he got that from my test output
[14:50] <bodie_> do I just need to move mongod or all 5 binaries?
[14:51] <bodie_> 6*
[14:52] <natefinch> bodie_: I think just mongod, but I'm honestly not 100% sure.
[14:54] <natefinch> bodie_: That is very strange that you're getting all those panics, though.   While we have problems with the juju/bin mongod.... it doesn't usually panic like that
[14:57] <bodie_> you're on trusty, right?
[14:57] <natefinch> bodie_: yes
[14:58] <bodie_> so I don't really have any reason to expect switching distros to help
[14:58] <bodie_> i'll see if moving the binaries fixes this
[14:59] <bodie_> where did you get your copy of mongodb?
[14:59] <bodie_> James used make install-dependencies and it put his copy in /usr/bin, unlike mine
[15:00] <natefinch> bodie_: I built mine from source
[15:01] <bodie_> finally.  looks like the symlinks worked
[15:01] <bodie_> interesting, what version of gcc?
[15:01] <natefinch> bodie_: 4.8 I think lemme check
[15:01] <natefinch> bodie_: 4.8.2
[15:01] <bodie_> I wasn't able to get the source build to work, it kept spitting out warnings
[15:01] <bodie_> -_-
[15:02] <bodie_> forums looked like maybe a gcc 4.8 issue
[15:02] <bodie_> but I guess not
[15:02] <natefinch> bodie_: er.... hmm.... that's my version of gcc now, but I was on Raring when I built mongo
[15:02] <bodie_> ah
[15:02] <natefinch> bodie_: so possibly an older version of gcc, I don't know
[15:02] <bodie_> ok
[15:03] <rogpeppe> dimitern, natefinch, mgz, fwereade: trivial review please? https://codereview.appspot.com/76890043/
[15:03] <bodie_> aaaand 107,772 test errors
[15:05] <mgz> rogpeppe: wwitzel3 got there already
[15:05] <rogpeppe> mgz: cool
[15:06] <dimitern> rogpeppe, reviewed
[15:06] <rogpeppe> dimitern: thanks
[15:06] <rogpeppe> wwitzel3: thanks to you too - i just hadn't refreshed the page to see your review :-)
[15:07] <rogpeppe> dimitern: i don't understand your comment
[15:08] <rogpeppe> dimitern: the API hasn't change (and hopefully won't change)
[15:08] <rogpeppe> changed
[15:09] <dimitern> rogpeppe, ah, sorry - the public/agent facing api is still the same
[15:09] <rogpeppe> dimitern: yeah
[15:09] <dimitern> rogpeppe, then ignore me ;)
[15:09] <rogpeppe> dimitern: duly ignored
[15:11] <TheMue> adeuring: started testing but have to update my 3rd party packages first
[15:12] <TheMue> adeuring: could you paste me your error(s)?
[15:13] <adeuring> TheMue: http://paste.ubuntu.com/7108636/
[15:14] <TheMue> adeuring: hehe, yep, bingo
[15:14] <TheMue> adeuring: exactly the same here
[15:15] <adeuring> TheMue: the last error is from r 2428. Reverting to r2427 fixes that one
[15:15] <adeuring> but the other two error messages remain...
[15:15] <TheMue> adeuring: that's bad :/
[15:16] <mgz> adeuring: you need the right rev of gwacl, not top
[15:16] <mgz> *tip
[15:17] <mgz> and you want the ratelimiter package
[15:17] <mgz> see the log ^ for godeps tips
[15:17] <adeuring> mgz: ahhh.. thanks, that fixed it!
[15:23] <bodie_> natefinch... any thoughts?  http://paste.ubuntu.com/7108679/
[15:23] <bodie_> I have very little idea what could be causing this
[15:25] <natefinch> bodie_: that's the test that fails if mongod exists in /usr/lib/juju/bin :)
[15:26] <natefinch> bodie_: rename or move mongod from that directory and it'll pass.
[15:26] <natefinch> bodie_: it's a bug in the code & tests
[15:27] <natefinch> bodie_: we half-added a feature to use mongo from that directory
[15:27] <natefinch> bodie_: (where we = me)
[15:34] <natefinch> rogpeppe, mgz: is there a technical reason why cloud-init-output.log isn't mirrored back to the client when bootstrapping with --debug?  That would make my life so much easier
[15:34] <rogpeppe> natefinch: axw is the one to ask
[15:34] <rogpeppe> natefinch: i think it now sends back the output if the bootstrap fails
[15:38] <mgz> natefinch: it's because bootstrap is still in a funny place
[15:38] <mgz> were I redoing from scratch, I'd just have --debug echo/tail/poll the console-log nova api, and similar for local/manual
[15:40] <mgz> currently we have a funny mix of cloud-init parts and ssh-in-and-do-things parts that have disparate logging
[15:40] <bodie_> oy.  ok.  thanks natefinch
[15:40] <natefinch> mgz: probably why some of our errors come back as "rc: 1"
[15:41] <mgz> yeah, those are particularly bad
[15:41] <mgz> because the stderr gets dropped
[15:41] <mgz> unlike with cloud-init
[15:42] <natefinch> mgz: I actually tried fixing that, and ended up with stderr being "return code 1"  or something along those lines equally useless
[15:43] <mgz> :)
[15:43] <mgz> too many lovely layers
[15:47] <stokachu> is there any documentation on using kvm as a container?
[15:48] <stokachu> the code looks like it supports it but I couldn't find any documentation
[15:49] <bodie_> natefinch, can you point me to the bit that tries to use /usr/lib/juju/bin ?  would be nice if I could take a crack at fixing something
[15:49] <natefinch> stokachu: there's a bug about us needing to add documentation about it
[15:49] <stokachu> ok cool, any notes lying around?
[15:50] <bodie_> I guess /agent/mongo/mongo.go
[15:52] <natefinch> bodie_: yeah... I think the problem is that the test is using the MongoPath method, and the MongoUpstartService function is not.  we hardcoded MongoUpstartService but forgot to fix the test.  Really we should hardcode MongoPath for now, and put MongoUpstartService back to using MongoPath.
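The half-finished feature natefinch describes boils down to a path probe: prefer the juju-mongodb binary in /usr/lib/juju/bin when it exists, otherwise fall back to the system mongod. A minimal sketch of that logic, assuming hypothetical names (`mongodPath` and the injected `exists` check are illustrative, not juju-core's actual MongoPath API):

```go
package main

import (
	"fmt"
	"os"
)

// mongodPath returns the mongod binary the agent should run.
// The exists check is injected so the policy is testable without
// touching the real filesystem.
func mongodPath(exists func(string) bool) string {
	const jujuMongod = "/usr/lib/juju/bin/mongod"
	if exists(jujuMongod) {
		// juju-mongodb package is installed; prefer its binary.
		return jujuMongod
	}
	// Fall back to the distro mongodb-server binary.
	return "/usr/bin/mongod"
}

func main() {
	statExists := func(p string) bool {
		_, err := os.Stat(p)
		return err == nil
	}
	fmt.Println("would use:", mongodPath(statExists))
}
```

The bug bodie_ hit is exactly the split natefinch names: if the upstart-script writer hardcodes one path while the test consults a probe like this, the two disagree whenever /usr/lib/juju/bin/mongod is present.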
[15:52] <natefinch> stokachu: juju help add-machine, and just replace lxc with kvm
[15:52] <stokachu> natefinch: ah ok thanks
[15:52] <bodie_> I see
[15:52] <stokachu> and use manual provider right?
[15:53] <perrito666> hey, make on master yells errors in lxc.go and clonetemplate.go
[15:53] <natefinch> stokachu: that's orthogonal
[15:53] <perrito666> is this expected or I have something broken?
[15:53] <natefinch> perrito666: I'm not sure anyone actually uses make
[15:54] <natefinch> stokachu: do juju help deploy to see how to deploy to a specific machine (including new container)
[15:54] <stokachu> ok thanks will do
[16:15] <rogpeppe> voidspace: can you hear me?
[16:15] <rogpeppe> voidspace: or are you still coffeeing?
[16:22] <voidspace> rogpeppe, https://lastpass.com/
[16:23] <voidspace> rogpeppe, vault
[16:23] <voidspace> rogpeppe, that's the lastpass terminology anyway
[16:25] <rogpeppe> dimitern, fwereade, natefinch, mgz: small review (adding State.SetAPIAddresses)? https://codereview.appspot.com/76950043
[16:33] <stokachu> i <3 juju+kvm
[16:38] <natefinch> voidspace, rogpeppe: I love lastpass.  I don't know how anyone actually uses the web without it (or something like it)
[16:38] <rogpeppe> natefinch: yeah, i think i need to use it
[16:38] <rick_h_> natefinch: +1 got the wife using it as well
[16:38] <voidspace> natefinch, I've just migrated from 1password as part of the move away from OS X
[16:39] <voidspace> natefinch, so far so good :-)
[16:39] <natefinch> it makes it trivial to use a different very strong password with every single service, which is really the only way to be secure.
[16:40] <natefinch> voidspace: I've been using it for a few years now.  No complaints.
[16:44] <rogpeppe> two branches out for review, please: https://codereview.appspot.com/76950043, https://codereview.appspot.com/76870044
[16:44] <rogpeppe> dimitern, natefinch, wwitzel3, mgz, fwereade: ^
[16:44] <natefinch> voidspace: one thing I like is that if you pay for it, you get the mobile client and access to the dolphin browser plugin (if you're on android, not sure if they have it on iOS)... the dolphin plugin works just like the desktop plugin, which is handy for browsing on your phone
[16:44] <natefinch> rogpeppe: I'll grab at least one
[16:44] <voidspace> natefinch, I've been using 1password for years which is very similar, except with syncing via dropbox instead of the passwords on their servers...
[16:45] <rogpeppe> natefinch: the 1st is a prereq of the 2nd
[16:45] <voidspace> natefinch, you can't have browser plugins on iOS unfortunately :-/
[16:45] <natefinch> voidspace: pretty sure lastpass is encrypted locally before you send your data to them, btw
[16:45] <natefinch> voidspace: I have a suggestion you probably won't like ;)
[16:45] <voidspace> natefinch, you buy me a new phone and I'll use it :-)
[16:45] <rick_h_> natefinch: works in mobile firefox. I use it in that on android
[16:45] <natefinch> voidspace: rofl
[16:49] <natefinch> voidspace: that's cool.  I'll have to try it and see how it compares to dolphin
[16:49] <natefinch> er rick_h_ ^
[16:53] <natefinch> rogpeppe: there you go
[16:54] <rogpeppe> natefinch: thanks
[16:55] <rogpeppe> natefinch: the second one is even simpler :-)
[16:55] <natefinch> rogpeppe: I did both ;)
[16:55] <rogpeppe> natefinch: brilliant!
[17:20] <voidspace> rogpeppe, back
[17:24] <voidspace> rogpeppe, I got dumped out of the hangout - trying to join again
[17:24] <rogpeppe> voidspace: k
[17:25] <voidspace> rogpeppe, hmm... it won't let me back in
[17:26] <rogpeppe> voidspace: what error do you see?
[17:27] <voidspace> rogpeppe, it takes me to the error page and says "there was an error", with a "Start a new hangout" button
[17:27] <rogpeppe> voidspace: awesome
[17:27] <voidspace> rogpeppe, now it's saying "It's taking too long to connect you to this video call. Try again in a few minutes"
[17:28] <voidspace> rogpeppe, which at least is progress I think...
[17:28] <rogpeppe> voidspace: i could try leaving and joining again
[17:28] <wwitzel3> Google is having some issues with hangouts atm
[17:28] <voidspace> that would explain it...
[17:28] <rogpeppe> voidspace: it still thinks you're connected, BTW
[17:29] <rogpeppe> voidspace: ha, i also cannot rejoin
[17:35] <voidspace> rogpeppe, creating a new one fails. wwitzel3 would appear to be correct :-)
[17:35] <rogpeppe> voidspace: yup
[17:37] <voidspace> rogpeppe, I'm going to work on my vim setup as that's pretty essential
[17:37] <voidspace> rogpeppe, I have half an hour to EOD
[17:37] <rogpeppe> voidspace: sgtm
[17:37] <rogpeppe> voidspace: we've done alright today i reckon
[17:37] <voidspace> rogpeppe, Mondays I have to leave promptly at 6pm due to krav maga, most other days I can be more flexible
[17:37] <voidspace> rogpeppe, yep, been fun
[17:38] <rogpeppe> voidspace: branch just merged
[17:38] <rogpeppe> voidspace: one still to go
[17:38] <voidspace> I saw
[17:38] <voidspace> great
[17:38] <voidspace> yep
[17:38] <rogpeppe> dimitern: ping
[17:39] <rogpeppe> fwereade: ^
[17:44] <natefinch> TIL not to mess with voidspace.... unless there's some other krav maga I don't know about, that's like, a UK form of knitting
[17:49] <voidspace> natefinch, hehe
[17:49] <voidspace> natefinch, I've only been doing it for a couple of months
[17:49] <voidspace> natefinch, I really enjoy it but still at the "complete beginner" stage
[17:51] <voidspace> natefinch, I still advise not messing with me though...
[17:56] <perrito666> natefinch: wwitzel3 hi, just wanted to say hello, you seem to be my most overlapped co-teammers :)
[17:56] <voidspace> right - EOD
[17:56] <voidspace> 'night all
[17:56] <bodie_> laters
[17:56] <voidspace> see you tomorrow
[17:56] <natefinch> voidspace: night!
[17:57] <natefinch> perrito666: hi, yes, it's nice not to be the only one here in the afternoons :)
[17:58] <wwitzel3> perrito666: hi :)
[18:08] <natefinch> rogpeppe, mgz: wwitzel3 and I were wondering if there were things in state.Open we should *not* be doing in an HA environment... like if you're not the first bootstrap node to come up.   It's a little hard to tell what might cause problems if it were run more than once.
[18:09] <rogpeppe> natefinch: i've certainly be *trying* to make everything that happens in state.Open applicable in a HA environment
[18:09] <rogpeppe> s/be /been /
[18:10] <natefinch> rogpeppe: we're having a problem where bootstrap seems to finish successfully, but then juju status gets an error ERROR state/api: websocket.Dial wss://ec2-54-198-131-15.compute-1.amazonaws.com:17070/: dial tcp 54.198.131.15:17070: connection refused
[18:11] <rogpeppe> natefinch: is this on tip?
[18:11] <natefinch> rogpeppe: tip-ish.  I synced my branch last thursday, I believe
[18:11] <rogpeppe> natefinch: what do you see in the logs?
[18:12] <rogpeppe> natefinch: (it sounds like the machine agent isn't coming up properly)
[18:12] <natefinch> rogpeppe: I'm bringing up a new environment now, I'll check and see if I see anything noteworthy in the logs.
[18:19] <natefinch> rogpeppe: ahh, hmm, looks like we're getting a mongo error when we try to read the replsetconfig while opening state (we do that to know if we need to call initiate)
[18:20] <rogpeppe> natefinch: sounds right
[18:20] <rogpeppe> natefinch: i have some code that does the initial replicaset setup for state, if it might be helpful for you
[18:21] <natefinch> rogpeppe: we actually just were working on that today.  Right now our code does it in state.Open if the replset isn't already initiated
[18:22] <rogpeppe> natefinch: i'm not sure that's a great idea
[18:22] <rogpeppe> natefinch: it seems to take quite a long time after setting the initial (1 member!) replica set configuration until you can actually use the mongo again
[18:23] <rogpeppe> natefinch: i have no idea why that should be
[18:23] <rogpeppe> natefinch: i would prefer to keep the replicaset logic out of the state package
[18:24] <natefinch> rogpeppe: I was just putting it there because there's 40 lines of setup to dial mongo, which we have to do anyway
[18:27] <rogpeppe> natefinch: yeah, i see that, but i wonder if there's some other possibility
[18:28] <natefinch> rogpeppe: I'm sure we can factor out the 40 lines so we don't have to write it multiple times.  And re-dialing isn't a big deal.
[18:28] <rogpeppe> natefinch: well, we'll have to redial anyway
[18:29] <natefinch> rogpeppe: right
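The "initiate only if the replica set has no config yet" behaviour natefinch describes can be sketched as below. This is a hedged illustration, not juju-core's code: `replicaSetter` stands in for an *mgo.Session and the function/interface names are invented for the example.

```go
package main

import "fmt"

// replicaSetter is a stand-in for the subset of a mongo session
// the bootstrap path needs: read the current replica set config,
// and initiate a new (single-member) set.
type replicaSetter interface {
	CurrentConfig() (configured bool, err error)
	Initiate(address string) error
}

// ensureReplicaSet initiates the replica set only when no config
// exists yet, so running it on every state.Open is idempotent.
func ensureReplicaSet(s replicaSetter, addr string) error {
	configured, err := s.CurrentConfig()
	if err != nil {
		return fmt.Errorf("cannot read replica set config: %v", err)
	}
	if configured {
		// An earlier run (or another state server) already did it.
		return nil
	}
	return s.Initiate(addr)
}

// fakeSession lets the sketch run without a real mongod.
type fakeSession struct{ configured, initiated bool }

func (f *fakeSession) CurrentConfig() (bool, error) { return f.configured, nil }
func (f *fakeSession) Initiate(string) error        { f.initiated = true; return nil }

func main() {
	s := &fakeSession{}
	if err := ensureReplicaSet(s, "localhost:37017"); err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("initiated:", s.initiated)
}
```

rogpeppe's objection still applies regardless of where this lives: after initiating even a one-member set, mongo can take a while before it is usable again, which is an argument for keeping the call out of the common state.Open path.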
[18:29] <bodie_> This line should read juju-mongodb, right?  http://paste.ubuntu.com/7109636/
[18:29] <bodie_> It currently says mongodb-server
[18:29] <bodie_> which I don't think will come from the ppa
[18:29] <rogpeppe> natefinch: i can think of two possibilities atm
[18:29] <rogpeppe> natefinch: 1) add a method to State that returns the mongo DialInfo that's appropriate for dialling the state's mongo server.
[18:30] <rogpeppe> natefinch: 2) factor out the dial code into another package and make state.Open call it
[18:31] <natefinch> rogpeppe: seems like the mongo package we already have would be a reasonable place to put the refactored method.  Getting stuff out of state seems like a good idea.
[18:31] <rogpeppe> natefinch: 3) factor the dial code out of state entirely and pass a mongo session into state.Open and state.Initialize
[18:32] <natefinch> rogpeppe: 2 or 3 seem fine, I don't really have an opinion as to which is better.  Probably passing the session into state is more flexible.
[18:33] <natefinch> (so I guess I do have an opinion ;)
[18:33] <natefinch> no strong opinion :)
[18:33] <rogpeppe> natefinch: the first option is least work
[18:33] <rogpeppe> natefinch: the third option is the most work
[18:34] <natefinch> rogpeppe: the difference between 2 and 3 seems small. though I don't know about the tests.  Mostly it seems like cut and paste.
[18:34] <natefinch> (aside from tests)
[18:35] <rogpeppe> natefinch: yeah, 3 isn't actually so bad - there are only 33 calls to state.Open in the code
[18:35] <natefinch> heh
[18:35] <rogpeppe> natefinch: i think it's my preferred option actually
[18:37] <rogpeppe> natefinch: but it is actually quite a big change, now i think about it
[18:37] <rogpeppe> natefinch: we'd need to move state.Info out of state
[18:38] <rogpeppe> natefinch: it might all work out nicely though
[18:40] <rogpeppe> natefinch: i would definitely run the idea past fwereade though, as it is definitely making the state abstraction a more leaky
[18:40] <rogpeppe> s/a more/more/
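rogpeppe's option 3 (factor dialling out of state entirely and hand a ready session to Open) is a plain dependency-injection move. A minimal sketch under stated assumptions: `Session` and `State` here are stand-in types, not the real juju-core or mgo ones; the point is the shape of the API, in which the caller owns connection setup and redial policy.

```go
package main

import "fmt"

// Session stands in for an already-dialled mongo session.
type Session interface {
	Ping() error
}

// State stands in for the state package's connection type.
type State struct {
	session Session
}

// Open no longer dials mongo itself; it just validates and wraps
// the session the caller provides. Initialize would take the same
// parameter.
func Open(s Session) (*State, error) {
	if err := s.Ping(); err != nil {
		return nil, fmt.Errorf("session not usable: %v", err)
	}
	return &State{session: s}, nil
}

// fakeSession makes the sketch runnable without mongo.
type fakeSession struct{}

func (fakeSession) Ping() error { return nil }

func main() {
	st, err := Open(fakeSession{})
	fmt.Println(st != nil, err)
}
```

As noted above, the cost is that the connection info (the state.Info type) has to move out of the state package, and all 33 callers of state.Open change, which is why falling back to option 1 to get HA moving is the pragmatic call.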
[18:42] <wwitzel3> natefinch: done for the day, but I will still be here on and off
[18:43] <wwitzel3> natefinch: if you make any break throughs make sure you push it to your branch so I can check it out in the morning
[19:05] <natefinch> wwitzel3: will do.
[19:06] <natefinch> rogpeppe: I'll mention it to fwereade.  Probably, to get things going, the best idea right now is to do #1.  Bringing on extra work, even if it's refactoring things a little more nicely, is probably not the best idea.
[19:06] <rogpeppe> natefinch: agreed
[19:12] <perrito666> hey, I am on my way out, see you all tomorrow :)
[19:12] <natefinch> perrito666: see you tomorrow :)
[19:26] <thumper> o/
[19:30] <natefinch> o/ thumper
[19:50] <hazmat> thumper, so with aufs local lxc.. is that optional?
[19:50] <thumper> hazmat: it is about to be
[19:50] <thumper> hazmat: I have a branch that allows configuration
[19:51] <hazmat> thumper, awesome
[19:51] <thumper> spent yesterday writing tests for it mostly
[19:51] <thumper> hazmat: it was on all the time (other than btrfs)
[19:52] <hazmat> yeah.. twould be nice as constraint..
[19:52] <hazmat> but env config is an ok fallback
[19:52] <hazmat> thumper, cause next would be aa-unconfined profile as container/workload constraint
[19:52] <hazmat> re ostack on local
[19:53] <hazmat> all of which would come back to supporting provider specific constraints
[19:53]  * thumper nod
[19:53] <thumper> s
[20:01] <rogpeppe> right, i'm done
[20:01] <rogpeppe> g'night all
[21:05] <natefinch> EOD for me.  Night all.
[21:06] <marcoceppi> thumper: did fast lxc land in 1.17.5?
[21:06] <thumper> marcoceppi: yes, but it seems there may be permission issues
[21:06] <thumper> looking into it
[21:07] <marcoceppi> thumper: well if there's something i have to do, like modify a path or whatever, I'm down for that, I just want fast lxc while writing this charm
[21:07] <thumper> yes, it is there
[21:13] <thumper> mwhudson: ping
[21:13] <thumper> mwhudson: maybe you are flying...
[22:21]  * thumper waits for the merge...
[22:32] <thumper> phew, third time lucky
[22:40] <marcoceppi> thumper: so, after the first time I deploy, each deployment (even between bootstraps) should be fast?
[22:43] <marcoceppi> thumper: oh, actually, I got an error executing lxc-clone
[22:43] <marcoceppi> thumper: ah, juju-local needs to depend on aufs-tools
[22:44] <thumper> ah
[22:44] <marcoceppi> thumper: :\ http://paste.ubuntu.com/7110975/
[22:44] <thumper> write that shit down :)
[22:44] <marcoceppi> after install aufs-tools
[22:45] <thumper> hmm...
[22:46] <marcoceppi> thumper: oh, I'll def file a bug as soon as I track down the issue
[22:46] <thumper> coolio
[22:46] <marcoceppi> s/I/you
[22:50] <thumper-gym> heh
[22:56] <marcoceppi> thumper-gym: yeah, aufs-tools didn't fix it and I'm not sure where else to look. I'm guessing the lxc-container-aufs option is in trunk and not released?
[23:34] <mwhudson> thumper-gym: hey, i am at kingsford smith airport so am happy for you to try to stop me getting bored out of my mind :-)
[23:40] <mwhudson> (that said the internet is pretty rubbish, so i might not be very good conversation)