[00:23] <wwitzel3> perrito666: you still around?
[03:48] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1353239
[05:19] <davecheney> menn0_: i'm going to try again
[05:19] <davecheney> but this time run my mongo in the foreground
[05:19] <menn0_> davecheney: good idea
[05:20] <menn0_> I'm getting closer to finding the culprit rev
[05:21] <davecheney> menn0_: i'm not sure it's our fault
[05:21] <davecheney> mongo appears to be shitting itself
[05:21] <davecheney> we're still using 2.4.x on precise
[05:21] <menn0_> I think it's triggered by the way we're setting up mongo though
[05:22] <davecheney> sure
[05:22] <davecheney> no argument there
[05:22] <menn0_> whether what we're doing is reasonable or not is another matter
[05:22] <davecheney> but having to work around fragile software does not lead to robust systems
[05:23] <menn0_> sure but let's narrow this down first before jumping to conclusions :)
[05:34] <davecheney> reasons to hate upstart number 1<<72
[05:34] <davecheney> ubuntu@ip-10-251-35-4:~$ service juju-db
[05:34] <davecheney> juju-db: unrecognized service
[05:34] <davecheney> ubuntu@ip-10-251-35-4:~$ service juju-db status
[05:34] <davecheney> juju-db start/running, process 10783
[05:37] <menn0> davecheney: "service" isn't upstart is it? You want: "status <service-name>"
[05:38] <davecheney> why does the 2 arg form not work but the 3 arg version does?
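A sketch of the three invocation forms behind the confusion above, with the output seen earlier in this session; the explanation in the comments is a best guess at the service(8) wrapper's dispatch logic, not something stated in the log:

```shell
# Without an action argument, the service(8) wrapper only looks for a
# sysvinit script in /etc/init.d/ -- juju-db has none, hence the error:
service juju-db            # -> juju-db: unrecognized service
# With a recognized action, the wrapper also checks upstart jobs under
# /etc/init/ and forwards the request to upstart:
service juju-db status     # -> juju-db start/running, process 10783
status juju-db             # native upstart query, equivalent result
```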
[05:39] <davecheney> menn0: i'm seeing the same mongo connection reauthenticating itself over and over again
[05:39] <menn0> davecheney: yep I think I see the same
[05:39] <davecheney> that sounds wrong
[05:40]  * davecheney logs bug
[05:41] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1353275
[05:46] <davecheney> hmmm, Wed Aug  6 05:45:38.641 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
[05:46] <davecheney> Wed Aug  6 05:45:38.641 [rsMgr] replSet info electSelf 1
[05:46] <davecheney> Wed Aug  6 05:45:39.574 [conn2] end connection 127.0.0.1:32995 (1 connection now open)
[05:53] <davecheney> menn0: even when mongo doesn't shit itself
[05:53] <davecheney> the environment is still unusable
[05:53] <davecheney> the api server is offline
[05:53] <menn0> yes, I've seen that
[05:53] <menn0> in fact I have an env doing exactly that now
[05:54] <menn0> I'm beginning to think this is culprit:
[05:54] <menn0> commit 62e172632c3e9d8496805ed5223f9f4acc28986a
[05:54] <menn0> Merge: 8275fa4 ce0840e
[05:54] <menn0> Author: Juju bot <jujubot@users.noreply.github.com>
[05:54] <menn0> Date:   Thu Jul 31 12:53:28 2014 +0100
[05:54] <menn0>     Merge pull request #416 from axw/mongo-journalenabled
[05:54] <menn0>     
[05:54] <menn0>     Only set Safe.J (with mgo.SetSafe) if journaling is enabled
[05:54] <menn0>     
[05:54] <menn0>     Mongo 2.6 errors if you attempt to set Safe.J when journaling is disabled. We introduce a new fun
[05:55] <davecheney> menn0: if you want to push the revert button
[05:55] <davecheney> i'll LGTM that
[05:55] <davecheney> i think it's time for some drastic action
[05:55] <menn0> I'm going to try a local build first without that rev and see what happens
[05:56] <davecheney> oh look, https://bugs.launchpad.net/mgo/+bug/1340275
[06:01] <menn0_> davecheney: interesting...
[06:03] <davecheney> menn0_: more information
[06:03] <davecheney> machines 1, and 2
[06:03] <davecheney> the new replicas
[06:03] <davecheney> never start
[06:03] <davecheney> sorry
[06:03] <davecheney> the machine gets through cloud init
[06:03] <davecheney> but they can never connect to the original api server
[06:04] <davecheney> so they can never get their configuration and find they are running a state server job
[06:06]  * menn0_ nods
[06:06] <menn0_> I'm not sure it always happens that way but I have seen something like that
[06:06] <menn0_> I'm trying now with the Safe.J change removed
[06:16] <menn0_> davecheney: how much longer are you going to be around
[06:16] <menn0_> ?
[06:16] <menn0_> I need to go eat with the family
[06:19] <davecheney> menn0_: i'm out pretty much now
[06:19] <davecheney> need to get into the city for that meetup
[06:19] <menn0_> ok
[06:19] <menn0_> davecheney: that ensure-avail just finished and 3 state servers are up and "started"
[06:19] <menn0_> things aren't happy in the logs but removing that revision seems to have helped the immediate issue
[06:20] <menn0_> i'll send an email a bit later on
[06:20] <davecheney> menn0_: what do you think ?
[06:20] <davecheney> do you want to propose it ?
[06:20] <menn0_> I think I'll email people and wait for some feedback
[06:20] <davecheney> ok
[06:22] <menn0_> davecheney: update... so things have now settled down post "juju ensure-availability" and the env looks healthy
[06:22] <menn0_> I think it's probably that rev
[06:26] <davecheney> menn0_: my vote is to roll that rev back
[06:26] <davecheney> you have my LGTM if you want to land that change
[06:42] <voidspace> morning all
[06:48] <menn0_> voidspace: mornin'
[06:48]  * menn0_ is about to upgrade his router firmware to see if it helps his flaky link
[06:52] <voidspace> menn0_: morning :-)
[06:55] <dimitern> morning
[06:55] <voidspace> dimitern: morning
[07:18] <voidspace> m1k43l
[07:33] <voidspace> dimitern: ping
[07:33] <voidspace> dimitern: if a machine is "pending" (instance-id and agent-state for the unit) how do I tell what state it's actually in
[07:33] <voidspace> i.e. what it's waiting for / where it has got to
[07:34] <voidspace> dimitern: this is local provider, and I suspect it's downloading the lxc image
[07:35] <voidspace> but I'd like to *know*
[07:35] <dimitern> in a call, will get back to you sorry
[07:35] <voidspace> there's nothing of use in the logs
[07:35] <voidspace> dimitern: sure
[07:35] <voidspace> np
[07:35] <voidspace> will go make coffee
[07:35] <voidspace> maybe it just needs time
[07:35] <dimitern> 10x :)
[07:55] <dimitern> voidspace, i'm back; so to answer your question - "pending" means the machine agent hasn't yet started (or at least hasn't logged into the api to start the agent alive presence pinger)
[07:56] <voidspace> dimitern: right, and I suspect that the vm (in this case an lxc container is not up)
[07:56] <voidspace> dimitern: how do I *tell*
[07:56] <voidspace> dimitern: it's still pending
[07:56] <dimitern> voidspace, ah, debugging why an lxc container is not starting is fun :)
[07:56] <voidspace> is it even possible...
[07:57] <dimitern> voidspace, i'd try a couple of things: 1) set logging-config in env.yaml for that environment to <root>=DEBUG
[07:57] <voidspace> ok, cool
[07:57] <dimitern> voidspace, oops sorry, <root>=TRACE
[07:57] <voidspace> good start
[07:57] <voidspace> right
[07:57] <dimitern> voidspace, trace logs for all lxc ops should be in the MA log
[07:58] <jam1> dimitern: I rejoined juju-networking
[07:58] <dimitern> voidspace, then, you can check $ sudo lxc-ls --fancy on the machine to see if lxc is running
[07:58] <voidspace> so machine-1.log
[07:58] <jam1> voidspace: I think if you find things like "downloading lxc images" aren't in the logs, but are there under TRACE, we need to bump those up to DEBUG
[07:58] <jam1> at least, if not INFO
[07:58] <voidspace> ah, I was using lxc-ls but without sudo
[07:58] <dimitern> voidspace, and in /var/lib/lxc and /var/log/lxc (as root) you can find useful logs sometimes
[07:58] <jam1> something that might take 10 min should be at INFO level, certainly
[07:59] <jam1> though I think LXC itself downloads images outside of our control
[07:59] <voidspace> heh, lots of fun
[07:59] <voidspace> I *think* that's where it's stuck - but hard to tell
[07:59] <voidspace> ah, cool
[07:59] <voidspace> so lxc-ls --fancy
[08:00] <voidspace> tells me that the precise-template exists (which is how I was able to start a precise vm)
[08:00] <voidspace> but there's no trusty template
[08:00] <voidspace> so I assume (hope) that's still downloading
[08:00] <voidspace> precious little network activity though
[08:00] <voidspace> I will restart with trace logging
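The debugging steps dimitern and jam1 suggest above, collected as commands. This is a sketch for the 1.x local provider: the `set-env` spelling and the machine log path are assumptions, so adjust for your setup:

```shell
# 1) Crank logging up to TRACE for everything (or set logging-config
#    for the environment in environments.yaml before bootstrapping):
juju set-env logging-config="<root>=TRACE"
# 2) Check container and template state -- sudo is needed to see it all:
sudo lxc-ls --fancy
# 3) lxc's own configs and logs sometimes hold the answer:
sudo ls /var/lib/lxc /var/log/lxc
# 4) Watch the machine agent log for the lxc TRACE output
#    (path assumed for the 1.x local provider):
tail -f ~/.juju/local/log/machine-1.log
```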
[08:15] <TheMue> morning
[08:18] <voidspace> TheMue: morning
[08:19] <TheMue> voidspace: just seen you’re playing with lxc
[08:19] <voidspace> TheMue: heh
[08:19] <voidspace> TheMue: unavoidable, I'm working with local provider :-/
[08:19] <TheMue> voidspace: hehe
[08:20] <TheMue> voidspace: I’m in contact with Serge and Stéphane about IPv6 and LXC
[08:20] <voidspace> ah, cool
[08:39] <jam1> morning TheMue, welcome back
[08:39] <jam1> voidspace:
[08:39] <jam1> if you're restarting in the middle
[08:39] <jam1> then likely there are filesystem locks that are stale now
[08:46] <TheMue> jam1: morning to Nuremberg
[08:46]  * TheMue has to fix his client to do a better signaling
[09:00] <voidspace> jam1: so how would I check / resolve?
[09:00] <voidspace> jam1: a reboot in between should be sufficient to release locks, right
[09:00] <dimitern> voidspace, what are you seeing?
[09:01] <jam1> voidspace: no
[09:01] <voidspace> dimitern: machine never starts, no trusty template downloaded
[09:01] <jam1> voidspace: there is a directory that gets created as a lock, that persists
[09:01] <dimitern> voidspace, ah, bugger
[09:01] <jam1> so destroy-environment while it is held doesn't clean it up
[09:01] <voidspace> so that's probably the problem
[09:02] <jam1> voidspace: IIRC, there is a plugin "juju-clean" that nukes everything that needs nuking
[09:02] <dimitern> voidspace, I have a nifty little snippet to obliterate a local env + all lxc artifacts
[09:02] <voidspace> is this an lxc lock or a juju lock?
[09:02] <jam1> I've found it in the past using debugging and finding the file it was waiting for and selectively deleting stuff
[09:02] <voidspace> dimitern: cool
[09:02] <jam1> voidspace: creation of the "precise-template" or trusty is a juju lock
[09:02] <dimitern> voidspace, http://blog.naydenov.net/2014/03/remove-juju-local-environment-cleanly/ - give it a try to see if it'll help, or just run some of the steps
[09:03] <jam1> voidspace: https://github.com/juju/plugins
[09:03] <jam1> dimitern: ^^
[09:03] <jam1> juju-clean is in there, IIRC
[09:03] <voidspace> jam1: thanks, awesome
[09:04] <jam1> its a pretty big hammer
[09:04] <jam1> and I'd like us to never need it, so file bugs when you do need it
[09:04] <jam1> I wish it would report what it actually had to clean up, but anyway
[09:10] <dimitern> :) nice
[09:11] <dimitern> thanks jam1
[09:21] <voidspace> lunch
[09:49] <voidspace> jam1: the juju clean plugin solved my problem and unblocked me.
[09:49] <voidspace> I don't think I can usefully file a bug though as I don't know what it solved :-)
[09:50] <voidspace> I suspect that it was stale filesystem locks from a failed / stalled template download due to my crappy network
[09:50] <voidspace> but I don't know
[09:50] <voidspace> and now I have new and weirder issues, but at least the lxc container starts ok and I have both precise and trusty templates
[09:53] <dimitern> voidspace, it might be the templates weren't downloaded properly
[09:53] <voidspace> dimitern: right
[09:54] <voidspace> now I have working containers, but install hooks seem to fail consistently - but succeed when retried
[09:54] <voidspace> digging in
[09:54] <dimitern> voidspace, can you paste some logs?
[09:55] <dimitern> from the failing unit
[09:55] <voidspace> dimitern: in a bit, I just blew everything away to try again :-)
[09:55] <dimitern> :) alright
[09:55] <voidspace> the failure mode is consistent
[09:55] <dimitern> voidspace, which charm are you trying?
[09:56] <voidspace> command "lsmod | grep -q 8021q || modprobe 8021q" failed
[09:56] <dimitern> voidspace, right!
[09:56] <voidspace> machine-1: 2014-08-06 09:19:28 ERROR juju.worker runner.go:219 exited "networker": command "lsmod | grep -q 8021q || modprobe 8021q" failed (code: 1, stdout: , stderr: modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/3.13.0-32-generic/modules.dep.bin'
[09:56] <dimitern> voidspace, somebody reported that recently
[09:57] <voidspace> dimitern: that's the mysql one
[09:57] <voidspace> dimitern: I reported it yesterday
[09:57] <dimitern> voidspace, :) ah
[09:57] <dimitern> voidspace, the problem with this could be solved if you modprobe 8021q on the host before starting the container, hopefully
[09:57] <voidspace> ah
[09:57] <voidspace> maybe
[09:58] <voidspace> modprobe is looking in the wrong place inside the container
[09:58] <voidspace> it's using the host path
[09:58] <voidspace> so maybe that would solve it
[09:58] <voidspace> dimitern: juju resolved --retry
[09:58] <voidspace> dimitern: seems to consistently fix it
[09:58] <dimitern> voidspace, from inside the container, you can't modprobe stuff, but if it was done on the host, lsmod will list it - that's the intention of the script
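The guard dimitern is describing, `lsmod | grep -q 8021q || modprobe 8021q`, relies on `||` short-circuiting: the load half only runs when the check half fails. A minimal stand-in of the same shape (`check_then_load` and the echo commands are hypothetical, since modprobe itself needs root):

```shell
# Same short-circuit shape as the networker's guard: run the second
# command only if the first one fails.
check_then_load() {
  # $1: check command, $2: load command (both passed as strings)
  eval "$1" || eval "$2"
}
check_then_load "true"  "echo loading-module"   # check passes: no output
check_then_load "false" "echo loading-module"   # check fails: prints loading-module
```

This is also why loading the module on the host before the containers start makes the in-container check succeed: lsmod inside the container sees the host kernel's module list.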
[09:59] <voidspace> dimitern: ok
[09:59] <voidspace> so maybe that's fixed
[09:59] <voidspace> just waiting for wordpress to come up so I can test the relation
[09:59] <dimitern> voidspace, i think I see the problem
[10:00] <voidspace> dimitern: do you want me to file a bug for this?
[10:00] <voidspace> in the meantime
[10:00] <voidspace> moar coffeez
[10:00] <dimitern> voidspace, the hook is not really failing; maybe just the uniter (or the whole unit agent) gets killed and restarted, but because the networker already ran the lsmod script, retry "solves" it
[10:00] <voidspace> heh
[10:01] <voidspace> sounds plausible
[10:01] <dimitern> voidspace, yes, please, and attach the unit + machine logs
[10:47] <dimitern> voidspace, standup?
[10:47] <voidspace> dimitern: on my way
[10:48] <voidspace> dimitern: it's finally letting me in
[10:48] <voidspace> my crappy network
[10:52] <dimitern> voidspace, rejoin?
[10:57] <perrito666> so there are bad luck days and then there are days where you come to work on a borrowed office and forget the power brick for the computer....
[11:09] <rogpeppe1> any chance of a review of a charm package PR, please? https://github.com/juju/charm/pull/36
[11:12] <rogpeppe1> perrito666: :-(
[11:30] <perrito666> https://github.com/juju/utils/pull/18
[11:30] <perrito666> mm, menno and dave left already
[11:40] <TheMue> perrito666: +1
[11:41] <perrito666> TheMue: thanks, but for what exactly?
[11:41] <TheMue> perrito666: your PR
[11:41] <TheMue> ;)
[11:41] <perrito666> ah lol sorry
[11:42] <perrito666> thank you
[11:42] <TheMue> perrito666: yw
[11:43] <TheMue> dimitern: btw, will ping you a bit later. a friend just called and asked if he can grab a cup of coffee :)
[11:43] <dimitern> TheMue, no worries :)
[11:43] <perrito666> nooo, I just discovered the coolest thing from github
[11:44] <TheMue> dimitern: I’ll also invite you to the current doc/collection of notes on Google Docs
[11:44] <perrito666> you can convert a pr into a patch or diff
[11:44] <perrito666> by just adding .diff or .patch to the pr
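What perrito666 found: every GitHub pull request also serves as plain text, just by suffixing its URL (using the PR linked above as the example):

```shell
# Append .diff for a unified diff, .patch for git-format-patch output:
pr_url="https://github.com/juju/utils/pull/18"
echo "${pr_url}.diff"
echo "${pr_url}.patch"
```

The .patch form is a mailbox, so something like `curl -L "${pr_url}.patch" | git am` can apply the PR's commits to a local checkout.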
[11:51] <dimitern> TheMue, sure, can you send me a link?
[11:52] <TheMue> dimitern: you should have received the google mail
[11:52] <dimitern> TheMue, ah, I've just seen it, thanks!
[11:55]  * TheMue is afk
[11:58] <voidspace>  dimitern: I filed that bug by the way https://bugs.launchpad.net/juju-core/+bug/1353443
[11:58] <voidspace> tasdomas: https://bugs.launchpad.net/juju-core/+bug/1353443
[11:59] <dimitern> voidspace, cheers!
[12:08] <voidspace> dimitern: and I can confirm, with this branch I can use local provider and it doesn't kill my networking
[12:08] <voidspace> https://github.com/voidspace/juju/tree/network-interfaces
[12:08] <voidspace> dimitern: I need to work on tests, and maybe prettify it - but confirmation that this works for MAAS would be good
[12:16] <dimitern> voidspace, great! I'll get to it soon to test on MAAS
[12:17] <voidspace> dimitern: cool, let me know please
[12:17] <dimitern> voidspace, will do
[12:34] <natefinch> voidspace: huzzah for not killing networking
[12:35] <voidspace> natefinch: heh, well hopefully this fix works for MAAS
[12:35] <voidspace> natefinch: but yeah, the fix is basically "don't screw with networking on the local provider host"
[12:39] <perrito666> natefinch: lol
[12:39]  * perrito666 tries menn0 and davecheney's patch without success
[12:39] <perrito666> sinzui: any clue when that will land on jenkins?
[12:41] <sinzui> perrito666, I see something being tested now http://juju-ci.vapour.ws:8080/
[12:42] <sinzui> perrito666, wallyworld's  maas-hostname-address is being tested now
[12:42] <voidspace> natefinch: ping
[12:43] <natefinch> voidspace: what's up?
[12:52] <perrito666> I did not understand if menn0's patch actually fixes the bug :|
[12:55] <wwitzel3> rogpeppe1: ping
[12:55] <rogpeppe1> wwitzel3: pong
[12:56] <wwitzel3> rogpeppe1: re your mailing list reply .. what collection in mongo do I need to put the machine-0 peer group member in?
[12:57]  * rogpeppe1 looks
[12:57] <wwitzel3> rogpeppe1: well, I guess I could look at what collection the code is reading from ..
[12:57] <wwitzel3> rogpeppe1: no point in making you do my foot work :P
[12:57] <rogpeppe1> wwitzel3: :-)
[12:57] <wwitzel3> rogpeppe1: was just checking if you knew without looking :)
[12:58] <rogpeppe1> wwitzel3: i can never remember state collection names...
[12:58] <rogpeppe1> wwitzel3: you'll need to know the field names too of course
[12:59] <jam1> dimitern: hey, you brought up that we actually get 2 IPv6 addresses as the common case, can you clarify what those are?
[12:59] <jam1> I *think* they are (1) Link Local (which isn't really routable from the outside and (2) Actually the routable one
[12:59] <jam1> is that true?
[13:00] <jam1> in which case, we can really just ignore (1), and we are down to just one IPv6 address
[13:00] <dimitern> jam1, true
[13:01] <jam1> dimitern: and LinkLocal is a known IPv6 prefix, right? So it is trivial to filter out in the instance poller
[13:02] <dimitern> jam1, the link-local is required by the IPv6 spec to exist; obviously ::1 will exist as well, and a private (or Unique Local Address in IPv6 terms) might be there or not (in the latter case we can't usefully do anything ofc)
[13:02] <dimitern> jam1, it is, and we're actually doing that already
[13:02] <jam1> dimitern: can we usefully detect that it is a "private" address that we can't make use of ?
[13:03] <natefinch> sinzui: do we have a tag for "blocks CI"?  if not, can we make one?
[13:03] <dimitern> jam1, so my comment was more about "we need to be aware of this, if we're aiming to model and display all addresses"
[13:03] <dimitern> jam1, if you refer to a ULA - no, we can't unless we try
[13:04] <sinzui> natefinch, "ci regression" are the tags in combination
[13:04] <dimitern> jam1, otherwise, any valid IPv6 can be usable locally in the cloud of course
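The filtering dimitern says is already being done is simple because link-local addresses all sit in one prefix, fe80::/10. A hypothetical shell helper sketching the check (juju's actual implementation is in Go; the lowercase textual form is assumed):

```shell
# fe80::/10 fixes the first ten bits, i.e. textual prefixes fe80
# through febf -- so a glob on the first four hex digits is enough:
is_ipv6_link_local() {
  case "$1" in
    fe[89ab]?:*) return 0 ;;
    *) return 1 ;;
  esac
}
is_ipv6_link_local "fe80::1"     && echo "fe80::1 is link-local"
is_ipv6_link_local "2001:db8::1" || echo "2001:db8::1 is routable"
```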
[13:07] <natefinch> sinzui: how do I search for two tags at the same time?
[13:08] <natefinch> nevermind, I see the tags field in advanced search
[13:09] <natefinch> sinzui: there's a bug marked medium that is tagged ci regression.... I presume that's not blocking CI
[13:10] <natefinch> http://goo.gl/URSQcV
[13:10] <sinzui> natefinch, advanced search has an option for both, but right now only triaged and in-progress critical ci regressions block
[13:12] <sinzui> natefinch, This is the smallest query https://bugs.launchpad.net/juju-core/+bugs?field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.importance%3Alist=CRITICAL&field.tag=ci+regression+&field.tags_combinator=ALL
[13:13] <abentley> sinzui: I'm catching up on email now.  It seems a fair few things happened when I was gone.  Should we have a chat?
[13:13] <sinzui> abentley, I cannot now
[13:14] <sinzui> abentley, I would like to chat, maybe I can steal away from a meeting
[13:15] <abentley> sinzui: Sure.  I'm around all day.
[13:16] <natefinch> sinzui: thanks for the link
[13:18] <sinzui> natefinch, I will change reports.vapour.ws to show that information. Lp requires separate queries for trunk and stable
[13:18] <perrito666> sinzui: natefinch arosales I believe one of you is the person who can give me azure credentials
[13:18] <sinzui> perrito666, arosales can link your azure account to the canonical account. I can help you run tests as CI
[13:19]  * perrito666 thinks he does not have an azure account at all
[13:19] <natefinch> perrito666: I used to have an azure account, but then my last company decided to stop paying for my MSDN subscription 10 months after I left.  I guess they don't love me any more.
[13:20] <sinzui> perrito666, you have mail
[13:21] <perrito666> sinzui: yes I do :p
[13:21] <sinzui> natefinch, you can read environments.yaml in cloud-city to run as CI
[13:27] <natefinch> sinzui: thanks
[13:31] <perrito666> sinzui: how likely am I to break things with the data you just passed to me?
[13:36] <natefinch> wwitzel3:  let's do our 1:1 after the standup, if that's ok?
[13:36] <wwitzel3> natefinch: that's fine
[14:01] <natefinch> mup: help
[14:01] <mup> natefinch: Run "help <cmdname>" for details on: bug, echo, help, infer, poke, run, sendraw, sms
[14:01] <natefinch> mup: help sms
[14:01] <mup> natefinch: sms <nick> <message ...> — Sends an SMS message.
[14:01] <mup> natefinch: The configured LDAP directory is queried for a person with the provided IRC nick ("mozillaNickname") and a phone ("mobile") in international format (+NN...). The message sender must also be registered in the LDAP directory with the IRC nick in use.
[14:02] <natefinch> mup help poke
[14:02] <natefinch> mup: help poke
[14:02] <mup> natefinch: poke <query ...> — Searches people in the LDAP directory.
[14:02] <mup> natefinch: The provided query will be searched for as an exact IRC nick ("mozillaNickname") or part of a name ("cn").
[14:02] <natefinch> mup: poke Finch
[14:02] <mup> natefinch: Plugin "ldap" is not enabled here.
[14:02] <natefinch> doh
[14:03] <natefinch> mup: help infer
[14:03] <mup> natefinch: infer [-all] <query ...> — Queries the WolframAlpha engine.
[14:03] <mup> natefinch: If -all is provided, all known information about the query is displayed rather than just the primary results.
[14:03] <natefinch> mup: infer time
[14:03] <mup> natefinch: 10:03:44 am EDT Wednesday, August 6, 2014.
[14:05] <natefinch> mup: sms natefinch boo!
[14:05] <mup> natefinch: Plugin "aql" is not enabled here.
[14:11] <ericsnow> natefinch: getting cozy with mup, eh?
[14:12] <ericsnow> :)
[14:14] <natefinch> ericsnow: just read gustavo's email, so figured I'd try it out.  Evidently it's good I did.
[14:14] <ericsnow> natefinch: plugins are overrated <wink>
[14:27] <dpb1> Hi -- when juju clones an LXC template I notice the cloud-init gets run again.  How does that happen?
[14:28] <ericsnow> here's a fix for (at least most of, if not all of) issue #1351019: https://github.com/juju/juju/pull/476
[14:28] <wwitzel3> sinzui: I noticed that manual for precise and trusty had clean runs this morning, is bug #1347715 still an issue?
[14:38] <sinzui> wwitzel3, Those were 1.20
[14:39] <sinzui> wwitzel3, http://juju-ci.vapour.ws:8080/job/manual-deploy-precise-amd64/ shows that the blues are 1.20 and the reds are master
[14:39]  * sinzui adds task to make reports/jenkins be clear about what is under test
[14:40] <wwitzel3> sinzui: ahh ok, I'm having trouble replicating the error locally with master. I manually started up an EC2 instance on AWS, configured the manual provider with its host, then I've done juju bootstrap .. and I'm not getting any of the SSH errors.
[14:43] <sinzui> wwitzel3, Have you set authorized-keys or authorized-key-path. CI is always stating which key to use
[14:44]  * sinzui thinks about the test
[14:45] <dimitern> voidspace, so far my testing with the local provider and your branch looks fine (had some trouble convincing it to run at all at first)
[14:45] <wwitzel3> sinzui: no, I will set that in the config and try again .. does it set both?
[14:46] <dimitern> voidspace, unfortunately, I won't be able to do the MAAS test today, as I'll need to go out soon, but I'll do it tomorrow morning, if that's ok
[14:49] <dimitern> voidspace, it seems the install hook errors you're getting are happening to me as well, and in 2 cases: 1) when trying parallel deployments (i.e. deploy wordpress, then deploy mysql without waiting for the first to install and start), 2) with each lxc container after the first one
[14:51] <dimitern> both of these are related to the lxc filesystem cloning or whatever, due to "Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?" (i.e. apt-get fails on one container, because the lock is held by another container running apt-get or the host itself)
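The failure above is apt-get in one container (or the host) erroring out because another process holds /var/lib/dpkg/lock. One blunt mitigation is to retry instead of failing immediately; a generic sketch, where `retry_cmd` is a hypothetical helper and the apt-get line is only an illustration:

```shell
# Retry a command up to 5 times, sleeping between attempts; returns
# non-zero if it never succeeds.
retry_cmd() {
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge 5 ] && return 1
    sleep 1
  done
}
retry_cmd true && echo "succeeded"
# real use would look like: retry_cmd apt-get install -y mysql-server
```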
[14:52] <mattyw> hi folks, is someone able to answer some questions about MESS for me?
[14:52] <jrwren> dimitern: each container should have its own /var/lib/dpkg, shouldn't it?
[14:53] <dimitern> jrwren, I'd have guessed so, but something apparently changed recently
[14:54] <dimitern> ..and it looks troubling
[14:59] <sinzui> wwitzel3, CI ran something like this to create the instance
[14:59] <sinzui> euca-run-instances -k id_rsa -t m1.large -g my-manual-test ami-36aa4d5e
[14:59] <mattyw> rogpeppe1, ping?
[14:59] <sinzui> wwitzel3, ^ change id_rsa to the key you have in ec2. That might help reproduce the ec2 setup
[15:01] <bodie_> pinging fwereade re his concerns on pr 415 -- addressed or commented
[15:06] <wwitzel3> sinzui: ok, thanks
[15:12] <voidspace> dimitern: how odd
[15:12] <voidspace> dimitern: are you seeing that on trunk?
[15:12] <voidspace> dimitern: or with my branch
[15:12] <voidspace> it seems unlikely that my branch causes that
[15:12] <natefinch> wwitzel3: ericsnow: perrito666:  exceedingly late standup?
[15:13] <wwitzel3> natefinch: oops, sorry
[15:13] <ericsnow> natefinch: hangouts is flaking out on me
[15:13] <wwitzel3> omw
[15:13] <perrito666> natefinch: sorry going there
[15:13] <jam1> mgz: poke
[15:13] <voidspace> dimitern: I have to go out soon as well (~20minutes), but I'll be working again later tonight
[15:14] <dimitern> voidspace, I guess trunk is the same as your branch in this case (no local-provider-specific lxc changes); maybe it's just my env config
[15:14] <voidspace> dimitern: maybe - I don't think that's the *same* error I was seeing
[15:15] <rogpeppe1> mattyw: pong
[15:18] <mattyw> rogpeppe1, do you know much about the potential multi environment state server stuff? I have a specific question about how it might or might not affect the agent on the state server
[15:19] <rogpeppe1> mattyw: some, but i'm not directly involved
[15:19] <mattyw> rogpeppe1, do you know who is?
[15:19] <rogpeppe1> mattyw: in the implementation of the multi-tenant state server
[15:19] <rogpeppe1> mattyw: no, sorry
[15:20] <voidspace> mattyw: I believe that would be thumper's team
[15:20] <voidspace> mattyw: natefinch was involved in the planning of that feature - but has now passed it on I believe
[15:20] <mattyw> voidspace, ok great thanks
[15:21] <mattyw> rogpeppe1, multi tenant is where the state server might be split across machines?
[15:21] <rogpeppe1> mattyw: no, where a single state server can serve several environments
[15:22] <rogpeppe1> mattyw: the state server itself can always be split over multiple machines (HA)
[15:22] <rogpeppe1> pwd
[15:22] <mattyw> rogpeppe1, that's what I was going to ask - I thought it was the same as ha
[15:22] <voidspace> rogpeppe1: C:\Documents and Settings
[15:22] <rogpeppe1> mattyw: nope. entirely orthogonal.
[15:22] <rogpeppe1> rofl
[15:22] <alexisb> natefinch, ping
[15:22] <mattyw> voidspace, nice
[15:23] <natefinch> alexisb: what is up, yo?
[15:24] <natefinch> ericsnow: reboot, it always fixes things (or deletes your networking config.... one of those two)
[15:24] <wwitzel3> ericsnow: did you try turning it off and back on again? ;)
[15:24] <ericsnow> natefinch: wwitzel3: gee, thanks ;)
[15:46] <fwereade> natefinch, ping
[15:47] <fwereade> natefinch, do you have a rough developer-weeks estimate for remaining time on backup/restore
[15:48] <perrito666> fwereade: he is afk
[15:56] <natefinch> back
[16:15] <natefinch> wwitzel3: should I reassign the manual provider bug to you?
[16:17] <jcw4> I have a couple of minor pull requests waiting to land that are just test changes that will isolate a handful of tests from the users .bashrc better
[16:18] <jcw4> Any objections to me JFDI'ing those PR's since they're approved and they would only contribute to more stable tests, and don't change non-test code?
[16:19] <natefinch> jcw4: show me the PRs?
[16:19] <jcw4> natefinch: https://github.com/juju/juju/pull/450
[16:20] <jcw4> natefinch: https://github.com/juju/juju/pull/454
[16:21] <jcw4> natefinch: re-reading 450 it does change non-test code
[16:21] <jcw4> so I'm fine with that one waiting...
[16:22] <perrito666> aghh lost sinzui again
[16:27] <mgz> jcw4: I'm sending pr454 through now
[16:27] <jcw4> mgz: tx!
[16:27] <mgz> 450 also seems fine
[16:27] <wwitzel3> natefinch: I will pick it up, I'm in there now anyway
[16:28] <jcw4> yeah trivial change too.  Thanks mgz and natefinch
[16:28] <wwitzel3> though I can't replicate it even using the euca command sinzui gave me.
[16:32] <perrito666> bbl ppl
[17:47] <mattyw> calling it a day folks, night!
[18:05] <natefinch> phew... electrician and bee inspector showed up at the same time
[18:06] <mgz> electric bees!
[18:07] <natefinch> I do have an electric fence around the bees, but that's just coincidental :)
[19:31] <perrito666> natefinch: what is a bee inspector_
[19:32]  * perrito666 has the strangest bug: no browser will start
[19:32] <wwitzel3> perrito666: I've had that before
[19:32] <wwitzel3> perrito666: I never figured it out, just ended up logout/login
[19:33] <perrito666> lol if I tell you what it is you will laugh at me
[19:33] <perrito666> browsers were starting up on the external screen and the monitor was off
[19:34] <wwitzel3> lol
[19:34] <wwitzel3> you are right
[19:35] <natefinch> haha
[19:35] <natefinch> I've had that happen
[19:35] <natefinch> What's worse is when they open up completely off the screen top or bottom because you've changed resolutions or something
[19:36] <perrito666> natefinch: I use all external screen apps full screen to avoid that
[19:36] <perrito666> (as in f11)
[19:36] <natefinch> perrito666: a bee inspector is someone paid by the state to go around to all the known beehives in his county and check them for disease, mites, etc.
[19:37] <perrito666> wow you guys really have the state in your lives, our state only takes money and leaves us alone
[19:42] <natefinch> perrito666: not all states in the US have inspectors, but some do.  Bees are pretty important for agriculture, so we try to make sure some idiot doesn't spread disease and screw everything up
[19:45] <perrito666> natefinch: that makes sense, I dont think we have control for bees here, at least not for small producers
[19:55] <perrito666> sinzui: are you around?
[20:04] <natefinch> mup: infer time in Nuremburg
[20:04] <mup> natefinch: Cannot infer much out of this. :-(
[20:04] <wwitzel3> lol
[20:04] <natefinch> hrmph
[20:05] <natefinch> mup: infer time
[20:05] <mup> natefinch: 4:05:16 pm EDT Wednesday, August 6, 2014.
[20:05] <natefinch> mup: infer time in nuremburg, germany
[20:05] <mup> natefinch: Cannot infer much out of this. :-(
[20:05] <natefinch> dang
[20:05] <jcw4> mup: infer time UTC+0200
[20:05] <mup> jcw4: 10:05:56 pm GMT+2 Wednesday, August 6, 2014.
[20:06] <natefinch> it worked for gustavo in his email :/
[20:06] <jcw4> teh suk
[20:06] <natefinch> mup: infer timezone in nuremburg
[20:06] <mup> natefinch: Cannot infer much out of this. :-(
[20:06] <niemeyer> mup: infer time in nurenbERG
[20:06] <mup> niemeyer: 10:06:37 pm CEST Wednesday, August 6, 2014.
[20:06] <natefinch> agg frig
[20:06] <jcw4> s/m/n s/u/e/
[20:07] <natefinch> niemeyer: spelling gets me every time
[20:09] <natefinch> google says it's nuremberg
[20:09] <natefinch> anyway
[20:11] <niemeyer> Not burg
[20:12] <natefinch> yep
[20:12] <jcw4> niemeyer: so that's mountain not city right?
[20:13] <niemeyer> jcw4: What is?
[20:13] <jcw4> berg vs. burg
[20:13] <niemeyer> Good question, I don't know, but I'm curious now
[20:13] <jcw4> niemeyer: at least that's the difference in afrikaans
[20:13] <niemeyer> The real city name is actually Nürnberg
[20:13] <natefinch> mup: infer time in Nürnberg
[20:14] <mup> natefinch: Cannot infer much out of this. :-(
[20:14] <natefinch> boo
[20:14] <jcw4> heh
[20:14] <niemeyer> natefinch: Ask in German
[20:14] <jcw4> what does mup use? google?
[20:14] <natefinch> rofl
[20:14] <niemeyer> mup: help infer
[20:14] <mup> niemeyer: infer [-all] <query ...> — Queries the WolframAlpha engine.
[20:14] <mup> niemeyer: If -all is provided, all known information about the query is displayed rather than just the primary results.
[20:14] <jcw4> mup: infer -all berg vs. burg
[20:14] <mup> jcw4: Distances: distance flight time, Berg, Jamtland, Sweden to Burg 729 miles 1 hour 20 minutes, (assuming direct flight path at 550 mph) — Demographics: Berg, Jamtland, Sweden Burg, Saxony-Anhalt, Germany, population 8175 people 24364 people.
[20:14] <mup> jcw4:  — Geographic properties: Berg, Jamtland, Sweden Burg, area 2384 mi^2 (square miles), average elevation 124.7 feet.
[20:14] <natefinch> niemeyer: my german is umm.... somewhat rusty
[20:14] <jcw4> lol
[20:15] <perrito666> mup: folgern Zeit in Nürnberg
[20:15] <mup> perrito666: In-com-pre-hen-si-ble-ness.
[20:15] <natefinch> rofl
[20:15] <perrito666> mup: infer Zeit in Nürnberg
[20:15] <mup> perrito666: Cannot infer much out of this. :-(
[20:16] <jcw4> mup: was ist die ziet in Nürnberg
[20:16] <mup> jcw4: Roses are red, violets are blue, and I don't understand what you just said.
[20:16] <natefinch> mup: infer -all Nuremberg
[20:16] <mup> natefinch: Population: city population 505664 people (country rank: 14th) (2010 estimate).
[20:18] <katco> niemeyer: hey, while you're here. where will the goamz lib live on github?
[20:18] <katco> niemeyer: also, hello :)
[20:19] <niemeyer> katco: Hey :)
[20:19] <niemeyer> katco: I don't really know yet
[20:19] <natefinch> oh hey, while people are paying attention, how do you specify an empty map in yaml?  Google is failing me.
[20:20] <katco> niemeyer: ok no worries. we're just about wrapping up changes, so it would be nice to have a home for the little guy somewhat soon, but we can work around it if not :)
[20:21] <jcw4> mup: infer -all all your base
[20:21] <mup> jcw4: ...are belong to us., (according to the video game Zero Wing).
[20:21] <katco> mup: infer \b
[20:21] <mup> katco: Cannot infer much out of this. :-(
[20:22] <katco> good job, mup :)
[20:22] <jcw4> hehe
[20:22] <jcw4> mup: infer how to represent an empty map in yaml?
[20:22] <katco> haha
[20:22] <mup> jcw4: Cannot infer much out of this. :-(
[20:22] <katco> there we go
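natefinch's YAML question above never gets an answer in-channel. For the record, a minimal sketch (key names are illustrative): an empty map is written in flow style as `{}`.

```yaml
# an empty map as a value — flow style is the usual spelling
options: {}

# a whole document consisting only of "{}" also parses as an empty map
```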
[20:22] <katco> mup: help
[20:22] <mup> katco: Run "help <cmdname>" for details on: bug, echo, help, infer, poke, run, sendraw, sms
[20:23] <jcw4> oooh sms
[20:23] <katco> mup: help poke
[20:23] <mup> katco: poke <query ...> — Searches people in the LDAP directory.
[20:23] <mup> katco: The provided query will be searched for as an exact IRC nick ("mozillaNickname") or part of a name ("cn").
[20:23] <katco> mup: poke katco-
[20:23] <mup> katco: Plugin "ldap" is not enabled here.
[20:23] <natefinch> poke and sms don't work on freenode
[20:23] <katco> ah
[20:24]  * natefinch made that mistake this morning :)
[20:24] <katco> natefinch: hey new site looks nice
[20:24] <katco> mup: help sendraw
[20:24] <mup> katco: sendraw [-account=<string>] <text ...> — Sends the provided text as a raw IRC protocol message.
[20:24] <mup> katco: If an account name is not provided, it defaults to the current one.
[20:24] <perrito666> sinzui: ping me when you are around
[20:25] <natefinch> katco: thanks... Hugo is pretty cool, and since it's in Go, I can actually understand the code ;)
[20:25] <katco> natefinch: lol
[20:26] <katco> i have been thinking about switching to a static based site
[20:26] <natefinch> katco: it's mostly the jekyll theme, ported to Hugo's format, and tweaked slightly by me.  I have near zero web dev / design ability... but I am really good at changing margins and colors
[20:26] <katco> just using blogger right now
[20:26] <katco> lol
[20:27] <natefinch> er... the "hyde" jekyll theme
[20:27] <katco> http://stchuxderbychix.appspot.com/ <-- old website i did in Go for my derby team. not using best practices though.
[20:27] <natefinch> katco: highly recommend Hugo.  It's pretty easy to use, since it's all Go templates and pretty well thought out.
[20:28] <natefinch> katco: I actually asked Steven Francia to be a committer on the repo so I could help fix bugs and land PRs and stuff
[20:28] <natefinch> also, free hosting on github is pretty sweet
[20:28] <katco> yeah haha
[20:28] <katco> google app engine is probably better free hosting ;)
[20:28] <katco> geographical redundancy for my dinky blog!
[20:28] <natefinch> katco: that site looks 100 times better than anything I could produce.  Nice thing about a blog is that it's mostly text :)
[20:29] <katco> natefinch: i was pretty proud of it. it aggregates data from different web services
[20:29] <natefinch> Oh yeah, I remember you mentioning that. Pretty cool.
[20:30] <katco> it's funny that it's a dead site, but it still has some current info
[20:30] <katco> e.g. next bout looks like
[20:31] <natefinch> katco: can GAE easily host a static site?  I like that I don't have to do anything but git push to github to update my site.
[20:31] <katco> natefinch: yeah. git push is way easier, but they have a python script that pushes things up for you
[20:31] <katco> but yeah, does static content quite nicely actually
[20:31] <katco> and since anything under some huge amount of traffic is free, it's essentially free hosting :p
[20:32] <katco> b/c it was designed for actual applications
[20:32] <katco> but you can also run some live go code on there if you want a bit of dynamic ability. python, java, i think php now
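A minimal sketch of the setup katco describes, assuming a 2014-era App Engine `app.yaml` serving only static files; the application id and the `public` directory name are placeholders, not anything from the log:

```yaml
# hypothetical app.yaml for a purely static site on (2014-era) GAE;
# "my-static-blog" and "public" are placeholder names
application: my-static-blog
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
# serve public/index.html at the site root
- url: /
  static_files: public/index.html
  upload: public/index.html
# serve everything else straight out of the public directory
- url: /
  static_dir: public
```

Deployment then is the "python script that pushes things up" mentioned above (appcfg.py), rather than a git push.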
[20:33] <natefinch> yeah, that's cool
[20:34] <katco> of course i don't know why anyone would want to use python /duck
[20:35]  * natefinch notes katco doesn't even need to mention java or php because.... *shudder*
[20:35] <katco> haha
[20:35] <katco> java... ok i can do that. php... just... no. never again.
[20:35] <katco> i wrote a CMS in php back in the early 00's. before it had support for classes.
[20:35] <natefinch> I've never done PHP.   Java I've done.  It's ok.
[20:36] <natefinch> gotta go in  a sec
[20:36] <katco> java is ok albeit verbose. php to me is design by accident
[20:36] <perrito666> there is this cool tool called nikola which generates your whole website from reST (reStructuredText), including comments support, paging and some other goodies
[20:37] <natefinch> perrito666: hugo does that from markdown and Go templates
[20:37] <natefinch> http://hugo.spf13.com/
[20:37] <natefinch> ok, gotta go
[21:06] <ericsnow> could anyone spare a few minutes for a review (the patch is small)? https://github.com/juju/juju/pull/476
[21:19] <menn0> waigani: morning
[21:19] <perrito666> menn0: morning
[21:19] <perrito666> good findings
[21:19] <menn0> perrito666: howdy
[21:19] <waigani> menn0: morning
[21:19] <menn0> yeah
[21:19] <menn0> it took all day :)
[21:20] <menn0> but I learned a lot along the way
[21:20] <waigani> perrito666: morning :)
[21:20] <waigani> menn0: what took all day?
[21:20] <menn0> figuring out the HA problem
[21:20] <perrito666> menn0: you make me feel better, I was all effing day setting those things up and failing
[21:20] <waigani> menn0: you solved it?
[21:21] <menn0> well I found the commit that caused the problem and reverted it
[21:21] <menn0> although, I've seen that the merge failed
[21:21] <menn0> more to look at ...
[21:21] <waigani> ah yeah, just saw your PR - nice work man!
[21:22] <menn0> I need to take care of something here (screaming kids)
[21:22] <menn0> back in a bit
[21:22] <perrito666> menn0: your revert should break the build somewhere else though
[21:53] <menn0> perrito666: sorry.
[21:53] <menn0> perrito666: yeah I feared that it might
[21:53] <perrito666> menn0: I was waiting on it to merge
[21:54] <menn0> perrito666: I will try to understand the Safe.J change some more
[21:54] <perrito666> but reading that commit's message it seems to be fixing a bug in newer versions of juju
[21:54] <menn0> new versions of mongo right?
[21:54] <perrito666> well apparently setting safe.j was ok even though you had no journal set
[21:54] <perrito666> menn0: yes
[21:55] <perrito666> but since 2....6? i think that is no longer true
[21:57] <menn0> well before doing anything else I'm trying the merge again because the way it failed is one of the errors we see regularly in test runs
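For anyone reading along: the Safe.J being discussed maps to MongoDB's `j` (journal-acknowledgement) write-concern flag. Consistent with what perrito666 describes, older servers would quietly accept the flag when journaling was off, while MongoDB 2.6 started returning an error instead. An illustrative write-concern document, in mongo shell syntax (collection name is a placeholder):

```javascript
// illustrative: ask the server to acknowledge the write only once it
// has reached the on-disk journal. Against a mongod started with
// --nojournal, 2.6+ rejects this, where 2.4 let it slide.
db.things.insert({x: 1}, {writeConcern: {w: 1, j: true}})
```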
[22:27] <perrito666> menn0: your changes got merged
[22:28] <perrito666> please have an eye on jenkins
[22:28] <menn0> perrito666: \o/
[22:28] <menn0> perrito666: what do you mean: "please have an eye on jenkins"
[22:29] <perrito666> menn0: there are a couple of CI jobs linked on the ticket
[22:29] <perrito666> make sure they pass when they test your rev
[22:29] <menn0> perrito666: right
[22:29] <menn0> perrito666: I'm also running the CI test you were testing with (test_recovery.py --ha) manually now
[22:30] <menn0> perrito666: so far it's already getting further than it did before
[22:30] <menn0> perrito666: thanks for those instructions btw. they help a lot.
[22:30] <perrito666> menn0: I am glad
[22:31] <menn0> I didn't end up doing the custom stream creation though
[22:31] <menn0> I hacked up the test helpers a bit ...
[22:36] <perrito666> heh
[22:51] <menn0> perrito666: hooray! test_recovery.py --ha passes
[23:03] <ericsnow> menn0: \o/
[23:08] <perrito666> sweet
[23:08] <perrito666> now we need to bug curtis to see if it passes a few times
[23:22] <menn0> perrito666: Curtis knows
[23:34] <waigani> menn0: I'm happy to do some reviewing if that helps
[23:34] <menn0> waigani: that would be awesome
[23:35] <waigani> menn0: okay - you owe me a beer ;)
[23:35] <menn0> waigani: done!
[23:35] <menn0> waigani: we should coordinate to make sure we don't end up overlapping too much
[23:36] <waigani> menn0: on that point I think you can actually assign a reviewer to a PR
[23:37] <menn0> waigani: ok cool. I didn't know that.
[23:37] <menn0> let's do that.