[00:23] perrito666: you still around?
=== mwhudson_ is now known as mwhudson
=== mjs0 is now known as menn0
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
[03:48] https://bugs.launchpad.net/juju-core/+bug/1353239
[05:19] menn0_: i'm going to try again
[05:19] but this time run my mongo in the foreground
[05:19] davecheney: good idea
[05:20] I'm getting closer to finding the culprit rev
[05:21] menn0_: i'm not sure it's our fault
[05:21] mongo appears to be shitting itself
[05:21] we're still using 2.4.x on precise
[05:21] I think it's triggered by the way we're setting up mongo though
[05:22] sure
[05:22] no argument there
[05:22] whether what we're doing is reasonable or not is another matter
[05:22] but having to work around fragile software does not lead to robust systems
[05:23] sure but let's narrow this down first before jumping to conclusions :)
=== menn0_ is now known as menn0
[05:34] reasons to hate upstart number 1<<72
[05:34] ubuntu@ip-10-251-35-4:~$ service juju-db
[05:34] juju-db: unrecognized service
[05:34] ubuntu@ip-10-251-35-4:~$ service juju-db status
[05:34] juju-db start/running, process 10783
[05:37] davecheney: "service" isn't upstart is it? You want: "status <job>"
[05:38] why does the 2 arg form fail but the 3 arg version work?
[05:39] menn0: i'm seeing the same mongo connection reauthenticating itself over and over again
[05:39] davecheney: yep I think I see the same
[05:39] that sounds wrong
[05:40] * davecheney logs bug
[05:41] https://bugs.launchpad.net/juju-core/+bug/1353275
[05:46] hmmm, Wed Aug 6 05:45:38.641 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
[05:46] Wed Aug 6 05:45:38.641 [rsMgr] replSet info electSelf 1
[05:46] Wed Aug 6 05:45:39.574 [conn2] end connection 127.0.0.1:32995 (1 connection now open)
[05:53] menn0: even when mongo doesn't shit itself
[05:53] the environment is still unusable
[05:53] the api server is offline
[05:53] yes, I've seen that
[05:53] in fact I have an env doing exactly that now
[05:54] I'm beginning to think this is the culprit:
[05:54] commit 62e172632c3e9d8496805ed5223f9f4acc28986a
[05:54] Merge: 8275fa4 ce0840e
[05:54] Author: Juju bot
[05:54] Date: Thu Jul 31 12:53:28 2014 +0100
[05:54] Merge pull request #416 from axw/mongo-journalenabled
[05:54]
[05:54] Only set Safe.J (with mgo.SetSafe) if journaling is enabled
[05:54]
[05:54] Mongo 2.6 errors if you attempt to set Safe.J when journaling is disabled. We introduce a new fun
[05:55] menn0: if you want to push the revert button
[05:55] i'll LGTM that
[05:55] i think it's time for some drastic action
[05:55] I'm going to try a local build first without that rev and see what happens
[05:56] oh look, https://bugs.launchpad.net/mgo/+bug/1340275
[06:01] davecheney: interesting...
[06:03] menn0_: more information
[06:03] machines 1, and 2
[06:03] the new replicas
[06:03] never start
[06:03] sorry
[06:03] the machine gets through cloud init
[06:03] but they can never connect to the original api server
[06:04] so they can never get their configuration and find they are running a state server job
[06:06] * menn0_ nods
[06:06] I'm not sure it always happens that way but I have seen something like that
[06:06] I'm trying now with the Safe.J change removed
[06:16] davecheney: how much longer are you going to be around
[06:16] ?
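The suspect change above (PR #416, "Only set Safe.J (with mgo.SetSafe) if journaling is enabled") amounts to making the Safe.J write-concern flag conditional on the server's journaling state. A minimal sketch of that idea with mgo follows; the serverStatus "dur" probe is an assumption made for illustration, not necessarily how the actual patch detects journaling:

```go
package safemode

import (
	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// journalingEnabled reports whether the connected mongod has its
// journal on. serverStatus includes a "dur" section only when
// journaling is enabled (an assumption made for this sketch).
func journalingEnabled(session *mgo.Session) bool {
	var status struct {
		Dur bson.M `bson:"dur"`
	}
	if err := session.Run(bson.D{{Name: "serverStatus", Value: 1}}, &status); err != nil {
		return false
	}
	return status.Dur != nil
}

// SetSafeMode requests journaled writes only when the server can
// honour them: Mongo 2.6 errors on Safe.J if journaling is off.
func SetSafeMode(session *mgo.Session) {
	safe := mgo.Safe{}
	if journalingEnabled(session) {
		safe.J = true
	}
	session.SetSafe(&safe)
}
```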
[06:16] I need to go eat with the family
[06:19] menn0_: i'm out pretty much now
[06:19] need to get into the city for that meetup
[06:19] ok
[06:19] davecheney: that ensure-avail just finished and 3 state servers are up and "started"
[06:19] things aren't happy in the logs but removing that revision seems to have helped the immediate issue
[06:20] i'll send an email a bit later on
[06:20] menn0_: what do you think?
[06:20] do you want to propose it?
[06:20] I think I'll email people and wait for some feedback
[06:20] ok
[06:22] davecheney: update... so things have now settled down post "juju ensure-availability" and the env looks healthy
[06:22] I think it's probably that rev
[06:26] menn0_: my vote is to roll that rev back
[06:26] you have my LGTM if you want to land that change
[06:42] morning all
[06:48] voidspace: mornin'
[06:48] * menn0_ is about to upgrade his router firmware to see if it helps his flaky link
[06:52] menn0_: morning :-)
[06:55] morning
[06:55] dimitern: morning
[07:18] m1k43l
[07:33] dimitern: ping
[07:33] dimitern: if a machine is "pending" (instance-id and agent-state for the unit) how do I tell what state it's actually in
[07:33] i.e. what it's waiting for / where it has got to
[07:34] dimitern: this is the local provider, and I suspect it's downloading the lxc image
[07:35] but I'd like to *know*
[07:35] in a call, will get back to you sorry
[07:35] there's nothing of use in the logs
[07:35] dimitern: sure
[07:35] np
[07:35] will go make coffee
[07:35] maybe it just needs time
[07:35] 10x :)
[07:55] voidspace, i'm back; so to answer your question - "pending" means the machine agent hasn't yet started (or at least hasn't logged into the api to start the agent alive presence pinger)
[07:56] dimitern: right, and I suspect that the vm (in this case an lxc container) is not up
[07:56] dimitern: how do I *tell*
[07:56] dimitern: it's still pending
[07:56] voidspace, ah, debugging why an lxc container is not starting is fun :)
[07:56] is it even possible...
[07:57] voidspace, i'd try a couple of things: 1) set logging-config in env.yaml for that environment to <root>=DEBUG
[07:57] ok, cool
[07:57] voidspace, oops sorry, <root>=TRACE
[07:57] good start
[07:57] right
[07:57] voidspace, trace logs for all lxc ops should be in the MA log
[07:58] dimitern: I rejoined juju-networking
[07:58] voidspace, then, you can check $ sudo lxc-ls --fancy on the machine to see if lxc is running
[07:58] so machine-1.log
[07:58] voidspace: I think if you find things like "downloading lxc images" aren't in the logs, but are there under TRACE, we need to bump those up to DEBUG
[07:58] at least, if not INFO
[07:58] ah, I was using lxc-ls but without sudo
[07:58] voidspace, and in /var/lib/lxc and /var/log/lxc (as root) you can find useful logs sometimes
[07:58] something that might take 10 min should be at INFO level, certainly
[07:59] though I think LXC itself downloads images outside of our control
[07:59] heh, lots of fun
[07:59] I *think* that's where it's stuck - but hard to tell
[07:59] ah, cool
[07:59] so lxc-ls --fancy
[08:00] tells me that the precise-template exists (which is how I was able to start a precise vm)
[08:00] but there's no trusty template
[08:00] so I assume (hope) that's still downloading
[08:00] precious little network activity though
[08:00] I will restart with trace logging
[08:15] morning
[08:18] TheMue: morning
[08:19] voidspace: just seen you’re playing with lxc
[08:19] TheMue: heh
[08:19] TheMue: unavoidable, I'm working with the local provider :-/
[08:19] voidspace: hehe
[08:20] voidspace: I’m in contact with Serge and Stéphane about IPv6 and LXC
[08:20] ah, cool
=== mjs0 is now known as menn0
[08:39] morning TheMue, welcome back
[08:39] voidspace:
[08:39] if you're restarting in the middle
[08:39] then likely there are filesystem locks that are stale now
[08:46] jam1: morning to Nuremberg
[08:46] * TheMue has to fix his client to do a better signaling
[09:00] jam1: so how would I check / resolve?
[09:00] jam1: a reboot in between should be sufficient to release locks, right?
[09:00] voidspace, what are you seeing?
[09:01] voidspace: no
[09:01] dimitern: machine never starts, no trusty template downloaded
[09:01] voidspace: there is a directory that gets created as a lock, that persists
[09:01] voidspace, ah, bugger
[09:01] so destroy-environment while it is held doesn't clean it up
[09:01] so that's probably the problem
[09:02] voidspace: IIRC, there is a plugin "juju-clean" that nukes everything that needs nuking
[09:02] voidspace, I have a nifty little snippet to obliterate a local env + all lxc artifacts
[09:02] is this an lxc lock or a juju lock?
[09:02] I've found it in the past using debugging and finding the file it was waiting for and selectively deleting stuff
[09:02] dimitern: cool
[09:02] voidspace: creation of the "precise-template" or trusty is a juju lock
[09:02] voidspace, http://blog.naydenov.net/2014/03/remove-juju-local-environment-cleanly/ - give it a try to see if it'll help, or just run some of the steps
[09:03] voidspace: https://github.com/juju/plugins
[09:03] dimitern: ^^
[09:03] juju-clean is in there, IIRC
[09:03] jam1: thanks, awesome
[09:04] it's a pretty big hammer
[09:04] and I'd like us to never need it, so file bugs when you do need it
[09:04] I wish it would report what it actually had to clean up, but anyway
[09:10] :) nice
[09:11] thanks jam1
[09:21] lunch
[09:49] jam1: the juju clean plugin solved my problem and unblocked me.
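For reference, the logging-config suggestion at [07:57] above lives in the environment's entry in environments.yaml; a minimal sketch, with the environment name and the rest of the entry as placeholders:

```yaml
environments:
  local:                          # hypothetical environment name
    type: local
    # TRACE is the most verbose level; the lxc template operations
    # discussed above show up there
    logging-config: "<root>=TRACE"
```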
[09:49] I don't think I can usefully file a bug though as I don't know what it solved :-)
[09:50] I suspect that it was stale filesystem locks from a failed / stalled template download due to my crappy network
[09:50] but I don't know
[09:50] and now I have new and weirder issues, but at least the lxc container starts ok and I have both precise and trusty templates
[09:53] voidspace, it might be the templates weren't downloaded properly
[09:53] dimitern: right
[09:54] now I have working containers, but install hooks seem to fail consistently - but succeed when retried
[09:54] digging in
[09:54] voidspace, can you paste some logs?
[09:55] from the failing unit
[09:55] dimitern: in a bit, I just blew everything away to try again :-)
[09:55] :) alright
[09:55] the failure mode is consistent
[09:55] voidspace, which charm are you trying?
[09:56] command "lsmod | grep -q 8021q || modprobe 8021q" failed
[09:56] voidspace, right!
[09:56] machine-1: 2014-08-06 09:19:28 ERROR juju.worker runner.go:219 exited "networker": command "lsmod | grep -q 8021q || modprobe 8021q" failed (code: 1, stdout: , stderr: modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/3.13.0-32-generic/modules.dep.bin'
[09:56] voidspace, somebody reported that recently
[09:57] dimitern: that's the mysql one
[09:57] dimitern: I reported it yesterday
[09:57] voidspace, :) ah
[09:57] voidspace, the problem with this could be solved if you modprobe 8021q on the host before starting the container, hopefully
[09:57] ah
[09:57] maybe
[09:58] modprobe is looking in the wrong place inside the container
[09:58] it's using the host path
[09:58] so maybe that would solve it
[09:58] dimitern: juju resolved --retry
[09:58] dimitern: seems to consistently fix it
[09:58] voidspace, from inside the container, you can't modprobe stuff, but if it was done on the host, lsmod will list it - that's the intention of the script
[09:59] dimitern: ok
[09:59] so maybe that's fixed
[09:59] just waiting for wordpress to come up so I can test the relation
[09:59] voidspace, i think I see the problem
[10:00] dimitern: do you want me to file a bug for this?
[10:00] in the meantime
[10:00] moar coffeez
[10:00] voidspace, the hook is not really failing, maybe just the uniter (or the whole unit agent) gets killed and restarted, but because the networker already ran the lsmod script, retry "solves" it
[10:00] heh
[10:01] sounds plausible
[10:01] voidspace, yes, please, and attach the unit + machine logs
[10:47] voidspace, standup?
[10:47] dimitern: on my way
[10:48] dimitern: it's finally letting me in
[10:48] my crappy network
[10:52] voidspace, rejoin?
[10:57] so there are bad luck days and then there are days where you come to work in a borrowed office and forget the power brick for the computer....
[11:09] any chance of a review of a charm package PR, please? https://github.com/juju/charm/pull/36
[11:12] perrito666: :-(
[11:30] https://github.com/juju/utils/pull/18
[11:30] mm, menno and dave left already
[11:40] perrito666: +1
[11:41] TheMue: thanks, but for what exactly?
[11:41] perrito666: your PR
[11:41] ;)
[11:41] ah lol sorry
[11:42] thank you
[11:42] perrito666: yw
[11:43] dimitern: btw, will ping you a bit later. a friend just called and asked if he can grab a cup of coffee :)
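Going back to the networker error at [09:56] above: a minimal sketch of how a worker might run that check, with the shell command quoted verbatim from the log and the surrounding Go purely hypothetical:

```go
package networker

import (
	"fmt"
	"os/exec"
)

// ensure8021q loads the 8021q VLAN module unless lsmod already lists
// it. Inside a container modprobe cannot load modules, so this only
// succeeds there if the host loaded the module first - exactly the
// failure mode discussed above.
func ensure8021q() error {
	const script = "lsmod | grep -q 8021q || modprobe 8021q"
	out, err := exec.Command("/bin/sh", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("command %q failed: %v (output: %s)", script, err, out)
	}
	return nil
}
```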
[11:43] TheMue, no worries :)
[11:43] nooo, I just discovered the coolest thing from github
[11:44] dimitern: I’ll also invite you to the current doc/collection of notes on Google Docs
[11:44] you can convert a pr into a patch or diff
[11:44] by just adding .diff or .patch to the pr
[11:51] TheMue, sure, can you send me a link?
[11:52] dimitern: you should have received the google mail
[11:52] TheMue, ah, I've just seen it, thanks!
[11:55] * TheMue is afk
[11:58] dimitern: I filed that bug by the way https://bugs.launchpad.net/juju-core/+bug/1353443
[11:58] tasdomas: https://bugs.launchpad.net/juju-core/+bug/1353443
[11:59] voidspace, cheers!
[12:08] dimitern: and I can confirm, with this branch I can use the local provider and it doesn't kill my networking
[12:08] https://github.com/voidspace/juju/tree/network-interfaces
[12:08] dimitern: I need to work on tests, and maybe prettify it - but confirmation that this works for MAAS would be good
[12:16] voidspace, great! I'll get to it soon to test on MAAS
[12:17] dimitern: cool, let me know please
[12:17] voidspace, will do
[12:34] voidspace: huzzah for not killing networking
[12:35] natefinch: heh, well hopefully this fix works for MAAS
[12:35] natefinch: but yeah, the fix is basically "don't screw with networking on the local provider host"
[12:39] natefinch: lol
[12:39] * perrito666 tries menn0 and davecheney's patch without success
[12:39] sinzui: any clue when that will land on jenkins?
[12:41] perrito666, I see something being tested now http://juju-ci.vapour.ws:8080/
[12:42] perrito666, wallyworld's maas-hostname-address is being tested now
[12:42] natefinch: ping
[12:43] voidspace: what's up?
[12:52] I did not understand if menn0's patch actually fixes the bug :|
[12:55] rogpeppe1: ping
[12:55] wwitzel3: pong
[12:56] rogpeppe1: re your mailing list reply .. what collection in mongo do I need to put the machine-0 peer group member?
[12:57] * rogpeppe1 looks
[12:57] rogpeppe1: well, I guess I could look at what collection the code is reading from ..
[12:57] rogpeppe1: no point in making you do my footwork :P
[12:57] wwitzel3: :-)
[12:57] rogpeppe1: was just checking if you knew without looking :)
[12:58] wwitzel3: i can never remember state collection names...
[12:58] wwitzel3: you'll need to know the field names too of course
[12:59] dimitern: hey, you brought up that we actually get 2 IPv6 addresses as the common case, can you clarify what those are?
[12:59] I *think* they are (1) Link Local (which isn't really routable from the outside) and (2) Actually the routable one
[12:59] is that true?
[13:00] in which case, we can really just ignore (1), and we are down to just one IPv6 address
[13:00] jam1, true
[13:01] dimitern: and LinkLocal is a known IPv6 prefix, right? So it is trivial to filter out in the instance poller
[13:02] jam1, the link-local is required by the IPv6 spec to exist; obviously ::1 will exist as well, and a private (or Unique Local Address in IPv6 terms) might be there or not (in the latter case we can't usefully do anything ofc)
[13:02] jam1, it is, and we're actually doing that already
[13:02] dimitern: can we usefully detect that it is a "private" address that we can't make use of?
[13:03] sinzui: do we have a tag for "blocks CI"? if not, can we make one?
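On the IPv6 filtering just discussed: the link-local prefix (fe80::/10) is indeed trivial to filter with Go's standard library. A minimal sketch, with a hypothetical function name rather than the instance poller's actual code:

```go
package addresses

import "net"

// usableIPv6 drops addresses that are never worth reporting: the
// link-local address every IPv6 interface must have, and loopback
// (::1). Whatever remains is the routable address or a Unique Local
// Address, which may or may not be reachable.
func usableIPv6(addrs []net.IP) []net.IP {
	var out []net.IP
	for _, ip := range addrs {
		if ip.To4() != nil {
			continue // IPv4, handled elsewhere
		}
		if ip.IsLinkLocalUnicast() || ip.IsLoopback() {
			continue
		}
		out = append(out, ip)
	}
	return out
}
```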
[13:03] jam1, so my comment was more about "we need to be aware of this, if we're aiming to model and display all addresses"
[13:03] jam1, if you refer to a ULA - no, we can't unless we try
[13:04] natefinch, "ci regression" are the tags in combination
[13:04] jam1, otherwise, any valid IPv6 can be usable locally in the cloud of course
[13:07] sinzui: how do I search for two tags at the same time?
[13:08] nevermind, I see the tags field in advanced search
[13:09] sinzui: there's a bug marked medium that is tagged ci regression.... I presume that's not blocking CI
[13:10] http://goo.gl/URSQcV
[13:10] natefinch, advanced search has an option for both, but right now only triaged and in-progress critical ci regressions block
[13:12] natefinch, This is the smallest query https://bugs.launchpad.net/juju-core/+bugs?field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.importance%3Alist=CRITICAL&field.tag=ci+regression+&field.tags_combinator=ALL
[13:13] sinzui: I'm catching up on email now. It seems a fair few things happened when I was gone. Should we have a chat?
[13:13] abentley, I cannot now
[13:14] abentley, I would like to chat, maybe I can steal away from a meeting
[13:15] sinzui: Sure. I'm around all day.
[13:16] sinzui: thanks for the link
[13:18] natefinch, I will change reports.vapour.ws to show that information. Lp requires separate queries for trunk and stable
[13:18] sinzui: natefinch arosales I believe one of you is the person who can give me azure credentials
[13:18] perrito666, arosales can link your azure account to the canonical account. I can help you run tests as CI
[13:19] * perrito666 thinks he does not have an azure account at all
[13:19] perrito666: I used to have an azure account, but then my last company decided to stop paying for my MSDN subscription 10 months after I left. I guess they don't love me any more.
[13:20] perrito666, you have mail
[13:21] sinzui: yes I do :p
[13:21] natefinch, you can read environments.yaml in cloud-city to run as CI
[13:27] sinzui: thanks
[13:31] sinzui: how likely am I to break things with the data you just passed to me?
[13:36] wwitzel3: let's do our 1:1 after the standup, if that's ok?
[13:36] natefinch: that's fine
[14:01] mup: help
[14:01] natefinch: Run "help <command>" for details on: bug, echo, help, infer, poke, run, sendraw, sms
[14:01] mup: help sms
[14:01] natefinch: sms — Sends an SMS message.
[14:01] natefinch: The configured LDAP directory is queried for a person with the provided IRC nick ("mozillaNickname") and a phone ("mobile") in international format (+NN...). The message sender must also be registered in the LDAP directory with the IRC nick in use.
[14:02] mup help poke
[14:02] mup: help poke
[14:02] natefinch: poke — Searches people in the LDAP directory.
[14:02] natefinch: The provided query will be searched for as an exact IRC nick ("mozillaNickname") or part of a name ("cn").
[14:02] mup: poke Finch
[14:02] natefinch: Plugin "ldap" is not enabled here.
[14:02] doh
[14:03] mup: help infer
[14:03] natefinch: infer [-all] — Queries the WolframAlpha engine.
[14:03] natefinch: If -all is provided, all known information about the query is displayed rather than just the primary results.
[14:03] mup: infer time
[14:03] natefinch: 10:03:44 am EDT Wednesday, August 6, 2014.
[14:05] mup: sms natefinch boo!
[14:05] natefinch: Plugin "aql" is not enabled here.
[14:11] natefinch: getting cozy with mup, eh?
[14:12] :)
[14:14] ericsnow: just read gustavo's email, so figured I'd try it out. Evidently it's good I did.
[14:14] natefinch: plugins are overrated
[14:27] Hi -- when juju clones an LXC template I notice the cloud-init gets run again. How does that happen?
[14:28] here's a fix for (at least most of, if not all of) issue #1351019: https://github.com/juju/juju/pull/476
[14:28] sinzui: I noticed that manual for precise and trusty had clean runs this morning, is bug #1347715 still an issue?
[14:38] wwitzel3, Those were 1.20
[14:39] wwitzel3, http://juju-ci.vapour.ws:8080/job/manual-deploy-precise-amd64/ shows that the blues are 1.20 and the reds are master
[14:39] * sinzui adds task to make reports/jenkins be clear about what is under test
[14:40] sinzui: ahh ok, I'm having trouble replicating the error locally with master. I manually started up an EC2 instance on AWS, configured the manual provider with its host, then I've done juju bootstrap .. and I'm not getting any of the SSH errors.
[14:43] wwitzel3, Have you set authorized-keys or authorized-keys-path? CI is always stating which key to use
[14:44] * sinzui thinks about the test
[14:45] voidspace, so far my testing with the local provider and your branch looks fine (had some trouble convincing it to run at all at first)
[14:45] sinzui: no, I will set that in the config and try again .. does it set both?
[14:46] voidspace, unfortunately, I won't be able to do the MAAS test today, as I'll need to go out soon, but I'll do it tomorrow morning, if that's ok
[14:49] voidspace, it seems the install hook errors you're getting are happening to me as well, and in 2 cases: 1) when trying parallel deployments (i.e. deploy wordpress, then deploy mysql without waiting for the first to install and start), 2) with each lxc container after the first one
[14:51] both of these are related to the lxc filesystem cloning or whatever, due to "Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?" (i.e. apt-get fails on one container, because the lock is held by another container running apt-get or the host itself)
[14:52] hi folks, is someone able to answer some questions about MESS for me?
[14:52] dimitern: each container should have its own /var/lib/dpkg, shouldn't it?
[14:53] jrwren, I've guessed so, but something apparently changed recently
[14:54] ..and it looks troubling
[14:59] wwitzel3, CI ran something like this to create the instance
[14:59] euca-run-instances -k id_rsa -t m1.large -g my-manual-test ami-36aa4d5e
[14:59] rogpeppe1, ping?
[14:59] wwitzel3, ^ change id_rsa to the key you have in ec2. That might help reproduce the ec2 setup
[15:01] pinging fwereade re his concerns on pr 415 -- addressed or commented
[15:06] sinzui: ok, thanks
[15:12] dimitern: how odd
[15:12] dimitern: are you seeing that on trunk?
[15:12] dimitern: or with my branch
[15:12] it seems unlikely that my branch causes that
[15:12] wwitzel3: ericsnow: perrito666: exceedingly late standup?
[15:13] natefinch: oops, sorry
[15:13] natefinch: hangouts is flaking out on me
[15:13] omw
[15:13] natefinch: sorry going there
[15:13] mgz: poke
[15:13] dimitern: I have to go out soon as well (~20 minutes), but I'll be working again later tonight
[15:14] voidspace, I guess trunk is the same as your branch in this case (no local-provider-specific lxc changes); maybe it's just my env config
[15:14] dimitern: maybe - I don't think that's the *same* error I was seeing
[15:15] mattyw: pong
[15:18] rogpeppe1, do you know much about the potential multi environment state server stuff? I have a specific question about how it might or might not affect the agent on the state server
[15:19] mattyw: some, but i'm not directly involved
[15:19] rogpeppe1, do you know who is?
[15:19] mattyw: in the implementation of the multi-tenant state server
[15:19] mattyw: no, sorry
[15:20] mattyw: I believe that would be thumper's team
[15:20] mattyw: natefinch was involved in the planning of that feature - but has now passed it on I believe
[15:20] voidspace, ok great thanks
[15:21] rogpeppe1, multi tenant is where the state server might be split across machines?
[15:21] mattyw: no, where a single state server can serve several environments
[15:22] mattyw: the state server itself can always be split over multiple machines (HA)
[15:22] pwd
[15:22] rogpeppe1, that's what I was going to ask - I thought it was the same as ha
[15:22] rogpeppe1: C:\Documents and Settings
[15:22] mattyw: nope. entirely orthogonal.
[15:22] rofl
[15:22] natefinch, ping
[15:22] voidspace, nice
[15:23] alexisb: what is up, yo?
[15:24] ericsnow: reboot, it always fixes things (or deletes your networking config.... one of those two)
[15:24] ericsnow: did you try turning it off and back on again? ;)
[15:24] natefinch: wwitzel3: gee, thanks ;)
[15:46] natefinch, ping
[15:47] natefinch, do you have a rough developer-weeks estimate for remaining time on backup/restore
[15:48] fwereade: he is afk
[15:56] back
=== BradCrittenden is now known as bac
[16:15] wwitzel3: should I reassign the manual provider bug to you?
[16:17] I have a couple of minor pull requests waiting to land that are just test changes that will isolate a handful of tests from the user's .bashrc better
[16:18] Any objections to me JFDI'ing those PRs since they're approved and they would only contribute to more stable tests, and don't change non-test code?
[16:19] jcw4: show me the PRs?
[16:19] natefinch: https://github.com/juju/juju/pull/450
[16:20] natefinch: https://github.com/juju/juju/pull/454
[16:21] natefinch: re-reading 450 it does change non-test code
[16:21] so I'm fine with that one waiting...
[16:22] aghh lost sinzui again
[16:27] jcw4: I'm sending pr454 through now
[16:27] mgz: tx!
[16:27] 450 also seems fine
[16:27] natefinch: I will pick it up, I'm in there now anyway
[16:28] yeah trivial change too. Thanks mgz and natefinch
[16:28] though I can't replicate it even using the euca command sinzui gave me.
[16:32] bbl ppl
[17:47] calling it a day folks, night!
[18:05] phew... electrician and bee inspector showed up at the same time
[18:06] electric bees!
[18:07] I do have an electric fence around the bees, but that's just coincidental :)
[19:31] natefinch: what is a bee inspector?
[19:32] * perrito666 has the strangest bug: no browser will start
[19:32] perrito666: I've had that before
[19:32] perrito666: I never figured it out, just ended up logout/login
[19:33] lol if I tell you what it is you will laugh at me
[19:33] browsers were starting up on the external screen and the monitor was off
[19:34] lol
[19:34] you are right
[19:35] haha
[19:35] I've had that happen
[19:35] What's worse is when they open up completely off the screen top or bottom because you've changed resolutions or something
[19:36] natefinch: I use all external screen apps full screen to avoid that
[19:36] (as in f11)
[19:36] perrito666: a bee inspector is someone paid by the state to go around to all the known beehives in his county and check them for disease, mites, etc.
[19:37] wow you guys really have the state in your lives, our state only takes money and leaves us alone
[19:42] perrito666: not all states in the US have inspectors, but some do. Bees are pretty important for agriculture, so we try to make sure some idiot doesn't spread disease and screw everything up
[19:45] natefinch: that makes sense, I dont think we have control for bees here, at least not for small producers
[19:55] sinzui: are you around?
[20:04] mup: infer time in Nuremburg
[20:04] natefinch: Cannot infer much out of this. :-(
[20:04] lol
[20:04] hrmph
[20:05] mup: infer time
[20:05] natefinch: 4:05:16 pm EDT Wednesday, August 6, 2014.
[20:05] mup: infer time in nuremburg, germany
[20:05] natefinch: Cannot infer much out of this. :-(
[20:05] dang
[20:05] mup: infer time UTC+0200
[20:05] jcw4: 10:05:56 pm GMT+2 Wednesday, August 6, 2014.
[20:06] it worked for gustavo in his email :/
[20:06] teh suk
[20:06] mup: infer timezone in nuremburg
[20:06] natefinch: Cannot infer much out of this. :-(
[20:06] mup: infer time in nurenbERG
[20:06] niemeyer: 10:06:37 pm CEST Wednesday, August 6, 2014.
[20:06] agg frig
[20:06] s/m/n s/u/e/
[20:07] niemeyer: spelling gets me every time
[20:09] google says it's nuremberg
[20:09] anyway
[20:11] Not burg
[20:12] yep
[20:12] niemeyer: so that's mountain not city right?
[20:13] jcw4: What is?
[20:13] berg vs. burg
[20:13] Good question, I don't know, but I'm curious now
[20:13] niemeyer: at least that's the difference in afrikaans
[20:13] The real city name is actually Nürnberg
[20:13] mup: infer time in Nürnberg
[20:14] natefinch: Cannot infer much out of this. :-(
[20:14] boo
[20:14] heh
[20:14] natefinch: Ask in German
[20:14] what does mup use? google?
[20:14] rofl
[20:14] mup: help infer
[20:14] niemeyer: infer [-all] — Queries the WolframAlpha engine.
[20:14] niemeyer: If -all is provided, all known information about the query is displayed rather than just the primary results.
[20:14] mup: infer -all berg vs. burg
[20:14] jcw4: Distances: distance flight time, Berg, Jamtland, Sweden to Burg 729 miles 1 hour 20 minutes, (assuming direct flight path at 550 mph) — Demographics: Berg, Jamtland, Sweden Burg, Saxony-Anhalt, Germany, population 8175 people 24364 people.
[20:14] jcw4: — Geographic properties: Berg, Jamtland, Sweden Burg, area 2384 mi^2 (square miles), average elevation 124.7 feet.
[20:14] niemeyer: my german is umm.... somewhat rusty
[20:14] lol
[20:15] mup: folgern Zeit in Nürnberg
[20:15] perrito666: In-com-pre-hen-si-ble-ness.
[20:15] rofl
[20:15] mup: infer Zeit in Nürnberg
[20:15] perrito666: Cannot infer much out of this. :-(
[20:16] mup: was ist die ziet in Nürnberg
[20:16] jcw4: Roses are red, violets are blue, and I don't understand what you just said.
[20:16] mup: infer -all Nuremberg
[20:16] natefinch: Population: city population 505664 people (country rank: 14th) (2010 estimate).
[20:18] niemeyer: hey, while you're here. where will the goamz lib live on github?
[20:18] niemeyer: also, hello :)
[20:19] katco: Hey :)
[20:19] katco: I don't really know yet
[20:19] oh hey, while people are paying attention, how do you specify an empty map in yaml? Google is failing me.
[20:20] niemeyer: ok no worries. we're just about wrapping up changes, so it would be nice to have a home for the little guy somewhat soon, but we can work around it if not :)
[20:21] mup: infer -all all your base
[20:21] jcw4: ...are belong to us., (according to the video game Zero Wing).
[20:21] mup: infer \b
[20:21] katco: Cannot infer much out of this. :-(
[20:22] good job, mup :)
[20:22] hehe
[20:22] mup: infer how to represent an empty map in yaml?
[20:22] haha
[20:22] jcw4: Cannot infer much out of this. :-(
[20:22] there we go
[20:22] mup: help
[20:22] katco: Run "help <command>" for details on: bug, echo, help, infer, poke, run, sendraw, sms
[20:23] oooh sms
[20:23] mup: help poke
[20:23] katco: poke — Searches people in the LDAP directory.
[20:23] katco: The provided query will be searched for as an exact IRC nick ("mozillaNickname") or part of a name ("cn").
[20:23] mup: poke katco-
[20:23] katco: Plugin "ldap" is not enabled here.
[20:23] poke and sms don't work on freenode
[20:23] ah
[20:24] * natefinch made that mistake this morning :)
[20:24] natefinch: hey new site looks nice
[20:24] mup: help sendraw
[20:24] katco: sendraw [-account=<name>] — Sends the provided text as a raw IRC protocol message.
[20:24] katco: If an account name is not provided, it defaults to the current one.
[20:24] sinzui: ping me when you are around
[20:25] katco: thanks... Hugo is pretty cool, and since it's in Go, I can actually understand the code ;)
[20:25] natefinch: lol
[20:26] i have been thinking about switching to a static based site
[20:26] katco: it's mostly the jekyll theme, ported to Hugo's format, and tweaked slightly by me. I have near zero web dev / design ability... but I am really good at changing margins and colors
[20:26] just using blogger right now
[20:26] lol
[20:27] er... the "hyde" jekyll theme
[20:27] http://stchuxderbychix.appspot.com/ <-- old website i did in Go for my derby team. not using best practices though.
[20:27] katco: highly recommend Hugo. It's pretty easy to use, since it's all Go templates and pretty well thought out.
[20:28] katco: I actually asked Steven Francia to be a committer on the repo so I could help fix bugs and land PRs and stuff
[20:28] also, free hosting on github is pretty sweet
[20:28] yeah haha
[20:28] google app engine is probably better free hosting ;)
[20:28] geographical redundancy for my dinky blog!
[20:28] katco: that site looks 100 times better than anything I could produce. Nice thing about a blog is that it's mostly text :)
[20:29] natefinch: i was pretty proud of it. it aggregates data from different web services
[20:29] Oh yeah, I remember you mentioning that. Pretty cool.
[20:30] it's funny that it's a dead site, but it still has some current info
[20:30] e.g. next bout looks like
[20:31] katco: can GAE easily host a static site? I like that I don't have to do anything but git push to github to update my site.
[20:31] natefinch: yeah. git push is way easier, but they have a python script that pushes things up for you
[20:31] but yeah, it does static content quite nicely actually
[20:31] and since anything under some huge amount of traffic is free, it's essentially free hosting :p
[20:32] b/c it was designed for actual applications
[20:32] but you can also run some live go code on there if you want a bit of dynamic ability. python, java, i think php now
[20:33] yeah, that's cool
[20:34] of course i don't know why anyone would want to use python /duck
[20:35] * natefinch notes katco doesn't even need to mention java or php because.... *shudder*
[20:35] haha
[20:35] java... ok i can do that. php... just... no. never again.
[20:35] i wrote a CMS in php back in the early 00's. before it had support for classes.
[20:35] I've never done PHP. Java I've done. It's ok.
[20:36] gotta go in a sec
[20:36] java is ok albeit verbose. php to me is design by accident
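Since neither mup nor anyone in the channel answered the empty-map question at [20:19]: in YAML an empty map is written in flow syntax as {}, while leaving the value blank parses as null. For example:

```yaml
options: {}    # an empty mapping
# options:     # a bare key with no value parses as null, not an empty map
```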
[20:36] there is this cool tool called nikola which generates your whole website, including comments support, paging and some other goodies, from reST
[20:37] perrito666: hugo does that from markdown and Go templates
[20:37] http://hugo.spf13.com/
[20:37] ok, gotta go
[21:06] could anyone spare a few minutes for a review (the patch is small)? https://github.com/juju/juju/pull/476
[21:19] waigani: morning
[21:19] menn0: morning
[21:19] good findings
[21:19] perrito666: howdy
[21:19] menn0: morning
[21:19] yeah
[21:19] it took all day :)
[21:20] but I learned a lot along the way
[21:20] perrito666: morning :)
[21:20] menn0: what took all day?
[21:20] figuring out the HA problem
[21:20] menn0: you make me feel better, i was all effing day setting those things up and failing
[21:20] menn0: you solved it?
[21:21] well I found the commit that caused the problem and reverted it
[21:21] although, I've seen that the merge failed
[21:21] more to look at ...
[21:21] ah yeah, just saw your PR - nice work man!
[21:22] I need to take care of something here (screaming kids)
[21:22] back in a bit
[21:22] menn0: your revert should break the build somewhere else though
[21:53] perrito666: sorry.
[21:53] perrito666: yeah I feared that it might
[21:53] menn0: I was waiting on it to merge
[21:54] perrito666: I will try to understand the Safe.J change some more
[21:54] but reading that commit's message it seems to be fixing a bug in newer versions of juju
[21:54] new versions of mongo right?
[21:54] well apparently setting safe.j was ok even though you had no journal set
[21:54] menn0: yes
[21:55] but since 2....6? i think that is no longer true
[21:57] well before doing anything else I'm trying the merge again because the way it failed is one of the errors we see regularly in test runs
[22:27] menn0: your changes got merged
[22:28] please keep an eye on jenkins
[22:28] perrito666: \o/
[22:28] perrito666: what do you mean: "please keep an eye on jenkins"
[22:29] menn0: there are a couple of CI jobs linked on the ticket
[22:29] make sure they pass when they test your rev
[22:29] perrito666: right
[22:29] perrito666: I'm also running the CI test you were testing with (test_recovery.py --ha) manually now
[22:30] perrito666: so far it's already getting further than it did before
[22:30] perrito666: thanks for those instructions btw. they help a lot.
[22:30] menn0: I am glad
[22:31] I didn't end up doing the custom stream creation though
[22:31] I hacked up the test helpers a bit ...
[22:36] heh
=== menn0_ is now known as menn0
[22:51] perrito666: hooray! test_recovery.py --ha passes
[23:03] menn0: \o/
[23:08] sweet
[23:08] now we need to bug curtis to see if it passes a few times
[23:22] perrito666: Curtis knows
[23:34] menn0: I'm happy to do some reviewing if that helps
[23:34] waigani: that would be awesome
[23:35] menn0: okay - you owe me a beer ;)
[23:35] waigani: done!
[23:35] waigani: we should coordinate to make sure we don't end up overlapping too much
[23:36] menn0: on that point I think you can actually assign a reviewer to a PR
[23:37] waigani: ok cool. I didn't know that.
[23:37] let's do that.