[01:30] wallyworld: will 1.20.2 just be cut off trunk?
[01:30] axw: there's a 1.20 branch
[01:30] ok
[01:31] we branched when 1.20 was first released
[01:31] oh yeah, I think I backported things to it
[01:31] :)
[01:31] wallyworld: was just wondering if #1337091 could/should be retargeted
[01:31] <_mup_> Bug #1337091: maas provider: allow users to specify network bridge interface.
[01:31] wallyworld: see the comments from a user at the end of the bug
[01:31] looking
[01:33] axw: at first glance, i thought that one was actually in 1.20.1, but my memory may be wrong
[01:33] wallyworld: nah, I just checked
[01:34] axw: the one we really want in is bug 1341524
[01:34] <_mup_> Bug #1341524: juju/MAAS deployed host with bonding configured via preseed missing eth0 from bond on first boot
[01:34] i guess we can backport that other one also
[01:36] wallyworld: do I need to ask sinzui or...?
[01:37] axw: no we'll just backport
[01:37] okey dokey
[01:37] i could have sworn that one was targeted to 1.20
[01:41] axw: i've tweaked michael's juju/txn branch https://github.com/juju/txn/pull/2
[01:42] wallyworld: looking in a sec
[01:42] no hurry, thanks
[01:50] axw: great, thanks. i'll fix the comments
[01:51] cheers
[01:53] davecheney: https://github.com/juju/names/pull/16
[02:00] wallyworld: no great rush, but here's the last PR from the core StateServerInstances changes: https://github.com/juju/juju/pull/342
[02:00] ok
[02:00] thumper: hey, you have fwereade with you this week?
[02:00] wallyworld: I do
[02:00] thumper: i'd love to talk about the mongo replicaset stuff
[02:01] at some point
[02:01] thumper: one for you in return, https://github.com/juju/names/pull/17
[02:02] davecheney: https://github.com/juju/charm/pull/22
[02:02] wallyworld: for repos other than juju/juju, we just click the merge button on github right?
[02:02] thumper: yep, till martin gets stuff sorted, hopefully rsn
[02:03] kk
[02:03] thumper: so maybe later on when you have a spare moment, ping me and we can talk mongo
[02:03] wallyworld: sure
[02:11] davecheney: and another https://github.com/juju/juju/pull/343
[02:28] dfc, https://github.com/juju/juju/pull/344
[02:29] oh, i mean davecheney ^^
[02:43] looking for a second review on https://github.com/juju/juju/pull/344
[02:43] ta
[02:49] (ノ ゜Д゜)ノ ︵ ┻━┻
[02:49] (ノಥ益ಥ)ノ ┻━┻
[02:50] (╯°□°)╯︵ ┻━┻ ︵ ╯(°□° ╯)
[03:40] wallyworld: thanks. I'll wait for perrito666 before landing
[03:40] axw: sure
[04:01] thumper: https://github.com/juju/juju/pull/346
[04:08] waigani: review done
[04:14] axw: i've set up a remote alias to another repo on gh. i've fetched it. now I want to merge a branch from it into my branch, but all the syntax i'm trying fails. any clues?
[04:16] wallyworld: try just the commit hash?
[04:16] or is there more than one?
[04:16] axw: i think there's more than one, it's michael's copy-session branch
[04:17] so i just want to merge all of his commits to that branch
[04:17] wallyworld: I thought you just did "git merge remote/branch"
[04:18] you'll need to fetch it first
[04:18] git fetch $remote
[04:18] axw: i did git remote add voidspace https://github.com:voidspace/juju.git
[04:18] and then git fetch voidspace
[04:18] and now i want to merge in the copy-sessions branch
[04:19] tried git merge voidspace/copy-sessions
[04:19] ah hang on
[04:19] i'm an idiot
[04:19] i had a typo
[04:19] i left off an s
[04:20] doh
[04:20] yup
[04:20] sorry
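For reference, the sequence that eventually worked in the exchange above, consolidated (the remote alias and branch name are as given in the chat; the remote URL is shown in its https form). The actual failure was just the missing trailing "s" in the branch name:

```
# add the fork as a named remote and fetch its branches
git remote add voidspace https://github.com/voidspace/juju.git
git fetch voidspace

# merge the fetched branch into the currently checked-out branch
git merge voidspace/copy-sessions
```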
[04:32] davecheney: if you feel the desire... https://github.com/juju/juju/pull/347
[04:43] thumper: kvm-ok is still installed but images, we are told, do not have "/usr/sbin" on the path
[04:43] some images
[04:43] right... which is why we install the cpu-checkers package
[04:43] at least that is what we used to do
[04:43] no we still do
[04:43] kvm-ok is there
[04:43] * thumper struggles...
[04:43] but /usr/sbin is not on the path
[04:43] wat
[04:43] ah
[04:44] hence "kvm-ok" won't be found
[04:44] right
[04:44] got it now
[04:44] i had no idea /usr/sbin would not always be on the path
[04:45] axw: you got a minute to talk in tanzanite-standup?
[04:47] wallyworld: just a minute
[04:47] sure
=== vladk|offline is now known as vladk
[06:25] wallyworld: did you see my response about the raciness? do you think it's worthwhile updating?
[06:27] axw: yeah, i was going to suggest using the buildTxn() stuff but i see your point. if it's not too much extra work, it would be nice to have it non-racy, since someone else reading the code will have the same concerns
[06:28] and it would be good not to deliberately add racy code
[06:28] since you never know how it might manifest in a problem later
[06:28] ok, I'll take another look
[06:28] don't waste too much time on it
[06:28] add a comment perhaps
=== uru_ is now known as urulama
[07:42] morning
[07:50] TheMue, jam: please, take a look https://github.com/juju/juju/pull/348/files
[07:55] vladk: *click*
=== vladk is now known as vladk|offline
[08:27] jam: I’ve done the review of 348, looks solid to me but would prefer another short look by you.
[08:29] vladk, TheMue: will do. I peeked at it a bit ago. The basic idea here is that we don't need to pass the networking information into cloud-init anymore because it is no longer responsible for setting up the initial networks, correct?
[08:36] jam: I think so. This works only on the MaaS provider now, because it's the only provider that supports networks.
[08:55] TheMue: just making coffee, will be in our 1:1 in a moment
[08:55] jam: ok
[09:10] hello
[09:10] is there any way to use github instead of charmstore?
[09:17] vladk: lgtm
[09:17] https://github.com/juju/juju/pull/348
[09:17] jam: thanks
[10:04] morning all
[10:12] natefinch: morning
[10:12] Morning all
[10:17] :)
[10:17] rogpeppe: do you know why, in go/ast, fields have a slice of names, rather than just one? When would a field have multiple names? http://golang.org/pkg/go/ast/#Field
[10:18] natefinch: struct {a, b, c int}
[10:19] rogpeppe: that's not 3 fields? :/
[10:19] natefinch: nope - otherwise you couldn't round-trip gofmt through go/ast
[10:19] ahh, that's true
[10:20] still, makes the logic around that code more difficult, can't just count fields and know how many fields there really are
[10:20] thanks
=== Ursinha-afk is now known as Ursinha
[10:22] natefinch: sure, you need another loop :-)
[10:22] natefinch: you could always use the types package
[10:22] rogpeppe: yeah, not a huge deal. Just a little annoying, but understandable
[10:23] rogpeppe: I'm using a bit of both
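To illustrate the point about Field.Names: one *ast.Field can declare several names, so counting actual struct fields means summing len(Names) per field, treating an empty Names slice as one embedded field. A minimal standalone sketch (illustrative only, not juju code):

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

func main() {
	src := `package p
type T struct {
	a, b, c int
	d       string
}`
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "src.go", src, 0)
	if err != nil {
		panic(err)
	}
	// Dig out the struct type from the single type declaration.
	st := f.Decls[0].(*ast.GenDecl).Specs[0].(*ast.TypeSpec).Type.(*ast.StructType)
	total := 0
	for _, field := range st.Fields.List {
		n := len(field.Names)
		if n == 0 {
			n = 1 // an embedded (anonymous) field still counts as one
		}
		total += n
	}
	// Prints: 2 ast fields, 4 actual fields
	fmt.Println(len(st.Fields.List), "ast fields,", total, "actual fields")
}
```

This is also why the types package was suggested above: go/types resolves declarations into one object per field, so the extra loop disappears.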
[10:23] natefinch: what are you actually trying to do?
[10:24] rogpeppe: extract exported function names, arguments, and the comments on them to do some code generation
[10:25] natefinch: generate code to do what?
[10:25] rogpeppe: I'm making a CLI-generator. Give it some exported functions and it'll write the help docs and flag parsers based on the comments and the function signatures
[10:27] natefinch: interesting
[10:28] rogpeppe: just an idea that popped into my head when someone mentioned a self-documenting CLI, and then I was disappointed that it required retyping a bunch of stuff... so I figured I'd write my own where your code really is self-documenting
[10:28] natefinch: getting flags right will be tricky
[10:29] rogpeppe: yep, that's part of the fun
[10:30] rogpeppe: there's a bunch of ways you could do it, and you're not going to please everyone no matter what way you choose to do it
[10:30] natefinch: another question is whether you'd be able to provide a dynamic escape hatch
[10:31] rogpeppe: not sure what you mean
[10:32] natefinch: whether *all* of the cli is defined by this thing, or whether we could have a way of adding extra stuff that doesn't perhaps fit the model so well, while doing the majority generated
[10:32] rogpeppe: certainly a good idea.
[10:35] morning
[10:35] axw: hey, my backlog says that you need to talk to me
[10:36] rogpeppe: oh, unrelated - what's the correct way to get Acme? There was a deb I tried, but it behaved oddly. The website is not very clear.
[10:37] perrito666: hey. some time today can you please review the change to restore in https://github.com/juju/juju/pull/342
[10:38] going
[10:39] we have a very low bus factor on b&r
[10:45] natefinch: http://swtch.com/plan9port/
[10:51] axw: I tend to believe you are right but I cannot be sure; as I see it you have two options here, you ask the original author of that line (roger) or you give me time to actually run the test on your branch
[10:52] perrito666: CI is testing backup/restore now right?
[10:52] I mean, successfully
[10:53] axw: it is
[10:53] axw: hold, let me check that again
[10:54] axw: it is not, although ha backup and restore is passing
[10:55] appears replset related
[10:55] 1/2 is good enough for me
[10:55] I'm confident it's fine anyway, so I'm going to land it as is. I'll fix it if it breaks
[10:55] sg
[10:56] if one of the two jobs is running properly it means it's mostly a timeouts issue, since they do pretty much the same
=== anthonyf` is now known as anthonyf
[12:39] good morning all
[12:47] hi katco
[13:08] TheMue, jam: please, take a look https://github.com/juju/juju/pull/255
[13:10] vladk: yep
[13:11] jam, wallyworld: is there a handoff we need to make for the current crop of super high priority bugs?
[13:12] natefinch: in my email i mention 2 that we've not yet looked into
[13:12] the others are in progress
[13:15] should a backport commit message state that it's a backport, or should it contain the original commit message?
[13:16] wallyworld: ok
[13:17] katco: i'd mention the fact that it's been backported, but also include the details from the first commit
[13:17] wallyworld: will do, ty sir
[13:18] natefinch: also, any input on the oplog file size, plus presence replication discussion would be appreciated
[13:19] wallyworld: sure. I think the problem with the oplog is that it gets put in the ephemeral storage which is like 400 gigs
[13:19] wallyworld: IIRC
[13:20] wallyworld: not a lot of info on that landscape crash bug, eh?
[13:21] wallyworld: oh, I missed the machine-0 log... that helps
[13:21] yeah
[13:21] sorta looks like it might just be a side effect of the i/o timeout bug
[13:23] could be, i haven't had a chance to look
[13:24] natefinch: with the oplog file, it gets put in /var/lib/juju/db i think, and the size is calculated based on 5% of free space (up to a max of 50GB), so for a 400GB disk, the file is about 20GB
[13:24] which is not too bad I would think?
[13:25] wallyworld: 20gb is pretty big, and I'm not sure exactly how much that thrashes the disk, which might be quite slow
[13:25] natefinch: i think the file is just allocated ie empty
[13:26] maybe the mongo defaults are bad, but that's what's being followed - the mongo algorithm is being used
[13:27] we can adjust if needed though
[13:27] wallyworld: we've seen it take a long time on some machines.... I don't know why. One instance took like 2 minutes. I don't know 100% for sure that the oplog was the problem, but it was basically the only thing we were doing
[13:28] i think for local provider it was wound back to max 1GB for that reason
[13:29] i think that was the incident you're referring to
[13:29] yep
[13:30] slow disks will be an issue for sure in generating the file
[13:31] wallyworld: I'm guessing the huge, free, shared, ephemeral disk you get with a $40/month AWS instance is the worst kind of slow
[13:31] yeah, could be
[13:31] so william and i are thinking we should use the default algorithm, but maybe that's wrong
[13:32] it can be changed easily enough
[13:32] wallyworld: Mongo is designed for huge datasets. Juju's usage is not a huge dataset.
[13:32] We can ask on #mongo
[13:33] can you do that for me? i need to sleep
[13:33] wallyworld: fair enough. Will do
[13:33] thanks
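The sizing rule described above, as a small standalone Go sketch. The constants mirror the numbers quoted in the chat (5% of free space, 50GB cap, 20GB for a 400GB disk); they are not pulled from juju's or mongod's actual implementation, which may apply additional floors:

```go
package main

import "fmt"

const (
	gigabyte     = 1024 * 1024 * 1024
	maxOplogSize = 50 * gigabyte // cap quoted in the chat
)

// oplogSize computes an oplog size as 5% of free disk space,
// clamped to maxOplogSize, per the rule described above.
func oplogSize(freeBytes int64) int64 {
	size := freeBytes / 20 // 5%
	if size > maxOplogSize {
		size = maxOplogSize
	}
	return size
}

func main() {
	free := int64(400) * gigabyte
	// Prints: 400 GB free -> 20 GB oplog, matching the figure above.
	fmt.Printf("%d GB free -> %d GB oplog\n", free/gigabyte, oplogSize(free)/gigabyte)
}
```

Note the sizing itself is cheap; per the chat, the cost that hurt on slow disks was preallocating (writing out) the file.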
[13:40] hazmat: do you have the full logs for this bug? https://bugs.launchpad.net/juju-core/+bug/1345014
[13:40] <_mup_> Bug #1345014: juju machine agents suiciding
[13:43] vladk: added a minor comment
[13:43] TheMue: thanks
=== kami is now known as akami
=== akami is now known as ader1990
=== psivaa is now known as psivaa-afk
[14:07] natefinch?
[14:08] ericsnow, perrito666: we'll have to move the standup to later, roofers are here and they need a little help
[14:08] houses are expensive, dammit
[14:08] natefinch: lol ok, shout if you need a hand
[14:08] perrito666: haha thanks
[14:08] natefinch, perrito666: lol same here :)
[14:09] * perrito666 shows a pamphlet of argentina to natefinch
[14:09] we have space :p
[14:09] you can come here
[14:24] TheMue: I fixed https://github.com/juju/juju/pull/255
[14:26] vladk: I’m looking
[14:31] morning
[14:33] bodie_: morning
[14:35] vladk: reviewed
=== psivaa-afk is now known as psivaa
[15:04] rogpeppe, mgz, jam: I'm talking to the mongodb guys on #mongodb, trying to get an idea of how big our oplog should be, and they basically said it depends on the amount of modifications the DB receives. It's my understanding that we normally aren't going to get huge amounts of modifications after an initial juju deployment... but we also support a wide variety of environment sizes from 2 to 5000 machines, and people might
[15:04] be bringing stuff up and down a lot. Seems like one of those "it depends" answers which is not very helpful.
[15:06] natefinch: we get a modification every time a unit's relation data changes
[15:06] natefinch: that could be happening continually
[15:07] natefinch: but it really depends on ops/second
[15:07] rogpeppe: even more "it depends" (now depends on the implementation of the specific charms deployed)... but that's good to know
[15:07] natefinch: because there's only a problem if the rate of operations is greater than we can cope with
[15:08] natefinch: and what the likely burst rate/duration is
[15:08] natefinch: that was my understanding as well, unless we write very fast, we don't *need* a huge oplog
[15:09] ahh, so it's getting drained as replication happens, I get it
[15:09] but I also see william's earlier point about just using default mongo settings unless we really shouldn't
[15:09] well, the whole point is that mongo's default is causing us problems
[15:09] in some cases
[15:09] and this doesn't seem a big issue
[15:09] except in some annoying edge cases
[15:10] not real deployments
[15:11] ok, yeah, going back over the email thread, it does seem like this is orthogonal to the actual problems seen
[15:52] Hi -- is hulk-smashing a supported operation by juju?
[16:02] dpb1: depends on your definition of supported
[16:02] dpb1: it's generally discouraged, but it's possible, so we obviously support it at some level
[16:04] natefinch: do you all have a document as to *why* it's generally discouraged?
[16:08] dpb1: not really... usually it's enough to say that any two charms may make conflicting changes to the environment. If they both want to listen on port 80, for example, or they both create the same user etc etc.
[16:09] dpb1: generally the best answer to getting more than one charm on the same machine is to put them in containers
[16:09] natefinch: ok, thanks.
[16:09] natefinch: appreciate the answer
[16:11] dpb1: welcome
[16:27] perrito666: have you seen the cloudbase emails? How is August 25th-28th for you?
=== Ursinha is now known as Ursinha-afk
[16:34] hmmm, my local environment doesn’t like me anymore
[16:34] can't connect to the api server, strange
[16:48] natefinch: I was reading them during lunch
[16:48] works for me too
[17:48] natefinch: https://lh3.googleusercontent.com/-zndbSfYolfU/U8xCMtJxJ3I/AAAAAAAAGNg/oiO7QEtZACU/w426-h446/BashStartupFiles1.png
[17:50] katco: I just want one place to put everything and not need to worry about a flowchart. That's why I like Windows' model. One place. Put stuff there. If you're logged in, it's applicable.
[17:50] hehe i thought you'd appreciate that
[17:50] katco: I think I've actually seen it before, looks familiar.
[17:50] katco: but yes, thanks for reminding me of that pain :)
[17:51] lol
=== Ursinha-afk is now known as Ursinha
[18:34] what happened to juju.NewConn?
[18:37] this woman couldn't be more of a bitch if she practiced http://www.lavoz.com.ar/politica/desafortunada-frase-de-cristina-al-inaugurar-los-trenes-del-sarmiento
[18:38] sorry, wrong channel
[18:39] heh, the english translation doesn't make sense :)
[18:39] (of the webpage)
[18:40] natefinch: short version: our president, christening a new train formation, said "if we don't hurry, the next train will crash into us" (figuratively, about the government building new trains), on the same spot where a train crashed and killed many people a year ago tomorrow; it was discovered to be partly due to a corruption case where the government did not oversee the train company that was redirecting the maintenance money
[18:41] and the gov did nothing about it
[18:41] doh
[18:41] natefinch: I was trying to post the article to people that live nearby
[18:41] in another channel
[18:42] so well, yeah, people are not very happy with that comment
[18:43] yeah.... that's pretty unfortunate
[18:44] I have an environs.Environ and I want to get a State out of it; how do I get that now that NewConn is no longer there? anyone knows?
[18:59] jam: jam1: is any of you here?
[19:10] perrito666: how do we test backup/restore (plugins) currently?
[19:13] in CI only (and I run the CI test every time I change anything in there)
[19:14] takes a bit of patience
[19:17] perrito666: I'm trying to write one last test for backup that does a backup without any fakes involved (i.e. what we must be doing for the plugins currently).
[19:17] perrito666: think it's doable?
[19:17] ericsnow: what do you mean without any fakes?
[19:18] I can hangout if you want, so we can settle this before I go to the doctor
[19:18] perrito666: moonstone?
[19:18] sure
[19:37] wallyworld: please ping me when you get up :)
[19:37] bbl ppl, going to the doc
[19:39] ericsnow: you around?
[19:39] natefinch: yeah
=== Ursinha is now known as Ursinha-afk
[20:00] ericsnow: did you see my private messagey thing? Not sure what they call it on irc
[20:01] natefinch: nothing showed up
[20:01] ericsnow: weird
=== Ursinha-afk is now known as Ursinha
[20:25] my lxc foo is weak. Can someone tell me how to get more information on this error: http://pastebin.ubuntu.com/7832276/
[20:31] i'm a bit confused by the purpose of this line: "var _ ContainerFactory = (*containerFactory)(nil)"; can anyone provide insight?
[20:32] katco: it's a compile-time check that containerFactory satisfies the ContainerFactory interface
[20:32] natefinch: ah ok; no other purpose?
[20:32] katco: that's it
[20:32] what is the use-case for that? if it didn't satisfy the interface, wouldn't that cause issues elsewhere?
[20:32] katco: it's assigning a containerFactory pointer to a variable of type ContainerFactory. If that's not legal, it'll fail to compile
[20:33] katco: it's handy when you're intentionally implementing an interface, but you don't actually have code that assigns one to the other in your code
[20:35] So, like if a third package is supposed to use your implementation as that interface.... if you change just your package to accidentally not fulfill the interface anymore, you might not notice if you didn't build that other third package
[20:36] ahhh ok that makes sense as a use case
[20:37] natefinch: ty for the info; much appreciated :)
[20:38] welcome. I hadn't seen it until I started working on juju, but it's kinda cool.
[20:38] natefinch: it looks like that technique is actually in the effective go docs: http://golang.org/doc/effective_go.html#blank_implements
[20:38] i had never seen that either
[20:39] kind of seems like something that should be in a test and not code that's run in production
[20:39] ahh cool. I should reread that
[20:39] the compiler will optimize it out, because the variable on the left is the blank identifier
[20:40] natefinch: but still run it? ...?
[20:40] There's effectively nothing to run.
[20:41] well, it's at compile time, so i guess it parses it, sees it's valid or not, and then optimizes it out if it is
[20:41] natefinch: hi! I didn't see thumper anymore in this channel. Do you know how I can communicate with him? I couldn't report that the workaround he asked me to try didn't work, and I was hoping that he had some other ideas to try atm.
[20:41] yep
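The idiom under discussion, in a self-contained sketch (the Shape/circle names here are illustrative, not juju's actual types): assigning a typed nil to the blank identifier makes the compiler verify the interface is satisfied, with no runtime cost.

```go
package main

import "fmt"

// Shape is the interface we intend circle to implement.
type Shape interface {
	Area() float64
}

type circle struct {
	radius float64
}

func (c *circle) Area() float64 {
	return 3.14159 * c.radius * c.radius
}

// Compile-time check: if *circle ever stops satisfying Shape
// (e.g. Area is renamed or its signature changes), this line
// fails to build. Nothing is allocated or executed: it assigns
// a typed nil pointer to the blank identifier.
var _ Shape = (*circle)(nil)

func main() {
	var s Shape = &circle{radius: 2}
	fmt.Println(s.Area())
}
```

Because the right-hand side is just a nil pointer conversion, the check disappears after type-checking, which is why nothing needs to "run".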
[20:42] hackedbellini: he's in New Zealand, so it's still somewhat early for him (it's 8:41am where he is). Usually he's on by now, but he may have other things going on this week, I'm not sure.
[20:44] natefinch: hrm, I see. No problem. Maybe you or someone else knows how to solve the problem now, as he found the possible cause of the issue
[20:46] hackedbellini: More than willing to help. I sent you a private message with his and my emails. I'll do what I can to help, but unfortunately, I have to run in about 15 minutes.
[20:47] remember that our juju installation went from 1.18.3 to 1.19.2 because of this bug? https://bugs.launchpad.net/juju-core/+bug/1325034
[20:47] <_mup_> Bug #1325034: juju upgrade-juju on 1.18.3 upgraded my agents to 1.19.2
[20:47] it seems that this is the root of the problem. thumper said juju was missing an "ha migration" or something like that, because that migration was only done when upgrading from 1.18.x to 1.20.x.
[20:47] he asked me to change the "upgradedToVersion" in agent.conf to read "1.18.4" and try to run the agent again, so we could possibly trick juju into doing that "ha migration", but it didn't work :(
[20:47] hackedbellini: ahh... interesting
[20:54] thumper: hi
[20:54] o/
[20:57] thumper: the workaround didn't work :(
[20:57] hackedbellini: hmm... bummer
[20:57] hackedbellini: "ha" is "HA", High Availability
[20:57] we introduced replica sets for mongodb
[20:57] we changed upgradedToVersion to read "1.18.4" instead of "1.19.3" and restarted the agent, but the same issue happened
[20:58] which has triggered a bunch of weird edge case failures that weren't apparent before
[20:58] I really don't know what to do from here
[20:58] my main suggestion is to move the services running in that environment to a new Juju
[20:58] and please don't use local
[20:58] manual would be better
[21:06] thumper: hrm, I see... The only problem with this is that we will have to migrate all of our services that were running on juju to that other environment :(
[21:10] thumper: do you know if there is at least a way to access the lxc containers without sudo access? I could access them with "juju ssh <machine>" but since juju isn't running I can't now
[21:19] perrito666: hi
[21:30] PTAL https://github.com/juju/juju/pull/351
[21:40] hackedbellini: you can just use "ssh ubuntu@<container address>"
[21:42] thumper: ahh, yes that works! :) Didn't work before for some reason
[21:44] hackedbellini: probably missing the "ubuntu@"
[21:45] thumper: don't really remember, but probably, yes =P
[21:45] thumper: if we create a new environment using "manual", do you think we will be able to add the existing lxcs to it without any problems? Or should I avoid that?
[21:46] heh... no it won't work by default...
[21:46] and you would probably have to jump through lots of hoops to make it work
[21:46] I see... it's not trivial, but it's doable?
[21:48] well, I have to go now. Thank you very much for your help so far! Cheers
[21:54] fwereade: https://github.com/juju/errors/pull/4
[22:04] anyone seen this one? https://pastebin.canonical.com/113903/
[22:07] dpb1: yes
[22:07] all the time in testing
[22:07] something related to how long it takes mongo to squeeze out a repl set
[22:07] why is it launching mongo locally?
[22:07] davecheney: ok, good to know.
[22:07] davecheney: you have a bug I can point to? sorry, my LP search foo is weak
[22:08] dpb1: i can't see where the environment is being bootstrapped from that paste
[22:08] dpb1: no useful bug
[22:08] well, there are loads of bugs logged
[22:08] davecheney: ok, should I add --debug?
[22:08] or will that help
[22:08] dpb1: just paste the entire log
[22:08] or just tell me
[22:08] is this the local provider?
[22:08] actually it doesn't matter
[22:08] davecheney: nope, that is maas
[22:09] mongo == slow
[22:09] takes a long time
[22:09] if it can't get its repl set up and running fast enough
[22:09] davecheney: that is the whole log, just the cmdline was not echoed
[22:09] bootstrap fails
[22:09] davecheney: I have another example, but the paste is the same. :)
[22:10] davecheney: ooh
[22:10] davecheney: you are right, the paste was truncated!
[22:10] ok, sec
[22:11] ok, this log is long, pasting again
[22:11] davecheney: http://paste.ubuntu.com/7832774/
[22:11] davecheney: I'll file another bug and paste, if you want to dup it you can
[22:12] dpb1: ok
[22:12] this is an important issue
[22:13] i think many people are working on it tangentially
[22:13] given the profile of maas and juju
=== bodie_ is now known as Guest86182
[22:17] thumper: https://github.com/juju/errors/pull/5
[22:19] davecheney: https://bugs.launchpad.net/juju-core/+bug/1346597
[22:19] <_mup_> Bug #1346597: mongo timeout? cannot get replica set configuration
[22:27] fwereade, davecheney: https://github.com/juju/juju/pull/347/files
=== Guest86182 is now known as bodie_
[23:07] thumper: https://github.com/juju/juju/pull/352
[23:08] tiny fix for the test explosion
[23:08] ^ pretty uncontroversial
[23:54] wallyworld: nites
[23:54] wallyworld: hey I have a question for you
[23:54] sure
[23:55] I ask you because you removed juju.NewConn; feel free to redirect me to rtfm
[23:55] I have an environs.Environ and I want to get a State out of it. How do I get that now that NewConn is no longer there? anyone knows?
[23:55] there's an api, let me check
[23:56] I used to use NewConn
[23:58] perrito666: there's code in restore.go which does something along those lines
[23:58] you need to know the cacert, which you can get from environ config
[23:59] wallyworld: ah lol, I actually avoided what was done in the old restore in favor of NewConn ehehe
[23:59] ok, back to that
[23:59] the address of the api server can also be got from the environ
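A rough Go sketch of the direction pointed at here: pull the CA cert from the environ's config and open state directly with the state-server addresses. Every identifier below (state.Open, state.Info, mongo.Info, mongo.DefaultDialOpts, environs.NewStatePolicy, the CACert accessor) is an assumption recalled from the 1.20-era juju-core tree, not a verified API; as the chat says, restore.go in the tree is the authoritative example.

```go
package opensketch

// Sketch only: signatures changed across juju-core releases;
// treat all names here as assumptions and defer to restore.go.

import (
	"fmt"

	"github.com/juju/juju/environs"
	"github.com/juju/juju/mongo"
	"github.com/juju/juju/state"
)

// openState opens the state database for the given environ,
// using the CA cert from the environ's config, as described above.
func openState(environ environs.Environ, addrs []string, tag, password string) (*state.State, error) {
	caCert, ok := environ.Config().CACert()
	if !ok {
		return nil, fmt.Errorf("environment config has no CA certificate")
	}
	info := &state.Info{
		Info: mongo.Info{
			Addrs:  addrs, // state-server addresses, discoverable from the environ
			CACert: caCert,
		},
		Tag:      tag,
		Password: password,
	}
	return state.Open(info, mongo.DefaultDialOpts(), environs.NewStatePolicy())
}
```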