[00:47] does charm config give an easy way to have multiple sets of the same named values? [00:48] bigjools: as in [00:49] names: ["dave", "julian"] ? [00:49] davecheney: not quite [00:49] * bigjools thinks of example [00:51] set1: ["foo": "bar", "fizz": "bang"] and set2: ["foo": "bar2", "fizz": "bang2"] [00:51] just wondered if there was anything built in to help [00:51] or if I need to slap it all in manually [00:58] i think config supports string, bool and int [00:58] so sets have to be modeled as json/yaml encoded strings [00:58] i guess [00:58] it might support sets [00:58] but i've never seen a case of it in the world [00:58] anyone else know? [01:30] mramm: ping === wedgwood is now known as Guest48850 [03:16] Hi davecheney, thanks for your email on the haproxy charm, got a sec to chat? [03:58] dpb1: sure [03:58] off and on [03:58] small water heater issue in our house right at the moment [03:59] heh [04:00] OK, I'll write back over email. I guess I was not aware that the interface was so strictly defined. Much work around that explanation has been done in the past 6 months (gets and sets in the metadata.yaml file being one example). [04:21] kj:q [04:21] blah [05:04] wow, i had forgotten just how slow puppet is [07:01] wallyworld: "--source is needed because it allows people (and the release tools) to put the [07:01] tools tarballs in a local directory and upload from there." [07:01] can't you do the same thing with sync-tools though? [07:01] i thought that's what you were talking about? sync-tools? [07:02] wallyworld: I'm saying remove it from "juju bootstrap", leave it in "juju sync-tools" [07:02] it's in both. [07:02] oh. i didn't realise it was in juju bootstrap [07:02] i've never used it from bootstrap at all [07:03] cool, I'll take that as +1 to remove then ;) [07:03] sounds like it :-) [07:03] i'll reply to the email in a sec. gotta attend quickly to some cooking [07:03] ta [07:04] btw, I see your point about upload.
it's a bit ugly, but I guess not having it is going to be a PITA [07:05] mornin' all [07:05] morning rogpeppe [07:07] axw: i'd appreciate a review of this, if you have a few moments: https://codereview.appspot.com/14531048/ [07:08] looking [07:08] axw: it'll allow us to log rpc messages but lose the pings [07:09] sounds excellent [07:10] axw: it's a compromise (the message information can't be quite as accurate as the original logging) but i *think* it's the right compromise [07:23] rogpeppe: where's the logging currently? [07:23] or has it been removed [07:24] axw: in rpc/jsoncodec [07:24] ah yes, ta [07:24] axw: i'm going to leave it there, but reduce it to trace level [07:24] ok [07:24] axw: because i think it might still be useful to have lower level logging sometimes [07:24] certainly, for tracing :) [07:25] axw: in particular, the new logging won't show unrecognised fields, because it's called *after* the marshalling has taken place [07:27] rogpeppe: unknown fields? do you mean after *un*marshalling? [07:27] axw: ah yes, sorry [07:27] ok [07:27] just making sure I understand the problem [08:02] rogpeppe: not sure if you saw my lgtm. I think there might be a missing call to ServerRequest [08:02] but otherwise lgtm [08:02] axw: ah, i haven't yet [08:02] axw: thanks [08:02] nps [08:06] axw: good catch about the missing ServerRequest [08:07] axw: (not that that case can ever happen with the jsoncodec that we always currently use) [08:07] cool [08:21] wallyworld_: environs/sync.DefaultToolsLocation will change when we have .canonical.com for tools, right? [08:21] yes [08:22] should be the same as environs/tools default URL? [08:22] let me check [08:22] wallyworld_: sync points at s3, tools points at juju.canonical.com [08:22] axw: yeah, the whole dependency on s3 thing will go away [08:22] ok [08:22] cool. [08:30] davecheney, what is the latest gc toolchain needed by juju? 1.1.2?
[08:36] or is the gc toolchain still needed, now that juju starting with 1.15 builds with gccgo? [08:36] ref bug 1222636 [08:36] <_mup_> Bug #1222636: juju should compile with gccgo === ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Bugs: 3 Critical, 162 High - https://bugs.launchpad.net/juju-core/ [10:28] morning all. Quiet day today [10:30] natefinch: morning nate, yes, all busy or simply not here ;) [10:32] natefinch: hiya [10:33] anyone fancy a review? [10:33] this mutes the Ping logging spam: https://codereview.appspot.com/14609043/ [10:33] rogpeppe: I think I owe you at least 2 I said I'd do and then never completed [10:33] natefinch: well, here's your opportunity :-) [10:33] natefinch: thanks! [10:33] rogpeppe: to make it 3? ;) [10:34] natefinch: lol [10:38] * TheMue clicks too [10:39] rogpeppe: log noise reduction? already this deserves a double +1 and lgtm [10:39] TheMue: well, i know the principle is ok - but is the implementation appropriate? :-) [10:40] rogpeppe: that is checked now [10:40] TheMue: thanks [10:41] mgz: it's Friday night here, I've had too many drinks, so i'll miss the standup. i won't make much sense anyway [10:41] wallyworld_: sure :) [10:41] * wallyworld_ opens another bottle [10:41] enjoy your weekend [10:41] i've got 2 branches for review. that's my day essentially [10:41] will do :-) [10:41] rogpeppe: also reviewed w/ LGTM [10:42] wallyworld_: maybe you might make *more* sense... :-) [10:43] ha de ha ha [10:43] TheMue: i don't see your review BTW [10:44] rogpeppe: still in progress, so far only one question in there. [10:45] TheMue: ah, sorry, i've just approved the branch. will address any concerns in another branch. [10:45] rogpeppe: ok [10:49] rogpeppe: standup? [10:49] rogpeppe: reviewed too, and as nate said: standup [12:49] hazmat: Ping messages are now elided from the log in trunk [12:51] \o/ [13:06] rogpeppe, awesome.
[13:07] hazmat: the new scheme also provides us with the potential flexibility to avoid showing secrets too. [13:09] rogpeppe, how's that? you mean internal principal secrets not hook output.. and changing all the locations that might log that to tracef instead of debugf. [13:10] as you say it's flexibility, not enforced though [13:10] hazmat: yeah. i don't think it's possible to avoid logging secrets entirely [13:11] hazmat: in the function that does the logging we have access to all the call data and its parameters [13:11] hazmat: so we could do call-specific munging [13:11] hazmat: (not that i've done that yet) [13:11] rogpeppe, no worries.. this is a nice win by itself.. [13:12] rogpeppe, the only way to not log principal secrets is to not log principal secrets ;-) [13:32] mgz: are the openstack tenant-name and region name attributes linked to userpass authentication in any way? [14:07] * rogpeppe goes for lunch [14:07] natefinch: could you take another look at https://codereview.appspot.com/14426046 please? i've made some more changes. [14:08] natefinch: this is what juju init --show produces now: http://paste.ubuntu.com/6222546/ [14:11] rogpeppe: on it [14:20] ok guys, I'm asking around for people to give manual provisioning a shot. [14:20] jcastro: ooh ooh.. I actually have been meaning to try that [14:21] next step is I'd like to document deploying to a container [14:21] https://juju.ubuntu.com/docs/charms-deploying.html [14:21] it's something like a flag to deploy to a container isn't it? [14:22] --constraints container=lxc [14:22] aha! [14:23] man, someone landed that on the container's doc page! [14:23] I was all ready to come in here and make fun of someone. [14:23] also - https://juju.ubuntu.com/docs/reference-constraints.html [14:23] yeah so I think I'll add that also to the manually provisioning page as an example [14:24] cool... the more documentation the better [14:52] where does juju cache things?
I want to bootstrap to another null node but the client isn't picking up my changes to the config file [14:52] jcastro: ~/.juju/environments/.jenv [14:53] jcastro: I think that's rogpeppe's new change.... environments.yaml is just a starting point, the realtime configs are ^^^ [14:54] ok so do I blow those away to regen new ones or edit them by hand? [14:54] jcastro: you can edit them.. but if you just want a second null provider you can copy & paste a new one in environments.yaml and just give it a different name [14:55] null2 : type : null [14:55] blowing it away also worked [14:55] I just wanted to see what would happen there [15:04] 2013-10-11 15:04:45 DEBUG juju.state open.go:88 connection failed, will retry: dial tcp 127.0.0.1:37017: connection refused [15:05] I get that over and over when trying to manually provision a bootstrap node [15:06] ah, after about 5 minutes it just gave up [15:17] rogpeppe, ping? [15:21] mattyw: ping [15:21] mattyw: pong even :-) [15:23] mattyw: did you have a question? [15:23] rogpeppe, I don't suppose you have 5 minutes spare for a hangout? [15:23] mattyw: sure [15:23] I've got an odd error in my core branch [15:23] rogpeppe, https://plus.google.com/hangouts/_/01877422202d7d4982e8a135c06bbe6058de9ed0 === wedgwood_ is now known as wedgwood [15:43] so folks, good night and have a nice weekend [15:43] TheMue: have a good weekend! [15:44] natefinch: enjoy your time too, greetings to the family and have a good coffee ;) [15:59] natefinch: could you try running tests against trunk, testing launchpad.net/juju-core/juju, please [15:59] ? [15:59] rogpeppe: sure [16:14] rogpeppe: getting errors related to mgo [16:14] natefinch: could you paste the full output of go test -gocheck.vv in that dir please? [16:15] rogpeppe: sure thing [16:15] natefinch: ta [16:18] rogpeppe: http://pastebin.ubuntu.com/6223130/ [16:33] can anyone else replicate the test failure? [16:34] mgz: ^ [16:34] looking [16:35] seems to be a tear-down error with sockets?
that's a known intermittent failure, no? [16:37] (cd juju&&go test -gocheck.v) passes here [16:37] mgz: it's happening consistently for mattyw [16:38] rogpeppe, thanks for your help [16:38] mattyw: np [16:38] mattyw: mgz might have some ideas [16:39] I'm going to take a short break but I'll still be here [16:39] mgz: i've been in a hangout with mattyw trying to work out what the issue is - you might want to carry on trying to help. unfortunately i've got to go now. [16:39] happy weekends all [16:39] mattyw: you're running just that same subset of tests right? [16:40] bug 1236931 is the same symptom [16:40] <_mup_> Bug #1236931: juju: sporadic test failure in TestDeployWithForceMachineRejectsTooManyUnits [16:41] and the same test even [16:44] I guess the next step, with a reliable repro, would be to just run a single test that triggers it (if possible) with -gocheck.f= and change the teardown Fatalf into something that gives us more debug possibilities [16:45] perhaps just letting it hang so it can be straced or similar [16:46] mgz: the problem never happens when running just a single test [16:46] that's what I thought [16:46] but if we can trigger it running two, it's a bit better than the whole suite [16:47] feels like a mongoy isolation racey something :) [17:09] mgz, sorry, yes, just the juju tests [17:09] mgz, as in juju-core/juju/... [17:27] mgz, I'm using go 1.1.2 right? [17:27] mattyw: yeah, that's what I'm using [17:28] what mongo version? [18:01] jcastro: for the record, I can't get null provider to work at all. I hit this bug: https://bugs.launchpad.net/juju-core/+bug/1235717 [18:01] <_mup_> Bug #1235717: null provider bootstrap fails with error about sftp scheme [18:24] natefinch, I just hit that [18:24] you need to blow away /var/lib/juju on the node [18:24] it happens if you partially bootstrap and it fails [18:25] it doesn't know how to resume [18:25] jcastro: I did that..
lemme retry [18:26] jcastro: are you using --upload-tools or just letting it find the S3 tools? [18:26] letting it find the tools [18:27] Get http://162.243.42.173:8040/provider-state: dial tcp 162.243.42.173:8040: connection refused [18:27] is the next problem I am running into though [18:27] is that IP localhost or something else? [18:27] I just signed up for digital ocean to try it [18:28] I bet I need to open some port or something [18:28] (local to the bootstrap node that is) [18:28] I would expect that juju would do that itself. You shouldn't need to open stuff manually, else, why even bother? [18:28] that was a guess [18:28] I have no idea what I am doing here, heh [18:29] haha [18:29] I'm trying to use trunk and not getting anywhere... keeps telling me my version number is 0.0.0 :/ [18:59] hi. we thought that juju core has the bootstrap node directly download charms, but we are getting evidence that they are being downloaded to a user's box and then uploaded the way pyjuju used to do. what's the expected story? [19:01] gary_poster: I think that's what it does right now, yeah. Don't ask me why, I don't know, personally. [19:02] natefinch, so IOW you are not surprised that they are downloaded to the user's machine first? [19:02] I mean, not surprised to hear me say it :-) [19:03] mgz, mongo version is 2.2.0 [19:03] gary_poster: correct. There's a folder under .juju called charmcache that appears to be a cache of charms (not surprisingly). Why we download them to the local machine and then upload to the bootstrap node, rather than just directly download to the bootstrap node, I don't know. 
[19:04] natefinch, :-( but thanks for clarification [19:04] mattyw: hm, same here, must be something more environmental [19:04] mgz, I'm going to call it a night, will look again on monday, thanks for your help [19:04] gary_poster: yeah: [19:04] -rw------- 1 andreas andreas 44M Out 11 15:32 .juju/charmcache/cs_3a_precise_2f_juju-gui-77.charm [19:06] gary_poster, ahasenack: it's possible that the folder there is just for the local provider, so it's not a smoking gun, however, the logs do seem to look like we're downloading charms before uploading [19:06] well, the timestamp matches [19:07] natefinch: look at these: http://pastebin.ubuntu.com/6223761/ (they are in utc for some reason, sounds like a bug) [19:07] 10min to download, 24min to upload, that's my guess [19:08] hot damn that is slow [19:08] yeah, I have the worst isp of Brazil for sure [19:09] but yes, we're definitely downloading and then uploading the charms. There doesn't seem to be any reason why we couldn't just tell the bootstrap node to go get them itself, and save one leg of the trip (and reduce the use of the local bandwidth in the case of bad Brazilian ISPs) [19:10] however, there may be a good reason I'm not aware of. [19:13] hey sinzui [19:13] hi jcastro [19:14] https://bugs.launchpad.net/juju-core/+bug/1238934 [19:14] <_mup_> Bug #1238934: manual provider doesn't install mongo from the cloud archive [19:14] kapil and I ran into this [19:14] it basically makes the manual provider not work [19:14] natefinch, ^^ this is the issue I was having earlier [19:16] jcastro, The issue looks familiar. We discussed this scenario last week in regards to lxc/local provider [19:16] jcastro: doh [19:16] jcastro, you do mean manual provider here, not local?
[19:17] sinzui, tldr, yeah [19:17] I am trying to manually provision on a VPS [19:17] and I couldn't figure it out so hazmat figured it out while trying the same thing [19:17] natefinch, https://bugs.launchpad.net/juju-core/+bug/1238934 [19:17] <_mup_> Bug #1238934: manual provider doesn't install mongo from the cloud archive [19:17] I will tentatively say 1.17 and let our leads scream if I am asking too much [19:18] seems like installing mongo shouldn't be a big deal, but I'm not really the right person to ask [19:18] sinzui, ok so basically we should not announce that the manual provider is ready? [19:19] yes, that is right. [19:19] natefinch, well, you'd have to ssh in, stop stuff, manually add the archive [19:19] start stuff [19:19] I'd rather say "wait one more stable release" than have people doing that crap [19:19] jcastro: what, it's just code, right? ;) [19:20] jcastro: absolutely. I definitely would not announce it yet. As far as I can tell, it's quite a bit away from being ready to go [19:20] so close! I can taste it! [19:21] jcastro: I'm pretty psyched for it, but far better to make people wait for a great experience than give them a bad experience a couple weeks sooner. [19:23] indeed, I'll go ahead and make a note on the docs [19:52] natefinch, sorry that was meant for axw.. [19:52] or sinzui [19:53] effectively the manual installation path is a totally separate installation path from what juju normally does [19:53] it needs explicit testing before any release.. as the bit rot chance is much higher [19:53] in this case it's missing the cloud archive install before using mongodb [19:54] for precise to work [19:54] hazmat: I hope it's not totally separate... but it definitely needs a lot of testing before the first release, and certainly it's enough of a special case to warrant at least some minimal testing each time [19:54] natefinch, well it's not using cloudinit.. [19:54] hazmat: yes, there is that. And that is a pretty big difference.
[19:54] the command stream serialized in cloudinit should be roughly the same though [19:55] but the envelope is different, so anything done by cloudinit outside of the command stream needs replicating [19:57] jcastro, if we switch out to 13.04 instances it should work afaicr.. those distro mongos have ssl support.. but without the cloudarchive there's no mongodb version/feature normalization across series [19:57] * hazmat tries it out [19:58] hazmat: FWIW, I tried with 13.04 and had difficulties... but I was doing it with trunk, so there may be other factors there [19:59] natefinch, i'm also on trunk [19:59] the issue was the same for 12.04 as what jcastro hit, namely juju-db didn't start up [20:00] hmmm, the problem I had was that it couldn't find tools, even if I used upload-tools. I didn't really look into it beyond that, though [20:01] hmm.. manual provider does some client side caching.. [20:02] i.e. changing the ip address of the state server and bootstrapping bombs out saying that provisioning has already happened [20:02] and you can't destroy-environment on a manual env