[00:00] menn0: what does your charm do? [00:00] I've found the source for the postgresql charm to be pretty helpful. it's written in Python and uses the charmhelpers package. [00:01] https://github.com/charms/postgresql [00:01] waigani: at thumper's suggestion, I'm productionising the github-watcher charm [00:03] waigani: what are you going to do? [00:04] menn0: good question! At some point I'd like to make a SPARQL endpoint charm [00:04] menn0: but I think I'll start with the Vanilla forum in the tut [00:04] waigani: sounds good [00:25] axw: can you ping me when you're around for a quick catch up about 1.18.3? [00:38] i'm trying to deploy juju-gui using the local provider on my machine and I'm getting this: 2014-05-09 00:36:50 INFO juju.worker runner.go:260 start "api" [00:38] 2014-05-09 00:36:50 INFO juju.state.api apiclient.go:198 dialing "wss://10.0.3.1:17070/" [00:38] 2014-05-09 00:36:50 INFO juju.state.api apiclient.go:206 error dialing "wss://10.0.3.1:17070/": websocket.Dial wss://10.0.3.1:17070/: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "juju-generated CA for environment \"local\"") [00:38] 2014-05-09 00:36:50 ERROR juju.worker runner.go:218 exited "api": timed out connecting to "wss://10.0.3.1:17070/" [00:38] 2014-05-09 00:36:50 INFO juju.worker runner.go:252 restarting "api" in 3s [00:38] is this a regression? i've deployed juju-gui on my machine successfully before [00:38] i'm using a relatively recent build [00:39] (yesterday) [00:40] and wouldn't you know it... the deploy just worked. [00:43] nevermind I guess [00:43] it just took a really long time and I thought that error was related. [00:58] menn0: local provider takes a long time for the first lxc container to go from pending since it has to download the ubuntu image file [00:58] wallyworld_: ping [00:58] axw: hiya, quick hangout? [00:58] wallyworld_: I knew that but it took like 20 mins. perhaps my link is slow today. [00:58] wallyworld_: sure [00:59] https://plus.google.com/hangouts/_/7ecpiklmm34hr8t7t02jinl5r0 [01:00] wallyworld_: same as yesterday... I'll try starting one [01:01] wallyworld_: https://plus.google.com/hangouts/_/72cpjktbfqddemso04puald08o [01:01] axw: my network has gone slow, will keep trying to get in [01:41] wallyworld_: I don't get any warning about lxc-use-clone [01:41] hmmmm [01:41] it may have been his set up [01:42] maybe he accidentally used an old client to bootstrap [01:42] that would explain it [01:45] wallyworld_: so what is this standup thing? [01:45] that time is not good for me... [01:45] thumper: it's on the calendar [01:45] I know, but it goes into the past [01:45] and looked like a mistake to me [01:45] i assumed someone added it [01:45] it looks like a dupe of the other calender item [01:46] ok [01:47] I'd be happy to do a standup sometime in the next hour [01:48] might get messy two teams doing it though if we want it to be short [01:55] agreed [01:55] mramm: o/ [01:55] mramm: are you real or a ghost in the machine? [01:55] hey hey [01:55] mostly real [02:03] ghost ramm I like it [02:15] wallyworld_: yea my latest test didn't report any issues with the lxc-use-clone config item error [02:15] so it was probably my setup initially [02:16] wallyworld_, thumper CI is broken. 
looks like we cannot get errgo from github [02:16] http://juju-ci.vapour.ws:8080/job/build-revision/1335/console [02:17] oh fuck [02:17] that was me [02:17] bugger [02:17] oh, and the other branch for 1.18 is broken the same way [02:17] http://juju-ci.vapour.ws:8080/job/build-revision/1336/console [02:17] shit shit shit [02:17] thumper, did you break trunk and stable? [02:17] yes [02:17] That's hard to do. [02:17] not if you delete the repo [02:18] Oh dear [02:19] sinzui: you see i've asked for input as to what version want for next week [02:19] thumper, the failure is in " go get -v -d launchpad.net/juju-core/..." I don't think I can hack around that [02:19] * thumper hacks... [02:20] wallyworld_, 1.18.3? [02:20] sinzui: you should be back in business [02:20] sinzui: i had thought so but then only 1.19.2 will have the new maas name placement directive [02:20] * thumper will be more careful before blowing shit away [02:21] wallyworld_, I wrote release notes for 1.18.3 and 1.19.2 [02:21] * sinzui retries the trunk build [02:21] sinzui: ok, thank you that's great. so when they tell us what they want, we can just flip the switch on wither one [02:22] wallyworld_, if we get CI building again. I will manually trigger 1.18 if a new revision doesn't land as I hammer trunk to build [02:22] axw: if we were to go with 1.19.2, we will need a fix for bug 1313785, which i think you are familiar with? [02:23] <_mup_> Bug #1313785: Can't SSH with "juju ssh" command to EC2 instance. [02:23] sinzui: have you eaten all the Violet Crumble yet? [02:23] thumper, yes! CI is on to the next phase to unittests and packaging [02:23] phew [02:24] wallyworld_, Yes. the Cherry Ripes were stolen. [02:24] lol [02:24] i'll bring some more to the next sprint [02:24] wallyworld_: ok. goddamn ssh [02:25] My youngest daughter exclaimed Holy Sh*t when she had one, while the older daughter asked where that the Cherry Ripes been all her lide [02:25] life [02:25] axw: thanks, seemed like you had a handle on it already [02:25] I hid one for Saturday [02:25] lol [02:26] sinzui: i can be youe dealer :-) [02:26] your [02:26] wallyworld_, 6 cherry ripes cost 18.00 USD from amazon. You could make a lot of money send them to the us by boat [02:27] $18 !!!!!! [02:27] wtf [02:27] they cost about $10 here [02:27] and they often go on sale for about $6 which is when i buy them for you :-) [02:28] that's for 6 [02:31] * axw wishes he had a cherry ripe now [02:31] * wallyworld_ does too :-( [02:58] sinzui: bug 1316174. the local provider explicitly disables changing apt sources. do you think we should make juju-core require cloud-tools in the packaging rules then? [02:58] <_mup_> Bug #1316174: 'precise-updates/cloud-tools' is invalid for APT::Default-Release [02:59] * sinzui is thinking [03:00] wallyworld_, cloud-tools, being an archive, must be added. Isn't think a cloud-inti responsibility? I thought all precise cloud images had cloud-tools as an archive [03:01] wallyworld_, There is nothing we can package to make an archive be added [03:01] sinzui: but this is for someone's local machine [03:01] wallyworld_, hmm, oh [03:01] where they just have not added cloud-tools to their own apt sources [03:02] they are using local provider and local provider does not dick with their apt set up on purpose [03:02] wallyworld_, this is like kiko's production precise running lxc. 
He didn't have cloud-tools and we require it [03:03] so Juju on precise needs to require that the archive be added [03:03] hmm [03:03] maybe we should error early with a nice message then? [03:03] wallyworld_, I think so [03:03] ok, will do [03:03] i'll comment on the bug [03:04] wallyworld_, I copied lxc and other packages from ctools to ensure the right mongo and lxc were available in precise for users that upgrade without ctools [03:04] oh, you mean those packages are in precise universe or something? [03:05] or are they in the juju ppa? [03:13] wallyworld_: can you please review? https://codereview.appspot.com/92140043 [03:13] axw: yeah, already looking :-) [03:13] ta [03:13] I will get onto backporting the lxc branch [03:13] great thanks :-) [03:16] wallyworld_, many precise local users were just using our juju ppa. It had older packages than the ones in ctools. I think I am the defactor maintainer of the ppa. I am backporting the same packages in ctools to our ppa to reduce the upgrade problems. The bug we are looking at was introduced when we decided to make juju be explicit about where the mongodb server comes from. [03:17] wallyworld_, I think checking for and requiring the ctools archive is sane. Many users didn't know they were doing ricky deployments [03:18] sinzui: ok, great thanks. i think that bug can definitely be deferred to next stable release then [03:23] axw: so you're sure you got all the places where EnsureEnvName() is required? [03:23] wallyworld_: pretty sure [03:23] ok, i'll see if i can spot any others [03:31] is there a URL I can view CI builds at? [03:37] jcw4, At this hour they are at http://juju-ci.vapour.ws:8080/ [03:42] sinzui: thank you [03:43] sinzui: does the CI url change regularly? [03:44] jcw4, It has in the past. We deploy it with juju into the clouds that meet our needs. I purchased a domain to stop the nonsense or url changes. [03:44] We will probably remove the ports from the url next week [03:45] sinzui: I see [03:45] sinzui: thanks [03:46] wallyworld_, thumper I haven't announce the CI URL yet, but you might note I got canonistack running again. I got juju to trans-cloud machines [03:47] heh, nice [03:47] great :-) [03:47] Next week I will dare to provision my three machines in the lab. That will be a networking challenge [03:53] indeed [04:12] has anyone hit this before: http://paste.ubuntu.com/7419545/ [04:15] juju debug-log keeps logging: "machine-0: 2014-05-09 04:14:17 ERROR juju.worker runner.go:218 exited "peergrouper": cannot get replica set status: cannot get replica set status: not running with --replSet" [04:35] waigani: I get that too. From bits I overheard at the sprint it's related to the HA changes but I'm not sure how much of problem it is. [04:35] wallyworld_: lxc change is in both 1.18 and trunk now [04:35] whoohoo [04:35] waigani: it would be nice if it would go away! 
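Earlier in the log sinzui and wallyworld settle on erroring out early with a friendly message when a precise host is missing the cloud-tools archive (bug 1316174). A minimal sketch of what such a pre-flight check could look like, with hypothetical paths and wording; this is not the actual juju-core code:

```go
// A hypothetical pre-flight check, not the actual juju-core implementation.
// It scans the apt sources for a cloud-tools entry and fails with a friendly
// message before any deployment work starts.
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// fileMentionsCloudTools reports whether any non-comment line in the given
// apt sources file references the cloud-tools pocket.
func fileMentionsCloudTools(path string) bool {
	f, err := os.Open(path)
	if err != nil {
		return false
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if strings.HasPrefix(line, "#") {
			continue
		}
		if strings.Contains(line, "cloud-tools") {
			return true
		}
	}
	return false
}

// checkCloudToolsArchive returns a descriptive error when no apt source
// provides cloud-tools, so the user sees the real problem up front.
func checkCloudToolsArchive() error {
	sources := []string{"/etc/apt/sources.list"}
	extra, _ := filepath.Glob("/etc/apt/sources.list.d/*.list")
	sources = append(sources, extra...)
	for _, path := range sources {
		if fileMentionsCloudTools(path) {
			return nil
		}
	}
	return fmt.Errorf("the cloud-tools archive is not configured; " +
		"juju on precise needs it for current mongodb and lxc packages " +
		"(enable it with: sudo add-apt-repository cloud-archive:tools)")
}

func main() {
	if err := checkCloudToolsArchive(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cloud-tools archive found")
}
```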
[04:35] wallyworld_: I've fixed the SSH thing, just gonna test it on Azure before pushing [04:35] awesome [04:35] i'm just about to propose a fix for another 1.19 bug [04:36] menn0: I destroyed environ, re deploying everything, ensuring each agent is up before deploying the next [04:36] so far so good [04:36] menn0: waigani: that peergrouper error is log noise due to running on local provider with ha enabled - it's on "someone's" todo list to clean up [04:36] wallyworld_: I'll fix up the EnsureEnvName stuff in trunk next, unless there's something more pressing [04:37] wallyworld_: cool [04:37] axw: no, that would be next thanks [04:37] wallyworld_: ah, good to know [04:37] menn0: how did you go today? Did you get a working charm? [04:39] waigani: after getting over a few mental hurdles I now have the github-watcher charm ported to Python using charmhelpers. It's cleaner and more robust but still needs more love. [04:39] waigani: it's been fun though! [04:39] waigani: how did you get on? [04:39] yay \o/ [04:40] menn0: agent-state-info: 'hook failed: "install"' [04:40] hehe, that is my latest [04:41] waigani: debug logging is your friend :) [04:41] menn0: with less noise, yes [04:42] menn0: when you say charmhelpers, do you mean the tools e.g. charm create? [04:43] or the hook commands e.g. relation-get ... [04:43] waigani: to avoid all the peergrouper noise and just see the logs for the thing you're working on, look at /var/log/juju/unit-.log. that's what i've been looking at. [04:44] menn0: awesome tip, thanks :) [04:45] waigani: I mean the Python package called charmhelpers. https://launchpad.net/charm-helpers [04:45] oh cool, a whole new world - that I'm NOT going to look at right now! [04:45] waigani: it provides a Python API for doing charm related things [04:46] thumper: ping [04:54] menn0: waigani: debug log also supports filtering, include exclude etc [04:54] ah cool, will try [04:54] so you can look at info for a unit or several, and the machine it runs on etc [04:54] see the recent release notes [04:55] wallyworld_: the series stuff is just cut and paste right? [04:55] axw: yep [04:55] with a constant removed from testing [05:27] axw: i can't see from the diff where we are testing that a pub address is used if proxy=false, but a private address is used if proxy=true [05:28] wallyworld_: there's no explicit test for the latter; will see if I can add one [05:28] that would be great, thanks [05:28] wallyworld_: I changed setAddress to add an internal address, which broke the test before my fix [05:29] yeah, i saw that we test the proxy=false case [05:29] wallyworld_: problem with proxy=true is it causes a second invocation of juju [05:29] wallyworld_: will see how practical it is tho [05:29] ok [05:39] wallyworld_: whazzup? [05:40] thumper: forgot to tell you - on monday we need to enter cards for the next 2 week's worth of work [05:40] ah... ok [05:40] will do [05:40] i would think we'd need lanes for each squad [05:40] * thumper thinks [05:40] prolly [05:40] but i ain't got editing permissions [05:41] i'll bug someone later [05:42] aveagoodweekend [05:43] kk [05:43] you too [05:43] will try [05:43] chances are I'll be hacking away [05:43] on my personal project [05:43] yeah, i gots stuff to do as well [05:44] plus mother's day sunday [05:44] oh yeah... [05:44] must not forget... [05:44] must... not.... [05:44] me -> buy dinner food === vladk|offline is now known as vladk [06:35] wallyworld_: updated tests [07:07] fwereade: I don't understand your comment about .dns vs. 
.internal [07:08] fwereade: it's using .internal because that's what the default is- to proxy and use the internal address [07:10] axw, right -- I'm just saying that we should probably test both code paths, because they're both viable results in different situations [07:10] axw, especially when the bug you're working on is about picking one or the other [07:12] fwereade: ok. I thought it'd be enough to do it in the common code, but I can do it there too [07:13] axw, this feels to me like a case where the existing testing wasn't quite covering enough, and so it's worth testing a layer up [07:13] axw, there's a lot that sucks about testing the same stuff at every layer too [07:14] fwereade: indeed it was not covering enough. there was some code that very subtley changed the machine addresses, took me quite a while to figure out why it wasn't working like I thought it should [07:14] axw, but in the absence of easy mocking we just have to find a balance between testing just-one-path and all-exponential-possibilities, and I think it's reasonable to shade a little closer to the latter when fixing actual bugs [07:14] fwereade: nps [07:15] axw, (not that mocking is a panacea or anything but it certainly helps to control the exponential growth) [07:20] jam, fwereade, axw: hi [07:21] vladk: heya [07:22] vladk, morning [07:30] axw: hi, i just got back from the vet [07:30] looks like you got your +1 [07:31] wallyworld_: hey. it's cool, fwereade has +1 and I am updating it now [07:31] thanks [07:31] great [07:31] our poor dog got desexed today [07:32] fwereade: for the kanban board,are we going to create lanes for each squad to put the 2 week cards? [07:33] wallyworld_, I'm wondering if maybe we actually want a kanban board per team [07:33] wallyworld_, harder to see what reviews are in flight, but there's always +activereviews [07:34] wallyworld_, green have their own already [07:34] wallyworld_, and honestly it felt a bit clunky with just 2 teams sharing a board before [07:34] wallyworld_, 4 mightbe a couple of bridges too far [07:35] yeah i agree [07:38] fwereade: actually, i have soccer a bit later tonight so will miss the standup. there's a couple of things i wouldn't mind discussing. did you have time for a hangout anytime over the next hour or so? [07:44] wallyworld_, I was planning to swap today for my public holiday during the sprint, so let's chat now [07:44] wallyworld_, well 5 mins please? 
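fwereade's point above about exercising both results, the public address when not proxying and the internal address when proxying, is the kind of thing a table-driven test covers cheaply. A minimal sketch, assuming a hypothetical hostAddress helper rather than the real juju ssh address-selection code:

```go
// A minimal table-driven sketch; hostAddress and the address values are
// hypothetical stand-ins, not the real juju ssh address selection code.
package main

import "testing"

// hostAddress picks which machine address an ssh command would dial: the
// internal (cloud-local) address when proxying through the state server,
// the public one otherwise.
func hostAddress(public, internal string, proxy bool) string {
	if proxy {
		return internal
	}
	return public
}

func TestHostAddress(t *testing.T) {
	const (
		public   = "dummyenv-0.dns"
		internal = "dummyenv-0.internal"
	)
	tests := []struct {
		about string
		proxy bool
		want  string
	}{
		{about: "proxy=false dials the public address", proxy: false, want: public},
		{about: "proxy=true dials the internal address", proxy: true, want: internal},
	}
	for _, test := range tests {
		if got := hostAddress(public, internal, test.proxy); got != test.want {
			t.Errorf("%s: got %q, want %q", test.about, got, test.want)
		}
	}
}
```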
[07:44] fwereade: it can wait, don't want to take up your day [07:45] wallyworld_, let's do it anyway, start a hangout and I'll be with you v soon [07:45] ok [07:51] test suite bugfix: for bug #1301353 https://codereview.appspot.com/97240043 [07:51] <_mup_> Bug #1301353: juju test suite should use juju-mongodb [07:53] fwereade: well appropriate Layering is (IMO) better than mocking, you just need a way to have an API Server that doesn't (for example) actually require a mongodb underneath [07:53] jam1, I use the term mocking pretty loosely -- in my mind, that's just a mocked api server [07:53] that was our goals with interface based Doubles, though we only partially did it for providers [07:53] jam1, indeed so [07:53] fwereade: sure, I just heavily distinguish Mocks from Doubles [07:54] Doubles can be tested alternative implementations, Mocks are "stub out just this call with this pre-canned response" [07:54] fwereade: anyway, if you're off today, don't bother, but the above code review should be reasonably small [07:54] and would be nice if we are going to have a 1.18.3 [07:55] jam1, yeah, I guess it's just that my experience with doubles lead me to consider them to be less-reliable mocks ;p [07:55] fwereade: if you actually have interface testing and run your test against a double that runs against the actual you can be more confident that you actually still match reality [07:56] jam1, sure -- I've just found that when things change subtly it's easier to fix mocks than doubles [07:56] jam1, not trying to say it's fundamentally unworkable or anything [07:57] fwereade: sure, it is easier to change mocks, but it is harder to know *when* it changed and should have broken but didn't. Though again, doubles expect that you actually have good test coverage :) [07:57] jam1, if you get the double-testing *right* it's easy to tell when they're no longer accurate, but I don't think there's a *fundamental* reason it's easier to test the doubles than the mocks [07:57] its sort of "its possible with a Double, but harder" [07:57] jam1, yeah [07:58] fwereade: you can write a test suite that runs against the doubles and the real thing [07:58] Mocks tend to not be very testable to compare against something else [07:59] jam1, well the biggest problem with mocks is the skew, so you have to have *some* way of testing them, but it ends up being more implicit than explicit [07:59] jam1, and we all know about explicit being betterthan implicit :) [08:01] jam1, LGTM [08:03] fwereade: if they are properly tested, then IMO their closer to Doubles [08:04] fwereade: I've certainly seen mostly happy-to-skew Mocks [08:05] jam1, hence my fuzzy handwaving over terminology -- doubles are often overcomplicated, mocks are often undertested, I suppose I'm happiest with something in the middle :) [08:51] jam1: i'm looking to bump bug 1307643 off the 1.18 roadmap and just fix for 1.20. how do you feel about that? i sorta think it's a corner case that we don't really need to deal with on 1.18 [08:51] <_mup_> Bug #1307643: juju upgrade-juju --upload-tools does not honor the arch [08:52] wallyworld_: given that I think we are actually unlikely to get 1.20 in trusty, I think I'd rather it stayed in there. [08:52] wallyworld_: but it certainly doesn't block 1.18.3 [08:52] hmmm. ok. 
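A minimal sketch of the doubles approach jam1 describes above: one conformance suite that runs against both the real implementation and the double, so any skew between them shows up as a failing test. The Storage interface and both implementations here are hypothetical stand-ins, not juju-core types:

```go
// One shared conformance suite exercises any Storage implementation, so the
// in-memory double stays honest against the real backend. All types here are
// illustrative.
package main

import (
	"errors"
	"testing"
)

// Storage is the interface both the real backend and the test double satisfy.
type Storage interface {
	Put(name string, data []byte) error
	Get(name string) ([]byte, error)
}

var errNotFound = errors.New("not found")

// memStorage is an in-memory double used by fast unit tests.
type memStorage struct{ files map[string][]byte }

func newMemStorage() *memStorage {
	return &memStorage{files: make(map[string][]byte)}
}

func (s *memStorage) Put(name string, data []byte) error {
	s.files[name] = append([]byte(nil), data...)
	return nil
}

func (s *memStorage) Get(name string) ([]byte, error) {
	data, ok := s.files[name]
	if !ok {
		return nil, errNotFound
	}
	return data, nil
}

// checkStorage is the shared conformance suite: any Storage implementation,
// real or double, has to pass it.
func checkStorage(t *testing.T, s Storage) {
	if err := s.Put("tools.tgz", []byte("payload")); err != nil {
		t.Fatalf("Put failed: %v", err)
	}
	data, err := s.Get("tools.tgz")
	if err != nil || string(data) != "payload" {
		t.Fatalf("Get returned %q, %v", data, err)
	}
	if _, err := s.Get("missing"); err == nil {
		t.Fatal("expected an error for a missing name")
	}
}

// The double runs everywhere; a real backend (say, an S3-backed Storage)
// would run the same checkStorage suite behind a live-test flag.
func TestMemStorageConformance(t *testing.T) {
	checkStorage(t, newMemStorage())
}
```

A mock, by contrast, pre-cans responses for a single test, so nothing keeps it aligned with the real backend over time, which is the skew problem fwereade mentions.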
i sorta though folks would just use the stable ppa moving forward [08:52] (I may be being pessimistic here, but our relationship and track record with Distro isn't very heartwarming) [08:52] thought [08:53] wallyworld_: you don't "Just Get" the stable ppa or cloud-archive:tools on your system [08:53] so 1.18 is going to be the "out-of-the-box" experience people have with Juju [08:54] sure, i guess i figured most folks using juju would be ok with adding an apt source given the nature of what juju does [08:54] wallyworld_: people may be comfortable with it, people just doing "what is this juju thing, and apt-get install juju" won't get what we want them to [08:55] wallyworld_: *hopefully* upload-tools isn't in their default flow, but I fear that it already is [08:55] true, but how many of those would run into that particular bug? [08:55] well, we *really* want to discourage using upload-tools [08:55] i do think people using upload-tools equals the people who would get juju from the ppa [08:55] wallyworld_: until we actually address the use case (just work without scouring the internet) [08:56] we can't really avoid it [08:56] its how people testing it on a private cloud are likely to do it, but I think its fair enough [08:56] it does require mixed archs which is rare enough [08:57] yeah, i figured the natre of the bug meant that people try trying out juju wouldn't come across it [08:57] and those that did would be power users [08:57] who would install from ppa anyway [09:05] wallyworld_: just to close the loop in case it wasn't clear, I'm ok with pulling it out of 1.18 [09:05] after discussing it I think its reasonable [09:06] ah ok, thanks. i did think you wanted it left. thanks for clarifying [09:07] jam1: fyi, we'll be cutting either a 1.18 or 1.19 release tomorrow depending on what the maas folks say they want to use at ODS. we've put in fast lxc clonng for them (not just for local but everywhere) and driven down the bugs on both so we can pull the trigger. curtis has release notes prepared for both. i'm just waiting to hear back what they want to run with [09:09] wallyworld_: I thought we had to cut a 1.18 because of the Critical "sync-tools can destroy your environment" bug. [09:09] regardless of what we do for ODS [09:09] jam1: the fix for that is now both in 1.18 and 1.19 [09:09] wallyworld_: sure but we need it to go out to everyone else, too [09:10] (we have a critical issue in our current stable release, we should release a new version with a fix) [09:10] jam1: yes, but curtis can only do one release at a time and we need to get ODS sorted out [09:10] and that issue is only a very specific bug right? [09:11] wallyworld_: sure, I'm fine with delaying to do a different release, but we still need 1.18.3 regardles [09:11] yes sure [09:11] wallyworld_: its a "nuke your running environment without running destroy-environment" bug. still pretty serious [09:11] jam1: understood, it's just about what order we release in [09:11] if we do a 1.19 for ODS, then 1.18.3 will have to come straight after [09:12] wallyworld_: I'd also like to stipulate that if we don't hear from them, just do 1.18.3 to get progress on it. (assuming that it takes 1 day to do a release, we can do whatever they like today but 1.18 if they don't say anything) [09:13] yeah, that's what i was thinking, but we really need to know if they want the maas name placement stuff which is missing from 1.18 [09:13] wallyworld_: and HA [09:13] well, i guess so. 
but i didn't *think* that was goingto be the main focus [09:14] wallyworld_: alexis was telling us "we must have HA in by Apr 25 so that jamespage can put the demo together for ODS". [09:14] that may have been tempered [09:14] i asked little mark and dan and alexis 12 hours ago so hopefully they'll get back to me [09:15] if not, they get what they get, right? [09:16] I did think that 1.19 was the plan for ODS [09:16] but only as a 3rd person observer [09:16] I haven't been in any of the actual conversations around it [09:16] then adam indicated they might want 1.18 so we're getting mixed messages [09:16] me either [09:17] hece i asked them explicitly to tell us - we're good to go on both [09:18] * jam1 realizes that with his wife away, his ability to separate himself from work-time diminishes greatly [09:19] yep :-) [09:19] hmmm... I have that issue where I blow away ~/.juju and then generate-config fails because it somehow creates (without sudo) a ~/.juju owned by root [09:20] this is on a different machine [09:20] axw: we may want to land the 1.18 fix for bug 1316869 straight into 1.19 so we can release if required, and do a followup to clean it up [09:20] <_mup_> Bug #1316869: juju sync-tools destroys the environment when given an invalid source [09:21] rebooting [09:21] wallyworld_: 1.19 isn't really affected that badly [09:21] wallyworld_: it was already partially fixed [09:21] voidspace: I've seen that, and even when it isn't using local. I was pretty surprised by it [09:22] jam1: I have no environment and I *was* using openstack [09:22] wallyworld_: there's one case where it could occur, which is if someone had an environments.yaml with multiple environments and no specified default (and no current-environment) [09:22] wallyworld_: it should be possible to just merge 1.18 into trunk. I've been keeping it in sync [09:22] jam1: even happens in a new shell that I've not used sudo in [09:22] voidspace: yeah, I never dug into it, but I have seen it. [09:22] voidspace: I have the feeling we aren't actually doing sudo [09:22] wallyworld_: I'm in the process of fixing up trunk... but if you really want to merge it forwards, that's fine [09:22] rebooting as that was the only way I fixed it at the sprint [09:22] odd [09:22] maybe we are trying to chown it and accidently suppling UID =0 [09:23] right, that sounds likely [09:23] axw: i just want to ensure that by your EOD, we have everything in place to pul the tigger on 1.19 if needed [09:23] like we were trying to fix the old bug about creating it as root, and the old fix to chown it back to $REAL_USER is still running [09:23] only REAL_USER isn't set, so we get UID=0 [09:24] wallyworld_: IMHO it is not critical for 1.19, so it could be cut now [09:25] axw: ok, what i'll do after soccer is just merge 1.18 into trunk so they're consistent [09:25] okey dokey [09:57] morning [10:21] natefinch: morning [10:21] voidspace: morning [10:27] wwitzel3: ping [10:37] * perrito666 has restore working except for a quoting nightmare on the bash script [10:38] perrito666: awesome! [10:39] I am thinking that it might be healthier to just send the script and run it remotely instead [10:43] perrito666: what are you doing now? [10:44] natefinch: well, currently the bash script is just a text template that i generated and being run over ssh [10:46] perrito666: do you know about backticks for quoted strings? [10:48] I do, altough the resulting script is very oddly quoted [10:49] is standup at extremely early times happening today? 
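The quoting trouble above largely goes away if the rendered script is never spliced into the ssh command line at all. A rough sketch of the approach perrito666 floats: render the restore script from a Go backtick raw-string template, then pipe it to bash on the remote host over stdin. The host name, paths and script body here are made up for illustration:

```go
// Render a script from a raw-string template and feed it to the remote bash
// over stdin, so the script itself never needs shell quoting. The script
// contents, service name and host below are hypothetical.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"text/template"
)

// restoreScript uses a backtick raw string, so embedded quotes need no escaping.
var restoreScript = template.Must(template.New("restore").Parse(`#!/bin/bash
set -e
echo "restoring state from {{.BackupFile}}"
tar -C / -xzf "{{.BackupFile}}"
sudo service jujud-machine-0 restart || true
`))

type restoreParams struct {
	BackupFile string
}

// runRemoteScript renders the script and pipes it to "bash -s" on the remote
// host, instead of embedding it in the ssh command line.
func runRemoteScript(host string, params restoreParams) error {
	var script bytes.Buffer
	if err := restoreScript.Execute(&script, params); err != nil {
		return err
	}
	cmd := exec.Command("ssh", host, "/bin/bash", "-s")
	cmd.Stdin = &script
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := runRemoteScript("ubuntu@10.0.3.1", restoreParams{BackupFile: "/tmp/juju-backup.tgz"})
	if err != nil {
		fmt.Fprintln(os.Stderr, "restore failed:", err)
		os.Exit(1)
	}
}
```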
[10:49] or we already moved to our own teams one? [10:50] I was assuming we were doing our own today [10:55] voidspace: pong [10:55] wwitzel3: hey, hi === nessita_ is now known as nessita [12:28] sinzui: [12:28] amazon restored [12:28] PASS [12:28] :D [12:28] perrito666, oh, should I re-enable the backup and restore test? [12:29] sinzui: nop, I will propose the fix now and let you know when its landed :D just wanted to share the joy with someone else than my neighbors who are certainly happy that I scream of joy in the morning before they get up [12:30] :) [12:51] wallyworld_, backporting features to stable jepardises the release's inclusion in Ubuntu [12:51] which bug in particular? [12:52] wallyworld_, I think Ubuntu will reject 1.18.3 for trusty if it contains lxc-use-clone [12:52] oh. but what if the maas h=guys need it for ODS [12:53] wallyworld_, The maas guys are not Ubuntu [12:54] hmmm. but i think the bug was raised against 1.18 in the first place [12:54] wallyworld_, Ubuntu is still on 1.18.1. We can choose to ignore the situation and focus on getting 1.20.0 out with a MRE [12:54] not sure now [12:54] sorry SRU [12:54] i'd love 1.20 to replace 1.18 in the archive [12:55] wallyworld_, Ubuntu look at the diff. If they see lots of adds to introduce new behaviour, they will reject the micro version [12:56] ok. so how to we handle the friction bewteen that and what customers (who are the ones who give us $$$$) want [12:56] wallyworld_, I am in favour of helping the maas guys, I just want everyone to know that Ubuntu has strict notions of what a micro release can contain [12:57] maybe Ubuntu need to be pragmatic? [12:57] wallyworld_, remember that Ubuntu rejected backup and restore additions to 1.16. [12:57] it's not like it's a lib with 1000000000 users [12:57] wallyworld_, they are being pragmatic, they do not accept new features because enterprise hates change [12:58] and enterprise also need s features like backup and restore [12:58] or else we wouldn't have had so much pressure to add that by a big enterprise [12:59] wallyworld_, And enterprised used CTS's and our versions on the juju, [13:20] wallyworld_, that's why the ubuntu cloud archive exists [13:20] maybe juju belongs in there then? [13:21] https://launchpad.net/~ubuntu-cloud-archive/+archive/cloud-tools-next [13:23] so it's there already [13:24] yeah, but it doesn't have the strictness of the proper archive, this lets us eat our cake [13:24] jcastro: in that case, why was sinzui saying we shouldn't backport features to the 1.18 series, if juju is in cloud archive? [13:24] wallyworld_, cloud-tools is for precise [13:25] doesn't trusty also have cloud-tools? [13:25] wallyworld_, no [13:25] why? [13:26] because we agreed last October to be very make juju super reliable with clearly define bugs and features to make Ubuntu like us [13:27] lol [13:27] we need to add new features at a rate greater than ubuntu will let us [13:27] wallyworld_, If Ubuntu trusts juju, they will mre and sru it into trusty so that all users get it [13:27] i can't see that the need for cloud-tools is gone [13:27] new features won't go into ubuntu, they will go into the PPA and UCA [13:28] jcastro: sometimes new features are a bug raised against 1.18 eg fast lxc [13:29] wallyworld_, In Vegas we discussed the issue again. Changing packaging to provide a juju-client that is super stable and featureful juju-agents that are in the clouds may be a viable compromise [13:29] and so we would expect 1.18.x to be put in trusty wouldn't we? 
[13:29] wallyworld_, Everything in Lp is called a bug [13:29] that would be good [13:29] They are just issues. [13:30] sure. but if a customer says 1.18 is unusable because feature x is missing what do we do [13:30] wallyworld_, If we removed the config and made fast-lxc work, the diff might look more like fix [13:30] wallyworld_, it's not like they're being strict for no reason, there's 10 years of regressions and feedback from tons of projects who have tried. [13:30] wallyworld_, they use the ubuntu cloud archive [13:30] but there's no clod archive for trusty [13:31] well, it's fresh so we didn't really need it yet [13:31] wallyworld_, Customers don't care about semantics. Ubuntu cares about the diff. There is no point i arguing with me because I am not an Ubuntu upliader [13:31] sorry, not meaning to argue, just trying to understand [13:32] i just want to make customer happy [13:32] it's ok [13:32] the way distros work is weird [13:33] wallyworld_, ok so the question is basically now "I have fixes that need to go to support MAAS and customers, and we think the diff might be too large to SRU into Ubuntu, is trusty cloud archive coming soon?" [13:33] yeah, that sounds like what would be needed i think [13:34] wallyworld_, Ubuntu cares about the meaning of major, minor, and micro. Micro doesn't introduce configurable features. micro fix bugs in the code, not add new ways to use the software. lxc was not broken. It works fine of local-provider and that is what we documented. Adding fast-lxc is a minor change, extending an existing feature to be used elsewehere [13:35] sinzui: we made it configurable (default existing behavior) so as to be stable so that seems ironic :-) [13:36] you'd be surprised at the amount of things that can sneak into things like that; which is why they're strict by default [13:37] sinzui: so for 1.19.2, there's one remaining critical bug: 1316869. i've just propose a mp which merges 1.18 into trunk which will fix that bug. as soon as that lands, 1.19.2 should be shippable [13:37] https://code.launchpad.net/~wallyworld/juju-core/merge-latest-1.18/+merge/218989 [13:45] wallyworld_, so prior to that merge lacking 'current' env file, any command that failed would destroy an env? i'm trying to understand where that destroy env logic is lurking. [13:45] that bit on sync-tools [13:46] wallyworld_, okay. I am pulling down 1.18.2 tarball and win installer [13:46] hazmat`: i'm not sure that is was any command. i thought it was just sync-tools [13:47] but i could be wrong [13:47] wallyworld_, fair enough.. but i didn't see any destroy logic in sync tools or sync/sync.go .. and the fix doesn't appear to touch that either. [13:47] the fix for 1.18 cleaned up a bit of stuff in that area which we want in trunk also [13:47] let me check === liam_ is now known as Guest67023 [13:48] hazmat`: cmd/juju/synctools.go is changed [13:48] natefinch: can i bother you for a review of the above mp? it's a merge of 1.18 into trunk to pick up a fix for bug 1316869. the changes end up being mechanical - replace EnvCommand.Init() with EnvCommand.EnsureEnvName() [13:48] <_mup_> Bug #1316869: juju sync-tools destroys the environment when given an invalid source [13:50] sinzui: what do you mean by pulling down? [13:52] wallyworld_, ic.. environFromName returns a cleanup/destroy env function [13:52] hazmat`: also, i was told trunk (1.19) didn't really suffer the issue like 1.18 did. 
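A schematic reconstruction of the hazard hazmat is pointing at above, not the real juju-core code: a helper hands back a destroy-the-environment cleanup function, and the bug pattern is invoking that cleanup for failures that have nothing to do with environment creation, which is how an invalid tools source ends up tearing down a working environment.

```go
// A schematic reconstruction of the failure mode, not the actual juju-core
// code: environFromName hands back a cleanup function that destroys the
// environment, and the cleanup fires on errors unrelated to creating it.
package main

import (
	"errors"
	"fmt"
)

// Environ is a stand-in for an opened provider environment.
type Environ struct{ name string }

func (e *Environ) Destroy() { fmt.Println("destroying environment", e.name) }

// environFromName opens the named environment and returns a cleanup function
// intended to roll back a freshly created environment.
func environFromName(name string) (*Environ, func(), error) {
	env := &Environ{name: name}
	return env, func() { env.Destroy() }, nil
}

// syncTools sketches the bug: cleanup is wired to run on any failure, so a
// bad source tears down a perfectly good environment.
func syncTools(envName, source string) (resultErr error) {
	env, cleanup, err := environFromName(envName)
	if err != nil {
		return err
	}
	defer func() {
		// Meant as "undo environment creation if the command fails", but it
		// also fires here, where the failure is just an invalid source.
		if resultErr != nil {
			cleanup()
		}
	}()

	if source == "invalid" {
		return errors.New("invalid tools source")
	}
	_ = env // ... copy tools into the environment's storage ...
	return nil
}

func main() {
	fmt.Println(syncTools("local", "invalid"))
}
```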
but 1.18 improved the checking and so we wanted that in trunk as well [13:52] wallyworld_, CI makes the the tarball and installer and tests it. There is little point of risking me making something different from what CI tested [13:53] sure. i was just confused my 1.18.2. did you mean 1.18.3? [13:55] I do mean 1.18.3 [13:56] sinzui: ok. hopefully someone will +1 that mp and 1.19.2 can be ready for release also [13:56] i still haven't heard back what the maas folks want to use [13:56] 1.18 or 1.19 [13:57] wallyworld_, should I just release both today and tomorrow? [13:58] sinzui: if you could that we be \o/ [13:58] would [13:58] wallyworld_: I can review it in a minute [13:58] then at least we have done our best to help them [13:58] natefinch: thank you [13:59] as soon as i land it i'm off to bed cause it's just turned into saturday here [14:00] voidspace: standup? [14:01] natefinch: coming [14:31] if anyone has a moment https://codereview.appspot.com/94330043 please? [14:36] natefinch: i gotta sleep, but if you did look at the mp (sorry rietveld rejected it) and +1 it, could you please also change to Approved so it lands. thanks [14:50] wallyworld_: haven't looked yet, looking now. Will reapprove after === Guest66499 is now known as bodie_ [15:06] sinzui: are there ci tests for this? https://bugs.launchpad.net/juju-core/+bug/1305026 [15:40] man I wish canonical admin would do dates in yyyy/mm/dd ... I see a vacation from 06/10/2014 to 07/10/2014 and I think someone's taking a month off :/ [15:42] natefinch: it would be nice if it would use localized dates, or else that person might inadvertedly ask for a month vacation :p [15:42] perrito666: that's why I specified yyyy/mm/dd because that's universal. But yes, localized would also be good. [15:49] perrito666, no. but since the restore and HA tests use the same test base, I think I can add a test by Monday that will work [15:49] sinzui: ill test by hand in the mean time, tx [16:31] natefinch, perrito666 Do either of you have a minute to review https://codereview.appspot.com/92170043 [16:31] sinzui: sure [16:31] sinzui: looks like dimitern already got it [16:31] i'm fast ;) [16:32] very fast. Thank you dimitern [16:35] sinzui, np [17:38] ERROR state/api: websocket.Dial wss://10.0.3.1:17070/: dial tcp 10.0.3.1:17070: connection refused [17:38] can someone help me get around this error? [17:38] I've done destroy-environment --force [17:39] then I don't have any juju processes running [17:39] then I rebootstrap, and always end up with this [17:46] do we have any HA docs? [17:46] jcastro: do you have the output of bootstrap with --debug? [17:47] http://pastebin.ubuntu.com/7422575/ [17:47] there ya go! [17:48] * perrito666 looks [17:48] jcastro: this is local, right? [17:48] yep [17:49] jcastro: do ps -Al | grep lxc in my experience, sometimes lxc gets stuck and multiple lxc processes end up getting hung up. Usually requires a reboot to fix [17:50] no containers running [17:51] should I reboot? [17:54] jcastro: did you reboot, or was that just good timing? :) [17:55] no I rebooted [17:55] did it work? [17:55] cleared out jenv file and the local directory in .juju [17:55] same issue [17:55] weird [18:03] natefinch, how about mongodb? [18:03] is there a daemon I can kill to start it over? [18:07] jcastro: ps -Al | grep mongod [18:08] yeah I had already killed that. 
:-/ [18:08] jcastro: this is my killlocal.sh : [18:08] #!/bin/bash [18:08] sudo rm /etc/init/juju* [18:08] sudo rm -rf ~/.juju/local [18:08] rm ~/.juju/environments/local.jenv [18:08] sudo killall mongod [18:08] sudo killall jujud [18:09] http://askubuntu.com/questions/403618/how-do-i-clean-up-a-machine-after-using-the-local-provider [18:09] I am doing this one now [18:09] jcastro: ahh, yeah, that cleans up some stuff I missed [18:10] (mainly lxc stuff, which is honestly most likely the culprit) [18:11] same issue. :-/ [18:12] what does which mongod tell you? [18:12] huh [18:12] mongodb-server not installed? [18:12] that can't be right [18:13] no, that's ok. juju-local puts it in /usr/lib/juju/ .... something something [18:13] which is not in the path [18:13] let me add the juju ppa [18:13] I am on 1.18.1 [18:20] natefinch, ok I have a deadline, so off to AWS I go! [18:20] thanks anyway === vladk is now known as vladk|offline === vladk|offline is now known as vladk === vladk is now known as vladk|offline [19:01] <_benoit_> Hi [19:01] <_benoit_> niemeyer: So I want to add support in juju-core for Outscale SAS [19:02] <_benoit_> niemeyer: they are a cloud provider implementing most of the EC2 API but not implementing S3 [19:02] _benoit_: Okay [19:02] <_benoit_> niemeyer: tim told me I could replace S3 by an http storage worker [19:02] _benoit_: Do they have any notion of storage API? [19:03] <_benoit_> niemeyer: for now I patched goamz and created a directory for the outscale provider [19:03] <_benoit_> niemeyer: no [19:03] <_benoit_> niemeyer: I also created their instance.go file [19:03] Sorry.. [19:03] my xchat is misbehaving.. let me reconnect [19:04] <_benoit_> niemeyer: but I don't see how I can reuse the EC2 code since most structure name are not capitalized (exported) [19:04] Okay [19:04] _benoit_: You say you patched goamz.. what for? [19:05] <_benoit_> niemeyer: to add their regions (I submited the patch) [19:05] _benoit_: Ah, okay [19:05] <_benoit_> niemeyer: I also added panic for nil endpoint in the other New constructor like you suggested [19:05] _benoit_: So it's plain EC2 sans S3? [19:06] <_benoit_> _benoit_: the future release is almost EC2 excepted a few bits (they are adding VPC in this release) [19:06] <_benoit_> s/_benoit_/niemeyer/ [19:06] <_benoit_> and yes no S3 and no storage [19:07] <_benoit_> niemeyer: that the compatibility matrix of the current version -> (without VPCs for now) https://wiki.outscale.net/display/DOCU/AWS+Compatibility+Matrix [19:08] _benoit_: No S3 and no storage? What's the difference between these? [19:08] <_benoit_> niemeyer: I mean dont even have an equivalent to S3 [19:08] _benoit_: Ah, okay [19:09] _benoit_: So, did you manage to make it work? [19:09] <_benoit_> niemeyer: No [19:09] <_benoit_> niemeyer: I am still stuck trying to find a way to "inherit" EC2 stuff so I don't have to copy paste [19:10] <_benoit_> I am new to go but have a C/Python background [19:10] _benoit_: Why do you need to copy & paste? [19:10] _benoit_: if they're really so similar, it might make sense to have them in the same package [19:10] _benoit_: I mean, more specifically.. [19:10] _benoit_: Why are you not using the ec2 backend itself? 
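For illustration only, the kind of reuse _benoit_ is after: embed an EC2-style environ and override just the storage, since Outscale speaks the EC2 API but has no S3 equivalent. None of these types exist in juju-core, and as natefinch notes just below, the real provider/ec2 internals are unexported, so this only shows the shape of the idea:

```go
// A purely hypothetical sketch: reuse an EC2-style environ by embedding it
// and swapping only the storage backend. None of these types are real
// juju-core types; endpoints and URLs are invented.
package main

import "fmt"

// Storage is the minimal storage contract a provider has to satisfy here.
type Storage interface {
	Get(name string) ([]byte, error)
	Put(name string, data []byte) error
}

// ec2Environ stands in for the EC2 provider implementation being reused.
type ec2Environ struct {
	endpoint string
}

func (e *ec2Environ) StartInstance(machineID string) error {
	fmt.Printf("starting instance for machine %s via %s\n", machineID, e.endpoint)
	return nil
}

// httpStorage stands in for the "http storage worker" suggested as an S3
// replacement.
type httpStorage struct {
	baseURL string
}

func (s *httpStorage) Get(name string) ([]byte, error) {
	return nil, fmt.Errorf("not implemented")
}

func (s *httpStorage) Put(name string, data []byte) error { return nil }

// outscaleEnviron reuses the EC2 behaviour by embedding, overriding only
// what differs: where tools and metadata get stored.
type outscaleEnviron struct {
	ec2Environ
	storage Storage
}

func (e *outscaleEnviron) Storage() Storage { return e.storage }

func main() {
	env := &outscaleEnviron{
		ec2Environ: ec2Environ{endpoint: "https://ec2.example-outscale-region.test"},
		storage:    &httpStorage{baseURL: "http://10.0.0.1:8040"},
	}
	_ = env.StartInstance("0") // EC2 code path, reused unchanged
	_ = env.Storage()          // Outscale-specific storage
}
```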
[19:11] niemeyer: I think he means that the juju provider/ec2 code is almost entirely non-exported, and he wants to reuse a substantial portion of it [19:11] natefinch: I get that, thanks [19:11] ok [19:11] <_benoit_> niemeyer: If you think patching the ec2 provider is the way I'll do it like that [19:12] _benoit_: It's certainly where I would start [19:12] <_benoit_> niemeyer: do you have any hint to do this ? [19:12] <_benoit_> niemeyer: Given that the differences are regions, instance types price and no S3 [19:12] _benoit_: To begin with, I'd try using Amazon's S3 with the EC2 endpoints of Outscale [19:13] _benoit_: This can get you to experience something actually working [19:13] <_benoit_> _benoit_: ok Is there a recipe to bootstrap a juju comptible ubuntu AMI ? [19:13] <_benoit_> _benoit_: maybe a list of packages to install on a ubuntu AMI ? [19:13] _benoit_: juju uses a plain Ubuntu AMI [19:14] <_benoit_> _benoit_: would debootstrap work for this ? [19:14] _benoit_: It's tailored for juju with the proper packages at startup time [19:14] _benoit_: By juju itslef [19:14] _benoit_: debootstrap? [19:14] <_benoit_> _benoit_: debootstrap is a way to create a debian or ubuntu system from scratch in a chroot [19:15] _benoit_: I know what it is, but why are we talking about it? [19:15] <_benoit_> _benoit_: Outscale have only debian AMI right now [19:15] <_benoit_> s/_benoit_/niemeyer/ [19:15] _benoit_: Oh, that'd be an issue [19:15] _benoit_: Does it support custom images? [19:15] <_benoit_> niemeyer: I was wondering if I could could create the first ubuntu image with debootstrap [19:16] _benoit_: I would write them and ask about Ubuntu images [19:16] _benoit_: Explaining you want to use juju [19:16] <_benoit_> niemeyer: I would attach and EBS format it and dump a root filesystem on the EBS [19:16] _benoit_: SUre, whatever works.. you'll need to talk to them to sort out how to get an Ubuntu image there [19:16] <_benoit_> niemeyer: Ok [19:17] <_benoit_> niemeyer: thanks I think I have the first step to do [19:17] _benoit_: Super, glad it was useful [19:17] _benoit_: Please let us know how it goes [19:17] <_benoit_> niemeyer: I'll do this monday and get back to you once it work to have some guidance for the rest [19:17] <_benoit_> niemeyer: which timezone are you in ? [19:20] <_benoit_> niemeyer: so I can catch you outside of lunch hours ;) [19:21] <_benoit_> niemeyer: super thanks [19:21] _benoit_: I'm in UTC-3 [19:21] <_benoit_> niemeyer: I am in UTC+1 [19:22] _benoit_: It's not too hard to find me here, but it's pretty certain you can find help coming here at any time [19:22] <_benoit_> ok [19:22] <_benoit_> have a nice week end [19:37] Can you refer to a machine later on in the list of MachineParams, i.e.: [{create machine 1}, {create lxc on machine 1}] in the same AddMachines API call? [19:40] (I'm guessing probably not due to needing to know the machine name before adding containers.) [19:46] Makyo: what are you trying to do? Just add a machine with a container on it? [19:47] natefinch, adding AddMachine support in the GUI. I was just wondering if the API call added machines in the MachineParams array one at a time synchronously. 
[19:50] Makyo: I'm pretty sure you can just do "Create machine with lxc" and it'll create a new machine with an lxc container on it [19:53] Makyo: AddMachineParams has a ContainerType you can set [19:55] natefinch, I think we have much of that mirrored on our end: https://github.com/juju/juju-gui/blob/develop/app/store/env/go.js#L921 I was just wondering since AddMachines takes an array of AddMachineParams. Are those expected to be completed in order? Create machine, create one container on that machine, create a second...&c [19:56] Makyo: I wouldn't rely on them being executed synchronously, in order. [19:57] natefinch, Alright. Will work with that assumption, then. [19:57] Thanks! [19:58] Makyo: right now they are, but that's an implementation detail that may change. [19:59] Makyo: we'd like to batch up the provider requests into a single request, so we just say "make these 50 machines" rather than saying "make this one machine" 50 times. [20:03] natefinch, alright, thanks. Will make note of that in our docs. [20:03] Makyo: welcome
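An illustrative sketch of the two-call pattern natefinch describes above: create the host machine first, then add the container once its ID is known, rather than relying on entries in a single AddMachines batch being processed in order. The types below are local stand-ins rather than the real juju params structs (the actual AddMachineParams does carry a ContainerType, as mentioned in the conversation):

```go
// Two calls instead of one ordered batch: make the host, then place the
// container on it. addMachineParams and addMachines are illustrative
// stand-ins for the real API types and call.
package main

import "fmt"

// addMachineParams mirrors the idea of the API's machine-creation params:
// a plain machine, or a container placed on an existing parent machine.
type addMachineParams struct {
	Series        string
	ContainerType string // e.g. "lxc"; empty for a plain machine
	ParentID      string // machine the container should live on
}

// addMachines stands in for the client-side AddMachines call; a real call
// goes over the API and returns the IDs of the machines it created.
func addMachines(args []addMachineParams) []string {
	ids := make([]string, len(args))
	for i, a := range args {
		if a.ContainerType != "" {
			ids[i] = fmt.Sprintf("%s/%s/%d", a.ParentID, a.ContainerType, i)
		} else {
			ids[i] = fmt.Sprintf("%d", i)
		}
	}
	return ids
}

func main() {
	// First call: create the host machine.
	hostIDs := addMachines([]addMachineParams{{Series: "trusty"}})

	// Second call: now that the host ID is known, place a container on it.
	containerIDs := addMachines([]addMachineParams{{
		Series:        "trusty",
		ContainerType: "lxc",
		ParentID:      hostIDs[0],
	}})

	fmt.Println("host:", hostIDs[0], "container:", containerIDs[0])
}
```

natefinch's batching point still applies: each call can carry many entries, just without assuming one entry can refer to a machine created earlier in the same batch.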