waigani | menn0: what does your charm do? | 00:00 |
---|---|---|
menn0 | I've found the source for the postgresql charm to be pretty helpful. it's written in Python and uses the charmhelpers package. | 00:00 |
menn0 | https://github.com/charms/postgresql | 00:01 |
menn0 | waigani: at thumper's suggestion, I'm productionising the github-watcher charm | 00:01 |
menn0 | waigani: what are you going to do? | 00:03 |
waigani | menn0: good question! At some point I'd like to make a SPARQL endpoint charm | 00:04 |
waigani | menn0: but I think I'll start with the Vanilla forum in the tut | 00:04 |
menn0 | waigani: sounds good | 00:04 |
wallyworld_ | axw: can you ping me when you're around for a quick catch up about 1.18.3? | 00:25 |
menn0 | i'm trying to deploy juju-gui using the local provider on my machine and I'm getting this: 2014-05-09 00:36:50 INFO juju.worker runner.go:260 start "api" | 00:38 |
menn0 | 2014-05-09 00:36:50 INFO juju.state.api apiclient.go:198 dialing "wss://10.0.3.1:17070/" | 00:38 |
menn0 | 2014-05-09 00:36:50 INFO juju.state.api apiclient.go:206 error dialing "wss://10.0.3.1:17070/": websocket.Dial wss://10.0.3.1:17070/: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "juju-generated CA for environment \"local\"") | 00:38 |
menn0 | 2014-05-09 00:36:50 ERROR juju.worker runner.go:218 exited "api": timed out connecting to "wss://10.0.3.1:17070/" | 00:38 |
menn0 | 2014-05-09 00:36:50 INFO juju.worker runner.go:252 restarting "api" in 3s | 00:38 |
menn0 | is this a regression? i've deployed juju-gui on my machine successfully before | 00:38 |
menn0 | i'm using a relatively recent build | 00:38 |
menn0 | (yesterday) | 00:39 |
menn0 | and wouldn't you know it... the deploy just worked. | 00:40 |
menn0 | nevermind I guess | 00:43 |
menn0 | it just took a really long time and I thought that error was related. | 00:43 |
wallyworld_ | menn0: local provider takes a long time for the first lxc container to go from pending since it has to download the ubuntu image file | 00:58 |
axw | wallyworld_: ping | 00:58 |
wallyworld_ | axw: hiya, quick hangout? | 00:58 |
menn0 | wallyworld_: I knew that but it took like 20 mins. perhaps my link is slow today. | 00:58 |
axw | wallyworld_: sure | 00:58 |
wallyworld_ | https://plus.google.com/hangouts/_/7ecpiklmm34hr8t7t02jinl5r0 | 00:59 |
axw | wallyworld_: same as yesterday... I'll try starting one | 01:00 |
axw | wallyworld_: https://plus.google.com/hangouts/_/72cpjktbfqddemso04puald08o | 01:01 |
wallyworld_ | axw: my network has gone slow, will keep trying to get in | 01:01 |
axw | wallyworld_: I don't get any warning about lxc-use-clone | 01:41 |
wallyworld_ | hmmmm | 01:41 |
wallyworld_ | it may have been his set up | 01:41 |
wallyworld_ | maybe he accidentally used an old client to bootstrap | 01:42 |
wallyworld_ | that would explain it | 01:42 |
thumper | wallyworld_: so what is this standup thing? | 01:45 |
thumper | that time is not good for me... | 01:45 |
wallyworld_ | thumper: it's on the calendar | 01:45 |
thumper | I know, but it goes into the past | 01:45 |
thumper | and looked like a mistake to me | 01:45 |
wallyworld_ | i assumed someone added it | 01:45 |
thumper | it looks like a dupe of the other calender item | 01:45 |
wallyworld_ | ok | 01:46 |
thumper | I'd be happy to do a standup sometime in the next hour | 01:47 |
wallyworld_ | might get messy with two teams doing it though if we want it to be short | 01:48 |
thumper | agreed | 01:55 |
thumper | mramm: o/ | 01:55 |
thumper | mramm: are you real or a ghost in the machine? | 01:55 |
mramm | hey hey | 01:55 |
mramm | mostly real | 01:55 |
rick_h_ | ghost ramm I like it | 02:03 |
stokachu | wallyworld_: yea my latest test didn't report any issues with the lxc-use-clone config item error | 02:15 |
stokachu | so it was probably my setup initially | 02:15 |
sinzui | wallyworld_, thumper CI is broken. looks like we cannot get errgo from github | 02:16 |
sinzui | http://juju-ci.vapour.ws:8080/job/build-revision/1335/console | 02:16 |
thumper | oh fuck | 02:17 |
thumper | that was me | 02:17 |
thumper | bugger | 02:17 |
sinzui | oh, and the other branch for 1.18 is broken the same way | 02:17 |
sinzui | http://juju-ci.vapour.ws:8080/job/build-revision/1336/console | 02:17 |
thumper | shit shit shit | 02:17 |
sinzui | thumper, did you break trunk and stable? | 02:17 |
thumper | yes | 02:17 |
sinzui | That's hard to do. | 02:17 |
thumper | not if you delete the repo | 02:17 |
sinzui | Oh dear | 02:18 |
wallyworld_ | sinzui: you'll see i've asked for input as to what version we want for next week | 02:19 |
sinzui | thumper, the failure is in " go get -v -d launchpad.net/juju-core/..." I don't think I can hack around that | 02:19 |
* thumper hacks... | 02:19 | |
sinzui | wallyworld_, 1.18.3? | 02:20 |
thumper | sinzui: you should be back in business | 02:20 |
wallyworld_ | sinzui: i had thought so but then only 1.19.2 will have the new maas name placement directive | 02:20 |
* thumper will be more careful before blowing shit away | 02:20 | |
sinzui | wallyworld_, I wrote release notes for 1.18.3 and 1.19.2 | 02:21 |
* sinzui retries the trunk build | 02:21 | |
wallyworld_ | sinzui: ok, thank you that's great. so when they tell us what they want, we can just flip the switch on either one | 02:21 |
sinzui | wallyworld_, if we get CI building again. I will manually trigger 1.18 if a new revision doesn't land as I hammer trunk to build | 02:22 |
wallyworld_ | axw: if we were to go with 1.19.2, we will need a fix for bug 1313785, which i think you are familiar with? | 02:22 |
_mup_ | Bug #1313785: Can't SSH with "juju ssh" command to EC2 instance. <regression> <ssh> <juju-core:Triaged> <juju (Ubuntu):Triaged> <https://launchpad.net/bugs/1313785> | 02:23 |
wallyworld_ | sinzui: have you eaten all the Violet Crumble yet? | 02:23 |
sinzui | thumper, yes! CI is on to the next phase to unittests and packaging | 02:23 |
thumper | phew | 02:23 |
sinzui | wallyworld_, Yes. the Cherry Ripes were stolen. | 02:24 |
wallyworld_ | lol | 02:24 |
wallyworld_ | i'll bring some more to the next sprint | 02:24 |
axw | wallyworld_: ok. goddamn ssh | 02:24 |
sinzui | My youngest daughter exclaimed Holy Sh*t when she had one, while the older daughter asked where the Cherry Ripes had been all her life | 02:25 |
wallyworld_ | axw: thanks, seemed like you had a handle on it already | 02:25 |
sinzui | I hid one for Saturday | 02:25 |
wallyworld_ | lol | 02:25 |
wallyworld_ | sinzui: i can be your dealer :-) | 02:26 |
sinzui | wallyworld_, 6 cherry ripes cost 18.00 USD from amazon. You could make a lot of money sending them to the US by boat | 02:26 |
wallyworld_ | $18 !!!!!! | 02:27 |
wallyworld_ | wtf | 02:27 |
wallyworld_ | they cost about $10 here | 02:27 |
wallyworld_ | and they often go on sale for about $6 which is when i buy them for you :-) | 02:27 |
wallyworld_ | that's for 6 | 02:28 |
* axw wishes he had a cherry ripe now | 02:31 | |
* wallyworld_ does too :-( | 02:31 | |
wallyworld_ | sinzui: bug 1316174. the local provider explicitly disables changing apt sources. do you think we should make juju-core require cloud-tools in the packaging rules then? | 02:58 |
_mup_ | Bug #1316174: 'precise-updates/cloud-tools' is invalid for APT::Default-Release <juju-core:Triaged> <https://launchpad.net/bugs/1316174> | 02:58 |
* sinzui is thinking | 02:59 | |
sinzui | wallyworld_, cloud-tools, being an archive, must be added. Isn't this a cloud-init responsibility? I thought all precise cloud images had cloud-tools as an archive | 03:00 |
sinzui | wallyworld_, There is nothing we can package to make an archive be added | 03:01 |
wallyworld_ | sinzui: but this is for someone's local machine | 03:01 |
sinzui | wallyworld_, hmm, oh | 03:01 |
wallyworld_ | where they just have not added cloud-tools to their own apt sources | 03:01 |
wallyworld_ | they are using local provider and local provider does not dick with their apt set up on purpose | 03:02 |
sinzui | wallyworld_, this is like kiko's production precise running lxc. He didn't have cloud-tools and we require it | 03:02 |
sinzui | so Juju on precise needs to require that the archive be added | 03:03 |
sinzui | hmm | 03:03 |
wallyworld_ | maybe we should error early with a nice message then? | 03:03 |
sinzui | wallyworld_, I think so | 03:03 |
wallyworld_ | ok, will do | 03:03 |
wallyworld_ | i'll comment on the bug | 03:03 |
sinzui | wallyworld_, I copied lxc and other packages from ctools to ensure the right mongo and lxc were available in precise for users that upgrade without ctools | 03:04 |
wallyworld_ | oh, you mean those packages are in precise universe or something? | 03:04 |
wallyworld_ | or are they in the juju ppa? | 03:05 |
axw | wallyworld_: can you please review? https://codereview.appspot.com/92140043 | 03:13 |
wallyworld_ | axw: yeah, already looking :-) | 03:13 |
axw | ta | 03:13 |
axw | I will get onto backporting the lxc branch | 03:13 |
wallyworld_ | great thanks :-) | 03:13 |
sinzui | wallyworld_, many precise local users were just using our juju ppa. It had older packages than the ones in ctools. I think I am the de facto maintainer of the ppa. I am backporting the same packages in ctools to our ppa to reduce the upgrade problems. The bug we are looking at was introduced when we decided to make juju be explicit about where the mongodb server comes from. | 03:16 |
sinzui | wallyworld_, I think checking for and requiring the ctools archive is sane. Many users didn't know they were doing risky deployments | 03:17 |
wallyworld_ | sinzui: ok, great thanks. i think that bug can definitely be deferred to next stable release then | 03:18 |
wallyworld_ | axw: so you're sure you got all the places where EnsureEnvName() is required? | 03:23 |
axw | wallyworld_: pretty sure | 03:23 |
wallyworld_ | ok, i'll see if i can spot any others | 03:23 |
jcw4 | is there a URL I can view CI builds at? | 03:31 |
sinzui | jcw4, At this hour they are at http://juju-ci.vapour.ws:8080/ | 03:37 |
jcw4 | sinzui: thank you | 03:42 |
jcw4 | sinzui: does the CI url change regularly? | 03:43 |
sinzui | jcw4, It has in the past. We deploy it with juju into the clouds that meet our needs. I purchased a domain to stop the nonsense of url changes. | 03:44 |
sinzui | We will probably remove the ports from the url next week | 03:44 |
jcw4 | sinzui: I see | 03:45 |
jcw4 | sinzui: thanks | 03:45 |
sinzui | wallyworld_, thumper I haven't announced the CI URL yet, but you might note I got canonistack running again. I got juju to trans-cloud machines | 03:46 |
thumper | heh, nice | 03:47 |
wallyworld_ | great :-) | 03:47 |
sinzui | Next week I will dare to provision my three machines in the lab. That will be a networking challenge | 03:47 |
wallyworld_ | indeed | 03:53 |
waigani | has anyone hit this before: http://paste.ubuntu.com/7419545/ | 04:12 |
waigani | juju debug-log keeps logging: "machine-0: 2014-05-09 04:14:17 ERROR juju.worker runner.go:218 exited "peergrouper": cannot get replica set status: cannot get replica set status: not running with --replSet" | 04:15 |
menn0 | waigani: I get that too. From bits I overheard at the sprint it's related to the HA changes but I'm not sure how much of a problem it is. | 04:35 |
axw | wallyworld_: lxc change is in both 1.18 and trunk now | 04:35 |
wallyworld_ | whoohoo | 04:35 |
menn0 | waigani: it would be nice if it would go away! | 04:35 |
axw | wallyworld_: I've fixed the SSH thing, just gonna test it on Azure before pushing | 04:35 |
wallyworld_ | awesome | 04:35 |
wallyworld_ | i'm just about to propose a fix for another 1.19 bug | 04:35 |
waigani | menn0: I destroyed the environ, re-deploying everything, ensuring each agent is up before deploying the next | 04:36 |
waigani | so far so good | 04:36 |
wallyworld_ | menn0: waigani: that peergrouper error is log noise due to running on local provider with ha enabled - it's on "someone's" todo list to clean up | 04:36 |
axw | wallyworld_: I'll fix up the EnsureEnvName stuff in trunk next, unless there's something more pressing | 04:36 |
menn0 | wallyworld_: cool | 04:37 |
wallyworld_ | axw: no, that would be next thanks | 04:37 |
waigani | wallyworld_: ah, good to know | 04:37 |
waigani | menn0: how did you go today? Did you get a working charm? | 04:37 |
menn0 | waigani: after getting over a few mental hurdles I now have the github-watcher charm ported to Python using charmhelpers. It's cleaner and more robust but still needs more love. | 04:39 |
menn0 | waigani: it's been fun though! | 04:39 |
menn0 | waigani: how did you get on? | 04:39 |
waigani | yay \o/ | 04:39 |
waigani | menn0: agent-state-info: 'hook failed: "install"' | 04:40 |
waigani | hehe, that is my latest | 04:40 |
menn0 | waigani: debug logging is your friend :) | 04:41 |
waigani | menn0: with less noise, yes | 04:41 |
waigani | menn0: when you say charmhelpers, do you mean the tools e.g. charm create? | 04:42 |
waigani | or the hook commands e.g. relation-get ... | 04:43 |
menn0 | waigani: to avoid all the peergrouper noise and just see the logs for the thing you're working on, look at /var/log/juju/unit-<the unit for your charm>.log. that's what i've been looking at. | 04:43 |
waigani | menn0: awesome tip, thanks :) | 04:44 |
menn0 | waigani: I mean the Python package called charmhelpers. https://launchpad.net/charm-helpers | 04:45 |
waigani | oh cool, a whole new world - that I'm NOT going to look at right now! | 04:45 |
menn0 | waigani: it provides a Python API for doing charm related things | 04:45 |
wallyworld_ | thumper: ping | 04:46 |
wallyworld_ | menn0: waigani: debug log also supports filtering, include exclude etc | 04:54 |
waigani | ah cool, will try | 04:54 |
wallyworld_ | so you can look at info for a unit or several, and the machine it runs on etc | 04:54 |
wallyworld_ | see the recent release notes | 04:54 |
axw | wallyworld_: the series stuff is just cut and paste right? | 04:55 |
wallyworld_ | axw: yep | 04:55 |
wallyworld_ | with a constant removed from testing | 04:55 |
wallyworld_ | axw: i can't see from the diff where we are testing that a pub address is used if proxy=false, but a private address is used if proxy=true | 05:27 |
axw | wallyworld_: there's no explicit test for the latter; will see if I can add one | 05:28 |
wallyworld_ | that would be great, thanks | 05:28 |
axw | wallyworld_: I changed setAddress to add an internal address, which broke the test before my fix | 05:28 |
wallyworld_ | yeah, i saw that we test the proxy=false case | 05:29 |
axw | wallyworld_: problem with proxy=true is it causes a second invocation of juju | 05:29 |
axw | wallyworld_: will see how practical it is tho | 05:29 |
wallyworld_ | ok | 05:29 |
thumper | wallyworld_: whazzup? | 05:39 |
wallyworld_ | thumper: forgot to tell you - on monday we need to enter cards for the next 2 weeks' worth of work | 05:40 |
thumper | ah... ok | 05:40 |
thumper | will do | 05:40 |
wallyworld_ | i would think we'd need lanes for each squad | 05:40 |
* thumper thinks | 05:40 | |
thumper | prolly | 05:40 |
wallyworld_ | but i ain't got editing permissions | 05:40 |
wallyworld_ | i'll bug someone later | 05:41 |
wallyworld_ | aveagoodweekend | 05:42 |
thumper | kk | 05:43 |
thumper | you too | 05:43 |
wallyworld_ | will try | 05:43 |
thumper | chances are I'll be hacking away | 05:43 |
thumper | on my personal project | 05:43 |
wallyworld_ | yeah, i gots stuff to do as well | 05:43 |
wallyworld_ | plus mother's day sunday | 05:44 |
thumper | oh yeah... | 05:44 |
thumper | must not forget... | 05:44 |
wallyworld_ | must... not.... | 05:44 |
wallyworld_ | me -> buy dinner food | 05:44 |
=== vladk|offline is now known as vladk | ||
axw | wallyworld_: updated tests | 06:35 |
axw | fwereade: I don't understand your comment about .dns vs. .internal | 07:07 |
axw | fwereade: it's using .internal because that's what the default is- to proxy and use the internal address | 07:08 |
fwereade | axw, right -- I'm just saying that we should probably test both code paths, because they're both viable results in different situations | 07:10 |
fwereade | axw, especially when the bug you're working on is about picking one or the other | 07:10 |
axw | fwereade: ok. I thought it'd be enough to do it in the common code, but I can do it there too | 07:12 |
fwereade | axw, this feels to me like a case where the existing testing wasn't quite covering enough, and so it's worth testing a layer up | 07:13 |
fwereade | axw, there's a lot that sucks about testing the same stuff at every layer too | 07:13 |
axw | fwereade: indeed it was not covering enough. there was some code that very subtly changed the machine addresses, took me quite a while to figure out why it wasn't working like I thought it should | 07:14 |
fwereade | axw, but in the absence of easy mocking we just have to find a balance between testing just-one-path and all-exponential-possibilities, and I think it's reasonable to shade a little closer to the latter when fixing actual bugs | 07:14 |
axw | fwereade: nps | 07:14 |
fwereade | axw, (not that mocking is a panacea or anything but it certainly helps to control the exponential growth) | 07:15 |
vladk | jam, fwereade, axw: hi | 07:20 |
axw | vladk: heya | 07:21 |
fwereade | vladk, morning | 07:22 |
wallyworld_ | axw: hi, i just got back from the vet | 07:30 |
wallyworld_ | looks like you got your +1 | 07:30 |
axw | wallyworld_: hey. it's cool, fwereade has +1 and I am updating it now | 07:31 |
axw | thanks | 07:31 |
wallyworld_ | great | 07:31 |
wallyworld_ | our poor dog got desexed today | 07:31 |
wallyworld_ | fwereade: for the kanban board,are we going to create lanes for each squad to put the 2 week cards? | 07:32 |
fwereade | wallyworld_, I'm wondering if maybe we actually want a kanban board per team | 07:33 |
fwereade | wallyworld_, harder to see what reviews are in flight, but there's always +activereviews | 07:33 |
fwereade | wallyworld_, green have their own already | 07:34 |
fwereade | wallyworld_, and honestly it felt a bit clunky with just 2 teams sharing a board before | 07:34 |
fwereade | wallyworld_, 4 might be a couple of bridges too far | 07:34 |
wallyworld_ | yeah i agree | 07:35 |
wallyworld_ | fwereade: actually, i have soccer a bit later tonight so will miss the standup. there's a couple of things i wouldn't mind discussing. did you have time for a hangout anytime over the next hour or so? | 07:38 |
fwereade | wallyworld_, I was planning to swap today for my public holiday during the sprint, so let's chat now | 07:44 |
fwereade | wallyworld_, well 5 mins please? | 07:44 |
wallyworld_ | fwereade: it can wait, don't want to take up your day | 07:44 |
fwereade | wallyworld_, let's do it anyway, start a hangout and I'll be with you v soon | 07:45 |
wallyworld_ | ok | 07:45 |
jam1 | test suite bugfix: for bug #1301353 https://codereview.appspot.com/97240043 | 07:51 |
_mup_ | Bug #1301353: juju test suite should use juju-mongodb <tech-debt> <trusty> <juju-core:In Progress by jameinel> <juju-core 1.18:In Progress by jameinel> <https://launchpad.net/bugs/1301353> | 07:51 |
jam1 | fwereade: well appropriate Layering is (IMO) better than mocking, you just need a way to have an API Server that doesn't (for example) actually require a mongodb underneath | 07:53 |
fwereade | jam1, I use the term mocking pretty loosely -- in my mind, that's just a mocked api server | 07:53 |
jam1 | that was our goals with interface based Doubles, though we only partially did it for providers | 07:53 |
fwereade | jam1, indeed so | 07:53 |
jam1 | fwereade: sure, I just heavily distinguish Mocks from Doubles | 07:53 |
jam1 | Doubles can be tested alternative implementations, Mocks are "stub out just this call with this pre-canned response" | 07:54 |
jam1 | fwereade: anyway, if you're off today, don't bother, but the above code review should be reasonably small | 07:54 |
jam1 | and would be nice if we are going to have a 1.18.3 | 07:54 |
fwereade | jam1, yeah, I guess it's just that my experience with doubles led me to consider them to be less-reliable mocks ;p | 07:55 |
jam1 | fwereade: if you actually have interface testing and run your test against a double that runs against the actual you can be more confident that you actually still match reality | 07:55 |
fwereade | jam1, sure -- I've just found that when things change subtly it's easier to fix mocks than doubles | 07:56 |
fwereade | jam1, not trying to say it's fundamentally unworkable or anything | 07:56 |
jam1 | fwereade: sure, it is easier to change mocks, but it is harder to know *when* it changed and should have broken but didn't. Though again, doubles expect that you actually have good test coverage :) | 07:57 |
fwereade | jam1, if you get the double-testing *right* it's easy to tell when they're no longer accurate, but I don't think there's a *fundamental* reason it's easier to test the doubles than the mocks | 07:57 |
jam1 | it's sort of "it's possible with a Double, but harder" | 07:57 |
fwereade | jam1, yeah | 07:57 |
jam1 | fwereade: you can write a test suite that runs against the doubles and the real thing | 07:58 |
jam1 | Mocks tend to not be very testable to compare against something else | 07:58 |
fwereade | jam1, well the biggest problem with mocks is the skew, so you have to have *some* way of testing them, but it ends up being more implicit than explicit | 07:59 |
fwereade | jam1, and we all know about explicit being better than implicit :) | 07:59 |
fwereade | jam1, LGTM | 08:01 |
jam1 | fwereade: if they are properly tested, then IMO they're closer to Doubles | 08:03 |
jam1 | fwereade: I've certainly seen mostly happy-to-skew Mocks | 08:04 |
fwereade | jam1, hence my fuzzy handwaving over terminology -- doubles are often overcomplicated, mocks are often undertested, I suppose I'm happiest with something in the middle :) | 08:05 |
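The Mock/Double distinction being drawn in this exchange can be sketched in Python (all names below — `DbStore`, `DictStore`, `check_contract` — are illustrative, not from juju's actual test suite, which is in Go): a mock cans one response for one call, while a double is an independent working implementation that can be run through the same contract tests as the real thing, which is how skew gets caught.

```python
# Illustrative sketch of the Mock-vs-Double distinction (hypothetical
# names, not juju code).
from unittest import mock


class DbStore:
    """Stand-in for the 'real' implementation (imagine it talks to mongodb)."""
    def __init__(self):
        self._rows = {}

    def put(self, key, value):
        self._rows[key] = value

    def get(self, key):
        return self._rows[key]


class DictStore:
    """A test double: an independent, fully working in-memory implementation.
    Because it honours the whole contract, the same behavioural tests can
    run against both the double and the real thing."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]


def check_contract(store):
    # Behavioural checks shared by the real implementation and the double.
    store.put("series", "trusty")
    assert store.get("series") == "trusty"


check_contract(DbStore())    # the real thing
check_contract(DictStore())  # the double, held to the same contract

# A mock, by contrast, only knows the one canned answer it was given;
# nothing verifies the canned answer still matches the real store.
stubbed = mock.Mock()
stubbed.get.return_value = "trusty"
```

This is the "interface testing" jam1 describes: when the contract test runs against both implementations, a double that drifts from reality fails loudly, whereas a stale mock keeps happily returning its canned value.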
wallyworld_ | jam1: i'm looking to bump bug 1307643 off the 1.18 roadmap and just fix for 1.20. how do you feel about that? i sorta think it's a corner case that we don't really need to deal with on 1.18 | 08:51 |
_mup_ | Bug #1307643: juju upgrade-juju --upload-tools does not honor the arch <upgrade-juju> <juju-core:Triaged> <juju-core 1.18:Triaged> <https://launchpad.net/bugs/1307643> | 08:51 |
jam1 | wallyworld_: given that I think we are actually unlikely to get 1.20 in trusty, I think I'd rather it stayed in there. | 08:52 |
jam1 | wallyworld_: but it certainly doesn't block 1.18.3 | 08:52 |
wallyworld_ | hmmm. ok. i sorta thought folks would just use the stable ppa moving forward | 08:52 |
jam1 | (I may be being pessimistic here, but our relationship and track record with Distro isn't very heartwarming) | 08:52 |
jam1 | wallyworld_: you don't "Just Get" the stable ppa or cloud-archive:tools on your system | 08:53 |
jam1 | so 1.18 is going to be the "out-of-the-box" experience people have with Juju | 08:53 |
wallyworld_ | sure, i guess i figured most folks using juju would be ok with adding an apt source given the nature of what juju does | 08:54 |
jam1 | wallyworld_: people may be comfortable with it, people just doing "what is this juju thing, and apt-get install juju" won't get what we want them to | 08:54 |
jam1 | wallyworld_: *hopefully* upload-tools isn't in their default flow, but I fear that it already is | 08:55 |
wallyworld_ | true, but how many of those would run into that particular bug? | 08:55 |
wallyworld_ | well, we *really* want to discourage using upload-tools | 08:55 |
wallyworld_ | i do think people using upload-tools equals the people who would get juju from the ppa | 08:55 |
jam1 | wallyworld_: until we actually address the use case (just work without scouring the internet) | 08:55 |
jam1 | we can't really avoid it | 08:56 |
jam1 | its how people testing it on a private cloud are likely to do it, but I think its fair enough | 08:56 |
jam1 | it does require mixed archs which is rare enough | 08:56 |
wallyworld_ | yeah, i figured the nature of the bug meant that people trying out juju wouldn't come across it | 08:57 |
wallyworld_ | and those that did would be power users | 08:57 |
wallyworld_ | who would install from ppa anyway | 08:57 |
jam1 | wallyworld_: just to close the loop in case it wasn't clear, I'm ok with pulling it out of 1.18 | 09:05 |
jam1 | after discussing it I think its reasonable | 09:05 |
wallyworld_ | ah ok, thanks. i did think you wanted it left. thanks for clarifying | 09:06 |
wallyworld_ | jam1: fyi, we'll be cutting either a 1.18 or 1.19 release tomorrow depending on what the maas folks say they want to use at ODS. we've put in fast lxc cloning for them (not just for local but everywhere) and driven down the bugs on both so we can pull the trigger. curtis has release notes prepared for both. i'm just waiting to hear back what they want to run with | 09:07 |
jam1 | wallyworld_: I thought we had to cut a 1.18 because of the Critical "sync-tools can destroy your environment" bug. | 09:09 |
jam1 | regardless of what we do for ODS | 09:09 |
wallyworld_ | jam1: the fix for that is now both in 1.18 and 1.19 | 09:09 |
jam1 | wallyworld_: sure but we need it to go out to everyone else, too | 09:09 |
jam1 | (we have a critical issue in our current stable release, we should release a new version with a fix) | 09:10 |
wallyworld_ | jam1: yes, but curtis can only do one release at a time and we need to get ODS sorted out | 09:10 |
wallyworld_ | and that issue is only a very specific bug right? | 09:10 |
jam1 | wallyworld_: sure, I'm fine with delaying to do a different release, but we still need 1.18.3 regardless | 09:11 |
wallyworld_ | yes sure | 09:11 |
jam1 | wallyworld_: its a "nuke your running environment without running destroy-environment" bug. still pretty serious | 09:11 |
wallyworld_ | jam1: understood, it's just about what order we release in | 09:11 |
wallyworld_ | if we do a 1.19 for ODS, then 1.18.3 will have to come straight after | 09:11 |
jam1 | wallyworld_: I'd also like to stipulate that if we don't hear from them, just do 1.18.3 to get progress on it. (assuming that it takes 1 day to do a release, we can do whatever they like today but 1.18 if they don't say anything) | 09:12 |
wallyworld_ | yeah, that's what i was thinking, but we really need to know if they want the maas name placement stuff which is missing from 1.18 | 09:13 |
jam1 | wallyworld_: and HA | 09:13 |
wallyworld_ | well, i guess so. but i didn't *think* that was going to be the main focus | 09:13 |
jam1 | wallyworld_: alexis was telling us "we must have HA in by Apr 25 so that jamespage can put the demo together for ODS". | 09:14 |
jam1 | that may have been tempered | 09:14 |
wallyworld_ | i asked little mark and dan and alexis 12 hours ago so hopefully they'll get back to me | 09:14 |
wallyworld_ | if not, they get what they get, right? | 09:15 |
jam1 | I did think that 1.19 was the plan for ODS | 09:16 |
jam1 | but only as a 3rd person observer | 09:16 |
jam1 | I haven't been in any of the actual conversations around it | 09:16 |
wallyworld_ | then adam indicated they might want 1.18 so we're getting mixed messages | 09:16 |
wallyworld_ | me either | 09:16 |
wallyworld_ | hence i asked them explicitly to tell us - we're good to go on both | 09:17 |
* jam1 realizes that with his wife away, his ability to separate himself from work-time diminishes greatly | 09:18 | |
wallyworld_ | yep :-) | 09:19 |
voidspace | hmmm... I have that issue where I blow away ~/.juju and then generate-config fails because it somehow creates (without sudo) a ~/.juju owned by root | 09:19 |
voidspace | this is on a different machine | 09:20 |
wallyworld_ | axw: we may want to land the 1.18 fix for bug 1316869 straight into 1.19 so we can release if required, and do a followup to clean it up | 09:20 |
_mup_ | Bug #1316869: juju sync-tools destroys the environment when given an invalid source <juju-core:In Progress by axwalk> <juju-core 1.18:Fix Committed by axwalk> <https://launchpad.net/bugs/1316869> | 09:20 |
voidspace | rebooting | 09:21 |
axw | wallyworld_: 1.19 isn't really affected that badly | 09:21 |
axw | wallyworld_: it was already partially fixed | 09:21 |
jam1 | voidspace: I've seen that, and even when it isn't using local. I was pretty surprised by it | 09:21 |
voidspace | jam1: I have no environment and I *was* using openstack | 09:22 |
axw | wallyworld_: there's one case where it could occur, which is if someone had an environments.yaml with multiple environments and no specified default (and no current-environment) | 09:22 |
jam1 | wallyworld_: it should be possible to just merge 1.18 into trunk. I've been keeping it in sync | 09:22 |
voidspace | jam1: even happens in a new shell that I've not used sudo in | 09:22 |
jam1 | voidspace: yeah, I never dug into it, but I have seen it. | 09:22 |
jam1 | voidspace: I have the feeling we aren't actually doing sudo | 09:22 |
axw | wallyworld_: I'm in the process of fixing up trunk... but if you really want to merge it forwards, that's fine | 09:22 |
voidspace | rebooting as that was the only way I fixed it at the sprint | 09:22 |
voidspace | odd | 09:22 |
jam1 | maybe we are trying to chown it and accidentally supplying UID=0 | 09:22 |
voidspace | right, that sounds likely | 09:23 |
wallyworld_ | axw: i just want to ensure that by your EOD, we have everything in place to pull the trigger on 1.19 if needed | 09:23 |
jam1 | like we were trying to fix the old bug about creating it as root, and the old fix to chown it back to $REAL_USER is still running | 09:23 |
jam1 | only REAL_USER isn't set, so we get UID=0 | 09:23 |
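The failure mode jam1 is hypothesising — a chown-back-to-the-invoking-user step that silently falls back to uid 0 when the sudo-related environment variable is unset — can be sketched like this. Juju itself is Go, and the variable names (`SUDO_UID`) and helpers below are purely an illustration of the suspected bug pattern, not juju's actual code:

```python
import os


def buggy_owner_uid(environ):
    # Suspected pattern: default to 0 when SUDO_UID is absent, so a
    # later chown silently hands ~/.juju to root.
    return int(environ.get("SUDO_UID", 0))


def safe_owner_uid(environ):
    # Safer: only trust SUDO_UID when it is actually present,
    # otherwise keep the current real uid.
    raw = environ.get("SUDO_UID")
    return int(raw) if raw is not None else os.getuid()
```

With `buggy_owner_uid({})` returning 0, a subsequent `os.chown(path, 0, 0)` would produce exactly the root-owned `~/.juju` voidspace is seeing, even in a shell that never ran sudo.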
axw | wallyworld_: IMHO it is not critical for 1.19, so it could be cut now | 09:24 |
wallyworld_ | axw: ok, what i'll do after soccer is just merge 1.18 into trunk so they're consistent | 09:25 |
axw | okey dokey | 09:25 |
perrito666 | morning | 09:57 |
voidspace | natefinch: morning | 10:21 |
natefinch | voidspace: morning | 10:21 |
voidspace | wwitzel3: ping | 10:27 |
* perrito666 has restore working except for a quoting nightmare on the bash script | 10:37 | |
natefinch | perrito666: awesome! | 10:38 |
perrito666 | I am thinking that it might be healthier to just send the script and run it remotely instead | 10:39 |
natefinch | perrito666: what are you doing now? | 10:43 |
perrito666 | natefinch: well, currently the bash script is just a text template that i generated and being run over ssh | 10:44 |
natefinch | perrito666: do you know about backticks for quoted strings? | 10:46 |
perrito666 | I do, although the resulting script is very oddly quoted | 10:48 |
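One standard way out of this kind of quoting nightmare — sketched here in Python rather than the Go text template the restore script is actually generated from — is to quote every argument individually before handing the string to `ssh`, so the remote shell re-parses it back into exactly the original words:

```python
import shlex


def remote_command(args):
    """Build a command string safe to pass through `ssh host "<cmd>"`:
    the remote shell will re-parse it, so each argument is quoted
    individually with shlex.quote."""
    return " ".join(shlex.quote(a) for a in args)


# Arguments containing quotes, dollars, and backticks survive intact.
cmd = remote_command(["echo", "it's a $HOME with `backticks`"])
```

Alternatively, as suggested above, copying the script over and running it remotely sidesteps one full layer of shell re-parsing, which is often the healthier option for anything non-trivial.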
perrito666 | is standup at extremely early times happening today? | 10:49 |
perrito666 | or we already moved to our own teams one? | 10:49 |
natefinch | I was assuming we were doing our own today | 10:50 |
wwitzel3 | voidspace: pong | 10:55 |
voidspace | wwitzel3: hey, hi | 10:55 |
=== nessita_ is now known as nessita | ||
perrito666 | sinzui: | 12:28 |
perrito666 | amazon restored | 12:28 |
perrito666 | PASS | 12:28 |
perrito666 | :D | 12:28 |
sinzui | perrito666, oh, should I re-enable the backup and restore test? | 12:28 |
perrito666 | sinzui: nop, I will propose the fix now and let you know when it's landed :D just wanted to share the joy with someone other than my neighbors, who are certainly happy that I scream with joy in the morning before they get up | 12:29 |
sinzui | :) | 12:30 |
sinzui | wallyworld_, backporting features to stable jeopardises the release's inclusion in Ubuntu | 12:51 |
wallyworld_ | which bug in particular? | 12:51 |
sinzui | wallyworld_, I think Ubuntu will reject 1.18.3 for trusty if it contains lxc-use-clone | 12:52 |
wallyworld_ | oh. but what if the maas guys need it for ODS | 12:52 |
sinzui | wallyworld_, The maas guys are not Ubuntu | 12:53 |
wallyworld_ | hmmm. but i think the bug was raised against 1.18 in the first place | 12:54 |
sinzui | wallyworld_, Ubuntu is still on 1.18.1. We can choose to ignore the situation and focus on getting 1.20.0 out with a MRE | 12:54 |
wallyworld_ | not sure now | 12:54 |
sinzui | sorry SRU | 12:54 |
wallyworld_ | i'd love 1.20 to replace 1.18 in the archive | 12:54 |
sinzui | wallyworld_, Ubuntu look at the diff. If they see lots of adds to introduce new behaviour, they will reject the micro version | 12:55 |
wallyworld_ | ok. so how do we handle the friction between that and what customers (who are the ones who give us $$$$) want | 12:56 |
sinzui | wallyworld_, I am in favour of helping the maas guys, I just want everyone to know that Ubuntu has strict notions of what a micro release can contain | 12:56 |
wallyworld_ | maybe Ubuntu need to be pragmatic? | 12:57 |
sinzui | wallyworld_, remember that Ubuntu rejected backup and restore additions to 1.16. | 12:57 |
wallyworld_ | it's not like it's a lib with 1000000000 users | 12:57 |
sinzui | wallyworld_, they are being pragmatic, they do not accept new features because enterprise hates change | 12:57 |
wallyworld_ | and enterprise also needs features like backup and restore | 12:58 |
wallyworld_ | or else we wouldn't have had so much pressure to add that by a big enterprise | 12:58 |
sinzui | wallyworld_, And enterprises used CTS's and our versions of juju, | 12:59 |
jcastro | wallyworld_, that's why the ubuntu cloud archive exists | 13:20 |
wallyworld_ | maybe juju belongs in there then? | 13:20 |
jcastro | https://launchpad.net/~ubuntu-cloud-archive/+archive/cloud-tools-next | 13:21 |
wallyworld_ | so it's there already | 13:23 |
jcastro | yeah, but it doesn't have the strictness of the proper archive, this lets us eat our cake | 13:24 |
wallyworld_ | jcastro: in that case, why was sinzui saying we shouldn't backport features to the 1.18 series, if juju is in cloud archive? | 13:24 |
sinzui | wallyworld_, cloud-tools is for precise | 13:24 |
wallyworld_ | doesn't trusty also have cloud-tools? | 13:25 |
sinzui | wallyworld_, no | 13:25 |
wallyworld_ | why? | 13:25 |
sinzui | because we agreed last October to make juju super reliable, with clearly defined bugs and features, to make Ubuntu like us | 13:26 |
wallyworld_ | lol | 13:27 |
wallyworld_ | we need to add new features at a rate greater than ubuntu will let us | 13:27 |
sinzui | wallyworld_, If Ubuntu trusts juju, they will mre and sru it into trusty so that all users get it | 13:27 |
wallyworld_ | i can't see that the need for cloud-tools is gone | 13:27 |
jcastro | new features won't go into ubuntu, they will go into the PPA and UCA | 13:27 |
wallyworld_ | jcastro: sometimes new features are a bug raised against 1.18 eg fast lxc | 13:28 |
sinzui | wallyworld_, In Vegas we discussed the issue again. Changing packaging to provide a super-stable juju-client and featureful juju-agents in the clouds may be a viable compromise | 13:29 |
wallyworld_ | and so we would expect 1.18.x to be put in trusty wouldn't we? | 13:29 |
sinzui | wallyworld_, Everything in Lp is called a bug | 13:29 |
wallyworld_ | that would be good | 13:29 |
sinzui | They are just issues. | 13:29 |
wallyworld_ | sure. but if a customer says 1.18 is unusable because feature x is missing what do we do | 13:30 |
sinzui | wallyworld_, If we removed the config and made fast-lxc work, the diff might look more like a fix | 13:30 |
jcastro | wallyworld_, it's not like they're being strict for no reason, there's 10 years of regressions and feedback from tons of projects who have tried. | 13:30 |
jcastro | wallyworld_, they use the ubuntu cloud archive | 13:30 |
wallyworld_ | but there's no cloud archive for trusty | 13:30 |
jcastro | well, it's fresh so we didn't really need it yet | 13:31 |
sinzui | wallyworld_, Customers don't care about semantics. Ubuntu cares about the diff. There is no point in arguing with me because I am not an Ubuntu uploader | 13:31 |
wallyworld_ | sorry, not meaning to argue, just trying to understand | 13:31 |
wallyworld_ | i just want to make customer happy | 13:32 |
jcastro | it's ok | 13:32 |
jcastro | the way distros work is weird | 13:32 |
jcastro | wallyworld_, ok so the question is basically now "I have fixes that need to go to support MAAS and customers, and we think the diff might be too large to SRU into Ubuntu, is trusty cloud archive coming soon?" | 13:33 |
wallyworld_ | yeah, that sounds like what would be needed i think | 13:33 |
sinzui | wallyworld_, Ubuntu cares about the meaning of major, minor, and micro. Micro doesn't introduce configurable features; micro fixes bugs in the code, not add new ways to use the software. lxc was not broken. It works fine for local-provider and that is what we documented. Adding fast-lxc is a minor change, extending an existing feature to be used elsewhere | 13:34 |
wallyworld_ | sinzui: we made it configurable (default existing behavior) so as to be stable so that seems ironic :-) | 13:35 |
jcastro | you'd be surprised at the amount of things that can sneak into things like that; which is why they're strict by default | 13:36 |
wallyworld_ | sinzui: so for 1.19.2, there's one remaining critical bug: 1316869. i've just proposed an mp which merges 1.18 into trunk and will fix that bug. as soon as that lands, 1.19.2 should be shippable | 13:37 |
wallyworld_ | https://code.launchpad.net/~wallyworld/juju-core/merge-latest-1.18/+merge/218989 | 13:37 |
hazmat` | wallyworld_, so prior to that merge, lacking a 'current' env file, any command that failed would destroy an env? i'm trying to understand where that destroy env logic is lurking. | 13:45 |
hazmat` | that bit on sync-tools | 13:45 |
sinzui | wallyworld_, okay. I am pulling down 1.18.2 tarball and win installer | 13:46 |
wallyworld_ | hazmat`: i'm not sure that it was any command. i thought it was just sync-tools | 13:46 |
wallyworld_ | but i could be wrong | 13:47 |
hazmat` | wallyworld_, fair enough.. but i didn't see any destroy logic in sync tools or sync/sync.go .. and the fix doesn't appear to touch that either. | 13:47 |
wallyworld_ | the fix for 1.18 cleaned up a bit of stuff in that area which we want in trunk also | 13:47 |
wallyworld_ | let me check | 13:47 |
=== liam_ is now known as Guest67023 | ||
wallyworld_ | hazmat`: cmd/juju/synctools.go is changed | 13:48 |
wallyworld_ | natefinch: can i bother you for a review of the above mp? it's a merge of 1.18 into trunk to pick up a fix for bug 1316869. the changes end up being mechanical - replace EnvCommand.Init() with EnvCommand.EnsureEnvName() | 13:48 |
_mup_ | Bug #1316869: juju sync-tools destroys the environment when given an invalid source <juju-core:In Progress by axwalk> <juju-core 1.18:Fix Committed by axwalk> <https://launchpad.net/bugs/1316869> | 13:48 |
wallyworld_ | sinzui: what do you mean by pulling down? | 13:50 |
hazmat` | wallyworld_, ic.. environFromName returns a cleanup/destroy env function | 13:52 |
wallyworld_ | hazmat`: also, i was told trunk (1.19) didn't really suffer the issue like 1.18 did. but 1.18 improved the checking and so we wanted that in trunk as well | 13:52 |
sinzui | wallyworld_, CI makes the tarball and installer and tests them. There is little point in risking me making something different from what CI tested | 13:52 |
wallyworld_ | sure. i was just confused by 1.18.2. did you mean 1.18.3? | 13:53 |
sinzui | I do mean 1.18.3 | 13:55 |
wallyworld_ | sinzui: ok. hopefully someone will +1 that mp and 1.19.2 can be ready for release also | 13:56 |
wallyworld_ | i still haven't heard back what the maas folks want to use | 13:56 |
wallyworld_ | 1.18 or 1.19 | 13:56 |
sinzui | wallyworld_, should I just release both today and tomorrow? | 13:57 |
wallyworld_ | sinzui: if you could that would be \o/ | 13:58 |
natefinch | wallyworld_: I can review it in a minute | 13:58 |
wallyworld_ | then at least we have done our best to help them | 13:58 |
wallyworld_ | natefinch: thank you | 13:58 |
wallyworld_ | as soon as i land it i'm off to bed cause it's just turned into saturday here | 13:59 |
natefinch | voidspace: standup? | 14:00 |
voidspace | natefinch: coming | 14:01 |
perrito666 | if anyone has a moment https://codereview.appspot.com/94330043 please? | 14:31 |
wallyworld_ | natefinch: i gotta sleep, but if you did look at the mp (sorry rietveld rejected it) and +1 it, could you please also change to Approved so it lands. thanks | 14:36 |
natefinch | wallyworld_: haven't looked yet, looking now. Will reapprove after | 14:50 |
=== Guest66499 is now known as bodie_ | ||
perrito666 | sinzui: are there ci tests for this? https://bugs.launchpad.net/juju-core/+bug/1305026 | 15:06 |
natefinch | man I wish canonical admin would do dates in yyyy/mm/dd ... I see a vacation from 06/10/2014 to 07/10/2014 and I think someone's taking a month off :/ | 15:40 |
perrito666 | natefinch: it would be nice if it would use localized dates, or else that person might inadvertently ask for a month's vacation :p | 15:42 |
natefinch | perrito666: that's why I specified yyyy/mm/dd because that's universal. But yes, localized would also be good. | 15:42 |
sinzui | perrito666, no. but since the restore and HA tests use the same test base, I think I can add a test by Monday that will work | 15:49 |
perrito666 | sinzui: i'll test by hand in the meantime, tx | 15:49 |
sinzui | natefinch, perrito666 Do either of you have a minute to review https://codereview.appspot.com/92170043 | 16:31 |
natefinch | sinzui: sure | 16:31 |
natefinch | sinzui: looks like dimitern already got it | 16:31 |
dimitern | i'm fast ;) | 16:31 |
sinzui | very fast. Thank you dimitern | 16:32 |
dimitern | sinzui, np | 16:35 |
jcastro | ERROR state/api: websocket.Dial wss://10.0.3.1:17070/: dial tcp 10.0.3.1:17070: connection refused | 17:38 |
jcastro | can someone help me get around this error? | 17:38 |
jcastro | I've done destroy-environment --force | 17:38 |
jcastro | then I don't have any juju processes running | 17:39 |
jcastro | then I rebootstrap, and always end up with this | 17:39 |
perrito666 | do we have any HA docs? | 17:46 |
perrito666 | jcastro: do you have the output of bootstrap with --debug? | 17:46 |
jcastro | http://pastebin.ubuntu.com/7422575/ | 17:47 |
jcastro | there ya go! | 17:47 |
* perrito666 looks | 17:48 | |
natefinch | jcastro: this is local, right? | 17:48 |
jcastro | yep | 17:48 |
natefinch | jcastro: do ps -Al | grep lxc. In my experience, sometimes lxc gets stuck and multiple lxc processes end up hung. Usually requires a reboot to fix | 17:49 |
jcastro | no containers running | 17:50 |
jcastro | should I reboot? | 17:51 |
natefinch | jcastro: did you reboot, or was that just good timing? :) | 17:54 |
jcastro | no I rebooted | 17:55 |
natefinch | did it work? | 17:55 |
jcastro | cleared out jenv file and the local directory in .juju | 17:55 |
jcastro | same issue | 17:55 |
natefinch | weird | 17:55 |
jcastro | natefinch, how about mongodb? | 18:03 |
jcastro | is there a daemon I can kill to start it over? | 18:03 |
natefinch | jcastro: ps -Al | grep mongod | 18:07 |
jcastro | yeah I had already killed that. :-/ | 18:08 |
natefinch | jcastro: this is my killlocal.sh : | 18:08 |
natefinch | #!/bin/bash | 18:08 |
natefinch | sudo rm /etc/init/juju* | 18:08 |
natefinch | sudo rm -rf ~/.juju/local | 18:08 |
natefinch | rm ~/.juju/environments/local.jenv | 18:08 |
natefinch | sudo killall mongod | 18:08 |
natefinch | sudo killall jujud | 18:08 |
jcastro | http://askubuntu.com/questions/403618/how-do-i-clean-up-a-machine-after-using-the-local-provider | 18:09 |
jcastro | I am doing this one now | 18:09 |
natefinch | jcastro: ahh, yeah, that cleans up some stuff I missed | 18:09 |
natefinch | (mainly lxc stuff, which is honestly most likely the culprit) | 18:10 |
jcastro | same issue. :-/ | 18:11 |
natefinch | what does which mongod tell you? | 18:12 |
jcastro | huh | 18:12 |
jcastro | mongodb-server not installed? | 18:12 |
jcastro | that can't be right | 18:12 |
natefinch | no, that's ok. juju-local puts it in /usr/lib/juju/ .... something something | 18:13 |
natefinch | which is not in the path | 18:13 |
jcastro | let me add the juju ppa | 18:13 |
jcastro | I am on 1.18.1 | 18:13 |
jcastro | natefinch, ok I have a deadline, so off to AWS I go! | 18:20 |
jcastro | thanks anyway | 18:20 |
=== vladk is now known as vladk|offline | ||
=== vladk|offline is now known as vladk | ||
=== vladk is now known as vladk|offline | ||
_benoit_ | Hi | 19:01 |
_benoit_ | niemeyer: So I want to add support in juju-core for Outscale SAS | 19:01 |
_benoit_ | niemeyer: they are a cloud provider implementing most of the EC2 API but not implementing S3 | 19:02 |
niemeyer | _benoit_: Okay | 19:02 |
_benoit_ | niemeyer: tim told me I could replace S3 by an http storage worker | 19:02 |
niemeyer | _benoit_: Do they have any notion of storage API? | 19:02 |
_benoit_ | niemeyer: for now I patched goamz and created a directory for the outscale provider | 19:03 |
_benoit_ | niemeyer: no | 19:03 |
_benoit_ | niemeyer: I also created their instance.go file | 19:03 |
niemeyer | Sorry.. | 19:03 |
niemeyer | my xchat is misbehaving.. let me reconnect | 19:03 |
_benoit_ | niemeyer: but I don't see how I can reuse the EC2 code since most structure names are not capitalized (exported) | 19:04 |
niemeyer | Okay | 19:04 |
niemeyer | _benoit_: You say you patched goamz.. what for? | 19:04 |
_benoit_ | niemeyer: to add their regions (I submitted the patch) | 19:05 |
niemeyer | _benoit_: Ah, okay | 19:05 |
_benoit_ | niemeyer: I also added panic for nil endpoint in the other New constructor like you suggested | 19:05 |
niemeyer | _benoit_: So it's plain EC2 sans S3? | 19:05 |
_benoit_ | niemeyer: the future release is almost EC2 except for a few bits (they are adding VPC in this release) | 19:06 |
_benoit_ | and yes no S3 and no storage | 19:06 |
_benoit_ | niemeyer: that's the compatibility matrix of the current version (without VPCs for now) -> https://wiki.outscale.net/display/DOCU/AWS+Compatibility+Matrix | 19:07 |
niemeyer | _benoit_: No S3 and no storage? What's the difference between these? | 19:08 |
_benoit_ | niemeyer: I mean they don't even have an equivalent to S3 | 19:08 |
niemeyer | _benoit_: Ah, okay | 19:08 |
niemeyer | _benoit_: So, did you manage to make it work? | 19:09 |
_benoit_ | niemeyer: No | 19:09 |
_benoit_ | niemeyer: I am still stuck trying to find a way to "inherit" EC2 stuff so I don't have to copy paste | 19:09 |
_benoit_ | I am new to go but have a C/Python background | 19:10 |
niemeyer | _benoit_: Why do you need to copy & paste? | 19:10 |
natefinch | _benoit_: if they're really so similar, it might make sense to have them in the same package | 19:10 |
niemeyer | _benoit_: I mean, more specifically.. | 19:10 |
niemeyer | _benoit_: Why are you not using the ec2 backend itself? | 19:10 |
natefinch | niemeyer: I think he means that the juju provider/ec2 code is almost entirely non-exported, and he wants to reuse a substantial portion of it | 19:11 |
niemeyer | natefinch: I get that, thanks | 19:11 |
natefinch | ok | 19:11 |
_benoit_ | niemeyer: If you think patching the ec2 provider is the way I'll do it like that | 19:11 |
niemeyer | _benoit_: It's certainly where I would start | 19:12 |
_benoit_ | niemeyer: do you have any hint to do this ? | 19:12 |
_benoit_ | niemeyer: Given that the differences are regions, instance types price and no S3 | 19:12 |
niemeyer | _benoit_: To begin with, I'd try using Amazon's S3 with the EC2 endpoints of Outscale | 19:12 |
niemeyer | _benoit_: This can get you to experience something actually working | 19:13 |
_benoit_ | niemeyer: ok, is there a recipe to bootstrap a juju compatible ubuntu AMI ? | 19:13 |
_benoit_ | niemeyer: maybe a list of packages to install on an ubuntu AMI ? | 19:13 |
niemeyer | _benoit_: juju uses a plain Ubuntu AMI | 19:13 |
_benoit_ | niemeyer: would debootstrap work for this ? | 19:14 |
niemeyer | _benoit_: It's tailored for juju with the proper packages at startup time | 19:14 |
niemeyer | _benoit_: By juju itself | 19:14 |
niemeyer | _benoit_: debootstrap? | 19:14 |
_benoit_ | niemeyer: debootstrap is a way to create a debian or ubuntu system from scratch in a chroot | 19:14 |
niemeyer | _benoit_: I know what it is, but why are we talking about it? | 19:15 |
_benoit_ | niemeyer: Outscale has only debian AMIs right now | 19:15 |
niemeyer | _benoit_: Oh, that'd be an issue | 19:15 |
niemeyer | _benoit_: Does it support custom images? | 19:15 |
_benoit_ | niemeyer: I was wondering if I could create the first ubuntu image with debootstrap | 19:15 |
niemeyer | _benoit_: I would write them and ask about Ubuntu images | 19:16 |
niemeyer | _benoit_: Explaining you want to use juju | 19:16 |
_benoit_ | niemeyer: I would attach an EBS, format it, and dump a root filesystem on the EBS | 19:16 |
niemeyer | _benoit_: SUre, whatever works.. you'll need to talk to them to sort out how to get an Ubuntu image there | 19:16 |
_benoit_ | niemeyer: Ok | 19:16 |
_benoit_ | niemeyer: thanks I think I have the first step to do | 19:17 |
niemeyer | _benoit_: Super, glad it was useful | 19:17 |
niemeyer | _benoit_: Please let us know how it goes | 19:17 |
_benoit_ | niemeyer: I'll do this monday and get back to you once it works, to have some guidance for the rest | 19:17 |
_benoit_ | niemeyer: which timezone are you in ? | 19:17 |
_benoit_ | niemeyer: so I can catch you outside of lunch hours ;) | 19:20 |
_benoit_ | niemeyer: super thanks | 19:21 |
niemeyer | _benoit_: I'm in UTC-3 | 19:21 |
_benoit_ | niemeyer: I am in UTC+1 | 19:21 |
niemeyer | _benoit_: It's not too hard to find me here, but you can pretty certainly find help by coming here at any time | 19:22 |
_benoit_ | ok | 19:22 |
_benoit_ | have a nice week end | 19:22 |
Makyo | Can you refer to a machine later on in the list of MachineParams, i.e.: [{create machine 1}, {create lxc on machine 1}] in the same AddMachines API call? | 19:37 |
Makyo | (I'm guessing probably not due to needing to know the machine name before adding containers.) | 19:40 |
natefinch | Makyo: what are you trying to do? Just add a machine with a container on it? | 19:46 |
Makyo | natefinch, adding AddMachine support in the GUI. I was just wondering if the API call added machines in the MachineParams array one at a time synchronously. | 19:47 |
natefinch | Makyo: I'm pretty sure you can just do "Create machine with lxc" and it'll create a new machine with an lxc container on it | 19:50 |
natefinch | Makyo: AddMachineParams has a ContainerType you can set | 19:53 |
Makyo | natefinch, I think we have much of that mirrored on our end: https://github.com/juju/juju-gui/blob/develop/app/store/env/go.js#L921 I was just wondering since AddMachines takes an array of AddMachineParams. Are those expected to be completed in order? Create machine, create one container on that machine, create a second...&c | 19:55 |
natefinch | Makyo: I wouldn't rely on them being executed synchronously, in order. | 19:56 |
Makyo | natefinch, Alright. Will work with that assumption, then. | 19:57 |
Makyo | Thanks! | 19:57 |
natefinch | Makyo: right now they are, but that's an implementation detail that may change. | 19:58 |
natefinch | Makyo: we'd like to batch up the provider requests into a single request, so we just say "make these 50 machines" rather than saying "make this one machine" 50 times. | 19:59 |
Makyo | natefinch, alright, thanks. Will make note of that in our docs. | 20:03 |
natefinch | Makyo: welcome | 20:03 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!