[00:18] it's gonna require some variables and stuff, so I'd say subordinate, otherwise action
[03:05] lazPower: how’s it going?
[08:20] hi, seems i still have an issue with my bundle. it's terrible to disturb somebody to check the error log. do you have another method to get an error report?
[08:39] mgz, hey - when you're around I have a bzr fast-import error I need a hand with
=== mmcc_ is now known as mmcc
[10:11] jamespage: not sure I can help much with that
[10:55] could somebody help with a bundle publishing error? can somebody review the publishing error?
[10:55] https://code.launchpad.net/~dataart.telco/charms/bundles/dataart-telco-demo/bundle
[11:08] jamespage, fyi neutron-api/next amulet test failing on: n-c-c: Unknown job: nova-api-ec2 @ trusty-liberty. i'll propose an update at n-api.
[11:18] beisner, I got that in my last update I think
[11:18] jamespage, even better, thanks!
[11:19] jamespage, hey i've got some control-file type stuff for new review automation that I'd like to place in our github space so they're controllable via PR. migration-tools, or init a new thing?
[11:20] beisner, not fussed - up to you
[11:20] jamespage, on that note, shall we have your branch sync stuff live alongside the lp:gh script foo?
[11:20] it seems like a logical place
[11:20] beisner, sure - I'll move it over in a bit
[11:21] beisner, http://10.245.162.36:8080/job/charm_amulet_test/521/console
[11:22] its not ec2 or objectstore...
[11:22] neutron api relation data key missing
[11:24] hmm
[11:25] beisner, ok fix pushed
[11:27] jamespage, notice how jenkins-wrapped subprocess buffers sometimes get mixed up, and trace text lands somewhere odd? like that console output ;-) weird.
[11:32] beisner, ftr I get two tempest smoke test failures against mitaka-staging on trusty UCA - both to do with telemetry for glance images...
[11:33] other than that it smokes well with my three inflight branches...
[11:36] jamespage, ack, i've not exercised m-staging but lmk when those are ready to run. are the folks in os-dev who are reporting trouble using staging or updates?
[11:38] xnox, https://code.launchpad.net/~james-page/charms/+source/openstack-on-lxd-bundle/+git/openstack-on-lxd-bundle
[11:39] I just created a xenial one specially for you
[11:39] xnox, it won't work just yet - we have some inflight fixes for mitaka/xenial support that I should get landed today...
[11:43] jamespage, i like! on g+ =)
[11:46] beisner, I'm promoting proposed -> updates for mitaka - it will break charm amulet tests until the neutron-gateway and neutron-openvswitch charm updates land - on those next
[11:47] jamespage, ack
[12:10] beisner, I think my n-gateway and n-api updates are inter-dependent - i.e. I can't get one through testing without the other
[12:11] beisner, disable *mitaka on one first, land, land the other and then re-enable - sound ok?
[12:20] jamespage, yes. alternatively, with things like this in the past, i've deployed the topology ahead of time with the desired charm branches, then just exec ./tests/0nn-yadda
[12:21] jamespage, i'd really like to move all of the amulet tests to just consume a bundle in the tests/ dir. then it would be super easy for a user to manually verify something like this.
[12:21] ie. instead of defining dicts on the fly in the test itself
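(For context on the [12:21] point above: the "dicts on the fly" style refers to amulet tests that assemble their own topology inline rather than reading a bundle from tests/. A rough sketch of that pattern follows; the charm names, config values and relation endpoints are illustrative placeholders, not taken from the actual test suites.)

```python
# Sketch of an inline-topology amulet test, the style being contrasted
# with shipping a bundle in tests/. Names and config are placeholders.
import amulet

d = amulet.Deployment(series='trusty')
d.add('neutron-api')
d.add('rabbitmq-server')
d.configure('neutron-api', {'openstack-origin': 'cloud:trusty-mitaka'})
d.relate('neutron-api:amqp', 'rabbitmq-server:amqp')
d.setup(timeout=900)   # deploy and wait for the environment to settle
d.sentry.wait()        # wait for hooks to finish before asserting
```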
[12:21] beisner, well I need the promotion to flush through and then I'll work that
[12:21] beisner, agreed
[12:21] beisner, we also probably want to think about a way of defining check tests and gate tests for amulet
[12:22] beisner, as a third-party CI I guess we can see when someone +2's right?
[12:23] jamespage, we should, yes. right now our sandbox job is testing everything that hits openstack/ci-sandbox and I'm seeing notifications.
[12:23] beisner, awesome
[12:23] today my priority is to nail down the initial amulet test chain that is triggered by that. right now it's just a noop "you are here"
[12:23] beisner, btw is the missing relation between glance and rmq intentional in our bundles?
[12:24] jamespage, seems like we id'd that and resolved at o-c-t. where are you seeing that?
[12:24] beisner, next bundle with trusty-mitaka-*
[12:25] jamespage, workload status is satisfied: http://10.245.162.36:8080/view/Dashboards/view/OpenStack%20Deploy/job/deploy_with_deployer/428/artifact/juju-stat-tab-collect.428/*view*/
[12:25] do we need to make that a required relation in the workload status bits?
[12:25] beisner, yeah its an optional relation
[12:26] beisner, its only required when deploying ceilometer...
[12:27] jamespage, looks like we should add that rel to next and default yamls, and have a look at the affected amulet tests.
[12:28] beisner, ok
[12:30] hi. how am i supposed to increment my charm version # ? I get -0 every time. I want to be able to run "charm build" and then update the charm by clicking "Change Version" in the gui
[12:31] -thanks jose. I think subordinate or a relation of some sort, that way people can plug in different implementations. I'm gonna have a think, it would be cool if, like the monitoring stuff, there was a "backup" type relation for different services that different backup platforms could consume
[12:31] 12 hours lag response... not bad ;)
[12:51] is there a way of doing "when any" as opposed to just "when all"/"when not any", maybe I'm missing something but wasn't obvious in the reactive docs
[12:52] ah, just saw the any_states helper, nvm
[12:54] no wait, that doesn't help really
[13:29] jamespage: could you take a look at these when you get a moment? it's systemd support for openstack charms deploy from source. I'll fix up tests post review.
[13:29] https://code.launchpad.net/~corey.bryant/charm-helpers/systemd/+merge/287110
[13:29] https://code.launchpad.net/~corey.bryant/charms/trusty/keystone/systemd/+merge/287112
[13:46] i'm supposed to be doing Java dev.... lets write some charms instead \o/
[13:56] firl - pretty good
[13:56] firl - squashed the ETCD bug if you want to give it a go
[13:57] wesleymason - thats a known feature request, to be able to OR rather than AND @when() decorators
[13:58] magicaltrout - that's a winning idea if i've ever heard one :)
[14:00] lol
[14:00] maybe
[14:04] cory_fu: kwmonroe https://medium.com/airbnb-engineering/data-infrastructure-at-airbnb-8adfb34f169c#.4z4rc5eqo
[14:04] might be of interest to you guys
[14:10] magicaltrout: that's a great link, thanks
[14:11] no probs
[14:13] magicaltrout: were you the one working on a layered gitlab charm?
[14:13] thats me
[14:13] so I have an idea
[14:13] hit me
[14:13] so I was thinking when it's ready, we should approach them and like, ask them to highlight it on their download page, etc.
[14:14] or wherever they put their "solutions" stuff
[14:15] indeed. I have already asked Pentaho if they can highlight the PDI charm on their community pages somewhere
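(On the "when any" question at [12:51]-[13:57]: stacked @when decorators AND together, and at the time OR-ing was a feature request. A minimal sketch of working around that by checking states inside a handler, assuming the any_states helper mentioned at [12:52] lives in charms.reactive.helpers; the import path and all state names here are assumptions, and newer charms.reactive releases also provide a @when_any decorator for this case.)

```python
# Sketch: OR two states by hand inside a handler instead of stacking
# @when decorators (which require ALL listed states). State names are
# hypothetical; any_states location assumed to be charms.reactive.helpers.
from charms.reactive import when, set_state
from charms.reactive.helpers import any_states

@when('myapp.installed')
def pick_backend():
    if any_states('backend.mysql.available', 'backend.pgsql.available'):
        set_state('myapp.backend.ready')
```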
[14:15] beisner, I'm going to land https://code.launchpad.net/~james-page/charms/trusty/neutron-api/mitaka-b2/+merge/287028
[14:15] magicaltrout: ooh lmk what they say
[14:15] its broken due to the b2 updates I just pushed to -updates for mitaka - but other tests passed
[14:15] they'll say yes, me and their community guy have worked together for 10 years :P
[14:16] magicaltrout: I would be happy to offer anyone who highlights a charm co-marketing announcements, etc. on our pages
[14:16] if that helps your argument, heh
[14:16] but i need to finish the pdi charm and then i want to do a couple of videos to put on their pages so people know how to use it
[14:16] jamespage, yep, looks like the test failure is down to the n-g change dependency
[14:16] its great having stuff available, not great if no one knows how to spin it up ;)
[14:16] magicaltrout - hi o/ you said the trigger word video
[14:17] magicaltrout - let me know if you want a hand there
[14:17] uh oh
[14:18] my videos tend to suck, although i was messing around with the idea of a green screen the other day so I can actually appear on the screen whilst running a demo behind me :)
[14:18] I get bored of screencast type demos
[14:19] part of why i like presentations and community meetups as a way to convey information
[14:19] beisner, ok https://code.launchpad.net/~james-page/charms/trusty/neutron-gateway/mitaka-neutron-changes/+merge/286900 is up next sans mitaka tests
[14:19] I understand that sentiment
[14:20] I think out of all of them I prefer screencasts
[14:20] because you can be ad-hoc
[14:20] there's no pressure to like make each thing perfect because it's live, you just roll with it.
[14:20] i watch back my onair hangouts and stuff and end up in tears ;)
[14:20] that said other people like them, so they must float some people's boat
[14:21] magicaltrout you just hit exactly what i was trying to type out :)
[14:21] so, we have this juju channel over on youtube
[14:21] and having your pdi videos there would help discoverability
[14:21] i think anyway :) wide net and all that
[14:22] ah yeah
[14:22] i can see you on screen :P
[14:22] O_o
[14:23] jcastro - can we put a link to the youtube channel in /topic?
=== lazyPower changed the topic of #juju to: || Welcome to Juju! || Juju Charmer Summit: http://summit.juju.solutions || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
[14:26] drop the summit bit, that should let the channel fit
=== lazyPower changed the topic of #juju to: || Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
[14:26] it's https://www.youtube.com/c/jujucharms
=== lazyPower changed the topic of #juju to: || Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms
[14:26] aww shit
[14:26] i can see my face
[14:26] i tried so hard to hide in that photograph
[14:35] talking of presentations, I have a presentation in the summer at NASA JPL which will involve Data Management & Juju. Now if an organisation needed some help doing application modelling, that's one! ;)
[14:36] i suspect they are rather stuck in their ways, but you never know
[14:36] I've seen some NASA videos and I've seen Unity in the background. I have a sneaky suspicion they <3 ubuntu and they'll <3 juju just as much
[14:37] speaking of which, i wonder if anyone patched the mars rover for heartbleed
[14:37] hehe
[14:41] when writing a layered charm, is there anything stopping me creating a bunch of different reactive/*.py stuff so I can dump my state changes in one and hooks in another?
[14:51] magicaltrout: You can create as many *.py files as you like in reactive. I'd suggest putting them in reactive/mycharm/*.py though to avoid any possibility of clashes.
[14:52] magicaltrout: But if you want them to be able to import from one another, you currently need a hack.
[14:57] thanks stub, not too worried about the importing I don't think, just some logical separation
[15:44] hey aisrael
[15:44] yo
[15:44] let's say we wanted to start doing a LXD/juju vagrantbox but like for 16.04
[15:44] what's the tldr on getting that to work?
[15:45] I am thinking since juju has auto-lxd setup on first run
[15:45] the box is literally just juju and lxd preinstalled
[15:45] That's totally doable, but we'd need to enlist utlemming to adjust the vagrant build to support it
[15:46] I think that's the way to go, once we start building the xenial boxes
[15:46] * jcastro nods
=== natefinch is now known as natefinch-afk
[16:12] aisrael: so like, if new juju2 and lxd lands next week, and we got a box out
[16:12] that would effectively let everyone test it without jumping through the hoops of getting all of that working in trusty
[16:21] mgz, it might be more of a bzr question - not sure...
[16:22] jcastro: Yes
[16:32] jamespage: Corey Johns nick is johnsca, right?
[16:32] so, I'm seeing this when using bundletester with the apache-core-batch-processing bigdata-dev bundle http://pastebin.ubuntu.com/15198610/
[16:32] apuimedo, cory_fu
[16:32] i'm trying to use the local provider for this
[16:32] jamespage: Yes?
[16:32] cory_fu: that was me calling
[16:33] ;)
[16:33] I just had the trusty cassandra charm explode on the install hook
[16:33] apuimedo: Oh, I see
[16:33] on maas
[16:33] first time it happened
[16:33] and I see there's a change from less than an hour ago
[16:34] apuimedo: Hrm. You have the error?
[16:34] sure
[16:34] http://paste.ubuntu.com/15198649/
[16:34] Also, I wouldn't have expected that MP to have ingested into the store yet
[16:35] that's true
[16:35] it looks like a repos issue actually
[16:35] Though it seems it has
[16:36] I'll try to re-trigger
[16:36] * apuimedo doing that now
[16:36] apuimedo: Seems from the error that something else on that unit was running apt-get at the same time
[16:36] same issue
[16:36] Did you deploy this on a fresh unit?
[16:36] cory_fu: it's running on its own lxc
[16:37] as allocated by the juju bundle deployer
[16:37] and yes, the dpkg lock can't be taken even now
[16:39] cory_fu: http://paste.ubuntu.com/15198698/
[16:42] cory_fu: should I just destroy the service and lxc and re-deploy the previous revision?
[16:42] hello folks, attempting to mess around with this: http://pythonhosted.org/charms.templating.jinja2/ in my reactive layer but when it runs I get ImportError: No module named 'charms.templating'
[16:42] am i doing something moronic?
[16:42] apuimedo: I don't see anything in the MP that should cause a dpkg lock issue. The only thing that even touches it is switching from a manual fetch.add_source to fetch.configure_sources, but that shouldn't break anything
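(For reference on the fetch.configure_sources switch cory_fu mentions just above: configure_sources reads the named config options, parses them as YAML, and registers each source/key pair, which is why install_sources and install_keys are expected as YAML lists at [16:55] below. A minimal sketch of the charm-helpers call; whether the charm passes update=True or installs extra_packages this way is an assumption, not a claim about the actual cassandra charm code.)

```python
# charmhelpers.fetch.configure_sources: YAML-parses install_sources and
# install_keys from charm config and calls add_source() for each pair,
# roughly replacing a manual call such as:
#   fetch.add_source('deb http://debian.datastax.com/community 2.0 main',
#                    key='7E41C00F85BFC1706C4FFFB3350200F2B999A372')
from charmhelpers import fetch

def setup_sources():
    fetch.configure_sources(update=True,               # assumed invocation
                            sources_var='install_sources',
                            keys_var='install_keys')
    fetch.apt_install(['dsc20'], fatal=True)            # extra_packages from the pasted config
```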
[16:43] And I imagine stub is well out for the day
[16:43] magicaltrout pastebin to your source using templating?
[16:43] apuimedo: You can try, but TBH, I'm not certain this is related to the MP
[16:43] oh this looks new...
[16:44] but more or less just an extended port of whats in charmhelpers.core.templating.jinja2
[16:44] ah thats better, I knew there was some templating stuff kicking around somewhere
[16:44] thanks lazyPower
[16:45] np magicaltrout - lmk if you still have an issue w/ templating w/ that lib
[16:45] apuimedo: Did you deploy it with any specific config options set?
[16:45] cory_fu: I did
[16:46] Can you share those with me?
[16:46] good morning juju team! trying to bootstrap an all-vm system, node failing on seed_random module, see pastebin.ubuntu.com/15198749
[16:47] sure
[16:47] mwenning - troublemaker ;)
[16:47] mwenning, ees broke :-(
[16:50] options:
[16:50] authenticator: AllowAllAuthenticator
[16:50] cluster_name: midonet
[16:50] install_keys: 7E41C00F85BFC1706C4FFFB3350200F2B999A372
[16:50] install_sources: deb http://debian.datastax.com/community 2.0 main
[16:50] extra_packages: dsc20
[16:52] cory_fu: ^^
[16:54] apuimedo: Could it be an issue with the datastax PPA?
[16:55] I don't know
[16:55] apuimedo: Oh. install_sources is supposed to be a Yaml list (as a string)
[16:57] right, another stupid question, but this is the terminology stuff confusing me more than anything else I think. I want to slap Apache2 over gitlab as an optional relation, so reverseproxy etc to gitlab
[16:58] what the f**k interface do I need to slurp up?
[16:58] https://api.jujucharms.com/charmstore/v5/trusty/apache2/archive/metadata.yaml from that lot
[16:58] interface: http
[16:58] reverse proxy is a consumer relationship, you just need to provide interface: http on gitlab
[16:59] apache2:reverse-proxy should be enough to get it going, its just going to read in private-address and port. give it a go with haproxy and apache2
[16:59] hmm right lazyPower, i was reading some stuff in the apache docs about the vhost-config relation but then couldn't figure out how to define it
[16:59] ah yeah our apache2 charm is like, geekmode++
[16:59] lol
[16:59] IS wrote it and maintains most of it, so its next level on the combination of base64 encoded wizardry
[17:00] yeah i have like a base64 encoded vhost template and shit
[17:00] okay i'll scrub that and look to reverseproxy
[17:00] so in my charm I provide http so apache2 can consume it then?
[17:00] should be able to
[17:00] okay cool
[17:01] seriously deploy haproxy in front of your gitlab
[17:01] if you have http already implemented you're basically done
[17:01] apuimedo: Ah, nm. It should accept a yaml list or a single string
[17:02] apuimedo: Let me try deploying with those settings. Are you trying the previous version?
[17:02] not yet
[17:04] apuimedo: Ok, I got the same thing. It's definitely something about those config options and I guess the change to how the sources are loaded.
[17:04] cory_fu: so the change modified the source management
[17:05] cory_fu: I'll re-deploy with the previous version
[17:07] Prev version seems to work fine
[17:08] I'm trying to see what it is about the changes, but it looks like it should just work. :(
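(On the ImportError at [16:42]: charms.templating.jinja2 is an extra Python package, so it usually has to be pulled into the built charm at charm build time, for example via the layer's wheelhouse, rather than just imported, whereas the charmhelpers.core.templating port lazyPower points at is typically already available. A rough fallback sketch of the charm-helpers call; the template name, target path and context keys are made up for illustration.)

```python
# Fallback using charmhelpers.core.templating.render(), which the
# charms.templating.jinja2 library is described above as a port of.
# render() looks for the source template under the charm's templates/
# directory and writes the rendered result to target.
from charmhelpers.core.templating import render

def write_vhost(port):
    render(source='gitlab-vhost.conf.j2',                     # templates/gitlab-vhost.conf.j2 (hypothetical)
           target='/etc/apache2/sites-available/gitlab.conf',
           context={'port': port},
           perms=0o644)
```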
[17:08] :P
[17:08] maybe you can add some amulet tests for the install_sources
[17:09] The charm does have a lot of Amulet tests, and there are specific tests added for this change.
[17:09] All of which are passing
[17:14] so, question. when I want to implement an action in a layered charm, should I just add an 'actions' folder in the root folder?
[17:15] yep
[17:15] you'll additionally need the actions.yaml definition
[17:17] all of that in the root, right?
[17:17] of your layer, yep
[17:18] cool, thanks!
[17:21] apuimedo: Ok, I've got it. The ppa:stub/cassandra and ppa:openjdk-r/ppa are both required (with nulls for the keys). This config works for me: http://pastebin.ubuntu.com/15199178/
[17:23] I think it's probably worth reverting the merge and kicking it back for that, since anyone who had been using that config option is almost certainly going to be broken
[17:24] beisner, ok with https://code.launchpad.net/~james-page/charms/trusty/neutron-gateway/mitaka-neutron-changes/+merge/286900 ?
[17:24] has anyone had success with deploying juju on a private openstack cloud?
[17:25] I generate the metadata and tools, and begin the bootstrap. It creates an ost instance on my openstack cloud, but it then tells me that RROR index file has no data for cloud {RegionOne http://192.168.4.11:5000/v2.0} not found
[17:26] s\RROR\ERROR
[17:28] jamespage, looks like i, K, L are happy with the changes. having a bit deeper look, sec..
[17:29] cory_fu: Agreed
[17:34] jamespage, ok, pretty sure i understand everything going on there. +1
[17:34] beisner, thanks
=== natefinch-afk is now known as natefinch
[17:41] beisner, https://code.launchpad.net/~james-page/charms/trusty/neutron-openvswitch/mitaka-neutron-changes/+merge/286910
[17:41] that next...
[17:42] apuimedo: It's been reverted and should be ingested before too long
[17:42] cory_fu: thanks. Much appreciated
[17:43] apuimedo: No problem. Sorry I missed that side-effect and let this through.
[17:43] no problem
[17:43] now, on to find whatever happened with Puppet. It seems it was also updated and broke my charms...
[17:44] dosaboy, https://code.launchpad.net/~hopem/charms/trusty/ceph/lp1523871/+merge/286487 lgtm - but please wait for osci to complete - not long now....
[17:44] apuimedo, module breakage?
[17:46] jamespage: I don't know. It's a bit strange, it's complaining about a missing hiera variable that I was never setting
[17:46] most likely
[17:46] some of our puppet module dependencies updated and became more strict about undefined values
[17:46] or it's not taking the default
[17:47] apuimedo, hmm - tricky for repeatable deployments
[17:47] apuimedo, could you snapshot the modules into the charm in some way to avoid this?
[17:47] jamespage: I can do better, I can pin the versions for all the puppet modules
[17:47] and be done with it
[17:47] apuimedo, that indeed would be better
[17:48] now I only need to find the culprit :-)
[18:38] jamespage: thanks for the reviews
[18:38] I think I found it!
[18:38] https://forge.puppetlabs.com/ripienaar/module_data
[18:38] released today
[18:38] looks very suspicious
[18:39] I often wonder what makes it so hard for the ruby/puppet community to keep backwards compatibility
[19:44] has anyone had success with deploying juju on a private openstack cloud?
[20:01] jamespage: I got it right. Ripienaar fixed the bug ;-)
[20:01] I'll try to get our puppet stable module dependencies pinned
[20:01] so that it doesn't happen in the future
[20:07] you around lazyPower ?
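(Back on the layered-charm actions question at [17:14]-[17:17]: the action itself is just an executable dropped in actions/ at the layer root, alongside an actions.yaml describing it. A minimal sketch using the charm-helpers action wrappers; the action name "backup" and its "target" parameter are hypothetical.)

```python
#!/usr/bin/env python3
# actions/backup -- must be executable, and 'backup' needs a matching
# entry (description and params) in actions.yaml at the layer root.
from charmhelpers.core import hookenv

def backup():
    target = hookenv.action_get('target')   # parameter declared in actions.yaml
    try:
        # ... do the real work here ...
        hookenv.action_set({'outcome': 'backup written to {}'.format(target)})
    except Exception as exc:
        hookenv.action_fail('backup failed: {}'.format(exc))

if __name__ == '__main__':
    backup()
```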
[20:16] apuimedo: Can you take a look at stub's reply on https://code.launchpad.net/~stub/charms/trusty/cassandra/ensure-thrift/+merge/279869
[20:17] let's see
[20:20] firl i am
[20:20] hey man
[20:21] Any updates by any chance?
[20:21] you bet
[20:21] juju deploy ~lazypower/bundle/kubernetes-core
[20:21] do i need to compile the bundle again?
[20:21] or the layer rather
[20:22] nope
[20:22] etcd just moved to layers
[20:22] need something newer than 1.25?
[20:22] ERROR expected a charm URL, got bundle URL "cs:~lazypower/bundle/kubernetes-core-0"
[20:22] oh, thats possible yeah
[20:22] argh
[20:22] new stuff
[20:22] :)
[20:22] want to try with new stuff?
[20:23] sure, this is a scratch environment
[20:23] docker pull lazypower/charmbox:devel
[20:23] juju 2.0-beta1
[20:23] i'm speaking in half fragmented sentences because i'm excited you showed up. I feel like i have a mountain of candy you've been asking for patiently
[20:23] and man, i gotchoo covered
[20:23] haha
[20:24] let me pull up a vm to do the docker stuffs
[20:24] ok. if you want to add devel ppa on your current scratch thing
[20:24] that’s fine too
[20:24] I have a build node for when you gave me the layer stuff
[20:24] just know that you cant upgrade from 1.25 to 2.0 cleanly *yet*
[20:24] and the $JUJU_HOME's are incompatible.
[20:24] actually
[20:24] it won’t break the juju agent environment with the state machines will it?
[20:24] you have the source - github.com/chuckbutler/layer-etcd - is current devel focus
[20:25] ya i have that
[20:25] which will unblock your k8s work
[20:25] ok i need to hit up a meeting, will try this right after
[20:25] tip of mbruzek/layer-k8s works out of the box with that replacement etcd
[20:25] ack, lmk if you have *any* questions firl
=== lazyPower is now known as lazyPower|lunch
[20:38] cory_fu: replied
=== alexisb is now known as alexisb-brb
=== lazyPower|lunch is now known as lazyPower
[20:58] lazyPower ok back
[20:58] woo
[21:03] so I need to add the devel ppa and upgrade the juju CLI locally, correct?
[21:03] or should I just build the layer-k8s?
[21:05] just build layer-k8s
[21:05] additionally, you'll want to deploy cs:~lazypower/trusty/etcd-4, or build from my layer-etcd
=== alexisb-brb is now known as alexisb
[21:09] so from a bundle perspective
[21:09] this ok? http://pastebin.com/N3YEpVHm
[21:10] firl - that looks kind of old compared to what our config is now.. default has bumped to v1.1.7, the generated charm is "kubernetes"
[21:12] > charm: cs:trusty/etcd-4 - is the last -stable revision before the connection string revisions. try charm: cs:~lazypower/trusty/etcd
[21:13] should I just use: https://api.jujucharms.com/charmstore/v5/~lazypower/bundle/kubernetes-core-0/archive/bundle.yaml
[21:13] That'll work :)
[21:13] haha
[21:14] so should I pull it in via gui / store? or build the layer and deploy
[21:14] if you're interested in just consuming the kubernetes charms, pull it in via gui/store
[21:14] if you want to help dev, build from layers and beat on it a bit
[21:14] i'll keep updating this bundle as i drop features, pre-release style
[21:17] gotcha
[21:19] can’t pull it in via gui
[21:20] “fetching the entity failed”
[21:20] argh
[21:20] :)
[21:20] this is why I am here right? hah
[21:21] yeah, i'm depressed. this is due to some new things i've been piloting
[21:21] haha
[21:21] don’t worry man
[21:21] if you're on 2.0, you can get at the charms
[21:21] I can upgrade
[21:21] looks like 1.25 is only serving charms from ingest
[21:21] i don’t mind
[21:21] ok, welcome to a whole new world :)
[21:21] haha upgrade instructions? i need to upgrade the cli, the agent and the state machines?
[21:22] so, its an incompatible upgrade right now
[21:22] do you have work in 1.25 that you need to keep accessing?
[21:22] I have a jenkins setup that I need
[21:22] just 2 machines
[21:22] ok lets isolate your 2.0 environment
[21:22] alright, this could be a pain but lets do it
[21:23] last time I tried to create a new juju environment on openstack it had issues
[21:23] Creating a new project now
[21:23] are there up to date install on openstack instructions?
[21:24] one sec, the definitive guide came out as a mailing list post
[21:24] I should have it then someplace
[21:24] firl https://lists.ubuntu.com/archives/juju/2016-February/006618.html
[21:24] so use Xenial?
[21:25] nooooooo
[21:25] you can do this on trusty just fine
[21:25] kk trusty still right?
[21:25] ok
[21:25] first paragraph, its co-installable with 1.25
[21:25] I don’t care in all honesty, it’s all just vm’s
[21:25] update-alternatives --config juju is how you switch between the releases
[21:25] I need a pissing server sat under my desk, stuff this cloud nonsense
[21:25] *yawn*
[21:26] magicaltrout - keep it family friendly please :)
[21:28] indeed, I shouldn't mention that C word....
[21:37] alright then, 2 hours of expense claiming done. Lets take this http interface for a spin
[21:37] i'm so confused, i can't tell if the bad c word is cloud or co-installable or config ;]
[21:39] hehe
[21:47] okay so i still don't get this
[21:47] unit-apache2-0: 2016-02-25 21:46:21 INFO unit.apache2/0.juju-log server.go:269 reverseproxy:4: Vhost Template not provided, not configuring: 80
[21:47] i'm guessing thats quite important
[21:48] magicaltrout did it work with haproxy?
[21:48] apache2:reverseproxy do I still pass it a vhost file?
[21:48] i've not got that far yet :P
[21:48] let me rip out apache and find out
[21:53] okay so i'm doing something stupid, mostly because i'm trying to learn python at the same time instead of just copying the bash examples :P
[21:54] https://gist.github.com/buggtb/ff52067809a00101d4f1
[21:54] whats up with that?
[21:54] there's no problem with the deployment but I get
[21:54] unit-haproxy2-0: 2016-02-25 21:52:01 INFO unit.haproxy2/0.juju-log server.go:269 reverseproxy:7: No port in relation data for 'gitlab/0', skipping.
[21:54] unit-haproxy2-0: 2016-02-25 21:52:01 INFO unit.haproxy2/0.juju-log server.go:269 reverseproxy:7: No backend servers, exiting.
[21:55] lazyPower: having issues bootstrapping 2.0
[21:55] not sure what the “endpoint” should be
[21:57] one sec, i lost the post
[21:57] https://lists.ubuntu.com/archives/juju/2016-February/006618.html
[21:57] that one?
[21:59] yeah
[21:59] looking it over, one sec, i dont use openstack as a substrate often so i'm not as familiar with the config
[22:00] “#### Private Clouds” section
[22:02] ddellav beisner : ping
[22:02] firl - if this doesnt yield good results, i vote to change my answer to "build from layers and deploy"
[22:02] haha
[22:03] this quickly turned into a ball of yarn. 2.0 was a weekend discovery for me
[22:03] lol ok
[22:03] so i need to have a 1.25 juju environment and then do a charm build on the git repo right?
[22:04] yep
[22:04] both git repos*
[22:04] layer-k8s, and layer-etcd
[22:04] let me clone layer-etcd
[22:04] https://github.com/chuckbutler/layer-etcd ?
[22:04] thats the one
[22:05] ok now?
[22:05] with those built, replace both charms with local references, remove the default config option for kubernetes version and you should be in like flynn
[22:06] in this file ?
[22:06] https://api.jujucharms.com/charmstore/v5/~lazypower/bundle/kubernetes-core-0/archive/bundle.yaml
[22:06] yep
[22:06] i was thinking your bundle, but mine works
[22:06] either/or :) six of one, half dozen of the other
[22:06] the other bundle was from matt i think
[22:06] oh right on
[22:08] http://pastebin.com/APinzAeu
[22:08] ?
[22:09] when you ls your $JUJU_REPOSITORY, do you see k8's or kubernetes?
[22:10] i see both
[22:10] let me clean
[22:10] kubernetes is the latest from the charm build
[22:10] ok, change line 4 to local:trusty/kubernetes
[22:11] on it
[22:17] firl - also fyi, i think that endpoint setting in the credentials.yaml for your privcloud is the same as the auth-url in the 1.25 config
[22:17] your keystone endpoint
[22:18] I tried that one
[22:18] that’s why I was asking
[22:19] :(
[22:19] i'll bring that up and see if i can get you an answer firl
[22:22] beisner, if you are still around - https://code.launchpad.net/~james-page/charms/trusty/neutron-openvswitch/mitaka-neutron-changes/+merge/286910
[22:22] lgtm now
=== blahdeblah_ is now known as blahdeblah
[22:42] “local charms not allowed for service kubernetes: local:trusty/kubernetes”
[22:42] lazyPower
[22:46] firl - have you exported JUJU_REPOSITORY?
[22:47] http://pastebin.com/CFK6C05U
[22:48] i created my local juju repository via sym links to ./trusty
[22:50] ah quickstart doesn't work with local urls.
[22:50] :)
[22:50] juju-deployer does, or the juju deploy command from juju2
[22:52] just going to add units
[23:21] lazyPower http://pastebin.com/x0bMLf8B
[23:24] https://github.com/OSBI/layer-gitlab/blob/master/reactive/gitlabhooks.py#L14 can anyone see anything stupidly obvious that means that wont install properly, yet when I paste it into debug-hooks it works as expected....
[23:29] firl - i'm going to EOD, but if you need anything feel free to ping
[23:29] i'll circle back and check in on ya after dinner
[23:29] hah I sent a pastebin
[23:29] oh
[23:29] * lazyPower missed that
[23:29] charm build doesn’t give a charm that is deployable
[23:30] ah
[23:30] the etcd layer has 2.0 features in it
[23:30] if you delete series: -trusty out of metadata.yaml that'll fix that
[23:30] lame :|
[23:31] hrmm
[23:32] firl if you pull from master i just updated the layer
[23:32] for etcd?
[23:32] oh i see ya, I did -series: "trusty"
[23:33] WARNING failed to load charm at "/home/ubuntu/work/trusty/kubernetes": metadata: series: expected string, got []interface {}([]interface {}{"trusty", "wily", "xenial"})
[23:34] weird, why would k8s be complaining? it doesn't even have series in metadata.
[23:34] yeah
[23:34] it’s weird
[23:35] it shows up in the metadata output
[23:35] cleaning and rebuilding
[23:36] http://pastebin.com/NyxPL9Cd
[23:36] maybe something is aggregating based upon charm build deps
[23:36] its layer-docker that pulled that in
[23:36] ok, hang on, thats a breaking update
[23:38] haha
[23:38] we can do this another time too after you eat
[23:40] firl - https://github.com/juju-solutions/layer-docker/pull/27
[23:40] i need to land that and i think.. thats the last of the 2.0 features we baked in
[23:40] haha
[23:40] should I also build that locally from your pr branch or is it pretty quick for the pr
[23:41] oh nm i see you already did
[23:41] go ahead and pave/rebuild
[23:41] jamespage, ah so good, the theory and the plan came to pass on 286910 :-)
[23:41] how soon for juju 2.0? couple weeks
[23:41] ?
[23:42] its in beta, i think it lands as -stable in a little over a month
[23:42] dont quote me on that though, i actually haven't asked. I've been so deep in using it :)
[23:43] added charm to env
[23:43] woo \o/
[23:43] paydirt
[23:43] i can have the etcd charm reside on the same units as the kubernetes units right?
[23:43] haha
[23:43] err, dont do that
[23:43] so 6 nodes
[23:43] we're still gutting some stuff from k8s
[23:43] yeah
[23:43] 3 kube 3 etcd
[23:43] k8s launches some etcd containers on the host that we're phasing out
[23:44] that was a leftover from the initial import we didnt notice
[23:44] to grow +1 kube node no need to grow etcd for testing right?
[23:44] right
[23:44] kk
[23:44] and you can pare down etcd to a single node
[23:44] lol
[23:44] you only need to run 3/5 in production scenarios or proper HA testing
[23:44] already triggered 3
[23:44] welp
[23:44] it supports scale up
[23:44] I will remove units after
[23:44] but not scale down
[23:44] lols
[23:45] thats where i'm at now in the fixes
[23:45] but to be fair, the old charm had the same behavior
[23:45] so I will blow it away before the relation
[23:45] yeah
[23:45] and do just 1 unit
[23:45] ok, off for now, will be back later
[23:45] o/
[23:45] see ya
[23:45] thanks again!
[23:46] hey np :) sorry it was a bumpy road getting to this point
[23:46] reminder that not everybody is in -beta land with me
[23:59] oh you're fscking kidding me
[23:59] i spent all evening messing around with a website-relation-changed hook not firing and find that including the http interface and a @when('website.available')
[23:59] sorts me out in about 30 seconds
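(To round off the gitlab/haproxy thread, from the "No port in relation data" log at [21:54] to the [23:59] resolution: with provides: website: interface: http declared in metadata.yaml, the interface layer raises website.available and the charm only has to publish a port. A sketch assuming the stock http interface layer's configure(port) call; the 'port' config key is a placeholder.)

```python
# Reactive handler on the charm providing the website (e.g. gitlab):
# once the http interface layer sets website.available, publish the
# port so haproxy/apache2 can build reverseproxy backends from the
# relation data. The 'port' config option name is hypothetical.
from charmhelpers.core import hookenv
from charms.reactive import when

@when('website.available')
def configure_website(website):
    website.configure(port=hookenv.config('port'))
```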