=== Beret- is now known as Beret
=== ming is now known as Guest12249
=== menn0-afk is now known as menn0
=== jcw4 is now known as jcw4_zzz
[06:01] juju server relation hook files are not running? | http://askubuntu.com/q/505310
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
[08:27] juju charm relation-joined hook not working? | http://askubuntu.com/q/505342
=== CyberJacob|Away is now known as CyberJacob
[09:09] marcoceppi, hey - do you think it would be possible to have an openstack-charmers review queue like we have for charmers on jujucharms.com?
[09:10] marcoceppi, I'm struggling with visibility of proposed changes right now and a central report would be useful for everyone
[09:10] gnuoy, ^^
=== CyberJacob is now known as CyberJacob|Away
=== vila is now known as vila-lunch
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
[11:24] jamespage: yes, I'm in the process of re-doing the review queue to be way more robust. In doing so it'll be a standalone application that anyone can run and configure to track changes for whatever user/group/project
[11:24] as such, we could spin up an instance on canonistack to track openstack-charmers stuff
=== vila-lunch is now known as vila
[11:57] marcoceppi, sounds good
[11:58] jamespage: it's a little lower priority than everything else, but it's on my personal "I really care about this and want it done" list
[11:58] marcoceppi, if you have something in flight maybe post a branch? I'm happy to hack on this as well
[11:59] jamespage: I have early musings of some pyramid stuff, nothing connected to lp or gh yet
[11:59] just a database schema
[11:59] jamespage: https://github.com/marcoceppi/review-queue
[12:00] I hope to get more time on it this weekend, get celery and lp hooked up for initial imports
[12:48] I've got a Juju instance bootstrapped within OpenStack using trusty. Is it possible to use that to deploy a charm that uses precise, or is that not supported? (I've got the precise image in glance but can't see a way to tell Juju where to find that image.)
[12:51] mfa298: you'll need to upload a custom image-metadata file to juju
[12:51] so it'll know where the precise images are
[12:52] that's presumably created with juju metadata generate-image, how do I then upload it?
[12:52] mfa298: great question, I forget how, but I believe it's done at bootstrap time
[12:52] * marcoceppi checks
[12:53] so it may not be possible to upload after bootstrap
[12:54] mfa298: does not appear, but again I'm not 100% certain, it's something that's defined in the environments.yaml https://juju.ubuntu.com/docs/config-openstack.html
[12:54] However, let me check set-environment
[12:55] mfa298: you can update this after bootstrap
[12:55] with `juju set-environment image-metadata-url="url-to-generated-metadata"`
[13:00] hmmm, looking at the metadata that generate-image created for the precise image, it seems to reference 14.04 rather than 12.04
[13:02] mfa298: you should be able, as a command-line option, to provide a series flag
[13:02] mfa298: with the -s flag
[13:02] I'd just spotted that
[13:08] looks like juju still can't find the image
[13:09] mfa298: what do the logs look like when trying to deploy a precise charm?
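A minimal sketch of the workflow being discussed, assuming juju 1.x: the glance image ids and web server host are placeholders, while the region and keystone endpoint are the ones that appear in the status error quoted below.

    # Generate simplestreams image metadata, once per series the cloud
    # should offer; -d sets the output directory for the metadata tree.
    juju metadata generate-image -s precise -i <precise-image-id> \
        -r RegionOne -u http://192.168.17.17:5000/v2.0 -d ~/simplestreams
    juju metadata generate-image -s trusty -i <trusty-image-id> \
        -r RegionOne -u http://192.168.17.17:5000/v2.0 -d ~/simplestreams
    # Serve the output over HTTP (file:// turns out not to work, as the
    # log below confirms), then point the running environment at the
    # directory holding the generated streams/v1 tree:
    juju set-environment image-metadata-url=http://<webserver>/simplestreams/images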
[13:10] debug-log shows machine-0: 2014-08-01 13:07:20 WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "": no instances found
[13:10] and the machine state in juju status shows: agent-state-info: '(error: index file has no data for cloud {RegionOne http://192.168.17.17:5000/v2.0}
[13:11] mgz_: who should I bug about juju deploying to openstack?
[13:12] trying juju metadata validate-images seems to suggest it's using cloud-images.ubuntu.com rather than the local images, although that could be me missing something
[13:12] marcoceppi: that's pretty generic
[13:12] what more specifically?
[13:13] I'm a pretty good starting point
[13:13] mgz_: mfa298 is trying to upload image-metadata post-bootstrap, not sure of the process as I've never tried
[13:13] I see, reading log
[13:18] setting up image-metadata is something you do prior to bootstrap
[13:19] so having bootstrapped juju with it only knowing about trusty, there's not a way to add in precise other than destroying the environment and bootstrapping again?
[13:19] yup
[13:20] or is this going down the wrong route for what I was hoping to achieve.
[13:20] if you're setting this up yourself, you should instead make your keystone advertise the simplestreams
[13:20] rather than have juju supply it at run time
[13:23] quickly googled and that looks like it might be what I want. Is there a decent guide somewhere for setting that up?
[13:26] mfa298: https://juju.ubuntu.com/docs/howto-privatecloud.html
[13:27] mgz_: whoa, where has this link been all my life
[13:28] also, wow, that page is rendered wrong
[13:28] * marcoceppi goes to patch
[13:35] mgz_: I've done the juju metadata generate-image commands and have the metadata files which contain both the precise and trusty files.
[13:35] the issue seems to be getting that into juju
[13:36] I can run juju set-environment image-metadata-url=file://home/ubuntu but I still don't seem to be able to deploy a charm using precise
[13:37] or is the answer here that I need to have a web server to provide those files rather than file://
[13:37] no, that doesn't help
[13:38] you set those values in your environments.yaml at the start, and they need to be accessible from the cloud you've deployed, eg in swift
[13:38] file:// is no good
[13:39] so installing a webserver would be enough, or do I also need to destroy the environment and bootstrap again with the url configured as well?
[13:49] looks like an http server is enough. That seems to be working
[13:49] thanks
[13:52] now to work out the set of commands that were actually needed so I can write the local documentation
[13:56] sebas5384, arosales: hi
[13:57] hey jcastro!
[13:57] sounds like jose has us set up
[13:57] oh cool
[13:57] link?
[13:57] just in case, I am going to join the old hangout in case any folks join there
[13:57] ohai
[13:57] jcastro: sec
[14:00] link?
[14:00] :P
[14:01] Just a quick reminder to folks that we will be hosting on ubuntuonair.com, not via the google hangout event.
[14:03] * marcoceppi tunes in
[14:03] I confirmed no folks are in the Google Event hangout, which is good
[14:05] sebas5384, https://plus.google.com/hangouts/_/ytl/vjVQa1PpcFNNnOvmc6cVeIQVTxHO4-o-wB5b0uus7JY
[14:06] also we'll be using the following document to capture input
[14:06] https://docs.google.com/a/canonical.com/document/d/1t_55N1il3XoL8z-jfa1CBoSxzOQjC90cgSpCqx5wkH0/edit
[14:06] we'll be starting in a couple of minutes.
[14:06] ah ok we haven't started yet.
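Returning briefly to mgz_'s advice above about having keystone advertise the simplestreams itself (the howto-privatecloud approach): a rough sketch using the era's keystone v2 CLI. The service name, URLs, and exact flag spellings are illustrative (keystoneclient accepted different spellings across versions), but "product-streams" is the endpoint type juju looks up in the catalogue.

    # Register a product-streams service so juju discovers image metadata
    # from the keystone catalogue instead of via image-metadata-url:
    keystone service-create --name image-stream --type product-streams \
        --description "Image simplestreams for juju"
    # Point its endpoint at wherever the generated metadata is served,
    # e.g. a public swift container or the web server from earlier:
    keystone endpoint-create --region RegionOne \
        --service_id <id-from-service-create> \
        --publicurl http://<webserver>/simplestreams/images \
        --internalurl http://<webserver>/simplestreams/images \
        --adminurl http://<webserver>/simplestreams/images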
I was following along on Ubuntu on Air and it's still broadcasting "please stand by".
[14:07] lazyPower: correct, haven't started yet.
[14:07] :P
[14:08] i'm there!
[14:08] we should have a slide saying 'Hey! We're late, but don't go!'
[14:10] jose: looks like we are running into a perms issue
[14:11] jose: are you in the hangout?
[14:11] arosales: I am
[14:11] jose: jorge and I are in the hangout but don't see you.
[14:11] jose, can you paste the link in here?
[14:11] the one you PMed me isn't the one you are in, apparently
[14:11] Regarding the first point, I think what we really need is an extension to `juju resolved --retry` that essentially does a forced upgrade-charm before retrying the failed (or maybe even last successful) hook. `juju resolved --upgrade-and-retry`
[14:11] i'm already there, but i can't hear jose
[14:11] https://plus.google.com/hangouts/_/ytl/vjVQa1PpcFNNnOvmc6cVeIQVTxHO4-o-wB5b0uus7JY
[14:12] i thought the hangout wasn't being used today.
[14:12] I could go ahead and create another event
[14:13] arosales, jcastro: want me to create another event?
[14:13] Why don't we all just join the same hangout?
[14:13] I am confused why there are two?
[14:13] jose: can you hear me?
[14:13] sebas5384: not at all
[14:14] damn it
[14:14] hangout is trolling us
[14:14] blame Google
[14:14] aaaalways happens
[14:14] what hangout are you in? can you paste in the URL?
[14:14] jcastro: https://plus.google.com/hangouts/_/ytl/vjVQa1PpcFNNnOvmc6cVeIQVTxHO4-o-wB5b0uus7JY
[14:14] https://plus.google.com/hangouts/_/g6mlkq4hfo6jvgjqmvksxo3inia?authuser=1&hl=pt-BR
[14:14] wait... that's another hangout link
[14:14] urgh
[14:14] ...
[14:14] I believe Google is playing with us
[14:15] ok, which hangout link are we going to?
[14:15] we have different links
[14:15] pick one
[14:15] https://plus.google.com/hangouts/_/ytl/vjVQa1PpcFNNnOvmc6cVeIQVTxHO4-o-wB5b0uus7JY is the one I'm in
[14:15] permissions problems
[14:16] let me just quickly create another event. Google is a mess atm
[14:16] do you have them set to private or something?
[14:16] not at all
[14:17] hmmm
[14:17] ok, fire up a new one I guess
[14:17] https://plus.google.com/hangouts/_/ytl/vjVQa1PpcFO4tsipIQdBcOcv_43jywHwKQvrzxaa9XA=
[14:17] the ubuntu-on-air one was supposed to replace the hangout.
[14:17] jcastro, arosales, sebas5384: ^
[14:18] now i'm in the last link you passed, jose
[14:18] this one is looking better
[14:18] zirpu: ubuntuonair uses hangouts :)
[14:18] jcastro: I am in
[14:19] refresh ubuntuonair if you were in before
[14:20] we got it started
[14:20] Thanks for the patience.
[14:21] live on ubuntuonair.com or join the hangout @ https://plus.google.com/hangouts/_/ytl/vjVQa1PpcFO4tsipIQdBcOcv_43jywHwKQvrzxaa9XA=
[14:22] i switched to the ubuntuonair version. i'm just listening.
[14:35] You can give debug-hooks a hook name to only have it trigger on the one hook you're interested in fixing.
[14:35] I just learned this the other day. Very helpful
[14:37] jcastro: If local-mapped-to-remote charm source is not an option, my preferred alternative would be `juju resolved --update-and-retry`
[14:38] that's good to know, cory_fu. We should make that more prominent in the docs, which might help alleviate some of that frustration.
[14:46] noodles775: you've done roles with ansible scripts in charms, correct?
[14:47] i'm fairly sure it's still very experimental at present, right? we haven't ironed out how it should look
[14:47] jcastro: we don't have --force on service.
[14:48] ack
[14:48] lazyPower: thanks.
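To make the 14:35 debug-hooks tip above concrete: the syntax is standard juju 1.x (`juju debug-hooks <unit> [hook ...]`), but the unit and hook names here are hypothetical.

    # Without hook arguments, debug-hooks opens a session for every hook
    # that fires on the unit; naming hooks limits it to just those:
    juju debug-hooks mysql/0 db-relation-joined db-relation-changed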
[14:48] If you --force destroy the machine out from under the service, you can then remove the service
[14:48] i think the ansible buffering is a function of the parallelism code. so make ansible serial and lower the polling from the default 15 seconds.
[14:49] cory_fu: i think the idea is more that they want to force destroy the service, and leave the machine, so they aren't waiting for a machine spin-up
[14:49] jcastro: you're left with a service definition, with no units.
[14:50] lazyPower: If you redeploy after removing a service, it creates a new machine. If you use --to to put it on the same machine, that's not much different than upgrade-charm. But I guess it would force it to re-run all of the hooks, at least.
[14:51] well, the scary part about what's being asked is that deploying to a tainted machine may yield really crazy results
[14:51] but i get what you're saying, cory_fu
[14:51] * cory_fu isn't arguing against adding --force to destroy-service, though.
[14:51] I've tried to do that many times, even after realizing it doesn't work.
[14:51] haha
[14:51] it gets me too
[14:51] more often than i care to admit... and i know it doesn't exist.
[14:51] muscle memory i suppose
[14:52] :)
[15:13] lazyPower: is this: http://manage.jujucharms.com/~lazypower/precise/dns your latest DNS charm?
[15:13] arosales: Handrus and Renato were here too :)
[15:14] ah, thanks Handrus and Renato!
[15:15] sebas5384: take a look at http://manage.jujucharms.com/~lazypower/precise/dns, just need to confirm this is the latest rev from lazyPower (re DNS)
[15:15] arosales: it is. i sync'd it a few weeks ago with the latest work.
[15:16] lazyPower: thanks for confirming.
[15:21] hey guys! question around here. I'm working on a chamilo-memcached relation, and it would allow multiple servers. when I do 'relation-get host', will it tell me just one IP address, or multiple IP addresses?
[15:22] (in the event I have multiple memcached instances)
[15:23] now
[15:23] wrong window :)
[15:25] jose: you can gather all the hosts at once if you wanted to
[15:25] jose: using relation-list
[15:25] jose: then just loop through the list
[15:26] hmm, I'm gonna check how that may work for me in a debug-hooks session
[15:26] for m in $(relation-list); do relation-get host $m >> /file/to/track/hosts; done
[15:26] as an example
[15:26] cool
[15:26] the relation-get is from memory
[15:27] but there's a way to specify which unit you wish to query in a relation context
=== jcw4_zzz is now known as jcw4
[15:45] lazyPower, https://github.com/juju/docs/pull/135
[17:33] lazyPower: thanks!!! could you show us how to use it? http://manage.jujucharms.com/~lazypower/precise/dns
=== roadmr is now known as roadmr_afk
=== StoneTable is now known as aisrael
[19:23] sebas5384: it's not production-ready yet. There's no HA support as of yet.
[19:23] if you use that, and your DNS charm server tanks, you've lost DNS
[19:23] sebas5384: but i'm more than happy to talk you through the implementation details, and how it's structured / how to implement hooks.
[19:25] sebas5384: take a look at https://github.com/chuckbutler/DNS-Charm - and scroll down to CHARM Integration; it talks about a programmable and programmable-multiple relationship hook. You set the proper variables, and it will build the configuration on the fly for you. The DNS charm itself spits out the public-address OTW so you can update /etc/resolv.conf as the primary DNS server, and your domains will then be available to each node connected to the DNS charm.
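Expanding lazyPower's from-memory one-liner at 15:26 into a runnable hook sketch: relation-list and relation-get are real hook tools, but the "host" key depends on what the memcached charm actually publishes, and the hook name and output path are hypothetical.

    #!/bin/bash
    # e.g. hooks/cache-relation-changed -- inside a relation hook,
    # relation-list returns every remote unit in the current relation.
    hosts_file=/etc/chamilo/memcached-hosts   # hypothetical path
    : > "$hosts_file"                         # truncate before rebuilding
    for unit in $(relation-list); do
        # relation-get <key> <unit> reads that unit's relation setting
        relation-get host "$unit" >> "$hosts_file"
    done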
[19:25] sebas5384: there's more implementation logic that needs to happen with regard to updating third-party providers, and/or implementing your DNS server in the global DNS tree with your registrar (if you want it to be authoritative)
[19:30] lazyPower, hey, so btw my last PR didn't touch the precise box URLs
[19:30] so you might want to pull those
[19:30] ack, was already on it
[19:31] jcastro: just in master? or do i need to touch another branch?
[19:34] jcastro: https://github.com/juju/docs/pull/136
=== rektide_ is now known as rektide
[19:49] lazyPower, LGTM, merged
[19:50] lazyPower, marco told me a while back it's better to just do all the work in personal branches and then submit to master
[19:50] rather than under the juju namespace
[19:50] I was like, ok, sounds good to me
[19:50] jcastro: that's what i did
[19:50] yeah, I saw
[19:50] I was just responding to your IRC question
[19:50] oh, you mean the web editor
[19:50] well, i was curious which branch to target
[19:50] if i needed to touch the 1.18 docs as well
[19:50] not that we are still actively pointing anything at them
[20:07] actually no need to sync, just riddle me this, batman
[20:07] https://code.launchpad.net/~asanjar/charms/trusty/hdp-hadoop/trunk <-- hortonworks?
[20:08] https://code.launchpad.net/~asanjar/charms/trusty/hdp-zookeeper/trunk <-- has first traces of hortonworks charm helpers?
[20:13] lazyPower: yes, that is hortonworks.. but if you need to investigate bdutils.py (general big data charm helpers) or hdputils.py (hortonworks distro-specific charm helpers), look at ~asanjar/charms/trusty/hdp-zookeeper
[20:15] lazyPower: as soon as I get a chance, I will update hortonworks hadoop with the latest helpers..
[20:16] ok, that's all i needed
[20:16] I'm wrapping up my last fringe issues this week with vagrant that jcastro just brought to me
[20:16] Monday is when i start digging heavily into the apache hadoop rewrite
=== roadmr_afk is now known as roadmr
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
[23:31] Swift Through Horizon | http://askubuntu.com/q/505650