=== freeflying_away is now known as freeflying
[00:23] paulczar_: no
[00:24] lazyPower: thanks for the bugs o/
[01:10] marcoceppi: anytime
[01:10] I'm going to have more coming, I'm going to start by charming hubot.
[01:10] helllooooo node
[01:10] Wrapping up some template edits, then back to charm school
[01:14] lazyPower: there's a node.js charm, similar to the rails charm, you might want to check it out
[01:18] Thats the plan :)
=== defunctzombie_zz is now known as defunctzombie
[03:09] fyi: https://juju.ubuntu.com/docs/config-environments.html seems to be a broken link
[03:10] was from the getting started page https://juju.ubuntu.com/docs/getting-started.html
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
=== freeflying is now known as freeflying_away
[03:59] https://juju.ubuntu.com/docs/howto.html both deploy docs point to nodejs
[03:59] ah i see where i can just branch and fix it
[03:59] ill do that
[04:02] stokachu: grab charm tools from
[04:02] ppa:juju/stable
[04:02] then you can do
[04:02] mkdir charms
[04:02] sorry
[04:02] mkdir charms/precise
[04:02] cd charms/precise
[04:03] charm get nodejs
[04:03] play with it
[04:03] then
[04:03] juju deploy --repository=$(pwd)/../.. local:precise/nodejs
=== defunctzombie is now known as defunctzombie_zz
[04:03] you can also switch to your local version of the charm if you have the charmstore one deployed
[04:04] davecheney: ah is this related to the online documentation? thats what i was referring to
[04:04] davecheney: was going to update the docs as some of the urls are incorrect
[04:05] stokachu: cool
[04:06] docs are in a branch in lp:juju-core
[04:06] davecheney: cool thanks checking it out locally now
[04:06] are MP's the preferred way or does a bug need to be linked to it?
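The local-charm walkthrough above ([04:02]–[04:03]) can be collected into one script. A minimal sketch, assuming charm-tools from ppa:juju/stable provides `charm get`; the fetch and deploy commands are commented out so the sketch runs without juju installed:

```shell
# Local-repository layout for hacking on a charm, per the walkthrough above.
REPO="$(mktemp -d)"            # stands in for the charms/ dir in the chat
mkdir -p "$REPO/precise"
cd "$REPO/precise"
# charm get nodejs             # fetches the charm source into ./nodejs
mkdir -p nodejs                # stand-in for the fetched charm
# Deploy from the local repository instead of the charm store:
# juju deploy --repository="$REPO" local:precise/nodejs
echo "juju deploy --repository=$REPO local:precise/nodejs"
```

The series subdirectory matters: `local:precise/nodejs` is resolved against `<repository>/precise/nodejs`, which is why the walkthrough creates `charms/precise` rather than just `charms`.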
[04:08] stokachu: we'll take anything we can get
[04:08] davecheney: sounds good :D will get those done in a few minutes
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== _thumper_ is now known as thumper
[04:41] so many broken links, should I not worry about them under the assumption those pages will eventually be added? or should i remove the link references until a page is created
[04:41] for example, https://juju.ubuntu.com/docs/troubleshooting.html
[04:52] davecheney: ok got a MP created for my initial pass-through
=== freeflying_away is now known as freeflying
=== CyberJacob|Away is now known as CyberJacob
=== vila is now known as vila-afk-biab
=== AlanChicken is now known as alanbell
=== alanbell is now known as AlanBell
=== freeflying is now known as freeflying_away
=== vila-afk-biab is now known as vila
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
=== TheRealMue is now known as TheMue
=== scuttlemonkey_ is now known as scuttlemonkey
[15:00] hi, i have a subordinate charm that will be reused for 3 different services. Do i have to deploy a different subordinate charm for each of the services?
[15:24] yolanda, juju deploy subordinate subordinate-instance-1
[15:24] yolanda, juju deploy subordinate subordinate-instance-2
[15:24] yolanda, juju deploy subordinate subordinate-instance-3
[15:25] jamespage, ok, that's what i first tried
[15:25] but i have a problem
[15:25] the last parameter names the instance of the subordinate service
[15:25] i need a way to discriminate between if the relation is for one charm or another
[15:25] i have a configurator charm, that updates config for gerrit, zuul, jenkins
[15:26] so i will have to create 3 different interfaces then?
[15:27] we have something common for the 3, and then we have something like: if relation_ids('gerrit-configurator') : ...
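The `relation_ids('gerrit-configurator')` check yolanda quotes corresponds to the `relation-ids` hook tool in a shell hook. A minimal sketch of the per-service-interface approach, assuming interface names like `gerrit-configurator`; the hook tool is stubbed here so the fragment runs outside a hook context:

```shell
# Stub of the real `relation-ids` hook tool so this runs outside juju;
# pretend only the gerrit relation is currently joined.
relation_ids() {
    case "$1" in gerrit-configurator) echo "gerrit-configurator:0" ;; esac
}

# With a distinct interface type per service, each branch fires only for
# its own service, instead of every related service matching one shared
# interface.
configured=""
for service in gerrit zuul jenkins; do
    if [ -n "$(relation_ids "${service}-configurator")" ]; then
        configured="$configured $service"   # write $service's config here
    fi
done
echo "configured:$configured"               # prints "configured: gerrit"
```

With a single shared interface type, all three `relation_ids` calls would return ids for whichever service is related, which is exactly the cross-talk described next in the conversation.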
[15:28] then i find that when i associate that zuul it also has the gerrit-configuration relationship
[15:29] so i'll try with 3 different subordinates
[15:51] hi, just fyi i filed an MP for some juju-docs corrections
=== defunctzombie_zz is now known as defunctzombie
[16:07] stokachu: thanks for the submission!
=== defunctzombie is now known as defunctzombie_zz
[16:12] yolanda, yeah - you would need to implement three differently typed interfaces
[16:12] jamespage, ok, that works, but i wasn't sure if that was the right way
=== mhall119_ is now known as mhall119
=== kentb-out is now known as kentb
[17:26] heya jamespage
[17:26] Reminder that you're down for reviewer this week
[17:27] m_3: marcoceppi: we're still in a hole if you guys have time to dig in
[17:27] negronjl: We miss you. :)
[17:28] jcastro: lol ... miss you too people ... but they have me tied down like a slave here :/
[17:34] jcastro: ack, I've got amulet to release, but I'll poke at the queue with a hard stick soon
[17:49] marcoceppi: thanks, ive got a big project im working on that will drive more documentation to the public facing juju site
[18:07] jcastro: ack
=== defunctzombie_zz is now known as defunctzombie
[19:15] anyone on?
[19:16] zradmin: yup, though it's best to just ask your question as people might not be here right this second
=== BradCrittenden is now known as bac
[19:22] Thanks marcoceppi, I'm still having the same issue with quantum not functioning properly. It brings up all of the other bridges except br-ex on eth1 but I have confirmed that I can manually assign an address to eth1 and talk on the external network. Is there a log file I can check for openvswitch (or maybe the charm setup log) that I can check to see why its failing to setup?
[19:32] sinzui: can you join #juju-gui for a sec? We've got a promulgation question on bundles and how to link branches to series
[19:32] * sinzui #juju-gui
=== rektide_ is now known as rektide
[19:45] jcastro: around?
[19:51] or marcoceppi
[19:55] sidnei: hey
[19:56] marcoceppi: hey, just realized https://juju.ubuntu.com/Events/ is missing the charm school & talk im giving at PythonBrasil
=== defunctzombie is now known as defunctzombie_zz
[19:56] not sure how to get it updated (even if quite late by now )
[19:56] sidnei: we can usually do it but we're looked out ATM
[19:57] ok, no problem
[20:01] sidnei: I'm getting on a call, you need to mail Peter Mahnke to add it
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== CyberJacob is now known as CyberJacob|Away
=== gary_poster is now known as gary_poster|away
=== gary_poster|away is now known as gary_poster
[22:24] hazmat, what was the cavet for local provider on systems without swap?
[22:50] arosales, with encrypted home dirs.. JUJU_HOME env var needs to be set to not be in $HOME
[22:50] arosales, else reboot won't work and env will need be to destroyed before using
[22:52] hazmat, but other than than swapless system is ok with local?
[22:53] arosales, should be fine given enough mem to run core.. what's the context?
[22:53] er. to run core and mongo
[22:55] hazmat, just my fragmented memory recalling a caveat with swapless systems
[22:56] hazmat, I think I just may be hitting some if the issues thumper has recently fixed
[22:56] arosales, i can't think of a reason why a system wouldn't have swap
[22:58] hazmat, ok and thanks for the reply
[22:58] i mean.. for a laptop.. for example no suspend to disk.. no virtual memory.. you get oom to kill your processes instead overcommit.
[23:00] marcoceppi: Juju stole the show at work with the GUI. I used screenshots to present it initially, I have a functional demo scheduled next week.
[23:13] I have an issue where the juju tools can no longer communicate with our bootstrap server (which is returning 503).
SSH'ed on to the server, only thing that is odd is that the /var/log/juju/all-machines.log is rolling out tons of text, probably eat up all the free space in an hour or so. This is on AWS with juju-1.13.1-unknown-amd64 on the client and 1.14.1-precise-amd64 on the server
[23:14] ls -l
[23:24] and nvm. Turns out my JUJU_HOME was set differently after a .bashrc change
[23:27] interesting, thanks..
[23:29] bic2k: i had the same issue, you need to upgrade juju to at least 1.13.3.1 to get rid of that error, both on your main node and the bootstrap node
[23:30] zradmin: Ya, I'm stuck on whatever is ported to brew for now. Looks like I'll be able to work around it for now
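Both the local-provider caveat hazmat raised earlier ([22:50]) and bic2k's .bashrc surprise above come down to where JUJU_HOME points. A minimal sketch; the path below is illustrative, not from the conversation:

```shell
# For the local provider with an encrypted home directory, keep JUJU_HOME
# outside $HOME so the environment survives a reboot; otherwise it must be
# destroyed before it can be used again (path is illustrative).
export JUJU_HOME=/var/lib/juju-local
```

Since the setting lives in the shell environment, anything that rewrites it (for example a .bashrc change) silently points the juju client at a different environment directory.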