=== natefinch-afk is now known as natefinch
=== cprov_ is now known as cprov
=== psivaa_ is now known as psivaa
=== plars_ is now known as plars
[05:10] Hello Team, I have created a new bug for my charm and linked the bug id to the trunk branch in Launchpad. As discussed over IRC I have subscribed the bug to the charmers group, and it has been more than 12 hrs but my charm is still not reflecting in the review queue. Please advise on the same.
[06:00] tvansteenburgh: I think the review queue is stuck
[06:36] any openstack charmers on
[06:36] ?
[06:44] Hello Team, I have created a new bug for my charm and linked that bug id to the trunk branch in Launchpad. As discussed over IRC I have subscribed the bug to the charmers group, and it has been more than 12 hrs but my charm is still not reflecting in the review queue. Please advise on the same.
[06:45] firl: o/
[06:46] Prabakaran firl stub the review queue does appear stuck. Unsticking
[06:47] Prabakaran: rebooted the review queue, it's processing the backlog now
[06:47] hey marcoceppi
[06:48] I have found a weird state in which it seems the openvswitch-agent doesn't pick up the rabbitmq connection
[06:48] I am upgrading to the latest version of openvswitch-agent to test it
[06:57] firl: that's weird, an openstack charmer would be best. I just now realized that's what you asked for
[06:58] :)
[06:58] firl: they should be on in a few hours
[06:58] cool
=== bodie__ is now known as bodie_
[07:47] Hey Marco, I am able to see old charms now but my new charm is not reflecting in the review queue. Could you please check on this?
[10:12] In the juju state server, what is the 'local' mongodb database for, and why is it >1GB (and growing) when all other dbs are tens of MBs? This is with the manual provider.
[10:26] seems it's the oplog for replication. Which is odd, as I only have one state server. Can I disable replication? Or reduce the oplog size (set via the cli to 512)
[10:52] looks like agent config files support setting the oplog size explicitly (the local provider sets it to 1MB). Is there any way to set this in the environment config?
[10:53] (this is a development setup)
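For reference, the 'local' database stub is asking about is where mongod keeps its replication oplog, and the configured size can be inspected from the mongo shell. A rough sketch, assuming shell access to the state server machine; juju's mongod conventionally listens on port 37017 and normally also requires --ssl plus the credentials from the machine agent's configuration, which are omitted here:

    # print the configured oplog size and how much of the log window is in use
    mongo --port 37017 local --eval 'db.printReplicationInfo()'
    # mongod itself takes the size (in MB) at startup, e.g.
    #   mongod --oplogSize 512 ...

The chat suggests the agent config file can set this explicitly, but the exact key name isn't quoted there, so it isn't shown here.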
[12:40] Hi all - quick Q: is it possible to configure juju in such a way as to use LXC containers on a remote machine? (Sort of like a combination between the local provider and the manual provider?)
[12:41] Basically I want to have a big box (whether VM or bare metal - it doesn't really matter) that hosts all my containers for juju deployments, but I want to control it from juju on my laptop. Is that possible?
[12:47] blahdeblah: so you've got the gist of it currently. You'd have to set up the lxc containers and add them with manual as 'machines'
[12:48] blahdeblah: the team is working on adding lxd support which will be single host, but with lxd it's possible to come back in the future and allow those lxd containers to be on another machine since lxd can handle that for us
[12:48] rick_h__: I don't want manual; I want it to be able to spin up containers automatically like it does with the local provider. So that will need to wait for a full-fledged lxd provider?
[12:48] blahdeblah: but the work for this cycle is just the single host scenario unfortunately
[12:48] blahdeblah: yea, really lxd provider 2.0, 1.0 is in progress now
[12:49] blahdeblah: it is a use case that's come up, you're ahead of us on it
[12:49] rick_h__: so the only currently-available driver that will allow automatic container creation is the local provider?
[12:49] blahdeblah: yes
[12:49] blahdeblah: the lxd one will be in 1.26, scheduled for release in jan
[12:50] blahdeblah: should be in the beta ppa in dec
[12:50] I guess I can work around that by just doing the juju on that other box
[12:51] blahdeblah: hmm, commission that machine in maas and have juju on one machine ask maas for the lxc containers?
[12:52] blahdeblah: just to add more stuff to the mix
[12:52] rick_h__: I thought about that, but I don't really want the overhead; I might just run it with the local provider and work remotely
[13:01] blahdeblah: so with manual you should be able to "juju add-machine 10.10.10.10" and that gets added as, say, machine-1
[13:01] then you can
[13:01] juju deploy --to lxc:1
[13:01] and it will create an LXC instance on machine-1
[13:02] that actually lets you get to a point where you have more than one of these big machines, but you still have to tell us where you want the container to exist
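A condensed sketch of the workflow marcoceppi outlines above, assuming a juju 1.x environment; the address, machine number, and charm name are placeholders, and depending on the juju version add-machine may want the ssh: prefix shown here rather than the bare address from the chat:

    # enlist the big box as a machine in the environment (becomes e.g. machine 1)
    juju add-machine ssh:ubuntu@10.10.10.10
    # deploy a charm into a fresh LXC container on that machine
    juju deploy mysql --to lxc:1
    # further units can be placed the same way
    juju add-unit mysql --to lxc:1

As rick_h__ notes, you still choose the host machine explicitly; juju will not spread containers across several such machines on its own.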
[13:38] anyone have the variable handy to force juju to show the new status by default?
[13:44] jcastro: not sure, but filed https://github.com/juju/docs/issues/728 to get it added to the docs
[13:45] natefinch: wwitzel3 do you recall? ^
[13:45] marco has it memorized but he's in japan still
[13:45] yea, I've seen it but not set it so it's not in my shell history
[13:49] Hey Marco, I am able to see old charms now but my new charm is not reflecting in the review queue. Could you please check on this?
[13:50] Hello Team, Could someone help me on this?
[14:30] blahdeblah, I think you just need to do juju bootstrap remotely. After that, assuming all the ports are accessible, the juju client should be able to control that local env remotely
[14:39] jcastro: JUJU_CLI_VERSION=2
[15:07] ta
[15:18] kjackal: welcome to the juju channel
[15:19] kwmonroe: cory_fu: admcleod- ^ kjackal will be working on big data with you
[15:20] woohoo - welcome kjackal!
[15:28] Hey Marco, I am able to see old charms now but my new charm is not reflecting in the review queue. Could you please check on this?
[15:32] kjackal: welcome
[15:55] kjackal: Sorry, was in a UOS session. Welcome!
[16:20] Hello Team, a newly submitted charm is not reflecting in the charm review queue. Please advise on this.
[16:32] I just tore down an amazon environment and have re-bootstrapped and deployed charms, they have been 'allocating' now for over 10 minutes
[16:33] juju's debug logs just show error fetching public address: public no address]
[16:33] over and over
[16:33] Prabakaran: Can you provide a link to your charm?
[16:34] And the bug report
[16:36] My charm link is https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-platform-rtm/trunk
[16:37] Bug link is https://bugs.launchpad.net/ibmcharms/+bug/1510216
[16:37] Bug #1510216: New Charm: IBM Platform RTM
[16:46] anyone know if there is an easy way to tell the juju client to re-run all the config values on a host?
[16:47] for example: I have to ssh into a machine and tell it to reload via "sudo service jujud-unit-neutron-openvswitch-21 restart"
[16:53] cannot start instance for machine "1": tagging root disk: timed out waiting for EBS volume to be associated
[16:55] tvansteenburgh: Do you by chance have access to the RQ?
[17:16] Are there any changes in the process of creating a bug and subscribing the charmers group? My new charm is not reflecting in the review queue.
[17:38] Prabakaran: no change in process so far.
[17:39] Prabakaran: Marco restarted the revq ingest, and it may be several hours behind. We'll be tracking this, as marco is in Japan. Once he's available we'll circle back and check on the RQ status of your bug. Thanks for the bug link, we'll use it to cross-reference
=== ericsnow is now known as ericsnow_afk
[17:47] narindergupta, hey - around?
[17:47] jamespage: yeah
[17:48] narindergupta, looking at https://code.launchpad.net/~nuage-canonical/charms/trusty/nova-compute/next/+merge/276669
[17:48] is that change related to the query you sent me via email last week?
[17:48] jamespage: yes
[17:48] narindergupta, was that specific to an openstack release?
[17:48] jamespage: nuage mentioned that it's needed for kilo as well
[17:49] narindergupta, it is, but it's probably been working OK in kilo as the old deprecated option had not been removed
[17:49] jamespage:
[17:50] jamespage: what are deprecated options?
[17:50] jamespage: i can look for those options with Nuage.
[17:51] narindergupta, the change is fine - the openstack projects normally support old configuration options for a few cycles when they get moved around
[17:51] so before this was a DEFAULT section item (neutron_ovs_bridge=alubr0), now it's a section config option
[17:51] narindergupta, anyway it's landed
[17:52] jamespage: could be, but thanks
[17:52] narindergupta, it is :-)
[17:52] but i am sure other plugins might face a similar issue
[17:52] jamespage: thanks
[17:52] narindergupta, just an observation - but no need to resubmit merge proposals every time you push to a branch - the existing MP will be updated automatically
[17:53] jamespage: ok, thanks for letting me know. I was unaware of this feature.
[18:03] narindergupta, ok I've landed all of the merge proposals you detailed earlier
[18:04] jamespage: thanks james
[18:04] jamespage: i will update nuage accordingly
[18:04] jamespage: i have a meeting with Subu this evening and will ask them to update the nuage-specific review comments as well.
[18:15] cory_fu: no i don't
[18:32] Thanks
=== ericsnow_afk is now known as ericsnow
[20:12] tvansteenburgh: Just ran into a bundletester dependency issue: http://pastebin.ubuntu.com/13105165/
[20:12] Fresh vm, after I ran `pip install bundletester`
[20:13] and a different error after I manually installed it: http://pastebin.ubuntu.com/13105175/
[20:14] I fixed that with `pip install -U bundletester`, but then ran into http://pastebin.ubuntu.com/13105186/
[20:14] Am I installing it wrong?
[20:15] aisrael: charmbox installs it via pip https://github.com/juju-solutions/charmbox/blob/master/install-review-tools.sh
[20:16] tvansteenburgh: Odd. Maybe I missed installing one of the dependencies. After I finish with rq, I'll wipe the machine clean, reinstall, and see if I can recreate it
[20:16] aisrael: run bundletester with -Fvl DEBUG instead
[20:17] aisrael: it should install its own deps
[20:18] tvansteenburgh: ack, thanks
[20:18] aisrael: i just ran bundletester inside the latest charmbox, no errors
[20:18] aisrael: the -v should give you more output, hopefully that'll help
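A minimal sketch of the commands discussed above, assuming a machine with pip available; the flags are exactly the ones tvansteenburgh suggests, and the run is typically done from inside the charm directory being tested:

    # install (or upgrade) bundletester; it should pull in its own deps
    pip install -U bundletester
    # run it with extra verbosity and debug-level logging, per the suggestion above
    bundletester -Fvl DEBUG

Charmbox installs bundletester the same way via its install-review-tools.sh script, so running inside the latest charmbox is a reasonable cross-check if a local install misbehaves.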
[21:18] the first node that I am attempting to add is outputting "iscsistart cannot make a connection to 10.43.201.11:3260 (-1,101)" during the PXE boot process. How do I fix this?
[21:18] I'm booting the 14.04 LTS amd64 image
[21:19] the maas server does appear to be listening: tcp 0 0 0.0.0.0:3260 0.0.0.0:* LISTEN - off (0.00/0/0)
[21:41] I was able to create a private network within vmware, and once the gateway was available PXE worked and I'm at a login prompt
=== ericsnow is now known as bobsnow
=== bobsnow is now known as ericsnow
[21:54] can I run juju without using the wakeonlan functionality? I do not mind having every node on all the time
[21:56] Hi
[21:56] hey
[21:57] i created a charm for my open source billing software: https://github.com/opencellsoft/juju-charms
[21:58] to use it i need to deploy the postgresql charm
[21:58] then add-relation between opencell and postgresql
[21:58] my question is about add-relation
[21:59] if i do "juju add-relation opencell postgresql:db" before the opencell charm is fully deployed, it doesn't work
=== natefinch is now known as natefinch-af
=== natefinch-af is now known as natefinch-afk
[22:00] so my question is: is there a way to wait until the charm deploy is finished before doing the add-relation?
[22:01] for information, at the end of my install hook i've added: status-set blocked "Waiting for active database connection"
[22:17] opencell: hey there
[22:17] with juju 1.24 and above, the unit can set its status, along with a message
[22:18] so when the install hook runs, it can download and install the bits it knows it needs but not start the service
[22:19] and leave a message saying it is blocked until the postgres relation is added
[22:19] make the actual starting not happen until the db relation-joined hook gets called
[22:19] hmm, just finished reading all you said
[22:20] yes, your status-set is the right thing
[22:20] I'm not sure what you mean by it not working if you add the relation before it is deployed
[22:20] the hooks are called in a defined order
[22:35] Is it possible to run juju without the power functionality? I'm inside of a hosted vmware environment and do not have shell access to the ESX 5.5 hosts.
[23:20] sorry thumper
[23:21] what i do is juju add-relation opencell postgresql:db when the charm message is "Waiting for agent initialization to finish"
[23:24] that should be fine
[23:31] hum, my bad, really sorry ... i tried to reproduce it again, but all is fine
[23:37] sorry again to disturb, i'm a beginner with juju, so maybe i did something wrong before the add-relation. Thx a lot for your time and long life to juju, which is a great tool!
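A bare-bones sketch of the pattern thumper describes, assuming a classic bash-hook charm; status-set and relation-get are standard juju hook tools, but the relation key names (host, database) are illustrative and should be checked against what the postgresql charm actually publishes on its db interface:

    #!/bin/bash
    # hooks/install: fetch and install the software, but do not start it yet;
    # flag the unit as blocked until the database relation shows up
    set -e
    # ... download/install the opencell bits here ...
    status-set blocked "Waiting for active database connection"

    #!/bin/bash
    # hooks/db-relation-changed: fires (possibly several times) after
    # "juju add-relation opencell postgresql:db"
    set -e
    host=$(relation-get host)
    database=$(relation-get database)
    if [ -z "$host" ] || [ -z "$database" ]; then
        # postgresql has not published its connection details yet; try again later
        status-set waiting "Waiting for postgresql connection details"
        exit 0
    fi
    # ... write the opencell datasource config and start the service ...
    status-set active "Ready"

Because the hooks are called in a defined order, it should not matter whether add-relation is issued before or after the opencell unit finishes installing; the relation hooks simply fire once both sides are ready.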