=== natefinch-afk is now known as natefinch | ||
=== cprov_ is now known as cprov | ||
=== psivaa_ is now known as psivaa | ||
=== plars_ is now known as plars | ||
Prabakaran | Hello Team, I have created a new bug for my charm and linked the bug ID to the trunk branch in Launchpad. As discussed over IRC, I have subscribed the charmers group to the bug, and it has been more than 12 hours, but my charm is still not appearing in the review queue. Please advise. | 05:10 |
stub | tvansteenburgh: I think the review queue is stuck | 06:00 |
firl | any openstack charmers on? | 06:36 |
Prabakaran | Hello Team, I have created a new bug for my charm and linked that bug ID to the trunk branch in Launchpad. As discussed over IRC, I have subscribed the charmers group to the bug, and it has been more than 12 hours; my charm is still not appearing in the review queue. Please advise. | 06:44 |
marcoceppi | firl: o/ | 06:45 |
marcoceppi | Prabakaran firl stub the review queue does appear stuck. Unsticking | 06:46 |
marcoceppi | Prabakaran: rebooted the review queue, it's processing the backlog now | 06:47 |
firl | hey marcoceppi | 06:47 |
firl | I have found a weird state in which it seems the openvswitch-agent doesn’t pick up the rabbitmq connection | 06:48 |
firl | I am upgrading to the latest version of openvswitch-agent to test it | 06:48 |
marcoceppi | firl: that's weird, an openstack charmer would be best. I just now realized that's what you asked for | 06:57 |
firl | :) | 06:58 |
marcoceppi | firl: they should be on in a few hours | 06:58 |
firl | cool | 06:58 |
=== bodie__ is now known as bodie_ | ||
Prabakaran | Hey Marco, I am able to see old charms now, but my new charm is not showing up in the review queue. Could you please check on this? | 07:47 |
bloodearnest | In the juju state server, what is the 'local' mongodb database for, and why is it >1GB (and growing) when all the other DBs are tens of MBs? This is with the manual provider. | 10:12 |
bloodearnest | seems it's the oplog for replication. Which is odd, as I only have one state server. Can I disable replication? Or reduce the oplog size (set via the CLI to 512)? | 10:26 |
bloodearnest | looks like agent config files support setting the oplog size explicitly (the local provider sets it to 1MB). Is there any way to set this in the environment config? | 10:52 |
bloodearnest | (this is a development setup) | 10:53 |
blahdeblah | Hi all - quick Q: is it possible to configure juju in such a way as to use LXC containers on a remote machine? (Sort of like a combination between the local provider and the manual provider?) | 12:40 |
blahdeblah | Basically I want to have a big box (whether VM or bare metal - it doesn't really matter) that hosts all my containers for juju deployments, but I want to control it from juju on my laptop. Is that possible? | 12:41 |
rick_h__ | blahdeblah: so you've got the gist of it currently. You'd have to set up the LXC containers and add them with manual as 'machines' | 12:47 |
rick_h__ | blahdeblah: the team is working on adding lxd support which will be single host, but with lxd it's possible to come back in the future and allow those lxd containers to be on another machine since lxd can handle that for us | 12:48 |
blahdeblah | rick_h__: I don't want manual; I want it to be able to spin up containers automatically like it does with the local provider. So that will need to wait for a full-fledged lxd provider? | 12:48 |
rick_h__ | blahdeblah: but the work for this cycle is just the single host scenario unfortunately | 12:48 |
rick_h__ | blahdeblah: yea, really lxd provider 2.0, 1.0 is in progress now | 12:48 |
rick_h__ | blahdeblah: it is a use case that's come up, you're ahead of us on it | 12:49 |
blahdeblah | rick_h__: so the only currently-available driver that will allow automatic container creation is the local provider? | 12:49 |
rick_h__ | blahdeblah: yes | 12:49 |
rick_h__ | blahdeblah: the lxd one will be in 1.26 scheduled for release in jan | 12:49 |
rick_h__ | blahdeblah: should be in beta ppa in dec | 12:50 |
blahdeblah | I guess I can work around that by just running juju on that other box | 12:50 |
rick_h__ | blahdeblah: hmm, commission that machine in MAAS and have juju on one machine ask MAAS for the lxc containers? | 12:51 |
rick_h__ | blahdeblah: just to add more stuff to the mix | 12:52 |
blahdeblah | rick_h__: I thought about that, but I don't really want the overhead; I might just run it with the local provider and work remotely | 12:52 |
jam | blahdeblah: so with manual you should be able to "juju add-machine 10.10.10.10" and that gets added as say machine-1 | 13:01 |
jam | then you can | 13:01 |
jam | juju deploy --to lxc:1 | 13:01 |
jam | and it will create an LXC instance on machine-1 | 13:01 |
jam | that actually lets you get to a point where you have more than one of these big machines, but you still have to tell us where you want the container to exist | 13:02 |
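A compact sketch of the flow jam describes, using his example address (the charm name below is a placeholder, and some Juju 1.x releases want an ssh: prefix, e.g. `juju add-machine ssh:ubuntu@10.10.10.10`):

```shell
# Register the big box with the environment via manual provisioning
juju add-machine 10.10.10.10      # gets added as, say, machine 1
# Deploy a unit into a fresh LXC container on that machine
juju deploy mysql --to lxc:1
```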
jcastro | anyone have the variable handy to force juju to show the new status by default? | 13:38 |
rick_h__ | jcastro: not sure but filed https://github.com/juju/docs/issues/728 to get it added to the docs | 13:44 |
rick_h__ | natefinch: wwitzel3 do you recall? ^ | 13:45 |
jcastro | marco has it memorized but he's in japan still | 13:45 |
rick_h__ | yea, I've seen it but not set it so not in my shell history | 13:45 |
Prabakaran | Hey Marco, I am able to see old charms now, but my new charm is not showing up in the review queue. Could you please check on this? | 13:49 |
Prabakaran | Hello Team, Could someone help me on this? | 13:50 |
bloodearnest | blahdeblah, I think you just need to do the juju bootstrap remotely. After that, assuming all the ports are accessible, the juju client should be able to control that local env remotely | 14:30 |
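A rough sketch of that suggestion for Juju 1.x (the hostname, environment name, and the step of copying the generated .jenv to the laptop are assumptions about the setup):

```shell
# On the big box: bootstrap the local-provider environment
juju bootstrap -e local
# On the laptop: fetch the environment file bootstrap generated; with the
# API port reachable, the client can then drive the environment remotely
scp bigbox:~/.juju/environments/local.jenv ~/.juju/environments/
juju status -e local
```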
wwitzel3 | jcastro: JUJU_CLI_VERSION=2 | 14:39 |
jcastro | ta | 15:07 |
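For reference, wwitzel3's variable in use, as a minimal sketch:

```shell
# Tell the Juju 1.x client to render the new-style status output by default
export JUJU_CLI_VERSION=2
juju status
```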
arosales | kjackal: welcome to the juju channel | 15:18 |
arosales | kwmonroe: cory_fu: admcleod- ^ kjackal will be working on big data with you | 15:19 |
kwmonroe | woohoo - welcome kjackal! | 15:20 |
Prabakaran | Hey Marco, I am able to see old charms now, but my new charm is not showing up in the review queue. Could you please check on this? | 15:28 |
admcleod- | kjackal: welcome | 15:32 |
cory_fu | kjackal: Sorry, was in a UOS session. Welcome! | 15:55 |
Prabakaran | Hello Team, my newly submitted charm is not showing up in the charm review queue. Please advise on this. | 16:20 |
Icey | I just tore down an amazon environment and have re-bootstrapped and deployed charms, they have been 'allocating' now for over 10 minutes | 16:32 |
Icey | juju's debug logs just show "error fetching public address: public no address" | 16:33 |
Icey | over and over | 16:33 |
cory_fu | Prabakaran: Can you provide a link to your charm? | 16:33 |
cory_fu | And the bug report | 16:34 |
Prabakaran | My charm link is https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-platform-rtm/trunk | 16:36 |
Prabakaran | Bug link is https://bugs.launchpad.net/ibmcharms/+bug/1510216 | 16:37 |
mup | Bug #1510216: New Charm: IBM Platform RTM <IBM Charms:New> <https://launchpad.net/bugs/1510216> | 16:37 |
firl | anyone know if there is an easy way to tell the juju client to re-run all the config values on a host? | 16:46 |
firl | for example: I have to ssh into a machine and tell it to reload via “sudo service jujud-unit-neutron-openvswitch-21 restart" | 16:47 |
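firl's workaround as a block for clarity (the job name is specific to his deployment; no built-in "re-run all config" command came up in the discussion):

```shell
# Bounce the unit agent's init job so the unit re-reads its state and
# re-establishes its connections
sudo service jujud-unit-neutron-openvswitch-21 restart
```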
Icey | cannot start instance for machine "1": tagging root disk: timed out waiting for EBS volume to be associated | 16:53 |
cory_fu | tvansteenburgh: Do you by chance have access to the RQ? | 16:55 |
Prabakaran | Are there any changes in the process of creating a bug and subscribing the charmers group, as my new charm is not showing up in the review queue? | 17:16 |
lazypower | Prabakaran: no change in process so far. | 17:38 |
lazypower | Prabakaran: Marco restarted the revq ingest, and it may be several hours behind. We'll be tracking this, as Marco is in Japan. Once he's available we'll circle back and check on the RQ status of your bug. Thanks for the bug link, we'll use it to cross-reference | 17:39 |
=== ericsnow is now known as ericsnow_afk | ||
jamespage | narindergupta, hey - around? | 17:47 |
narindergupta | jamespage: yeah | 17:47 |
jamespage | narindergupta, looking at https://code.launchpad.net/~nuage-canonical/charms/trusty/nova-compute/next/+merge/276669 | 17:48 |
jamespage | is that change related to the query you sent me via email last week? | 17:48 |
narindergupta | jamespage: yes | 17:48 |
jamespage | narindergupta, was that specific to an openstack release? | 17:48 |
narindergupta | jamespage: nuage mentioned that its needed for kilo as well | 17:48 |
jamespage | narindergupta, it is, but it's probably been working OK in kilo as the old deprecated option had not been removed | 17:49 |
narindergupta | jamespage: | 17:49 |
narindergupta | jamespage: what are deprecated options? | 17:50 |
narindergupta | jamespage: i can look for those options with Nuage. | 17:50 |
jamespage | narindergupta, the change is fine - the openstack projects normally support old configuration options for a few cycles when they get moved around | 17:51 |
jamespage | so before, this was a DEFAULT section item (neutron_ovs_bridge=alubr0); now it's a section config option | 17:51 |
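For context, a sketch of the move jamespage describes, assuming the option's new home in nova.conf is the [neutron] group as ovs_bridge (the new section and key names are a best guess, not confirmed in the discussion):

```ini
# Old, deprecated location (still honoured for a few cycles):
[DEFAULT]
neutron_ovs_bridge = alubr0

# New location:
[neutron]
ovs_bridge = alubr0
```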
jamespage | narindergupta, anyway it's landed | 17:51 |
narindergupta | jamespage: could be but thanks | 17:52 |
jamespage | narindergupta, it is :-) | 17:52 |
narindergupta | but i am sure other plugins might face a similar issue | 17:52 |
narindergupta | jamespage: thanks | 17:52 |
jamespage | narindergupta, just an observation - but no need to resubmit merge proposals every time you push to a branch - the existing MP will be updated automatically | 17:52 |
narindergupta | jamespage: ok thanks for letting me know. I was unaware about this feature. | 17:53 |
jamespage | narindergupta, ok I've landed all of the merge proposals you detailed earlier | 18:03 |
narindergupta | jamespage: thanks james | 18:04 |
narindergupta | jamespage: i will update nuage accordingly | 18:04 |
narindergupta | jamespage: i have a meeting with Subu today evening and will ask him to update the nuage-specific review comments as well. | 18:04 |
tvansteenburgh | cory_fu: no i don't | 18:15 |
Prabakaran | Thanks <lazypower> | 18:32 |
=== ericsnow_afk is now known as ericsnow | ||
aisrael | tvansteenburgh: Just ran into a bundletester dependency issue: http://pastebin.ubuntu.com/13105165/ | 20:12 |
aisrael | Fresh vm, after I ran `pip install bundletester` | 20:12 |
aisrael | and a different error after I manually installed it: http://pastebin.ubuntu.com/13105175/ | 20:13 |
aisrael | I fixed that with `pip install -U bundletester`, but then ran into http://pastebin.ubuntu.com/13105186/ | 20:14 |
aisrael | Am I installing it wrong? | 20:14 |
tvansteenburgh | aisrael: charmbox installs it via pip https://github.com/juju-solutions/charmbox/blob/master/install-review-tools.sh | 20:15 |
aisrael | tvansteenburgh: Odd. Maybe I missed installing one of the dependencies. After I finish with rq, I'll wipe the machine clean and reinstall and see if I can recreate it | 20:16 |
tvansteenburgh | aisrael: run bundletester with -Fvl DEBUG instead | 20:16 |
tvansteenburgh | aisrael: it should install its own deps | 20:17 |
aisrael | tvansteenburgh: ack, thanks | 20:18 |
tvansteenburgh | aisrael: i just ran bundletester inside the latest charmbox, no errors | 20:18 |
tvansteenburgh | aisrael: the -v should give you more output, hopefully that'll help | 20:18 |
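A hedged workaround sketch combining the commands from this exchange (isolating in a virtualenv is an assumption, not something charmbox does):

```shell
# Install bundletester into a clean virtualenv so its Python dependencies
# resolve independently of whatever the system already has
virtualenv ~/bt-env && . ~/bt-env/bin/activate
pip install -U bundletester
# Run with the flags tvansteenburgh suggested (more output, DEBUG logging)
bundletester -Fvl DEBUG
```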
urthmover | the first node that I am attempting to add is outputting "iscsistart: cannot make a connection to 10.43.201.11:3260 (-1,101)" during the PXE boot process. How do I fix this? | 21:18 |
urthmover | I'm booting the 14.04 LTS amd64 image | 21:18 |
urthmover | the maas server does appear to be listening tcp 0 0 0.0.0.0:3260 0.0.0.0:* LISTEN - off (0.00/0/0) | 21:19 |
urthmover | I was able to create a private network within vmware and once the gateway was available PXE worked and I'm at a login prompt | 21:41 |
=== ericsnow is now known as bobsnow | ||
=== bobsnow is now known as ericsnow | ||
urthmover | can I run juju without using the wakeonlan functionality? I do not mind having every node on all the time | 21:54 |
opencell | Hi | 21:56 |
urthmover | hey | 21:56 |
opencell | i created a charm for my open source billing software: https://github.com/opencellsoft/juju-charms | 21:57 |
opencell | to use it i need to deploy the postgresql charm | 21:58 |
opencell | then to add a relation between opencell and postgresql | 21:58 |
opencell | my question is about add-relation | 21:58 |
opencell | if i do "juju add-relation opencell postgresql:db" before the opencell charm is fully deployed, it doesn't work | 21:59 |
=== natefinch is now known as natefinch-af | ||
=== natefinch-af is now known as natefinch-afk | ||
opencell | so my question is: is there a way to wait until the charm deploy is finished before doing the add-relation? | 22:00 |
opencell | for information, at the end of my install hook, i've added: status-set blocked "Waiting for active database connection" | 22:01 |
thumper | opencell: hey there | 22:17 |
thumper | with juju 1.24 and above, the unit can set its status, along with a message | 22:17 |
thumper | so when the install hook runs, it can download and install bits it knows it needs but not start the service | 22:18 |
thumper | and leave a message saying it is blocked until the postgres relation is added | 22:19 |
thumper | make the actual starting not happen until the db relation-joined hook gets called | 22:19 |
thumper | hmm, just finished reading all you said | 22:19 |
thumper | yes, your status-set is the right thing | 22:20 |
thumper | I'm not sure what you mean by it not working if you add the relation before it is deployed | 22:20 |
thumper | the hooks are called in a defined order | 22:20 |
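A minimal sketch of the pattern thumper describes, as a pair of hooks (package and service names are placeholders, not opencell's actual ones; status-set requires Juju 1.24+):

```bash
#!/bin/bash
# hooks/install -- install everything needed, but do not start the service yet
set -e
apt-get install -y default-jre    # placeholder dependency
status-set blocked "Waiting for active database connection"
```

```bash
#!/bin/bash
# hooks/db-relation-joined -- postgresql is now related; configure and start
set -e
db_host=$(relation-get host)      # connection details published by postgresql
if [ -z "$db_host" ]; then
    exit 0                        # data may arrive later, in relation-changed
fi
# ...render the application's config from the relation data, then:
service opencell start            # placeholder service name
status-set active "Ready"
```

Since juju runs hooks in a defined order, adding the relation before the deploy settles is safe: the db-relation-joined hook simply fires once both sides are up.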
urthmover | Is it possible to run juju without the power functionality? I'm inside of a hosted vmware environment and do not have shell access to the ESX 5.5 hosts. | 22:35 |
opencell | sorry thumper | 23:20 |
opencell | what i do is juju add-relation opencell postgresql:db when the charm's message is "Waiting for agent initialization to finish" | 23:21 |
thumper | that should be fine | 23:24 |
opencell | hmm, my bad, really sorry ... i tried to reproduce it again, but all is fine | 23:31 |
opencell | sorry again to disturb, i'm a beginner with juju, so i maybe did something wrong before the add-relation. Thx a lot for your time, and long life to juju, which is a great tool! | 23:37 |