[06:22] Hi,
[06:23] As https://jujucharms.com/docs/devel/howto-privatecloud for 2.0 is still being rewritten, is that devel version something I can use to test 2.0 on my private OpenStack?
[07:48] junaidali, bbaqar: morning
[07:48] junaidali, so regarding xenial branches: we really don't need to do that any longer.
[07:49] the charm store does not ingest from launchpad bzr branches any longer; you publish directly to the charm store, decoupling VCS from publication altogether
[07:49] are your trusty and xenial charm versions identical?
[07:50] if so, we can use the series-in-metadata feature and just have a single charm version for both - this is what we do across all other openstack charms in the master branch (that will be released this week).
[07:50] Hi jamespage, xenial charms have a few minor changes
[07:50] junaidali, is it possible to add that into the charm, rather than having two different charms?
[07:50] i.e. conditional code for xenial vs trusty?
[07:51] junaidali, we've found it much easier to support a single codebase for multiple releases, rather than having lots of different branches for different versions
[07:52] yes, we can do that.
=== Guest90146 is now known as CyberJAcob
=== degville- is now known as degville
=== stokachu_ is now known as stokachu
=== firl_ is now known as firl
=== ejat_ is now known as ejat
=== Ursinha_ is now known as Ursinha
=== X-Istence is now known as x58
[13:54] hey, could anyone tell me how to automatically attach a resource when running an amulet test?
[15:00] "juju status" with lxd/local just hangs with no response; I've seen this before when lxd containers are not starting properly, but that does not seem to be the case now
[15:01] what's my first course of action for debugging, or for gathering enough info for a bug report?
[15:01] "juju switch default" hangs, most juju commands are hanging
[15:01] holocron: try adding --debug to the commands and see if anything jumps out
[15:02] holocron: I thought there was a bug along those lines /me goes to look
[15:02] rick_h_: thanks, yes that's obvious -- juju.api is looping on a dial to wss://x.x.x.x:17070/api
[15:03] holocron: and are you able to reach that address?
[15:03] i can ping it, yes
[15:03] it's a local lxd container, i'll exec bash into it
[15:03] holocron: then perhaps the controller went down for some reason? can you ssh/connect to the controller and check the logs there?
[15:03] holocron: maybe try to bounce jujud?
[15:04] yeah, jujud seems to be thrashing a bit
[15:04] 17330 root 20 0 49.690g 5.858g 7000 S 0.3 15.0 154:16.67 jujud
[15:05] it's not listening for that wss connection
[15:06] rick_h_: jujud isn't a service, it seems; should i restart the whole container, or is there a preferred method for just restarting the daemon?
[15:06] holocron: sudo service juju?
[15:06] holocron: let me look for the real name
[15:08] yeah, i'm not finding any service for jujud
[15:09] holocron: there you go, jujud-machine-0
[15:09] ah thank you.. i guess the tab completion isn't working in lxc exec bash
[15:11] rick_h_: were you able to find that bug? still hanging up on me here
[15:13] holocron: not yet
[15:13] holocron: did it restart for you?
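A minimal sketch of the single-codebase approach jamespage describes above (not taken from any real charm): list both releases under `series` in metadata.yaml, then branch on the running release only where trusty and xenial genuinely differ. The helper and package names below are purely illustrative assumptions.

```python
# Illustrative only: branch on the running Ubuntu release inside one charm
# instead of keeping separate trusty and xenial branches.
from charmhelpers.core.host import lsb_release


def payload_packages():
    """Return the package set for the release this unit is running."""
    codename = lsb_release()['DISTRIB_CODENAME']
    if codename == 'trusty':
        return ['python-flask']   # hypothetical trusty-only dependency
    return ['python3-flask']      # xenial and later
```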
[15:14] rick_h_: okay, yeah it did restart, but i'm getting mongodb connection errors in the log
[15:14] holocron: can you peek at the /var/log/juju/machinexxxx
[15:14] holocron: ah ok, so let's try restarting that then as well
[15:14] sudo service juju-db restart
[15:14] and then re-kick jujud after the db is restarted and see what's up with that
[15:15] holocron: is this on a laptop or something that's shut down/brought back up often?
[15:15] holocron: or some other system?
[15:15] rick_h_: it's running on a mainframe; it hasn't come down since it was working Friday night
[15:16] Oct 10 15:16:23 juju-84a348-0 systemd[1]: Failed to start juju state database.
=== CyberJAcob is now known as CyberJacob
[15:18] got a mongodb backtrace
[15:18] holocron: ah ok. well that's starting to look like a root cause
[15:18] holocron: and yeah, existing bugs don't line up with this, so we're treading new ground
[15:19] par for the course for me :P
[15:19] holocron: oh lucky you?
[15:19] holocron: what's the traceback from mongo look like?
[15:19] rick_h_: i'll paste it out in a moment
[15:22] rick_h_: https://gist.github.com/vmorris/7750b8f9d3dfaa14238df39f7628ea3a
[15:23] hmm, that doesn't have the trace in it :P
[15:23] sec
[15:25] rick_h_: please see revision 2 on that gist
=== holocron is now known as vmorris
[15:32] something called wiredtiger reporting an i/o error when attempting to read data
[15:36] i didn't run out of storage :/ using zfs on the localhost as per the normal setup
[15:36] there's really nothing out of the ordinary here except i'm running the s390x arch
[15:38] let me trace back through the log and see if i can determine a first fault
[16:29] rick_h_: could this be related? https://gist.github.com/vmorris/f81217815059c6fc748eaba8cc1b5318
[16:30] vmorris: sorry, missed that you changed nick/continued the conversation
[16:31] vmorris: wiredtiger is the mongodb storage engine
[16:31] vmorris: looking at the gist
[16:31] rick_h_: apologies for the nick switch
[16:31] vmorris: all good
[16:31] vmorris: sorry, I'm not enough of a mongodb expert to know how to decipher that
[16:32] vmorris: please feel free to file a bug with the notes on arch, etc. It sounds like mongodb bit on something and that caused Juju to bail out on you.
[16:34] rick_h_: this seems to be the case, yeah. i've had the juju controllers flaking out on me a bunch over the past few weeks, but i've chalked it up to my fooling with things. i've reset to a working clean state a bunch in the past few weeks and finally got to a good position to uncover a few things
[16:37] mbruzek, lazyPower: sup sup
[16:39] cory_fu: sup
[16:40] let's say I want to re-gen certs and re-render my nginx config when the fqdn config is changed
[16:40] would this be a good way of accomplishing ^ -> https://github.com/jamesbeedy/charm-documize/blob/master/reactive/documize.py#L200-L207 ?
[16:41] if I remove line #136 -> https://github.com/jamesbeedy/charm-documize/blob/master/reactive/documize.py#L119-L136
[16:42] If #136 is removed, would my charm be requesting a new cert every time the hook environment executes?
[17:24] arosales: ping https://bugs.launchpad.net/juju/+bug/1632030
[17:24] Bug #1632030: juju-db fails to start -- WiredTiger reports Input/output error
=== frankban is now known as frankban|afk
=== dames is now known as thedac
[17:42] Hello team, I'm getting a timeout issue while commissioning a node in MAAS 1.9. Logs are pasted here: http://pastebin.ubuntu.com/23301741/ - could somebody help me with this?
[17:45] Prabakaran_: have you confirmed that you've imported the boot images? https://maas.ubuntu.com/docs/install.html#import-the-boot-images
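On the 16:40-16:42 question about re-generating certs and re-rendering nginx when the fqdn option changes: a hedged sketch of one way to gate that work on an actual config change rather than on every hook run, using charms.reactive and charmhelpers. The state names, template name, and target path are assumptions, not the documize charm's real ones.

```python
# Hedged sketch: only re-request certs and re-render nginx when 'fqdn' changes.
from charms.reactive import when, remove_state
from charms.reactive.helpers import data_changed
from charmhelpers.core import hookenv
from charmhelpers.core.templating import render


@when('documize.installed')  # assumed state set by the install handler
def handle_fqdn_change():
    fqdn = hookenv.config('fqdn')
    # data_changed() stores a hash of the value, so this block only runs when
    # the fqdn differs from the value seen in the previous hook invocation.
    if not data_changed('documize.fqdn', fqdn):
        return
    # Clearing the (assumed) cert state lets the existing cert-request
    # handler run again instead of requesting a new cert on every hook.
    remove_state('documize.ssl.configured')
    render('nginx-vhost.conf.j2', '/etc/nginx/sites-enabled/documize',
           context={'fqdn': fqdn})
    hookenv.status_set('active', 'reconfigured for {}'.format(fqdn))
```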
[17:48] vmorris: I have imported the images in the MAAS UI
[17:50] Prabakaran_: just have to ask, but you did confirm that they were imported before trying to commission?
[17:51] It looks like you're trying to commission KVM virtual machines?
[17:51] ya, that was the 1st step i did before commissioning...
[17:51] ya correct .. i am commissioning the virsh nodes
[18:17] vmorris: thanks for the bug report :-)
[18:17] arosales: cheers :) i've torn the mess down and am about to redeploy the openstack-on-lxd bundle
[18:18] Seems it goes into a defunct state
[18:19] Will take a look and see if I can reproduce
[18:19] same
[18:19] vmorris: Ubuntu 16.04 with updates, I presume
[18:20] VERSION="16.04.1 LTS (Xenial Xerus)"
[18:23] arosales: I have a slightly modified version of the rabbitmq-server charm that i'm deploying as well, to try and get more visibility into https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1563271
[18:23] Bug #1563271: update-status hook errors when unable to connect
[18:24] Thanks for pitching in on the rabbit bug
[18:30] heh, sure, i'll surface something if it works for me.. but it's purely selfish motivation =D
[19:04] arosales: are you guys responsible for the percona-cluster charm?
[19:05] beisner, arosales: this hardcoding is a piss off -> http://bazaar.launchpad.net/~openstack-charmers-next/charms/precise/percona-cluster/trunk/view/head:/hooks/percona_utils.py#L64
[19:05] beisner, arosales: can we fix that hardcoding of the pkgs so we can use the latest version of percona?
[19:06] hi bdx, what specifically are you needing to do?
[19:06] Yes, specifically the openstack charmers
[19:07] beisner: I need features that were introduced in 5.7
[19:07] bdx, fyi, that charm's source of truth is https://github.com/openstack/charm-percona-cluster. i have a pending request out to deprecate the old LP branches, which may not always be in sync with the latest bits.
[19:07] beisner: aaah nice, thx
[19:07] bdx, on which version of ubuntu?
[19:07] beisner: xenial
[19:07] beisner: is there a problem with using the percona repos by default? https://www.percona.com/doc/percona-server/5.7/installation/apt_repo.html
[19:08] bdx - we only test with ubuntu repos and the cloud archive pockets.
[19:09] ahh
[19:09] bdx, even yakkety has 5.6 atm, so that's as bleeding edge as we get. https://launchpad.net/ubuntu/yakkety/+source/percona-xtradb-cluster-5.6
[19:10] bdx, all of that said, /me looks at that repo... :)
[19:10] darn ... so -> http://paste.ubuntu.com/23304600/
[19:11] ^ shows that 5.7 is in xenial, just not for percona?
[19:11] I see
[19:11] ok
[19:11] right, mysql might be ahead of percona-cluster packaging
[19:13] beisner: the percona-cluster charm has config options for 'source' and 'key'; if the apt package for percona wasn't hard-coded, I could then set 'source', 'key', and hypothetically 'package' and get the latest, right?
[19:15] I might just rig something up in my personal namespace for the time being - just needed the scoop on what the deal is so I know how best to move forward
[19:15] beisner: thx
[19:15] bdx, that hard-codedness is indeed a bit of crack, but it was necessary crack whilst the packaging was in limbo around the vivid:wily timeframe. with both wily and vivid now EOL, we can pull that out after the 16.10 release (too late in the feature freeze to do it now). also, i'd be all for making sure the charm can use the upstream repos, but someone will have to get pretty clever to predictively know what the package name is going to be, given that the version number is in the package name :-/
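A hedged sketch of the change bdx is describing: derive the percona package list from charm config instead of hard-coding it in percona_utils.py, so that 'source' and 'key' can point at the upstream Percona repository. The 'packages' option is hypothetical (it does not exist in the charm), and the default list below merely stands in for whatever the charm currently hard-codes.

```python
# Hypothetical sketch only: prefer an operator-supplied package list over the
# hard-coded default, so newer upstream repos can be used via 'source'/'key'.
from charmhelpers.core.hookenv import config

DEFAULT_PACKAGES = ['percona-xtradb-cluster-server-5.6']  # stand-in for the current default


def determine_packages():
    """Return packages to install, preferring a config override if set."""
    override = config('packages')  # hypothetical new config option
    if override:
        return override.split()
    return DEFAULT_PACKAGES
```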
[19:22] hi guys
[20:09] hi
[21:24] is there a python module or layer I can use to pull info from MAAS 2.0 in my charm? I want to get at the network space info
[21:28] Hey all, I'm trying to expand our juju-deployed openstack cluster, so I ran "juju add-machine". After it was stuck in a pending state for a number of minutes, I manually ssh'd into the machine and found that cloud-init had tried to run something involving 'curtin' and failed because curtin wasn't installed. I also can't find any references to anything trying to install it. Has anyone run into a similar problem?
[21:30] are you using custom images or something?
[21:33] nope
[21:34] so it looks like cloud-init is downloading some script from maas with a self-extracting archive that contains curtin inside it
[21:34] hm.
[21:53] smgoller: I've never seen curtin fail to install in our deployments; I always thought it was baked into the ubuntu images
[23:15] i'm hesitant to blame juju on this yet. we've got some network wonkiness
=== CyberJacob is now known as zz_CyberJacob
=== zz_CyberJacob is now known as CyberJacob
[23:45] smgoller: if you do apt update on that server, do you get errors?
[23:45] it looks like intermittent dns resolution problems
[23:45] but the dns server is fine, so i think it's a layer 3 thing
[23:46] spaok: so I'm less concerned about juju at the moment and am trying to make sure the infrastructure is solid
[23:46] usually a good first step :)
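The 21:24 question about reading MAAS 2.0 network space info from a charm goes unanswered in the log. One common alternative (not necessarily what the asker needed) is to ask Juju rather than MAAS: bind a charm endpoint to a space and resolve it with network-get via charmhelpers. The 'cluster' endpoint name below is a hypothetical example.

```python
# Hypothetical sketch: resolve the address a unit has on whatever MAAS space
# the 'cluster' endpoint was bound to, e.g. with
#   juju deploy mycharm --bind "cluster=my-space"
from charmhelpers.core.hookenv import network_get_primary_address, log


def cluster_space_address():
    """Return this unit's primary address on the 'cluster' binding."""
    addr = network_get_primary_address('cluster')
    log('cluster binding resolved to {}'.format(addr))
    return addr
```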