=== kadams54 is now known as kadams54-away
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== kadams54_ is now known as kadams54-away
=== urulama_ is now known as urulama
=== erkules_ is now known as erkules
[09:46] where do bugs on jujucharms.com go now?
=== CyberJacob|Away is now known as CyberJacob
=== kadams54 is now known as kadams54-away
[12:08] stub: https://github.com/CanonicalLtd/jujucharms.com
[12:10] marcoceppi: Ta. I also found the charmworld project on Launchpad, configured for accepting bugs, so added it there.
[12:10] stub: charmworld was the old jujucharms.com
[12:11] it's switched teams and what not
[12:11] marcoceppi: More bugs to file then :) It is still linked in the footer, and the LP project should point to the external bug tracker.
[12:11] * stub files bugs
[12:13] stub: good point, thanks!
[12:45] hi all
[12:45] http://paste.ubuntu.com/9426602/
[12:45] can someone help
=== CyberJacob is now known as CyberJacob|Away
[12:52] ejat: have you changed anything in environments.yaml?
[12:58] marcoceppi: nope
[12:58] ejat: fwiw, 1.20.14 was released recently
[12:58] ok ..
[12:59] errr
[12:59] sorry, is being proposed for release
[12:59] let me try update 1st
[12:59] I don't see anything regarding that in the bugfix log though
[12:59] one sec
[12:59] ok ..
[13:00] marcoceppi: another thing ... does juju jitsu still work?
[13:00] to export n import to another environment?
[13:01] jitsu hasn't worked for over a year, that's for juju < 0.7
[13:01] opss .. my bad ..
[13:01] ejat: you can still export an environment, it's called bundles
[13:01] so that means I need to load the bundle file using juju-gui?
[13:01] ejat: yeah, that's the new "export/import" feature
[13:02] need to bring up juju-gui in the other environment 1st, then import the bundle?
[13:02] brb.,,
[13:04] marcoceppi: so now .. no automatic way to transfer the bundle to another environment?
[13:04] ejat: no
[13:05] ok noted :)
[13:05] * ejat catching up on things
[13:13] marcoceppi: im using 1.21-beta3-utopic-i386
[13:15] should i file a bug?
[13:23] sinzui: thanks
[13:23] marcoceppi: http://curtis.hovey.name/2014/06/12/migrating-juju-to-hp-clouds-horizon/
[13:24] its because of the region .... poor me ..
=== roadmr is now known as roadmr_afk
[16:11] hi there
[16:11] does anyone have experience with setting up openstack via juju?
[16:13] icerain_: a bit, what's up?
[16:14] we're trying to set up openstack using the multi-install script
[16:14] but the process always gets stuck bootstrapping juju
[16:15] we always get permission denied because of the ssh keys
[16:17] first we set up an ubuntu server and then a maas
[16:17] commissioned a node for juju
[16:17] and that's it
[16:18] "Remote host authentication has changed" is the error message
=== roadmr_afk is now known as roadmr
[16:22] actually i think we made an error between installing maas and juju
[16:23] we have 6 nics per node and use eth4 for external and eth5 for internal communication
[16:24] icerain_: I think it might be best if you summarize your setup, what you've done so far and the errors you're getting, and either post on http://askubuntu.com or email them to juju@lists.ubuntu.com
[16:25] I'm not too experienced with openstack and juju, but by having them recorded I can help find the people who can answer your questions
[16:25] kk, thank u anyway
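
(An aside on the error above: "Remote host authentication has changed" usually means the client has a stale SSH host key for the node, which is common when a MAAS node has been re-commissioned and its host key regenerated. A minimal sketch of the usual fix; the hostname is a placeholder for whatever host juju is trying to reach.)

    # Drop the stale known_hosts entry for the node (placeholder name),
    # then retry the bootstrap with verbose output:
    ssh-keygen -R node01.maas
    juju bootstrap --debug
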
[17:24] hi! Can someone help me with a little problem? I have a relation between one unit and a postgresql one. The machine postgresql is on changed ip. Juju recognized it, but the relation is still getting the old one, making that service update its config file with the old connection ip
[17:24] how can I force the relation to receive that new ip?
[17:26] hackedbellini: that's dependent on how it's receiving the address - is it looking at remote-get private-address?
[17:28] lazyPower: let me take a look at the charm code, just a src
[17:28] sec*
[17:29] lazyPower: it's using relation-get
[17:29] hackedbellini: which provider is this?
[17:30] hackedbellini: and juju version would be helpful as well
[17:31] juju version 1.20.1
[17:31] provider? It's a relation between postgresql (latest charm version) and gerrit (a local one I cloned before canonical-ci became private)
[17:32] lazyPower: it gets the hostname by using charmhelpers relation_get: relation_get('host', rid=relid, unit=unit)
[17:33] hackedbellini: ah that sounds like it's using a cached config that's being sent over the wire - and it may be sending a full postgresql connection string - i'm not 100% familiar - are you in a debug-hooks session and can verify?
[17:33] if you are - just running `relation-get` in the relationship context will give you the full output of what's coming over the wire, and give us a good frame of reference for debugging
[17:33] to isolate if this is a juju bug or a postgresql charm bug
[17:34] lazyPower: how can I enter a debug-hooks session?
[17:34] hackedbellini: juju debug-hooks service/#
[17:34] this will place you in a tmux session - you'll want to open another terminal and issue the command `juju resolved --retry service/#`
[17:35] assuming it was the relationship hook that failed - it should place you in the context of that relationship hook.
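
(The workflow lazyPower outlines, as one copy-pasteable sketch; the unit names are placeholders, and this assumes a relation hook has actually failed on the unit.)

    # Terminal 1: opens a tmux session on the unit and waits for hooks to fire
    juju debug-hooks gerrit/0
    # Terminal 2: re-queue the failed hook so it runs inside that session
    juju resolved --retry gerrit/0
    # Inside the hook context (terminal 1), dump everything the remote
    # unit is sending over the relation:
    relation-get - postgresql/0
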
[17:37] lazyPower: it's not failing the service, unfortunately
[17:37] hackedbellini: ok - so we have to go a bit deeper - do you have time to run through another deployment of your service that's consuming postgres?
[17:38] lazyPower: what do you mean?
[17:39] hackedbellini: adding another unit - attaching a debug-hooks session to that unit that's under deployment to obtain the data.
[17:39] actually 1 moment, there may be a quicker way to do this
[17:39] let me check the manpages
[17:46] hackedbellini: looking over the code it appears this is coming from some kind of cached config - i'm having trouble locating where master_host is getting set as it's a parameter, but that's what's being assigned to the host= var
[17:47] hackedbellini: and this is with relation to postgresql clustering - it appears it does an internal quorum and sets this ip - which is the culprit - so it's a bug against the postgresql charm
[17:48] lazyPower: hrmmm, I see. I'll take a look at the postgresql charm too to see if I can help you find where it's storing the cache
[17:48] hackedbellini: can you file a bug against the postgresql charm about this? as I can foresee this being a major thorn in the side of any postgresql admins that experience an outage.
[17:49] if we can't get to it, i bet stub can iron this out in an iteration or two
[17:53] lazyPower: now that I look, it's not only that... postgresql still has gerrit's old ip in its pg_hba
[17:53] If changing the machine's ip does anything, it will call the config-changed hook. I don't think relation-changed hooks get triggered, so the new ip will never be published
[17:54] db_relation_joined_changed is calling hookenv.unit_private_ip() to get the ip, so it isn't cached afaict.
[17:54] ah ok
[17:55] stub: so what can I do in this situation to force postgresql and gerrit to get their new ips?
[17:55] i didn't get very deep in the code - i'm multi-tasking this, sorry about the red herring - your analysis is sound stub.
[17:55] i can see that being the case.
[17:55] I may be out of date though. Last I heard, you can't change a unit's ip address for any service.
[17:56] stub: really?
[17:56] hackedbellini: like I said - I might be out of date.
[17:56] if you change the relation at the client end, it will invoke the server's relation-changed hook and the new host will be published.
[17:57] stub: hrm ok. But well, some ips changed =P. How can I change the "cache" that's holding those ips?
[17:57] lazyPower: hey man, do you happen to know of any way to retrieve an environment password if you've lost your jenv files &c? this is on a machine i still have non-juju ssh access to (manual provider).
[17:57] juju run --unit=client/0 relation-set -r db:42 something=whatever
[17:57] jcsackett: oo good question - I think it's stored in mongodb but i don't have the foggiest idea where that document would be.
[17:57] jcsackett: and i would imagine it's salted in the database
[17:57] stub: hrmmmm, let's try that
[17:58] hackedbellini: Or you can just explicitly set the host attribute on the relation using juju run too
[17:58] lazyPower: yeah, that's what i was afraid of. i have a whole owncloud and other stuff set up i don't want to get rid of, but i now have no juju access to it... had an HD die a few weeks ago and just now realized that juju env files weren't part of my backup...
[17:59] If I am out of date and you can change a unit's ip address after deployment, please file a bug :)
[17:59] jcsackett: Really sorry to hear that - a core dev might be able to help you recover that info though.
[17:59] stub: the "-r db:42". What does that mean?
[17:59] lazyPower: good point. i'll bug core. thanks.
[18:00] stub: juju recognized the ip change, but the relation is still getting the old one... maybe I need to run relation-set
[18:00] it shouldn't, but it's an acceptable solution atm =P
[18:00] hackedbellini: db:42 is an example relation id. 'juju run --unit=client/0 relation-ids db' or 'db-admin' will list them
[18:01] stub: hrmmm, nice! Let me try that and I'll give you feedback to say if that worked
[18:02] hackedbellini: Worst case, you can use juju run like this to override any values in the relation you like... just try not to blow your foot off :)
[18:02] stub: you just dropped some science on me about -r relation:id
[18:04] lazyPower: Just test before repeating - I'm knee deep in something else and doing this by memory ;)
[18:05] launchpad.net/juju-relinfo if you want a plugin to reduce the typing for the relation-get/relation-sets
[18:06] stub: I think it worked! Thank you!
[18:06] and thank you lazyPower for taking the time to see that with me
[18:06] np
[18:06] hackedbellini: no worries :) I'm happy we didn't traverse the original route i was proposing
[18:06] that would have been long winded to determine blame
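
(stub's workaround condensed into one runnable sequence; the unit name, relation id and address are placeholders for this example.)

    # List the relation ids for the db interface as seen from the client unit
    juju run --unit=gerrit/0 'relation-ids db'    # prints e.g. db:42
    # Publish the corrected host on that relation; the server's
    # relation-changed hook fires and picks up the new address
    juju run --unit=gerrit/0 'relation-set -r db:42 host=10.0.3.20'
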
[19:01] Is there some way of telling amulet to deploy cs:precise/storage under trusty? Since I need to relate the subordinate to a trusty service?
[19:06] stub: you'll need to make a local copy
[19:06] :-P
[19:06] cd ~/charms/trusty && charm get cs:precise/storage -- that will effectively make a local trusty charm - and you'd deploy it like any other local charm.
[19:06] ymmv if deps have changed between precise/trusty
=== fuzzy_ is now known as Fuzai
[19:09] I'll try a custom branch via the charm store. I don't want to embed a copy of the storage charm in my charm just for running tests.
[19:11] Or do lp: urls work? Hmm...
[19:13] We should probably promulgate it to trusty anyway, since I think we are already running it in production :-/
[19:39] stub: a personal branch in cs is the best way
[19:51] lp: branch is working, with one less point of failure.
[19:52] deployment.add('storage', 'lp:~stub/charms/trusty/storage/trunk')
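
(lazyPower's local-copy route, fleshed out as a runnable sketch for anyone who prefers it over an lp: branch; the repository path is an example.)

    # Fetch the precise charm's code into a trusty charm directory
    mkdir -p ~/charms/trusty
    (cd ~/charms/trusty && charm get cs:precise/storage)
    # Deploy it like any other local charm
    juju deploy --repository=~/charms local:trusty/storage
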
[20:22] Hello. I am just starting to fully look into Juju charms rather than juju-core and I had a question about the charms. Would you run the charm's install script again if you wanted to update a package that was not installed via apt-get?
[20:28] PariahVi: normally there would be an upgrade hook that would do that
[20:28] thumper: Okay. Thank you. This points me in the correct path of documentation reading. :)
[20:28] np
=== roadmr is now known as roadmr_afk
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[21:07] I have a juju bootstrap that runs with a MAAS. The bootstrap looks like it completed successfully and ends with "command finished", but maas never changes the machine to a "deployed" state. Any ideas why this might be?
=== cmagina_ is now known as cmagina
=== roadmr_afk is now known as roadmr
[21:51] Hi guys. I just tried to get SSL enabled for the keystone charm. It doesn't work, and I note that there's no [ssl] section in any of the templates, nor does it appear in the keystone.conf on the target machines. Can anyone confirm that this is known to be broken?
[21:52] Obviously I've provided an SSL key and cert. The endpoint URLs show https, but that appears to be the only evidence of any attempt at SSL.
[22:08] johnmce: If no one pops in with an answer to your question you might have better luck asking on askubuntu.com, the juju mailing list, or by contacting the charm maintainer
[22:39] johnmce: the SSL stuff is handled by a charm helper, I'm not sure of the specifics of that charm as it's maintained by the Openstack Charmers team
[22:40] like hatch mentioned, mailing the mailing list (juju@lists.ubuntu.com) is a great way to get in touch with them. When I see it come in I can make sure it gets their attention
[22:47] drbidwell: try running juju bootstrap with the --debug flag
[22:47] drbidwell: also which versions of maas and juju are you using?
[22:54] marcoceppi: maas 1.7.0 and juju-core is 1.20.13 for U14.04.1
[22:54] I have the debug output also
[22:55] drbidwell: debug output is good, could you put it on http://paste.ubuntu.com
[22:55] marcoceppi: http://pastebin.com/3LDxnQyy
[23:04] tvansteenburgh: does bundletester provide facilities for archiving logs?
[23:04] tvansteenburgh: specifically the unit logs
[23:04] dpb1: no
[23:05] tvansteenburgh: k, thx
[23:05] it would be a great feature though
[23:05] would like to add it eventually
[23:06] tvansteenburgh: yes, I'd like something like that, we have a script that does the same. I might think of how to incorporate it.
[23:07] dpb1: great!
[23:09] tvansteenburgh: is tests/test.yaml used? I put some 'packages' in there and it doesn't seem to influence anything
[23:10] dpb1: yep
[23:11] tvansteenburgh: http://paste.ubuntu.com/9433320/
[23:11] tvansteenburgh: does that look right?
[23:11] yeah
[23:13] keep in mind your venv probably isn't seeing sys pkgs
[23:14] and we don't have a way to pip install via that yaml file (yet)
[23:16] so a 00-setup.sh that installs pkgs is probably the best solution when running in a venv right now
=== urulama_ is now known as urulama
[23:19] dpb1: hope that helps, i gotta EOD
[23:22] tvansteenburgh: yes, that helps
[23:22] and matches what I'm seeing
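
(A minimal sketch of the 00-setup.sh approach tvansteenburgh suggests; the package names are placeholders, and it assumes the 00- prefix makes the script run before the other tests, inside the test virtualenv.)

    #!/bin/sh
    # tests/00-setup.sh - pip-install test deps into the active virtualenv,
    # working around the yaml's lack of pip support (as of this log)
    set -e
    pip install amulet requests
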
[23:35] So, is Juju as good as it looks from the website??
[23:37] Micromus: It absolutely is.
[23:38] I don't believe it :P
[23:39] I got a full private cloud running with under 10 commands.
[23:40] I was looking to deploy cloudstack for testing, figured openstack was still immature, but canonical just released a new version quite recently, no? with a lot of juju magic integrated with it?
[23:40] Micromus: You should, that's the future of things. IoT, Cloud, yada yada
[23:40] You can install Openstack on a single system in under an hour
[23:40] Sure, installing something is one thing, but maintaining and using it is another
[23:41] Micromus: openstack is deployed on hundreds of thousands of systems, if not millions
[23:41] I don't find clicking and dragging to be terribly difficult to manage.
[23:41] doesn't mean it's a fit for us though
[23:41] oO
[23:42] hehe, I just spent several days trying to configure a Ceph cluster, and giving up in the end, so everything that shines is not gold
[23:42] I have not found a solution more efficient, powerful, reliable and FREE than OpenStack and Juju
[23:42] that said, I'm really looking forward to trying the new ubuntu openstack "thingie"
[23:43] What is your background with computing?
[23:43] I bought 9 used server blades on ebay privately to start testing cloud stuff
[23:43] about 10 years of network and system administration
[23:43] There is a lot of research and knowledge involved; as nice as I made it sound, you do have to have a wide range of system knowledge
[23:44] Oh okay, anything on the Unix / Linux side? Most of it can be handled through ssh
[23:44] mostly networking, but networks often need some servers and other stuff to have a purpose
[23:44] It would be most beneficial to have a strong background in Linux type systems.
[23:44] For troubleshooting at least.
[23:44] run 50 linux servers yes
[23:44] Ah no worries then.
[23:45] Micromus: Now your type of deployment is different if you've got 9 blades you're going to use for it.
[23:45] familiar with googling and troubleshooting, even if that is not what i enjoy spending my time doing
[23:45] I'll probably use 3 of the blades for testing openstack
[23:45] Micromus: So you would most likely want to use MaaS and manage them through landscape or something.
[23:46] Micromus: Specs on those blades pretty good?
[23:46] At work we are looking to replace VMware, so the more of our new services, and legacy stuff, we can get on a "cloud" solution the better
[23:47] nah, bought used on ebay for $2500, combined :P
[23:47] That would be a great replacement, Juju and openstack platform
[23:47] 48gb ram, 2x1tb disk, 6 core cpu
[23:47] Total? or each?
[23:47] Each
[23:47] You bought 9 blades with those specs for 2500?
[23:47] nice
[23:47] Yep :D
[23:47] You lucky...
[23:48] Shipped them with container to norway, installed in the basement
[23:48] Seems like your bottleneck is the HDDs
[23:48] probably, there are 2 free slots for each blade tho
=== kadams54 is now known as kadams54-away
[23:48] Not to be too blunt, but at least in my particular experience it is very heavy on the IOPS
[23:49] What is?
[23:49] Openstack Platform and such, especially Juju
[23:49] When deploying stuff, or continuously?
[23:49] Raid 1 at minimum; raid 5, 6 or 10
[23:51] I just deployed a 3 node cluster of HBase this weekend, using Ambari for deployment, very good stuff
[23:51] In general, but it depends on the number of users you'll have; for testing it may seem fine, but add a few users and, depending on network congestion, cpu, ram and hdd utilization at the time, you'll find it will slow down considerably
[23:51] Yep, that's why it's good to have some hardware to play with and set stuff up and test actual workloads on
[23:52] Before we go buy hardware for production, without knowing what to optimize for
[23:52] My testing environment isn't the best, but after establishing performance baselines with expected vs actual I have come to that conclusion
[23:52] Are you using MaaS at all?
[23:52] Never heard of MaaS
[23:53] Are you using Ubuntu or debian?
[23:53] debian today
[23:53] and centos6 for the ambari/hadoop cluster, since debian is not supported yet
[23:53] I use Ubuntu but I suppose it doesn't matter. https://maas.ubuntu.com/ You should check it out. MaaS is very, very nice and Juju can orchestrate it very well
[23:54] I like the review of Charms etc
[23:55] I believe one of the problems with FOSS is lack of accountability, and actual review of stuff before it is "passed on"
[23:55] The only gripe I have about Juju is that there are far more charms and bundles than what show up when "Searching"
[23:55] So as a receiver/consumer of FOSS products, you get a lot of surprises on docs, quality, whatnot
[23:55] Micromus: Accountability? Why?
[23:56] What I mean is, there are too many morons releasing too much crap
[23:56] Only add PPAs that are stable.
[23:56] Making it hard to actually believe when something great, like Juju seems to be, actually appears :)
[23:57] I will definitely look at maas/juju/ubuntu openstack for our new DC deployment, which I hope will get budgeted for next year
[23:58] Juju is wonderful, simple and easy. Juju along with the associated platform/platforms requires a significant amount of knowledge about not only their software products but in every area.
[23:58] Also seeing a fairly large, and very welcoming, irc channel is also a huuge plus for any product/ecosystem
[23:59] Micromus: The Canonical Dist for OpenStack, http://www.ubuntu.com/download/cloud/install-ubuntu-openstack
[23:59] Indeed, that is why it is such a big commitment to go down one road, with regards to associated platforms etc
[23:59] And why it's so important that the system can be set up for testing in a minimal amount of time, since maybe one has to test multiple alternatives
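
(For readers in Micromus's position: a minimal sketch of pointing juju 1.x at a fresh MAAS install; the server address and API key are placeholders, and the stanza keys follow the 1.x maas provider.)

    # Write a boilerplate ~/.juju/environments.yaml to edit
    juju generate-config
    # In the maas stanza, fill in your own values:
    #   type: maas
    #   maas-server: 'http://192.168.1.2/MAAS/'
    #   maas-oauth: '<MAAS-API-KEY>'
    juju switch maas
    juju bootstrap --debug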