[09:46] <stub> where do bugs on jujucharms.com go now?
[12:08] <marcoceppi> stub: https://github.com/CanonicalLtd/jujucharms.com
[12:10] <stub> marcoceppi: Ta. I also found the charmworld project on Launchpad, configured for accepting bugs, so added it there.
[12:10] <marcoceppi> stub: charmworld was the old jujucharms.com
[12:11] <marcoceppi> it's switched teams and what not
[12:11] <stub> marcoceppi: More bugs to file then :) It is still linked in the footer, and the LP project should point to the external bug tracker.
[12:11]  * stub files bugs
[12:13] <marcoceppi> stub: good point, thanks!
[12:45] <ejat> hi all
[12:45] <ejat> http://paste.ubuntu.com/9426602/
[12:45] <ejat> can some one help
[12:52] <marcoceppi> ejat: have you changed anything in the environments.yaml
[12:58] <ejat> marcoceppi : nope
[12:58] <marcoceppi> ejat: fwiw, 1.20.14 was released recently
[12:58] <ejat> ok ..
[12:59] <marcoceppi> errr
[12:59] <marcoceppi> sorry, is being proposed for release
[12:59] <ejat> let me try update 1st
[12:59] <marcoceppi> I don't see anything regarding that in the bugfix log though
[12:59] <marcoceppi> one sec
[12:59] <ejat> ok ..
[13:00] <ejat> marcoceppi : another thing ... does juju jitsu still work ?
[13:00] <ejat> to export and import to another environment ?
[13:01] <marcoceppi> jitsu hasn't worked for over a year, that's for juju < 0.7
[13:01] <ejat> opss .. my bad ..
[13:01] <marcoceppi> ejat: you can still export an environment, it's called bundles
[13:01] <ejat> so that means I need to load the bundle file using juju-gui ?
[13:01] <marcoceppi> ejat: yeah, that's the new "export/import" feature
[13:02] <ejat> need to bring up juju-gui in the other environment first, then import the bundle ?
[13:02] <ejat> brb...
[13:04] <ejat> marcoceppi: so now .. no automatic way to transfer the bundle to another environment?
[13:04] <marcoceppi> ejat: no
[13:05] <ejat> ok noted :)
[13:05]  * ejat catching up things back 
[13:13] <ejat> marcoceppi: im using 1.21-beta3-utopic-i386
[13:15] <ejat> should i file a bug?
[13:23] <ejat> sinzui : thanks
[13:23] <ejat> marcoceppi: http://curtis.hovey.name/2014/06/12/migrating-juju-to-hp-clouds-horizon/
[13:24] <ejat> its because of the region .... poor me ..
[16:11] <icerain_> hi there
[16:11] <icerain_> does anyone have experience with setting up openstack via juju
[16:13] <marcoceppi> icerain_: a bit, what's up?
[16:14] <icerain_> we try to set up an openstack using the multiinstall script
[16:14] <icerain_> but the process always gets stuck when bootstrapping juju
[16:15] <icerain_> we always get permission denied because of the ssh keys
[16:17] <icerain_> first we set up a ubuntu server and then a maas
[16:17] <icerain_> commissioned a node for juju
[16:17] <icerain_> and thats it
[16:18] <icerain_> "Remote host authentication has changed" is the error message
[16:22] <icerain_> actually i think we made an error between installing maas and juju
[16:23] <icerain_> we have 6 nics per node and use eth4 for external and eth5 for internal communication
[16:24] <marcoceppi> icerain_: I think it might be best if you summarize your setup, what you've done so far, and the errors you're getting, and either post on http://askubuntu.com or email them to juju@lists.ubuntu.com
[16:25] <marcoceppi> I'm not too experienced with openstack and juju, but by having them recorded I can help find the people who can answer your questions
[16:25] <icerain_> kk, thank u anyway
[17:24] <hackedbellini> hi! Can someone help me with a little problem? I have a relation between one unit and a postgresql one. The machine postgresql is on changed IP. Juju recognized it, but the relation is still getting the old one, making that service update its config file to the old connection IP
[17:24] <hackedbellini> how can I force the relation to receive that new ip?
[17:26] <lazyPower> hackedbellini: that's dependent on how it's receiving the address - is it looking at relation-get private-address?
[17:28] <hackedbellini> lazyPower: let me take a look at the charm code, just a sec
[17:29] <hackedbellini> lazyPower: it's using relation-get
[17:29] <lazyPower> hackedbellini: which provider is this?
[17:30] <lazyPower> hackedbellini: and juju version would be helpful as well
[17:31] <hackedbellini> juju version 1.20.1
[17:31] <hackedbellini> provider? It's a relation between postgresql (last charm version) and gerrit (a local one I cloned before canonical-ci became private)
[17:32] <hackedbellini> lazyPower: it gets the hostname by using charmhelpers relation_get: relation_get('host', rid=relid, unit=unit)
[17:33] <lazyPower> hackedbellini: ah, that sounds like it's using a cached config that's being sent over the wire - and it may be sending a full postgresql connection string - i'm not 100% familiar - are you in a debug-hooks session and can verify?
[17:33] <lazyPower> if you are - just running `relation-get` in the relationship context will give you the full output of what's coming over the wire, and give us a good frame of reference for debugging
[17:33] <lazyPower> to isolate if this is a juju bug or a postgresql charm bug
[17:34] <hackedbellini> lazyPower: how can I enter a debug-hooks session?
[17:34] <lazyPower> hackedbellini: juju debug-hooks service/#
[17:34] <lazyPower> this will place you in a tmux session - you'll want to open another terminal and issue the command `juju resolved --retry service/#`
[17:35] <lazyPower> assuming it was the relationship hook that failed - it should place you in the context of that relationship hook.
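lazyPower's debug-hooks flow, sketched as a minimal session; the unit names (gerrit/0, postgresql/0) and the relation are placeholders, not from the log, and the commands assume a live juju 1.x environment:

```shell
# Terminal 1: attach a debug-hooks session to the unit (drops you into tmux)
juju debug-hooks gerrit/0

# Terminal 2: re-queue the failed hook so it executes inside that session
juju resolved --retry gerrit/0

# Back in the tmux session, now inside the relation hook's context,
# dump everything the remote unit is publishing over the wire:
relation-get - postgresql/0
```

If no hook has actually failed there is nothing to retry, which is why the conversation moves on to other approaches.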
[17:37] <hackedbellini> lazyPower: it's not failing the service, unfortunately
[17:37] <lazyPower> hackedbellini: ok - so we have to go a bit deeper - do you have time to run through another deployment of your service that's consuming postgres?
[17:38] <hackedbellini> lazyPower: what do you mean?
[17:39] <lazyPower> hackedbellini: adding another unit - attaching a debug-hooks session to that unit that's under deployment to obtain the data.
[17:39] <lazyPower> actually 1 moment, there may be a quicker way to do this
[17:39] <lazyPower> let me check the manpages
[17:46] <lazyPower> hackedbellini: looking over the code it appears this is coming from some kind of cached config - i'm having trouble locating where master_host is getting set as it's a parameter, but that's what's being assigned to the host= var
[17:47] <lazyPower> hackedbellini: and this is related to postgresql clustering - it appears it does an internal quorum and sets this ip - which is the culprit - so it's a bug against the postgresql charm
[17:48] <hackedbellini> lazyPower: hrmmm, I see. I'll take a look at the postgresql charm too to see if I can help you find where it's storing the cache
[17:48] <lazyPower> hackedbellini: can you file a bug against the postgresql charm about this? as I can foresee this being a major thorn in the side of any postgresql admins that experience an outage.
[17:49] <lazyPower> if we can't get to it, i bet stub can iron this out in an iteration or two
[17:53] <hackedbellini> lazyPower: now that I look, it's not only that... postgresql still has gerrit's old ip in its pg_hba
[17:53] <stub> If changing the machine's ip does anything, it will call the config-changed hook. I don't think relation-changed hooks get triggered, so the new ip will never be published
[17:54] <stub> db_relation_joined_changed is calling hookenv.unit_private_ip() to get the ip, so it isn't cached afaict.
[17:54] <lazyPower> ah ok
[17:55] <hackedbellini> stub: so what can I do in this situation to force postgresql and gerrit to get their new ips?
[17:55] <lazyPower> i didn't get very deep in the code - i'm multi-tasking this, sorry about the red herring - your analysis is sound stub.
[17:55] <lazyPower> i can see that being the case.
[17:55] <stub> I may be out of date though. Last I heard, you can't change a unit's ip address for any service.
[17:56] <hackedbellini> stub: really?
[17:56] <stub> hackedbellini: like I said - I might be out of date.
[17:56] <stub> if you change the relation at the client end, it will invoke the server's relation-changed hook and the new host will be published.
[17:57] <hackedbellini> stub: hrm ok. But well, some ips changed =P. How can I change the "cache" that's holding those ips?
[17:57] <jcsackett> lazyPower: hey man, do you happen to know of any way to retrieve an environment password if you've lost your jenv files &c? this is on a machine i still have non-juju ssh access to (manual provider).
[17:57] <stub> juju run --unit=client/0 relation-set -r db:42 something=whatever
[17:57] <lazyPower> jcsackett: oo good question - I think it's stored in mongodb but i don't have the foggiest idea where that document would be.
[17:57] <lazyPower> jcsackett: and i would imagine it's salted in the database
[17:57] <hackedbellini> stub: hrmmmm, lets try that
[17:58] <stub> hackedbellini: Or you can just explicitly set the host attribute on the relation using juju run too
[17:58] <jcsackett> lazyPower: yeah, that's what i was afraid of. i have a whole owncloud and other stuff setup i don't want to get rid of, but i now have no juju access to it...had an HD die a few weeks ago and just now realized that juju env files weren't part of my backup...
[17:59] <stub> If I am out of date and you can change a unit's ip address after deployment, please file a bug :)
[17:59] <lazyPower> jcsackett: Really sorry to hear that - a core dev might be able to help you recover that info though.
[17:59] <hackedbellini> stub: the "-r db:42". What does that mean?
[17:59] <jcsackett> lazyPower: good point. i'll bug core. thanks.
[18:00] <hackedbellini> stub: juju recognized the ip change, but the relation is still getting the old one... maybe I need to run relation-set
[18:00] <hackedbellini> it shouldn't, but it's an acceptable solution atm =P
[18:00] <stub> hackedbellini: db:42 is an example relation id. 'juju run --unit=client/0 relation-ids db' or 'db-admin' will list them
[18:01] <hackedbellini> stub: hrmmm, nice! Let me try that and I'll give you feedback to say if it worked
[18:02] <stub> hackedbellini: Worst case, you can use juju run like this to override any values in the relation you like... just try not to blow your foot off :)
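stub's juju run recipe, spelled out end to end; the unit name, relation id, and IP below are placeholders for illustration, and the commands need a live juju 1.x environment:

```shell
# Find the relation id for the db interface from the client unit's view
juju run --unit=gerrit/0 'relation-ids db'
# prints a relation id such as db:42

# Override the published host on that relation; this fires the other
# side's relation-changed hook, so the new address propagates
juju run --unit=gerrit/0 'relation-set -r db:42 host=10.0.3.15'
```

As stub warns, this writes straight into the relation data, so double-check the key name (host here) against what the charm actually reads before setting it.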
[18:02] <lazyPower> stub: you just dropped some science on me about -r relation:id
[18:04] <stub> lazyPower: Just test before repeating - I'm knee deep in something else and doing this by memory ;)
[18:05] <stub> launchpad.net/juju-relinfo if you want a plugin to reduce the typing for the relation-get/relation-sets
[18:06] <hackedbellini> stub: I think it worked! Thank you!
[18:06] <hackedbellini> and thank you lazyPower for taking the time to see that with me
[18:06] <stub> np
[18:06] <lazyPower> hackedbellini: no worries :) I'm happy we didn't traverse the original route i was proposing
[18:06] <lazyPower> that would have been long-winded to determine blame
[19:01] <stub> Is there some way of telling amulet to deploy cs:precise/storage under trusty? Since I need to relate the subordinate to a trusty service?
[19:06] <lazyPower> stub: you'll need to make a local copy
[19:06] <stub> :-P
[19:06] <lazyPower> cd ~/charms/trusty && charm get cs:precise/storage -- that will effectively make a local trusty charm - and you'd deploy it like any other local charm.
[19:06] <lazyPower> ymmv if deps have changed between precise/trusty
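lazyPower's local-copy workaround as a sketch; the repository path and deploy step are assumptions based on the standard juju 1.x local charm repository layout:

```shell
# Grab the precise charm into a trusty series directory...
mkdir -p ~/charms/trusty
cd ~/charms/trusty
charm get cs:precise/storage

# ...then deploy it like any other local trusty charm
juju deploy --repository=~/charms local:trusty/storage
```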
[19:09] <stub> I'll try a custom branch via the charm store. I don't want to embed a copy of the storage charm in my charm just for running tests.
[19:11] <stub> Or do lp: urls work? Hmm...
[19:13] <stub> We should probably promulgate it to trusty anyway, since I think we are already running it in production :-/
[19:39] <marcoceppi> stub: personal branch in cs is best way
[19:51] <stub> lp: branch is working, with one less point of failure.
[19:52] <stub>         deployment.add('storage', 'lp:~stub/charms/trusty/storage/trunk')
[20:22] <PariahVi> Hello.  I am just starting to fully look into Juju charms rather than Juju-core and I had a question about the charms.  Would you run the charm's install script again if you wanted to update the package if it was not installed via apt-get?
[20:28] <thumper> PariahVi: normally there would be an upgrade hook that would do that
[20:28] <PariahVi> thumper: Okay.  Thank you.  This points me in the correct path of documentation reading. :)
[20:28] <thumper> np
[21:07] <drbidwell> I have a juju bootstrap that runs with a MAAS.  The bootstrap looks like it completed successfully and ends with "command finished", but maas never changes the machine to a "deployed" state.  Any ideas why this might be?
[21:51] <johnmce> Hi guys. I just tried to get SSL enabled for the keystone charm. It doesn't work, and I note that there's no [ssl] section in any of the templates, nor does it appear in the keystone.conf on the target machines. Can anyone confirm that this is known to be broken?
[21:52] <johnmce> Obviously I've provided an SSL key and cert. The endpoint URLs show https, but that appears to be the only evidence of any attempt at SSL.
[22:08] <hatch> johnmce: If no one pops in with an answer to your question you might have better luck asking on askubuntu.com, the juju mailing list, or by contacting the charm maintainer
[22:39] <marcoceppi> johnmce: the SSL stuff is handled by a charm helper, I'm not sure the specifics of that charm as it's maintained by the Openstack Charmers team
[22:40] <marcoceppi> like hatch mentioned mailing the mailing list (juju@lists.ubuntu.com) is a great way to get in touch with them. When I see it come in I can make sure it gets their attention
[22:47] <marcoceppi> drbidwell: try running juju bootstrap with the --debug flag
[22:47] <marcoceppi> drbidwell: also which versions of maas and juju are you using?
[22:54] <drbidwell> marcoceppi: maas 1.7.0 and juju-core is 1.20.13 for U14.04.1
[22:54] <drbidwell> I have the debug output also
[22:55] <marcoceppi> drbidwell: debug output is good, could you put it on http://paste.ubuntu.com
[22:55] <drbidwell> marcoceppi: http://pastebin.com/3LDxnQyy
[23:04] <dpb1> tvansteenburgh: does bundletester provide facilities for archiving logs?
[23:04] <dpb1> tvansteenburgh: specifically the unit logs
[23:04] <tvansteenburgh> dpb1: no
[23:05] <dpb1> tvansteenburgh: k, thx
[23:05] <tvansteenburgh> it would be a great feature though
[23:05] <tvansteenburgh> would like to add it eventually
[23:06] <dpb1> tvansteenburgh: yes, I'd like something like that, we have a script that does the same.  I might think of how to incorporate it.
[23:07] <tvansteenburgh> dpb1: great!
[23:09] <dpb1> tvansteenburgh: is tests/test.yaml used?  I put some 'packages' in there and it doesn't seem to influence anything
[23:10] <tvansteenburgh> dpb1: yep
[23:11] <dpb1> tvansteenburgh: http://paste.ubuntu.com/9433320/
[23:11] <dpb1> tvansteenburgh: does that look right?
[23:11] <tvansteenburgh> yeah
[23:13] <tvansteenburgh> keep in mind your venv probably isn't seeing sys pkgs
[23:14] <tvansteenburgh> and we don't have a way to pip install via that yaml file (yet)
[23:16] <tvansteenburgh> so a 00-setup.sh that installs pkgs is probably the best solution when running in a venv right now
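A minimal tests/00-setup.sh along the lines tvansteenburgh suggests; the package names are placeholders for whatever the tests actually import, and the script assumes an Ubuntu host with sudo:

```shell
#!/bin/bash
# tests/00-setup.sh - run by bundletester before the test files.
# Installs dependencies the venv won't pick up from system packages.
set -e
sudo apt-get update
sudo apt-get install -y python-pip
pip install amulet requests  # placeholder deps
```

Make it executable (chmod +x tests/00-setup.sh) so the test runner can invoke it.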
[23:19] <tvansteenburgh> dpb1: hope that helps, i gotta EOD
[23:22] <dpb1> tvansteenburgh: yes, that helps
[23:22] <dpb1> and matches what I'm seeing
[23:35] <Micromus> So, is Juju as good as it looks from the website??
[23:37] <LinStatSDR> Micromus: It absolutely is.
[23:38] <Micromus> I don't believe it :P
[23:39] <LinStatSDR> I got a full private cloud running with under 10 commands.
[23:40] <Micromus> I was looking to deploy cloudstack for testing, figured openstack was still immature, but canonical just released a new version quite recently, no? with a lot of juju magic integrated with it?
[23:40] <LinStatSDR> Micromus: You should, that's the future of things. IoT, Cloud yada yada
[23:40] <LinStatSDR> You can install Openstack on a single system in under an hour
[23:40] <Micromus> Sure, installing something is one thing, but maintaining and using it is another
[23:41] <sarnold> Micromus: openstack is deployed on hundreds of thousands of systems, if not millions
[23:41] <LinStatSDR> I don't find clicking and dragging to be terribly difficult to manage.
[23:41] <Micromus> doesn't mean it's a fit for us though
[23:41] <LinStatSDR> oO
[23:42] <Micromus> hehe, I just spent several days trying to configure a Ceph cluster, and giving up in the end, so everything that shines is not gold
[23:42] <LinStatSDR> I have not found a solution such as OpenStack and Juju that is more efficient, powerful, reliable and FREE than said solutions
[23:42] <Micromus> that said, I'm really looking forward to trying the new ubuntu openstack "thingie"
[23:43] <LinStatSDR> What is your background with computing?
[23:43] <Micromus> I bought 9 used serverblades on ebay privately to start testing cloud stuff
[23:43] <Micromus> about 10 years of network and system administration
[23:43] <LinStatSDR> There is a lot of research and knowledge, as nice as I made it sound you do have to have a wide range of system knowledge
[23:44] <LinStatSDR> Oh okay, anything on the Unix / Linux side? Most of it can be handled through ssh
[23:44] <Micromus> mostly networking, but networks often need some servers and other stuff to have a purpose
[23:44] <LinStatSDR> It would be most beneficial to have a strong background in Linux type systems.
[23:44] <LinStatSDR> For troubleshooting at least.
[23:44] <Micromus> run 50 linux servers yes
[23:44] <LinStatSDR> Ah no worries then.
[23:45] <LinStatSDR> Micromus: Now your type of deployment is different if you got 9 blades you're going to use for it.
[23:45] <Micromus> familiar with googling and troubleshooting, even if that is not what i enjoy spending my time doing
[23:45] <Micromus> I'll probably use 3 of the blades for testing openstack
[23:45] <LinStatSDR> Micromus: So you would most likely want to use MaaS and manage them through landscape or something.
[23:46] <LinStatSDR> Micromus: Specs on those blades pretty good?
[23:46] <Micromus> At work we are looking to replace VMware, so the more of our new services, and legacy stuff, we can get on a "cloud" solution the better
[23:47] <Micromus> nah, bought used on ebay for $2500, combined :P
[23:47] <LinStatSDR> That would be a great replacement, Juju and openstack platform
[23:47] <Micromus> 48gb ram, 2x1tb disk, 6 core cpu
[23:47] <LinStatSDR> Total? or each?
[23:47] <Micromus> Each
[23:47] <LinStatSDR> You bought 9 blades with those specs 2500?
[23:47] <sarnold> nice
[23:47] <Micromus> Yep :D
[23:47] <LinStatSDR> You lucky...
[23:48] <Micromus> Shipped them with container to norway, installed in the basement
[23:48] <LinStatSDR> Seems like your bottleneck is the HDDs
[23:48] <Micromus> probably, there are 2 free slots for each blade tho
[23:48] <LinStatSDR> Not to be too blunt, but at least in my particular experience it is very heavy on the IOPS
[23:49] <Micromus> What is?
[23:49] <LinStatSDR> Openstack Platform and such, especially Juju
[23:49] <Micromus> When deploying stuff, or continuously?
[23:49] <LinStatSDR> RAID 1 at minimum; RAID 5, 6 or 10
[23:51] <Micromus> I just deployed a 3 node cluster of HBase over the weekend, using Ambari for deployment, very good stuff
[23:51] <LinStatSDR> In general, but it depends on the number of users you'll have. For testing it may seem fine, but add a few users and, depending on network congestion, cpu, ram and hdd utilization at the time, you'll find it will slow down considerably
[23:51] <Micromus> Yep, thats why it's good to have some hardware to play with and set stuff up and test actual workloads on
[23:52] <Micromus> Before we go buy hardware for production, without knowing what to optimize for
[23:52] <LinStatSDR> My testing environment isn't the best but after establishing performance baselines with expected vs actual I have come to that conclusion
[23:52] <LinStatSDR> Are you using MaaS at all?
[23:52] <Micromus> Never heard of MaaS
[23:53] <LinStatSDR> Are you using Ubuntu or debian?
[23:53] <Micromus> debian today
[23:53] <Micromus> and centos6 for ambari/hadoop-cluster, since debian is not supported yet
[23:53] <LinStatSDR> I use Ubuntu but I suppose it doesn't matter. https://maas.ubuntu.com/ You should check it out. MaaS is very, very nice and Juju can orchestrate that very well
[23:54] <Micromus> I like the review of Charms etc
[23:55] <Micromus> I believe one of the problems with FOSS is lack of accountability, and actual review of stuff before it is "passed on"
[23:55] <LinStatSDR> The only gripe I have about Juju is that there are far more charms and bundles than what show up when "Searching"
[23:55] <Micromus> So as a receiver/consumer of FOSS products, you get a lot of surprises on docs, quality, whatnot
[23:55] <LinStatSDR> Micromus: Accountability? Why?
[23:56] <Micromus> What I mean is, there are too many morons releasing too much crap
[23:56] <LinStatSDR> Only add PPA's that are stable.
[23:56] <Micromus> Making it hard to actually believe when something great, like Juju seems to be, actually appears :)
[23:57] <Micromus> I will definitely look at maas/juju/ubuntu openstack for our new DC deployment which I hope will get budgeted for next year
[23:58] <LinStatSDR> Juju is wonderful, simple and easy. Juju along with the associated platform/platforms requires a significant amount of knowledge about not only their software products but in every area.
[23:58] <Micromus> Seeing a fairly large, and very welcoming, irc channel is also a huge plus for any product/ecosystem
[23:59] <LinStatSDR> Micromus: The Canonical Dist for OpenStack, http://www.ubuntu.com/download/cloud/install-ubuntu-openstack
[23:59] <Micromus> Indeed, that is why it is such a big commitment to go down one road, with regards to associated platforms etc
[23:59] <Micromus> And why it's so important that the system can be set up for testing in a minimal amount of time, since maybe one has to test multiple alternatives