[00:03] <hatch> danob you can use lxc's
[00:03] <hatch> deploy the charm using the 'local' environment
[00:04] <danob> hatch: i am using lxc's :)
[00:04] <hatch> ohh, well then :)
[00:04] <hatch> so do you just want to test the individual scripts?
[00:12] <marcoceppi> danob: what's your end goal?
[00:17] <danob> hatch: to test individual scripts i am reading about juju debug-hooks now, will it do?
[00:17] <hatch> that will allow you to watch the debug logs while the hooks are executing
[00:17] <hatch> but like marcoceppi asked....what is your end goal? What are you trying to achieve here?
[00:17] <marcoceppi> that will allow you to execute commands as if you were that hook*
[00:20] <danob> marcoceppi: my end goal is to test and debug my charm as if i were just running a python script. i just want to make charm testing and debugging simple for me.
[00:20] <hatch> er yes and that too....sorry :)
[00:20] <marcoceppi> danob: well, we have two ways to do that. One is to simply unit test your hooks. So if you wrote them in Python you'll want to use the unittest module to test your python code as you would any python project
[00:21] <danob> marcoceppi: what is the best workflow for this (test/debug)?
[00:21] <marcoceppi> danob: the other is to write charm tests, however those are designed as integration tests, which will actually deploy the charm, and other charms
[00:21] <marcoceppi> danob: debug-hooks is not what you want, that's something different
[00:21] <hatch> danob I test mine as individual scripts
[00:21] <hatch> just fyi
[00:22] <marcoceppi> What you want is unittests, and that depends entirely on the language of the charm
[00:33] <danob> marcoceppi: "the other is to write charm tests" like python-django charm?
[00:34] <marcoceppi> danob: there are quite a few charms that have charm tests
[00:35] <marcoceppi> danob: python-django is one example
[00:37] <marcoceppi> danob: memcached is another example, https://bazaar.launchpad.net/~charmers/charms/precise/memcached/trunk/files/head:/tests/
[00:38] <danob> marcoceppi: hmm thanks :)
[00:40] <danob> marcoceppi: "One is to simply unit test your hooks" in this way can i run config-get in subprocess.call ?
[00:40] <danob> marcoceppi: i think not, right?
[00:40] <marcoceppi> danob: no, you can't, you would mock calls to config-get and the other hook commands
[00:40] <marcoceppi> you can't run those commands unless the service is deployed in a juju context
[00:41] <marcoceppi> Those only exist in the context of a deployed service in a juju environment
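The mocking approach marcoceppi describes can be sketched roughly like this (the helper name, config key, and values are hypothetical, made up for illustration; config-get itself only exists on a deployed unit, so the subprocess call is mocked):

```python
# Sketch of unit-testing a Python hook helper outside a juju context.
# The config-get hook tool is unavailable off-unit, so its subprocess
# call is replaced with a mock.
import json
import subprocess
from unittest import mock


def get_config(key):
    """Read one charm config value via the config-get hook tool."""
    out = subprocess.check_output(["config-get", key, "--format=json"])
    return json.loads(out)


def test_get_config_is_mockable():
    # Pretend juju answered with the JSON string "8080".
    with mock.patch("subprocess.check_output", return_value=b'"8080"') as co:
        assert get_config("port") == "8080"
        co.assert_called_once_with(["config-get", "port", "--format=json"])


test_get_config_is_mockable()
```

The same pattern would cover relation-get, unit-get, and the other hook tools.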
[00:43] <danob> marcoceppi: hmm i understand
[00:46] <danob> marcoceppi: is it possible to develop an emulator to emulate a juju context/environment? I would be happy to contribute to that project :)
[00:46] <marcoceppi> I have considered it, but it's not a priority at the moment. You're welcome to try
[00:57] <danob> marcoceppi: hmm, then where can I get full information about how the juju context/environment is created, or how a charm is deployed, step by step with source code references? I'm not going to start the project right away, I just want to understand it deeply
[00:58] <marcoceppi> danob: I mean, we have docs here and there, have you looked at our documentation? http://juju.ubuntu.com/docs
[00:58] <danob> marcoceppi: yes.
[00:59] <marcoceppi> that's pretty much it. if you want to see what the environment looks like, deploy a charm, run juju debug-hooks, then initiate a config change with `juju set <service> key=val`; when the hook gets trapped, you can run config-get, type env to see the environment, etc
[01:02] <danob> marcoceppi: hmm thanks :)
[03:09] <milk> could anyone tell me how to promote a mysql-slave to master using juju?
[03:48] <marcoceppi> milk: remove the slave relation, then recreate the relation with the slave as the master
[03:51] <milk> marcoceppi: there are two units in the mysql-slave service. i want to promote one of them to master and keep the other as slave.
[03:51] <marcoceppi> milk: in short, you can't
[03:52] <milk> hmm....
[03:52] <marcoceppi> those are the units of a service group, so what you could do is deploy a new mysql service as mysql-master, then scale the slave down, then create the relation
[03:53] <milk> but the data would be destroyed.
[03:54] <marcoceppi> milk: stand up the new mysql-master, make it a slave of the mysql-slave, have it sync, break the relation, re-establish it as master to the slaves, verify the data is there, scale down the mysql-slave service group
[03:54] <marcoceppi> or just leave it where the new "mysql-master" is the slave
[03:54] <marcoceppi> and scale down the slave group to only have one
[03:58] <milk> thanks, but it will take too much time for the failover to finish..
[03:58] <milk> i think this single-master-multiple-slave scenario is a pretty common one.
[03:59] <milk> but juju's model doesn't seem to handle it properly...
[03:59] <milk> :)
[04:07] <marcoceppi> it handles it, just differently than you'd expect
[04:07] <marcoceppi> having master - master replication is a better model embodied in juju
[04:09] <marcoceppi> milk: I've been working on charming this http://www.proxysql.com/ which would make the underlying questions of which is master, which is slave, and which do I fail over to easier to handle in juju
[04:14] <milk> marcoceppi: i agree, multi-master databases (like riak) can fit juju's model easily.
[04:15] <milk> marcoceppi: so, with proxysql, we should put both master and slave into a single service, and let the proxy do the rest of the work?
[04:16] <milk> marcoceppi: and the failover will happen in that service, instead of between two services?( mysql-master and mysql-slave)
[04:16] <marcoceppi> milk: yes, using the same relation schema, proxysql would know how to failover. Charms would use proxysql as a single point of contact, and it'd be configured to fail over to the slave and promote the slave as master directly
[04:17] <milk> marcoceppi: so the proxysql would be a subordinate service deployed with the mysql server?
[04:19] <marcoceppi> milk: probably, yes
[04:19] <marcoceppi> milk: could also be a subordinate deployed on the actual application servers
[04:19] <marcoceppi> or both
[04:19] <marcoceppi> proxysql is kind of...flexible in how it can be deployed
[04:19] <milk> marcoceppi: or as a separate principal service..
[04:20] <marcoceppi> that as well
[04:20] <lazyPower> marcoceppi: good news, amulet installs in precise.
[04:20] <milk> marcoceppi: agree. seems no difference.
[04:21] <marcoceppi> lazyPower: with the pkg-test ppa?
[04:21] <lazyPower> I did add that, yes.
[04:21] <marcoceppi> lazyPower: sorry, apparently I killed the wrong process and display server died, going to have to reboot
[04:21] <lazyPower> nbd
[04:22] <lazyPower> do i need to nuke the vm and retry from the juju ppa?
[04:22] <lazyPower> or is it there yet?
[04:23] <milk> marcoceppi: thanks for your time :)
[04:32] <marcoceppi> milk: np! let me know if you have any other questions
[13:31] <tomixxx3> hi, when i try to enter "instances & volumes" in the openstack dashboard, i only get an "Internal Server Error" message. However, i have not deployed "nova-volume". do i need "nova-volume" for this functionality? the same error comes when i try to open "images & snapshots"
[13:40] <marcoceppi> tomixxx3: I don't know if that's required for the dashboard to work but it might be
[13:41] <tomixxx3> marcoceppi: kk, i guess i will figure out this when i actually will USE openstack to deploy my task.
[13:41] <roadmr> tomixxx3: can you "juju ssh openstack-dashboard/0", then go to /var/log/apache2 and look at error.log? the error should give a clue as to what's wrong
[13:42] <roadmr> tomixxx3: (I *think* it's related to the django version mismatch with grizzly but I haven't looked at it further; yes, I have the same problem)
[13:43] <tomixxx3> apache log says: The request you have made requires authentication. http 401
[18:05] <bitgandtter> hello good day
[18:05] <bitgandtter> has anyone made a successful deployment of juju on rackspace?
[18:08] <bitgandtter> anyone?
[18:32] <Ming> Is there a way for juju-log to log as ERROR?
[18:51] <danob> what would be the best practice if I want to download a *.tar.gz file from a charm's install python script?
[18:52] <hatch> danob in my latest charm review it was suggested to package it in the charm
[18:53] <hatch> danob but in the current version this is how my Ghost charm does it https://github.com/hatched/ghost-charm/blob/master/hooks/install
[18:54] <danob> hatch: thanks man :) is there any size restrictions in charm store?
[18:54] <danob> hatch: or in juju env?
[18:54] <hatch> not as far as I know...but if it's excessive it might be rejected in review
[18:55] <hatch> I just say 'might' because if it was 1GB I would reject it lol
[18:55]  * hatch is not a reviewer however
[18:55] <danob> hatch: lol
[18:55] <hatch> marcoceppi ^^
[19:09] <rick_h_> danob: you can always do that in a charm you write, but for best results in getting it to users it should be able to work offline
[19:09] <rick_h_> danob: many charms contain a files or releases directory that contains the downloaded file so that it can work offline
[19:10] <rick_h_> danob: I think that's policy for reviewed charms going forward.
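A minimal install-hook helper following rick_h_'s advice might look like this (the paths, URL, and function names are hypothetical, not from any real charm): prefer a tarball shipped in the charm's files/ directory so the charm works offline, and only download as a fallback.

```python
# Hypothetical sketch: resolve a tarball for an install hook, preferring
# a copy bundled under <charm>/files/ (offline-friendly, as suggested in
# review) and downloading only as a last resort.
import os
import tarfile
import urllib.request


def fetch_tarball(charm_dir, url, download_to):
    """Return a local path to the tarball named by `url`."""
    bundled = os.path.join(charm_dir, "files", os.path.basename(url))
    if os.path.exists(bundled):
        return bundled  # shipped inside the charm, no network needed
    urllib.request.urlretrieve(url, download_to)  # fallback: download it
    return download_to


def unpack(tarball, target_dir):
    """Extract a .tar.gz into target_dir."""
    with tarfile.open(tarball, "r:gz") as tar:
        tar.extractall(target_dir)
```

With the tarball committed under files/, fetch_tarball never touches the network during deploy, which is what the offline-friendly review policy is after.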
[19:11] <danob> rick_h_: hmm
[19:11] <danob> rick_h_: thanks
[19:13] <hatch> danob what charm are you writing?
[19:17] <danob> hatch: i am writing a charm which will deploy an apache2 mod along with apache2, so i need to download this mod using wget
[19:17] <danob> hatch: this mod is precompiled binary
[19:18] <hatch> ahh, did you look into the current apache2 charm to see if you could use it to add your mod? http://manage.jujucharms.com/charms/precise/apache2
[19:18] <hatch> or maybe enhance it to allow you to add custom mods?
[19:22] <danob> hatch: no, but i will. i was thinking that i would install apache2 using apt-get. i wanted my own unit like my-charm-name/0
[19:24] <hatch> on deploy you can call a service almost whatever you want
[19:24] <hatch> or you can fork the promoted apache2 charm and make your own modifications to it
[19:24] <hatch> just throwing ideas out there for ya, take them as you will :)
[19:27] <maxcan_> is there any way to configure juju to add a unit to a service when a unit dies?
[19:31] <danob> hatch: I appreciate ideas :) thanks man
[19:32] <hatch> :)
[19:41] <danob> hatch: if i deploy an apache2 charm, how do i put my mod.so file in /usr/lib/apache2/modules and other configuration files on the apache2/0 unit?
[19:44] <danob> hatch: i am confused in here
[19:45] <danob> hatch: can you point me to a charm that does this type of operation?
[19:45] <danob> if i deploy an apache2 charm then how i put may mod.so file in /usr/lib/apache2/modules and other configuration files in apathe2/0 unit?
[19:46] <danob> i am confused in here
[19:46] <danob> can you point me a charm who does this type of operation
[19:49] <lazyPower_> maxcan_: Example of what you're trying to do?
[19:49] <maxcan_> have resilience against AWS's random killing of ec2 instances
[19:50] <lazyPower_> maxcan_: ah, juju should be doing that automagically. if the environment says a service should have X units and on a pulse check it realizes it only has 1, it should spin up a replacement unit
[19:51] <maxcan_> hm, i'll go back and check
[19:51] <sarnold> lazyPower_: _really_?? cool
[19:51] <sarnold> lazyPower_: though if the billing department sees the other units are still up, that could get expensive :)
[19:51] <maxcan_> IIRC, when I killed an ec2 instance, the machine state went to terminated as did the agent-state but no new machines got spun up
[19:52] <lazyPower_> sarnold: i overheard this prior. Please feel free to correct me if i'm misinformed
[19:54] <lazyPower_> let me bootstrap and validate that statement, 1 moment maxcan_
[19:54] <maxcan_> i'm also on on old version
[19:54] <maxcan_> let me confirm it on my end.. dont want to bother you since i'm not sure
[19:54] <maxcan_> s/sure/certain
[19:54] <lazyPower_> well i just regurgitated information i read in chat, so there's no evidence aside from hearsay
[19:55] <lazyPower_> i may have read some late night conversation that's not valid - so i'll check regardless
[20:02] <lazyPower_> maxcan_: ok i'm seeing the machine terminated status on 1.17.2
[20:02] <lazyPower_> so the behavior you are seeing is by design, i was misinformed.
[20:02] <hatch> maxcan_ you are probably looking for something like what https://landscape.canonical.com/ provides
[22:26]  * timrc considers writing the charm for: https://github.com/robmerrell/hipsterdb
[22:45] <lazyPower_> timrc: do it!
[22:46] <timrc> lazyPower_, ;)
[22:50] <lazyPower> The icon for the service would have to randomly not show up though, because it became too mainstream