[01:51] grapz: yes, the method would be 'bzr branch lp:juju/docs foo ; cd foo ; edit stuff ; bzr push lp:~grapz/juju/docs/foo ; bzr lp-propose
[01:52] grapz: you can check out the 'lpad' tool that the juju team uses too, it automates a lot of the stuff
[01:52] <_mup_> juju/refactor-machine-agent r455 committed by jim.baker@canonical.com
[01:52] <_mup_> Docstrings
[02:42] is there some id that i can get that is unique to "this unit"?
[02:42] config-get
[02:42] that would be unique in the global namespace, but consistent for this unit
[02:43] SpamapS, ? (only pinging you because you might still be around due to $TZ)
[02:48] smoser: there's $JUJU_UNIT_NAME available from within a hook
[02:49] what does that look like?
[02:49] /[0-X]
[02:49] ?
[02:49] servicename/[0-9]+
[02:50] smoser: the combo of servicename and unit id will never be repeated in a single environment
[02:50] the digits (unit sequence number) are assigned by zk iirc... always increasing
[02:50] even if you destroy the service, the next deploy will create the next id higher
[02:51] hm..
[02:51] is there a completely unique id?
[02:52] smoser: that is completely unique, per environment
[02:52] yeah, but that isn't good enough
[02:52] I have thought that the environment needs a UUID
[02:52] that wouldn't work either
[02:52] so that you could be specific when destroying it
[02:52] my "this unit" needs a UUID
[02:52] the reason is that i'm poking around with things that are globally (linux) namespaced
[02:52] smoser: env UUID + unit id == unique forever
[02:53] yeah..
[02:53] but is there an env UUID?
[02:54] next question..
[02:54] how do i debug hooks?
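A hedged sketch of how a hook might turn $JUJU_UNIT_NAME (which has the servicename/[0-9]+ form described above) into a name usable in a globally namespaced system such as LVM volume groups; the fallback value and the "my-volume-group-" prefix are illustrative, not juju conventions:

```shell
#!/bin/sh
# $JUJU_UNIT_NAME looks like servicename/<unit-number>, e.g. nova-volume/7.
# "/" is not a legal character in an LVM volume group name, so map it to "-".
# The fallback value here is only so the sketch runs outside a real hook.
JUJU_UNIT_NAME="${JUJU_UNIT_NAME:-nova-volume/7}"
unit_id=$(printf '%s' "$JUJU_UNIT_NAME" | tr '/' '-')
vg_name="my-volume-group-${unit_id}"
echo "$vg_name"
```

As discussed above, this is unique per environment but not globally unique, which is why the conversation keeps circling back to an environment UUID.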
[02:54] :)
[02:54] http://askubuntu.com/questions/81818/failure-to-troubleshoot-a-juju-charm-deployment/82170#82170
[02:54] <_mup_> Bug #82170: Welcome to Ubuntu 6.10 < https://launchpad.net/bugs/82170 >
[02:54] but 'juju debug-hooks' doesn't give me $JUJU_UNIT_NAME in my environment
[02:55] smoser: because you're not in a hook yet
[02:55] smoser: make a hook execute and you will have it
[02:55] how would i do that?
[02:56] smoser: change a config setting, upgrade-charm, relate something..
[02:56] debug-hooks is a little broken in that you can't debug install
[02:56] so, let's just pretend that a crazy friend of mine was trying to get through an install
[02:57] and wanted to debug that
[03:03] smoser: what I've done is have no install hook, deploy, then call install from upgrade-charm and just iterate using upgrade-charm
[03:03] smoser: recommend installing in smaller functions... call from config-changed to debug them... then call them from install for real
[03:03] so same thing with config-changed
[03:03] yeah.
[03:03] k
[03:04] SpamapS, to be clear, there is no ENV_UUID?
[03:04] My usual upgrade-charm hook is to call hooks/stop && hooks/install && hooks/config-changed && hooks/start
[03:04] smoser: no, there is not
[03:04] couldn't find one through cursory greps through the code
[03:04] No, it's just an idea I have that it will be needed some day
[03:04] and would be useful for being careful with destroy-environment
[03:05] and ultimately could be a better replacement for control-bucket
[03:05] perhaps instance-id + unit name? even that's pretty lame
[03:05] instance-id's avail in juju status... it's hard to parse tho... gotta map service unit -> machine -> instance-id
[03:06] can i get environment-name?
[03:06] not that I know of
[03:07] smoser: what exactly is the purpose of this?
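The upgrade-charm pattern described above ("call hooks/stop && hooks/install && hooks/config-changed && hooks/start") can be sketched as follows; the stub scripts here only simulate a charm's hooks directory so the chain can be exercised standalone:

```shell
#!/bin/sh
# Simulate a charm's hooks/ directory with stubs, then run the chain that
# a hooks/upgrade-charm script built on this pattern would execute.
set -e
hooks=$(mktemp -d)
for h in stop install config-changed start; do
    printf '#!/bin/sh\necho "ran %s"\n' "$h" > "$hooks/$h"
    chmod +x "$hooks/$h"
done
# In a real charm, hooks/upgrade-charm would contain just:
#   hooks/stop && hooks/install && hooks/config-changed && hooks/start
out=$("$hooks/stop" && "$hooks/install" && "$hooks/config-changed" && "$hooks/start")
echo "$out"
```

Because every hook is idempotent-by-convention in this scheme, iterating with upgrade-charm effectively re-runs the whole lifecycle, which is what makes it usable as a debugging loop for install.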
[03:09] nova-volume creates volume groups
[03:09] volume group names are globally namespaced
[03:10] so if i have 2 units in the local provider that try to create a volume group named "my-volume" then one will fail
[03:10] shouldn't nova-volume take steps to prevent that?
[03:11] nova-volume doesn't know that these 2 things are on the same linux kernel
[03:11] OH that sounds like LXC needs to namespace them
[03:11] right, but lxc does not. they're not namespaced in the kernel
[03:11] (neither are /dev/loopX)
[03:11] smoser: I think you can get the env name from a python script that just imports juju... the environment is pretty much the only thing you can interact with without twisted
[03:11] depends on where you're trying to get it from... the cli or a charm
[03:12] i'm probably over-engineering at the moment.
[03:13] but i was willing to accept $ENVIRONMENT_NAME-$JUJU_UNIT_NAME as a unique identifier in that global namespace
[03:13] and then was going to just create volume groups named: my-volume-group-$ENVIRONMENT_NAME-$JUJU_UNIT_NAME
[03:14] i'll just use JUJU_UNIT_NAME for now
[03:15] smoser: why not just a plain old uuid?
[03:15] because then i have to store it.
[03:16] is there some recommended data storage mechanism for charms?
[03:16] * m_3 laughs
[03:16] "the filesystem"
[03:17] yeah.
[03:17] suggestion on where?
[03:18] facter?
[03:18] smoser: possibly $CHARM_HOME, as that will be removed whenever the unit is destroyed
[03:18] smoser: or possibly "anywhere other than $CHARM_HOME", as that will be removed whenever the unit is destroyed. :)
[03:18] CHARM_DIR?
[03:18] /var/lib/juju/units/nova-volume-7/charm
[03:18] otherwise, we usually just dump stuff into the charm's home dir... /var/lib/juju/units//
[03:18] CHARM_DIR, right
[03:19] /var/lib/juju's persistent tho
[03:19] are hooks serial?
[03:19] yes
[03:19] m_3: I think we may want to make it clear that that dir is deleted with reckless abandon when units are destroyed..
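A minimal sketch of the "generate a uuid and store it on the filesystem" option raised above. The state-file location is illustrative, not a juju convention: anything under the unit's charm dir is deleted when the unit is destroyed, which may or may not be the behavior you want. /proc/sys/kernel/random/uuid is a Linux-specific UUID source:

```shell
#!/bin/sh
# Generate a UUID for this unit once, cache it on the filesystem, and
# return the cached value on every later hook run. Hooks on a single unit
# run serially (per the discussion above), so no locking is needed here.
# NOTE: path is illustrative; the /tmp fallback is only so this runs
# outside a real hook environment.
STATE_FILE="${CHARM_DIR:-/tmp}/unit-uuid"
if [ ! -f "$STATE_FILE" ]; then
    # Linux-specific kernel UUID source; uuidgen would also work where installed
    cat /proc/sys/kernel/random/uuid > "$STATE_FILE"
fi
UNIT_UUID=$(cat "$STATE_FILE")
echo "$UNIT_UUID"
```

Storing it under the charm dir ties the id's lifetime to the unit; storing it elsewhere under /var/lib makes it survive, which matters for cleanup of external resources like volume groups.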
[03:19] its a good place for things you *want* to disappear
[03:19] if it's serial, then this is easy enough.
[03:20] yup
[03:20] smoser: all hooks on one unit are serial
[03:20] yeah. thats fine.
[03:20] I bet that'll still be the case with subordinate charms too... it's twisted... single reactor
[03:22] m_3, the subordinate charm will be run by an independent unit agent
[03:23] yeah subordinates will have to run in parallel with the master
[03:24] hmmm
[03:24] seems like each hook exec should exec atomically tho
[03:25] anyway, this running in parallel is really the behavior one should expect
[03:25] m_3: nope, subordinates are just service units running on the same box.
[03:25] that would still allow subordinate to negotiate with primary through joined-changed-changed-etc
[03:25] m_3, correct
[03:26] It actually may present an interesting problem as subordinate usage explodes.. if people have 50 subordinates for all their management bits... add-unit may be really painful.
[03:26] with the difference being that it's always a paired relationship - you don't see other units of the subordinate for a principal
[03:26] * m_3 plans to explode subordinate usage!
[03:27] to be clearer, you don't see the events for such units
[03:27] right
[03:28] SpamapS, add-unit doesn't make sense for a subordinate
[03:28] effectively they just get deployed by their principal
[03:28] on the same machine
[03:28] jimbaker: sure it does.. when you add-unit the master service, you have to deploy all the subordinates too
[03:28] thats what I mean.. the deploy is going to be taxing
[03:28] SpamapS, of course in that sense
[03:28] in fact
[03:29] apt-get will bomb out
[03:29] caching will be important
[03:31] no, apt-get will *fail*
[03:31] it has a lock file, and no "apt-get -- when something else is done apt-getting"
[03:31] we'll have to use aptd
[03:31] yes, i see your point
[03:38] SpamapS, given that subordinate charms are explicitly that (subordinate: true iirc is what's been decided), it should be part of the policy in subordinate development that they don't try apt-get in parallel
[03:38] SpamapS, by aptd, you mean aptdaemon?
[03:42] jimbaker, he probably did, yes.
[03:42] (man aptd)
[03:43] smoser, must be. not familiar with it. i wonder if it can be used relatively transparently with apt-get or some other cli
[03:46] aptdcon. not certain if it waits for the package to be installed before returning. anyway, we can make it work :)
[03:47] also the python api to aptd probably gives us some flexibility in working with transactions
[04:07] grrrrr peer relations bite
[04:07] waaaay too many relations to resolve... gets downright chatty
[04:10] m_3, i wonder if we will expand scoping to help with some of this chattiness. certainly something worth discussing at the next uds
[04:11] in budapest, one obvious thing we discussed was scoping relations so they would only chat if in the same geographic region
[04:12] stacks also provide obvious scope
[04:19] jimbaker: yes I meant aptdaemon
[04:20] SpamapS, looks like it should be workable. just something subordinates would need to know. can aptdaemon work at the same time as one apt-get?
[04:21] (to avoid reworking principals too)
[04:21] jimbaker: It will return a failure to the client
[04:21] jimbaker: IMO, the idea that subordinates should be special is absurd, and it will be plainly obvious once the feature is complete.
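Until something like aptdaemon mediates package installs, one hedged workaround for the apt lock contention described above is a simple retry wrapper in the hook; this is a sketch, not an official juju or charm-tools helper:

```shell
#!/bin/sh
# retry CMD ARGS... : re-run CMD until it succeeds, up to 10 attempts,
# sleeping briefly between tries. Useful around "apt-get -y install ..."
# in a hook when another unit on the same machine may hold the apt lock.
retry() {
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge 10 ] && return 1
        sleep 1
    done
}
# In a hook you might write:
#   retry apt-get -y install nova-volume
```

This does not serialize installs the way a daemon would; it just tolerates the "apt-get will *fail*" behavior by waiting out whoever holds the lock.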
[04:21] jimbaker: ceph is a good example of something that will have to be a subordinate and a principal
[04:22] jimbaker: anyway, what I think we'll do is move to a declarative package-installing plugin that will talk to aptd
[04:23] SpamapS, subordinates can have normal relations, so it's probably not so extreme. but the apt stuff certainly needs to be resolved
[04:23] jimbaker: and things that directly call apt-get will be flagged as broken
[04:23] maybe a role for charm helpers
[04:24] perhaps
[04:24] SpamapS, anyway, sounds good about the evolution of charms away from directly calling apt-get
[04:24] if for no other reason than to abstract apt-get away
[04:25] we could certainly just start doing it in metadata.yaml and have a charm helper that does it until juju does it natively
[04:25] sort of like ucf does for packages that want to support 3-way merges for conffiles
[04:25] SpamapS, i like the idea
[10:43] Hi folks. Is there any way, from within a charm's hooks, of getting the default values for a given set of config variables? (As opposed to whatever was passed to `juju set` last)
[10:45] gmb: not that I know of, but you do have your config.yaml copied in /var/lib/juju/ as part of your charm if I am not mistaken.
[10:46] nijaba: Ah, that's useful to know, thanks.
[10:46] np
[13:36] hi all
[13:37] jcastro: around?
[14:18] hello all
[14:18] anybody experienced with juju w/openstack w/keystone?
[14:18] I'm getting 401 back when nova client and euca2ools work fine
[15:14] uksysadmin, sorry, i don't have experience with it.
[15:14] can someone verify for me that there is no "destroy-thyself" hook
[15:15] i'm somewhat in need of one to reliably be able to do 'juju destroy-environment' on the local host.
[15:15] as my charms are setting up loopback devices that have to be torn down before 'rm -Rf' (lxc-destroy) will work.
[16:06] smoser: I'm convinced that you need LXC to namespace vg's or this just isn't going to work.
[16:07] that wouldn't help.
[16:07] not for this.
[16:08] lxc-destroy would still not work
[16:08] something is going to have to do 'losetup -d that-loop-device'
[16:08] or 'rm -Rf /that/path'
[16:08] is not going to be allowed
[16:09] Yeah we really need units to have a destroy hook
[16:09] so either lxc-destroy would have to be taught to find open filehandles that were associated with loop devices (which lxc doesn't by default allow access to loop devices, so that would be strange)
[16:09] or i want a destroy-me hook, where i'd clean up after myself nicely
[16:10] smoser: this is where we go "juju does not do anything with storage" and leave you confused. ;)
[16:12] well, this particular bit is really completely bogus.
[16:12] granting /dev/loop* access to the container is questionable at best
[16:12] and i realized that the volume-group thing is probably a result of the container being able to see /dev/loop*
[16:13] if it could only see /dev/loop0 (or loop1) then vgscan probably wouldn't see the other volume group in a different container with that name.
[16:13] and wouldn't complain.
[16:13] but i don't know if that would just cause issues later... who knows.
[16:14] smoser: it sounds to me like lvm inside containers is dangerous.
[16:14] probably.
[16:14] root inside containers is dangerous
[16:14] :)
[16:14] but that has been hand-waved at
[16:16] smoser: that's from a security standpoint. I'm looking at it from a "not f***ing your data" standpoint
[16:16] i think it's the same thing.
[16:17] running arbitrary code as root in a container that can leak is at risk of fscking your data
[16:17] period
[16:18] but i do think the volume groups are probably ok.
[16:18] as vg* commands inside the container are not going to have access to block devices outside.
[17:04] np smoser
[17:16] Hi folks. We're in the process of trying to write some tests for one of our charms... http://bazaar.launchpad.net/~charmers/charm-tools/trunk/view/head:/doc/source/charm-tests.rst refers to a "get-unit-info" utility.
Whereabouts can I find that?
[17:17] SpamapS keeps it hidden somewhere about his person at all times
[17:17] Hah.
[17:18] gmb: it's embedded in the mediawiki and mongodb charms right now. I'm thinking of creating a new 'juju-tools' project to stick things like this in.
[17:18] SpamapS: Thanks.
[17:19] (And that would be cool)
[17:19] * SpamapS does that now
[17:21] I could put it in charm-tools.. but this feels like something else.
[17:35] SpamapS: like teen spirit?
[17:35] lol
[17:35] robbiew: we're just stupid.. entertainers..
[17:36] here we are now
[17:38] SpamapS, isn't that what charm-tools is for?
[17:43] hi all
[17:44] does bzr require a specific port for the 'bzr branch' command?
[17:44] * koolhead17 hates firewalls
[17:44] :(
[17:47] hazmat: no, charm-tools is for charm-oriented tools.. juju-tools is stuff to enhance the juju experience :)
[17:47] hazmat: ideally charm-tools dies one day when juju has full functionality. :)
[17:48] koolhead17: it works with launchpad over ssh
[17:48] koolhead17: tho you can do readonly checkouts via HTTP/HTTPS, I think
[17:49] SpamapS: so essentially i need the SSH port working for the same.
=== koolhead17 is now known as koolhead17|zzZZ
[18:33] Is anybody working on a charm and intends to push changes in the next 10 minutes?
[18:36] m_3: ping, maybe? :)
[18:41] * SpamapS checks for uncommitted deltas in his local repos
[18:50] the old globally distributed lock ;-)
[18:51] nothing uncommitted here
[18:51] (and no spelling abiluhtee eethur)
[19:47] niemeyer: nope, I'm at mongo-boulder...
the mongo gang mentioned you
[20:21] SpamapS: hmm, I am debating on whether to post the raw IRC logs of m_3's session
[20:21] http://irclogs.ubuntu.com/2012/02/01/%23ubuntu-classroom.html
[20:22] I am not confident that the session missing the interactions and whatnot is useful for people who want to check it out
[20:28] <_mup_> juju/ssh-known_hosts r486 committed by jim.baker@canonical.com
[20:28] <_mup_> Merged trunk
[20:32] <_mup_> juju/refactor-machine-agent r456 committed by jim.baker@canonical.com
[20:32] <_mup_> PEP8
[20:35] jcastro: yeah, it's weak without the accompanying examples
[20:36] m_3: Oh did you have everybody log into juju-classroom?
[20:36] yup
[20:36] <3
[20:36] it was either that or an etherpad
[20:36] ok so I think instead of highlighting past courses, we should just highlight upcoming courses
[20:36] and bank on the interactive experience instead.
[20:37] which involves users more and it's more fun
[21:13] anyone: https://bugs.launchpad.net/juju/+bug/875903 ?
[21:13] <_mup_> Bug #875903: Zookeeper errors in local provider cause strange status view and possibly broken topology < https://launchpad.net/bugs/875903 >
[21:16] smoser: I saw that one a few times when I beat the heck out of my machine
[21:17] yeah. i assume it's load-based.
[21:17] smoser: IIRC I think we can just bump a timeout up to avoid hitting it
[21:18] where would said timeout live?
[21:18] smoser: something deep within txzookeeper IIRC
[21:40] win 57
[21:40] doh!
[21:40] * SpamapS hates when he reveals how many windows he has open
[21:44] 57.. that is *weak*
[22:20] smoser, it's on the roadmap
=== raema is now known as mattrae
[23:18] m_3, nice presentation, thanks for making it so.
[23:39] <_mup_> Bug #925211 was filed: Refactor the machine agent so that unit agent management is moved to a separate class < https://launchpad.net/bugs/925211 >