=== defunctzombie_ is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
=== thomi|lunch is now known as thomi
=== defunctzombie is now known as defunctzombie_zz
=== wedgwoodz is now known as wedgwood_away
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== racedo` is now known as racedo
=== wedgwood_away is now known as wedgwood
=== wedgwood is now known as Guest6347
[13:44] relation error between nova-compute and nova cloud controller | http://askubuntu.com/q/277139
=== marco is now known as Guest28583
=== defunctzombie_zz is now known as defunctzombie
=== Guest28583 is now known as marcoceppi
[14:46] m_3: so I hear you nailed -alternatives?
[14:48] jcastro: yeah, it's working great... I want martin to take a look and then figure out what to do with the changelog and other official stuff to get it merged
[14:48] jcastro: will want some testing, but looks reasonable
[14:48] he might have a better way to do it too
[14:48] :)
[14:48] mgz: only tested it on raring btw
[14:49] mgz: welcome back!
[14:49] well, we'll only do this for raring, so that much is fine
[14:49] hope you had a decent long weekend
[14:49] * m_3 wants one next weekend I think
[14:51] m_3: was good, still have sis and baby niece around today as well
[14:52] cool
[14:52] wow, the room for the OpenStack Charm Workshop holds like 320 people.
[14:52] no way
[14:52] m_3: we'll have wired internet this time!
[14:52] cool
[14:52] way.
[14:52] oh, awesome
[14:54] jcastro: I really still wanna combine #juju and #juju-dev
[14:55] I don't think developer-speak will drive people away from the channel
[14:55] I agree, but mramm didn't want to change anything until at least 13.04+
[14:55] I can appreciate his concern, with the deadlines looming, etc.
[14:55] yeah, understood
[14:55] just looking towards the future ;)
[14:56] yeah, after that we just need to move cheney to the us
[14:56] and we'll be good
[15:00] * m_3 is still holding out for an australian sprint
[15:00] harumph
[15:43] * m_3 coffee
=== gianr_ is now known as gianr
[16:38] hi all
[16:41] arosales: charmworld guys are asking for confirmation on something
[16:41] arosales: the new terminology is "reviewed" and then everything else, right?
[16:41] we're not doing "Community" and all those other labels iirc?
[16:43] jcastro: correct, "reviewed" is how I remember it for charms being in the charm store, and then nothing else for all others.
[16:43] and correct, no "community" label
[17:04] someone Italian?
[17:08] jcastro: still searching the archives, but I didn't see that term discussed on the mailing list.. :-P
=== salgado is now known as salgado-lunch
[17:10] yeah, I am just waiting for some UI thing to bring it up on the list
[17:11] SpamapS: I am also going to start publishing notes from each weekly call, etc.
[17:11] and probably recording them and putting them on youtube
[17:12] jcastro: and inviting the "community"?
[17:12] yeah
[17:12] I mean, not that I'll have time to attend
[17:12] but it's the thought that counts ;)
[17:12] somehow the weekly call turned private, not on purpose
[17:12] we used to do them with like brandon and marco etc.
[17:13] yeah, I remember :)
[17:46] * m_3 lunch
=== salgado-lunch is now known as salgado
[18:08] * marcoceppi lunch
[18:20] SpamapS: heya, still owning the packaging for charm-tools? Had to fix a couple of things for the packaging to work with tip of lp:charm-tools, missing some build-deps mainly.
[18:20] well, and a billion pep8 errors on raring, due to the newer pep8, but I'm still going through those.
=== Catbuntu is now known as Guest23171
[19:01] hi everyone
[19:01] I'm having some trouble with zookeeper in one of our juju installations :-(
=== HelenCrowley is now known as Catbutnu
=== Catbutnu is now known as Catbuntu
[19:03] zookeeper is eating all the available memory on one bootstrap machine, while in the other environment it uses less than 400MB of RAM. Both environments have about the same number of machines
[19:04] has anyone had this problem before?
[19:08] hi
[19:09] fss: I haven't heard of anything like that. Have the two environments existed for the same length of time? Have the two bootstrap nodes been up for the same amount of time? I'm wondering if there is a leak somewhere.
[19:09] fss, can you check that debug-log is off
[19:10] fss, http://paste.ubuntu.com/5671415/
[19:11] fss, debug-log left on can fill up space quickly
[19:11] fss, out of curiosity, how many machines in the env?
[19:11] hazmat: hmm, I see. thanks for the script. I will turn it off.
[19:12] fss, that script turns off the log entries but does not clear out the used space
[19:12] hazmat: does "juju debug-log" clear it?
[19:13] hazmat: about 10 running machines, but we create and destroy very frequently
[19:13] the machine "counter" is at 554
[19:13] fss, it does not
[19:14] hazmat: how can I clean it?
[19:14] fss, the logs are nodes in /logs
[19:14] fss, if you can check/verify that's the issue, I can add a flag to that off script to clear it
[19:15] hazmat: how can I inspect /logs on the nodes? I'm a zookeeper noob :)
[19:16] fss: ssh into the bootstrap node and run /usr/share/zookeeper/bin/zkCli.sh and then $ ls /logs
[19:16] hazmat: thanks
[19:17] fss, hmm. actually $ get /logs should show the child count as part of the parent node properties without having to count the individual files
[19:18] there's also not much by way of garbage collection on older state/nodes (i.e. destroyed services / machines)
[19:18] but those are typically tiny.. under 1k each
[19:18] hazmat: good, because I'm not able to ls /logs, I get a "connection loss" error. it's probably an out of memory error again
[19:18] here's the get output
[19:19] numChildren = 684745
[19:19] yikes..
[19:19] (if I increase xmx I would possibly be able to fix it, but it's already 1GB)
[19:19] :)
[19:20] zk only allows a 1MB packet size to the client by default; trying to do more than that will get that connection loss
[19:20] * hazmat ponders
[19:21] hazmat: I see
[19:21] hazmat: what's the proper way to clean up these log entries?
[19:23] there isn't a proper tool for it. it's a bug. deleting the nodes under /logs would do it.. but if you can't get the node names..
[19:24] :S
[19:25] maybe you can infer the node names to purge from the !active ones
[19:25] i think it's a sequence
[19:25] so should be able to iterate on delete commands to the sequence
[19:26] is it possible to increase that 1MB packet size limit?
[19:26] fss, yes, but it needs recompilation
[19:26] oh boy
[19:28] fss, got a solution, but in a meeting, will need 30m to code up
[19:32] hazmat: oh, thanks. I can wait. I will keep trying something else
=== gianr_ is now known as gianr
[21:45] fss, --reset option to kill the logs.. http://paste.ubuntu.com/5671870/
[21:45] oh.. he's gone.
[21:47] * hazmat emails
[21:48] hazmat: was about to say. :) also, does that delete all the logs unconditionally?
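
For reference, here is a minimal sketch of the kind of /logs cleanup hazmat's --reset script performs. It uses the kazoo client library rather than the txzookeeper-based script shared in the pastebin, and it assumes the children of /logs are sequential znodes named with a "log-%010d" pattern; that prefix, the connection string, and the upper bound are all assumptions, not something confirmed in the conversation.

    # Sketch only: purge /logs children without listing them, since
    # get_children("/logs") on ~685k entries can exceed ZooKeeper's
    # default 1MB response limit (the "connection loss" fss saw).
    from kazoo.client import KazooClient
    from kazoo.exceptions import NoNodeError

    PREFIX = "log-"            # assumed child-name prefix; verify on a fresh env
    ZK_HOSTS = "localhost:2181"  # placeholder connection string

    def purge_logs(max_sequence):
        """Delete /logs children by reconstructing their sequence names."""
        zk = KazooClient(hosts=ZK_HOSTS)
        zk.start()
        try:
            removed = 0
            for seq in range(max_sequence + 1):
                path = "/logs/%s%010d" % (PREFIX, seq)
                try:
                    zk.delete(path)
                    removed += 1
                except NoNodeError:
                    # Candidate name never existed or was already purged.
                    continue
            # The parent node's stat shows how many children remain.
            _, stat = zk.get("/logs")
            print("removed %d nodes, %d children left" % (removed, stat.numChildren))
        finally:
            zk.stop()

    if __name__ == "__main__":
        # 700000 comfortably covers the numChildren = 684745 reported above.
        purge_logs(700000)

As hazmat notes later in the log, deleting the znodes only frees ZooKeeper's in-memory copies; the on-disk transaction logs still need the purgeTxnLogs cron job (or a manual purge) to reclaim disk space.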
[21:50] sidnei, it does
[21:50] sidnei, honestly, storing logs in zk is particularly bad..
[21:50] juju-core i think is setting up syslog as a remote target
[21:50] yeah, no idea how it works. just wondering if that will break something else
[21:50] sidnei, the logs still exist on disk everywhere
[21:50] yup, i saw the commit for using syslog passing by
[21:51] sidnei, yeah.. just updated the script.. to reset the parent block pointer to update the last seen index for the cli tool
[21:51] in this version, also emailed out http://paste.ubuntu.com/5671892/
[21:52] ah, cool
[22:31] m_3: ping! https://code.launchpad.net/~ubuntuone-pqm-team/ubuntu/raring/charm-tools/raring/+merge/156710 https://code.launchpad.net/~ubuntuone-pqm-team/charm-tools/trunk/+merge/156709
[22:32] successfully built at https://code.launchpad.net/~ubuntuone-pqm-team/+recipe/charm-tools-daily
[22:44] sidnei: hey
[22:47] sidnei: I don't have permission to push to that because it's distro, I guess
[22:47] sidnei: lemme see what the difference is
[22:48] we might have to submit them against lp:charm-tools first... not sure
[22:48] -vs- lp:ubuntu/charm-tools
[22:49] m_3: the latter is the packaging branch (the debian/* stuff), the former is the 'upstream' branch
[22:50] sidnei: yeah, just found the right ones... the initial MPs linked to the ubuntu one
[22:51] lemme merge
[22:51] cool
[22:53] sidnei: do you have permissions to push to the packaging branch?
[22:53] sidnei: the changes are approved, but I can't push to that one
[22:54] m_3: i don't. maybe SpamapS?
[22:54] clint's the only one that I know of
[22:54] we can probably get somebody to sponsor it, but it might take a bit
[22:54] SpamapS: https://code.launchpad.net/~ubuntuone-pqm-team/ubuntu/raring/charm-tools/raring/+merge/156710
[22:56] m_3: about to step out for the day, don't think I'll be able to get to it until tomorrow
[22:57] tomorrow is fine, thanks!
[22:57] SpamapS: np, thanks man
[22:59] hazmat: does zookeeper need a restart after deleting the logs?
[23:01] andrewsmedina, the purgeTxnLogs cron job (from the zk pkg) should get run. but afaik.. no restart
[23:01] sidnei: lp:charm-tools is updated... it'll start passing again once the packaging updates land
[23:01] i don't know how aggressive it is about releasing memory.. probably some jvm tuning
[23:01] sidnei: thanks btw!
[23:01] it's not keeping additional versions in memory; it is keeping them on disk in the txn logs, which is why the purge script is useful
[23:02] m_3: thank you!
[23:04] i'm back
=== iggy2 is now known as iggy
[23:04] hazmat: i saw your email when I was on the bus, thanks for the help! :)
[23:06] hazmat: I will take a look tomorrow, it's on a vpc, I can't access it from home. I could use an elastic ip, but we have a temporary solution, it can wait
[23:09] fss, andrewsmedina: did you guys ever try jitsu import/export, and do you have the need for it or something similar to it in tsuru?
[23:11] sidnei: nope, we never tried it. Actually, I don't know what it does. Is it used for exporting and importing environments between cloud providers?
[23:12] fss: roughly, yes. I'm looking for something that can take (charms, configs, relations, constraints, number of units) and either build a complete environment from scratch or update an already-running one.
[23:13] jitsu import does handle the 'from scratch' part, but I'm not sure it handles updating an existing environment to match the import file, hazmat?
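
As a rough illustration of the "build a complete environment from scratch" half of what sidnei describes, the sketch below drives the juju CLI from a declarative spec. This is not jitsu's import format or code; the spec layout, charm URLs, and constraint strings are made up for illustration.

    # Sketch: deploy services, set config, add units, and add relations
    # from a simple in-memory spec by shelling out to the juju CLI.
    import subprocess

    SPEC = {
        "services": {
            "wordpress": {"charm": "cs:precise/wordpress", "units": 2,
                          "config": {}, "constraints": None},
            "mysql": {"charm": "cs:precise/mysql", "units": 1,
                      "config": {"tuning-level": "safest"},
                      "constraints": "mem=2G"},
        },
        "relations": [("wordpress", "mysql")],
    }

    def juju(*args):
        """Run a juju subcommand, raising on failure."""
        subprocess.check_call(("juju",) + args)

    def build(spec):
        for name, svc in spec["services"].items():
            cmd = ["deploy"]
            if svc.get("constraints"):
                cmd += ["--constraints", svc["constraints"]]
            cmd += [svc["charm"], name]
            juju(*cmd)
            for key, value in svc.get("config", {}).items():
                juju("set", name, "%s=%s" % (key, value))
            for _ in range(svc.get("units", 1) - 1):
                juju("add-unit", name)
        for a, b in spec["relations"]:
            juju("add-relation", a, b)

    if __name__ == "__main__":
        build(SPEC)

The "update an already-running environment" half that sidnei asks about would additionally need to diff the spec against the current juju status output and issue only the missing deploy/set/add-relation calls, which is the sync behaviour discussed below.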
[23:15] content, stat = client.get("/logs")
[23:15] sidnei: that would be cool
[23:15] exceptions.TypeError: iteration over non-sequence
[23:15] sidnei, it can be imported into an existing non-conflicting env (conflicts detected and aborted)
[23:15] whoops
[23:15] andrewsmedina, add a yield before client
[23:15] 'yield'
[23:16] hazmat: ok
[23:16] actually, hold on.. i should check the rest of that script
[23:16] andrewsmedina, fixed version http://paste.ubuntu.com/5672062/
[23:16] hazmat: it works :)
[23:17] andrewsmedina, it doesn't...
[23:17] andrewsmedina: \o/
[23:17] hazmat: it's removing 685132 logs :p
[23:17] * fss zzzzzzz
[23:17] andrewsmedina, not without the second version.. it's not waiting on the op completion.. so it's issuing a bunch of deletes without waiting on the results..
[23:17] hazmat: I added the yield
[23:17] it should still clear out some stuff, and it's not harmful.. but you should run the second version
[23:18] andrewsmedina, there were a few yields missing.. diff against that second pastebin
[23:18] but that should clear out the memory
[23:18] don't forget to clear out the txn logs
[23:18] if you're close to disk full
[23:18] hazmat: yeah, looking at the source, it seems like it's pretty specific, and would depend on pyjuju
[23:19] sidnei, yeah.. i've got it on my task/todo list to update juju-deployer2 for juju-core.. as the openstack deploys are using it instead of import/export jitsu style
[23:19] it's much closer to pure cli usage so should work with both
[23:20] hazmat: is there a juju-deployer2 already? :)
[23:20] with the advent of the juju-core api being public, things like that should be easier
[23:20] what import/export are doing, that is
[23:20] sidnei, not yet.. was waiting on the maas provider for jcore
[23:20] else it's a non-urgent/fun task
[23:21] hazmat: i've got it very high on my list to rewrite juju-deployer with tests
[23:21] sidnei, when you say update existing env.. what do you mean?
[23:21] like change config, change charm versions.. change charm origins?
[23:22] i.e. more of a sync than an import
[23:23] hazmat: yes, something along these lines. with an eye towards service-swap (deploy charm as service-1; on sync, deploy charm as service-2 and move relations from service-1 to service-2)
[23:24] sidnei, with both s-1 and s-2 defined in the state file?
[23:25] hazmat: haven't got to that part yet, but i think nope. it would be defined as 'service' in the state file, and the versioning would be a flag.
[23:26] maybe sync --rolling-swap=service or something
[23:27] or maybe it's defined as a policy in the state file, and it's always done as a rolling swap whenever there are changes to 'service' to be synced
[23:28] fss, andrewsmedina you guys good?
[23:28] sidnei, not really seeing that.. cause else it could just be a charm upgrade away
[23:28] sidnei, else why throw away state.. i.e. what if it's a db
[23:28] hazmat: I think so
[23:29] fss, memory usage down?
[23:29] hazmat: what if you need to change constraints? move from m1.small to m1.large? or can that be done just by setting constraints on an existing service and adding/removing nodes?
[23:30] hazmat: andrewsmedina is running the script; it looks like the memory usage dropped by about 1/3 (from about 1G to about 650M)
[23:30] fss, excellent
[23:30] sidnei, you can do that on ec2 without killing things..
[23:30] sidnei, stop, modify instance attr, start..
[23:31] i mean, there's always a place for rolling.. with canaries.. esp for image deploys..
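
The TypeError at the top of this exchange comes from treating a Deferred as if it were the (content, stat) tuple: inside a twisted inlineCallbacks generator, txzookeeper's client.get() returns a Deferred whose result has to be yielded before it can be unpacked, which is the "add a yield" fix hazmat points to. The sketch below reconstructs that pattern; the connection string is a placeholder, and the script itself is not hazmat's pastebin script.

    # Sketch: the broken vs fixed unpacking of txzookeeper's get() result
    # inside an inlineCallbacks generator.
    from twisted.internet import reactor
    from twisted.internet.defer import inlineCallbacks
    from txzookeeper import ZookeeperClient

    @inlineCallbacks
    def show_log_count():
        client = ZookeeperClient("localhost:2181")  # placeholder host
        yield client.connect()

        # Broken: client.get() hands back a Deferred, not a tuple, so this
        # raises "TypeError: iteration over non-sequence":
        #   content, stat = client.get("/logs")

        # Fixed: wait for the Deferred's result before unpacking it.
        content, stat = yield client.get("/logs")
        print("numChildren = %s" % stat["numChildren"])

        yield client.close()
        reactor.stop()

    if __name__ == "__main__":
        reactor.callWhenRunning(show_log_count)
        reactor.run()

The same mistake explains hazmat's second fix: without a yield on each delete call, the script fires off all the deletes and exits without waiting for the results.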
[23:31] but there are other ways to skin that cat
[23:32] hazmat: thank you :-)
=== Guest6347 is now known as wedgwood_away
[23:33] hazmat: yeah, i think canaries are actually what I'm after
[23:33] hazmat: can you expand on the other ways to skin that cat? :)
[23:34] (dinner, will check backlog later)
[23:34] * hazmat does the same
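
For completeness, a small sketch of hazmat's "stop, modify instance attr, start" recipe for resizing an instance in place (e.g. m1.small to m1.large), written against boto 2, the EC2 library of that era. The region, instance id, and target type are placeholders; credentials are assumed to come from the usual boto environment configuration, and this only applies to EBS-backed instances.

    # Sketch: resize an EC2 instance in place by stopping it, changing its
    # instance type attribute, and starting it again.
    import time
    import boto.ec2

    def resize_instance(region, instance_id, new_type):
        conn = boto.ec2.connect_to_region(region)

        conn.stop_instances(instance_ids=[instance_id])
        _wait_for_state(conn, instance_id, "stopped")

        # The instance type can only be changed while the instance is stopped,
        # and only for EBS-backed instances.
        conn.modify_instance_attribute(instance_id, "instanceType", new_type)

        conn.start_instances(instance_ids=[instance_id])
        _wait_for_state(conn, instance_id, "running")

    def _wait_for_state(conn, instance_id, state, delay=10):
        """Poll until the instance reaches the requested state."""
        while True:
            reservations = conn.get_all_instances(instance_ids=[instance_id])
            instance = reservations[0].instances[0]
            if instance.state == state:
                return
            time.sleep(delay)

    if __name__ == "__main__":
        resize_instance("us-east-1", "i-0123abcd", "m1.large")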