[13:44] <AskUbuntu_> relation error between nova-compute and nova cloud controller | http://askubuntu.com/q/277139
[14:46] <jcastro> m_3: so I hear you nailed -alternatives?
[14:48] <m_3> jcastro: yeah, it's working great... I want martin to take a look and then figure out what to do with the changelog and other official stuff to get it merged
[14:48] <mgz> jcastro: will want some testing, but looks reasonable
[14:48] <m_3> he might have a better way to do it too
[14:48] <m_3> :)
[14:48] <m_3> mgz: only tested it on raring btw
[14:49] <m_3> mgz: welcome back!
[14:49] <mgz> well, we'll only do this for raring, so that much is fine
[14:49] <m_3> hope you had a decent long weekend
[14:49]  * m_3 wants one next weekend I think
[14:51] <mgz> m_3: was good, still have sis and baby niece around today as well
[14:52] <m_3> cool
[14:52] <jcastro> wow, the room for the OpenStack Charm Workshop holds like 320 people.
[14:52] <m_3> no way
[14:52] <jcastro> m_3: we'll have wired internet this time!
[14:52] <m_3> cool
[14:52] <jcastro> way.
[14:52] <m_3> oh, awesome
[14:54] <m_3> jcastro: I really still wanna combine #juju and #juju-dev
[14:55] <m_3> I don't think developer-speak will drive people away from the channel
[14:55] <jcastro> I agree, but mramm didn't want to change anything until at least 13.04+
[14:55] <jcastro> I can appreciate his concern, with the deadlines looming, etc.
[14:55] <m_3> yeah, understand
[14:55] <m_3> just looking towards the future ;)
[14:56] <jcastro> yeah, after that we just need to move cheney to the us
[14:56] <jcastro> and we'll be good
[15:00]  * m_3 is still holding out for an australian sprint
[15:00] <m_3> harumph
[15:43]  * m_3 coffee
[16:38] <Carletto> hi all
[16:41] <jcastro> arosales: charmworld guys are asking for confirmation on something
[16:41] <jcastro> arosales: the new terminology is "reviewed" and then everything else right?
[16:41] <jcastro> we're not doing "Community" and all those other labels iirc?
[16:43] <arosales> jcastro: correct "reviewed" is how I remember it for charms being in the charm store, and then nothing else for all others.
[16:43] <arosales> and correct, no "community" label
[17:04] <Carletto> someone Italian?
[17:08] <SpamapS> jcastro: still searching the archives, but I didn't see that term discussed on the mailing list.. :-P
[17:10] <jcastro> yeah I am just waiting for some UI thing to bring it up on the list
[17:11] <jcastro> SpamapS: I am also going to start publishing notes from each weekly call, etc.
[17:11] <jcastro> and probably recording them and putting them on youtube
[17:12] <SpamapS> jcastro: and inviting the "community" ?
[17:12] <jcastro> yeah
[17:12] <SpamapS> I mean, not that I'll have time to attend
[17:12] <SpamapS> but its the thought that counts ;)
[17:12] <jcastro> somehow the weekly call turned private, not on purpose
[17:12] <jcastro> we used to do them with like brandon and marco etc.
[17:13] <SpamapS> yeah I remember :)
[17:46]  * m_3 lunch
[18:08]  * marcoceppi lunch
[18:20] <sidnei> SpamapS: heya, still owning the packaging for charm-tools? had to fix a couple things for the packaging to work with tip of lp:charm-tools, missing some build-deps mainly.
[18:20] <sidnei> well, and a billion pep8 errors on raring, due to the newer pep8, but I'm still going through those.
[19:01] <fss> hi everyone
[19:01] <fss> I'm having some trouble with zookeeper in one of our juju installations :-(
[19:03] <fss> zookeeper is eating all the available memory on one bootstrap machine, while in the other environment it uses less than 400MB of RAM. Both environments have about the same number of machines
[19:04] <fss> has anyone had this problem before?
[19:08] <andrewsmedina> hi
[19:09] <benji> fss: I haven't heard anything like that.  Have the two environments existed for the same length of time?  Have the two bootstrap nodes been up for the same amount of time?  I'm wondering if there is a leak somewhere.
[19:09] <hazmat> fss, can you check that debug-log is off
[19:10] <hazmat> fss, http://paste.ubuntu.com/5671415/
[19:11] <hazmat> fss, debug log left on can fill up space quickly
[19:11] <hazmat> fss, out of curiosity how many machines in the env?
[19:11] <fss> hazmat: hmm, I see. thanks for the script. I will turn it off.
[19:12] <hazmat> fss, that script turns off the log entries but does not clear out the used space
[19:12] <fss> hazmat: does "juju debug-log" clear it?
[19:13] <fss> hazmat: about 10 running machines, but we create and destroy very frequently
[19:13] <fss> the machine "counter" is on 554
[19:13] <hazmat> fss, it does not
[19:14] <fss> hazmat: how can I clean it?
[19:14] <hazmat> fss, the logs are nodes in /logs
[19:14] <hazmat> fss, if you can check/verify that's the issue, i can add a flag to that off script to clear it
[19:15] <fss> hazmat: how can I inspect /logs on nodes? I'm a zookeeper noob :)
[19:16] <hazmat> fss ssh into bootstrap node and run /usr/share/zookeeper/bin/zkCli.sh   and then $ ls /logs
[19:16] <fss> hazmat: thanks
[19:17] <hazmat> fss,  hmm. actually $ get /logs should show the child count as part of the parent node properties without having to count the individual files
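For reference, the stat block that zkCli.sh prints can also be scraped programmatically — a minimal Python sketch, assuming ZooKeeper's standard `field = value` stat layout (sample values below are illustrative):

```python
import re

def parse_num_children(stat_output):
    """Pull the numChildren field out of the stat block zkCli.sh prints."""
    m = re.search(r"numChildren\s*=\s*(\d+)", stat_output)
    if m is None:
        raise ValueError("no numChildren field found")
    return int(m.group(1))

# trimmed sample of `get /logs` output (values illustrative)
sample = """cZxid = 0x4
dataLength = 0
numChildren = 684745
"""
print(parse_num_children(sample))  # 684745
```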
[19:18] <hazmat> there's also not much by way of garbage collection on older state/nodes (ie destroyed services / machines)
[19:18] <hazmat> but those are typically tiny.. under 1k each
[19:18] <fss> hazmat: good, because I'm not able to ls /logs, I get a "connection loss" error. it's probably the out-of-memory error again
[19:18] <fss> here's the get output
[19:19] <fss> numChildren = 684745
[19:19] <hazmat> yikes..
[19:19] <fss> (if I increase xmx I would possibly be able to fix it, but it's already 1GB)
[19:19] <sidnei> :)
[19:20] <hazmat> zk only allows for 1mb packet size to the client by default, trying to do more than that will get that connection loss
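The 1 MB limit corresponds to ZooKeeper's default client packet size (the `jute.maxbuffer` setting, 0xfffff bytes). A rough, hedged estimate of why `ls /logs` dies with ~685k children — assuming sequential names like `log-0000000000`, each serialized as a 4-byte length prefix plus the name:

```python
JUTE_MAX_BUFFER = 0xFFFFF                    # ZooKeeper's default packet limit, just under 1 MB
num_children = 684745                        # child count reported for /logs
bytes_per_name = 4 + len("log-0000000000")   # length prefix + name (assumed format)

response_size = num_children * bytes_per_name
print(response_size)                     # ~12.3 MB
print(response_size > JUTE_MAX_BUFFER)   # True: the getChildren reply can't fit
```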
[19:20]  * hazmat ponders
[19:21] <fss> hazmat: I see
[19:21] <fss> hazmat: what's the proper way to cleanup these log entries?
[19:23] <hazmat> there isn't a proper tool for it. it's a bug.  deleting the nodes under /logs would do it.. but if you can't get the node names..
[19:24] <fss> :S
[19:25] <sidnei> maybe you can infer the node names to purge from the !active ones
[19:25] <hazmat> i think it's a sequence
[19:25] <hazmat> so should be able to iterate on delete commands to the sequence
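If the children really are a ZooKeeper sequence, their names are predictable — a zero-padded 10-digit counter appended to the prefix — so a delete loop can generate the paths instead of listing them. A hypothetical sketch (the `log-` prefix is a guess, not necessarily pyjuju's actual naming):

```python
def sequential_paths(count, parent="/logs", prefix="log-"):
    """ZooKeeper sequential znodes get a zero-padded 10-digit suffix,
    so the names can be regenerated without an (oversized) getChildren."""
    return ["%s/%s%010d" % (parent, prefix, i) for i in range(count)]

print(sequential_paths(3))
# ['/logs/log-0000000000', '/logs/log-0000000001', '/logs/log-0000000002']
```

Deleting would then be a loop of `client.delete(path)` calls, ignoring no-node errors for any gaps in the sequence.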
[19:26] <fss> is it possible to increase that 1MB packet size limit?
[19:26] <hazmat> fss, yes, but it needs recompilation
[19:26] <fss> oh boy
[19:28] <hazmat> fss, got a solution, but in meeting, will need 30m to code up
[19:32] <fss> hazmat: oh, thanks. I can wait. I will keep trying something else
[21:45] <hazmat> fss, --reset option to kill the logs.. http://paste.ubuntu.com/5671870/
[21:45] <hazmat> oh.. he's gone.
[21:47]  * hazmat emails
[21:48] <sidnei> hazmat: was about to say. :) also does that delete all the logs unconditionally?
[21:50] <hazmat> sidnei, it does
[21:50] <hazmat> sidnei, honestly storing logs in zk is particularly bad..
[21:50] <hazmat> juju-core i think is setting up syslog as a remote target
[21:50] <sidnei> yeah, no idea how it works. just wondering if that will break something else
[21:50] <hazmat> sidnei, the logs still exist on disk everywhere
[21:50] <sidnei> yup, i saw the commit for using syslog passing by
[21:51] <hazmat> sidnei, yeah.. just updated the script.. to reset the parent block pointer to update the last seen index for the cli tool
[21:51] <hazmat> in this version, also emailed out http://paste.ubuntu.com/5671892/
[21:52] <sidnei> ah, cool
[22:31] <sidnei> m_3: ping! https://code.launchpad.net/~ubuntuone-pqm-team/ubuntu/raring/charm-tools/raring/+merge/156710 https://code.launchpad.net/~ubuntuone-pqm-team/charm-tools/trunk/+merge/156709
[22:32] <sidnei> successfully built at https://code.launchpad.net/~ubuntuone-pqm-team/+recipe/charm-tools-daily
[22:44] <m_3> sidnei: hey
[22:47] <m_3> sidnei: I don't have permission to push to that cause it's distro I guess
[22:47] <m_3> sidnei: lemme see what the difference is
[22:48] <m_3> we might have to submit them against lp:charm-tools first... not sure
[22:48] <m_3> -vs- lp:ubuntu/charm-tools
[22:49] <sidnei> m_3: the latter is the packaging branch (the debian/* stuff), the former is the 'upstream' branch
[22:50] <m_3> sidnei: yeah, just found the right ones... the initial MPs linked to the ubuntu one
[22:51] <m_3> lemme merge
[22:51] <sidnei> cool
[22:53] <m_3> sidnei: do you have permissions to push to the packaging branch?
[22:53] <m_3> sidnei: the changes are approved, but I can't push to that one
[22:54] <sidnei> m_3: i don't. maybe SpamapS?
[22:54] <m_3> clint's the only one that I know of
[22:54] <m_3> we can probably get somebody to sponsor it, but it might take a bit
[22:54] <m_3> SpamapS: https://code.launchpad.net/~ubuntuone-pqm-team/ubuntu/raring/charm-tools/raring/+merge/156710
[22:56] <SpamapS> m_3: about to step out for the day, don't think I'll be able to get to it until tomorrow
[22:57] <sidnei> tomorrow is fine thanks!
[22:57] <m_3> SpamapS: np, thanks man
[22:59] <andrewsmedina> hazmat: does zookeeper need a restart after deleting the logs?
[23:01] <hazmat> andrewsmedina, the purgeTxnLogs cron job (from the zk pkg) should get run. but afaik .. no restart
[23:01] <m_3> sidnei: lp:charm-tools is updated... it'll start passing again once the packaging updates land
[23:01] <hazmat> i don't know how aggressive it is about releasing memory.. probably some jvm tuning
[23:01] <m_3> sidnei: thanks btw!
[23:01] <hazmat> it's not keeping additional versions in memory; it keeps them on disk in the txn logs, which is why the purge script is useful
[23:02] <sidnei> m_3: thank you!
[23:04] <fss> i'm back
[23:04] <fss> hazmat: i saw your email when I was on the bus, thanks for the help! :)
[23:06] <fss> hazmat: I will take a look tomorrow, it's on vpc, I can't access it from home. I could use an elastic ip, but we have a temporary solution, it can wait
[23:09] <sidnei> fss, andrewsmedina: did you guys ever try jitsu import/export, and do you have the need for it or something similar to it in tsuru?
[23:11] <fss> sidnei: nope, we never tried it. Actually, I don't know what it does. Is it used for exporting and importing environments between cloud providers?
[23:12] <sidnei> fss: roughly yes. I'm looking for something that can take (charms, configs, relations, constraints, number of units) and either build a complete environment from scratch or update an already-running one.
[23:13] <sidnei> jitsu import does handle the 'from scratch' part, but I'm not sure it handles updating an existing environment to match the import file, hazmat?
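For a sense of shape: a hypothetical state file for the sync tool sidnei describes (all names invented; juju-deployer later settled on a similar YAML layout):

```yaml
my-env:
  series: precise
  services:
    webapp:
      charm: cs:precise/wordpress
      num_units: 2
      constraints: instance-type=m1.small
      options:
        tuning: optimized
    db:
      charm: cs:precise/mysql
  relations:
    - [webapp, db]
```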
[23:15] <andrewsmedina>     content, stat = client.get("/logs")
[23:15] <fss> sidnei: that would be cool
[23:15] <andrewsmedina> exceptions.TypeError: iteration over non-sequence
[23:15] <hazmat> sidnei, it can be imported into an existing non-conflicting env (conflicts detected and aborted)
[23:15] <hazmat> whoops
[23:15] <hazmat> andrewsmedina, add a 'yield' before the client call
[23:16] <andrewsmedina> hazmat: ok
[23:16] <hazmat> actually hold on.. i should check the rest of that script
[23:16] <hazmat> andrewsmedina, fixed version http://paste.ubuntu.com/5672062/
[23:16] <andrewsmedina> hazmat: it works :)
[23:17] <hazmat> andrewsmedina, it doesn't...
[23:17] <fss> andrewsmedina: \o/
[23:17] <andrewsmedina> hazmat: it's removing 685132 logs :p
[23:17]  * fss zzzzzzz
[23:17] <hazmat> andrewsmedina, not without the second version.. it's not waiting on the op completion.. so it's issuing a bunch of deletes without waiting on the results..
[23:17] <andrewsmedina> hazmat: I added the yield
[23:17] <hazmat> it should still clear out some stuff, and it's not harmful.. but you should run the second version
[23:18] <hazmat> andrewsmedina, there where a few yields missing.. diff to that second pastebin
[23:18] <hazmat> but that should clear out the memory
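The pastebins above use Twisted's inlineCallbacks, where `yield` plays the role `await` plays today: without it, each delete is fired but the coroutine never waits for (or observes errors from) the result. A standalone asyncio sketch of the same pitfall, with a fake client standing in for ZooKeeper:

```python
import asyncio

class FakeZK:
    """Stand-in for a ZooKeeper client; each delete takes one loop turn."""
    def __init__(self):
        self.deleted = []

    async def delete(self, path):
        await asyncio.sleep(0)      # simulated network round-trip
        self.deleted.append(path)

async def buggy_purge(client, paths):
    # the first pastebin's bug, translated: deletes are issued,
    # but the coroutine returns without waiting on any of them
    for p in paths:
        asyncio.ensure_future(client.delete(p))
    return len(client.deleted)      # how many had finished when we returned

async def fixed_purge(client, paths):
    for p in paths:
        await client.delete(p)      # wait for each op to complete
    return len(client.deleted)

paths = ["/logs/log-%010d" % i for i in range(5)]
print(asyncio.run(buggy_purge(FakeZK(), paths)))  # 0
print(asyncio.run(fixed_purge(FakeZK(), paths)))  # 5
```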
[23:18] <hazmat> don't forget to clear out the txn logs
[23:18] <hazmat> if you're close to disk full
[23:18] <sidnei> hazmat: yeah, looking at the source seems like it's pretty specific, and would depend on pyjuju
[23:19] <hazmat> sidnei, yeah.. I've got it on my task/todo list to update juju-deployer2 for juju-core.. as the openstack deploys are using it instead of import/export jitsu style
[23:19] <hazmat> it's much closer to pure cli usage so should work with both
[23:20] <sidnei> hazmat: is there a juju-deployer2 already? :)
[23:20] <hazmat> with the juju-core api publicly available, things like that should be easier
[23:20] <hazmat> what import/export are doing that is
[23:20] <hazmat> sidnei, not yet.. was waiting on maas provider for jcore
[23:20] <hazmat> else its non-urgent/fun task
[23:21] <sidnei> hazmat: I've got it very high on my list to rewrite juju-deployer with tests
[23:21] <hazmat> sidnei, when you say update existing env.. what do you mean?
[23:21] <hazmat> like change config, change charm versions.. change charm origins?
[23:22] <hazmat> ie more of a sync than import
[23:23] <sidnei> hazmat: yes, something along these lines. with an eye towards service-swap (deploy charm as service-1; on sync deploy charm as service-2 and move relation from service-1 to service-2)
[23:24] <hazmat> sidnei, with both s-1 and s-2 defined in the state file ?
[23:25] <sidnei> hazmat: haven't got to that part yet, but i think nope. it would be defined as 'service' in the state file, the versioning would be a flag.
[23:26] <sidnei> maybe sync --rolling-swap=service or something
[23:27] <sidnei> or maybe it's defined as a policy in the state file, and it's always done as a rolling swap whenever there are changes to 'service' to be synced
[23:28] <hazmat> fss, andrewsmedina you guys good?
[23:28] <hazmat> sidnei, not really seeing that.. because otherwise it could just be a charm upgrade away
[23:28] <hazmat> sidnei, otherwise why throw away state.. i.e. what if it's a db
[23:28] <fss> hazmat: I think so
[23:29] <hazmat> fss, memory usage down?
[23:29] <sidnei> hazmat: what if you need to change constraints? move from m1.small to m1.large? or can that be done just by setting constraints on an existing service and adding/removing nodes?
[23:30] <fss> hazmat: andrewsmedina is running the script, looks like the memory usage dropped by about 1/3 (from about 1G to about 650M)
[23:30] <hazmat> fss, excellent
[23:30] <hazmat> sidnei, you can do that on ec2 without killing things..
[23:30] <hazmat> sidnei, stop, modify instance attr, start..
[23:31] <hazmat> I mean there's always a place for rolling.. with canaries.. esp for image deploys..
[23:31] <hazmat> but there are other ways to skin that cat
[23:32] <fss> hazmat: thank you :-)
[23:33] <sidnei> hazmat: yeah, I think canaries are actually what I'm after
[23:33] <sidnei> hazmat: can you expand on other ways to skin that cat? :)
[23:34] <sidnei> (dinner will check backlog later)
[23:34]  * hazmat does the same