[10:34] <mariusko> Anybody here that could help me getting my environment up after node 0 is not working?
[10:36] <mariusko> Bounty offered here: http://askubuntu.com/questions/271312/what-to-do-when-juju-machine-0-has-got-agent-state-not-started-state
[10:57] <mariusko> I am actually able to log into machine 0, but how to get the agent running again?
[10:57] <mariusko> The log there says: Failure: txzookeeper.client.ConnectionTimeoutException: could not connect before timeout
[12:42] <popey> where should I go for support with juju? I am running raring on my desktop so expect to be told I can't use askubuntu..
[13:39] <marcoceppi> popey here works
[13:40] <popey> ok, on raring I just went through the tutorial, add the ppa, install juju, then bootstrap
[13:40] <popey> i then did "juju deploy etherpad-lite" and it barfed
[13:40] <popey> http://paste.ubuntu.com/5646194
[13:40] <popey> like that
[13:40] <popey> juju status just returns a timeout now
[13:40] <marcoceppi> Also, WRT Ask Ubuntu, not all topics wrt raring are offtopic, Juju on raring would be ontopic FWIW
[13:41] <popey> oh, i thought all Ubuntu+1 was offtopic, sorry
[13:42] <marcoceppi> popey: that looks like a bug to be honest
[13:42] <marcoceppi> What version of juju is this listed as in the package?
[13:45] <marcoceppi> I don't have a raring machine up yet, let me spin up a VM to test
[13:46] <marcoceppi> popey: https://bugs.launchpad.net/juju/+bug/1159020
[13:46] <_mup_> Bug #1159020: SyntaxError: invalid syntax <juju:New> < https://launchpad.net/bugs/1159020 >
[13:48] <popey> marcoceppi: yeah, looks like it, using 0.6.0.1+bzr620-0juju2~raring1
[13:48] <mgz> looks like it's from python 2 and 2.7 getting muddled
[13:49] <mgz> lxc-ls is being given to python 3 when it's a python 2 script
[13:52] <mgz> doesn't look juju related at any rate
[13:55] <jcastro_> popey: if you're going to use canonistack and AWS, try the 2.0 version!
[13:56] <mgz> popey: I'm curious about your environment though, do you have PYTHONPATH or anything set?
[13:56] <popey> mgz: no
[13:57] <popey> mgz: http://paste.ubuntu.com/5646414
[13:57] <mgz> if you just run `lxc-ls` what happens?
[13:57] <popey> nothing
[13:57] <mgz> nothing is not an error, so the missing fun element is something else...
[13:58] <popey> i have two machines here, can probably replicate it
[14:01] <popey> yeah, same on my laptop
[14:01] <popey> unsurprisingly
[15:55] <mariusko> Hmm, I just wonder if some of the problems I experience are caused by too much memory consumption by the Python stuff compared to the 512 MB available in my ec2 t1.micro instances?
[15:55] <mariusko> And they run without swap, so I guess the kernel kills "random" processes when the memory runs out
[15:59] <mariusko> Would the go version reduce this problem?
[16:00] <mgz> by making t1.micro have more memory? :)
[16:01] <mgz> I'm not sure is the real answer.
[16:01] <mgz> the memory used by juju on everything but the state server should be pretty limited regardless
[16:01] <mgz> and I'm not sure which is worse out of zookeeper and mongo
[16:02] <mgz> if it's machine 0 that's having the issues, you can always deploy just that on a larger instance by passing a constraint on bootstrap
[16:03] <mgz> I think that has helped for other people in the past.
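[Editor's note: a sketch of the constrained bootstrap mgz describes, for running only machine 0 (the state server) on a larger EC2 instance. The exact constraint keys vary between juju versions; `instance-type` is shown here as an assumed example.]

```shell
# Bootstrap the state server (machine 0) on a bigger instance than the
# environment default, so zookeeper/mongo have more than t1.micro's 512 MB.
juju bootstrap --constraints "instance-type=m1.small"

# Subsequent `juju deploy` units still use the environment's normal
# instance size unless given their own constraints.
```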
[16:03] <mariusko> Ironically I am suspecting that the problem grew worse by adding landscape client for monitoring them
[16:03] <mgz> that's implemented as a subordinate charm?
[16:03] <mariusko> yep
[16:03] <mgz> that would push up the requirements for each machine
[16:05] <mariusko> Hmm, for small deployments, the price for running a lot of those m1.small instances is noticeable with all required nodes
[16:06] <mgz> yup, micro is a good deal, but is a very different beast
[16:08] <mariusko> hmm, yep, if they had just added some swap, it wouldn't be so destructive
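[Editor's note: EC2 instance-store images of this era shipped without swap; a swap file can be added manually. A minimal sketch (requires root; the 1 GB size and `/swapfile` path are arbitrary choices, not anything from the channel):]

```shell
# Create and enable a 1 GB swap file so the OOM killer stops
# picking off processes on a 512 MB t1.micro.
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile        # swap files must not be world-readable
mkswap /swapfile
swapon /swapfile

# Make it survive reboots:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```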
[19:17] <hazmat`> mariusko, ec2 micro instances are not regular vms..
[19:18] <hazmat`> they are heavily penalized for cpu usage.
[19:19] <hazmat`> both zk and mongodb favor keeping things in memory.. overall data set size is similar between the two (excluding debug-log)
[19:20] <SpamapS> hazmat`: 'favor' is a bit light for what zk does with its dataset in RAM ;)
[19:21]  * SpamapS notes that MySQL is happy to run (horribly) w/ a 16MB buffer.
[19:24] <hazmat`> SpamapS, fair enough.. they both strongly desire working set in ram, with zk required, mongodb.. almost required for good perf.
[22:34] <sarnold> hrm, am I doing something wrong here? ZOO_ERROR@handle_socket_error_msg@1579: Socket [10.0.3.1:39997] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
[22:57] <sarnold> ah. full disk == bad news.