=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== Beret- is now known as Beret
=== jamespage is now known as 13WAATFJC
=== defunctzombie_zz is now known as defunctzombie
=== marcopollo__ is now known as marcopollo_
=== mthaddon` is now known as mthaddon
=== defunctzombie is now known as defunctzombie_zz
=== huats_ is now known as huats
=== melmoth_ is now known as melmoth
=== negronjl` is now known as negronjl
=== racedo` is now known as racedo
[10:34] Anybody here that could help me getting my environment up after node 0 is not working?
[10:36] Bounty offered here: http://askubuntu.com/questions/271312/what-to-do-when-juju-machine-0-has-got-agent-state-not-started-state
[10:57] I am actually able to log into machine 0, but how to get the agent running again?
[10:57] The log there says: Failure: txzookeeper.client.ConnectionTimeoutException: could not connect before timeout
=== gary_poster|away is now known as gary_poster
[12:42] where should I go for support with juju? I am running raring on my desktop so expect to be told I can't use askubuntu..
=== ppetraki_ is now known as ppetraki
=== ssweeny` is now known as ssweeny
=== teknico_ is now known as teknico
[13:39] popey here works
[13:40] ok, on raring I just went through the tutorial, add the ppa, install juju, then bootstrap
[13:40] i then did "juju deploy etherpad-lite" and it barfed
[13:40] http://paste.ubuntu.com/5646194
[13:40] like that
[13:40] juju status just returns a timeout now
[13:40] Also, WRT Ask Ubuntu, not all topics wrt raring are offtopic, Juju on raring would be ontopic FWIW
[13:41] oh, i thought all Ubuntu+1 was offtopic, sorry
[13:42] popey: that looks like a bug to be honest
[13:42] What version of juju is this listed as in the package?
[13:45] I don't have a raring machine up yet, let me spin up a VM to test
[13:46] popey: https://bugs.launchpad.net/juju/+bug/1159020
[13:46] <_mup_> Bug #1159020: SyntaxError: invalid syntax < https://launchpad.net/bugs/1159020 >
=== vednis is now known as mars
[13:48] marcoceppi: yeah, looks like it, using 0.6.0.1+bzr620-0juju2~raring1
[13:48] looks like it's from python 2 and 2.7 getting muddled
[13:49] lxc-ls is being given to python 3 when it's a python 2 script
[13:52] doesn't look juju related at any rate
[13:55] popey: if you're going to use canonistack and AWS, try the 2.0 version!
[13:56] popey: I'm curious about your environment though, do you have PYTHONPATH or anything set?
[13:56] mgz: no
[13:57] mgz: http://paste.ubuntu.com/5646414
[13:57] if you just run `lxc-ls` what happens?
[13:57] nothing
[13:57] nothing is not an error, so the missing fun element is something else...
[13:58] i have two machines here, can probably replicate it
[14:01] yeah, same on my laptop
[14:01] unsurprisingly
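For anyone hitting the same SyntaxError when juju shells out to lxc-ls, a quick sanity check is to see which interpreter the script declares and which juju package is installed. This is a rough sketch, assuming the lxc tools from the Ubuntu archive; exact paths and package output may differ on other setups.

    # Show the interpreter line of lxc-ls; a Python 2 script handed to
    # python3 would fail with "SyntaxError: invalid syntax" much like
    # the paste linked above.
    head -n1 "$(command -v lxc-ls)"

    # Confirm which juju package version is installed (the log mentions
    # 0.6.0.1+bzr620-0juju2~raring1 from the PPA).
    apt-cache policy juju

    # Running lxc-ls by hand, as suggested in the channel, shows whether
    # the script errors on its own, outside of juju.
    lxc-ls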
=== ahs3` is now known as ahs3
=== wedgwoodz is now known as wedgwood_away
=== defunctzombie_zz is now known as defunctzombie
[15:55] Hmm, I just wonder if some of the problems I experience is caused by too much memory consumption by the Python stuff compared to the 512 mb available in my ec2 t1.micro instances?
[15:55] And they run without swap, so I guess the kernel kills "random" processes when the memory runs out
[15:59] Would the go version reduce this problem?
[16:00] by making t1.micro have more memory? :)
[16:01] I'm not sure is the real answer.
[16:01] the memory used by juju on everything but the state server should be pretty limited regardless
[16:01] and I'm not sure which is worse out of zookeeper and mongo
[16:02] if it's machine 0 that's having the issues, you can always deploy just that on a larger instance by passing a constraint on bootstrap
[16:03] I think that has helped for other people in the past.
[16:03] Ironically I am suspecting that the problem grew worse by adding landscape client for monitoring them
[16:03] that's implemented as a subordinate charm?
[16:03] yep
[16:03] that would push up the requirements for each machine
[16:05] Hmm, for small deployments, the price for running a lot of those m1.small instances is noticeable with all required nodes
[16:06] yup, micro is a good deal, but is a very different beast
[16:08] hmm, yep, if they just had added some swap, it wouldn't be so destructive
=== salgado is now known as salgado-lunch
=== exekias_ is now known as exekias
=== gianr__ is now known as gianr
=== salgado-lunch is now known as salgado
=== Makyo is now known as Makyo|out
[19:17] mariusko, ec2 micro instances are not regular vms..
[19:18] they are heavily penalized for cpu usage.
[19:19] both zk and mongodb favor keeping things in memory.. overall data set size is similar between the two (excluding debug-log)
[19:20] hazmat`: 'favor' is a bit light for what zk does with its dataset in RAM ;)
[19:21] * SpamapS notes that MySQL is happy to run (horribly) w/ a 16MB buffer.
=== defunctzombie is now known as defunctzombie_zz
=== hasp-air_ is now known as hasp-air
[19:24] SpamapS, fair enough.. they both strongly desire working set in ram, with zk required, mongodb.. almost required for good perf.
=== defunctzombie_zz is now known as defunctzombie
=== hazmat` is now known as hazmat
[22:34] hrm, am I doing something wrong here? ZOO_ERROR@handle_socket_error_msg@1579: Socket [10.0.3.1:39997] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
[22:57] ah. full disk == bad news.
=== sidnei` is now known as sidnei
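Two of the workarounds discussed above can be sketched as commands. The constraint flag matches pyjuju 0.6-era usage and the m1.small instance type is only an example; the swap-file size is likewise illustrative, not a recommendation.

    # Put the state server (machine 0) on something bigger than a t1.micro
    # by passing a constraint at bootstrap time, as suggested at 16:02.
    juju bootstrap --constraints "instance-type=m1.small"

    # On an existing micro instance, a swap file softens the "random"
    # process kills by the OOM killer mentioned at 15:55.
    sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile

    # And for the 22:34 "Connection refused" from zookeeper, checking free
    # disk space would have pointed straight at the 22:57 culprit.
    df -h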