[00:14] talk went fine... good practice for SCALE.. not many cloud users in ventura county. :-P
[00:27] SpamapS: Somebody told me a GUI was in the works for Juju?
[00:27] I've been working on one myself the last couple of days.
[00:30] Oh I don't know about guis
[00:31] I cmdline
[00:31] * SpamapS realizes all his problems with his demo today were because he had a lucid AMI specified for some reason
[00:31] Well me too... but a more accessible interface for beginners is always a good thing.
[00:32] Plus it makes it a little easier to keep an eye on 'juju status'.
[00:32] oohh and m1.large.. awesome, I spent like $8 on FAIL today
[00:33] george_e: Ah, yeah I think its always been something planned
[00:33] Well, if there _isn't_ anyone working on one - I am :)
[00:33] george_e: I'd love an interface that just showed status in a pretty way :)
[00:33] I did the gource thing.. but thats just a show
[00:33] SpamapS: I will certainly make sure of that.
[00:34] I plan to have a tree view that displays the services, units, etc. in a hierarchical manner.
[00:34] Plus it can even make use of libnotify for errors.
[00:34] That way you find out when something goes wrong.
[00:35] Oh a native GUI?
[00:35] I'd do it as an HTML5 app
[00:36] george_e: notify is weak, use an indicator. :)
[00:36] if you miss the notify.. you never know the problem. Indicator will let you turn the envelope blue. :)
[00:36] SpamapS: It's going to be a Qt application - that's where my skills are.
[00:37] sweet
[00:37] I believe there is an AppIndicator package for Qt somewhere... but I don't think it made it into the Oneiric archives.
[01:04] It's here by the way: https://launchpad.net/juju-gui
[01:04] I have daily builds and a PPA set up for it.
=== _mup__ is now known as _mup_
[03:24] HTML5 app would be sweet too, that runs on the bootstrap <3
[03:25] For when you're out and about
[11:52] jcastro: done with my limesurvey charm. Review welcome :) bug #899849
[11:52] <_mup_> Bug #899849: New charm (Limesurvey) < https://launchpad.net/bugs/899849 >
=== _mup__ is now known as _mup_
=== mpl_ is now known as mpl
=== medberry is now known as Guest59067
[16:31] nijaba: reviewing your limesurvey charm now
[16:51] nijaba: review done.. *SO* close
[16:51] * SpamapS_ heads to brunch
=== SpamapS_ is now known as SpamapS
[22:27] Thanks SpamapS. Will try to fix and let you know :)
[22:31] hi! the docs are wrong for this IRC channel. ;) https://juju.ubuntu.com/docs/faq.html
[22:34] ... we really need to figure out what is wrong with LXC
[22:34] PTY allocation request failed on channel 0
[22:34] :-/
[22:37] kees: thanks, I'll fix that..
[22:37] so, I have had juju lose it's mind.
[22:38] it stopped launching systems, and complains that machine 9 is missing
[22:38] any ideas? :P
[22:40] MachineStateNotFound: Machine 9 was not found
[22:40] and no I can do no more provisioning.
[22:40] *now
[22:40] Interesting
[22:40] No I don't think I've seen that.. but I have seen the provisioning agent basically stop working..
[22:41] kees: if you read the environment of the provisioning agent on machine 0, you can re-start it (or it might be upstart managed in more recent releases, I haven't checked)
[22:41] kees: but I suspect its a problem in ZK and the problem will continue.
[22:42] ZK?
[22:42] zookepper.
[22:42] *eeper
[22:42] where does it store the details? seems like I could just _remove_ machine 9
[22:42] can I restart zookeepers without wrecking all the running units?
[22:43] kees: docs should refresh in the next hour
[22:43] kees: Zookeeper yet
[22:43] yeah
[22:43] kees: you can restart zookeeper yes, though I believe it may cause the agents to spew copious errors... they might even die... I forget if that bug was fixed yet
[22:47] kees: kees https://bugs.launchpad.net/juju/+bug/861928 .. is this maybe your bug ?
[22:47] <_mup_> Bug #861928: provisioning agent gets confused when machines are terminated < https://launchpad.net/bugs/861928 >
[22:50] SpamapS: yeah, that's totally my bug.
[22:50] SpamapS: any work-around?
[22:51] kees: I think you probably have to dive into ZK and remove the machine node
[22:51] SpamapS: where does it store it?
[22:51] if agents die, can I restart them, or are they just totally hosed?
[22:53] kees: you can restart them.. its getting easier with the branch that puts them in upstart jobs
[22:53] actually that may have landed recently
[22:54] hrm, I'm using whatever is in oneiric
[22:54] 398 ?
[22:54] kees: so you may have to dig through the cloud-init bits to find the execution line
[22:55] to restart the agent, or fix ZK?
[22:58] to restart the agent
[22:58] to fix ZK, there's a zookeeper client on machine 0
[22:58] right
[23:00] and how do I tell that client to forget about machine 9? :)
[23:00] /usr/share/zookeeper/bin/zkCli.sh I think
[23:01] kees: rm /machines/machine-000000009
[23:01] kees: I think
[23:02] "delete" instead of "rM" ?
[23:02] *rm?
[23:02] maybe
[23:02] rm, Node does not exist: /machines/machine-000000009
[23:03] ls /machines
[23:03] nothing there?
[23:03] * kees pokes harder
[23:03] nopes
[23:03] [machine-0000000025, machine-0000000024, machine-0000000000, machine-0000000001, machine-0000000010, machine-0000000006, machine-0000000007, machine-0000000008, machine-0000000002, machine-0000000023, machine-0000000004, machine-0000000005]
[23:03] so..
[23:03] it may be that the provisioning agent is internally confused
[23:03] so restarting it may fix your problem
[23:05] and it doesn't live in /etc/init nor /etc/init.d
[23:06] SpamapS: is there a correct way to restart it? Or just kill it and run python -m juju.agents.provision -n --logfile=/var/log/juju/provision-agent.log --pidfile=/var/run/juju/provision-agent.pid
[23:06] ?
[23:13] kees: I think you might need to replicate some env vars
[23:14] kees: have to run... good luck. ;)
[23:14] okay, thanks!
=== Guest59067 is now known as medberry
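An aside on the ZooKeeper debugging above: the node names in the `ls /machines` output are zero-padded to ten digits (e.g. `machine-0000000025`), while the path attempted in the log (`/machines/machine-000000009`) has only nine, so a padding mismatch alone could produce a "Node does not exist" error. The sketch below reproduces that bookkeeping in plain Python; `zk_path` and `missing_machines` are hypothetical helpers for illustration, not part of juju, and any real deletion would go through `zkCli.sh` (`delete /machines/...`) or a ZooKeeper client library against the actual cluster.

```python
def zk_path(machine_id: int) -> str:
    """Build a machine znode path, assuming the 10-digit zero-padded
    naming visible in the `ls /machines` output above."""
    return f"/machines/machine-{machine_id:010d}"

def missing_machines(znodes: list[str], known_ids: set[int]) -> set[int]:
    """Return machine ids the provisioning agent believes exist but
    that have no corresponding znode under /machines."""
    present = {int(name.split("-")[1]) for name in znodes}
    return known_ids - present

# The `ls /machines` output pasted in the log:
znodes = ["machine-0000000025", "machine-0000000024", "machine-0000000000",
          "machine-0000000001", "machine-0000000010", "machine-0000000006",
          "machine-0000000007", "machine-0000000008", "machine-0000000002",
          "machine-0000000023", "machine-0000000004", "machine-0000000005"]

print(zk_path(9))                     # /machines/machine-0000000009
print(missing_machines(znodes, {9}))  # {9}
```

Even with the correct ten-digit path, machine 9 is genuinely absent from the listed znodes, which is consistent with SpamapS's conclusion that the provisioning agent itself was internally confused and needed a restart.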