[00:06] <SpamapS> hazmat: looking now
[00:08] <SpamapS> hazmat: looks like it is .. note that it won't be installed on anyone's system because the version is < the one in 11.10
[00:08] <SpamapS> hazmat: 11.10 has 0.8.0-0ubuntu1 , the ppa has 0.8.0-0juju45~oneiric1 .. j < u
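The comparison SpamapS is describing is plain dpkg version ordering, which can be checked directly from a shell; for example:

    dpkg --compare-versions 0.8.0-0juju45~oneiric1 lt 0.8.0-0ubuntu1 && echo "PPA build sorts lower"
    # prints the message: in dpkg ordering the letter "j" sorts before "u",
    # so apt prefers the archive's 0ubuntu1 build over the PPA's 0juju45~oneiric1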
[00:09] <hazmat> bummer
[00:11] <SpamapS> hazmat: IMO its a good thing. :) This PPA shouldn't have anything more than you *need* to run juju.
[00:11] <SpamapS> hazmat: we probably need a dev PPA for stuff like that.
[00:12] <hazmat> SpamapS, it's not a breaker yet, but it will be in future ppa revs of juju
[00:15] <SpamapS> hazmat: at that point we will put the backport in the PPA.
[00:15] <hazmat> jimbaker, i'm wondering if it would be faster to just reset the groups on shutdown of ec2
[00:15] <hazmat> rather than playing the waiting game
[00:16] <jimbaker> hazmat, that does sound reasonable and equivalent
[00:16] <jimbaker> i think it was just an attempt to not create too much garbage
[00:17] <jimbaker> in terms of lots of security groups hanging around
[00:17] <jimbaker> hazmat, i'm pretty certain this is what was done in an earlier version, i don't know if that ever went through review
[00:18] <jimbaker> although the reset then was done at SG acquisition, so a bit different i guess
[00:18] <hazmat> hmm.. yeah
[00:18] <hazmat> jimbaker, group removal at shutdown almost never works for me
[00:18] <hazmat> it always gives up
[00:19] <hazmat> so i'm wondering if its worth the bother
[00:19] <jimbaker> hazmat, hmmm... it does tend to work for me, but i tend to just run the wordpress stack at most
[00:19] <hazmat> effectively.. i wait 30s.. and then.. 2011-12-08 19:14:20,668 ERROR Instance shutdown taking too long, could not delete groups juju-public-0
[00:19] <hazmat> and it moves on
[00:20] <jimbaker> yeah, and without ill effect, since it can just use those SGs anyway
[00:20] <hazmat> well it will try to delete them later as i recall
[00:20] <hazmat> and fail if it can't delete them
[00:21] <hazmat> ie. if you try to bootstrap immediately
[00:21] <hazmat> resetting the security group means no waiting or errors
[00:21] <jimbaker> hazmat, that does sound like a valid diff approach then
[00:21] <hazmat> on bootstrap we can go ahead and clear out any detected garbage
[00:22] <hazmat> ugh..
[00:22] <hazmat> that sounds rather odd though.. but the reality is the sgs are still present, so it's better than nothing
[00:26] <jimbaker> hazmat, it sounds reasonable to me. cleanup is supposed to solve the bounce problem seen in yes | juju destroy-environment && juju bootstrap - so if it doesn't, or not reliably, we need to revisit
[00:27] <hazmat> interesting that error kees saw only exhibits in the us-west-1 region
[00:27] <hazmat> the response from ec2 is different
[00:27] <hazmat> so txaws parsing goes awry
[00:27] <hazmat> when stringifying the error msg
[00:29] <_mup_> juju/provisioning-agent-bug-901901 r431 committed by kapil.thangavelu@canonical.com
[00:29] <_mup_> let the logging package format the exception
[00:29] <jimbaker> hazmat, that is very interesting
[00:43] <adam_g> is it possible to change default-image-id at deploy time?
[00:44] <fwereade_> hazmat, is it deliberate that there's no RelationWorkflow transition from error -> departed?
[00:44] <adam_g> http://paste.ubuntu.com/764414/  :|
[00:49] <hazmat> fwereade_, i believe it was, but in retrospect it seems reasonable that there should be one
[00:49] <fwereade_> hazmat, cool, cheers
[00:49] <hazmat> hmm
[00:51] <fwereade_> hazmat, even if we don't want to fire a departed hook I think we need to be able to make that transition
[00:52] <fwereade_> hazmat, I could be convinced either way on the fire-hook question
[01:08] <osadmin> Whenever I reboot my host juju reports it as stopped even though it is running. Anyone know how to fix this?
[01:10] <osadmin> Anyone, Whenever I reboot my host juju reports it as stopped even though it is running. Anyone know how to fix this?
[01:11] <hazmat> osadmin, having juju survive reboots is a work in progress atm, which provider are you using?
[01:11] <osadmin> hazmat, provider? not sure but I am running the most up-to-date ubuntu server version
[01:12] <hazmat> osadmin, are you running juju services on ec2, or physical machines via orchestra, or local/lxc dev on a machine
[01:13] <osadmin> hazmat, running on physical machines via orchestra. hosts are running openstack
[01:17] <hazmat> osadmin, could you pastebin the output of juju status
[01:17] <hazmat> osadmin, at the moment, agents that juju launches aren't set to come back up on machine boot, it's something that's being worked on though.
[01:18] <osadmin> hazmat, will do, and fyi here is the doco I followed to create the env
[01:18] <osadmin> hazmat, https://wiki.edubuntu.org/ServerTeam/UbuntuCloudOrchestraJuju
[01:20] <osadmin> hazmat, http://pastebin.com/HuzfJqiq
[01:21] <osadmin> hazmat, is there anyway I can manually reset the agent status?
[01:22] <hazmat> osadmin, yes, it's a little involved, but the command that launched the agent is in the cloud-init userdata
[01:23] <hazmat> osadmin, it's in the output of... sudo cat /var/lib/cloud/instance/user-data.txt
[01:24] <osadmin> hazmat, that would be great as I am using "juju ssh" to access the hosts
[01:26] <osadmin> hazmat, ok I have logged into the host and am looking at that file now.
[01:28] <osadmin> hazmat, what do I do with this? Sorry (noob to this stuff)
[01:28] <hazmat> osadmin, hm.. that will start the machine agent.. but that won't start the unit agents..
[01:29] <hazmat> osadmin, so for example this is what i have my in output of that file.. http://pastebin.ubuntu.com/764439/
[01:30] <hazmat> osadmin, the command to run the agent is embedded in there... for that output its this one.. JUJU_MACHINE_ID=3 JUJU_ZOOKEEPER=ip-10-176-22-254.us-west-1.compute.internal:2181    python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid
[01:30] <hazmat> you'd just run that with a sudo prefix on the cli
[01:31] <hazmat> the machine will start reporting in, it looks like it will restart the unit agents, so that should do it
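Pieced together, the one-liner being described looks like this (the machine id and ZooKeeper address come from your own user-data.txt, so they will differ per machine):

    sudo JUJU_MACHINE_ID=3 \
         JUJU_ZOOKEEPER=ip-10-176-22-254.us-west-1.compute.internal:2181 \
         python -m juju.agents.machine -n \
         --logfile=/var/log/juju/machine-agent.log \
         --pidfile=/var/run/juju/machine-agent.pid
    # sudo accepts leading VAR=value assignments, so both variables reach the agent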
[01:32] <osadmin> hazmat, lost my irc for a moment, back now and will look over the pastebin
[01:33] <osadmin> hazmat, ok
[01:35] <hazmat> osadmin, fwiw i'd recommend running from the ppa, we keep it pretty stable, and when the restartable feature/bug fix lands, it will be there first, there's also some additional status output and fixes that are useful for orchestra usage.
[01:37] <osadmin> hazmat, getting errors I will paste what I did
[01:38] <osadmin> hazmat, http://pastebin.com/dr6BSMZe (added sudo to this command)
[01:38] <hazmat> osadmin, there's a trailing '] that shouldn't be there
[01:39] <osadmin> hazmat, oh, I removed that and got an error, will paste the error
[01:40] <osadmin> http://pastebin.com/CBAqj0gj
[01:40] <osadmin> hazmat, http://pastebin.com/CBAqj0gj
[01:46] <hazmat> osadmin, the full command should look like this..
[01:46] <hazmat>  JUJU_MACHINE_ID=3 JUJU_ZOOKEEPER=ip-10-176-22-254.us-west-1.compute.internal:2181    python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid
[01:46] <hazmat> ie. it specifies environment variables
[01:46] <hazmat> the whole line needs to be used
[01:47] <osadmin> hazmat, I did the following was this wrong?  export JUJU_MACHINE_ID=4; export JUJU_ZOOKEEPER=oscc-01.itos.deakin.edu.au:2181
[01:48] <hazmat> osadmin, that should be fine
[01:48] <hazmat> osadmin, you can't use sudo then
[01:48] <hazmat> the shell environment won't persist through the sudo
[01:48] <hazmat> you'd have to use a root shell if you're going to do it that way
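The failure mode here: `export` only sets the variables in the calling shell, and sudo's env_reset scrubs them before the agent starts, so the agent comes up without its machine id. Running from a root shell keeps the exports visible:

    sudo -s       # become root; exports made below persist for the agent
    export JUJU_MACHINE_ID=4
    export JUJU_ZOOKEEPER=oscc-01.itos.deakin.edu.au:2181
    python -m juju.agents.machine -n \
        --logfile=/var/log/juju/machine-agent.log \
        --pidfile=/var/run/juju/machine-agent.pid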
[01:49] <osadmin> ok
[01:49] <osadmin> trying
[01:49] <osadmin> no errors
[01:50] <osadmin> hazmat, juju status has not changed however
[01:52] <osadmin> hazmat, I can now "juju ssh" into the host, I will recheck juju status again
[01:53] <osadmin> hazmat, status still says stopped
[01:54] <hazmat> osadmin, can you pastebin the machine agent log file /var/log/juju/machine-agent.log
[01:54] <osadmin> ok
[01:55] <hazmat> osadmin, there's a cli tool that makes that easier.. apt-get install pastebinit
[01:55] <hazmat> and then you can.. cat /var/log/juju/machine-agent.log | pastebinit
[01:55] <hazmat> and it will give you a url
[01:55] <osadmin> thx
[01:56] <hazmat> bcsaller, jimbaker could i get a +1 on this trivial.. http://paste.ubuntu.com/764452/
[01:57] <osadmin> hazmat, host may not be able to get out at this stage. May have to do it the old-fashioned way.
[02:00] <bcsaller> hazmat: lgtm
[02:00] <osadmin> hazmat, here is the tail of the file you requested http://pastebin.com/Z1QgvpEC
[02:06] <osadmin> hazmat, here is the whole log file. http://pastebin.com/u9SWwc5x
[02:06] <hazmat> hm..
[02:07] <hazmat> osadmin, could you paste log file at /var/lib/juju/units/nova-compute-1/charm.log
[02:08] <hazmat> osadmin, the machine agent looks like its running fine.. the charm.log will show the service unit agent log file
[02:08] <osadmin> hazmat, ok fyi: here is the juju status output. http://pastebin.com/qAhdTggJ
[02:09]  * hazmat nods
[02:11] <_mup_> juju/trunk r431 committed by kapil.thangavelu@canonical.com
[02:11] <_mup_> [trivial] provisioning agent fix, let the logging package format the exception [f=901901][r=bcsaller]
[02:12] <osadmin> hazmat: tail of the file for starters. http://pastebin.com/Ge63NQvg
[02:22] <osadmin> hazmat, the whole of the requested log file is here: http://pastebin.com/9jChCVnS
[02:24] <osadmin> hazmat: 2nd try http://pastebin.com/iHfLuWUh
[02:27] <osadmin> hazmat, lol, grabbed too much with that last pastebin, u may have to scroll down a bit to see the contents of the log file
[02:47] <hazmat> osadmin, yeah.. that's not going to recover without some surgery.. you're probably better off just removing the unit, terminating the machine, and adding a new unit
[02:48] <hazmat> ie. juju remove-unit nova-compute/1, juju terminate-machine 4, juju add-unit nova-compute
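That is, with the unit and machine numbers taken from the status paste above:

    juju remove-unit nova-compute/1    # drop the wedged unit
    juju terminate-machine 4           # release the machine it ran on
    juju add-unit nova-compute         # bring up a replacement on a fresh machine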
[02:53] <osadmin> hazmat, thanks. Will do but first, will doing this delete any apps from nova-compute/1?
[02:54] <hazmat> osadmin, it will
[02:54] <hazmat> well.. it probably will
[02:54] <osadmin> ok
[02:54] <hazmat> i'm not sure if orchestra is going to reinstall the machine when it's cleared out
[02:54] <hazmat> er. shut down
[02:55] <hazmat> for the next boot.. my understanding is atm it doesn't, so the data would still be there, but i wouldn't count on it
[02:55] <osadmin> I guess I could wait until the fix is released
[02:56] <osadmin> hazmat, d u think the fix will be a while away?
[02:57] <hazmat> osadmin, the fix won't help for an existing installation, there's a branch in review which implements it
[02:57] <hazmat> so not too far away
[02:57] <hazmat> probably another week or two
[02:58] <osadmin> hazmat, thats ok, I will be rebuilding this very soon. If timing is right, I will build with the fixed version. D u think release bfore xmas is poss?
[02:58] <osadmin> ok
[02:58] <osadmin> thanks
[02:58] <hazmat> osadmin, np
[02:59] <osadmin> hazmat, what d u use juju for mainly?
[08:05] <nijaba> Good morning
[08:30] <Sander^work> does juju work with vmware ?
[10:06] <nijaba> SpamapS: @ubuntucloud will republish your tweets, except if your tweets start with either "@ubuntucloud" or "RT" or "♺".  Hence why your tweet was not retweeted
[10:07] <nijaba> SpamapS: so move @ubuntucloud toward the end, and it will be retweeted
[11:01] <shafiqissani> how to deploy wordpress to a single instance
[11:02] <shafiqissani> i.e. bootstrap instance + mysql instance + wordpress instance all on the same instance
[11:05] <rog> shafiqissani: you can't do that currently.
[11:05] <shafiqissani> I see
[11:08] <fwereade> shafiqissani, some people have been bringing up single EC2 instances and running the local provider on just that one instance
[11:09] <fwereade> shafiqissani, so it's not *impossible*, but it is not a configuration we would recommend for production
[11:11] <shafiqissani> fwereade: I know it is not the optimal configuration but imagine it to be on the line of shared hosting
[11:12] <shafiqissani> fwereade: a site or service that does not require high availability and gets very little traffic would be a scenario for such a configuration
[11:12] <fwereade> shafiqissani, indeed, there are interesting possibilities when units can share machines, and we plan to do something about that -- but it's not on the current roadmap yet
[11:13] <shafiqissani> hm so the solution for now is an ec2 instance with all the deploys running on local configuration using lxc as base
[11:13] <shafiqissani> man virtualization inside of virtualization! ... is it just me or does that sound crazy :D
[11:14] <fwereade> shafiqissani, yep; that's the current one-machine solution
[11:14] <fwereade> shafiqissani, heh, I take your point, but juju isn't necessarily working with ec2 "machines": it could be working with real hardware managed by orchestra
[11:15] <shafiqissani> fwereade: right, the service level abstraction ... got it
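A sketch of the environments.yaml stanza for that one-machine setup — key names follow the local-provider configuration doc linked later in this log, and every value here is a placeholder:

    environments:
      single-box:
        type: local
        data-dir: /home/ubuntu/.juju-local
        admin-secret: replace-me
        default-series: oneiric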
[12:10] <nijaba> Has anyone used juju scp command successfully?
[12:17] <rog> nijaba: jimbaker's the one to ask about that :-)
[12:17] <nijaba> rog: actually his mail to the ml describing it is more useful than the help for the command.  Got it to work now!
[12:18] <rog> nijaba: cool!
[12:20] <hazmat> Sander^work, no re vmware virtualization, yes wrt cloud foundry, rabbitmq, etc.
[12:22] <hazmat> nijaba, unfortunate.. it probably should be the help for the command
[12:23] <hazmat> nijaba, what's unclear about the output of juju scp -h
[12:32] <nijaba> hazmat: I think it just lacks an example.  or maybe the "[remote_host:]file1" should be "[remote_host:]sourcefile1" and  [remote_host:]file2 be [remote_host:]destfile1
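For instance, the help text could carry a pair of examples in the `[remote_host:]file` form it already advertises (unit names and paths below are made up):

    juju scp ./backup.sql mysql/0:/tmp/backup.sql     # local file up to a unit
    juju scp wordpress/0:/var/log/juju/unit.log .     # unit file down to here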
[12:37] <nijaba> hazmat: also, what would be really cool is to be able to use scp from a charm to the bootstrap machine.  This way I could put some files on bootstrap and scp the files from it to the charm automagically
[12:40] <nijaba> hazmat: but I guess I am trying to work around bug 814974
[12:40] <_mup_> Bug #814974: config options need a "file" type <juju:Triaged by jimbaker> < https://launchpad.net/bugs/814974 >
[12:52] <_mup_> Bug #902143 was filed: juju set <service_name> --filename does not work <juju:New> < https://launchpad.net/bugs/902143 >
[12:52] <hazmat> nijaba, indeed
[13:07] <koolhead11> marcoceppi: hey
[13:46] <hazmat> bcsaller, none of your branches show up on the kanban view
[13:52] <Sander^work> hazmat, Can juju install several wordpress installations to one apache and one mysql server?
[13:53] <fwereade> hazmat, UnitLifecycle._process_relation_changes has an interesting little dance where all removed relation workflows are explicitly stopped before any depart transitions are fired
[13:54] <hazmat> Sander^work, no, juju would model those as separate services, the wordpress charm is not done in a multi-tenant fashion
[13:54]  * hazmat puts on his dancing shoes
[13:54] <fwereade> hazmat, this seems to be intended to ensure that no other hook executions (from joined, say) can sneak in once we know that we're departing
[13:55] <hazmat> fwereade, interesting, indeed, that seems quite correct
[13:55] <fwereade> hazmat, but I don't see how it can work; stop itself will yield
[13:56] <hazmat> fwereade, the logical flow to depart takes into account the yield
[13:57] <fwereade> hazmat, sorry, don't follow, restate please
[13:57] <hazmat> fwereade, at the end of the stop, the scheduler is stopped, there may be a hook execution that will happen before the depart, but the depart will be last
[13:59] <Sander^work> hazmat, so it's possible to create a new wordpress charm that can be deployed twice to one instance?
[14:00] <hazmat> fwereade, the concurrency on the yield isn't relevant in this context, because at the end of the stop method the scheduler, which serves as a sync point, is stopped, and concurrent notifications/executions go through the scheduler; the depart directly schedules on the executioner, and it will be post any concurrent activity from the rel.
[14:00] <hazmat> Sander^work, juju doesn't do density outside of the local provider atm
[14:01] <hazmat> and the local provider isn't routable
[14:01] <fwereade> hazmat, ...if that's the case, why don't we just stop inside the do_depart method on workflow?
[14:01] <fwereade> hazmat, which we do in fact do
[14:02] <Sander^work> hazmat, what is a local provider?
[14:03] <fwereade> hazmat, I'm worried we really shouldn't execute normal relation hooks at all once we know we've departed, because we can't be sure that all the relevant state still exists
[14:03] <hazmat> Sander^work, https://juju.ubuntu.com/docs/provider-configuration-local.html
[14:04] <hazmat> fwereade, moving stop to inside do_depart is fine, but i don't see how that changes what happens
[14:04] <hazmat> fwereade, we execute stop immediately after we're notified
[14:04] <fwereade> hazmat, but this may just be because I'm still a little bit unsure about (1) what state needs to exist to run a relation hook and (2) what state may or may not be suddenly cleared by client operations
[14:04] <hazmat> fwereade, and the zk structures are in place
[14:06] <fwereade> hazmat, if all the necessary zk structures will remain in place throughout all client operations, then there's no need for the dance, right?
[14:06] <hazmat> fwereade, the comment directly reasons why the dance is there
[14:07] <hazmat> to avoid things like.. modify after depart
[14:07] <Sander^work> hazmat, what do you mean by "do density" ?
[14:07] <hazmat> Sander^work, multiple units on a single 'machine'
[14:08] <fwereade> hazmat, you just said "fwereade, moving stop to inside do_depart is fine, but i don't see how that changes what happens"; I'm confused
[14:08] <Sander^work> hazmat, ah, ok. Is there any reason why it dosn't do density outside of the local provider?
[14:08] <fwereade> hazmat, either all we care about is stop-before-depart, in which case we can move it; or the little stop-everything-and-only-then-depart-everything dance is unnecessary
[14:09] <fwereade> hazmat, ...right?
[14:09] <fwereade> hazmat, sorry, scrambled something there
[14:13] <fwereade> hazmat, stepping back
[14:14] <fwereade> hazmat, (1) the only thing we care about is that no other relation hooks can fire once the relation-broken hook has done so; agree?
[14:15] <fwereade> hazmat, (2) once we've called stop(), we can be sure that no other relation hooks will fire; agree?
[14:16] <hazmat> Sander^work, there's some work that will achieve density in a consenting-adults fashion via unit placement/resource constraints, and there's additional work being done to allow subordinate charms to live in a container with a parent/master charm for things like logging etc. The main reason for the lack of density in a rigorous fashion is that juju allows dynamic port usage by a charm, and this is problematic when putting two independent
[14:16] <hazmat> charms with port conflicts on the same machine, as the conflict is undetectable a priori. there's some talk of using something like a soft network overlay to alleviate that for density, but it's not on the roadmap atm
[14:16] <fwereade> hazmat, (3) therefore, we can call lifecycle.stop() in workflow.do_depart(), and we can guarantee that from that point on no further hooks can be scheduled, so we're safe to just run lifecycle.depart(); agree?
[14:18] <hazmat> 1) yes, 2) yes, but one may be currently executing, 3) yes
[14:18] <fwereade> hazmat, and if you do agree with all the above, I don't understand the purpose of the dance, because it's just duplicating work already done in do_depart
[14:18] <hazmat> fwereade, the purpose of the dance is to immediately stop all broken hooks
[14:19] <hazmat> fwereade, if you do it in depart, you're executing depart hooks while more hooks for broken relations can be executing, as the rels are serially stopped.
[14:20] <hazmat> whereas the dance ensures all rels that are broken are stopped, and then executes their individual depart hooks
[14:20] <hazmat> er. broken hooks via depart transition
[14:20] <Sander^work> hazmat, I would like to see a difference on density when it comes to applications that use another service's port. 2x Wordpress can easily be installed into one apache instance without any port issues.
[14:21] <hazmat> Sander^work, you could write a wordpress charm that encapsulated that capability, ie multi-tenant wordpress hosting in a single unit
[14:22] <fwereade_> hazmat, is that correct, or am I still missing something?
[14:23] <hazmat> fwereade, say i have 5 broken relations, the current dance ensures all 5 are stopped before executing any of their depart hooks
[14:24] <hazmat> fwereade, you're suggesting that we go through each of the rels, stop it, execute its broken hook, and then process the next
[14:24] <fwereade_> hazmat, what would be the negative consequences of failing to do so?
[14:25] <fwereade_> hazmat, really that we just go through each and fire the departed transition, and trust the transition to ensure the lifecycle is stopped
[14:25] <hazmat> fwereade, the problem is that there may be events for those 5 that are happening and scheduling/executing hooks while you're executing for the one.. ie you're processing them in serial
[14:25] <hazmat> which means you're getting hook execution for those not processed, even though the rel is known to be broken
[14:26] <Sander^work> hazmat, Am I understanding it right?.. So I can then deploy wordpress installs on demand into customers' directories for one fixed apache instance?
[14:27] <fwereade_> hazmat, ok, that's fine; but we can't be sure that won't happen anyway, can we? we yield several times in the course of stopping all those lifecycles, and the not-yet-stopped ones could still be scheduling hooks
[14:27] <hazmat> Sander^work, a charm can do whatever it wants to do on a machine, in this case you'd have to write the charm yourself
[14:27] <hazmat> the existing wordpress charm doesn't address that use case
[14:29] <fwereade_> hazmat, and if it's a situation we're already prepared to accept, I don't see that reducing its incidence is exceptionally important
[14:29] <hazmat> fwereade_, indeed it's an optimistic guarantee not an absolute, if there is concurrent activity happening at that sec
[14:29] <Sander^work> hazmat, Ok. Do you know about any documents I should read to be able to write a charm like that?
[14:29] <fwereade_> hazmat, and the consequences of unjustified optimism could be, at worst, ..?
[14:29] <hazmat> fwereade_, the goal is minimizing hook execution for hooks known broken, waiting on a scheduler is minimal
[14:30] <hazmat> waiting on hook executions creates a large gap
[14:30] <fwereade_> hazmat, ok, thanks for clearing that up; the original comment seemed to me to be suggesting that the stop would prevent *any* extra hooks from slipping in
[14:34] <hazmat> fwereade_, we could probably offer a better guarantee of that, if we stopped the executor, but given that's a shared resource i felt more comfortable with minimizing the possibility.. and the reality is that there is the possibility that a rel hook is executing when we get the notification the rel is broken
[14:35] <hazmat> since the schedulers feed into the executor, stopping it there suffices
[14:35] <marcoceppi> koolhead11: hey
[14:35] <fwereade_> hazmat, yeah, I pondered stopping the executor, it wouldn't be a nice solution
[14:35] <hazmat> and the currently executing rel hook
[14:35] <hazmat> is always a possibility
[14:36] <fwereade_> hazmat, I must be missing something about the significance of a currently executing rel hook
[14:36] <hazmat> fwereade_, feel free to add to the comment about this
[14:36] <fwereade_> hazmat, I will :)
[14:38] <hazmat> Sander^work, well the general understanding of charms helps, but first just figuring out how you do it outside of charms is helpful
[14:39] <hazmat> Sander^work, http://askubuntu.com/questions/82683/what-juju-charm-hooks-are-available-and-what-does-each-one-do  http://askubuntu.com/questions/84656/where-can-i-find-the-logs-of-irc-charm-school
[14:44] <SpamapS> http://www.debian-administration.org/article/Installing_Redmine_with_MySQL_Thin_and_Redmine_on_Debian_Squeeze  ... looks like a charm to me. ;)
[14:45] <jimbaker> nijaba, sure, sounds like a good idea to augment juju scp (and other commands that need it) with more example-oriented help
[14:49] <SpamapS> jimbaker: we call that "man pages"
[14:49] <SpamapS> and you guys wanted me to make juju auto-generated which I've been looking into
[14:50] <SpamapS> err.. language.. not quite unthawed from sleep.. rrrrrr
[14:52] <TheMue> We don't have a kind of "juju retrieve-environment ..." to retrieve an environment set up somewhere else and merge it into one's own.
[14:54] <TheMue> The intention is that a new 2nd operator can easily extend his own environment to take over the administration of an existing one.
[14:57] <Sander^work> hazmat, is it possible to write a charm that deploys eg. wordpress over an ftp connection?
[14:59] <SpamapS> TheMue: I think that would be brilliant
[14:59] <SpamapS> Sander^work: no, juju is built on the ability to own whole servers.
[15:00] <SpamapS> Sander^work: you could write a charm which deploys a webservice + ftp onto a machine which accepts wordpress uploads. ;)
[15:04] <TheMue> SpamapS: Aaargh, "bootstrap" has to be renamed! I always do the same typo here. (smile)
[15:04] <Sander^work> SpamapS, Okay.. Is it possible to deploy a charm where an ldap database defines which uid/gid the deployed files are owned by?
[15:04] <SpamapS> Sander^work: certainly
[15:05] <SpamapS> Sander^work: things like system policy are hard right now.. dev work has just begun on a feature to separate system policy charms from service charms.
[15:07] <Sander^work> I'm using apache with an ldap module and mod_fcgid so every vhost gets its own uid.
[15:07] <SpamapS> Sander^work: yeah, that would be quite doable
[15:09] <Sander^work> Would love to be able to deploy our whole architecture through a set of charms :-)
[15:10] <SpamapS> TheMue: one thing to consider with the idea of retrieve-environment is that there is a desire, eventually, for environments.yaml to be limited to only facts that help you find and authenticate to the environment...
[15:10] <SpamapS> TheMue: any of the settings would be stored and managed inside ZK
[15:11] <SpamapS> Sander^work: we'd love for you to be able to do that too.
[15:12] <SpamapS> Sander^work: charms are just scripts in whatever language you want... so you can just duplicate whatever you have now into a charm. :)
[15:15] <TheMue> So the new admin only needs to get those facts. Once added, his commands would use the ZK on the bootstrap instance, wouldn't they?
[15:15] <mchenetz> I tried asking this on the Vagrant chat, but i think everyone is asleep. :-) Has anyone tried to implement Juju in Vagrant? I would be interested in working on that if not.
[15:27] <TheMue> SpamapS: Where do I find the environment on the bootstrap instance? Only in ZK or does a file exist?
[15:39] <nijaba> SpamapS: roundcube charm now has https support
[15:42] <SpamapS> mchenetz: No but I figure its probably possible
[15:43] <SpamapS> mchenetz: the local provider is basically vagrant-like tho
[15:44] <mchenetz> Spamaps: hmmm I am just learning Juju… What is the local provider?
[15:44] <SpamapS> mchenetz: spins up 'machines' by way of LXC containers
[15:44] <SpamapS> mchenetz: instead of using EC2 or a hardware provisioning system
[15:45] <SpamapS> mchenetz: so its quite useful for testing things disconnected
[15:45] <mchenetz> hmmm, interesting. I will look into that. I still think it would be nice to integrate it into Vagrant as i use it a lot and it already has chef and puppet...
[15:45] <SpamapS> mchenetz: juju is more like vagrant than chef or puppet
[15:45] <mchenetz> Definitely… I do a lot of deployments in the cloud for some huge customers… Juju is definitely going to be a big part of my future!
[15:46] <mchenetz> I watched the webinar yesterday and my head is spinning with ideas
[15:47] <SpamapS> mchenetz: so it wouldn't really make sense for vagrant to run juju at the same level as chef or puppet... juju doesn't have a DSL or a big library of configuration tools. Its just for coordinating and orchestrating these encapsulated services.
[15:47] <SpamapS> mchenetz: I was "Clint" from the webinar. :) any questions?
[15:47] <SpamapS> mchenetz: and thanks for watching!!
[15:48] <SpamapS> mchenetz: I'm quite interested to hear how your vagrant knowledge maps to juju.
[15:49] <mchenetz> hehe, i asked the security question the other day. I am mainly an enterprise security consultant. So, i am thinking about how i can create charms that would encompass some security vm's into the solution. I am thinking about creating some special firewall and ids modules that integrate with juju charms
[15:50] <mchenetz> I will definitely keep you informed on how Vagrant and juju map up. :-)
[15:50] <SpamapS> mchenetz: complex networking, thus far, has not been a part of the juju conversation.. but the colocation (or actually, subordination) work that is going on will enable that quite nicely.
[15:51] <SpamapS> mchenetz: note that the security model of juju is still evolving, I'd love to hear your input on how important it is. There are a few bugs tagged "security" that are sort of our second priority.
[15:51] <mchenetz> I would like to be able to say add-firewall port-80 relation or something to that effect and it will add a firewall and maybe some ids monitoring too
[15:52] <SpamapS> mchenetz: well in EC2 nothing is accessible from outside -> inside
[15:52] <SpamapS> mchenetz: we use the ec2 ingress firewall extensively
[15:52] <mchenetz> thats true… I am not just thinking ec2 though…
[15:52] <SpamapS> mchenetz: you could write a firewall subordinate charm and do exactly what you're talking about
[15:52] <mchenetz> thats what i am thinking about
[15:53] <SpamapS> mchenetz: subordinate charms are just charms that live inside the same container as other charms
[15:54] <mchenetz> yeah.. I'm a little familiar with how the charm structure works now. I am quickly getting up to speed.
[15:54] <mchenetz> I would love to help out on the security side if you guys need any assistance
[15:58] <TheMue> Hmmm, funny, I can expose a wordpress w/o a mysql instance. I would have expected an error due to the unfulfilled requirement.
[16:00] <SpamapS> TheMue: the wordpress charm should not have any open port yet though
[16:00] <SpamapS> TheMue: open-port 80 should only happen after the db is configured
[16:01] <SpamapS> TheMue: since the system is async.. its not an "error" .. you just don't get any open port
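That ordering is the charm's responsibility; sketched as a wordpress-style db-relation-changed hook (the relation keys are the ones the mysql charm is assumed to set):

    #!/bin/sh
    set -e
    database=`relation-get database`
    user=`relation-get user`
    password=`relation-get password`
    host=`relation-get private-address`
    # ...write wp-config.php from the values above, and only then:
    open-port 80    # until this runs, `juju expose` has nothing to open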
[16:04] <koolhead11> i am trying to deploy a charm and i need some assistance
[16:04] <koolhead11> i have moved the charm from /usr/share/doc/juju/oneiric directory
[16:04] <koolhead11> to my /home/juju directory
[16:04] <SpamapS> hazmat: I'm still really confused why docs needs to be a separate series and why we can't just agree that the docs dir under the trunk has a different policy. I'm *very* concerned now that the docs will get out of sync w/ trunk.
[16:05] <koolhead11> when am trying  juju deploy --repository=/home/atul/juju  local:mysql
[16:05] <TheMue> SpamapS: I understand, and I should have had a debug-log open. *gna*
[16:06] <koolhead11> ERROR Charm 'local:oneiric/mysql' not found in repository /home/atul/juju
[16:06] <SpamapS> TheMue: I don't necessarily think that having debug-log going all the time is a good idea ;)
[16:07] <SpamapS> koolhead11: you need the series in there
[16:07] <SpamapS> koolhead11: mkdir /home/atul/juju/oneiric
[16:07] <SpamapS> koolhead11: and move the charms into that dir
[16:07] <koolhead11> SpamapS: ok
[16:08] <TheMue> SpamapS: debug-hooks are better? I currently want to see what's going on.
[16:09] <koolhead11> so SpamapS my charm will be in /home/atul/juju/oneiric
[16:09] <SpamapS> TheMue: while developing and learning its probably a good idea.. I think though at some point we have to look at it as users of the charm, who won't necessarily be able to consume all of that data.
[16:09] <koolhead11> and i will deploy with
[16:09] <SpamapS> koolhead11: right
[16:09] <koolhead11> juju deploy --repository=/home/atul/juju  local:mysql
[16:09] <koolhead11> ok
[16:10] <SpamapS> koolhead11: that is necessary so that we can match the OS series with the charms for that OS
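Putting the whole fix together:

    mkdir -p /home/atul/juju/oneiric                    # series directory
    mv /home/atul/juju/mysql /home/atul/juju/oneiric/   # charms live under it
    juju deploy --repository=/home/atul/juju local:mysql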
[16:10] <mchenetz> Where do i find information on using a local provider in Juju?
[16:11] <hazmat> SpamapS, let's give it a try, we can evaluate before 12.04 if it's not worthwhile and move it back, but i'm hoping it's still a benefit to getting doc contributions
[16:13] <SpamapS> hazmat: as long as we agree to actually put a version number on juju so the disconnected docs can be written to a specific version, it should work. I'm just not confident about that. ;)
[16:14] <hazmat> TheMue, there was a spec out for doing import / export of environments, but it ran afoul of the desire for a design of service groups aka stacks as a first-class entity that was modeled and agreed upon.
[16:14] <hazmat> SpamapS, we call winners on that bet at uds ;-)
[16:15] <SpamapS> hazmat: we should maybe think about putting version strings in juju and having a release process now that we have, you know, users. ;)
[16:15] <hazmat> SpamapS, i should investigate Read the Docs some more. i know we tried it and moved on, but i believe it has support for multiple versions
[16:16] <TheMue> hazmat: thx for the info
[16:16] <_mup_> Bug #902219 was filed: config values of 0 are discarded <juju:New> < https://launchpad.net/bugs/902219 >
[16:16] <hazmat> SpamapS, sounds good, would you mind putting in  a bug for that?
[16:17] <mchenetz> Found the doc: https://juju.ubuntu.com/docs/provider-configuration-local.html, Doesn't survive reboots? That isn't good for my scenario as i stage development code in the local environment.
[16:20] <hazmat> mchenetz, it will survive reboots for 12.04, but no, it doesn't survive reboots, or even hibernation, at the moment.
[16:21] <hazmat> mchenetz, you'd also have to manually connect the bridge that the lxc containers are bound to, to allow external connectivity to them off the host.
[16:21] <hazmat> or port forward from the host
[16:22] <mchenetz> okay. good to know. can i make a charm for that. :-)
[16:23] <mchenetz> What i would like to do is give developers a local environment to develop in and then move the units to the cloud for test an production.
[16:23] <mchenetz> Which i think is the whole purpose of juju. Easily move and provision units.
[16:26] <SpamapS> hazmat: this feels like a blueprint, there are really 3 things that need to happen. 1- add versions to the juju --help, 2- ratify and agree to maintain a stable branch. 3- Setup a PPA that just has the latest stable release.
[16:26] <hazmat> mchenetz, well.. when you say move.. it's not moving the data, you can develop/stage locally and then copy the configuration/charms to a different cloud, but that doesn't sync data.. you'd need a separate charm/service for data syncing, right now.. juju environments do not bridge clouds.
[16:27] <hazmat> each environment is specific to a provider, but you can have multiple environments in a given provider.
[16:27] <SpamapS> mchenetz: for what you're talking about.. you'd just repeat the local deployment into the cloud
[16:27] <mchenetz> I was thinking that i can utilize charms that would say, install mysql, and apache, and then create the relations for multiple environments. As long as i create charms that have the glue code then it should work. Correct?
[16:28] <mchenetz> And again… I am still learning Juju… I learn very fast, but if it sounds like a stupid question it's just that i haven't learned it all yet. :-)
[16:29] <hazmat> mchenetz, yes the charms are meant to capture the configuration/best practices for a service in a provider independent fashion
[16:29] <SpamapS> mchenetz: eventually there's the idea that we'd be able to create a relationship between two environments .. but thats not done yet. ;)
[16:29] <hazmat> mchenetz, so you could deploy the same mysql/apache/appserver setup in multiple environments
[16:29] <mchenetz> Spamaps… That sounds awesome
[16:30] <marcoceppi> SpamapS: I eagerly await that idea :)
[16:30] <SpamapS> mchenetz: it's already possible.. you can write a cloud-bridge charm that exchanges anything you need to exchange between the two envs.. and just use service configs to get them talking to each other.
[16:30] <hazmat> and even then you need an underlying setup that syncs data
[16:31] <SpamapS> I bet I could get the mysql charm to expose config settings to allow external slaves/masters
[16:31] <hazmat> well maybe not, it could just offer access, but wan connectivity solutions are better at a data tier
[16:31] <mchenetz> I don't mind creating "glue" to connect disparate environments. I just need to know the limitations so that i implement things in the most efficient way.
[16:32] <hazmat> marcoceppi, mchenetz so the first cut at gluing disparate environments is a proxy charm that will relay notifications to a remote endpoint
[16:32] <hazmat> you'd deploy a proxy service in each environment, bind it locally to the relations of interest, and then connect the proxy endpoints
[16:32] <hazmat> at least that's one option
[16:33] <SpamapS> I took a stab at making an 'othercloud' charm that would use the juju client to talk to another juju env but the lack of wildcard interfaces made it not work like I wanted.
[16:33]  * hazmat nods
[16:33] <hazmat> SpamapS, it would need support in the core, for basically assuming the interfaces of the proxy target
[16:33] <mchenetz> hazmat: that makes sense… To me, it sounds like i would use ssh to create a tunnel between the environments
[16:33] <marcoceppi> SpamapS:  hazmat: I was under the impression that orchestra was preferred for stringing multiple clouds into one env?
[16:33] <SpamapS> really.. if you want cross-cloud cross-AZ .. you probably want to make conscious decisions about what crosses those boundaries.
[16:33] <mchenetz> Then just run commands locally and remotely
[16:33] <hazmat> mcclurmc, any secure transport would work
[16:34] <hazmat> whoops
[16:34] <SpamapS> marcoceppi: that wouldn't really work. ;)
[16:34] <hazmat> mchenetz, i was thinking zeromq with encrypted messages would do it
[16:34] <mchenetz> I haven't used that. I will definitely look into it
[16:34] <hazmat> but i'm very much thinking like an app developer ;-)
[16:34] <SpamapS> hazmat: yeah, thats when I gave up on it, when I realized unless I can make relations dynamic it just won't work.
[16:35] <mchenetz> It's interesting… I grew up as a hacker of code with bbs's in the early days and then became a network engineer. So, i think in terms of both code and network infrastructure. :-)
[16:35] <SpamapS> hazmat:  I think an ops guy would be fine with that as long as it was simple to understand and monitor.
[16:36] <SpamapS> mchenetz: WWIV vs. Telegard ... GO
[16:36] <mchenetz> hehe, i ran rbis originally and then WWIV, good old wayne bell
[16:36] <hazmat> SpamapS, so this goes further into a notion of charms that juju distributes and core services, given which we could offer additional syntax for cross-env relations
[16:37] <SpamapS> hazmat: yeah that would make sense.
[16:39]  * SpamapS prepares for an Ubuntu bug triage rampage today
[16:39] <mchenetz> To me, as long as you create the appropriate abstraction on the top level and the disparate environments have similar functionality then it really shouldn't matter what environment you are on...
[16:39] <marcoceppi> I guess I'm just confused about how to best tackle a scenario using Juju
[16:40] <mchenetz> there should be the idea of move-unit [environment]
[16:40] <mchenetz> and again.. i don't know juju that much yet...
[16:41] <marcoceppi> I have three bare metal machines, lets say, each running an acceptable provider by Juju - I assume each would be its own juju environment then?
[16:42] <marcoceppi> nevermind actually
[16:42] <hazmat> mchenetz, that assumes integrated volume management storage, even for just unit migration within an environment, and frankly at scale moving data across wans is a non transparent operation to QOS.
[16:43] <hazmat> its potentially a  huge impact on network resources, and a multi-day operation
[16:43] <mchenetz> hazmat: As long as the backend code accommodates the variables for that. Why would it matter? You can give the instructions for syncing code and throttling and so forth...
[16:44] <mchenetz> syncing db's and such...
[16:44] <mchenetz> It could say use-link [interface]  throttle [50%] of link or something like that
[16:45] <mchenetz> this is all conceptual
[16:45] <mchenetz> It could then set the proper qos tagging and such in the backend and setup the interface to use and maybe the timeframe
[16:46] <hazmat> mchenetz, move-unit is a generic capability to any service.. what a service/charm chooses to expose can be accommodated by something like a proxy without charm knowledge, or the functionality could be incorporated directly into a charm.
[16:46] <hazmat> mchenetz, more interesting though, juju right now is in its infancy wrt how it approaches networking.. i'm curious though what you would think of juju managing a soft overlay network that spanned machines
[16:47] <mchenetz> That would be very interesting. So, are you talking about creating a networking abstraction that would be unrelated to a single machine?
[16:48] <mchenetz> Can you elaborate on what you are thinking?
[16:49] <hazmat> mchenetz, yes.. this is a while out most likely.. but for the notion of getting unit density on a machine, where each unit is an lxc container, to be abstract to a provider, we need to establish a soft overlay net that we'd plug the lxc containers into, probably with something like openvswitch or just using openstack's quantum
[16:50] <hazmat> part of the problem is that we end up needing a bridge to reconnect the overlay, but the notion is that for exposed services we would port forward
[16:50] <hazmat> it gives us much better capabilities to expose in terms of setting up vlans etc
[16:51] <hazmat> but it's also a pita
[16:51] <mchenetz> hmmm… interesting.. You could potentially keep the environments networked permanently through the virtual switch and then exchange data and move things where they need to be. I like it… It doesn't seem like it would take too much either.
[16:58] <mchenetz> I will definitely have more to contribute in the upcoming weeks as i learn juju. It's definitely a project i would like to be involved in.
[16:59] <mchenetz> I am just ingesting all of the knowledge right now. :-)
[17:02] <hazmat> mchenetz, awesome, probably the best way to get introduced to juju is to write a charm or have a look at some existing ones.. http://charms.kapilt.com
[17:02] <mchenetz> I am planning on writing many charms and looking at existing ones. ;-)
[17:04] <koolhead11> hazmat: are revision and config.yaml compulsory files to have in a charm?
[17:05] <koolhead11> i am learning juju by writing the simplest charm, which does things simply with apt-get install
[17:06] <koolhead11> i moved the mysql example into the same directory and that part worked and the charm got initialized
[17:09] <hazmat> koolhead11, config.yaml isn't, revision is
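So a minimal charm like boa is just a metadata.yaml, a revision file holding an integer, and executable hooks; a sketch of laying one out from scratch (names and wording assumed from the conversation):

    mkdir -p example/oneiric/boa/hooks
    cat > example/oneiric/boa/metadata.yaml <<'EOF'
    name: boa
    summary: toy charm that installs the boa webserver
    description: minimal charm used to learn the on-disk format
    EOF
    echo 1 > example/oneiric/boa/revision
    cat > example/oneiric/boa/hooks/install <<'EOF'
    #!/bin/sh
    set -e
    apt-get install -y boa
    EOF
    chmod +x example/oneiric/boa/hooks/install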
[17:10] <koolhead11> hazmat: i am using the existing mysql example
[17:10] <koolhead11> and i see a file with name revision there
[17:10] <koolhead11> nothing mentioned about it in the config.yaml file
[17:11] <koolhead11> so i created both files accordingly and created hooks sub directory inside it
[17:12] <koolhead11> added options: {} in config.yaml file
[17:13] <koolhead11> i am just clueless why this thing is not working :(
[17:14] <hazmat> koolhead11, what do you mean by not working?
[17:14] <hazmat> can you deploy your charm?
[17:14] <koolhead11> hazmat: i get error in deploying charm i wrote
[17:17] <koolhead11> hazmat: http://paste.ubuntu.com/765110/
[17:18] <koolhead11> i have created a directory inside example named "oneiric"
[17:18] <koolhead11> and put the charm for boa inside it
[17:18] <koolhead11> and executing juju deploy --repository=example local:boa
[17:18] <koolhead11> while my pwd is /home/atul
[17:18] <hazmat> koolhead11, the path to boa should be /home/atul/example/oneiric/boa
[17:19] <koolhead11> hazmat: that is where boa is
[17:19] <mchenetz> I see some charm developers are using augeas to create/modify configs. This seems interesting. I never heard of that tool
[17:19] <hazmat> that directory should contain the metadata.yaml file; if you're using a recent ppa and there's a syntax error in the charm, it should report it
[17:20] <koolhead11> i am using oneiric and installed default juju
[17:20] <hazmat> mchenetz, its a bit like a generic dom api for configuration, some folks prefer writing out the whole config, some prefer patching in place.
[17:20] <koolhead11> from repo rather PPA
[17:20] <hazmat> koolhead11, so you are using the ppa?
[17:21] <mchenetz> Is there a standard you guys like in terms of directories? I notice the .aug files are in the root instead of the hooks directory...
[17:21] <koolhead11> hazmat: i have not added any PPA manually, installed juju which came with default
[17:21] <koolhead11> with oneiric
[17:21] <mchenetz> I really should read the documentation. :-)
[17:22] <koolhead11> hazmat: /home/atul/example/oneiric/boa   its very much here
[17:22] <koolhead11> and also i have metadata.yaml file there
[17:23] <hazmat> koolhead11, then you probably have a yaml error
[17:24] <hazmat> koolhead11, the ppa version will detect and report yaml errors, the default version in oneiric just won't find the charm
[17:24] <koolhead11> then why does the error log say error in path
[17:24] <koolhead11> hazmat: point me to the PPA, am upgrading juju from there
[17:26] <hazmat> koolhead11, sudo add-apt-repository ppa:juju/pkgs && sudo apt-get update && sudo apt-get upgrade juju
[17:26] <koolhead11> cool
[17:27] <SpamapS> koolhead11: can you push your charm up to a branch on launchpad?
[17:27] <koolhead11> SpamapS: sure once am home.
[17:29] <koolhead11> catch u guys in sometime
[17:29]  * koolhead11 rushes 4 home
[17:30] <nijaba> SpamapS: if you feel like reviewing a charm, feel free to take a look at my roundcube one ;)
[17:31] <SpamapS> nijaba: I have some other queues to tend to today (server bug triage and SRU's), but I won't be able to resist reviewing your charm all weekend. ;)
[17:31] <SpamapS> marcoceppi: did you already have a look at it?
[17:31] <nijaba> SpamapS: he did
[17:31] <SpamapS> Oh, so, what do you need me for? ;)
[17:32] <nijaba> SpamapS: to make it official? can't wait for your comments either, especially on the https handling
[17:32] <SpamapS> as the #4 contributor to lp:charm (see https://launchpad.net/charm) I'd say he's quite qualified to ack and promulgate it :)
[17:33] <fwereade__> I need to stop for a little while, back later
[17:33] <SpamapS> I've actually wanted roundcube for some time as I plan to replace my crappy hastymail solution with it. :)
[17:34] <nijaba> SpamapS: he said he would feel more comfortable with you reviewing it first.  He might need some re-assurance :)
[17:34] <mchenetz> I know this is a matter of opinion but… I have been using Eucalyptus for a long time because it is API compliant with Amazon EC2. Is there any advantage to going over to openstack? Anything from a juju side?
[17:34] <SpamapS> nijaba: roger that. I'll take a look between ubuntu bugs and SRU's ;)
[17:34] <nijaba> SpamapS: no hurry. cheers
[17:34] <SpamapS> mchenetz: Euca is very expensive to scale up
[17:35] <mchenetz> spamaps: I always hear that
[17:35] <SpamapS> mchenetz: if you have a working euca solution with a narrow focus, probably best to just stick with it.
[17:35] <nijaba> mchenetz: and hard to make HA (if posisble)
[17:35] <mchenetz> I am definitely thinking about scalability for my customers. I think i am going to have to look at openstack...
[17:36] <mchenetz> I think i will keep my dev in Eucalyptus
[17:36] <nijaba> mchenetz: the fact that more and more providers are announcing public clouds based on OpenStack feels very re-assuring
[17:37] <SpamapS> mchenetz: OpenStack is also more loosely coupled.. I find that attractive.
[17:37] <mchenetz> I will definitely have to put it on my agenda to get familiar with Openstack...
[17:37] <SpamapS> robbiew: hey how did your BOF go?
[17:37] <mchenetz> Thanks for the comments
[17:37] <robbiew> SpamapS: was great..attendance was so-so...but the BoFs started at 8pm
[17:38] <robbiew> after dinner
[17:38] <robbiew> luckily mine was BEFORE Google's beer and "icecream" social
[17:38] <robbiew> :P
[17:38] <robbiew> we should definitely have a Charm School at the next year's
[17:38] <robbiew> TOTALLY our crowd here
[17:38] <robbiew> and next year will be in San Diego...not Boston.
[17:39] <mchenetz> Any plans on an east coast charm school? I live close to Philly and about 1hr from NY
[17:39] <robbiew> mchenetz: jcastro is the man with the plan on Charm Schools...I would expect we'd have one though
[17:40] <robbiew> we'd prefer it to be tied to some sort of pre-existing event though....the ones on IRC are pretty good too ;)
[17:40] <mchenetz> I would have to get in touch with him. I don't think i saw one when i looked
[17:40] <mchenetz> There's irc ones?
[17:41] <SpamapS> robbiew: *w00t*
[17:41] <SpamapS> mchenetz: I'd hope we can co-locate a charm school with Surge
[17:42] <SpamapS> mchenetz: we plan to have a charm school every couple of months on IRC
[17:42] <mchenetz> SpamapS: cool…. sounds good
[17:43] <marcoceppi> SpamapS: Cool, wasn't sure if you wanted a two-person review or not :)
[17:44] <marcoceppi> nijaba: I'll take a look at the SSL implementation and promulgate it after lunch :)
[17:44] <SpamapS> marcoceppi: only if you feel like you aren't 100% sure its ok
[17:44] <marcoceppi> SpamapS: gotchya, cool
[17:47] <nijaba> marcoceppi: thanks :)
[17:55] <robbiew> SpamapS: I did a local deployment of ThinkUp to demo the LXC stuff...worked perfectly
[17:55] <robbiew> then I showed them a real ec2 deployment of the same thing...already running to compare
[17:55] <robbiew> then I blew both away...and then did a live hadoop ec2 deployment
[17:55] <robbiew> ran terasort...and scaled it live
[17:55] <robbiew> BAM!
[17:56] <rog> i'm off for the weekend now. see y'all tuesday (i'm off monday)
[17:56] <robbiew> had ganglia setup with auto-refresh plugin for chrome browser
[17:56] <robbiew> everyone was really impressed...and they GOT it, b/c this crowd knows their shit
[17:57] <hazmat> robbiew, nice
[17:57] <hazmat> rog, cheers
[17:57] <nijaba> robbiew: impressive!
[17:57] <nijaba> welcome home, koolhead17 ;)
[17:57] <robbiew> then I told them I learned all this on Tuesday
[17:57] <robbiew> :)
[17:57] <koolhead17> hehe nijaba:)
[17:58] <robbiew> though I've obviously known HOW to do it for quite some time...but never got my hands "dirty" until this week
[17:58] <nijaba> rog: have a good long we
[17:58] <robbiew> I've found juju to be a bit addictive...like, "what can I deploy now?"
[17:59] <nijaba> robbiew: tell me about it.  I'm finding myself asking what else can I charm :D
[18:00] <robbiew> this whole juju thing just *might* have legs
[18:01] <robbiew> lol
[18:01] <SpamapS> robbiew: \o/ .. sounds awesome
[18:02] <robbiew> SpamapS: yeah...LISA'12 is going on my "give them money and love" list for next year
[18:02] <SpamapS> robbiew: we should write a script that just deploys the entire charm store on t1.micro's and relates everything that can be related
[18:02] <SpamapS> Right now that would be like, 40 t1.micro's .. so it would cost about $1.
[18:02] <robbiew> lol
[18:02] <robbiew> one charm to rule them all!
[18:03] <SpamapS> juju deploy *
[18:03] <robbiew> oh man
[18:03] <robbiew> you'd have to script the relations though
[18:03] <robbiew> or just talking about deploying only
[18:04] <SpamapS> yeah
[18:04] <SpamapS> 1 haproxy would not actually be able to serve every website, unfortunately
[18:05] <robbiew> heh
[18:05] <marcoceppi> Juju was great, until I got my Amazon bill :P
[18:05] <SpamapS> haproxy is a "monogamous" charm.
[18:05] <SpamapS> marcoceppi: LOL!
[18:06] <SpamapS> marcoceppi: this is all an evil plan to get people to setup openstack clouds
[18:06] <marcoceppi> SpamapS: I'm seriously considering setting up an openstack cloud at my house
[18:11]  * koolhead17 searching 4 a two-minute bzr guide
[18:12] <jelmer> koolhead17: how about 5 minutes? http://doc.bazaar.canonical.com/latest/en/mini-tutorial/
[18:12] <koolhead17> jelmer: 3 mins is ok :) thanks
[18:13] <koolhead17> jelmer: am on LTS so all these commands will work for it too. :)
[18:13] <mchenetz> SpamapS: I plan to have an Openstack server running at my house this weekend. ;-)
[18:14] <koolhead17> LTS lucid
[18:15] <robbiew> mchenetz: nice!
[18:15] <EvilBill> I'm itching for more info on deploying openstack with juju.
[18:15] <robbiew> fwiw, we're making sure openstack is easily deployable from the installer in 12.04
[18:15] <robbiew> EvilBill: talk to adam_g :)
[18:16] <EvilBill> I will. Played with juju about three weeks ago when I was between gigs, dug it a lot, but curious as to why the bootstrap node can't co-exist with where the juju client is running.
[18:16] <EvilBill> or maybe I'm just conceptually missing something.
[18:17] <EvilBill> I was trying to multitask between spending time with the family and learning something new, and I think I didn't do either thing very well.
[18:17] <robbiew> family? what's that?
[18:17] <EvilBill> lol
[18:19] <SpamapS> EvilBill: the bootstrap is really just ZooKeeper + the provisioning agent. It can't live with the client because clients can come and go (laptops.. workstations, etc)
[18:19] <EvilBill> OK, well, what I did was setup an orchestra server and had it working with two other machines at home with wake-on-lan, etc.
[18:19] <EvilBill> so Orchestra would rev up a bare-metal box on command
[18:20] <EvilBill> tied that into juju, and when I'd do a juju bootstrap, a machine would turn on and go install and become the bootstrap node
[18:20] <SpamapS> EvilBill: You *can* make the bootstrap node a VM on the orchestra server
[18:20] <EvilBill> but a full machine JUST for bootstrapping seems silly.
[18:20] <EvilBill> That's what I didn't get around to playing with or figuring out
[18:20] <SpamapS> EvilBill: we even talked about having that be the default at one point.
[18:21] <SpamapS> EvilBill: its fairly simple to provision VMs in cobbler just like regular machines.
[18:21] <EvilBill> ok, so that begs the next question, what's the preferred VM framework?
[18:22] <EvilBill> my orchestra machine is an old laptop with a Core Duo, so it's not 64-bit capable.
[18:22] <EvilBill> which means I don't think it'll run KVM.
[18:24] <marcoceppi> I'd like to get a public openstack cloud open to test and develop charms against, but I feel that might be something for next year
[18:30] <SpamapS> EvilBill: 'kvm-ok' will tell you that
[18:31] <SpamapS> EvilBill: you *could*, in theory, have it provision an LXC container, but you'd have to figure out how to run the "late_command" bit to get juju to start its agents.
[18:31] <SpamapS> EvilBill: it might also be possible to simply have the cobbler machine register *itself* as a cobbler system, and then do the same thing.
[18:32] <SpamapS> marcoceppi: we've talked about having an openstack cloud which we make available to ubuntu members who want to work on bugs/testing
[18:32] <marcoceppi> SpamapS: I think it's a cool idea
[18:33] <marcoceppi> Limit it to charm-contributors even
[18:33] <marcoceppi> ?
[18:35] <koolhead17> EvilBill: i think there is a wiki which spells charm magic 4 openstack deployment already!! :D
[18:35] <SpamapS> marcoceppi: I'd like to see it open to even those who are not interested in juju.
[18:36] <marcoceppi> SpamapS: so would this just be a free cloud for anyone to play in?
[18:36] <SpamapS> marcoceppi: I think we'd require ubuntu membership and have a limited number of seats available
[18:36] <marcoceppi> I see
[18:38] <adam_g> http://wiki.openstack.org/FreeCloud
[18:38] <adam_g> ^^ not sure what the status of that is ATM, tho
[18:47] <koolhead17> SpamapS: https://code.launchpad.net/~koolhead17/charm/oneiric/boa/trunk
[18:47] <koolhead17> hope you will not laugh, as it has nothing magical
[18:47] <koolhead17> made it to understand juju better :)
[18:49] <SpamapS> adam_g: *nice* I didn't know it had even been written down like that. :)
[18:57] <SpamapS> koolhead17: that's fine. It should work.. it's not clear at all to me why it's not working. :-/
[18:58] <koolhead17> SpamapS: hehe. i would love to see if someone tests it on AWS
[18:58] <koolhead17> i have no idea why i was getting the error i mentioned earlier on LXC
[18:59] <koolhead17> i created a directory name example/oneiric/boa*
[18:59] <koolhead17> and repository = /home/atul/example
[18:59] <koolhead17> :P
[19:00] <koolhead17> i moved the mysql charm from the examples directory to that path and it worked, but not the boa one
[19:05] <SpamapS> koolhead17: ls -l /home/atul/example/oneiric
[19:06] <koolhead17> it will have the boa and mysql directories :)
[19:06] <koolhead17> SpamapS: am home now sir!!
[19:07] <SpamapS> koolhead17: so.. um, you can't test at home, but you can't push to bzr at work? :-/
[19:07] <SpamapS> koolhead17: you should be using the local provider at home
[19:08] <koolhead17> SpamapS: i tried juju formula writing on LXC at work, synced it to my home computer and then pushed it to bzr once i came home
[19:09] <koolhead17> i will install Oneiric tomorrow
[19:09] <koolhead17> its been in list for ages, :D
[19:09]  * koolhead17 is addicted to LTS
[19:24] <SpamapS> koolhead17: yeah, LXC isn't quite usable in 10.04 unfortunately
[19:24] <koolhead17> SpamapS: honestly am waiting 4 precise b4 i can format my poor lappy, that is why i do all charming stuff in office :P
[19:25] <koolhead17> 2 GB baby
[19:27] <mpl> niemeyer: I think I've got something going for zk + ssh. but it's still very messy so I'm gonna clean it up before showing it to you guys.
[19:27] <niemeyer> mpl: Ohh, sweet
[19:28] <mpl> gonna go home, dinner, and rest a bit first though. ttyl
[20:25] <marcoceppi> I can't find config-changed anywhere in the hooks documentation
[20:29] <hazmat> marcoceppi, yeah.. that's a problem.. it's in a separate document on service config
[20:30] <hazmat> i just started to pull the docs into a separate branch to hopefully make them easier to contribute to.. they're at the bzr branch lp:juju/docs
[20:31] <marcoceppi> Ah, cool - thanks for the heads up hazmat
[20:31] <hazmat> marcoceppi, re the config-hook docs atm https://juju.ubuntu.com/docs/drafts/service-config.html#creating-charms
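The hook itself is just another executable in hooks/; a minimal config-changed sketch (the option name is hypothetical and would need a matching entry in config.yaml):

    #!/bin/sh
    set -e
    title=`config-get blog-title`
    echo "config-changed fired, blog-title=$title" >> /var/log/juju-config-demo.log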
[20:39] <marcoceppi> After a charm is reviewed and deemed needs more work, should I just remove the new-charm tag?
[20:44] <_mup_> juju/ssh-known_hosts r431 committed by jim.baker@canonical.com
[20:44] <_mup_> Refactored
[20:46] <_mup_> juju/ssh-known_hosts r432 committed by jim.baker@canonical.com
[20:46] <_mup_> Merged trunk
[20:47] <nijaba> marcoceppi: just put the bug back to state incomplete
[20:49] <nijaba> marcoceppi: once the problems are addressed, the requester will put it back to state "fix-committed"
[20:53]  * nijaba takes off. Have fun
[20:54] <EvilBill> SpamapS: Coming back after a lengthy meeting… you could get cobbler to register itself? Hm, that's an idea, but I wouldn't want it to try to PXE boot itself...
[21:07] <koolhead17> bye nijaba
[21:40] <_mup_> juju/ssh-known_hosts r433 committed by jim.baker@canonical.com
[21:40] <_mup_> Simplify by keeping public keys in ProviderMachine itself
[21:41] <_mup_> juju/ssh-known_hosts r434 committed by jim.baker@canonical.com
[21:41] <_mup_> Refactored bootstrap
[22:12] <_mup_> juju/ssh-known_hosts r435 committed by jim.baker@canonical.com
[22:12] <_mup_> Fix error handling for refactored bootstrap
[23:26] <_mup_> Bug #902384 was filed: Service units get stuck in debug mode. <juju:New> < https://launchpad.net/bugs/902384 >
[23:43] <SpamapS> EvilBill: no you wouldn't pxeboot it .. you'd just run the final "late command" that juju sticks into the pre-seed