[00:06] hazmat: looking now [00:08] hazmat: looks like it is .. note that it won't be installed on anyone's system because the version is < the one in 11.10 [00:08] hazmat: 11.10 has 0.8.0-0ubuntu1 , the ppa has 0.8.0-0juju45~oneiric1 .. j < u [00:09] bummer [00:11] hazmat: IMO its a good thing. :) This PPA shouldn't have anything more than you *need* to run juju. [00:11] hazmat: we probably need a dev PPA for stuff like that. [00:12] SpamapS, its not a breaker yet, but it will be in future ppa revs of juju [00:15] hazmat: at that point we will put the backport in the PPA. [00:15] jimbaker, i'm wondering if it would be faster to just reset the groups on shutdown of ec2 [00:15] rather than playing the waiting game [00:16] hazmat, that does sound reasonable and equivalent [00:16] i think it was just an attempt to not create too much garbage [00:17] in terms of lots of security groups hanging around [00:17] hazmat, i'm pretty certain this is what was done in an earlier version, i don't know if that ever went through review [00:18] although the reset then was done at SG acquisition, so a bit different i guess [00:18] hmm.. yeah [00:18] jimbaker, group removal at shutdown almost never works for me [00:18] it always gives up [00:19] so i'm wondering if its worth the bother [00:19] hazmat, hmmm... it does tend to work for me, but i tend to just run the wordpress stack at most [00:19] effectively.. i wait 30s.. and then.. 2011-12-08 19:14:20,668 ERROR Instance shutdown taking too long, could not delete groups juju-public-0 [00:19] and it moves on [00:20] yeah, and without ill effect, since it can just use those SGs anyway [00:20] well it will try to delete them later as i recall [00:20] and fail if it can't delete them [00:21] ie. if you try to bootstrap immediately [00:21] resetting the security group means no waiting or errors [00:21] hazmat, that does sound like a valid diff approach then [00:21] on bootstrap we can go ahead and clear out any detected garbage [00:22] ugh.. [00:22] that sounds rather odd though.. but the reality is the sgs are still present, so its better than nothing [00:26] hazmat, it sounds reasonable to me. cleanup is supposed to solve the bounce problem seen in yes | juju destroy-environment && juju bootstrap - so if it doesn't, or not reliably, we need to revisit [00:27] interesting that the error kees saw only exhibits in the us-west-1 region [00:27] the response from ec2 is different [00:27] so txaws parsing goes awry [00:27] when stringifying the error msg [00:29] <_mup_> juju/provisioning-agent-bug-901901 r431 committed by kapil.thangavelu@canonical.com [00:29] <_mup_> let the logging package format the exception [00:29] hazmat, that is very interesting [00:43] is it possible to change default-image-id at deploy time? [00:44] hazmat, is it deliberate that there's no RelationWorkflow transition from error -> departed? [00:44] http://paste.ubuntu.com/764414/ :| [00:49] fwereade_, i believe it was, but in retrospect it seems reasonable that there should be one [00:49] hazmat, cool, cheers [00:49] hmm [00:51] hazmat, even if we don't want to fire a departed hook I think we need to be able to make that transition [00:52] hazmat, I could be convinced either way on the fire-hook question [01:08] Whenever I reboot my host juju reports it as stopped even though it is running. Anyone know how to fix this? [01:10] Anyone, Whenever I reboot my host juju reports it as stopped even though it is running. Anyone know how to fix this?
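A minimal sketch of the "reset instead of delete" idea hazmat floats above, written against boto purely for illustration (juju itself used txaws; the connection setup and function here are assumptions, not juju's actual code):

```python
# Hypothetical sketch: rather than waiting for instances to shut down so a
# security group can be deleted, strip its ingress rules so the group is
# inert and can simply be reused on the next bootstrap.
import boto.ec2

def reset_security_group(conn, group_name):
    """Revoke every ingress rule on an EC2 security group."""
    group = conn.get_all_security_groups(groupnames=[group_name])[0]
    for rule in list(group.rules):
        for grant in list(rule.grants):
            # Covers plain CIDR grants; group-to-group grants would also
            # need src_security_group_name handling.
            conn.revoke_security_group(
                group_name=group_name,
                ip_protocol=rule.ip_protocol,
                from_port=rule.from_port,
                to_port=rule.to_port,
                cidr_ip=grant.cidr_ip)

conn = boto.ec2.connect_to_region("us-west-1")
reset_security_group(conn, "juju-public-0")  # group name taken from the log above
```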
[01:11] osadmin, having juju survive reboots is a work in progress atm, which provider are you using? [01:11] hazmat, provider? not sure but I am running the most up-to-date ubuntu server version [01:12] osadmin, are you running juju services on ec2, or physical machines via orchestra, or local/lxc dev on a machine [01:13] hazmat, running on physical machines via orchestra. hosts are running openstack [01:17] osadmin, could you pastebin the output of juju status [01:17] osadmin, at the moment, agents that juju launches aren't set to come back up on machine boot, its something thats being worked on though. [01:18] hazmat, will do, and fyi here is the doco I followed to create the env [01:18] hazmat, https://wiki.edubuntu.org/ServerTeam/UbuntuCloudOrchestraJuju [01:20] hazmat, http://pastebin.com/HuzfJqiq [01:21] hazmat, is there any way I can manually reset the agent status? [01:22] osadmin, yes, its a little involved, but the command that launched the agent is in the cloud-init userdata [01:23] osadmin, its the output of... sudo cat /var/lib/cloud/instance/user-data.txt [01:24] er. its in the output of [01:24] hazmat, that would be great as I am using "juju ssh" to access the hosts [01:26] hazmat, ok I have logged into the host and am looking at that file now. [01:28] hazmat, what do I do with this? Sorry (noob to this stuff) [01:28] osadmin, hm.. that will start the machine agent.. but that won't start the unit agents.. [01:29] osadmin, so for example this is what i have in my output of that file.. http://pastebin.ubuntu.com/764439/ [01:30] osadmin, the command to run the agent is embedded in there... for that output its this one.. JUJU_MACHINE_ID=3 JUJU_ZOOKEEPER=ip-10-176-22-254.us-west-1.compute.internal:2181 python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid [01:30] you'd just run that with a sudo prefix on the cli [01:31] the machine will start reporting in, it looks like it will restart the unit agents, so that should do it [01:32] hazmat, lost my irc for a moment, back now and will look over the pastebin [01:32] osadmin, the command to run the agent is embedded in there... for that output its this one.. JUJU_MACHINE_ID=3 JUJU_ZOOKEEPER=ip-10-176-22-254.us-west-1.compute.internal:2181 python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid [01:32] you'd just run that with a sudo prefix on the cli [01:33] hazmat, ok [01:35] osadmin, fwiw i'd recommend running from the ppa, we keep it pretty stable, and when the restartable feature/bug fix lands, it will be there first, there's also some additional status output and fixes that are useful for orchestra usage. [01:37] hazmat, getting errors I will paste what I did [01:38] hazmat, http://pastebin.com/dr6BSMZe (added sudo to this command) [01:38] osadmin, there's a trailing '] that shouldn't be there [01:39] hazmat, oh, I removed that and got an error, will paste the error [01:40] http://pastebin.com/CBAqj0gj [01:40] hazmat, http://pastebin.com/CBAqj0gj [01:46] osadmin, the full command should look like this.. [01:46] JUJU_MACHINE_ID=3 JUJU_ZOOKEEPER=ip-10-176-22-254.us-west-1.compute.internal:2181 python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid [01:46] ie. it specifies environment variables [01:46] the whole line needs to be used [01:47] hazmat, I did the following, was this wrong?
export JUJU_MACHINE_ID=4; export JUJU_ZOOKEEPER=oscc-01.itos.deakin.edu.au:2181 [01:48] osadmin, that should be fine [01:48] osadmin, you can't use sudo then [01:48] the shell environment won't persist through the sudo [01:48] you'd have to use a root shell if you're going to do it that way [01:49] ok [01:49] trying [01:49] no errors [01:50] hazmat, juju status has not changed however [01:52] hazmat, I can now "juju ssh" into the host, I will recheck juju status again [01:53] hazmat, status still says stopped [01:54] osadmin, can you pastebin the machine agent log file /var/log/juju/machine-agent.log [01:54] ok [01:55] osadmin, there's a cli tool that makes that easier.. apt-get install pastebinit [01:55] and then you can.. cat /var/log/juju/machine-agent.log | pastebinit [01:55] and it will give you a url [01:55] thx [01:56] bcsaller, jimbaker could i get a +1 on this trivial.. http://paste.ubuntu.com/764452/ [01:57] hazmat, host may not be able to get out at this stage. May have to do it the old-fashioned way. [02:00] hazmat: lgtm [02:00] hazmat, here is the tail of the file you requested http://pastebin.com/Z1QgvpEC [02:06] hazmat, here is the whole log file. http://pastebin.com/u9SWwc5x [02:06] hm.. [02:07] osadmin, could you paste log file at /var/lib/juju/units/nova-compute-1/charm.log [02:08] osadmin, the machine agent looks like its running fine.. the charm.log will show the service unit agent log file [02:08] hazmat, ok fyi: here is the juju status output. http://pastebin.com/qAhdTggJ [02:09] * hazmat nods [02:11] <_mup_> juju/trunk r431 committed by kapil.thangavelu@canonical.com [02:11] <_mup_> [trivial] provisioning agent fix, let the logging package format the exception [f=901901][r=bcsaller] [02:12] hazmat: tail of the file for starters. http://pastebin.com/Ge63NQvg [02:22] hazmat, whole of the requested log file is here: http://pastebin.com/9jChCVnS [02:24] hazmat: 2nd try http://pastebin.com/iHfLuWUh [02:27] hazmat, lol, grabbed too much with that last pastebin, u may have to scroll down a bit to see the contents of the log file [02:47] osadmin, yeah.. that's not going to recover without some surgery.. you're probably better off just removing the unit, terminating the machine, and adding a new unit [02:48] ie. juju remove-unit nova-compute/1, juju terminate-machine 4, juju add-unit nova-compute === Guest46496 is now known as jrgiffor1 [02:53] hazmat, thanks. Will do but first, will doing this delete any apps from nova-compute/1? [02:54] osadmin, it will [02:54] well.. it probably will [02:54] om [02:54] ok [02:54] i'm not sure if orchestra is going to reinstall the machine when its cleared out [02:54] er. shutdown [02:55] for the next boot.. my understanding is atm it doesn't, so the data would still be there, but i wouldn't count on it [02:55] I guess I could wait until the fix is released [02:56] hazmat, d u think the fix will be a while away? [02:57] osadmin, the fix won't help for an existing installation, there's a branch in review which implements it [02:57] so not too far away [02:57] probably another week or two [02:58] hazmat, thats ok, I will be rebuilding this very soon. If timing is right, I will build with the fixed version. D u think release before xmas is poss? [02:58] ok [02:58] thanks [02:58] osadmin, np [02:59] hazmat, what d u use juju for mainly? [08:05] Good morning [08:30] does juju work with vmware ? === TeTeT_ is now known as TeTeT [10:06] SpamapS: @ubuntucloud will republish your tweets, except if your tweets start with either "@ubuntucloud" or "RT" or "♺".
Hence why your tweet was not retweeted [10:07] SpamapS: so move @ubuntucloud toward the end, and it will be retweeted [11:01] how to deploy wordpress to a single instance [11:02] i.e. bootstrap instance + mysql instance + wordpress instance all on the same instance [11:05] shafiqissani: you can't do that currently. [11:05] I see [11:08] shafiqissani, some people have been bringing up single EC2 instances and running the local provider on just that one instance [11:09] shafiqissani, so it's not *impossible*, but it is not a configuration we would recommend for production [11:11] fwereade: I know it is not the optimal configuration but imagine it to be on the line of shared hosting [11:12] fwereade: a site or service that does not require high availability and gets very little traffic would be a scenario for such a configuration [11:12] shafiqissani, indeed, there are interesting possibilities when units can share machines, and we plan to do something about that -- but it's not on the current roadmap yet [11:13] hm so the solution for now is an ec2 instance with all the deploys running on local configuration using lxc as base [11:13] man virtualization inside of virtualization! ... is it just me or does that sound crazy :D [11:14] shafiqissani, yep; that's the current one-machine solution [11:14] shafiqissani, heh, I take your point, but juju isn't necessarily working with ec2 "machines": it could be working with real hardware managed by orchestra [11:15] fwereade: right, the service level abstraction ... got it [12:10] Has anyone used juju scp command successfully? [12:17] nijaba: jimbaker's the one to ask about that :-) [12:17] rog: actually his mail to the ml describing it is more useful than the help for the command. Got it to work now! [12:18] nijaba: cool! [12:20] Sander^work, no re vmware virtualization, yes wrt cloud foundry, rabbitmq, etc. [12:22] nijaba, unfortunate.. it probably should be the help for the command [12:23] nijaba, what's unclear about the output of juju scp -h [12:32] hazmat: I think it just lacks an example. or maybe the "[remote_host:]file1" should be "[remote_host:]sourcefile1" and [remote_host:]file2 be [remote_host:]destfile1 [12:37] hazmat: also, what would be really cool, is to be able to use scp from a charm to the bootstrap machine. This way I could put some file on bootstrap and scp the files from it to the charm automagically [12:40] hazmat: but I guess I am trying to work around bug 814974 [12:40] <_mup_> Bug #814974: config options need a "file" type < https://launchpad.net/bugs/814974 > [12:52] <_mup_> Bug #902143 was filed: juju set --filename does not work < https://launchpad.net/bugs/902143 > [12:52] nijaba, indeed [13:07] marcoceppi: hey [13:46] bcsaller, none of your branches show up on the kanban view [13:52] hazmat, Can juju install several wordpress installations to one apache and one mysql server?
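Illustrative usages of the sort nijaba suggests adding to the juju scp help above; the file names are invented, and the unit-name-as-remote-host syntax is assumed from the "[remote_host:]file1" form quoted in the discussion:

```sh
# copy a local file up to a service unit (the unit name stands in for the host)
juju scp ./backup.sql mysql/0:/tmp/backup.sql

# fetch a unit's charm log back to the local machine
# (the charm.log path follows the /var/lib/juju/units/<unit>/charm.log
# pattern mentioned earlier in this log)
juju scp wordpress/0:/var/lib/juju/units/wordpress-0/charm.log .
```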
[13:53] hazmat, UnitLifecycle._process_relation_changes has an interesting little dance where all removed relation workflows are explicitly stopped before any depart transitions are fired [13:54] Sander^work, no, juju would model those as separate services, the wordpress charm is not done in a multi-tenant fashion [13:54] * hazmat puts on his dancing shoes [13:54] hazmat, this seems to be intended to ensure that no other hook executions (from joined, say) can sneak in once we know that we're departing [13:55] fwereade, interesting, indeed, that seems quite correct [13:55] hazmat, but I don't see how it can work; stop itself will yield [13:56] fwereade, the logical flow to depart takes account of the yield [13:57] hazmat, sorry, don't follow, restate please [13:57] fwereade, at the end of the stop, the scheduler is stopped, there may be a hook execution that will happen before the depart, but the depart will be last [13:59] hazmat, so it's possible to create a new wordpress charm that can be deployed twice to one instance? [14:00] fwereade, the concurrency on the yield isn't relevant in this context, because at the end of the stop method the scheduler which serves as a sync point is stopped, and concurrent notifications/executions go through the scheduler, the depart directly schedules on the executor, and it will be post any concurrent activity from the rel. [14:00] Sander^work, juju doesn't do density outside of the local provider atm [14:01] and the local provider isn't routable [14:01] hazmat, ...if that's the case, why don't we just stop inside the do_depart method on workflow? [14:01] hazmat, which we do in fact do [14:02] hazmat, what is a local provider? [14:03] hazmat, I'm worrying we really shouldn't execute normal relation hooks at all once we know we've departed, because we can't be sure that all the relevant state still exists [14:03] Sander^work, https://juju.ubuntu.com/docs/provider-configuration-local.html [14:04] fwereade, moving stop to inside do_depart is fine, but i don't see how that changes what happens [14:04] fwereade, we execute stop immediately after we're notified [14:04] hazmat, but this may just be because I'm still a little bit unsure about (1) what state needs to exist to run a relation hook and (2) what state may or may not be suddenly cleared by client operations [14:04] fwereade, and the zk structures are in place [14:06] hazmat, if all the necessary zk structures will remain in place throughout all client operations, then there's no need for the dance, right? [14:06] fwereade, the comment directly reasons why the dance is there [14:07] to avoid things like.. modify after depart [14:07] hazmat, what do you mean by "do density" ? [14:07] Sander^work, multiple units on a single 'machine' [14:08] hazmat, you just said "fwereade, moving stop to inside do_depart is fine, but i don't see how that changes what happens"; I'm confused [14:08] hazmat, ah, ok. Is there any reason why it doesn't do density outside of the local provider? [14:08] hazmat, either all we care about is stop-before-depart, in which case we can move it; or the little stop-everything-and-only-then-depart-everything dance is unnecessary [14:09] hazmat, ...right? [14:09] hazmat, sorry, scrambled something there [14:13] hazmat, stepping back [14:14] hazmat, (1) the only thing we care about is that no other relation hooks can fire once the relation-broken hook has done so; agree? [14:15] hazmat, (2) once we've called stop(), we can be sure that no other relation hooks will fire; agree?
[14:16] Sander^work, there's some work that will achieve density in a consenting adults fashion via unit placement/resource constraints, there's additional work being done to allow subordinate charms to live in a container with a parent/master charm for things like logging etc. The main reason for lack of density in a rigorous fashion, is that juju allows for dynamic port usage by a charm, and this is problematic when putting two independent charms with port conflicts on the same machine, as the conflict is undetectable a priori. there's some talk of using like a soft network overlay to alleviate that for density, but its not on the roadmap atm [14:16] hazmat, (3) therefore, we can call lifecycle.stop() in workflow.do_depart(), and we can guarantee that from that point on no further hooks can be scheduled, so we're safe to just run lifecycle.depart(); agree? [14:18] 1) yes, 2) yes, but one may be currently executing, 3) yes [14:18] hazmat, and if you do agree with all the above, I don't understand the purpose of the dance, because it's just duplicating work already done in do_depart [14:18] fwereade, the purpose of the dance is to immediately stop all broken hooks [14:19] fwereade, if you do it in depart, you're having executions of depart hooks, and more hooks for broken relations can be executing, as the rels are serially stopped. [14:20] whereas the dance ensures all rels that are broken are stopped, and then executes their individual depart hooks [14:20] er. broken hooks via depart transition [14:20] hazmat, I would like to see a difference on density when it comes to applications that use another service's port. 2x Wordpress can easily be installed into one apache instance without any port issues. [14:21] Sander^work, you could write a wordpress charm that encapsulated that capability, ie multi-tenant wordpress hosting in a single unit [14:22] hazmat, is that correct, or am I still missing something? [14:23] fwereade, say i have 5 broken relations, the current dance ensures all 5 are stopped before executing any of their depart hooks [14:24] fwereade, you're suggesting that we go through each of the rels, stop it, execute its broken hook, and then process the next [14:24] hazmat, what would be the negative consequences of failing to do so? [14:25] hazmat, really that we just go through each and fire the departed transition, and trust the transition to ensure the lifecycle is stopped [14:25] fwereade, the problem is that there may be events for those 5, that are happening and scheduling/executing hooks while you're executing for the one.. ie you're processing them in serial [14:25] which means you're getting hook execution for those not processed, even though the rel is known to be broken [14:26] hazmat, Am I understanding it right?.. So I then can deploy wordpress installs on demand into customers' directories for one fixed apache instance? [14:27] hazmat, ok, that's fine; but we can't be sure that won't happen anyway, can we?
we yield several times in the course of stopping all those lifecycles, and the not-yet-stopped ones could still be scheduling hooks [14:27] Sander^work, a charm can do whatever it wants to do on a machine, in this case you'd have to write the charm yourself [14:27] the existing wordpress charm doesn't address that use case [14:29] hazmat, and if it's a situation we're already prepared to accept, I don't see that reducing its incidence is exceptionally important [14:29] fwereade_, indeed its an optimistic guarantee not an absolute, if there is concurrent activity happening at that sec [14:29] hazmat, Ok. Do you know about any documents I should read to be able to write a charm like that? [14:29] hazmat, and the consequences of unjustified optimism could be, at worst, ..? [14:29] fwereade_, the goal is minimizing hook execution for hooks known broken, waiting on a scheduler is minimal [14:30] waiting on hook executions creates a large gap [14:30] hazmat, ok, thanks for clearing that up; the original comment seemed to me to be suggesting that the stop would prevent *any* extra hooks from slipping in [14:34] fwereade_, we could probably offer a better guarantee of that, if we stopped the executor, but given that's a shared resource i felt more comfortable with minimizing the possibility.. and the reality is that there is the possibility that a rel hook is executing when we get the notification the rel is broken [14:35] since the schedulers feed into the executor, stopping it there suffices [14:35] koolhead11: hey [14:35] hazmat, yeah, I pondered stopping the executor, it wouldn't be a nice solution [14:35] and the currently executing rel hook [14:35] is always a possibility [14:36] hazmat, I must be missing something about the significance of a currently executing rel hook [14:36] fwereade_, feel free to add to the comment about this [14:36] hazmat, I will :) [14:38] Sander^work, well the general understanding of charms helps, but first just figuring out how you do it outside of charms is helpful [14:39] Sander^work, http://askubuntu.com/questions/82683/what-juju-charm-hooks-are-available-and-what-does-each-one-do http://askubuntu.com/questions/84656/where-can-i-find-the-logs-of-irc-charm-school [14:44] http://www.debian-administration.org/article/Installing_Redmine_with_MySQL_Thin_and_Redmine_on_Debian_Squeeze ... looks like a charm to me. ;) [14:45] nijaba, sure, sounds like a good idea to augment juju scp (and other commands that need it) with more example-oriented help [14:49] jimbaker: we call that "man pages" [14:49] and you guys wanted me to make juju auto-generated which I've been looking into [14:50] err.. language.. not quite unthawed from sleep.. rrrrrr [14:52] We don't have a kind of "juju retrieve-environment ..." to retrieve an environment set up somewhere else and merge it into our own. [14:54] The intention is that a 2nd new operator can easily extend his environment to take over the administration of an environment. [14:57] hazmat, is it possible to write a charm that deploys eg. wordpress over an ftp connection? [14:59] TheMue: I think that would be brilliant [14:59] Sander^work: no, juju is built on the ability to own whole servers. [15:00] Sander^work: you could write a charm which deploys a webservice + ftp onto a machine which accepts wordpress uploads. ;) [15:04] SpamapS: Aaargh, "bootstrap" has to be renamed! I always do the same typo here. (smile) [15:04] SpamapS, Okay.. Is it possible to deploy a charm..
where an ldap database defines which uid/gid the deployed files are owned by? [15:04] Sander^work: certainly [15:05] Sander^work: things like system policy are hard right now.. dev work has just begun on a feature to separate system policy charms from servie charms. [15:05] service even [15:07] I'm using apache with an ldap module and mod_fcgid so every vhost gets its own uid. [15:07] Sander^work: yeah, that would be quite doable [15:09] Would love to be able to deploy our whole architecture through a set of charms :-) [15:10] TheMue: one thing to consider with the idea of retrieve-environment is that there is a desire, eventually, for environments.yaml to be limited to only facts that help you find and authenticate to the environment... [15:10] TheMue: any of the settings would be stored and managed inside ZK [15:11] Sander^work: we'd love for you to be able to do that too. [15:12] Sander^work: charms are just scripts in whatever language you want... so you can just duplicate whatever you have now into a charm. :) [15:15] So the new admin should only get those facts. Once added his commands would use the ZK on the bootstrap instance, wouldn't they? [15:15] I tried asking this on the Vagrant chat, but i think everyone is asleep. :-) Has anyone tried to implement Juju in Vagrant? I would be interested in working on that if not. [15:27] SpamapS: Where do I find the environment on the bootstrap instance? Only in ZK or does a file exist? [15:39] SpamapS: roundcube charm now has https support [15:42] mchenetz: No but I figure its probably possible [15:43] mchenetz: the local provider is basically vagrant-like tho [15:44] Spamaps: hmmm I am just learning Juju… What is the local provider? [15:44] mchenetz: spins up 'machines' by way of LXC containers [15:44] mchenetz: instead of using EC2 or a hardware provisioning system [15:45] mchenetz: so its quite useful for testing things disconnected [15:45] hmmm, interesting. I will look into that. I still think it would be nice to integrate it into Vagrant as i use it a lot and it already has chef and puppet... [15:45] mchenetz: juju is more like vagrant than chef or puppet [15:45] Definitely… I do a lot of deployments in the cloud for some huge customers… Juju is definitely going to be a big part of my future! [15:46] I watched the webinar yesterday and my head is spinning with ideas [15:47] mchenetz: so it wouldn't really make sense for vagrant to run juju at the same level as chef or puppet... juju doesn't have a DSL or a big library of configuration tools. Its just for coordinating and orchestrating these encapsulated services. [15:47] mchenetz: I was "Clint" from the webinar. :) any questions? [15:47] mchenetz: and thanks for watching!! [15:48] mchenetz: I'm quite interested to hear how your vagrant knowledge maps to juju. [15:49] hehe, i asked the security question the other day. I am mainly an enterprise security consultant. So, i am thinking about how i can create charms that would encompass some security vm's into the solution. I am thinking about creating some special firewall and ids modules that integrate with juju charms [15:50] I will definitely keep you informed on how Vagrant and juju map up. :-) [15:50] mchenetz: complex networking, thus far, has not been a part of the juju conversation.. but the colocation (or actually, subordination) work that is going on will enable that quite nicely. [15:51] mchenetz: note that the security model of juju is still evolving, I'd love to hear your input on how important it is.
There are a few bugs tagged "security" that are sort of our second priority. [15:51] I would like to be able to say add-firewall port-80 relation or something to that effect and it will add a firewall and maybe some die monitoring too [15:51] not die… ids... [15:52] mchenetz: well in EC2 nothing is accessible from outside -> inside [15:52] mchenetz: we use the ec2 ingress firewall extensively [15:52] thats true… I am not just thinking ec2 though… [15:52] mchenetz: you could write a firewall subordinate charm and do exactly what you're talking about [15:52] thats what i am thinking about [15:53] mchenetz: subordinate charms are just charms that live inside the same container as other charms [15:54] yeah.. i'm a little familiar with how the charm structure works now. I am quickly getting up to speed. [15:54] I would love to help out on the security side if you guys need any assistance [15:58] Hmmm, funny, I can expose a wordpress w/o a mysql instance. I would have expected an error due to the unfulfilled requirement. [16:00] TheMue: the wordpress charm should not have any open port yet though [16:00] TheMue: open-port 80 should only happen after the db is configured [16:01] TheMue: since the system is async.. its not an "error" .. you just don't get any open port [16:04] i am trying to deploy a charm and i need some assistance [16:04] i have moved the charm from /usr/share/doc/juju/oneiric directory [16:04] to my /home/juju directory [16:04] hazmat: I'm still really confused why docs need to be a separate series and why we can't just agree that the docs dir under the trunk has a different policy. I'm *very* concerned now that the docs will get out of sync w/ trunk. [16:05] when am trying juju deploy --repository=/home/atul/juju local:mysql [16:05] SpamapS: I understand, and I should have had a debug-log open. *gna* [16:06] ERROR Charm 'local:oneiric/mysql' not found in repository /home/atul/juju [16:06] TheMue: I don't necessarily think that having debug-log going all the time is a good idea ;) [16:07] koolhead11: you need the series in there [16:07] koolhead11: mkdir /home/atul/juju/oneiric [16:07] koolhead11: and move the charms into that dir [16:07] SpamapS: ok [16:08] SpamapS: debug-hooks are better? I currently want to see what's going on. [16:09] so SpamapS my charm will be in /home/atul/juju/oneiric [16:09] TheMue: while developing and learning its probably a good idea.. I think though at some point we have to look at it as users of the charm, who won't necessarily be able to consume all of that data. [16:09] and i will deploy with [16:09] koolhead11: right [16:09] juju deploy --repository=/home/atul/juju local:mysql [16:09] ok [16:10] koolhead11: that is necessary so that we can match the OS series with the charms for that OS [16:10] Where do i find information on using a local provider in Juju? [16:11] SpamapS, let's give it a try, we can evaluate before 12.04 if its not worthwhile and move it back, but i'm hoping its still a benefit to getting doc contributions [16:13] hazmat: as long as we agree to actually put a version number on juju so the disconnected docs can be written to a specific version, it should work. I'm just not confident about that. ;) [16:14] TheMue, there was a spec out for doing import / export of environments, but it ran afoul of want for a design of service groups aka stacks as a first class entity that was modeled and agreed upon.
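A hypothetical hook sketch of the open-port behaviour SpamapS describes above: the hook-tool names (relation-get, juju-log, open-port) are real juju hook commands, but the wordpress-style specifics are invented for illustration:

```sh
#!/bin/sh
# hooks/db-relation-changed -- hypothetical wordpress-style charm
set -e

database=$(relation-get database)
user=$(relation-get user)
password=$(relation-get password)
host=$(relation-get private-address)

# relation-get returns empty values until the remote unit has set its
# settings, so just exit and wait for the next -changed event.
if [ -z "$database" ]; then
    juju-log "database not configured yet; waiting"
    exit 0
fi

juju-log "configuring wordpress against $user@$host/$database"
# ... write wp-config.php and restart the web server here ...

# Only now is the service usable, so only now does the port get opened;
# until this runs, an exposed but unrelated service shows no open port.
open-port 80
```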
[16:14] SpamapS, we call winners on that bet at uds ;-) === amithkk is now known as sabsui12 === sabsui12 is now known as sansui12 === sansui12 is now known as amithkk [16:15] hazmat: we should maybe think about putting version strings in juju and having a release process now that we have, you know, users. ;) [16:15] SpamapS, i should investigate Read the Docs some more. i know we tried it and moved on, but i believe it has support for multiple versions [16:16] hazmat: thx for the info [16:16] <_mup_> Bug #902219 was filed: config values of 0 are discarded < https://launchpad.net/bugs/902219 > [16:16] SpamapS, sounds good, would you mind putting in a bug for that? [16:17] Found the doc: https://juju.ubuntu.com/docs/provider-configuration-local.html, Doesn't survive reboots? That isn't good for my scenario as i stage development code in the local environment. [16:20] mchenetz, it will survive reboots for 12.04, but no it doesn't survive reboots, or even hibernation, at the moment. [16:21] mchenetz, you'd also have to manually connect the bridge that the lxc containers are bound to, to allow external connectivity to them off the host. [16:21] or port forward from the host [16:22] okay. good to know. can i make a charm for that. :-) [16:23] What i would like to do is give developers a local environment to develop in and then move the units to the cloud for test and production. [16:23] Which i think is the whole purpose of juju. Easily move and provision units. [16:26] hazmat: this feels like a blueprint, there are really 3 things that need to happen. 1- add versions to the juju --help, 2- ratify and agree to maintain a stable branch. 3- Setup a PPA that just has the latest stable release. [16:26] mchenetz, well.. when you say move.. its not moving the data, you can develop/stage local and then copy the configuration/charms to a different cloud, but that doesn't sync data.. you'd need a separate charm/service for data syncing, right now.. juju environments do not bridge clouds. [16:27] each environment is specific to a provider, but you can have multiple environments in a given provider. [16:27] mchenetz: for what you're talking about.. you'd just repeat the local deployment into the cloud [16:27] I was thinking that i can utilize charms that would say, install mysql, and apache, and then create the relations for multiple environments. As long as i create charms that have the glue code then it should work. Correct? [16:28] And again… I am still learning Juju… I learn very fast, but if it sounds like a stupid question it's just that i haven't learned it all yet. :-) [16:29] mchenetz, yes the charms are meant to capture the configuration/best practices for a service in a provider independent fashion [16:29] mchenetz: eventually there's the idea that we'd be able to create a relationship between two environments .. but thats not done yet. ;) [16:29] mcclurmc, so you could deploy the same mysql/apache/appserver setup in multiple environments [16:29] Spamaps… That sounds awesome [16:30] SpamapS: I eagerly await that idea :) [16:30] mchenetz: its already possible.. you can write a cloud-bridge charm that exchanges anything you need to exchange between the two envs.. and just use service configs to get them talking to each other.
and even then you need an underlying setup that syncs data [16:31] I bet I could get the mysql charm to expose config settings to allow external slaves/masters [16:31] well maybe not, it could just offer access, but wan connectivity solutions are better at a data tier [16:31] I don't mind creating "glue" to connect disparate environments. I just need to know the limitations so that i implement things in the most efficient way. [16:32] marcoceppi, mchenetz so the first cut at gluing disparate environments is a proxy charm that will relay notifications to a remote endpoint [16:32] you'd deploy a proxy service in each environment, bind it locally to the relations of interest, and then connect the proxy endpoints [16:32] at least that's one option [16:33] I took a stab at making an 'othercloud' charm that would use the juju client to talk to another juju env but the lack of wildcard interfaces made it not work like I wanted. [16:33] * hazmat nods [16:33] SpamapS, it would need support in the core, for basically assuming the interfaces of the proxy target [16:33] hazmat: that makes sense… To me, it sounds like i would use ssh to create a tunnel between the environments [16:33] SpamapS: hazmat: I was under the impression that orchestra was preferred for stringing multiple clouds into one env? [16:33] really.. if you want cross-cloud cross-AZ .. you probably want to make conscious decisions about what crosses those boundaries. [16:33] Then just run commands locally and remotely [16:33] mcclurmc, any secure transport would work [16:34] whoops [16:34] marcoceppi: that wouldn't really work. ;) [16:34] mchenetz, i was thinking zeromq with encrypted messages would do it [16:34] I haven't used that. I will definitely look into it [16:34] but i'm very much thinking like an app developer ;-) [16:34] hazmat: yeah, thats when I gave up on it, when I realized unless I can make relations dynamic it just won't work. [16:35] It's interesting… I grew up as a hacker of code with bbs's in the early days and then became a network engineer. So, i think in terms of both code and network infrastructure. :-) [16:35] hazmat: I think an ops guy would be fine with that as long as it was simple to understand and monitor. [16:36] mchenetz: WWIV vs. Telegard ... GO [16:36] hehe, i ran rbis originally and then WWIV, good old wayne bell [16:36] SpamapS, so this goes further into a notion of charms that juju distributes and core services; given that, we could offer additional syntax for cross-env relations [16:37] hazmat: yeah that would make sense. [16:39] * SpamapS prepares for an Ubuntu bug triage rampage today [16:39] To me, as long as you create the appropriate abstraction on the top level and the disparate environments have similar functionality then it really shouldn't matter what environment you are on... [16:39] I guess I'm just confused about how to best tackle a scenario using Juju [16:40] there should be the idea of move-unit [environment] [16:40] and again.. i don't know juju that much yet... [16:41] I have three bare metal machines, lets say, each running an acceptable provider by Juju - I assume each would be its own juju environment then? [16:42] nevermind actually [16:42] mchenetz, that assumes integrated volume management storage, even for just unit migration within an environment, and frankly at scale moving data across wans is a non-transparent operation to QOS.
[16:43] its potentially a huge impact on network resources, and a multi-day operation [16:43] hazmat: As long as the backend code accommodates the variables for that. Why would it matter? You can give the instructions for syncing code and throttling and so forth... [16:44] syncing db's and such... [16:44] It could say use-link [interface] throttle [50%] of link or something like that [16:45] this is all conceptual [16:45] It could then set the proper qos tagging and such in the backend and setup the interface to use and maybe the timeframe [16:46] mchenetz, move-unit is a generic capability to any service.. what a service/charm chooses to expose can be accommodated by something like a proxy without charm knowledge, or the functionality could be incorporated directly into a charm. [16:46] mchenetz, more interesting though, juju right now is in its infancy wrt how it approaches networking.. i'm curious though what you would think of juju managing a soft overlay network that spanned machines [16:47] That would be very interesting. So, are you talking about creating a networking abstraction that would be unrelated to a single machine? [16:48] Can you elaborate on what you are thinking? [16:49] mchenetz, yes.. this is a while out most likely.. but the notion of getting unit density on a machine, where each unit is an lxc container, to be abstract to a provider, we need to establish a soft overlay net that we'd plug the lxc containers into, probably with something like openvswitch or just using openstack's quantum [16:50] part of the problem is that we end up needing a bridge to reconnect the overlay, but the notion is for exposed services we would port forward [16:50] it gives us much better capabilities to expose in terms of setting up vlans etc [16:51] but its also a pita [16:51] hmmm… interesting.. You could potentially keep the environments networked permanently through the virtual switch and then exchange data and move things where they need to be. I like it… It doesn't seem like it would take too much either. [16:58] I will definitely have more to contribute in the upcoming weeks as i learn juju. It's definitely a project i would like to be involved in. [16:59] I am just ingesting all of the knowledge right now. :-) [17:02] mchenetz, awesome, probably the best way to get introduced to juju is to write a charm or have a look at some existing ones.. http://charms.kapilt.com [17:02] I am planning on writing many charms and looking at existing ones. ;-) [17:04] hazmat: revision and config.yaml are compulsory files to be with a charm [17:04] ? [17:05] i am learning writing juju with writing simplest charm which does things simply with apt-get isntall [17:05] *install [17:06] i moved mysql example in same directory and same part worked and charm got initialized [17:09] koolhead11, config.yaml isn't, revision is [17:10] hazmat: i am using the existing mysql example [17:10] and i see a file with name revision there [17:10] nothing mentioned about same in config.yaml file [17:11] so i created both files accordingly and created hooks sub directory inside it [17:12] added options: {} in config.yaml file [17:13] i am just clueless why this thing is not working :( [17:14] koolhead11, what do you mean by not working? [17:14] can you deploy your charm?
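For reference, the local charm repository layout hazmat and koolhead11 are working through here, sketched out; the install/start hooks are typical examples rather than a fixed requirement:

```
example/                    # passed as: juju deploy --repository=example local:boa
└── oneiric/                # the distro series directory is required
    ├── mysql/
    └── boa/
        ├── metadata.yaml   # required
        ├── revision        # required (a bare integer)
        ├── config.yaml     # optional; "options: {}" if there are none
        └── hooks/
            ├── install
            └── start
```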
[17:14] hazmat: i get error in deploying charm i wrote [17:17] hazmat: http://paste.ubuntu.com/765110/ [17:18] i have created a directory inside example named "oneiric" [17:18] and put the charm for boa inside it [17:18] and executing juju deploy --repository=example local:boa [17:18] while my pwd is /home/atul [17:18] koolhead11, the path to boa should be /home/atul/example/oneiric/boa [17:19] hazmat: that is where boa is [17:19] I see some charm developers are using augeas to create/modify configs. This seems interesting. I never heard of that tool [17:19] that directory should contain the metadata.yaml file, if you're using a recent ppa, and there's a syntax error in the charm, it should report it [17:20] i am using oneiric and installed default juju [17:20] mchenetz, its a bit like a generic dom api for configuration, some folks prefer writing out the whole config, some prefer patching in place. [17:20] from repo rather PPA [17:20] koolhead11, so you are using the ppa? [17:21] Is there a standard you guys like in terms of directories? I notice the .aug files are in the root instead of the hooks directory... [17:21] hazmat: i have not added any PPA manually, installed juju which came with default [17:21] with oneiric [17:21] I really should read the documentation. :-) [17:22] hazmat: /home/atul/example/oneiric/boa its very much here [17:22] and also i have metadata.yaml file there [17:23] koolhead11, then you probably have a yaml error [17:24] koolhead11, the ppa version will detect and report yaml errors, the default version in oneiric just won't find the charm [17:24] then why the error log says error in path [17:24] hazmat: point me to PPA am upgrading juju from there [17:26] koolhead11, sudo add-apt-repository ppa:juju/pkgs && sudo apt-get update && sudo apt-get upgrade juju [17:26] cool [17:27] koolhead11: can you push your charm up to a branch on launchpad? [17:27] SpamapS: sure once am home. [17:29] catch u guys in sometime [17:29] * koolhead11 rushes 4 home [17:30] SpamapS: if you feel like reviewing a charm, feel free to take a look at my roundcube one ;) [17:31] nijaba: I have some other queues to tend to today (server bug triage and SRU's), but I won't be able to resist reviewing your charm all weekend. ;) [17:31] marcoceppi: did you already have a look at it? [17:31] SpamapS: he did [17:31] Oh, so, what do you need me for? ;) [17:32] SpamapS: to make it official? can't wait for your comments either, especially on the https handling [17:32] as the #4 contributor to lp:charm (see https://launchpad.net/charm) I'd say he's quite qualified to ack and promulgate it :) [17:33] I need to stop for a little while, back later [17:33] I've actually wanted roundcube for some time as I plan to replace my crappy hastymail solution with it. :) [17:34] SpamapS: he said he would feel more comfortable with you reviewing it first. He might need some re-assurance :) [17:34] I know this is a matter of opinion but… I have been using Eucalyptus for a long time because it is API compliant with Amazon EC2. Is there any advantage to going over to openstack? Anything from a juju side? [17:34] nijaba: roger that. I'll take a look between ubuntu bugs and SRU's ;) [17:34] SpamapS: no hurry. cheers [17:34] mchenetz: Euca is very expensive to scale up [17:35] spamaps: I always hear that [17:35] mchenetz: if you have a working euca solution with a narrow focus, probably best to just stick with it.
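As an aside on the augeas tool mchenetz asks about above: it edits one node of a parsed config tree in place rather than rewriting the whole file. A classic standalone example, not taken from any particular charm (-s autosaves on exit):

```sh
# change a single sshd_config directive through the augeas tree
augtool -s set /files/etc/ssh/sshd_config/PermitRootLogin no
```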
[17:35] mchenetz: and hard to make HA (if possible) [17:35] I am definitely thinking about scalability for my customers. I think i am going to have to look at openstack... [17:36] I think i will keep my dev in Eucalyptus [17:36] mchenetz: the fact that more and more providers are announcing public clouds based on OpenStack feels very reassuring [17:37] mchenetz: OpenStack is also more loosely coupled.. I find that attractive. [17:37] I will definitely have to put it on my agenda to get familiar with Openstack... [17:37] robbiew: hey how did your BOF go? [17:37] Thanks for the comments [17:37] SpamapS: was great..attendance was so-so...but the BoFs started at 8pm [17:38] after dinner [17:38] luckily mine was BEFORE Google's beer and "icecream" social [17:38] :P [17:38] we should definitely have a Charm School at next year's [17:38] TOTALLY our crowd here [17:38] and next year will be in San Diego...not Boston. [17:39] Any plans on an east coast charm school? I live close to Philly and about 1hr from NY [17:39] mchenetz: jcastro is the man with the plan on Charm Schools...I would expect we'd have one though [17:40] we'd prefer it to be tied to some sort of pre-existing event though....the ones on IRC are pretty good too ;) [17:40] I would have to get in touch with him. I don't think i saw one when i looked [17:40] There's irc ones? [17:41] robbiew: *w00t* [17:41] mchenetz: I'd hope we can co-locate a charm school with Surge [17:42] mchenetz: we plan to have a charm school every couple of months on IRC [17:42] SpamapS: cool…. sounds good [17:43] SpamapS: Cool, wasn't sure if you wanted to do a two person review or not :) [17:44] nijaba: I'll take a look at the SSL implementation and promulgate it after lunch :) [17:44] marcoceppi: only if you feel like you aren't 100% sure its ok [17:44] SpamapS: gotchya, cool [17:47] marcoceppi: thanks :) [17:55] SpamapS: I did a local deployment of ThinkUp to demo the LXC stuff...worked perfectly [17:55] then I showed them a real ec2 deployment of the same thing...already running to compare [17:55] then I blew both away...and then did a live hadoop ec2 deployment [17:55] ran terasort...and scaled it live [17:55] BAM! [17:56] i'm off for the weekend now. see y'all tuesday (i'm off monday) [17:56] had ganglia setup with auto-refresh plugin for chrome browser [17:56] everyone was really impressed...and they GOT it, b/c this crowd knows their shit [17:57] robbiew, nice [17:57] rog, cheers [17:57] robbiew: impressive! [17:57] welcome home, koolhead17 ;) [17:57] then I told them I learned all this on Tuesday [17:57] :) [17:57] hehe nijaba:) [17:58] though I've obviously known HOW to do it for quite some time...but never got my hands "dirty" until this week [17:58] rog: have a good long weekend [17:58] I've found juju to be a bit addictive...like, "what can I deploy now?" [17:59] robbiew: tell me about it. I'm finding myself asking what else can I charm :D [18:00] this whole juju thing just *might* have legs [18:01] lol [18:01] robbiew: \o/ .. sounds awesome [18:02] SpamapS: yeah...LISA'12 is going on my "give them money and love" list for next year [18:02] robbiew: we should write a script that just deploys the entire charm store on t1.micro's and relates everything that can be related [18:02] Right now that would be like, 40 t1.micro's .. so it would cost about $1. [18:02] lol [18:02] one charm to rule them all!
[18:03] juju deploy * [18:03] oh man [18:03] you'd have to script the relations though [18:03] or just talking about deploying only? [18:04] yeah [18:04] 1 haproxy would not actually be able to serve every website, unfortunately [18:05] heh [18:05] Juju was great, until I got my Amazon bill :P [18:05] haproxy is a "monogamous" charm. [18:05] marcoceppi: LOL! [18:06] marcoceppi: this is all an evil plan to get people to setup openstack clouds [18:06] SpamapS: I'm seriously considering setting up an openstack cloud at my house [18:11] * koolhead17 searching for a two-minute bzr guide [18:12] koolhead17: how about 5 minutes? http://doc.bazaar.canonical.com/latest/en/mini-tutorial/ [18:12] jelmer: 3 mins is ok :) thanks [18:13] jelmer: am on LTS so all these commands will work for it too. :) [18:13] SpamapS: I plan to have an Openstack server running at my house this weekend. ;-) [18:14] LTS lucid [18:15] mchenetz: nice! [18:15] I'm itching for more info on deploying openstack with juju. [18:15] fwiw, we're making sure openstack is easily deployable from the installer in 12.04 [18:15] EvilBill: talk to adam_g :) [18:16] I will. Played with juju about three weeks ago when I was between gigs, dug it a lot, but curious as to why the bootstrap node can't co-exist with where the juju client is running. [18:16] or maybe I'm just conceptually missing something. [18:17] I was trying to multitask between spending time with the family and learning something new, and I think I didn't do either thing very well. [18:17] family? what's that? [18:17] lol [18:19] EvilBill: the bootstrap is really just ZooKeeper + the provisioning agent. It can't live with the client because clients can come and go (laptops.. workstations, etc) [18:19] OK, well, what I did was setup an orchestra server and had it working with two other machines at home with wake-on-lan, etc. [18:19] so Orchestra would rev up a bare-metal box on command [18:20] tied that into juju, and when I'd do a juju bootstrap, a machine would turn on and go install and become the bootstrap node [18:20] EvilBill: You *can* make the bootstrap node a VM on the orchestra server [18:20] but it sounds like a full machine JUST for bootstrapping seems silly. [18:20] That's what I didn't get around to playing with or figuring out [18:20] EvilBill: we even talked about having that be the default at one point. [18:21] EvilBill: its fairly simple to provision VMs in cobbler just like regular machines. [18:21] ok, so that begs the next question, what's the preferred VM framework? [18:22] my orchestra machine is an old laptop with a Core Duo, so it's not 64-bit capable. [18:22] which means I don't think it'll run KVM. [18:24] I'd like to get a public openstack cloud open for charm testing and development against, but I feel that might be something for next year [18:30] EvilBill: 'kvm-ok' will tell you that [18:31] EvilBill: you *could*, in theory, have it provision an LXC container, but you'd have to figure out how to run the "late_command" bit to get juju to start its agents. [18:31] EvilBill: it might also be possible to simply have the cobbler machine register *itself* as a cobbler system, and then do the same thing. [18:32] marcoceppi: we've talked about having an openstack cloud which we make available to ubuntu members who want to work on bugs/testing [18:32] SpamapS: I think it's a cool idea [18:33] Limit it to charm-contributors even [18:33] ? [18:35] EvilBill: i think there is a wiki which spells charm magic 4 openstack deployment already!!
:D [18:35] marcoceppi: I'd like to see it open to even those who are not interested in juju. [18:36] SpamapS: so would this just be a free cloud for anyone to play in? [18:36] marcoceppi: I think we'd require ubuntu membership and have a limited number of seats available [18:36] I see [18:38] http://wiki.openstack.org/FreeCloud [18:38] ^^ not sure what the status of that is ATM, tho [18:47] SpamapS: https://code.launchpad.net/~koolhead17/charm/oneiric/boa/trunk [18:47] hope you will not laugh, as it has nothing magical [18:47] made it to understand juju better :) [18:49] adam_g: *nice* I didn't know it had even been written down like that. :) [18:57] koolhead17: thats fine. It should work.. its not clear at all to me why its not working. :-/ [18:58] SpamapS: hehe. i would love to see if someone tests it on AWS [18:58] i have no idea why i was getting the error i mentioned earlier on LXC [18:59] i created a directory named example/oneiric/boa* [18:59] and repository = /home/atul/example [18:59] :P [19:00] i moved mysql charm from the examples directory to that path it worked but not the boa one [19:05] koolhead17: ls -l /home/atul/example/oneiric [19:06] it will have boa and mysql directory ) [19:06] SpamapS: am home now siir!! [19:07] koolhead17: so.. um, you can't test at home, but you can't push to bzr at work? :-/ [19:07] koolhead17: you should be using the local provider at home [19:08] SpamapS: i tried juju formula writing on LXC at work, synced it to my home computer and then pushed it to bzr once i came home [19:09] i will install Oneiric tomorrow [19:09] its been in list for ages, :D [19:09] * koolhead17 is addicted to LTS [19:24] koolhead17: yeah, LXC isn't quite usable in 10.04 unfortunately [19:24] SpamapS: honestly am waiting 4 precious b4 i can format my poor lappy, that is why i do all charming stuff in office :P [19:25] 2 GB baby [19:27] niemeyer: I think I've got something going for zk + ssh. but it's still very messy so I'm gonna clean it up before showing it to you guys. [19:27] mpl: Ohh, sweet [19:28] gonna go home, dinner, and rest a bit first though. ttyl [20:25] I can't find config-changed anywhere in the hooks documentation [20:29] marcoceppi, yeah.. that's a problem.. its in a separate document on service config [20:30] i just started to pull the docs into a separate branch to hopefully facilitate making them easier to contribute to.. they're at the bzr branch lp:juju/docs [20:31] Ah, cool - thanks for the heads up hazmat [20:31] marcoceppi, re the config-hook docs atm https://juju.ubuntu.com/docs/drafts/service-config.html#creating-charms [20:39] After a charm is reviewed and deemed to need more work, should I just remove the new-charm tag? [20:44] <_mup_> juju/ssh-known_hosts r431 committed by jim.baker@canonical.com [20:44] <_mup_> Refactored [20:46] <_mup_> juju/ssh-known_hosts r432 committed by jim.baker@canonical.com [20:46] <_mup_> Merged trunk [20:47] marcoceppi: just put the bug back to state incomplete [20:49] marcoceppi: once the problems are addressed, the requester will put it back to state "fix-committed" [20:53] * nijaba takes off. Have fun [20:54] SpamapS: Coming back after a lengthy meeting… you could get cobbler to register itself? Hm, that's an idea, but I wouldn't want it to try to PXE boot itself...
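Returning to marcoceppi's config-changed question above, a minimal sketch of the service-config mechanism the draft doc covers; the option name and values here are invented, and config-get is the real hook tool for reading them:

```yaml
# config.yaml (hypothetical) -- declares the options a charm accepts
options:
  port:
    type: int
    default: 80
    description: Port the service listens on.
```

```sh
#!/bin/sh
# hooks/config-changed (hypothetical) -- fired after something like
# `juju set myservice port=8080` changes the service configuration
port=$(config-get port)
juju-log "reconfiguring service to listen on port $port"
# ... rewrite the service's config file and restart it here ...
open-port "$port"
```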
[21:07] bye nijaba [21:40] <_mup_> juju/ssh-known_hosts r433 committed by jim.baker@canonical.com [21:40] <_mup_> Simplify by keeping public keys in ProviderMachine itself [21:41] <_mup_> juju/ssh-known_hosts r434 committed by jim.baker@canonical.com [21:41] <_mup_> Refactored bootstrap [22:12] <_mup_> juju/ssh-known_hosts r435 committed by jim.baker@canonical.com [22:12] <_mup_> Fix error handling for refactored bootstrap [23:26] <_mup_> Bug #902384 was filed: Service units get stuck in debug mode. < https://launchpad.net/bugs/902384 > [23:43] EvilBill: no you wouldn't pxeboot it .. you'd just run the final "late command" that juju sticks into the pre-seed