[00:20] <sebas5384> lazyPower: ping?
[07:29] <leotr> hi! we are going to buy one server (64 gb ram, dual processor) for virtualization purposes. Can juju be useful for us?
[08:45] <Bidule> hi
[08:45] <Bidule> amazing project !
[08:45] <Bidule> really thanks for that
[10:30] <rbasak> jamespage: I think the juju-core (in Utopic) dep8 tests are fetching tools from streams.canonical.com even though they're using the local provider. I wonder: if we want the archive to be as self-sufficient as possible, then can we eliminate that?
[10:30] <rbasak> Is that what juju bootstrap --upload-tools does, and if so, do you think I should use that in the tests?
[10:31] <rbasak> The way I see it, juju need not have any external dependencies in the local discovery/test/development case.
[10:31] <rbasak> (if that's achievable)
[10:31] <jamespage> rbasak, --upload-tools is good I think
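For anyone following along, a minimal sketch of the self-contained bootstrap being discussed (the `juju` binary is stubbed here so the snippet runs standalone; the environment name is illustrative):

```shell
# Stand-in for the real juju binary, so this sketch runs anywhere.
juju() { echo "juju $*"; }

# --upload-tools builds and uploads the tools from the local jujud,
# avoiding any fetch from streams.canonical.com.
juju bootstrap -e local --upload-tools
```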
[13:51] <mivtachyahu> hi all, when I spin up Ubuntu 12.04 machines for Azure in juju, all the host names (/etc/hostname etc.) are set to "default". What is the best way to automatically set that to be the public-address / dns-address?
[13:54] <niedbalski__> hey bcsaller , please review https://code.launchpad.net/~niedbalski/charms/precise/rabbitmq-server/trunk/+merge/219909
[15:13] <l1l> new mysql charm seems broken, it's creating empty databases with no tables
[15:24] <avoine> l1l: your application should create the tables, I think
[15:33] <Egoist> could someone tell me how the relation-list command works in juju?
[15:48] <arosales> marcoceppi: Is there documentation somewhere for relation-ids and relation-list?
[15:50]  * arosales also browsing https://github.com/juju/juju/tree/master/doc
[15:50] <arosales> cory_fu ^
[15:50] <arosales> I don't think we cover that in juju.ubuntu.com/docs . .  .
[15:52] <arosales> cory_fu: mentioned here https://github.com/juju/juju/blob/master/doc/charms-in-action.txt which we should also have in our docs.
[15:52] <marcoceppi> arosales: we do
[15:52] <marcoceppi> we cover it in the docs
[15:52] <marcoceppi> arosales: Egoist https://juju.ubuntu.com/docs/authors-hook-environment.html#relation-list
[15:53] <arosales> marcoceppi: ah even better
[15:53] <arosales> marcoceppi: we desperately need better searching in the docs :-(
[15:53] <arosales> cory_fu ^
[15:53] <marcoceppi> arosales: we need actual searching, we don't have that yet
[15:53] <arosales> good point, we should just do SCE for our sub pages
[15:54]  * marcoceppi could implement pretty quickly
[15:54] <Egoist> marcoceppi: why does relation-list sometimes not return all remote units in the current relation?
[15:54] <marcoceppi> Egoist: because that remote unit isn't available at the time the hook started
[15:55] <Egoist> marcoceppi: how do I check that a remote unit is available? Ping it, or something?
[15:55] <marcoceppi> relation-list
[15:56] <marcoceppi> if you have it listed in relation-list, then it's available
[15:58] <Egoist> marcoceppi: but how will the charm know how many units should be listed?
[15:58] <marcoceppi> Egoist: it doesn't know how many should, it knows which ones currently are
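The point above can be sketched in hook code: a hook only ever sees the units relation-list reports at that moment (relation-list is stubbed with two hypothetical units here so the loop runs standalone):

```shell
# Stub for the real relation-list hook tool, which prints the units
# currently joined to the relation, one per line.
relation-list() { printf 'mysql/0\nmysql/1\n'; }

# A hook iterates over whatever units are visible right now; units that
# join later fire their own relation-joined/-changed hooks.
for unit in $(relation-list); do
  echo "granting access to $unit"
done
```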
[16:16] <mbruzek> Can someone tell me what the command `relation-ids` does and what it is good for?  https://juju.ubuntu.com/docs/authors-hook-environment.html#relation-ids
[16:37] <lazyPower> sebas5384: Hey!
[16:38] <sebas5384> lazyPower: hey!
[16:38] <lazyPower> sebas5384: really sorry about missing you last night. I forgot to put it on my calendar and totally spaced it off :(
[16:38] <lazyPower> i'm a terrible person
[16:38] <sebas5384> hehe thats ok, it happens :)
[16:38] <sebas5384> next time we put in the agenda hehe
[16:38] <lazyPower> agreed
[16:39] <lazyPower> i felt pretty bad when i woke up this morning and went through my irc backlog to see you hanging out with no lazy in sight.
[16:39] <lazyPower> oops
[16:39] <lazyPower> hopefully you didnt cancel any plans to attend.
[16:40] <sebas5384> no worries, if you can we can do it today
[16:42] <sebas5384> i will be working on vagrant juju things with another colleague here at Taller, so we are going to brainstorm about that
[16:43] <lazyPower> Sure
[16:43] <lazyPower> I'll be around today until late prepping for another project going live tomorrow.
[16:43] <sebas5384> i'll ping you, and if you can, more than welcome to join us
[16:43] <lazyPower> so I can host a hangout and order a pizza
[16:43] <sebas5384> ooh nice!
[16:43] <sebas5384> great!!
[17:03] <cjohnston> can anyone help me investigate bug #1319947 please
[17:03] <_mup_> Bug #1319947: LXC local provider fails to provision precise instances from a trusty host - take 2 <juju-core:Confirmed> <https://launchpad.net/bugs/1319947>
[17:08] <jcastro> http://summit.ubuntu.com/uos-1406/track/devops/
[17:08] <jcastro> please submit your sessions!
[17:17] <avoine> noodles775: do you think that would be a good occasion to talk about juju+ansible roles? ^
[17:18] <noodles775> avoine: yep, I've got a meeting proposed, which seems to be approved... just not in the schedule yet: http://summit.ubuntu.com/uos-1406/michael.nelson/meetings
[17:18] <avoine> noodles775: ok , great!
[17:20] <avoine> noodles775: btw I will have something to show soon; do you want me to do a pull request or should I put it elsewhere?
[17:22] <noodles775> avoine: A pull request would be great, if it's a shared role (or fixes/additions to the shared roles). Otherwise just pointing me at your repo/branch is fine too :)
[17:22]  * noodles775 goes to put kids to bed.
[17:22] <avoine> noodles775: ok, perfect
[17:54] <cjohnston> jcastro: are you still unable to use juju local on trusty?
[17:55] <jcastro> yeah my stuff is stuck on pending
[18:06] <l1l> are constraints supposed to work when add-machine kvm/0 is used?
[18:06] <l1l> I can't get a kvm to launch with anything but the default settings (1 core, 512MB, 8GB)
[18:27] <dpb1> What is the right way to refresh lxc templates on juju local provider?  lxc-destroy --name <template> and just let juju refresh it?   Specifically the apt package cache gets out of date
[18:30] <sebas5384> jcastro: remember we talked about having more than one juju local?
[18:31] <sebas5384> jcastro: i did a poc to test that, https://github.com/sebas5384/ansible-juju-local
[18:31] <sebas5384> and now i use that for my vagrant workflow
[18:45] <jcastro> sebas5384, that looks awesome, sec, on the phone!
[18:46] <sebas5384> jcastro: :)
[19:14] <cjohnston> tvansteenburgh, marcoceppi, is there a way with amulet to load multiple deployer files?
[19:14] <marcoceppi> cjohnston: no, not at the moment, what's the use case?
[19:15] <cjohnston> marcoceppi: a true integration test...
[19:15] <cjohnston> we have ~a dozen deployer files...
[19:15] <marcoceppi> so you want to deploy all of those at once?
[19:15] <hazmat> marcoceppi, what if the deployer files load other ones.. i.e. are you parsing it or passing through?
[19:15] <cjohnston> all or subsets
[19:19] <tvansteenburgh> iirc amulet just takes the first target from the deployer file
[19:23] <marcoceppi> hazmat: if you load multiple deployments, I'm not sure exactly what happens, I think it just resets the deployment, tvansteenburgh do you know?
[19:24]  * marcoceppi is reminded about doing a non-amulet test, will have that for hazmat's modest proposal next week
[19:27] <tgz> Hi all. Just starting my investigation into juju. I am wondering what happens if a node launched with juju goes down. Does juju monitor and relaunch a node?
[19:29] <marcoceppi> tgz: so, it used to but we found that sometimes there's network latency in a cloud environment, and a node would appear offline but it wasn't, so it'd launch a duplicate and now you've got two nodes registered as the same
[19:30] <marcoceppi> tgz: so, juju status will show agent-state as down (unavailable) and you can juju add-unit to add another unit of that service group
[19:30] <marcoceppi> tgz: juju also is fully driven by a websocket API, so you could write tools that monitor the status and does this. You could also extend that same pattern to implement autoscaling etc
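The monitoring pattern marcoceppi describes could look roughly like this (a naive polling sketch, not the websocket API itself; `juju status` is stubbed with a down unit and the service name is hypothetical):

```shell
# Stub for `juju status` output; a real monitor would query the API.
juju() { printf 'agent-state: down\n'; }

# Naive watcher: if the unit's agent reports down, add a replacement unit.
state=$(juju status myservice/0 | awk '/agent-state:/ {print $2}')
if [ "$state" = "down" ]; then
  echo "would run: juju add-unit myservice"
fi
```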
[19:32] <tgz> OK. Thanks for the reply. Does the agent-state have the same false positives issue?
[19:40] <marcoceppi> tgz: it's susceptible to timeouts, I think the timeout is set to 2m
[19:43] <tgz> Sounds good. Thank you very much. I appreciate your time.
[20:38] <cory_fu> How much of an issue is interface name collision between charms?  That is, if two completely unrelated charms both use the same interface name but are never likely to be related, does the conflict matter?
[20:48] <marcoceppi> ppetraki: hey, remember your relation-get out of band issue?
[20:48] <marcoceppi> cory_fu: you should never have an interface that doesn't connect with its counterpart
[20:49] <ppetraki> marcoceppi, yeah
[20:49] <marcoceppi> cory_fu: if you have an interface, then it should connect
[20:49] <marcoceppi> if it doesn't then one charm is using the wrong interface name
[20:49] <marcoceppi> ppetraki: `relation-get -r engine:0 - nginx/0`
[20:49] <marcoceppi> the - will get you all output
[20:49] <ppetraki> marcoceppi, oh come on...
[20:50]  * ppetraki adds to quirks list
[20:50] <ppetraki> marcoceppi, thanks
[20:50] <marcoceppi> np, I'll make a note to document it in the docs
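The shape of that call, for anyone following along (relation-get is stubbed with sample settings so the snippet runs standalone; the engine:0/nginx/0 names come from the exchange above):

```shell
# Stub mimicking `relation-get -r <relation:id> - <unit>`; with `-` as the
# key, the real tool prints every setting the remote unit has set.
relation-get() { printf 'hostname: 10.0.3.1\nport: "80"\n'; }

relation-get -r engine:0 - nginx/0
```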
[20:51] <lazyPower> hey marcoceppi, i'm probably the last to know but did you know about this? http://paste.ubuntu.com/7597307/
[20:51] <lazyPower> it even puts you in the failed hook context
[20:51] <marcoceppi> lazyPower: dude, I wrote that plugin
[20:51] <lazyPower> oh
[20:52]  * lazyPower is the last to know then
[20:52] <marcoceppi> https://github.com/juju/plugins/blob/master/juju-debug
[20:52] <l1l> are constraints supposed to work when add-machine kvm:0 is used? I can't get a kvm to launch with anything but the default settings (1 core, 512MB, 8GB)
[20:53] <thumper> l1l: yes
[20:54] <l1l> hmm, must be broken then... I've tried deploying with more CPUs and more RAM, and it still used the default
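For reference, constraints are passed like this (the `juju` binary is stubbed so the snippet runs anywhere; whether the kvm container actually honors them is the open question in this exchange):

```shell
juju() { echo "juju $*"; }  # stand-in for the real binary

# Request 2 cores and 4 GiB of RAM for a new KVM container on machine 0.
juju add-machine kvm:0 --constraints "cpu-cores=2 mem=4G"
```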
[21:18] <cory_fu> marcoceppi: I'm asking about the case when there's a conflict.  That is, two charms use the same interface name for different protocols.  Have we had to deal with resolving such a case before?
[21:18] <marcoceppi> no, we haven't
[21:18] <marcoceppi> charms when they go through review are generally checked to make sure there are no collisions
[21:19] <cory_fu> That's going to get exponentially harder as time goes on
[21:19] <cory_fu> But obviously we should look for interface names that may be too generic when reviewing
[21:19] <marcoceppi> cory_fu: well, an interface registry/testing framework will make light work of that
[21:20] <marcoceppi> cory_fu: well, we have a few generic interfaces already, and they're great! For example, monitors and local-monitors
[21:20] <marcoceppi> super generic, but super flexible
[21:21] <cory_fu> I'm coming at it from the case of having a set of charms that are likely to only ever talk to each other, and wondering how much I should worry about potential conflicts ahead of time
[21:21] <designated> what about being able to specify the network interface used when deploying charms?  as an example, I would like to use two bonded 10Gbe interfaces as bond0 instead of the 1Gbe NICs in the servers.  Is there a way to do this?
[21:22] <designated> or is there logic to use the network interface with the most available bandwidth?
[21:33] <designated> marcoceppi, can you answer my question?
[21:34] <marcoceppi> designated: juju is adding network topology to its constraints, but it's not available at the moment
[21:34] <designated> marcoceppi, so there is currently no way to specify the use of one network interface over another when building relationships?
[21:35] <marcoceppi> designated: not in the interface, but you could add it to your charm's configuration
[21:35] <marcoceppi> designated: openstack does this, it allows you to specify the nic you want to configure your service to talk on
[21:36] <designated> I'm deploying openstack on maas and I just want to tell the charms to use bond0 instead of the single 1Gbe interfaces.
[21:36] <designated> is that fairly easy?
[21:36] <marcoceppi> designated: most openstack charms have that option, let me see if I can find it
[21:37] <marcoceppi> maybe not, I'm not well versed in the openstack charms
[21:37] <marcoceppi> or, maybe it's part of neutron
[21:37] <designated> i see where you can set things like HA interface or vip interface
[21:38] <marcoceppi> right
[21:38] <designated> but ceph for example has nothing about specifying an interface.  how will other openstack services talk to ceph nodes on my 10Gbe interfaces instead of the 1Gbe interfaces?
[21:38] <marcoceppi> designated: well, I assume the 10Gbe are on a different network?
[21:39] <designated> marcoceppi, yes they are
[21:39] <marcoceppi> so if all of openstack has been configured for that network they should reach each other on those interfaces
[21:39] <designated> but each node also has a single 1Gbe NIC on the same network used for maas
[21:39] <designated> what's to stop them from building a relationship on the 1Gbe NICs?
[21:40] <marcoceppi> I can see the issue, so it's advertising itself as on the MAAS network IP range
[21:40] <marcoceppi> and not the bonded networking
[21:40] <designated> right
[21:40] <marcoceppi> This is where juju knowing the network topology will come in handy, I think setting the VIP on ceph, MySQL, etc will make it so the charm advertises itself on that range rather than the 1Gbe nics
[21:40] <designated> i found this: https://lists.ubuntu.com/archives/juju/2014-January/003392.html which provides a workaround when deploying on openstack but doesn't mention deploying on maas
[21:40] <marcoceppi> I believe that's how we overcame it in customer deployments
[21:41] <b0c1> hi
[21:41] <marcoceppi> designated: again, not as well versed in the openstack charms
[21:41] <b0c1> I try to use juju in vagrant... but I have a little problem
[21:41] <marcoceppi> jamespage and the ~openstack-charmers would be much more adept at answering, but most of them are European timezone
[21:41] <b0c1> after the first start everything works, but if I restart the vagrant machine the internal routing (the juju-gui iptables rules) doesn't run
[21:42] <marcoceppi> designated: https://launchpad.net/~openstack-charmers/+members
[21:42] <marcoceppi> designated: you could also mail the juju mailing list about it, so those in other timezones with more knowledge can reply
[21:42] <b0c1> so juju-gui accessible only inside the vagrant machine
[21:42] <marcoceppi> juju@lists.ubuntu.com
[21:43] <designated> marcoceppi, thank you
[21:43] <marcoceppi> designated: np, sorry I couldn't be of more assistance
[21:47] <lazyPower> b0c1: you'll want to suspend the virtual image instead of halting it.
[21:47] <lazyPower> s/image/machine/
[21:48] <lazyPower> when you halt it and fire it back up, the cloud-init routine to setup that host bridging doesn't get executed.
[21:48] <b0c1> hmmm
[21:49] <b0c1> lazyPower: thnx...
[21:49] <b0c1> good to know...
[21:50] <marcoceppi> lazyPower: is there any way to trap the vagrant halt command and warn the user before proceeding
[21:50] <lazyPower> marcoceppi: not that I'm aware of
[21:50] <lazyPower> we could monkeypatch vagrant, but thats not a great idea. its subject to breakage and odd behavior down the road as vagrant updates.
[21:54] <b0c1> lazyPower: only the routing doesn't work after vagrant halt?
[21:54] <lazyPower> b0c1: thats about it. Everything else is a single time setup.
[21:55] <b0c1> why not have the cloud-init script write the routing into rc.local?
[21:55] <b0c1> fast and simple way...
[21:55] <marcoceppi> no, no monkeypatching
[21:55] <lazyPower> marcoceppi: i didn't say it was a great idea, but you asked if there was any way.... thats all i've got :(
[21:56] <marcoceppi> http://i.imgur.com/tnk4BBl.gif
[21:56] <lazyPower> Really, we should probably add the port mapping to the Vagrantfile
[21:56] <lazyPower> that way it will persist through a halt
[21:57] <b0c1> ummm... the internal cloud-init script will write the external Vagrantfile?
[21:57] <b0c1> I think rc.local is much cleaner...
[21:57] <lazyPower> b0c1: no, the port juju gui deploys to doesn't change. So adding it to the vagrantfile is the safe-bet.
[21:58] <lazyPower> the cloud-init script is calling the vagrant api to set that host-guest mapping.
[21:58] <b0c1> yeah but the juju-gui is deployed inside vagrant with a different ip (maybe it's random?)
[21:58] <b0c1> ohh it can?
[21:59] <lazyPower> b0c1: actually it may even be the shell provisioner thats setting it up, i haven't looked at it in a couple of weeks.
[21:59] <lazyPower> if you look in the box directory, you can see all of the provisioner statements.
[21:59] <lazyPower> which should be something like $HOME/.vagrant/boxes
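b0c1's rc.local suggestion would look something like this (a hypothetical config fragment only; the port comes from the juju-gui redirector mentioned below, while the container IP is an assumed placeholder, since the chat notes it may vary):

```shell
# /etc/rc.local (hypothetical fragment): re-create the host-to-container
# port forward that cloud-init only sets up on first boot.
iptables -t nat -A PREROUTING -p tcp --dport 6080 \
  -j DNAT --to-destination 10.0.3.1:6080
exit 0
```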
[22:05] <b0c1> lazyPower: maybe I don't understand the problem... but the juju gui runs in a separate machine inside the vagrant machine...
[22:06] <lazyPower> b0c1: there's a juju-gui redirector service that gets deployed on first run.
[22:07] <lazyPower> when you watch the scrolling output of a first run, it explicitly calls out its setup declaration "deploying the juju gui redirector"
[22:07] <lazyPower> and that's always bound to 6080 on the VM machine.
[22:07] <b0c1> lazyPower: in this case I can access the internal vagrant juju-gui as localhost in the main vagrant machine
[22:07] <lazyPower> i don't know if it's in the upstart scripts to restart after a shutdown. I haven't gotten that far into it.
[22:08] <lazyPower> but really, aside from not being able to access it via a port redirect - you can also use sshuttle to route to the gui as a workaround. Just hit the IP of the juju-gui instance and you should be right back where you left off.
[22:08] <b0c1> but after a restart I can't access it, I can only access it when I connect directly to the internal vagrant ip
[22:09] <b0c1> hmmm.... I never used sshuttle...
[22:10] <lazyPower> sebas5384: still kicking around/
[22:23] <sebas538_> lazyPower: yeah! but i'm planning to leave early today :(
[22:23] <sebas538_> here is +1h hehe
[22:24] <lazyPower> sebas5384: all good my friend. Just poking you since we were talking about linking up today
[22:24] <sebas5384> lazyPower: what about monday?
[22:24] <lazyPower> I tell ya what sebas5384, with tomorrow being friday and nobody works late on friday - lets shoot for next week
[22:24] <lazyPower> yeah
[22:25] <lazyPower> Lets try for monday to sync on a vagrant plugin and cook up some awesome sauce
[22:25] <sebas5384> lazyPower: that looks awesome for me!
[22:25] <sebas5384> same bat time?
[22:25] <sebas5384> 7pm EDT
[22:26] <sebas5384> ?
[22:26] <thumper> lazyPower: so about this pending python-django charm
[22:26] <thumper> lazyPower: what's happening with that?
[22:26]  * thumper wants to deploy django
[22:39] <lazyPower> thumper: have a MP for me to look at and speak to?
[22:56] <Egoist> why does a unit not appear in relation-list even if relation-set was executed?
[22:56] <Egoist> because machine is busy handling some kind of hook?
[23:01] <marcoceppi> Egoist: the unit on the other end has to have successfully called relation-joined for that relation before the hook which you're calling relation-list in is executed
[23:02] <marcoceppi> Question about subordinates. If I add two subordinates to a service, and one calls open-port 80 and the other does a close-port 80 what happens?
[23:03] <marcoceppi> assuming open was called first and closed called next
[23:03] <marcoceppi> Does juju know not to close the port since it was opened by another charm?
[23:07] <marcoceppi> btw, ppetraki that relation-get thing was in the docs this whole time
[23:07] <marcoceppi> I went to update it and found it in there
[23:07] <marcoceppi> https://juju.ubuntu.com/docs/authors-hook-environment.html#relation-get
[23:15] <marcoceppi> what does the app-servers category entail?
[23:21] <jose> http://www.reddit.com/r/Ubuntu/comments/27cfhw/ibm_app_throwdown_canonicals_juju_selected_as_top/chzmn1y
[23:21] <jose> happy people is what Juju gets
[23:25] <Egoist> marcoceppi: Sorry but i don't get it
[23:25] <lazyPower> wooo!
[23:26] <lazyPower> marcoceppi: depending on which subordinate executes last, its result wins.
[23:26] <lazyPower> i dont think there's any kind of notion in juju of the open/close ports from one service locking it out to another.
[23:26] <marcoceppi> lame