=== thumper-afk is now known as thumper
=== defunctzombie is now known as defunctzombie_zz
=== amithkk_ is now known as amithkk
=== stevanr_ is now known as stevanr
[09:55] Hi, after creating a service with several units, 'juju deploy -n 3 mysql' (which creates 3 VMs), I put a different file on each of them (to keep their identity), then I remove two units (their machines go into the machine pool), add a unit to the service (its machine comes from the pool), and finally I deploy -n wordpress (a new service), so one machine comes from the pool and the last one is created fresh (pool now empty)
[09:56] I was surprised to find that the last service (wordpress) was placed on a machine that previously hosted a mysql unit. Isn't this dangerous?
[09:57] Indeed, if a hook does not clean a unit up properly (logs, libs, etc.), it means that in the future we can end up with a service running on inconsistent VMs
[09:59] So, is there a way to create a pool per service, to specify which VM from the pool can be added to a service, or to force deploy to create a new VM instead of using one from the pool?
[10:03] nfoata: #993288
[10:05] nfoata: in general, destroy-service won't remove the machine, so it will be reused upon next deploy
[10:05] nfoata: generally you'd need to terminate said machine before deploying if you wanted a fresh machine
[10:06] nfoata: although as the bug ticket says, you could deploy a null charm to a deallocated machine if you didn't want to destroy it
[10:07] nfoata: presumably using constraints you may be able to force the deployment of a new machine even where others are in the pool
[10:11] bbcmicrocomputer: thanks, perfect, so this problem is already knows (bug ticket), and I will look at constraints to force the deployment of a new machine (for safety, to be sure the machines are identical)
[10:11] knows = known
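To make the workaround discussed above concrete, here is a minimal sketch of forcing a genuinely fresh machine, assuming the pyjuju (0.x) CLI in use at the time; the machine numbers and the memory constraint are illustrative and should be checked against your juju version:

    juju remove-unit mysql/1 mysql/2               # the units go away, but their machines return to the pool
    juju status                                    # note which machine numbers are now left without units
    juju terminate-machine 2 3                     # release those machines so the next deploy cannot reuse them
    juju deploy --constraints "mem=2G" wordpress   # alternatively, constraints the pooled machines do not satisfy force a new machine

Either terminating the leftover machines or applying constraints avoids handing wordpress a machine with mysql's leftover state on it.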
=== danilos_ is now known as danilos
=== BradCrittenden is now known as bac
=== wedgwood_away is now known as wedgwood
=== vednis is now known as mars
[14:44] jhf: so I'm looking into the future here, but we could so totally ship this in the charm: http://blog.jelastic.com/2013/06/06/liferay-cluster/
=== defunctzombie_zz is now known as defunctzombie
[14:49] hah… yeah.. that blog post has generated some buzz in our community today
[15:24] awesome, that seems like the perfect next step for pimping out the liferay charm
[15:45] pimping out :)
[15:45] is wayne brady gonna have to cluster a juju??
[16:07] jhf: lol
[16:07] nice one
[16:22] mgz, around? did you ever get to the bottom of how to upload juju-core tools into an offline environment?
=== defunctzombie is now known as defunctzombie_zz
[17:57] Hello, can someone help me figure out why juju can't deploy an instance after the initial bootstrap?
[18:00] jpds: Sure, what's going on?
=== defunctzombie_zz is now known as defunctzombie
[18:01] marcoceppi: It just shows 1: instance-id: pending.
[18:02] jpds: what provider?
[18:02] marcoceppi: MAAS.
[18:02] ah
[18:04] jpds: someone else had a similar issue, where they could bootstrap but not deploy. I wasn't able to help them (I haven't used MAAS in a long time)
[18:04] There's a maas.log somewhere that might reveal more information, let me see if I can find where that log is
[18:04] marcoceppi: Nothing interesting there.
=== defunctzombie is now known as defunctzombie_zz
[18:05] What version of juju are you using?
[18:05] marcoceppi: ppa:juju/pkgs.
[18:06] juju or juju-core?
=== defunctzombie_zz is now known as defunctzombie
[18:06] marcoceppi: juju.
[18:07] So, there's a provisioner log on the bootstrap node, which is responsible for commissioning the machines; you can use juju ssh 0 to get to the bootstrap node. I believe the log is in /var/log/juju/ and it might provide more details as to where the provisioner failed
[18:07] jpds: again, I haven't used juju+maas in quite a long time, so I'm just trying to shepherd you to where you can find info on why it failed
[18:08] juju.agents.provision@ERROR: Cannot get machine list
[18:08] Hmm.
[18:10] What would it be trying to reach?
[18:12] I can ping all the relevant IPs I can think of.
[18:13] jpds: are the machines assigned to your maas user for provisioning, etc?
[18:13] Could have just been a timeout, try another deployment to see if it also is listed as pending
[18:14] marcoceppi: I've been destroying the environment for hours.
[18:14] jpds: okay, then that's not it, sorry!
[18:14] marcoceppi: And re-bootstrapping; and they are all under my user.
[18:15] jpds: could you pastebin your entire log? I want to see if anything jumps out at me
[18:16] http://pastebin.ubuntu.com/5742654/
[18:16] jpds: what do you have for your maas-server field in environments.yaml?
[18:17] marcoceppi: maas-server: 'http://192.168.125.10/MAAS'
[18:17] jpds: add the port in there, http://192.168.125.10:80/MAAS, destroy and re-bootstrap, try again
[18:17] that integer error looks really familiar, I think that's the cause of it
[18:19] OK, reinstalling the system.
=== defunctzombie is now known as defunctzombie_zz
[18:33] marcoceppi: Well, that worked.
[18:33] jpds: awesome. I'm going to update the docs with that caveat. I know you're not the first to spend too long trying to figure that out
[18:35] hey arosales
[18:35] any word on new docs landing?
[18:43] jcastro, I think we are targeting end of month for them to officially be the standard.
[19:18] wedgwood, around?
[19:35] adam_g: yep
[19:38] wedgwood, nvm. was gonna modify core.apt_install() but decided against it. couldn't find any charms that are using it as-is. know of any?
[19:39] Hi -- anyone seen this error trying to bootstrap on openstack? http://paste.ubuntu.com/5742835/
[20:25] dpb1: is there a security group already named "juju-precise" in your account?
[20:25] Possibly left over from a bad teardown
[20:26] marcoceppi: so, apparently goose does not deal with the change of security group ids from id to uuid in grizzly (I think).
[20:26] it's never that simple
=== BradCrittenden is now known as bac____
=== wedgwood is now known as wedgwood_away
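For reference, a minimal sketch of the environments.yaml stanza matching the MAAS fix that worked above (an explicit :80 port in the maas-server URL). The field names are assumed from the juju 0.x MAAS provider of the time; the address is the one from the log, and the OAuth key and admin secret are placeholders:

    environments:
      maas:
        type: maas
        # the explicit port is what avoided the integer-parsing error seen in the provisioner log
        maas-server: 'http://192.168.125.10:80/MAAS'
        maas-oauth: '<your-MAAS-API-key>'    # placeholder
        admin-secret: '<any-password>'       # placeholder
        default-series: precise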