[09:55] <nfoata> Hi, after creating a service with several units via 'juju deploy -n 3 mysql' (which creates 3 VMs), I put a different file on each of them (to keep their identity). Then I remove two units (their machines go into the machine pool), add a unit back to the service (its machine comes from the pool), and finally I 'juju deploy -n 2 wordpress' (a new service), so one machine comes from the pool and the last one is created (pool empty)
[09:56] <nfoata> I was surprised to see one of the wordpress units land on an old mysql machine. Isn't this dangerous?
[09:57] <nfoata> Indeed, if a hook does not clean up a unit properly (logs, libs, etc.), it means that in the future we can end up with a service whose VMs are not identical
[09:59] <nfoata> So, is there a way to create a pool per service, to specify which VM from the pool gets added to the service, or to force deploy to create a new VM instead of using one from the pool?
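(A minimal sketch of the sequence nfoata describes, using the python-juju CLI of the time; the unit numbers are illustrative, not from the conversation:)

    juju deploy -n 3 mysql      # provisions 3 fresh machines, one unit each
    juju remove-unit mysql/1
    juju remove-unit mysql/2    # the two machines return to the pool, uncleaned
    juju add-unit mysql         # new unit lands on one pooled machine
    juju deploy -n 2 wordpress  # one unit reuses the last pooled (ex-mysql)
                                # machine, the other gets a newly created one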
[10:03] <bbcmicrocomputer> nfoata: see bug #993288
[10:05] <bbcmicrocomputer> nfoata: in general, destroy-service won't remove the machine, so it will be reused upon next deploy
[10:05] <bbcmicrocomputer> nfoata: generally you'd need to terminate said machine before deploying if you wanted a fresh machine
[10:06] <bbcmicrocomputer> nfoata: although as the bug ticket says, you could deploy a null charm to a deallocated machine if you didn't want to destroy it
[10:07] <bbcmicrocomputer> nfoata: presumably using constraints you may be able to force the deployment of a new machine even where others are in the pool
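(A hedged sketch of the two workarounds bbcmicrocomputer mentions; the machine number and constraint value are illustrative, not from the conversation:)

    juju status                       # find the idle machine's number, e.g. 2
    juju terminate-machine 2          # release it so the next deploy provisions fresh
    # or try to steer the provisioner past the pool with constraints:
    juju deploy --constraints "mem=2G" wordpress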
[10:11] <nfoata> bbcmicrocomputer: thanks, perfect, so this problem is already known (bug ticket), and I will look at constraints to force the deployment of a new machine (for safety, to be sure the machines are identical)
[14:44] <jcastro> jhf: so, looking into the future, we could so totally ship this in the charm: http://blog.jelastic.com/2013/06/06/liferay-cluster/
[14:49] <jhf> hah… yeah.. that blog post has generated some buzz in our community today
[15:24] <m_3> awesome, that seems like the perfect next step for pimping out the liferay charm
[15:45] <jhf> pimping out :)
[15:45] <jhf> is wayne brady gonna have to cluster a juju??
[16:07] <robbiew> jhf: lol
[16:07] <robbiew> nice one
[16:22] <jamespage> mgz, around? did you ever get to the bottom of how to upload juju-core tools into an offline environment?
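(For context, juju-core of that era offered a bootstrap flag that builds the tools locally and uploads them into the environment's storage, which is one possible route for offline setups; exact behaviour varied by version, so treat this as a hint rather than the definitive answer to jamespage's question:)

    juju bootstrap --upload-tools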
[17:57] <jpds> Hello, can someone help me figure out why juju can't deploy an instance after the initial bootstrap?
[18:00] <marcoceppi> jpds: Sure, what's going on?
[18:01] <jpds> marcoceppi: It just shows 1: instance-id: pending.
[18:02] <marcoceppi> jpds: what provider?
[18:02] <jpds> marcoceppi: MAAS.
[18:02] <marcoceppi> ah
[18:04] <marcoceppi> jpds: someone else had a similar issue, where they could bootstrap but not deploy. I wasn't able to help them (I haven't used MAAS in a long time)
[18:04] <marcoceppi> There's a maas.log somewhere that might reveal more information, let me see if I can find where that log is
[18:04] <jpds> marcoceppi: Nothing interesting there.
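(For reference, the MAAS server of that era logged under /var/log/maas/ on the MAAS host; the exact file names varied by version, so this path is from memory:)

    sudo tail -f /var/log/maas/maas.log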
[18:05] <marcoceppi> What version of juju are you using?
[18:05] <jpds> marcoceppi: ppa:juju/pkgs.
[18:06] <marcoceppi> juju or juju-core?
[18:06] <jpds> marcoceppi: juju.
[18:07] <marcoceppi> So, there's a provisioner log on the bootstrap node, which is responsible for commissioning the machines. You can use 'juju ssh 0' to get to the bootstrap node. I believe the log is in /var/log/juju/ and it might provide more details as to where the provisioner failed
[18:07] <marcoceppi> jpds: again, I haven't used juju+maas in quite a long time, so I'm just trying to shepherd you to where you can find info on why it failed
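(A short sketch of the steps marcoceppi describes; the provisioner log's exact file name is from memory, so check the directory listing first:)

    juju ssh 0                               # shell onto the bootstrap node
    ls /var/log/juju/                        # agent logs live here
    less /var/log/juju/provision-agent.log   # look for the provisioner's errors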
[18:08] <jpds> juju.agents.provision@ERROR: Cannot get machine list
[18:08] <jpds> Hmm.
[18:10] <jpds> What would it be trying to reach?
[18:12] <jpds> I can ping all the relevant IPs I can think of.
[18:13] <marcoceppi> jpds: are the machines assigned to your maas user for provisioning, etc?
[18:13] <marcoceppi> Could have just been a timeout, try another deployment to see if it also is listed as pending
[18:14] <jpds> marcoceppi: I've been destroying the environment for hours.
[18:14] <marcoceppi> jpds: okay, then that's not it, sorry!
[18:14] <jpds> marcoceppi: And re-bootstrapping; and they are all under my user.
[18:15] <marcoceppi> jpds: could you pastebin your entire log? I want to see if anything jumps out at me
[18:16] <jpds> http://pastebin.ubuntu.com/5742654/
[18:16] <marcoceppi> jpds: what do you have for your maas-server field in environments.yaml ?
[18:17] <jpds> marcoceppi: maas-server: 'http://192.168.125.10/MAAS'
[18:17] <marcoceppi> jpds: add the port in there, http://192.168.125.10:80/MAAS, destroy and re-bootstrap, try again
[18:17] <marcoceppi> that integer error looks really familiar, I think that's the cause of it
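(The resulting environments.yaml stanza would look roughly like this; only the maas-server value comes from the conversation, the other fields are the usual MAAS-provider boilerplate of that era and the placeholder values are illustrative:)

    environments:
      maas:
        type: maas
        maas-server: 'http://192.168.125.10:80/MAAS'   # port 80 made explicit
        maas-oauth: '<your-maas-api-key>'
        admin-secret: '<any-secret>'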
[18:19] <jpds> OK, reinstalling the system.
[18:33] <jpds> marcoceppi: Well, that worked.
[18:33] <marcoceppi> jpds: awesome. I'm going to update the docs with that caveat. I know you're not the first to spend too long trying to figure that out
[18:35] <jcastro> hey arosales
[18:35] <jcastro> any word on new docs landing?
[18:43] <arosales> jcastro, I think we are targeting end of month for them to officially be the standard.
[19:18] <adam_g> wedgwood, around?
[19:35] <wedgwood> adam_g: yep
[19:38] <adam_g> wedgwood, nvm. was gonna modify core.apt_install() but decided against it. couldn't find any charms that are using it as-is. know of any?
[19:39] <dpb1> Hi -- anyone seen this error trying to bootstrap on OpenStack? http://paste.ubuntu.com/5742835/
[20:25] <marcoceppi> dpb1: is there a security group already named "juju-precise" in your account?
[20:25] <marcoceppi> Possibly left over from a bad teardown
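(A quick way to check for and remove such a leftover group, assuming the nova client of that era and that your OpenStack credentials are already loaded in the shell environment:)

    nova secgroup-list                  # look for a stale "juju-precise" entry
    nova secgroup-delete juju-precise   # remove it before re-bootstrapping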
[20:26] <dpb1> marcoceppi: so, apparently goose does not deal with the change of security group IDs from integers to UUIDs in Grizzly (I think).
[20:26] <ahasenack> it's never that simple