[02:08] <gbc> anyone out there tonight? hoping to get help with a basic question ... been searching, but can't find a straight answer
[02:08] <gbc> in general, can you deploy multiple charms on the same server without containers?
[02:09] <gbc> more specifically, can you deploy an openstack all-in-one type system without containers, using juju charms?
[02:26] <pmatulis> gbc, yes, using the '--to' option of the 'juju deploy' command
[02:26] <pmatulis> https://jujucharms.com/docs/stable/charms-deploying#deploying-to-specific-machines-and-containers
[02:27] <gbc> pmatulis, yes, but are the openstack charms designed in such a way that they can co-reside on the same server?
[02:29] <gbc> 'cause when I do this, it seems like they're conflicting with each other
[02:29] <pmatulis> oh, that i do not know
[02:30] <pmatulis> the charms should be documented
[02:30] <gbc> yeah, you'd think so
[02:30] <gbc> the charms don't say that it *can't* be done, so I assumed that means that it can be done ...
[02:30] <pmatulis> gbc, have you tried conjure-up? i believe that's what it does. puts everything on one system. but it uses LXD
[02:31] <pmatulis> (i'm quite sure, haven't tried in a long time)
[02:32] <gbc> i'm trying to avoid using containers (reasons) ...
[02:32] <gbc> https://askubuntu.com/questions/506647/juju-and-openstack-provisioning
[02:32] <gbc> that link suggests that juju itself doesn't care whether charms co-exist on the same server, but it's left up to charm developers ...
[02:33] <gbc> also says most charms "assume they own the whole machine" ...
[02:33] <gbc> and every example I see uses containers ...
[02:33] <gbc> so i'm putting 2&2 together and thinking that the openstack charms don't combine well on the same server ...
[02:34] <gbc> and that it's just not explicitly stated
[02:34] <gbc> anyway, thanks for the response
[02:34] <pmatulis> np, good luck
[02:42] <gbc> for the record (especially in case anyone comes across this while searching in the future) ...
[02:43] <gbc> here's another link that says "Using --to flag without containerization is a really bad idea...Basically you're layering a ton of services on top of each other that all expect to own the machine."
[02:43] <gbc> https://askubuntu.com/questions/459992/hook-failed-shared-db-relation-changed-when-using-openstack-in-the-same-syste
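For completeness, the container-based placement that the links above recommend instead; each charm gets its own LXD container on the same physical machine (charm names illustrative):

```shell
# Same physical machine, but each charm isolated in its own LXD
# container, so the charms don't fight over machine-wide state
juju deploy mysql --to lxd:0
juju deploy keystone --to lxd:0
juju deploy glance --to lxd:0
```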
[07:46] <BlackDex> Hello there. I'm having some issues with LXD/LXC containers when using juju (with MAAS). The problem is that the containers (lxc on 14.04 and lxd on 16.04) aren't getting their DHCP address because UFW is enabled and blocking the DHCP request from passing to the MAAS node. It blocks these requests on the container host. This in turn prevents juju from configuring the container. Is this a known problem? I
[07:46] <BlackDex> can't find a bug/issue about this. If I disable UFW on the container host it seems to work, but UFW seems to get re-enabled sometimes even after I disable it with `sudo ufw disable`
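A possible workaround, assuming the blocked traffic is hitting UFW's forward chain on the container host (untested against this exact setup):

```shell
# Containers sit behind a bridge on the host, so their DHCP requests
# traverse UFW's FORWARD chain, which drops by default.
# Option 1: accept forwarded traffic globally.
sudo sed -i 's/^DEFAULT_FORWARD_POLICY=.*/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw
sudo ufw reload

# Option 2 (narrower, ufw >= 0.34): allow only forwarded DHCP.
sudo ufw route allow proto udp from any port 68 to any port 67
```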
[10:59] <Coompiax> Hi guys. Could someone please help us? We are deploying OpenStack using the Juju charm bundle "openstack-base". We have 2 subnets: 10.0.0.0/24 for MAAS and 192.168.0.0/16 for our private cloud/local LAN/public-facing services. When deploying, we get "no obvious space for x/lxd/y, host has spaces "cloud", "maas"". So we need to deploy these charms to the correct spaces.
[11:01] <Coompiax> When we use the CLI with `juju deploy --to lxd:0 --constraints spaces=cloud`, the lxd container gets created and all is well. But what about the bindings? Especially when we get to the openstack bundle, which has a bunch of applications, each with its own set of bindings.
[11:01] <Coompiax> So basically, we are struggling to get to grips on how to allocate applications/units/containers to the correct spaces.
[11:02] <Coompiax> Any help would be greatly appreciated. We can simplify this by just using haproxy as an example.
[11:17] <Coompiax> Hello?
[11:35] <Coompiax> Anyone?
[12:05] <Cooooompiax> .
[14:08] <petevg> Darn. Coompiax left. I was going to point them at https://jujucharms.com/docs/2.3/charms-bundles#binding-endpoints-of-applications-within-a-bundle
[14:09] <petevg> On the command line, it's the --bind param, formatted like --bind "somebindname=somespacename otherbindname=otherspacename"
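Putting petevg's two answers together for the question above; the space names "cloud" and "maas" come from the question, and the haproxy endpoint names are illustrative (check the charm's metadata.yaml for the real ones):

```shell
# Deploy into an LXD container on machine 0, constrained to the
# "cloud" space, binding individual endpoints to spaces
# (endpoint names are illustrative -- see the charm's metadata.yaml)
juju deploy haproxy --to lxd:0 \
    --constraints spaces=cloud \
    --bind "website=cloud munin=maas"
```

In a bundle, the same endpoint-to-space mapping goes under each application's `bindings:` key, per the doc petevg linked.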
[16:52] <bobeo> what's the easiest way for me to move juju instances and containers from one machine to another? I just realized I'm wasting a lot of resources on a system that I could better utilize
[16:55] <zeestrat> bobeo: easiest would be to add units to a new machine and remove units from the old if the charms support that.
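A sketch of zeestrat's suggestion; the application name and machine numbers are illustrative, and it only works if the charm tolerates scaling out and back in:

```shell
juju add-machine               # provision the target machine; note its number, say 5
juju add-unit myapp --to 5     # bring up a replacement unit there
juju remove-unit myapp/0       # retire the unit on the old machine
juju remove-machine 3          # then reclaim the old machine once it's empty
```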
[17:58] <el_tigro1> I set up an aws controller some time ago and have a couple of models running. Haven't touched anything in a while. Today I decided to make some changes but my aws credentials had changed. So I used `juju autoload-credentials` and then `juju update-credential aws default`. Everything went smoothly. To warm myself back up to juju, I just did a simple `juju add-machine` but it's been stuck in the "pending" state for about an hour. So I
[17:58] <el_tigro1> connected to the controller with `juju ssh -m admin/controller 0` to check out the logs in '/var/log/juju/machine-0.log'. I see the same error message every 3 seconds:
[17:58] <el_tigro1> ERROR juju.worker.dependency engine.go:546 "compute-provisioner" manifold worker returned unexpected error: failed to process updated machines: failed to get all instances from broker: AWS was not able to validate the provided access credentials (AuthFailure)
[18:01] <el_tigro1> First thing: I'm sure the credentials are correct, but I figured I'd use `juju remove-machine --force <machine-number>` to stop the spamming. That returns success with "removing machine 2", however `juju status` still shows it in the pending state, and I still see the same error message in the controller logs every 3 seconds.
[18:05] <cory_fu> el_tigro1: I'm pretty sure that loading new credentials only applies to new controllers or models.  The existing controller is going to continue to try to use the old credentials.  I do not know if there's a way to tell it to use new ones
[18:05] <el_tigro1> first question is how can I force juju to cancel the operation. BTW I'm running juju version 2.3.2-xenial-amd64
[18:05] <el_tigro1> cory_fu: I thought that's what `juju update-credential` is for
[18:06] <cory_fu> el_tigro1: You seem to be right.  I haven't used that before.
[18:07] <cory_fu> el_tigro1: As for cancelling, you could try just doing a `juju remove-machine --force <num>`
[18:07] <cory_fu> Or possibly --keep-instance if that fails
[18:07] <el_tigro1> cory_fu: Thanks but I tried that already. See my 4th paragraph :)
[18:08] <cory_fu> el_tigro1: Yeah, sorry I missed that.  What about --keep-instance?
[18:09] <el_tigro1> cory_fu: just tried it. No luck
[18:10] <el_tigro1> Still getting the same error message every 3s !
[18:10] <cory_fu> balloons: Any suggestions?  ^
[18:11] <cory_fu> el_tigro1: Only other thing I can think of would be to try restarting the jujud process on the controller, but that seems a bit heavy-handed
[18:11] <cory_fu> And if it didn't get the updated credential in the right place, it probably wouldn't help
[18:11] <el_tigro1> cory_fu: Thanks for the suggestion. I guess I could give it a shot
[18:16] <el_tigro1> cory_fu: I ran `sudo systemctl restart jujud-machine-0.service` and the pending machines have vanished from the model. Thanks!!
[18:17] <cory_fu> Oh, nice.  I wonder if that will help it pick up the new credential as well?
[18:17] <el_tigro1> About to try it out :D
[18:17] <cory_fu> el_tigro1: It would probably be worth filing a bug for that.
[18:18] <el_tigro1> cory_fu: success!
[18:18] <cory_fu> That's great.  I wonder why it got into a bad state
[18:19] <cory_fu> Maybe it was a backlog of failed operations that was blocking resolution?
[18:20] <el_tigro1> so I guess the sequence of commands to update credentials on the controller requires: `juju autoload-credentials` and `juju update-credential aws default` on the client, and then `sudo systemctl restart jujud-machine-0.service` on the controller
[18:20] <el_tigro1> as a workaround
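The full sequence that worked, gathered in one place (controller machine 0 and the AWS credential named "default", as above):

```shell
# On the client:
juju autoload-credentials            # scan the environment for the new AWS keys
juju update-credential aws default   # push the updated credential to the controller
# On the controller, restart jujud so the provisioner picks it up
# (a workaround; this arguably shouldn't be necessary):
juju ssh -m admin/controller 0 'sudo systemctl restart jujud-machine-0.service'
```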
[18:20] <el_tigro1> cory_fu: I wonder as well
[22:54] <bobeo> o/
[22:55] <bobeo> whats the constraint option for storage space? Is it disk?
[22:55] <bobeo> juju deploy lxd --constraints disk=400G ?
[22:59] <bobeo> I want to be able to deploy an lxd system to build web applications on, but it just gave me an error, says "cannot use --constraints on a subordinate application"
[23:03] <rick_h> beisner: so the disk constraint means you need a root disk of 400G from the underlying cloud, or "find me a machine in maas that has a 400G disk"
[23:03] <rick_h> sorry, bobeo ^
[23:04] <rick_h> bobeo: and normally you can't use constraints on something that's a subordinate of another charm, because it's the principal charm that the machine comes up to match. The subordinate is installed afterwards as a kind of "add-on" chunk of software
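For the record, the constraint key bobeo was after is `root-disk`, not `disk`, and it has to go on a principal charm; the "ubuntu" charm here is just an illustrative principal to hold the machine:

```shell
# Request a machine with a 400G root disk via a principal charm;
# constraints can't be set on subordinates like the lxd charm
juju deploy ubuntu --constraints root-disk=400G
```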