[02:08] anyone out there tonight? hoping to get help with a basic question ... been searching, but can't find a straight answer
[02:08] in general, can you deploy multiple charms on the same server without containers?
[02:09] more specifically, can you deploy an openstack all-in-one type system without containers, using juju charms?
[02:26] gbc, yes, using the '--to' option of the 'juju deploy' command
[02:26] https://jujucharms.com/docs/stable/charms-deploying#deploying-to-specific-machines-and-containers
[02:27] pmatulis, yes, but are the openstack charms designed in such a way that they can co-reside on the same server?
[02:29] 'cause when I do this, it seems like they're conflicting with each other
[02:29] oh, that I do not know
[02:30] the charms should be documented
[02:30] yeah, you'd think so
[02:30] the charms don't say that it *can't* be done, so I assumed that means it can be done ...
[02:30] gbc, have you tried conjure-up? I believe that's what it does: puts everything on one system. But it uses LXD
[02:31] (I'm fairly sure, but haven't tried it in a long time)
[02:32] I'm trying to avoid using containers (reasons) ...
[02:32] https://askubuntu.com/questions/506647/juju-and-openstack-provisioning
[02:32] that link suggests that juju itself mostly doesn't care whether charms co-exist on the same server, but that it's left up to charm developers ...
[02:33] it also says most charms "assume they own the whole machine" ...
[02:33] and every example I see uses containers ...
[02:33] so I'm putting two and two together and thinking that the openstack charms don't combine well on the same server ...
[02:34] and that it's just not explicitly stated
[02:34] anyway, thanks for the response
[02:34] np, good luck
[02:42] for the record (especially in case anyone comes across this while searching in the future) ...
[02:43] here's another link that says "Using --to flag without containerization is a really bad idea...Basically you're layering a ton of services on top of each other that all expect to own the machine."
[02:43] https://askubuntu.com/questions/459992/hook-failed-shared-db-relation-changed-when-using-openstack-in-the-same-syste
[07:46] Hello there. I'm having some issues with LXD/LXC containers when using juju (with MAAS). The problem is that the containers (lxc on 14.04 and lxd on 16.04) aren't getting their DHCP address because UFW is enabled and blocking the DHCP request from reaching the MAAS node. It blocks these requests on the container host, which in turn prevents juju from configuring the container. Is this a known problem? I can't find a bug/issue about it. If I disable UFW on the container host it seems to work, but UFW seems to get re-enabled sometimes even after I disable it with `sudo ufw disable`
=== frankban|afk is now known as frankban
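
A note on the UFW/DHCP problem described at [07:46] above: rather than disabling UFW on the container host entirely, it may be enough to allow the DHCP traffic through explicitly. The sketch below is only a guess at a workaround, not a confirmed fix; br-eth0 is a placeholder for whichever bridge the containers attach to, and the routed rule assumes a ufw release that supports `ufw route`.

    # let DHCP traffic (UDP 67/68) arriving on the container bridge reach the host
    sudo ufw allow in on br-eth0 to any port 67 proto udp
    sudo ufw allow in on br-eth0 to any port 68 proto udp
    # container traffic that is forwarded across the host goes through ufw's
    # routed rules, so allow forwarding on the bridge as well
    sudo ufw route allow in on br-eth0 out on br-eth0
    # blunter alternative: allow all forwarded traffic
    sudo ufw default allow routed

Whether this helps depends on whether the DHCP requests are hitting UFW's input or forward chain on that host.
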
[10:59] Hi Guys. Please could someone help us? We are deploying OpenStack using the Juju charm bundle "openstack-base". We have two main subnets: 10.0.0.0/24 for MAAS and 192.168.0.0/16 for our private cloud/local LAN/public-facing services. When deploying, we get "no obvious space for x/lxd/y, host has spaces 'cloud', 'maas'", so we need to deploy these charms to the correct spaces.
[11:01] When we use the CLI with `juju deploy --to lxd:0 --constraints spaces=cloud`, the lxd container gets created and all is well. But what about the bindings? Especially when we get to the openstack bundle, which has a bunch of applications, each with its own set of bindings.
[11:01] So basically, we are struggling to get to grips with how to allocate applications/units/containers to the correct spaces.
[11:02] Any help would be greatly appreciated. We can simplify this by just using haproxy as an example.
[11:17] Hello?
[11:35] Anyone?
[12:05] .
[14:08] Darn. Coompiax left. I was going to point them at https://jujucharms.com/docs/2.3/charms-bundles#binding-endpoints-of-applications-within-a-bundle
[14:09] On the command line, it's the --bind param, formatted like --bind "somebindname=somespacename otherbindname=otherspacename"
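
For anyone who lands on this with the same spaces/bindings question: the docs link at [14:08] covers the bundle syntax, and below is a rough sketch of how those bindings might be expressed in a bundle, using haproxy (as suggested at [11:02]) and the space names from the question. The charm URL, the "website" endpoint name and the machine mapping are only illustrative, and on older Juju releases the top-level key may be "services:" rather than "applications:".

    applications:
      haproxy:
        charm: cs:haproxy
        num_units: 1
        to: ["lxd:0"]
        bindings:
          website: cloud    # endpoint name is illustrative; list each endpoint you want pinned to a space
    machines:
      "0":
        constraints: spaces=cloud

Deploying that file (here called haproxy-bundle.yaml) with `juju deploy ./haproxy-bundle.yaml` should place the unit in an lxd container on a machine in the 'cloud' space; the single-application CLI equivalent would be roughly `juju deploy haproxy --to lxd:0 --constraints spaces=cloud --bind "website=cloud"`.
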
=== agprado_ is now known as agprado
[16:52] df
[16:52] what's the easiest way for me to move juju instances and containers from one machine to another? I just realized I'm wasting a lot of resources on a system that I could better utilize
[16:55] bobeo: easiest would be to add units to a new machine and remove units from the old, if the charms support that.
=== frankban is now known as frankban|afk
[17:58] I set up an aws controller some time ago and have a couple of models running. Haven't touched anything in a while. Today I decided to make some changes, but my aws credentials had changed. So I used `juju autoload-credentials` and then `juju update-credential aws default`. Everything went smoothly. To warm myself back up to juju, I just did a simple `juju add-machine`, but it's been stuck in the "pending" state for about an hour.
[17:58] So I connected to the controller with `juju ssh -m admin/controller 0` to check out the logs in '/var/log/juju/machine-0.log'. I see the same error message every 3 seconds:
[17:58] ERROR juju.worker.dependency engine.go:546 "compute-provisioner" manifold worker returned unexpected error: failed to process updated machines: failed to get all instances from broker: AWS was not able to validate the provided access credentials (AuthFailure)
[18:01] First thing is that I'm sure the credentials are correct, but I figure I'll use `juju remove-machine --force`
[18:01] el_tigro1: I'm pretty sure that loading new credentials only applies to new controllers or models. The existing controller is going to continue to try to use the old credentials. I do not know if there's a way to tell it to use new ones
[18:05] First question is how can I force juju to cancel the operation. BTW I'm running juju version 2.3.2-xenial-amd64
[18:05] cory_fu: I thought that's what `juju update-credential` is for
[18:06] el_tigro1: You seem to be right. I haven't used that before.
[18:07] el_tigro1: As for cancelling, you could try just doing a `juju remove-machine --force <machine>`
[18:07] Or possibly --keep-instance if that fails
[18:07] cory_fu: Thanks, but I tried that already. See my 4th paragraph :)
[18:08] el_tigro1: Yeah, sorry I missed that. What about --keep-instance?
[18:09] cory_fu: just tried it. No luck
[18:10] Still getting the same error message every 3s!
[18:10] balloons: Any suggestions? ^
[18:11] el_tigro1: The only other thing I can think of would be to try restarting the jujud process on the controller, but that seems a bit heavy-handed
[18:11] And if it didn't get the updated credential in the right place, it probably wouldn't help
[18:11] cory_fu: Thanks for the suggestion. I guess I could give it a shot
[18:16] cory_fu: I ran `sudo systemctl restart jujud-machine-0.service` and the pending machines have vanished from the model. Thanks!!
[18:17] Oh, nice. I wonder if that will help it pick up the new credential as well?
[18:17] About to try it out :D
[18:17] el_tigro1: It would probably be worth filing a bug for that.
[18:18] cory_fu: success!
[18:18] That's great. I wonder why it got into a bad state
[18:19] Maybe it was a backlog of failed operations that was blocking resolution?
[18:20] so I guess the sequence of commands to update credentials on the controller requires `juju autoload-credentials` and `juju update-credential aws default` on the client, and then `sudo systemctl restart jujud-machine-0.service` on the controller
[18:20] as a workaround
[18:20] cory_fu: I wonder as well
[22:54] o/
[22:55] what's the constraint option for storage space? Is it disk?
[22:55] juju deploy lxd --constraints disk=400G ?
[22:59] I want to be able to deploy an lxd system to build web applications on, but it just gave me an error saying "cannot use --constraints on a subordinate machine"
[23:00] application*
[23:03] beisner: so a disk constraint means "I need a root disk of 400G from the underlying cloud" or "find me a machine in MAAS that has a 400G disk"
[23:03] sorry, bobeo ^
[23:04] bobeo: and normally you can't use constraints on something that's a subordinate of another charm, because it's the main charm that the machine is brought up to match. The subordinate is installed afterwards as a kind of "add-on" chunk of software
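
For the record on that last question: the constraint for disk size is root-disk rather than disk, and constraints go on machines or principal applications, not on subordinates (going by the error at [22:59], the lxd charm being deployed there is a subordinate). A minimal sketch, with the charm name and size purely illustrative:

    # the storage-size constraint is called root-disk (not disk); ask for a
    # machine with at least a 400G root disk
    juju add-machine --constraints root-disk=400G
    # or set it when deploying a principal (non-subordinate) charm
    juju deploy ubuntu --constraints root-disk=400G
    # subordinates ride along on their principal's machine, so they take no
    # constraints of their own
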