/srv/irclogs.ubuntu.com/2018/01/31/#juju.txt

[02:08] <gbc> anyone out there tonight? hoping to get help with a basic question ... been searching, but can't find a straight answer
[02:08] <gbc> in general, can you deploy multiple charms on the same server without containers?
[02:09] <gbc> more specifically, can you deploy an openstack all-in-one type system without containers, using juju charms?
[02:26] <pmatulis> gbc, yes, using the '--to' option of the 'juju deploy' command
[02:26] <pmatulis> https://jujucharms.com/docs/stable/charms-deploying#deploying-to-specific-machines-and-containers
[02:27] <gbc> pmatulis, yes, but are the openstack charms designed in such a way that they can co-reside on the same server?
[02:29] <gbc> 'cause when I do this, it seems like they're conflicting with each other
[02:29] <pmatulis> oh, that i do not know
[02:30] <pmatulis> the charms should be documented
[02:30] <gbc> yeah, you'd think so
[02:30] <gbc> the charms don't say that it *can't* be done, so I assumed that means that it can be done ...
[02:30] <pmatulis> gbc, have you tried conjure-up? i believe that's what it does. puts everything on one system. but it uses LXD
[02:31] <pmatulis> (i'm quite sure, haven't tried in a long time)
[02:32] <gbc> i'm trying to avoid using containers (reasons) ...
[02:32] <gbc> https://askubuntu.com/questions/506647/juju-and-openstack-provisioning
[02:32] <gbc> that link suggests that juju itself doesn't care if charms co-exist on the same server, but it's left up to charm developers ...
[02:33] <gbc> also says most charms "assume they own the whole machine" ...
[02:33] <gbc> and every example I see uses containers ...
[02:33] <gbc> so i'm putting 2 and 2 together and thinking that the openstack charms don't combine well on the same server ...
[02:34] <gbc> and that it's just not explicitly stated
[02:34] <gbc> anyway, thanks for the response
[02:34] <pmatulis> np, good luck
[02:42] <gbc> for the record (especially in case anyone comes across this while searching in the future) ...
[02:43] <gbc> here's another link that says "Using --to flag without containerization is a really bad idea...Basically you're layering a ton of services on top of each other that all expect to own the machine."
[02:43] <gbc> https://askubuntu.com/questions/459992/hook-failed-shared-db-relation-changed-when-using-openstack-in-the-same-syste
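[For later readers: the usual all-in-one compromise implied by the thread is to keep everything on one physical machine but isolate each charm in its own LXD container, so the charms stop fighting over ports, packages, and config files. A hedged sketch; the charm names are illustrative:]

```shell
# Each application lands in its own fresh LXD container on machine 0,
# so every charm still "owns" its (virtual) machine.
juju deploy mysql --to lxd:0
juju deploy keystone --to lxd:0
juju deploy rabbitmq-server --to lxd:0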
[07:46] <BlackDex> Hello there. I'm having some issues with LXD/LXC containers when using juju (with MAAS). The problem is that the containers (lxc on 14.04 and lxd on 16.04) aren't getting their DHCP address because UFW is enabled and blocking the DHCP request from passing to the MAAS node. It blocks these requests on the container host. This in turn prevents juju from configuring the container. Is this a known problem? I can't find a bug/issue about this.
[07:46] <BlackDex> If I disable UFW on the container host it seems to work, but UFW seems to get re-enabled sometimes even after I disabled it using `sudo ufw disable`
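[A hedged workaround sketch for the UFW/DHCP problem above: rather than disabling UFW entirely, permit traffic on the bridge the containers attach to. The bridge name `br-eth0` is an assumption; check `ip link` on the container host for the real one:]

```shell
# Allow bridged container traffic (including DHCP broadcasts) through UFW
# on the container host. br-eth0 is a placeholder bridge name.
sudo ufw allow in on br-eth0
sudo ufw allow out on br-eth0
sudo ufw default allow routed   # permit forwarded packets between interfaces
```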
=== frankban|afk is now known as frankban
[10:59] <Coompiax> Hi Guys. Please could someone help us? We are deploying Openstack using the Juju charm bundle "openstack-base". We have 2 major subnets: 10.0.0.0/24 for MAAS and 192.168.0.0/16 for our private cloud/local lan/public facing services. When deploying, we get "no obvious space for x/lxd/y, host has spaces "cloud", "maas"". So we need to deploy these charms to the correct spaces.
[11:01] <Coompiax> When we use the cli: juju deploy --to lxd:0 --constraints spaces=cloud, the lxd container gets created and all is well. But what about the bindings? Especially when we get to the openstack bundle, which has a bunch of applications, each with its own set of bindings.
[11:01] <Coompiax> So basically, we are struggling to get to grips with how to allocate applications/units/containers to the correct spaces.
[11:02] <Coompiax> Any help would be greatly appreciated. We can simplify this by just using haproxy as an example.
[11:17] <Coompiax> Hello?
[11:35] <Coompiax> Anyone?
[12:05] <Cooooompiax> .
[14:08] <petevg> Darn. Coompiax left. I was going to point them at https://jujucharms.com/docs/2.3/charms-bundles#binding-endpoints-of-applications-within-a-bundle
[14:09] <petevg> On the command line, it's the --bind param, formatted like --bind "somebindname=somespacename otherbindname=otherspacename"
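[Sketching petevg's answer with Coompiax's haproxy example; the space names ("cloud", "maas") come from the question, but the endpoint name is illustrative — check `juju spaces` and the charm's metadata for the real ones:]

```shell
# On the CLI, bindings ride along with deploy:
juju deploy haproxy --bind "website=cloud"

# In a bundle, the equivalent lives under each application's 'bindings' key:
cat > mybundle.yaml <<'EOF'
applications:
  haproxy:
    charm: cs:haproxy
    num_units: 1
    bindings:
      website: cloud   # bind the 'website' endpoint to the 'cloud' space
EOF
juju deploy ./mybundle.yaml
```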
=== agprado_ is now known as agprado
[16:52] <bobeo> df
[16:52] <bobeo> what's the easiest way for me to move juju instances and containers from one machine to another? I just realized I'm wasting a lot of resources on a system that I could better utilize
[16:55] <zeestrat> bobeo: easiest would be to add units to a new machine and remove units from the old, if the charms support that.
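[The add-then-remove migration zeestrat describes, as a hedged sketch; it only works for charms that support multiple units, and the application/machine names are illustrative:]

```shell
juju add-machine                 # provisions the new machine, e.g. machine 5
juju add-unit myapp --to 5       # scale the application onto the new machine
juju remove-unit myapp/0         # drop the unit on the old machine
juju remove-machine 0            # reclaim the old machine once it's empty
```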
=== frankban is now known as frankban|afk
[17:58] <el_tigro1> I set up an aws controller some time ago and have a couple of models running. Haven't touched anything in a while. Today I decided to make some changes but my aws credentials had changed. So I used `juju autoload-credentials` and then `juju update-credential aws default`. Everything went smoothly. To warm myself back up to juju, I just did a simple `juju add-machine` but it's been stuck in the "pending" state for about an hour. So I connected to the controller with `juju ssh -m admin/controller 0` to check out the logs in '/var/log/juju/machine-0.log'. I see the same error message every 3 seconds:
[17:58] <el_tigro1> ERROR juju.worker.dependency engine.go:546 "compute-provisioner" manifold worker returned unexpected error: failed to process updated machines: failed to get all instances from broker: AWS was not able to validate the provided access credentials (AuthFailure)
[18:01] <el_tigro1> First thing is that I'm sure the credentials are correct, but I figure I'll use `juju remove-machine --force <machine-number>` to stop the spamming. That returns success with "removing machine 2", however `juju status` still shows it in the pending state, and the controller logs still show the same error message every 3 seconds.
[18:05] <cory_fu> el_tigro1: I'm pretty sure that loading new credentials only applies to new controllers or models. The existing controller is going to continue to try to use the old credentials. I do not know if there's a way to tell it to use new ones
[18:05] <el_tigro1> first question is how can I force juju to cancel the operation. BTW I'm running juju version 2.3.2-xenial-amd64
[18:05] <el_tigro1> cory_fu: I thought that's what `juju update-credential` is for
[18:06] <cory_fu> el_tigro1: You seem to be right. I haven't used that before.
[18:07] <cory_fu> el_tigro1: As for cancelling, you could try just doing a `juju remove-machine --force <num>`
[18:07] <cory_fu> Or possibly --keep-instance if that fails
[18:07] <el_tigro1> cory_fu: Thanks but I tried that already. See my 4th paragraph :)
[18:08] <cory_fu> el_tigro1: Yeah, sorry I missed that. What about --keep-instance?
[18:09] <el_tigro1> cory_fu: just tried it. No luck
[18:10] <el_tigro1> Still getting the same error message every 3s!
[18:10] <cory_fu> balloons: Any suggestions? ^
[18:11] <cory_fu> el_tigro1: Only other thing I can think of would be to try restarting the jujud process on the controller, but that seems a bit heavy-handed
[18:11] <cory_fu> And if it didn't get the updated credential in the right place, it probably wouldn't help
[18:11] <el_tigro1> cory_fu: Thanks for the suggestion. I guess I could give it a shot
[18:16] <el_tigro1> cory_fu: I ran `sudo systemctl restart jujud-machine-0.service` and the pending machines have vanished from the model. Thanks!!
[18:17] <cory_fu> Oh, nice. I wonder if that will help it pick up the new credential as well?
[18:17] <el_tigro1> About to try it out :D
[18:17] <cory_fu> el_tigro1: It would probably be worth filing a bug for that.
[18:18] <el_tigro1> cory_fu: success!
[18:18] <cory_fu> That's great. I wonder why it got into a bad state
[18:19] <cory_fu> Maybe it was a backlog of failed operations that was blocking resolution?
[18:20] <el_tigro1> so I guess the sequence of commands to update credentials on the controller requires: `juju autoload-credentials` and `juju update-credential aws default` on the client, and then `sudo systemctl restart jujud-machine-0.service` on the controller
[18:20] <el_tigro1> workaround
[18:20] <el_tigro1> cory_fu: I wonder as well
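[The full workaround el_tigro1 pieced together, collected in one place for future searchers. Hedged: it was only verified in this thread on juju 2.3.2 with an AWS controller:]

```shell
# On the client: load the new AWS credentials and push them to the controller.
juju autoload-credentials
juju update-credential aws default

# On the controller (machine 0): restart jujud so it picks up the change.
juju ssh -m admin/controller 0 'sudo systemctl restart jujud-machine-0.service'
```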
[22:54] <bobeo> o/
[22:55] <bobeo> what's the constraint option for storage space? Is it disk?
[22:55] <bobeo> juju deploy lxd --constraints disk=400G ?
[22:59] <bobeo> I want to be able to deploy an lxd system to build web applications on, but it just gave me an error, says "cannot use --constraints on a subordinate machine"
[23:00] <bobeo> application*
[23:03] <rick_h> beisner: so the constraint on disk is that you need a root disk of 400G from the underlying cloud, or "find me a machine in maas that has a disk of 400G"
[23:03] <rick_h> sorry, bobeo ^
[23:04] <rick_h> bobeo: and normally you can't use constraints on something that's a subordinate of another charm, because it's the main charm the machine comes up to match. The subordinate is installed afterwards as a kind of "add-on" chunk of software
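[Putting rick_h's answer into commands, as a hedged sketch: the disk-size constraint is spelled `root-disk`, and it goes on a principal application or a machine, never a subordinate. The `ubuntu` charm here is just an illustrative principal workload:]

```shell
# Ask the cloud/MAAS for a machine with a 400G root disk, then deploy a
# principal charm with that constraint.
juju deploy ubuntu --constraints root-disk=400G

# Or provision the machine first and place workloads on it afterwards:
juju add-machine --constraints root-disk=400G
```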

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!