=== mwhudson_ is now known as mwhudson | ||
=== frankban|afk is now known as frankban | ||
externalreality | test | 09:12 |
jam | tick | 12:32 |
rick_h | Jam tock | 12:34 |
jam | rick_h: sorry I'm late, just finishing up the last meeting, brt | 12:34 |
=== Guest71369 is now known as skay | ||
=== freyes__ is now known as freyes | ||
jac_cplane | We have a charm for xenial that relies on /etc/network/interfaces, but there was a recent update to curtin that moves /etc/network/interfaces to /etc/network/interfaces.d. I don't think this is correct, but I'm not sure why this change was made. Can someone help? | 14:23 |
jac_cplane | bug opened https://bugs.launchpad.net/maas/+bug/1732202 on curtin 532 | 14:54 |
mup | Bug #1732202: Xenial Deploy fails when using /etc/network/interface <MAAS:New> <https://launchpad.net/bugs/1732202> | 14:54 |
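For context on the interfaces question: on a stock xenial image the real interface stanzas typically live in a sourced drop-in rather than in the top-level file, so a charm that parses /etc/network/interfaces directly will only see the source line. A rough sketch of the layout (the drop-in file name is illustrative, not taken from the bug):

```
# /etc/network/interfaces often ends up containing just:
auto lo
iface lo inet loopback

source /etc/network/interfaces.d/*.cfg

# ...with the actual stanzas in a drop-in like
# /etc/network/interfaces.d/50-cloud-init.cfg:
auto eth0
iface eth0 inet dhcp
```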
=== frankban is now known as frankban|afk | ||
=== jac_ is now known as jac_cplane | ||
[Kid] | can you not create models after logging into a controller? | 19:13 |
[Kid] | also, is it possible to login to a controller as admin on any other server than the one that juju was bootstrapped from? | 19:13 |
[Kid] | ahh i see what the problem is. the cloud providers are only stored on the juju server that the controllers were bootstrapped from | 19:15 |
[Kid] | i.e., you can't login to a controller from a random juju install and see the cloud providers for that controller | 19:15 |
[Kid] | so far, it seems like for most changes beyond removing or adding a worker node to a kubernetes cluster, i have to remove the model and re-add it | 19:29 |
[Kid] | so basically a full re-deploy | 19:29 |
[Kid] | does that sound right? | 19:29 |
rick_h | [Kid]: so you can add new users with admin and superuser permissions to access from other locations | 19:38 |
rick_h | [Kid]: check out https://jujucharms.com/docs/stable/tut-users | 19:39 |
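Roughly, the flow that tutorial walks through looks like this (the user name and token are placeholders):

```
# on the client that bootstrapped the controller
juju add-user bob            # prints a 'juju register <token>' command
juju grant bob superuser     # controller-wide admin access

# on any other machine with juju installed
juju register <token-from-add-user>
```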
rick_h | [Kid]: as far as changes to the cluster needing a redeploy, I'd hope not. I think the team would be curious what changes those are, to see what's not supported in the charms and such that wrap the operations | 19:39 |
ryebot | [Kid]: gimme a second to catch up | 19:42 |
ryebot | [Kid]: What changes do you need to make? | 19:42 |
[Kid] | sorry, i am here | 20:11 |
[Kid] | i just tried a deploy and MAAS didn't finish deploying 2 of the nodes, and i have machines in a down state | 20:12 |
[Kid] | i released the machines in MAAS, but how do i get juju to try and redeploy to those machines? | 20:12 |
[Kid] | it already thinks it allocated them | 20:12 |
[Kid] | the changes that i was trying to make were like removing flannel and adding calico. it worked, but then it rebuilt the master nodes, my ssh keys changed, and i couldn't get juju scp to work. | 20:14 |
kwmonroe | [Kid]: watcha mean by 'rebuilt the master nodes'? you mean like the flannel cni relation was removed and calico joined? | 20:38 |
kwmonroe | also [Kid], i'm at a loss when you say you lost 'juju scp' capabilities. i can't think of a reason why the juju keys used to ssh to a unit would change, regardless of how that unit was changed post deployment (unless, of course, you changed ~/.ssh/authorized_keys on the juju unit out-of-band) | 20:41 |
kwmonroe | rick_h: are there circumstances that cause ~/.local/share/juju/ssh/juju_id_rsa* to change? | 20:43 |
rick_h | kwmonroe: ...not that I can think of | 20:44 |
kwmonroe | [Kid]: the ssh key that juju uses for things like juju scp is stored in ~/.local/share/juju/ssh/juju_id_rsa, and the .pub key in there is what is normally on all deployed units. rick_h assures me this is bulletproof. ;) | 20:44 |
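For reference, a quick way to check that key and exercise juju scp; the unit name is illustrative (a typical kubernetes-master unit), and add-ssh-key is one option if a rebuilt unit really did lose the key out-of-band:

```
# the client key juju uses for juju ssh / juju scp
ls ~/.local/share/juju/ssh/juju_id_rsa*

# typical juju scp usage against a kubernetes master
juju scp kubernetes-master/0:config ~/.kube/config

# push your own public key to every unit in the model if needed
juju add-ssh-key "$(cat ~/.ssh/id_rsa.pub)"
```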
cory_fu | What does this error mean when trying to bootstrap lxd / localhost? ERROR Get https://10.130.48.1:8443/1.0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "root@murdoch") | 21:00 |
cory_fu | stokachu: Do you recognize that, by chance? ^ | 21:05 |
stokachu | cory_fu: yea i think it's the certificate in .local/share/juju/bootstrap-config | 21:05 |
stokachu | or one of those files | 21:05 |
stokachu | maybe credentials.yaml | 21:05 |
cory_fu | stokachu: Ah. Just torch it? | 21:06 |
stokachu | yea | 21:06 |
cory_fu | stokachu: It was credentials.yaml. Thanks! | 21:08 |
stokachu | cool, np | 21:08 |
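For anyone hitting the same x509 error, a gentler cleanup than torching the whole file might be the following (the cloud and credential names assume a default localhost/LXD setup; check the listing first):

```
# back up first, then inspect what's stored
cp ~/.local/share/juju/credentials.yaml{,.bak}
juju credentials --format yaml

# drop the stale localhost credential and bootstrap again
juju remove-credential localhost localhost
juju bootstrap localhost
```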
[Kid] | kw and rick, thanks. i will continue to look at it | 21:11 |
[Kid] | kw and rick, is there a way to have juju retry the deployment of a machine? | 21:12 |
[Kid] | i have two machines in a down state and the message is "failed deployment". | 21:12 |
[Kid] | i manually fixed MAAS, so i wanted it to try on those same machines again | 21:13 |
rick_h | [Kid]: so there's juju retry-provisioning, if the machine didn't come up | 21:14 |
rick_h | [Kid]: or if you want to just retry as something failed and you cleaned it up just juju add-unit xxx | 21:15 |
rick_h | and have juju pull another up for use | 21:15 |
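Concretely, the two options rick_h describes look something like this (machine IDs and the application name are placeholders):

```
# retry machines whose provisioning failed (IDs from juju status)
juju retry-provisioning 1 2

# or clean up the failed machine and pull up a fresh unit
juju remove-machine 1 --force
juju add-unit kubernetes-worker
```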
[Kid] | ahh yes, i think juju retry-provisioning is what i need | 21:19 |
rick_h | cool, hopefully that helps some | 21:19 |
[Kid] | rick | 21:26 |
* rick_h ducks | 21:26 | |
[Kid] | it accepts that command, but doesn't do anything | 21:27 |
[Kid] | haha | 21:27 |
[Kid] | just stuck in a down status | 21:27 |
[Kid] | guess i might have to remove and re-deploy | 21:27 |
rick_h | [Kid]: try with --debug and see if anything is fishy? Maybe trace debug-log and see if anything comes up. | 21:27 |
[Kid] | i did | 21:27 |
rick_h | [Kid]: just remove and add-unit ? | 21:27 |
[Kid] | juju debug-log didn't have anything | 21:27 |
[Kid] | i will try the --debug on the command | 21:27 |
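For example, the debug run and log trace might look like this (the machine ID is a placeholder):

```
# rerun the retry with client-side debug output
juju retry-provisioning 1 --debug

# replay model logs, filtered to the stuck machine
juju debug-log --replay --include machine-1
```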
[Kid] | well, destroying the model again | 21:32 |
[Kid] | ..... | 21:32 |
[Kid] | redeploying | 21:32 |
[Kid] | i have to wonder if i am just a special case..... | 21:33 |
rick_h | [Kid]: no I mean something's up. Do you know it failed to come up? | 21:34 |
zeestrat | Hey [Kid], if you fixed the maas nodes so that's not the issue anymore and the `retry-provisioning` didn't work then you should be able to do `juju add-unit <name-of-service-that-failed-to-deploy>` so you don't have to destroy the whole model as rick_h mentioned above. | 21:46 |
[Kid] | rick, yes, it didn't come up | 21:51 |
[Kid] | i waited like 30 minutes | 21:51 |
[Kid] | just in a down state and had "failed deployment" in the message field | 21:51 |
[Kid] | looks like the re-deploy worked | 21:51 |
[Kid] | i just hate that i have to keep re-deploying for simple changes | 21:52 |
rick_h | [Kid]: well this isn't a redeploy as the first one didn't succeed right? | 21:53 |
rick_h | I mean it's not a change failing, it's that maas didn't get a machine to juju? | 21:53 |
rick_h | I guess what I'm wondering is what went wrong between maas/juju: did curtin get things started, did cloud-init go ok, did the juju agents get installed and set up? | 21:53 |
zeestrat | MAAS has logs for its deployments on the node page and should say why it ended up as a failed deployment. | 22:01 |
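If a MAAS 2.x CLI session is handy, something like this should surface the same information as the node page (profile, hostname, and system-id are placeholders):

```
# list a node's event log, including failed-deployment events
maas $PROFILE events query hostname=<node-hostname>

# full machine record, including status details
maas $PROFILE machine read <system-id>
```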