thumper | https://github.com/juju/juju/pull/11755 | 05:01 |
thumper | for anyone | 05:01 |
thumper | just forward porting | 05:01 |
hpidcock | thumper: looking | 05:06 |
hpidcock | thumper: any merge conflicts? | 05:07 |
thumper | hpidcock: I resolved them | 05:14 |
thumper | had to change the imports to be v2 | 05:14 |
thumper | for workertest | 05:14 |
thumper | but that was it | 05:14 |
thumper | hpidcock: thanks | 05:21 |
manadart_ | achilleasa: I think I addressed your comments. | 07:53 |
stickupkid | manadart_, you seen this https://bugs.launchpad.net/juju/+bug/1855013 | 09:06 |
mup | Bug #1855013: upgrade-series hangs, leaves lock behind <seg> <upgradeseries> <juju:Triaged> <https://launchpad.net/bugs/1855013> | 09:06 |
stickupkid | manadart_, achilleasa can you do a CR https://github.com/juju/bundlechanges/pull/64 - it's a forward port of https://github.com/juju/bundlechanges/pull/63 | 09:35 |
achilleasa | stickupkid: done | 09:52 |
stickupkid | ah we broke go mod in 2.8 branch, win win - trying to resolve it now | 10:11 |
=== CajuM[m] is now known as mcaju | ||
stickupkid | merging forward (example: from 2.7 -> 2.8) will most likely break go mod | 10:15 |
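The go mod breakage stickupkid mentions can be sketched as a hedged recovery recipe for a forward-port merge (branch names taken from the conversation; the right resolution always depends on the actual conflict, so treat this as a sketch only):

```shell
# Forward-porting 2.7 -> 2.8 often conflicts in go.mod/go.sum.
# One approach: keep the target branch's dependency set and let
# the Go tooling recompute everything else.
git checkout 2.8
git merge 2.7
git checkout --ours go.mod go.sum   # during a merge, "ours" = 2.8
go mod tidy                          # re-resolve deps against merged code
git add go.mod go.sum
```

`go mod tidy` may still bump module versions (e.g. a `/v2` import path change like the workertest one mentioned earlier), so the build should be verified before pushing.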
achilleasa | stickupkid: why so? | 10:21 |
stickupkid | achilleasa, ho? got an issue | 10:25 |
manadart_ | stickupkid: https://github.com/juju/juju/pull/11758 | 11:03 |
stickupkid | manadart_, ho? | 11:06 |
stickupkid | daily | 11:06 |
stickupkid | achilleasa, https://github.com/juju/juju/pull/11759 | 11:06 |
Eryn_1983_FL | how do i deploy another nova-cloud-controller/0? | 12:52 |
Eryn_1983_FL | the one i got on 0 is broken with hook failed install | 12:52 |
hml | Eryn_1983_FL: juju add-unit nova-cloud-controller, if you haven’t removed the application. although depending on how the install hook failed, you might get the same results | 13:01 |
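hml's suggestion can be sketched as a short command sequence (application name taken from the conversation; this is a sketch, not a guaranteed fix, since the same install-hook failure may recur):

```shell
# Add a replacement unit to the still-deployed application:
juju add-unit nova-cloud-controller

# Watch the new unit allocate and run its install hook:
juju status nova-cloud-controller
# A repeat failure will show as "hook failed: install" in the unit line.
```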
Eryn_1983_FL | ok | 13:15 |
Eryn_1983_FL | something is happening.. | 13:15 |
* Eryn_1983_FL nervous giggle | 13:15 | |
Eryn_1983_FL | nova-cloud-controller/1 waiting allocating 3 10.3.251.44 waiting for machine | 13:16 |
Eryn_1983_FL | so its putting it on a diff machine not even in the cluster currently | 13:16 |
Eryn_1983_FL | great now 0/1 are just down | 13:22 |
Eryn_1983_FL | wtf. | 13:22 |
Eryn_1983_FL | 1 is back up now, 0 is still down. is it normal for machines to just go down for no reason? | 13:23 |
mcaju | hi, I've made available on Archlinux's AUR juju 2.8.0 . Now I just have to spread the word, somehow... | 13:24 |
Eryn_1983_FL | you were right hml | 13:26 |
Eryn_1983_FL | hook failed install | 13:26 |
Eryn_1983_FL | wtf. | 13:26 |
Eryn_1983_FL | i must pray to the wrong linux gods, for it to work on ubuntu/juju | 13:26 |
Eryn_1983_FL | 2020-06-25 13:25:17 ERROR juju.worker.uniter.operation runhook.go:136 hook "install" (via explicit, bespoke hook script) failed: exit status 1 | 13:28 |
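The error above ("exit status 1" with no detail) is typically investigated like this (unit names taken from the conversation; a hedged sketch of the usual juju debugging workflow):

```shell
# Replay the unit's logs to find the install hook's actual output:
juju debug-log --replay --include unit-nova-cloud-controller-1

# Or attach an interactive debug session and re-run the hook by hand
# when it next fires, to see the failure live:
juju debug-hooks nova-cloud-controller/1

# Once the underlying cause is fixed, retry the failed hook:
juju resolved nova-cloud-controller/1
```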
gsamfira | petevg: proposed agent stream worked great. Thanks a lot! :) | 13:45 |
petevg | gsamfira: glad to hear it! You're welcome :-) | 13:45 |
manadart_ | mcaju: Nice. The thing to do would be to mention it at https://discourse.juju.is | 13:46 |
mcaju | manadart_: Ok | 13:55 |
manadart_ | stickupkid: Forward port: https://github.com/juju/juju/pull/11760 | 13:56 |
stickupkid | why...? cannot update github.com/juju/charmrepo/v5 from local cache: cannot hash "/home/simon/go/src/github.com/juju/charmrepo": open /home/simon/go/src/github.com/juju/charmrepo/internal/test-charm-repo/series/format2/hooks/symlink: no such file or directory | 14:29 |
stickupkid | hmmm | 14:29 |
stickupkid | manadart_, hml, CR please https://github.com/juju/charm/pull/310 | 14:51 |
hml | stickupkid: looking, | 14:54 |
hml | stickupkid: there are no tests for error paths? | 14:54 |
stickupkid | nope, just that there is an error | 14:56 |
hml | stickupkid: approved. ty for that change | 14:56 |
=== sfeole_away is now known as sfeole | ||
=== sfeole is now known as sfeole_away | ||
=== sfeole_away is now known as sfeole | ||
stickupkid | achilleasa, hml, CR https://github.com/juju/charmrepo/pull/162 | 17:06 |
stickupkid | or even petevg :) | 17:07 |
flxfoo | stickupkid: sorry for yesterday, could not get your answer (if any) | 19:29 |
flxfoo | any tips on floating ips with aws? I am trying to setup an ha cluster with percona cluster, of course "Resource: res_mysql_90aa447_vip not running" any idea? | 19:30 |
=== sfeole is now known as sfeole_away | ||
Eryn_1983_FL | so | 19:57 |
Eryn_1983_FL | how do i get an old version of juju | 19:57 |
petevg | Eryn_1983_FL: you can install anything back to Juju 2.3 via the snap. You just need to do "snap refresh juju --channel <version>/stable" | 19:58 |
petevg | Eryn_1983_FL: you can install anything back to Juju 2.3 via the snap. You just need to do "snap refresh juju --channel <version>/stable" | 19:59 |
petevg | (Whoops. Sorry for the dupe.) | 19:59 |
Eryn_1983_FL | ok | 19:59 |
petevg | You can see available versions by doing "snap info juju" | 19:59 |
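petevg's two tips combine into this short sequence (the 2.7 channel here is an illustrative example; any track back to 2.3 should work the same way):

```shell
# List the channels (and therefore versions) the juju snap publishes:
snap info juju

# Switch the client to an older track, e.g. 2.7:
sudo snap refresh juju --channel=2.7/stable

# Confirm which client version is now in use:
juju version
```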
Eryn_1983_FL | petevg you are so awesome | 19:59 |
Eryn_1983_FL | i have 2.8.0 installed.. | 20:00 |
Eryn_1983_FL | is that bad? | 20:00 |
Eryn_1983_FL | how does that affect how my charms work? | 20:00 |
petevg | Eryn_1983_FL: 2.8 is the latest release, and it should work just fine with existing charms. There are some open issues, which will be fixed in a 2.8.1 release. | 20:01 |
petevg | That's just the client version, btw. Your controller won't change unless you specifically upgrade your controller. | 20:01 |
petevg | And newer clients can talk to older controllers. | 20:01 |
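The client/controller distinction petevg describes can be checked directly (a sketch; output columns vary slightly between juju releases):

```shell
# Version of the locally installed juju client binary:
juju version

# Registered controllers, including each controller's agent version:
juju controllers

# A controller upgrade only happens when explicitly requested, e.g.:
#   juju upgrade-controller
```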
Eryn_1983_FL | :( | 20:02 |
Eryn_1983_FL | sigh i can't even get it to remove machines | 20:03 |
=== sfeole_away is now known as sfeole | ||
petevg | What command are you using to remove the machines? And did you ever determine a reason for the install hook failing? | 20:07 |
Eryn_1983_FL | no i didnt figure it out, i looked at the logs and i dont see anything but exit 1 | 20:07 |
Eryn_1983_FL | juju remove-machine 3 | 20:07 |
Eryn_1983_FL | the fact that my servers keeps going down and up and the ovn keeps picking a new leader, makes me wonder if something is wrong with my hw or the network.. | 20:08 |
Eryn_1983_FL | 3 stopped 10.3.251.44 above-ram focal default Deployed | 20:08 |
petevg | I'd definitely guess that there were hardware or network issues. I'd take a look at disk, ram and cpu utilization on the underlying host machines. | 20:10 |
Eryn_1983_FL | mmm ok its not actually down, juju status just says it went down. | 20:13 |
petevg | The Juju agent is just a process that runs alongside your workload, reporting back to the controller with status and changes. | 20:14 |
Eryn_1983_FL | ok | 20:14 |
Eryn_1983_FL | question | 20:40 |
Eryn_1983_FL | if i remove node 0 will the services be started again on a different machine | 20:41 |
Eryn_1983_FL | does it matter if i use bionic or focal? | 20:51 |
petevg | Eryn_1983_FL: bionic or focal should just work. Juju doesn't automatically reallocate services, so if you remove machine 0, you'll need to add-unit to re-add a unit for each application that was deployed to that machine. | 21:01 |
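petevg's recovery advice sketched as commands (application names taken from later in the conversation, where vault and a controller were on machine 0; a hedged sketch only):

```shell
# Remove the dead machine; --force is needed if its agent is unreachable:
juju remove-machine 0 --force

# Re-add a unit for each application that was deployed there:
juju add-unit vault
juju add-unit nova-cloud-controller
```

Note that a real controller machine cannot be recovered this way; as discussed below, the controller lives in its own model and is not redeployed with add-unit.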
Eryn_1983_FL | ok | 21:02 |
Eryn_1983_FL | makes sense | 21:02 |
Eryn_1983_FL | ill have to redeploy the controller and the vault, | 21:02 |
Eryn_1983_FL | both were on that 0 machine, | 21:02 |
Eryn_1983_FL | => There are 20 zombie processes. | 21:08 |
Eryn_1983_FL | fml | 21:08 |
=== ulidtko|k is now known as ulidtko | ||
petevg | Eryn_1983_FL: The controller should live in its own model, separate from any charms that you've deployed. Were you deploying charms into the controller model? | 21:39 |
=== sfeole is now known as sfeole_away |