05:01 <thumper> for anyone
05:01 <thumper> just forward porting
05:06 <hpidcock> thumper: looking
05:07 <hpidcock> thumper: any merge conflicts?
05:14 <thumper> hpidcock: I resolved them
05:14 <thumper> had to change the imports to be v2
05:14 <thumper> for workertest
05:14 <thumper> but that was it
05:21 <thumper> hpidcock: thanks
07:53 <manadart_> achilleasa: I think I addressed your comments.
09:06 <stickupkid> manadart_, have you seen this? https://bugs.launchpad.net/juju/+bug/1855013
09:06 <mup> Bug #1855013: upgrade-series hangs, leaves lock behind <seg> <upgradeseries> <juju:Triaged> <https://launchpad.net/bugs/1855013>
09:35 <stickupkid> manadart_, achilleasa: can you do a CR on https://github.com/juju/bundlechanges/pull/64 - it's a forward port of https://github.com/juju/bundlechanges/pull/63
09:52 <achilleasa> stickupkid: done
10:11 <stickupkid> ah, we broke go mod in the 2.8 branch, win win - trying to resolve it now
=== CajuM[m] is now known as mcaju
10:15 <stickupkid> merging forward (example: from 2.7 -> 2.8) will most likely break go mod
10:21 <achilleasa> stickupkid: why so?
10:25 <stickupkid> achilleasa, ho? got an issue
11:03 <manadart_> stickupkid: https://github.com/juju/juju/pull/11758
11:06 <stickupkid> manadart_, ho?
11:06 <stickupkid> achilleasa, https://github.com/juju/juju/pull/11759
12:52 <Eryn_1983_FL> how do I deploy another nova-cloud-controller/0?
12:52 <Eryn_1983_FL> the one I got on 0 is broken with hook failed install
13:01 <hml> Eryn_1983_FL: juju add-unit nova-cloud-controller, if you haven't removed the application. although depending on how the install hook failed, you might get the same results
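hml's suggestion, as a minimal sketch (the `--to 3` placement is illustrative; by default Juju allocates a new machine):

```shell
# Add another unit of the application; Juju provisions a new machine by default
juju add-unit nova-cloud-controller

# Or place the new unit on an existing machine, e.g. machine 3 (illustrative)
juju add-unit nova-cloud-controller --to 3

# Follow the new unit's progress
juju status nova-cloud-controller
```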
13:15 <Eryn_1983_FL> something is happening..
13:15 * Eryn_1983_FL nervous giggle
13:16 <Eryn_1983_FL> nova-cloud-controller/1      waiting   allocating  3                        waiting for machine
13:16 <Eryn_1983_FL> so it's putting it on a different machine, not even in the cluster currently
13:22 <Eryn_1983_FL> great, now 0/1 are just down
13:23 <Eryn_1983_FL> 1 is back up now, 0 is still down - is it normal for machines to just go down for no reason?
13:24 <mcaju> hi, I've made juju 2.8.0 available on Archlinux's AUR. Now I just have to spread the word, somehow...
13:26 <Eryn_1983_FL> you were right, hml
13:26 <Eryn_1983_FL> hook failed install
13:26 <Eryn_1983_FL> I must pray to the wrong linux gods for it to work on ubuntu/juju
13:28 <Eryn_1983_FL> 2020-06-25 13:25:17 ERROR juju.worker.uniter.operation runhook.go:136 hook "install" (via explicit, bespoke hook script) failed: exit status 1
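To dig past a bare `exit status 1` from an install hook, the standard Juju debugging commands usually help (a sketch; the unit name is taken from this log):

```shell
# Stream the logs for the failing unit
juju debug-log --include nova-cloud-controller/1

# Re-run the failed hook interactively inside a tmux session on the unit
juju debug-hooks nova-cloud-controller/1 install

# After fixing the underlying issue, retry the failed hook
juju resolved nova-cloud-controller/1
```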
13:45 <gsamfira> petevg: proposed agent stream worked great. Thanks a lot! :)
13:45 <petevg> gsamfira: glad to hear it! You're welcome :-)
13:46 <manadart_> mcaju: Nice. The thing to do would be to mention it at https://discourse.juju.is
13:55 <mcaju> manadart_: Ok
13:56 <manadart_> stickupkid: Forward port: https://github.com/juju/juju/pull/11760
14:29 <stickupkid> why...? cannot update github.com/juju/charmrepo/v5 from local cache: cannot hash "/home/simon/go/src/github.com/juju/charmrepo": open /home/simon/go/src/github.com/juju/charmrepo/internal/test-charm-repo/series/format2/hooks/symlink: no such file or directory
14:51 <stickupkid> manadart_, hml: CR please https://github.com/juju/charm/pull/310
14:54 <hml> stickupkid: looking
14:54 <hml> stickupkid: there are no tests for error paths?
14:56 <stickupkid> nope, just that there is an error
14:56 <hml> stickupkid: approved. ty for that change
=== sfeole_away is now known as sfeole
=== sfeole is now known as sfeole_away
=== sfeole_away is now known as sfeole
17:06 <stickupkid> achilleasa, hml: CR https://github.com/juju/charmrepo/pull/162
17:07 <stickupkid> or even petevg :)
19:29 <flxfoo> stickupkid: sorry for yesterday, could not get your answer (if any)
19:30 <flxfoo> any tips on floating IPs with AWS? I am trying to set up an HA cluster with percona-cluster; of course "Resource: res_mysql_90aa447_vip not running" - any idea?
=== sfeole is now known as sfeole_away
19:57 <Eryn_1983_FL> how do I get an old version of juju?
19:58 <petevg> Eryn_1983_FL: you can install anything back to Juju 2.3 via the snap. You just need to do "snap refresh juju --channel <version>/stable"
19:59 <petevg> You can see available versions by doing "snap info juju"
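petevg's snap commands, sketched as a shell session (the 2.7 channel is illustrative; pick a real channel from the `snap info` output):

```shell
# List the channels/versions the juju snap publishes
snap info juju

# Switch the juju client to a specific track, e.g. 2.7 (illustrative)
sudo snap refresh juju --channel=2.7/stable

# Confirm the client version after the refresh
juju version
```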
19:59 <Eryn_1983_FL> petevg you are so awesome
20:00 <Eryn_1983_FL> I have 2.8.0 installed..
20:00 <Eryn_1983_FL> is that bad?
20:00 <Eryn_1983_FL> how does that affect how my charms work?
20:01 <petevg> Eryn_1983_FL: 2.8 is the latest release, and it should work just fine with existing charms. There are some open issues, which will be fixed in a 2.8.1 release.
20:01 <petevg> That's just the client version, btw. Your controller won't change unless you specifically upgrade your controller.
20:01 <petevg> And newer clients can talk to older controllers.
20:03 <Eryn_1983_FL> sigh, I can't even get it to remove machines
=== sfeole_away is now known as sfeole
20:07 <petevg> What command are you using to remove the machines? And did you ever determine a reason for the install hook failing?
20:07 <Eryn_1983_FL> no, I didn't figure it out; I looked at the logs and I don't see anything but exit 1
20:07 <Eryn_1983_FL> juju remove-machine 3
20:08 <Eryn_1983_FL> the fact that my servers keep going down and up, and the ovn keeps picking a new leader, makes me wonder if something is wrong with my hw or the network..
20:08 <Eryn_1983_FL> 3        stopped   above-ram            focal   default  Deployed
20:10 <petevg> I'd definitely guess that there were hardware or network issues. I'd take a look at disk, ram and cpu utilization on the underlying host machines.
20:13 <Eryn_1983_FL> mmm ok, it's not down, it's just that juju status says it went down.
20:14 <petevg> The Juju agent is just a process that runs alongside your workload, reporting back to the controller with status and changes.
20:41 <Eryn_1983_FL> if I remove node 0, will the services be started again on a different machine?
20:51 <Eryn_1983_FL> does it matter if I use bionic or focal?
21:01 <petevg> Eryn_1983_FL: bionic or focal should just work. Juju doesn't automatically reallocate services, so if you remove machine 0, you'll need to add-unit to re-add a unit for each application that was deployed to that machine.
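petevg's advice above, sketched as commands (application names are the ones mentioned in this log; adjust to whatever was actually on machine 0):

```shell
# Remove the broken machine; add --force if its agent is unresponsive
juju remove-machine 0

# Re-add a unit for each application that lived on machine 0
juju add-unit nova-cloud-controller
juju add-unit vault

# Watch the replacement units come up
juju status
```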
21:02 <Eryn_1983_FL> makes sense
21:02 <Eryn_1983_FL> I'll have to redeploy the controller and the vault,
21:02 <Eryn_1983_FL> both were on that 0 machine,
21:08 <Eryn_1983_FL>   => There are 20 zombie processes.
=== ulidtko|k is now known as ulidtko
21:39 <petevg> Eryn_1983_FL: The controller should live in its own model, separate from any charms that you've deployed. Were you deploying charms into the controller model?
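A quick way to check whether anything besides Juju's own machinery was deployed into the controller model (a sketch):

```shell
# List the models on the current controller
juju models

# Show what is actually running in the controller model
juju status -m controller
```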
=== sfeole is now known as sfeole_away

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!