/srv/irclogs.ubuntu.com/2017/04/26/#juju.txt

anrahany guess when 2.2 Beta 3 will be out?06:29
anastasiamacanrah: is there something specific u r after in 2.2-b3?06:33
anrahanastasiamac: Yes: https://bugs.launchpad.net/juju/+bug/165414406:36
mupBug #1654144: AllocatePublicIP fails on some Openstacks <bootstrap> <oil> <openstack-provider> <juju:Fix Committed by hmlanigan> <juju 2.1:Won't Fix> <https://launchpad.net/bugs/1654144>06:36
anrahThere is couple fixes in 2.2 I would like to test on our environment but 2.2 does not work with FloatingIp's on our OpenStack06:37
anastasiamacanrah: there is a hope that 2.2-b3 may be available this week. I cannot give u more concrete dates at this stage but it's very close :)06:47
anrahGreat, thanks :)06:51
=== abc is now known as Guest52313
Guest52313juju connection timed out error07:10
sai_I have configured juju with local and openstack. I followed the instructions here: Bootstrapping JuJu on top of an OpenStack private cloud for configuring juju on a private openstack deployment.  Juju spawns a new VM in openstack and it is visible in the horizon dashboard but the problem occurs when juju tries to ssh into the newly spawned VM for installing juju agent. This is what I get in the terminal:07:23
sai_use-floating-ip specifies whether a floating IP address is     # required to give the nodes a public IP address. Some     # installations assign public IP addresses by default without     # requiring a floating IP address.     #     use-floating-ip: true      # use-default-secgroup specifies whether new machine instances     # should have the "default" Openstack security group assigned.     #     use-default-secgroup: true      # network 07:24
anrahsai_: can you use some paste-service to paste that output?07:27
anrahist there an error somewhere on the output and how you are running the bootstrap command?07:28
sai_2017-04-19 09:46:26 INFO juju.environs.instances image.go:91 find instance - using image with id: eca6b790-18e5-42c8-b239-1f33f0400187 2017-04-19 09:46:26 DEBUG juju.cloudconfig.instancecfg instancecfg.go:525 Setting numa ctl preference to false 2017-04-19 09:46:27 DEBUG juju.service discovery.go:65 discovered init system "systemd" from series "xenial" 2017-04-19 09:46:27 DEBUG juju.provider.openstack provider.go:1034 openstack user data;07:29
sai_2017-04-19 09:46:27 DEBUG juju.provider.openstack provider.go:1034 openstack user data; 1031 bytes 2017-04-19 09:46:27 DEBUG juju.provider.openstack provider.go:1049 allocating public IP address for openstack node 2017-04-19 09:46:37 DEBUG juju.provider.openstack provider.go:913 found unassigned public ip: 172.16.50.108 2017-04-19 09:46:37 INFO juju.provider.openstack provider.go:1054 allocated public IP 172.16.50.108 2017-04-19 09:46:44 07:29
sai_2017-04-19 09:47:15 DEBUG juju.provider.openstack provider.go:459 instance c19d0ff7-8f43-41f9-a268-9600df72168a has floating IP address: 172.16.50.108 2017-04-19 09:47:16 DEBUG juju.provider.common bootstrap.go:257 connection attempt for 172.16.50.108 failed: ssh: connect to host 172.16.50.108 port 22: Connection refused 2017-04-19 09:47:21 DEBUG juju.provider.common bootstrap.go:257 connection attempt for 172.16.50.108 failed: ssh: conne07:30
anrahsai_: can you paste the whole output for example to: http://paste.ubuntu.com/07:34
sai_http://paste.ubuntu.com/24458888/07:36
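
[Editor's note] The options sai_ quotes above are OpenStack provider model settings; they can be supplied at bootstrap time, for example:

    juju bootstrap <openstack-cloud> <controller-name> \
        --config use-floating-ip=true --config use-default-secgroup=true

The repeated "Connection refused" attempts in the paste mean the floating IP was allocated and attached but nothing was answering on port 22 yet; the usual suspects are cloud-init still bringing the instance up, or a security-group/router rule blocking SSH, so the guest console log and security groups are the next things to check.
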
kjackalGood morning Juju world!08:01
kklimondatych0: hey, have you ever looked into https://bugs.launchpad.net/juju/+bug/1632189 ? I've started hitting this today randomly, and I'm not sure how to debug it.08:51
mupBug #1632189: juju can not use upstart to run service in juju machine <ppc64el> <upstart> <juju:Triaged> <https://launchpad.net/bugs/1632189>08:51
anrahdoes anyone have a example of a charm that communicates with Juju API?09:03
magicaltrouti think the auto scaling charm thing does09:08
magicaltrouthttps://jujucharms.com/charmscaler09:09
=== vds_ is now known as vds
kklimondahow do juju install lxc & co? It's not installing my version of lxcfs, even though it's candidate for installation11:05
kklimondabummer, something is calling --target-release trusty-backports11:07
kjackalcory_fu: is there a way to have a decorator of my own in a reactive charm?11:37
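
[Editor's note] kjackal's question does not get an answer in this log. For reference, reactive handlers are ordinary Python functions, so a custom decorator can be stacked with the ones charms.reactive provides. A minimal sketch (the 'db.connected' flag and the db.host() call are placeholders, not from any real interface):

    from functools import wraps
    from charms.reactive import when
    from charmhelpers.core import hookenv

    def log_invocation(func):
        """Custom decorator: log each time the wrapped handler runs."""
        @wraps(func)
        def wrapper(*args, **kwargs):
            hookenv.log('running handler %s' % func.__name__)
            return func(*args, **kwargs)
        return wrapper

    @when('db.connected')   # keep the reactive decorator outermost...
    @log_invocation         # ...so it registers the already-wrapped handler
    def configure_db(db):
        hookenv.log('db host: %s' % db.host())   # placeholder interface method
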
cnfm0012:08
lazyPowermagicaltrout: are you charming up gitlab i see?13:18
rick_hmagicaltrout: hah, you did the work I was poking at with your bundle. :)13:19
magicaltroutlazyPower: its been available for 12 months, but nicely hidden by the charmstore search13:19
lazyPowermagicaltrout: i for one, apologize for our non-linear graph search.13:19
rick_hlazyPower: yes, we've been talking about it last night and want to see about an upgrade path from the current promulgated one at some point13:19
lazyPowermagicaltrout: only teh git repo or were you including CI and Registry as part of that?13:19
magicaltroutlazyPower: i'm lazy it ships what they put in the omnibus repo13:20
lazyPowermagicaltrout: so everything, but not integrated yet. gotchya13:20
magicaltrouti've used it for ages for internal git repo, i've not expanded yet13:20
magicaltroutalthough I have a branch here with an external db hook that i want to get into the charm soonish13:21
lazyPowermagicaltrout: lmk if you want help w/ the ci subsystem. the registry is still hairy to setup13:21
lazyPowerit involves a lot of self-signed key or proper authority-based-tls-key schenanigans that i'm not ready to support for everyone yet.13:21
magicaltroutthey use the gitlab registry on one of the darpa projects, I never even knew that was a thing13:21
magicaltroutI need to delve properly13:21
lazyPoweryeah man13:21
lazyPowerits actually quite good13:21
lazyPoweri'm using the sameersbn images a-la dorker (pun intended) and its been a rock solid performer for me13:22
lazyPowermuch easier than using a registry without an auth doman and no gui13:22
lazyPowers/doman/domain/13:22
magicaltroutyeah on our genomics project we use the docker v2 reg13:22
magicaltroutits fine as long as you like using curl to prod it ;)13:23
lazyPoweron a scale of one to trolled, how trolled does it make you feel? :)13:23
lazyPowermagicaltrout: oh and the final integration is actually mattermost13:23
lazyPowerso you get a full startup pack if you're using the omnibus.13:23
magicaltroutyeah13:23
magicaltroutso what do we want lazyPower, gitlab sorting out some certificates and then having k8s trust them?13:31
magicaltroutyou chaps must already have some code kicking around for k8s cert generation?13:32
magicaltroutthe back button in the charm store is an interesting sight to behold these days13:36
hatchmagicaltrout is it not working for you?13:39
magicaltroutwell it sorta works hatch13:43
magicaltroutbut it also sorta doesn't13:43
hatchit definitely should work :) can you explain what you're seeing?13:44
magicaltrouterm13:45
magicaltrouti'll see if a screenshot can explain13:45
hatchsure thanks!13:45
rick_hmagicaltrout: I'm not able to login. Your readme references "password of your choice"13:46
rick_hmagicaltrout: but it's not the old default from the first charm and I'm not seeing a config/etc?13:46
magicaltroutrick_h: you should define a random password and it just ask you to set it13:46
magicaltroutwas my understanding13:46
rick_hmagicaltrout: when you say "define" what do you mean?13:47
magicaltroutjust stab some random characters13:47
rick_hmagicaltrout: oh damn, I'm blind. I thought it was a login form but it's a change password form13:47
rick_hI see the link for "login" now. My bad.13:47
magicaltroutyeah its sneaky isn't it!13:47
magicaltroutI did the same yesterday13:47
rick_hmagicaltrout: this is what I get for using the older one yesterday13:47
rick_hmagicaltrout: ty :)13:48
magicaltrouthey hatch https://ibin.co/3KPFGS2PHiAo.png13:48
magicaltroutyou see the untitled-model back links?13:48
magicaltroutbut I've not been in a model13:48
magicaltroutand going back from this page takes me no where it doesn't even change13:49
hatchahh, hmm13:49
hatchhave you been sitting on this page a while?13:49
magicaltroutprobably13:49
hatchok np, just trying to think about what is causing those entries13:49
magicaltroutlike i said some times its absolutely fine13:49
hatchthanks, I'll create an issue so we can try and track this down13:50
magicaltroutsometimes i land like that and you have to go somewhere else to get moving again13:50
lazyPowermagicaltrout: we sure do, its teh easyrsa charm :)13:51
lazyPowermagicaltrout: you could in theory support that relation, and reuse the ca + trust chain there.13:51
magicaltroutsounds like that might be a sane way to do it lazyPower13:51
lazyPowermagicaltrout: so long as gitlab and k8s are deployed in the same model or you use xmodel relations to define that relationship, it should be as simple as placing the certs in teh correct configs and "magic"13:52
lazyPowermagicaltrout: whats nice is that so long as any replacement (say vault) reuse that relationship we can make that CA a pluggable component.13:52
lazyPoweri keep saying this but haven't had time to really dig into using another Cert Authority, let alone charm it up.13:52
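
[Editor's note] The easyrsa charm hands out certificates over the tls-certificates interface, so the gitlab charm would declare a 'certificates' requires relation in metadata.yaml and ask for a server cert from a handler. The sketch below assumes the stock interface-tls-certificates layer; the flag names, the request_server_cert() call, and its arguments come from that layer and may differ between versions:

    from charms.reactive import when, when_not, set_state
    from charmhelpers.core import hookenv

    @when('certificates.available')
    @when_not('gitlab.cert.requested')
    def request_server_cert(tls):
        cn = hookenv.unit_public_ip()
        # ask the CA (easyrsa, or a vault-backed CA as floated above) for a server cert
        tls.request_server_cert(cn, [cn], 'gitlab')
        set_state('gitlab.cert.requested')
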
lazyPowermagicaltrout: also whats your expected integration with with k8s? run the ci-worker in k8s? there's a multi-manager bin that allows your ci jobs to scale and create their own ephemeral pods...13:54
lazyPowermagicaltrout: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/executors/kubernetes.md13:54
magicaltroutyeah that would be very useful lazyPower I do similar with Jenkins on DC/OS13:55
lazyPoweri agree13:55
magicaltroutas a cash strapped startup, having CI servers sucking up resource and doing nothing makes me sad ;)13:55
lazyPowerthere's an obvious intersection here, i'm just not certain how this integration should actually work. I feel like its going to require another interface to get all the data but i dont want interface proliferation either.13:56
lazyPowerwhat we *might* do... is add the kube-admin relationship to fetch credentials and create a workload manifest in the gitlab charm and have an action deploy it.13:57
lazyPowerthat seems clean and keeps the onus of having the ci-multi-runner update from the gitlab charm side.13:57
lazyPoweri'll noodle on this s'more and get back to you magicaltrout. Feel free to poke between now/then about this though.13:59
magicaltroutwill do13:59
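
[Editor's note] For the CI integration being discussed, the kubernetes executor from the doc lazyPower links is selected in the runner's config.toml, roughly:

    [[runners]]
      executor = "kubernetes"
      [runners.kubernetes]
        namespace = "gitlab"

(illustrative only; see the linked page for the full option set). Each CI job then runs in its own short-lived pod instead of on an always-on CI worker, which is the "ephemeral pods" behaviour mentioned above.
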
bdxhttp://paste.ubuntu.com/24460316/14:01
bdxis ^ a bug?14:01
petevgcory_uf: small PR for you: https://github.com/juju/python-libjuju/pull/11314:03
lazyPowerbdx: i dont think instance type is a post-deploy constraint juju understands14:20
lazyPowerand afaik instance-type was actually a deprecated constraint, but i may be wrong there...14:20
bdxlazyPower: post deploy ?14:21
lazyPowerright so there's sets of constraints14:21
lazyPowerwait i'm about ot talk about both sides of my mouth, i see what you did here14:21
lazyPoweradd-machine and deploy14:21
lazyPowerbut i think that still holds true... yeah, i beleive add machine and deploy are 2 different sets of constraints. One understands that instance type the other doesn't.14:22
bdxoh really14:22
lazyPoweryeah, and if i'm wrong, i'll go ahead and put my pants on my head14:23
lazyPowerbecause i'm only 80% certain thats the case14:23
lazyPowerthats a pretty large margin for error14:23
* magicaltrout waits expectantly14:23
bdxa `juju deploy`ed application automatically consumes any available machine instead of getting affinity to its own resources?14:23
bdxok14:23
bdxgood to know14:23
bdxthx14:23
bdxI thought that was only a thing when using `--to`14:24
lazyPoweractually bdx...14:24
lazyPowerlet me flip this on its head14:24
bdx:)14:24
lazyPoweryou provisioned a super large machine14:24
lazyPowerit has no workload on it, and juju knows about it14:25
lazyPowerthen you requested a deployment to a much smaller machine, juju said "Yo dog, i see you like to deploy things, and i have this large machiine that totally fits that t2.micro in it"14:25
lazyPowerso mebbe it just said "Welp, nothing to do here" and crammed it on that machine because it effectively did what you asked. Now if you provisioned a t2.micro first, and attempted to put a m4.xlarge on it, i can see that being where its def a bug14:25
lazyPowerbut technically, juju used the resources that were available to it, before requesting more.14:26
bdxI see14:26
lazyPowerbdx: try flipping that around and see if the behavior changes14:26
lazyPoweranecdotal validation, but i'm pretty sure it'll behave how you would expect then14:26
magicaltrouti'd tend to agree with bdx though, if i'd asked for a t2.micro deployment and a large machine was available14:27
magicaltrouti'd not want my workload on it14:27
magicaltroutnot that i have that use case14:27
bdxin my case, I want one host to deploy my containers on, and one haproxy box to proxy to it14:28
bdxthe fact that juju makes assumptions about what I want to do with my workload makes my stomach churn14:28
magicaltroutthats just last nights beer14:29
bdxfor u14:30
bdxI don't drink beer bro - get it right14:30
bdxjeeeeze14:30
magicaltroutlol14:30
bdx:-)14:30
bdxlazyPower: I think I see what you are saying .... the constraints for 'add-machine' and 'deploy' commands aren't the same constraints, so the 'instance-type' isn't evaluated as to whether its the same or not14:41
lazyPowerbdx: i think its closer related to my last assumption...14:42
lazyPowerthat had you inverted those constraints, it would behaving as you expect.14:42
bdxlazyPower: ahhh ... so what we are dancing around is that juju doesn't take into consideration the actual instance type14:43
bdxjust the resources associated with the instance type14:44
bdxso like a t2.xlarge and a m4.xlarge would be equivelent in the eyes of juju because the both have the same cpu, mem14:45
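
[Editor's note] Restating the behaviour bdx and lazyPower converge on: when an existing clean machine already satisfies the requested resources, Juju can reuse it rather than provision the named instance type. So with something like

    juju add-machine --constraints instance-type=m4.xlarge
    juju deploy haproxy --constraints instance-type=t2.micro

the haproxy unit may land on the idle m4.xlarge. The commands are illustrative; whether this is intended placement behaviour or the bug bdx suspects is exactly what is being debated here.
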
bdxmagicaltrout: for future reference - whiskey with a single ice cube, or a malbec with room to breathe :)14:47
rick_hreminder juju show in 3hrs15:07
rick_hmagicaltrout: will show off your bundle and such if you're interested in joining15:07
magicaltrouti will swing by rick_h15:35
zeestratrick_h: question for the juju show (or here). What will be the upgrade strategy for 2.1 -> 2.2? Supported for controllers with models running and all?15:37
rick_hzeestrat: completely15:41
rick_hzeestrat: model migrations is the safest production ready method to upgrade15:41
rick_hzeestrat: https://jujucharms.com/docs/2.1/models-migrate15:41
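
[Editor's note] Per the docs page rick_h links, the migration itself is a single command run once both controllers are known to the client, along the lines of

    juju migrate <model-name> <target-controller>

after which the model's agents are handed over to the target controller and the source copy is removed when the migration completes.
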
petevgcory_fu: got another PR for you (matrix-side JaaS support): https://github.com/juju-solutions/matrix/pull/12715:45
petevg(it depends on that other PR, so it's a WIP for now.)15:46
cory_fupetevg: https://github.com/juju/python-libjuju/pull/113 can't be right.  The redirect info code was put in there specifically *for* jaas.  Also, the current libjuju master works fine for me on jaas.15:51
cory_futvansteenburgh: Can you confirm that first bit?   ^15:51
petevgcory_fu: can you run the add_model example against JaaS. It fails for me ...15:51
cory_fupetevg: Hrm.  Yeah, it does fail for me.  Let me re-test with conjure-up, which is where I saw it working, and confirm that it's using latest master for libjuju15:54
tvansteenburghcory_fu: in a meeting, can't look right now15:56
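
[Editor's note] The example being debugged, examples/add_model.py in python-libjuju, boils down to roughly the following; method names reflect the libjuju of that era (connect_current() was later superseded by connect()), and the destroy call may be destroy_models() depending on the release:

    import asyncio
    from juju.controller import Controller

    async def main():
        controller = Controller()
        await controller.connect_current()      # current controller from local Juju config
        model = await controller.add_model('libjuju-test')
        print('added model', model.info.name)
        await model.disconnect()
        await controller.destroy_model(model.info.uuid)
        await controller.disconnect()

    asyncio.get_event_loop().run_until_complete(main())

Against JAAS, the controller connection step is where the RedirectInfo problem discussed below appears.
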
cory_fupetevg: Hrm.  In conjure-up, I'm getting the server-version KeyError15:57
petevgcory_fu: hmmm ... if you do "juju list-controllers", what version do you get for your jaas controller?15:58
cory_fupetevg: 2.1.215:59
petevgcory_fu: that's the same as mine. That's weird.15:59
cory_fupetevg: Of course when I add a logging statement to see what the info dict contains, it has the server-version key16:02
petevgHeh.16:02
zeestratrick_h: Cool. Great to hear. So what do you do when model migrations doesn't work? #168039216:02
mupBug #1680392: Model migration fails on large model <juju:Fix Committed by thumper> <https://launchpad.net/bugs/1680392>16:02
rick_hzeestrat: so you can do in place upgrades like some folks still do from pre-migrations16:02
rick_hzeestrat: however, it's riskier16:03
rick_hmigrations are meant to be put together such that they fail in a clean way, reversible, etc.16:03
cory_fupetevg: Another thing to note: conjure-up is using the CLI to do the add-model and is then establishing a model connection without ever establishing a controller connection.  It's the controller connection that is failing in examples/add_model.py with the redirect issue16:03
cory_fupetevg: I'm also having the problem with not being able to remove the models from jaas16:04
zeestratYeah. We'd love to do migrations, but that doesn't seem to work with a model for for example Openstack in 2.1.216:04
petevgcory_fu: if you're not establishing a controller connection, I understand why you wouldn't be seeing the error :-)16:04
petevgcory_fu: as for removing models ... if I add and remove the model with the websocket api (e.g., matrix does it by itself), everything works.16:05
petevgIf I try to add a model via the cli, I can't remove it via the cli.16:05
petevgAnd I can't use the cli to remove models added by the websocket api (this happens when matrix fails without figuring out that it has added a model).16:05
petevgcory_fu: I am running into an annoying problem: I occasionally get timeouts in matrix's add_model call, despite making the timeout a lot more generous. When I check the model got added, but we apparently never heard about it.16:06
petevgAlso, I can't clean it up :-(16:06
cory_fupetevg: Only on JAAS?  Also, have you replicated it with debug logging turned on to see if the message ever came in on the websocket and we missed it somehow?16:07
petevgcory_fu: yes. Only on JaaS.16:08
petevgcory_fu: I've got matrix debugging turned on, but I haven't gone in and hacked in debug logging for the controller. Can I do that in JaaS?16:08
petevgOh. You just mean turn on debugging in python-libjuju. I can do that.16:08
cory_fupetevg: It's a bit late for him, but maybe urulama can help us out with these JAAS issues?16:08
cory_fuurulama: If you're still around16:09
petevgcory_fu: yeah. Squawking for help makes senes.16:09
petevg*sense16:09
petevgcory_fu: it looks like redirect_info is never meant to work for controllers, per the docstring in the source:16:09
urulamacory_fu: otp for a while :-/16:10
petevgRedirectInfo returns redirected host information for the model. In Juju it always returns an error because the Juju controller does not multiplex controllers.16:10
cory_fupetevg: The JAAS controller is slightly different from a normal Juju controller, specifically in that it *does* multiplex controllers.16:11
petevgHmmm ... don't know why we get the not implemented error, then. I have a feeling that it might have something to do with the timeouts (e.g., I end up talking to a controller that doesn't want to talk to me any more.)16:11
cory_fupetevg: It may be that the reason examples/add_model.py is failing is because we need to give the controller connection the info that it needs to redirect us to the proper actual controller16:11
mhiltonpetevg, cory_fu: can I help with something?16:12
cory_fumhilton: I hope so!  :)16:13
petevgmhilton: hopefully you can :-) We're trying to debug two issues when talking to a JaaS controller via the websocket api.16:13
zeestratrick_h: I guess there is no 2.1.3 planned that could fix model migrations for us on 2.1.2 without the big jump to 2.2?16:13
mhiltoncory_fu, petevg: if you're seeing a not implemented error from JAAS there's a good chance the JAAS controller just doesn't implement the call yet.16:13
rick_hzeestrat: not sure. I'd not be surprised and maybe it can be fixed on the 2.2 end. Migrations is interesting as sometimes the new model can handle issues.16:13
petevgmhilton: The call is theoretically a JaaS specific one, though: Admin.RedirectInfo16:13
cory_fumhilton: My understanding, though, is that the redirect logic is exactly for JAAS and it would be the only thing that *does* implement it16:14
cory_fumhilton: This is the example we're using that is giving us the issue with the RedirectInfo: https://github.com/juju/python-libjuju/blob/master/examples/add_model.py16:15
petevgmhilton: the json request is {"type": "Admin", "request": "RedirectInfo", "version": 3}16:15
cory_fumhilton: I feel like this worked in the past, when JAAS was at version 2.0.2, but I am not 100% sure16:15
petevgI can confirm that it worked in 2.0.2. mhilton: are we possibly just using the wrong version? (The facade isn't in the list of facades that schemagen puts together, so the version is just hard coded currently.)16:16
mhiltonpetevg, cory_fu: OK RedirectInfo is only implemented on a Model connection (that changed recently, but before it was a bug)16:16
cory_fuOk, so it did change, but it was working by accident before16:17
petevgmhilton: That makes sense. That means that my workaround is okay (though could be better -- I just swallow the error, rather than trying to skip asking for redirect_info on a Controller connection.)16:17
petevgmhilton: the second issue is that I add a model, and timeout, possibly because we never get a websocket response. I probably need to get better debugging info on that one, though.16:18
mhiltoncory_fu, petevg: so to be clear if you are connecting to the model you should use a new connection that connects to the model's UUID.16:18
petevgIt's good to know that it probably doesn't have anything to do with the RedirectInfo facade.16:18
cory_fupetevg: It's probably reasonable to just always try and swallow the error if the result is "not implemented" because a regular controller might do the same even for model connections16:18
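
[Editor's note] The workaround cory_fu describes amounts to treating Admin.RedirectInfo as optional. A minimal sketch against libjuju's Connection.rpc(), using the request body petevg quotes above; matching on the error text is an assumption, not the shape of the eventual fix:

    from juju.errors import JujuAPIError

    async def maybe_redirect_info(conn):
        """Ask for redirect info; plain Juju controllers answer 'not implemented'."""
        try:
            return await conn.rpc({
                'type': 'Admin',
                'request': 'RedirectInfo',
                'version': 3,
            })
        except JujuAPIError as err:
            if 'not implemented' in str(err).lower():
                return None    # ordinary controller: nothing to redirect to
            raise
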
petevgmhilton: yes. We are getting a new connection for the model.16:18
cory_fumhilton: Yeah, we do create a new connection with the UUID16:19
mhiltonthe login will fail with an error saying it's been redirected, and then you can log in to the redirected location, if you put that in the login part of the code it should make everything make more sense16:19
petevgmhilton: ... the timouts aren't consistent, like I'd expect they would be if we were doing something obviously wrong.16:19
mhiltoncory_fu, petevg: what's the second part of the problem?16:19
petevgmhilton: It's the timeouts. I ask the websocket api to create a model, and it does, but we're apparently not hearing about it.16:20
zeestratrick_h: yeah, the problem here seems to be due to an issue with the mongodb on the original controller we'd like to migrate from so I hope there will be some kind of fix or workaround.16:20
petevgmhilton: I guess the call to create the model is to the controller, so maybe something is going wrong with the handoff, and we're swallowing the error somehow.16:20
mhiltonpetevg, so you just get a closed connection with no error response or anything? is it on any particular cloud or region?16:21
petevgmhilton: the timeout is actually coming from Python, several layers up; it's just as likely to be an uncaught error as it is to be a network timeout.16:21
petevgmhilton: I've been testing on aws/us-east-1. Haven't gotten around to testing on other clouds.16:22
petevgOoh. This is before we actually start monitoring the websocket connection to make sure that it is alive. It could just be closing.16:22
mhiltonpetevg: OK as far as I know that one is working fine.16:22
petevgmhilton: I'll add some more debugging to this beast and ping back here if I have questions based on the results. Thank you for all the help thus far :-)16:23
cory_fupetevg: Am I correct in thinking that the reason you're hitting a timeout is because the we never see the model show up in the controller connection's watcher?  Or is it because we're not seeing a response for the API request itself?16:23
mhiltonpetevg: Oh the JAAS controller can be a little aggressive about shutting down connections if it doesn't get a Ping (or some other call) every minute or so.16:24
petevgcory_fu: ambiguous. We're hitting a timeout because we call "await Controller.add_model" with an x second timeout, and that call doesn't come back to us in x seconds.16:24
petevgmhilton: we're pinging it every 10 seconds, so that part should be okay :-)16:24
mhiltonpetevg: OK that sounds fine then. feel free to ping back later.16:25
petevgWill do. Thank you again.16:25
cory_fumhilton: One other issue that I'm seeing is that I'm occasionally, but not consistently, getting a login response that doesn't appear to contain a server-version key.  I haven't been able to reproduce it since increasing the logging to see what else is present in the response, but can you think of any way the login response could come back without an error but also without a server-version?16:25
mhiltoncory_fu: interesting. give me a couple of secs and I'll have a look16:26
cory_fupetevg: So that could either be that model_facade.CreateModel isn't returning in a timely fashion (i.e., the websocket response doesn't come in), or because the subsequent ssh key logic is hanging, or because the attempt to connect to the model after the fact is hanging.   I guess you're working on increasing logging around that16:28
cory_fuha, I reproduced it16:28
mhiltoncory_fu: are these controller connections, or model connections?16:29
cory_fumhilton: They're model connections, but I was able to reproduce it and it's an issue with how we're handling discharge-required-error responses16:30
cory_fupetevg: The build_facades call in login (https://github.com/juju/python-libjuju/blob/master/juju/client/connection.py#L467) needs to either account for or come after the discharge-required-error response check (https://github.com/juju/python-libjuju/blob/master/juju/client/connection.py#L343)16:31
cory_fupetevg: I'll open an issue16:32
petevgcory_fu: interesting. Accounting for it is the right thing, I think, because we need those facades for the pinger.16:34
tych0kklimonda: i haven't looked at it16:34
cory_fupetevg: Well, we could just move both the build_facades and create_task(pinger) to after this check: https://github.com/juju/python-libjuju/blob/master/juju/client/connection.py#L34316:35
cory_fuInstead of doing it in login16:35
cory_fuThough I guess we also need to do it on the no-redirect-info case above16:35
petevgcory_fu: yep. I like the accounting better than the making sure it gets called in multiple places :-)16:36
petevg... though I guess then the pinger might try to set itself up even though we want to just discharge an error and move on.16:36
petevgHmmm ....16:36
cory_fupetevg: Created https://github.com/juju/python-libjuju/issues/11416:37
cory_fupetevg: It would.  And then we'd end up with multiple tasks16:37
cory_fupetevg: I can resolve that today16:38
cory_fubrb16:38
petevgcory_fu: awesome. Thx.16:38
cory_fumhilton: Can you speak at all to the issue that we're seeing with JAAS where we get models added that can't be removed?17:06
mhiltoncory_fu, yes that one is a known JAAS bug, the model is removed, but JAAS doesn't find out.17:08
cory_fumhilton: Is there any way to get those models cleaned up?17:08
mhiltoncory_fu, there is a fix in the pipeline.17:08
cory_fuOk17:08
mhiltoncory_fu, there aren't any machines running on the models or anything that needs cleaning up if that's what you mean.17:09
cory_fumhilton: Right, I can see that.  So it's just an issue of clutter in juju list-models17:09
mhiltoncory_fu: although if you're just getting upset by a long list of dying models then it would be possible to manually clean them up if it's a big problem. There's a chance I can get the fix into produciton tomorrow though. which might end up being easier.17:11
cory_fumhilton: It's not a big deal, I can wait on the fix17:13
rick_h30min juju show warning wheeeeee17:27
rick_hjuju show links: https://hangouts.google.com/hangouts/_/spafozizrrgsvkyfq733kzxsmme to join the hangout17:55
rick_hhttps://www.youtube.com/watch?v=h2X9gIxXPH8 to watch it stream17:55
rick_hhatch: jrwren lazyPower cory_fu petevg magicaltrout and anyone else interested ^17:57
rick_hmagicaltrout: are you still able to join?18:04
=== frankban is now known as frankban|afk
lazyPoweroo snap, i missed the intro. /me pulls up the watch url18:19
lazyPowerif you want me there i'll hop in18:19
* lazyPower applauds and fanfares rick_h18:29
lazyPowergreat show this week rick_h18:29
lazyPowersorry i missed the CTA18:29
rick_hlazyPower: <318:31
magicaltroutsorry rick_h got home to find the kids still awake and causing chaos18:37
magicaltrouthow dare they not be in bed!18:37
rick_hmagicaltrout: hah, of course :)18:37
rick_hmagicaltrout: all good, just didn't want to start without you if you wanted to join18:37
rick_hmagicaltrout: <3 and ty for the updates!18:37
magicaltroutno probs18:37
magicaltroutkeep prodding me and i'll make sure the new stuff gets done at some point in the not too distant future ;)18:38
magicaltroutOOTB docker repo for CDK sounds pretty cool18:38
rick_hmagicaltrout: mmmm, yummy18:38
lazyPowermagicaltrout: there's been some work by a former contributor for a stand-alone registry18:39
lazyPowerplus we have a registry action in the k8s charms...18:39
lazyPowerbut i'm all for the gitlab registry. its by far the most robust free solution you can get today on the free market.18:39
rick_hyea, gitlab is very cool. I'd heard about it but didn't realize it did as much as it does.18:40
rick_hI'd love to see this build system in place doing CI/CD18:40
tychicus1I've been using gitlab since version 6.x, been very happy with the features they have added.  getting ready to consolidate the responsibilities of redmine and jenkins into our gitlab server18:47
rick_htychicus1: very cool19:00
bdxrick_h: the mailman project is a great example https://gitlab.com/mailman/mailman/pipelines19:02
bdxfor CI19:03
tychicus1I'll be doing this, as soon as I get persistent storage up and running on k8s http://blog.lwolf.org/post/fully-automated-gitlab-installation-on-kubernetes-including-runner-and-registry/19:06
cory_fupetevg, stokachu: https://github.com/juju/python-libjuju/pull/11621:39
petevgcory_fu: +1 after the tests pass.21:57
cory_fupetevg: I'm running into a strangely consistent issue running the integration tests locally.  The tests/integration/test_unit.py::test_run_action deployment of the git charm on trusty always ends up with my juju machine stuck in pending, and when I manually check the cloud-init-output.log on the lxd instance, I just see "Job failed to start" from the jujud-machine-0 service22:01
cory_fupetevg: The weird thing is that all of the other tests are fine, and it's always that one test22:02
cory_fupetevg: Have you seen anything like that?22:02
petevgcory_fu: I haven't. Huh.22:03
cory_fuI can't see it being related to libjuju in any way, and I suspect it has something to do with my lxd / zfs setup, but I can't even get any more debugging info because no logs are created for that service22:04
cory_fuAnd the first thing the service does is touch the log file, and it doesn't even do that22:04
petevgcory_fu: weird. I think that the only difference for me is that I'm not running zfs.22:05
magicaltrouteveryone should run zfs22:06
magicaltroutits the lay22:06
magicaltroutlaw22:06
cory_fupetevg: Hrm.  The travis build for that PR is now up to almost 35 minutes, where all the previous runs took about 20.  Maybe the failures I'm seeing are due to my changes somehow after all.  I really can't see how libjuju would cause such a weird provisioning error, though22:14
petevgHuh.22:15
cory_fupetevg: Of course, it could be an issue with the latest beta.  Aren't we still using that in our tests?22:15
petevgWe are.22:15
petevgcory_fu: I'm on beta2 locally. Has there been a new one released?22:18
petevgNope. That looks like the latest.22:19
cory_fupetevg: 2.2-beta3.122:20
petevgWeird. Don't know why apt doesn't see the new one.22:21
cory_fupetevg: Travis passed on that PR.  It just took a while for some reason22:21
petevgcory_fu: I'm still good with merging, then.22:22
petevgThe time on a lot of these is kind of hard to pin down.22:23
cory_fupetevg: I'm going to close out https://github.com/juju/python-libjuju/pull/113 as well22:23
petevgcory_fu: I beat you to it :-p22:23
cory_fupetevg: :)22:24
cory_fupetevg: Also, shouldn't 90, 98, and 99 be closed?22:24
cory_fuhttps://github.com/juju/python-libjuju/issues/9022:24
cory_fuhttps://github.com/juju/python-libjuju/issues/9822:24
cory_fuhttps://github.com/juju/python-libjuju/issues/9922:24
petevgcory_fu: yep. Closing them now.22:25
cory_fuThanks22:25
petevgnp22:25
petevgcory_fu: I'm going to call it a night. Tomorrow, I'll rebuild matrix's wheelhouse, and switch that WIP to being a real PR.22:27
cory_fupetevg: Cool, have a good night22:27
petevgYou, too :-)22:27
