[06:29] any guess when 2.2 Beta 3 will be out?
[06:33] anrah: is there something specific u r after in 2.2-b3?
[06:36] anastasiamac: Yes: https://bugs.launchpad.net/juju/+bug/1654144
[06:36] Bug #1654144: AllocatePublicIP fails on some Openstacks
[06:37] There are a couple of fixes in 2.2 I would like to test on our environment, but 2.2 does not work with floating IPs on our OpenStack
[06:47] anrah: there is hope that 2.2-b3 may be available this week. I cannot give u more concrete dates at this stage but it's very close :)
[06:51] Great, thanks :)
=== abc is now known as Guest52313
[07:10] juju connection timed out error
[07:23] I have configured juju with local and openstack. I followed the instructions here: Bootstrapping JuJu on top of an OpenStack private cloud for configuring juju on a private openstack deployment. Juju spawns a new VM in openstack and it is visible in the horizon dashboard but the problem occurs when juju tries to ssh into the newly spawned VM for installing the juju agent. This is what I get in the terminal:
[07:24] # use-floating-ip specifies whether a floating IP address is
        # required to give the nodes a public IP address. Some
        # installations assign public IP addresses by default without
        # requiring a floating IP address.
        # use-floating-ip: true
        # use-default-secgroup specifies whether new machine instances
        # should have the "default" Openstack security group assigned.
        # use-default-secgroup: true
        # network
[07:27] sai_: can you use some paste-service to paste that output?
[07:28] is there an error somewhere in the output, and how are you running the bootstrap command?
[07:29] 2017-04-19 09:46:26 INFO juju.environs.instances image.go:91 find instance - using image with id: eca6b790-18e5-42c8-b239-1f33f0400187
        2017-04-19 09:46:26 DEBUG juju.cloudconfig.instancecfg instancecfg.go:525 Setting numa ctl preference to false
        2017-04-19 09:46:27 DEBUG juju.service discovery.go:65 discovered init system "systemd" from series "xenial"
        2017-04-19 09:46:27 DEBUG juju.provider.openstack provider.go:1034 openstack user data;
[07:29] 2017-04-19 09:46:27 DEBUG juju.provider.openstack provider.go:1034 openstack user data; 1031 bytes
        2017-04-19 09:46:27 DEBUG juju.provider.openstack provider.go:1049 allocating public IP address for openstack node
        2017-04-19 09:46:37 DEBUG juju.provider.openstack provider.go:913 found unassigned public ip: 172.16.50.108
        2017-04-19 09:46:37 INFO juju.provider.openstack provider.go:1054 allocated public IP 172.16.50.108
        2017-04-19 09:46:44
[07:30] 2017-04-19 09:47:15 DEBUG juju.provider.openstack provider.go:459 instance c19d0ff7-8f43-41f9-a268-9600df72168a has floating IP address: 172.16.50.108
        2017-04-19 09:47:16 DEBUG juju.provider.common bootstrap.go:257 connection attempt for 172.16.50.108 failed: ssh: connect to host 172.16.50.108 port 22: Connection refused
        2017-04-19 09:47:21 DEBUG juju.provider.common bootstrap.go:257 connection attempt for 172.16.50.108 failed: ssh: conne
[07:34] sai_: can you paste the whole output for example to: http://paste.ubuntu.com/
[07:36] http://paste.ubuntu.com/24458888/
[08:01] Good morning Juju world!
[08:51] tych0: hey, have you ever looked into https://bugs.launchpad.net/juju/+bug/1632189 ? I've started hitting this today randomly, and I'm not sure how to debug it.
[08:51] Bug #1632189: juju can not use upstart to run service in juju machine
[09:03] does anyone have an example of a charm that communicates with the Juju API?
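
As an aside on that last question: a charm (or any other client) can talk to the Juju API from Python with python-libjuju. A minimal sketch, assuming the library of this era is installed and `juju switch` already points at the model of interest; connect_current() is taken from the library's documented examples rather than from this conversation, and the status dump is only a rough illustration:

    import asyncio

    from juju.model import Model


    async def main():
        model = Model()
        await model.connect_current()        # connect to the currently selected model
        try:
            for name, app in model.applications.items():
                units = [unit.name for unit in app.units]
                print(name, units)           # a very rough `juju status`
        finally:
            await model.disconnect()


    if __name__ == '__main__':
        asyncio.get_event_loop().run_until_complete(main())
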
[09:08] i think the auto scaling charm thing does
[09:09] https://jujucharms.com/charmscaler
=== vds_ is now known as vds
[11:05] how does juju install lxc & co? It's not installing my version of lxcfs, even though it's a candidate for installation
[11:07] bummer, something is calling --target-release trusty-backports
[11:37] cory_fu: is there a way to have a decorator of my own in a reactive charm?
[12:08] m00
[13:18] magicaltrout: are you charming up gitlab i see?
[13:19] magicaltrout: hah, you did the work I was poking at with your bundle. :)
[13:19] lazyPower: its been available for 12 months, but nicely hidden by the charmstore search
[13:19] magicaltrout: i for one, apologize for our non-linear graph search.
[13:19] lazyPower: yes, we were talking about it last night and want to see about an upgrade path from the current promulgated one at some point
[13:19] magicaltrout: only the git repo or were you including CI and Registry as part of that?
[13:20] lazyPower: i'm lazy, it ships what they put in the omnibus repo
[13:20] magicaltrout: so everything, but not integrated yet. gotchya
[13:20] i've used it for ages for an internal git repo, i've not expanded yet
[13:21] although I have a branch here with an external db hook that i want to get into the charm soonish
[13:21] magicaltrout: lmk if you want help w/ the ci subsystem. the registry is still hairy to setup
[13:21] it involves a lot of self-signed key or proper authority-based-tls-key shenanigans that i'm not ready to support for everyone yet.
[13:21] they use the gitlab registry on one of the darpa projects, I never even knew that was a thing
[13:21] I need to delve properly
[13:21] yeah man
[13:21] its actually quite good
[13:22] i'm using the sameersbn images a-la dorker (pun intended) and its been a rock solid performer for me
[13:22] much easier than using a registry without an auth domain and no gui
[13:22] yeah on our genomics project we use the docker v2 reg
[13:23] its fine as long as you like using curl to prod it ;)
[13:23] on a scale of one to trolled, how trolled does it make you feel? :)
[13:23] magicaltrout: oh and the final integration is actually mattermost
[13:23] so you get a full startup pack if you're using the omnibus.
[13:23] yeah
[13:31] so what do we want lazyPower, gitlab sorting out some certificates and then having k8s trust them?
[13:32] you chaps must already have some code kicking around for k8s cert generation?
[13:36] the back button in the charm store is an interesting sight to behold these days
[13:39] magicaltrout is it not working for you?
[13:43] well it sorta works hatch
[13:43] but it also sorta doesn't
[13:44] it definitely should work :) can you explain what you're seeing?
[13:45] erm
[13:45] i'll see if a screenshot can explain
[13:45] sure thanks!
[13:46] magicaltrout: I'm not able to login. Your readme references "password of your choice"
[13:46] magicaltrout: but it's not the old default from the first charm and I'm not seeing a config/etc?
[13:46] rick_h: you should define a random password and it just asks you to set it
[13:46] was my understanding
[13:47] magicaltrout: when you say "define" what do you mean?
[13:47] just stab some random characters
[13:47] magicaltrout: oh damn, I'm blind. I thought it was a login form but it's a change password form
[13:47] I see the link for "login" now. My bad.
[13:47] yeah its sneaky isn't it!
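
On the earlier question about having a decorator of your own in a reactive charm: one common approach is an ordinary Python decorator stacked underneath the charms.reactive ones. A minimal sketch, assuming the @when/@when_not/set_state API of this era; the handler name, state names, and logging below are purely illustrative, not anything from this conversation:

    import functools
    import logging

    from charms.reactive import when, when_not, set_state

    log = logging.getLogger('my-charm')


    def log_invocation(func):
        """Plain Python decorator: log entry and exit of a reactive handler."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            log.info('entering %s', func.__name__)
            try:
                return func(*args, **kwargs)
            finally:
                log.info('leaving %s', func.__name__)
        return wrapper


    @when('db.connected')              # illustrative state names
    @when_not('myservice.configured')
    @log_invocation
    def configure_myservice():
        # ... real configuration work would go here ...
        set_state('myservice.configured')
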
[13:47] I did the same yesterday
[13:47] magicaltrout: this is what I get for using the older one yesterday
[13:48] magicaltrout: ty :)
[13:48] hey hatch https://ibin.co/3KPFGS2PHiAo.png
[13:48] you see the untitled-model back links?
[13:48] but I've not been in a model
[13:49] and going back from this page takes me nowhere, it doesn't even change
[13:49] ahh, hmm
[13:49] have you been sitting on this page a while?
[13:49] probably
[13:49] ok np, just trying to think about what is causing those entries
[13:49] like i said sometimes its absolutely fine
[13:50] thanks, I'll create an issue so we can try and track this down
[13:50] sometimes i land like that and you have to go somewhere else to get moving again
[13:51] magicaltrout: we sure do, its the easyrsa charm :)
[13:51] magicaltrout: you could in theory support that relation, and reuse the ca + trust chain there.
[13:51] sounds like that might be a sane way to do it lazyPower
[13:52] magicaltrout: so long as gitlab and k8s are deployed in the same model or you use xmodel relations to define that relationship, it should be as simple as placing the certs in the correct configs and "magic"
[13:52] magicaltrout: whats nice is that so long as any replacement (say vault) reuses that relationship we can make that CA a pluggable component.
[13:52] i keep saying this but haven't had time to really dig into using another Cert Authority, let alone charm it up.
[13:54] magicaltrout: also whats your expected integration with k8s? run the ci-worker in k8s? there's a multi-manager bin that allows your ci jobs to scale and create their own ephemeral pods...
[13:54] magicaltrout: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/executors/kubernetes.md
[13:55] yeah that would be very useful lazyPower I do similar with Jenkins on DC/OS
[13:55] i agree
[13:55] as a cash-strapped startup, having CI servers sucking up resources and doing nothing makes me sad ;)
[13:56] there's an obvious intersection here, i'm just not certain how this integration should actually work. I feel like its going to require another interface to get all the data but i dont want interface proliferation either.
[13:57] what we *might* do... is add the kube-admin relationship to fetch credentials and create a workload manifest in the gitlab charm and have an action deploy it.
[13:57] that seems clean and keeps the onus of having the ci-multi-runner update from the gitlab charm side.
[13:59] i'll noodle on this s'more and get back to you magicaltrout. Feel free to poke between now/then about this though.
[13:59] will do
[14:01] http://paste.ubuntu.com/24460316/
[14:01] is ^ a bug?
[14:03] cory_fu: small PR for you: https://github.com/juju/python-libjuju/pull/113
[14:20] bdx: i dont think instance type is a post-deploy constraint juju understands
[14:20] and afaik instance-type was actually a deprecated constraint, but i may be wrong there...
[14:21] lazyPower: post deploy ?
[14:21] right so there's sets of constraints
[14:21] wait i'm about to talk out of both sides of my mouth, i see what you did here
[14:21] add-machine and deploy
[14:22] but i think that still holds true... yeah, i believe add-machine and deploy are 2 different sets of constraints. One understands that instance type, the other doesn't.
[14:22] oh really
[14:23] yeah, and if i'm wrong, i'll go ahead and put my pants on my head
[14:23] because i'm only 80% certain thats the case
[14:23] thats a pretty large margin for error
[14:23] * magicaltrout waits expectantly
[14:23] a `juju deploy`ed application automatically consumes any available machine instead of getting affinity to its own resources?
[14:23] ok
[14:23] good to know
[14:23] thx
[14:24] I thought that was only a thing when using `--to`
[14:24] actually bdx...
[14:24] let me flip this on its head
[14:24] :)
[14:24] you provisioned a super large machine
[14:25] it has no workload on it, and juju knows about it
[14:25] then you requested a deployment to a much smaller machine, juju said "Yo dog, i see you like to deploy things, and i have this large machine that totally fits that t2.micro in it"
[14:25] so mebbe it just said "Welp, nothing to do here" and crammed it on that machine because it effectively did what you asked. Now if you provisioned a t2.micro first, and attempted to put an m4.xlarge on it, i can see that being where its def a bug
[14:26] but technically, juju used the resources that were available to it, before requesting more.
[14:26] I see
[14:26] bdx: try flipping that around and see if the behavior changes
[14:26] anecdotal validation, but i'm pretty sure it'll behave how you would expect then
[14:27] i'd tend to agree with bdx though, if i'd asked for a t2.micro deployment and a large machine was available
[14:27] i'd not want my workload on it
[14:27] not that i have that use case
[14:28] in my case, I want one host to deploy my containers on, and one haproxy box to proxy to it
[14:28] the fact that juju makes assumptions about what I want to do with my workload makes my stomach churn
[14:29] thats just last nights beer
[14:30] for u
[14:30] I don't drink beer bro - get it right
[14:30] jeeeeze
[14:30] lol
[14:30] :-)
[14:41] lazyPower: I think I see what you are saying .... the constraints for 'add-machine' and 'deploy' commands aren't the same constraints, so the 'instance-type' isn't evaluated as to whether its the same or not
[14:42] bdx: i think its more closely related to my last assumption...
[14:42] that had you inverted those constraints, it would behave as you expect.
[14:43] lazyPower: ahhh ... so what we are dancing around is that juju doesn't take into consideration the actual instance type
[14:44] just the resources associated with the instance type
[14:45] so like a t2.xlarge and an m4.xlarge would be equivalent in the eyes of juju because they both have the same cpu, mem
[14:47] magicaltrout: for future reference - whiskey with a single ice cube, or a malbec with room to breathe :)
[15:07] reminder juju show in 3hrs
[15:07] magicaltrout: will show off your bundle and such if you're interested in joining
[15:35] i will swing by rick_h
[15:37] rick_h: question for the juju show (or here). What will be the upgrade strategy for 2.1 -> 2.2? Supported for controllers with models running and all?
[15:41] zeestrat: completely
[15:41] zeestrat: model migrations are the safest production-ready method to upgrade
[15:41] zeestrat: https://jujucharms.com/docs/2.1/models-migrate
[15:45] cory_fu: got another PR for you (matrix-side JaaS support): https://github.com/juju-solutions/matrix/pull/127
[15:46] (it depends on that other PR, so it's a WIP for now.)
[15:51] petevg: https://github.com/juju/python-libjuju/pull/113 can't be right. The redirect info code was put in there specifically *for* jaas.
Also, the current libjuju master works fine for me on jaas.
[15:51] tvansteenburgh: Can you confirm that first bit? ^
[15:51] cory_fu: can you run the add_model example against JaaS? It fails for me ...
[15:54] petevg: Hrm. Yeah, it does fail for me. Let me re-test with conjure-up, which is where I saw it working, and confirm that it's using latest master for libjuju
[15:56] cory_fu: in a meeting, can't look right now
[15:57] petevg: Hrm. In conjure-up, I'm getting the server-version KeyError
[15:58] cory_fu: hmmm ... if you do "juju list-controllers", what version do you get for your jaas controller?
[15:59] petevg: 2.1.2
[15:59] cory_fu: that's the same as mine. That's weird.
[16:02] petevg: Of course when I add a logging statement to see what the info dict contains, it has the server-version key
[16:02] Heh.
[16:02] rick_h: Cool. Great to hear. So what do you do when model migrations don't work? #1680392
[16:02] Bug #1680392: Model migration fails on large model
[16:02] zeestrat: so you can do in-place upgrades like some folks still do from pre-migrations
[16:03] zeestrat: however, it's riskier
[16:03] migrations are meant to be put together such that they fail in a clean way, reversible, etc.
[16:03] petevg: Another thing to note: conjure-up is using the CLI to do the add-model and is then establishing a model connection without ever establishing a controller connection. It's the controller connection that is failing in examples/add_model.py with the redirect issue
[16:04] petevg: I'm also having the problem with not being able to remove the models from jaas
[16:04] Yeah. We'd love to do migrations, but that doesn't seem to work with a model for, for example, Openstack in 2.1.2
[16:04] cory_fu: if you're not establishing a controller connection, I understand why you wouldn't be seeing the error :-)
[16:05] cory_fu: as for removing models ... if I add and remove the model with the websocket api (e.g., matrix does it by itself), everything works.
[16:05] If I try to add a model via the cli, I can't remove it via the cli.
[16:05] And I can't use the cli to remove models added by the websocket api (this happens when matrix fails without figuring out that it has added a model).
[16:06] cory_fu: I am running into an annoying problem: I occasionally get timeouts in matrix's add_model call, despite making the timeout a lot more generous. When I check, the model got added, but we apparently never heard about it.
[16:06] Also, I can't clean it up :-(
[16:07] petevg: Only on JAAS? Also, have you replicated it with debug logging turned on to see if the message ever came in on the websocket and we missed it somehow?
[16:08] cory_fu: yes. Only on JaaS.
[16:08] cory_fu: I've got matrix debugging turned on, but I haven't gone in and hacked in debug logging for the controller. Can I do that in JaaS?
[16:08] Oh. You just mean turn on debugging in python-libjuju. I can do that.
[16:08] petevg: It's a bit late for him, but maybe urulama can help us out with these JAAS issues?
[16:09] urulama: If you're still around
[16:09] cory_fu: yeah. Squawking for help makes sense.
[16:09] cory_fu: it looks like redirect_info is never meant to work for controllers, per the docstring in the source:
[16:10] cory_fu: otp for a while :-/
[16:10] RedirectInfo returns redirected host information for the model. In Juju it always returns an error because the Juju controller does not multiplex controllers.
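
A rough sketch of the workaround petevg describes (swallow "not implemented" from Admin.RedirectInfo on a controller connection while still surfacing a real redirect). The request payload is the one quoted in the discussion; Connection.rpc() and JujuAPIError are python-libjuju pieces of this era, but the helper name and the error-string check are assumptions for illustration:

    from juju.errors import JujuAPIError


    async def maybe_redirect_info(connection):
        """Ask Admin.RedirectInfo, tolerating controllers that don't implement it."""
        request = {"type": "Admin", "request": "RedirectInfo", "version": 3}
        try:
            result = await connection.rpc(request)
        except JujuAPIError as err:
            if 'not implemented' in str(err).lower():
                # A plain (non-multiplexing) controller: nothing to redirect to.
                return None
            raise
        return result.get('response')
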
[16:11] petevg: The JAAS controller is slightly different from a normal Juju controller, specifically in that it *does* multiplex controllers.
[16:11] Hmmm ... don't know why we get the not implemented error, then. I have a feeling that it might have something to do with the timeouts (e.g., I end up talking to a controller that doesn't want to talk to me any more.)
[16:11] petevg: It may be that the reason examples/add_model.py is failing is because we need to give the controller connection the info that it needs to redirect us to the proper actual controller
[16:12] petevg, cory_fu: can I help with something?
[16:13] mhilton: I hope so! :)
[16:13] mhilton: hopefully you can :-) We're trying to debug two issues when talking to a JaaS controller via the websocket api.
[16:13] rick_h: I guess there is no 2.1.3 planned that could fix model migrations for us on 2.1.2 without the big jump to 2.2?
[16:13] cory_fu, petevg: if you're seeing a not implemented error from JAAS there's a good chance the JAAS controller just doesn't implement the call yet.
[16:13] zeestrat: not sure. I'd not be surprised and maybe it can be fixed on the 2.2 end. Migrations is interesting as sometimes the new model can handle issues.
[16:13] mhilton: The call is theoretically a JaaS-specific one, though: Admin.RedirectInfo
[16:14] mhilton: My understanding, though, is that the redirect logic is exactly for JAAS and it would be the only thing that *does* implement it
[16:15] mhilton: This is the example we're using that is giving us the issue with the RedirectInfo: https://github.com/juju/python-libjuju/blob/master/examples/add_model.py
[16:15] mhilton: the json request is {"type": "Admin", "request": "RedirectInfo", "version": 3}
[16:15] mhilton: I feel like this worked in the past, when JAAS was at version 2.0.2, but I am not 100% sure
[16:16] I can confirm that it worked in 2.0.2. mhilton: are we possibly just using the wrong version? (The facade isn't in the list of facades that schemagen puts together, so the version is just hard-coded currently.)
[16:16] petevg, cory_fu: OK RedirectInfo is only implemented on a Model connection (that changed recently, but before it was a bug)
[16:17] Ok, so it did change, but it was working by accident before
[16:17] mhilton: That makes sense. That means that my workaround is okay (though could be better -- I just swallow the error, rather than trying to skip asking for redirect_info on a Controller connection.)
[16:18] mhilton: the second issue is that I add a model, and time out, possibly because we never get a websocket response. I probably need to get better debugging info on that one, though.
[16:18] It's good to know that it probably doesn't have anything to do with the RedirectInfo facade.
[16:18] cory_fu, petevg: so to be clear, if you are connecting to the model, you should use a new connection that connects to the model's UUID.
[16:18] petevg: It's probably reasonable to just always try and swallow the error if the result is "not implemented" because a regular controller might do the same even for model connections
[16:18] mhilton: yes. We are getting a new connection for the model.
[16:19] mhilton: Yeah, we do create a new connection with the UUID
[16:19] the login will fail with an error saying it's been redirected, and then you can log in to the redirected location; if you put that in the login part of the code it should make everything make more sense
[16:19] mhilton: ...
the timeouts aren't consistent, like I'd expect they would be if we were doing something obviously wrong.
[16:19] cory_fu, petevg: what's the second part of the problem?
[16:20] mhilton: It's the timeouts. I ask the websocket api to create a model, and it does, but we're apparently not hearing about it.
[16:20] rick_h: yeah, the problem here seems to be due to an issue with the mongodb on the original controller we'd like to migrate from, so I hope there will be some kind of fix or workaround.
[16:20] mhilton: I guess the call to create the model is to the controller, so maybe something is going wrong with the handoff, and we're swallowing the error somehow.
[16:21] petevg, so you just get a closed connection with no error response or anything? is it on any particular cloud or region?
[16:21] mhilton: the timeout is actually coming from Python, several layers up; it's just as likely to be an uncaught error as it is to be a network timeout.
[16:22] mhilton: I've been testing on aws/us-east-1. Haven't gotten around to testing on other clouds.
[16:22] Ooh. This is before we actually start monitoring the websocket connection to make sure that it is alive. It could just be closing.
[16:22] petevg: OK as far as I know that one is working fine.
[16:23] mhilton: I'll add some more debugging to this beast and ping back here if I have questions based on the results. Thank you for all the help thus far :-)
[16:23] petevg: Am I correct in thinking that the reason you're hitting a timeout is because we never see the model show up in the controller connection's watcher? Or is it because we're not seeing a response for the API request itself?
[16:24] petevg: Oh the JAAS controller can be a little aggressive about shutting down connections if it doesn't get a Ping (or some other call) every minute or so.
[16:24] cory_fu: ambiguous. We're hitting a timeout because we call "await Controller.add_model" with an x second timeout, and that call doesn't come back to us in x seconds.
[16:24] mhilton: we're pinging it every 10 seconds, so that part should be okay :-)
[16:25] petevg: OK that sounds fine then. feel free to ping back later.
[16:25] Will do. Thank you again.
[16:25] mhilton: One other issue that I'm seeing is that I'm occasionally, but not consistently, getting a login response that doesn't appear to contain a server-version key. I haven't been able to reproduce it since increasing the logging to see what else is present in the response, but can you think of any way the login response could come back without an error but also without a server-version?
[16:26] cory_fu: interesting. give me a couple of secs and I'll have a look
[16:28] petevg: So that could either be that model_facade.CreateModel isn't returning in a timely fashion (i.e., the websocket response doesn't come in), or because the subsequent ssh key logic is hanging, or because the attempt to connect to the model after the fact is hanging. I guess you're working on increasing logging around that
[16:28] ha, I reproduced it
[16:29] cory_fu: are these controller connections, or model connections?
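
The '"await Controller.add_model" with an x second timeout' that petevg mentions above presumably looks something like the following sketch. Controller.add_model() is a python-libjuju API; the timeout value, the connect_current() usage, and the cleanup here are assumptions for illustration, not matrix's actual code:

    import asyncio

    from juju.controller import Controller


    async def add_model_with_timeout(model_name, timeout=300):
        controller = Controller()
        await controller.connect_current()   # current controller, JAAS or otherwise
        try:
            model = await asyncio.wait_for(controller.add_model(model_name), timeout)
        except asyncio.TimeoutError:
            # The model may still have been created server-side even though the
            # response never arrived -- the situation described above.
            raise
        finally:
            await controller.disconnect()
        return model
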
[16:30] mhilton: They're model connections, but I was able to reproduce it and it's an issue with how we're handling discharge-required-error responses
[16:31] petevg: The build_facades call in login (https://github.com/juju/python-libjuju/blob/master/juju/client/connection.py#L467) needs to either account for or come after the discharge-required-error response check (https://github.com/juju/python-libjuju/blob/master/juju/client/connection.py#L343)
[16:32] petevg: I'll open an issue
[16:34] cory_fu: interesting. Accounting for it is the right thing, I think, because we need those facades for the pinger.
[16:34] kklimonda: i haven't looked at it
[16:35] petevg: Well, we could just move both the build_facades and create_task(pinger) to after this check: https://github.com/juju/python-libjuju/blob/master/juju/client/connection.py#L343
[16:35] Instead of doing it in login
[16:35] Though I guess we also need to do it on the no-redirect-info case above
[16:36] cory_fu: yep. I like the accounting better than making sure it gets called in multiple places :-)
[16:36] ... though I guess then the pinger might try to set itself up even though we want to just discharge an error and move on.
[16:36] Hmmm ....
[16:37] petevg: Created https://github.com/juju/python-libjuju/issues/114
[16:37] petevg: It would. And then we'd end up with multiple tasks
[16:38] petevg: I can resolve that today
[16:38] brb
[16:38] cory_fu: awesome. Thx.
[17:06] mhilton: Can you speak at all to the issue that we're seeing with JAAS where we get models added that can't be removed?
[17:08] cory_fu, yes that one is a known JAAS bug, the model is removed, but JAAS doesn't find out.
[17:08] mhilton: Is there any way to get those models cleaned up?
[17:08] cory_fu, there is a fix in the pipeline.
[17:08] Ok
[17:09] cory_fu, there aren't any machines running on the models or anything that needs cleaning up if that's what you mean.
[17:09] mhilton: Right, I can see that. So it's just an issue of clutter in juju list-models
[17:11] cory_fu: although if you're just getting upset by a long list of dying models then it would be possible to manually clean them up if it's a big problem. There's a chance I can get the fix into production tomorrow though, which might end up being easier.
[17:13] mhilton: It's not a big deal, I can wait on the fix
[17:27] 30min juju show warning wheeeeee
[17:55] juju show links: https://hangouts.google.com/hangouts/_/spafozizrrgsvkyfq733kzxsmme to join the hangout
[17:55] https://www.youtube.com/watch?v=h2X9gIxXPH8 to watch it stream
[17:57] hatch: jrwren lazyPower cory_fu petevg magicaltrout and anyone else interested ^
[18:04] magicaltrout: are you still able to join?
=== frankban is now known as frankban|afk
[18:19] oo snap, i missed the intro. /me pulls up the watch url
[18:19] if you want me there i'll hop in
[18:29] * lazyPower applauds and fanfares rick_h
[18:29] great show this week rick_h
[18:29] sorry i missed the CTA
[18:31] lazyPower: <3
[18:37] sorry rick_h got home to find the kids still awake and causing chaos
[18:37] how dare they not be in bed!
[18:37] magicaltrout: hah, of course :)
[18:37] magicaltrout: all good, just didn't want to start without you if you wanted to join
[18:37] magicaltrout: <3 and ty for the updates!
[18:37] no probs
[18:38] keep prodding me and i'll make sure the new stuff gets done at some point in the not too distant future ;)
[18:38] OOTB docker repo for CDK sounds pretty cool
[18:38] magicaltrout: mmmm, yummy
[18:39] magicaltrout: there's been some work by a former contributor for a stand-alone registry
[18:39] plus we have a registry action in the k8s charms...
[18:39] but i'm all for the gitlab registry. its by far the most robust free solution you can get today on the free market.
[18:40] yea, gitlab is very cool. I'd heard about it but didn't realize it did as much as it does.
[18:40] I'd love to see this build system in place doing CI/CD
[18:47] I've been using gitlab since version 6.x, been very happy with the features they have added. getting ready to consolidate the responsibilities of redmine and jenkins into our gitlab server
[19:00] tychicus1: very cool
[19:02] rick_h: the mailman project is a great example https://gitlab.com/mailman/mailman/pipelines
[19:03] for CI
[19:06] I'll be doing this, as soon as I get persistent storage up and running on k8s http://blog.lwolf.org/post/fully-automated-gitlab-installation-on-kubernetes-including-runner-and-registry/
[21:39] petevg, stokachu: https://github.com/juju/python-libjuju/pull/116
[21:57] cory_fu: +1 after the tests pass.
[22:01] petevg: I'm running into a strangely consistent issue running the integration tests locally. The tests/integration/test_unit.py::test_run_action deployment of the git charm on trusty always ends up with my juju machine stuck in pending, and when I manually check the cloud-init-output.log on the lxd instance, I just see "Job failed to start" from the jujud-machine-0 service
[22:02] petevg: The weird thing is that all of the other tests are fine, and it's always that one test
[22:02] petevg: Have you seen anything like that?
[22:03] cory_fu: I haven't. Huh.
[22:04] I can't see it being related to libjuju in any way, and I suspect it has something to do with my lxd / zfs setup, but I can't even get any more debugging info because no logs are created for that service
[22:04] And the first thing the service does is touch the log file, and it doesn't even do that
[22:05] cory_fu: weird. I think that the only difference for me is that I'm not running zfs.
[22:06] everyone should run zfs
[22:06] its the law
[22:14] petevg: Hrm. The travis build for that PR is now up to almost 35 minutes, where all the previous runs took about 20. Maybe the failures I'm seeing are due to my changes somehow after all. I really can't see how libjuju would cause such a weird provisioning error, though
[22:15] Huh.
[22:15] petevg: Of course, it could be an issue with the latest beta. Aren't we still using that in our tests?
[22:15] We are.
[22:18] cory_fu: I'm on beta2 locally. Has there been a new one released?
[22:19] Nope. That looks like the latest.
[22:20] petevg: 2.2-beta3.1
[22:21] Weird. Don't know why apt doesn't see the new one.
[22:21] petevg: Travis passed on that PR. It just took a while for some reason
[22:22] cory_fu: I'm still good with merging, then.
[22:23] The time on a lot of these is kind of hard to pin down.
[22:23] petevg: I'm going to close out https://github.com/juju/python-libjuju/pull/113 as well
[22:23] cory_fu: I beat you to it :-p
[22:24] petevg: :)
[22:24] petevg: Also, shouldn't 90, 98, and 99 be closed?
[22:24] https://github.com/juju/python-libjuju/issues/90
[22:24] https://github.com/juju/python-libjuju/issues/98
[22:24] https://github.com/juju/python-libjuju/issues/99
[22:25] cory_fu: yep. Closing them now.
[22:25] Thanks
[22:25] np
[22:27] cory_fu: I'm going to call it a night. Tomorrow, I'll rebuild matrix's wheelhouse, and switch that WIP to being a real PR.
[22:27] petevg: Cool, have a good night
[22:27] You, too :-)