[02:06] <beisner> kwmonroe, haha - clearly, layer basic attempted a config-get too early.  ;-)
[02:25] <kwmonroe> i don't think so beisner.. i didn't show the whole log (which i have conveniently lost by now), but the charm had already gone through "installing charm software", after which i think it should be fair game to config-get to your heart's content.  i think the issue was more with "error: connection is shut down", as in maybe config-get isn't safe to run if a unit's jujud can't contact the controller.
[02:26] <beisner> kwmonroe, mostly joking.  but yeah, the charm can't be expected to solve tooling/agent errors.  i'd raise a core bug.
[02:28] <kwmonroe> beisner: i appreciate your jovialityness.  i also like your no-retry stance and hope to live in your world someday.  it's just so much easier to push those silly errors under the covers until even a retry isn't enough.  it makes debugging future-kevin's problem, and i don't care about that guy.
[02:35] <beisner> kwmonroe, ha!  happy to provide levity.  well to be clear and differentiate:  i think failed hook retry is a good thing in user/production land.  it removes needless face-slaps from transient infra/interweb type issues.  it's in dev/test/gates where i think it is actually counterproductive.
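A minimal sketch of the kind of retry wrapper being joked about here, assuming a charm hook that swallows a transient "connection is shut down" from a hook tool; the option name is made up, and per the discussion above this is the sort of thing that helps in production but hurts in dev/test gates:

    # hypothetical: retry config-get a few times before giving up
    for i in 1 2 3; do
        if value=$(config-get my-option 2>/dev/null); then
            break
        fi
        sleep 5
    done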
[09:12] <kjackal> Good morning Juju world!
[11:09] <Ankammarao> Hi All
[11:50] <Ankammarao> Hi All
[11:58] <Mmike> Hi, lads. Any news on when 1.25.8 is going to hit the archives?
[13:14] <marcoceppi> Mmike: should be in the next hour at most. are you pulling from the stable ppa or the Ubuntu archive?
[13:15] <Mmike> marcoceppi, depending on the customer :) I'm asking for trusty so both are applicable
[13:18] <marcoceppi> Mmike: it's available here https://launchpad.net/~juju/+archive/ubuntu/1.25
[13:19] <Mmike> oh
[13:19] <Mmike> haven't checked the ppa today
[13:19] <Mmike> thnx marcoceppi
[15:13] <Ankammarao> Hi
[15:15] <cory_fu> petevg: Added a reply to https://github.com/juju-solutions/matrix/issues/24 to clarify that I didn't mean that we should break pre-deployment.  I agree that that's a useful dev feature.
[15:16] <petevg> cory_fu: thx for the clarification. I'm +1 on the issue now.
[15:19] <Ankammarao> Hi All
[15:20] <Ankammarao> can anyone tell me how to create terms
[15:20] <Ankammarao> i am not aware of how to create terms
[15:22] <rick_h> mattyw: you able to help Ankammarao?
[15:25] <cory_fu> kjackal: Am I correct in remembering that you can build Bigtop charms with a layer option "bigtop_version: master" or similar to get a charm that deploys the cutting edge of Bigtop?
[15:26] <kjackal> this is somewhat accurate. But we decided we will not be supporting this option/path
[15:26] <kjackal> So if i were to guess i would say it will not work
[15:27] <kjackal> However, I saw someone asking for this in the bigtop list
[15:27] <kjackal> cory_fu: ^
[15:28] <cory_fu> kjackal: Yeah, that was Amir.  I think it's safe to let him know that it's available as an unsupported dev option
[15:28] <kjackal> sounds reasonable
[15:29] <cory_fu> kjackal: Am I right in remembering the value being "master"?
[15:29] <kjackal> I do not remember right now, let me check
[16:36] <kjackal> yes it is master
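A rough sketch of the unsupported dev path being discussed; only the "bigtop_version: master" value is confirmed above, and the "apache-bigtop-base" layer name here is an assumption:

    # set the layer option in the charm layer's layer.yaml, then rebuild
    cat >> layer.yaml <<'EOF'
    options:
      apache-bigtop-base:
        bigtop_version: master
    EOF
    charm build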
[16:39] <narindergupta> hi all, i started seeing this error during deployment: ssl.SSLError: [Errno 1] _ssl.c:510: error:1409442E:SSL routines:SSL3_READ_BYTES:tlsv1 alert protocol version
[16:39] <narindergupta> https://build.opnfv.org/ci/view/joid/job/joid-deploy-baremetal-daily-master/1171/console
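That alert usually means the client offered a TLS version the server no longer accepts. A hedged way to check what the failing endpoint will negotiate (HOST is a placeholder for the server in the log above):

    openssl s_client -connect HOST:443 -tls1   </dev/null 2>&1 | grep -E 'Protocol|alert'
    openssl s_client -connect HOST:443 -tls1_2 </dev/null 2>&1 | grep -E 'Protocol|alert'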
[17:08] <bdx> charmstore is down
[17:08] <bdx> to some extent
[17:09] <bdx> https://s11.postimg.org/d2frsznwz/Screen_Shot_2016_11_23_at_9_12_24_AM.png
[17:09] <urulama> web part is down atm, the service is up and juju deploy still works
[17:09] <urulama> proxy issues
[17:09] <bdx> urulama: ok, thanks
[17:12] <cory_fu> petevg: Hrm.  I didn't run in to any issues with a fresh run of rules.1.yaml on that branch of matrix with a fresh model.  :/  I guess matrix and unit logs will be necessary.
[17:12] <petevg> cory_fu: I just did a re-run, and didn't run into an issue, either. Except for realizing that I was on the wrong branch, which definitely isn't an issue w/ your code :-)
[17:12] <cory_fu> heh
[17:13] <petevg> I'm now running on the actual right branch. The end result should be a blank model, correct?
[17:14] <petevg> Yay! That's what happened.
[17:16] <petevg> cory_fu: was going to give bcsaller a chance to comment before merging, but the branch looks good to me, and the tests run great, too.
[17:31] <urulama> bdx: should be back up
[17:41] <petevg> cory_fu: just left a comment on your PR. There is a test that needs updating (didn't see it until after I blew up .tox and rebuilt)
[17:41] <bdx> urulama: nice, thx
[17:43] <mattyw> rick_h, sorry - I can now, but it seems like Ankammarao has gone?
[17:45] <mattyw> rick_h, for future reference, we do have docs for terms: https://jujucharms.com/docs/2.0/developer-terms
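A rough sketch of the workflow those docs describe; the exact subcommand syntax here is from memory and should be treated as an assumption:

    # publish a terms document (owner/term names are placeholders)
    charm push-term my-terms.txt myuser/my-term
    # reference it from the charm's metadata.yaml:
    #   terms: ["myuser/my-term"]
    # deployers then accept, either when prompted at deploy time or explicitly:
    juju agree myuser/my-term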
[17:48] <rick_h> mattyw: sorry, was otp and trying to punt
[17:49] <cory_fu> petevg: blarg
[17:50] <cory_fu> :)
[17:50] <mattyw> rick_h, no problem at all, I was afk and forgot to set my nick, I'll keep an eye out tomorrow
[17:50] <cory_fu> petevg: Updated
[17:50] <cory_fu> Thanks for the catch
[18:00] <petevg> cory_fu: after running glitch, I also notice that we can get into a state where we can't reset. I think that this is more of an underlying juju issue than an issue with your code, though. Will save off logs so that we can file a bug.
[18:01] <cory_fu> petevg: I think I might have seen that.  Is it due to agents being in a "failed" state?  I added some additional checks to the health task to spot that
[18:01] <petevg> cory_fu: yes.
[18:02] <petevg> cory_fu: I see an error in the logs, but I suspect that it's a lie, and that the actual problem is that the juju agent is borked: http://paste.ubuntu.com/23523296/
[18:02] <cory_fu> petevg: So, that should now set the "health.status.unhealthy" state, but I'm not sure how to properly handle that.
[18:03] <viswesn> I am getting the error "Failed to copy file" during bs_manager.booted_context(), and then the task goes on to destroy the controller - http://paste.ubuntu.com/23523247/
[18:04] <viswesn> how do I overcome this issue?
[18:04] <petevg> cory_fu: I can get it unstuck by running "juju destroy-machine --force <int>", where <int> is the machine name.
[18:05] <cory_fu> petevg: This touches on bcsaller's comment on the PR and we eventually might want to move to managing entire models.
[18:06] <petevg> cory_fu: in any case, I merged your PR. We can open a separate ticket to figure out what we want to do when the usual way of resetting the juju model doesn't work.
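For reference, the manual unstick step described above, sketched as a loop; the JSON field names assume juju 2.x status output, so verify before relying on it:

    # force-remove any machine whose agent is not reporting "started"
    for m in $(juju status --format=json \
               | jq -r '.machines | to_entries[]
                        | select(.value["juju-status"].current != "started") | .key'); do
        juju remove-machine --force "$m"
    done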
[18:56] <bdx> https://s22.postimg.org/oc82s31g1/Screen_Shot_2016_11_23_at_10_59_09_AM.png
[18:56] <bdx> SOS ^
[18:57] <bdx> ahh, finally logged in
[18:57] <bdx> looks like my controller is super loaded
[18:57] <bdx> https://s21.postimg.org/9mg4bhwef/Screen_Shot_2016_11_23_at_11_00_08_AM.png
[18:57] <bdx> jesus
[18:58] <bdx> marcoceppi, rick_h: I'm thinking I'm going to need a massive controller here
[19:29] <bdx> rick_h: per the rackspace xenial support thing ... I was able to negotiate a deal to give me xenial for stateless services, even though it's not supported yet :-)
[19:46] <marcoceppi> bdx: what version of Juju?
[19:54] <bdx> marcoceppi: 2.0.1-xenial-amd64
[19:54] <marcoceppi> bdx: do you have a lot of models/applications deployed?
[19:55] <bdx> marcoceppi: http://paste.ubuntu.com/23523955/
[19:56] <bdx> I may have had 5-10 more machines deployed earlier
[19:56] <marcoceppi> bdx: so a decent amount. This might be a bug in the controller, but you can do one of two things. The first is you can just deploy controllers on larger instance sizes. The second is to enable-ha on the controller, which will also help load balance requests
[19:56] <bdx> marcoceppi: ahh, controller ha facilitates load balancing too? sweet!
[19:57] <bdx> ahh, only of requests though
[19:57] <marcoceppi> bdx: I'm like 90% sure it does, basically all the agents get the address of all the controllers, and will over time spread those calls around
[19:57] <bdx> ok
[19:57] <rick_h> marcoceppi: bdx yes, reads, but not writes
[19:57] <marcoceppi> rick_h: \o/ thanks for the confirmation
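The two mitigations above, as commands; the cloud name, instance constraints, and controller count are illustrative assumptions:

    # bootstrap the controller on bigger hardware
    juju bootstrap aws big-controller --bootstrap-constraints "cores=8 mem=32G"
    # or grow an existing controller to three machines to spread read load
    juju enable-ha -n 3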
[19:58] <rick_h> bdx: will be interesting to see if 2.0.2 helps at all
[19:58] <rick_h> bdx: there's a lot of discussion and work going into watching that as folks push things with multi-models/etc.
[19:59]  * marcoceppi eod's for some R&R, cheers o/
[20:00] <rick_h> enjoy and have a good holiday marcoceppi
[20:01] <bdx> marcoceppi: thanks as always for your insight, happy holidays!
[20:02] <bdx> rick_h: what work is going into 2.0.2 that you are thinking will help lighten the load?
[20:03] <rick_h> bdx: so 2.0.2 is cut but in vetting atm because there might be a regression on maas with it
[20:03] <rick_h> bdx: but it does some work to help with cpu load.
[20:03] <rick_h> bdx: our internal folks are chomping to get at it
[20:03] <bdx> nice, good to know
[20:03] <rick_h> bdx: but there's some other work going on around identifying storms of activity and working on how to prevent the controller from getting swamped
[20:04] <rick_h> bdx: and for the cycle as a whole, there's a big goal to reduce the cpu load, especially at idle, because there's more event handling going on than you'd think
[20:05] <bdx> rick_h: how is that being approached?
[20:06] <rick_h> bdx: so for the idle work we've got long-running controllers with different sized models, and we're identifying which events are keeping the CPU going
[20:06] <rick_h> bdx: and so there's some work to move to pub/sub vs poll/etc
[20:06] <bdx> ahh nice
[20:06] <rick_h> bdx: so that things are only handled when required, vs the old-fashioned wholesale approach
[20:06] <rick_h> bdx: so there's some very low hanging fruit
[22:14] <smgoller> hey, so I'm trying to use juju 2 (currently 2.1 beta 1 I think, whatever's latest as of yesterday) with maas 2, and when i bootstrap, it keeps trying to talk to version 1 of MAAS' API, which has been deprecated. Any ideas on how to fix this?
[22:15] <smgoller> yeah, 2.1 beta 1 and maas is 2.1.1+bzr5544
[22:36] <thumper> smgoller: do you have the endpoint set correctly?
[22:37] <thumper> smgoller: because that should just work
[22:37] <thumper> I think we even have CI tests to ensure it does
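A hedged sketch of setting the MAAS endpoint explicitly when adding the cloud, which is what thumper is asking about; the address below is a placeholder:

    cat > maas-cloud.yaml <<'EOF'
    clouds:
      my-maas:
        type: maas
        auth-types: [oauth1]
        endpoint: http://192.168.1.2:5240/MAAS
    EOF
    juju add-cloud my-maas maas-cloud.yaml
    juju add-credential my-maas
    juju bootstrap my-maas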