[00:01] <axw> cholcombe: I am around now
[00:02] <cholcombe> axw: hey!  i was wondering if you had any additional pointers for loopback storage, lxd and juju.  i can share my profile.  i figured out how to edit it.  it still seems to be stuck in allocating
[00:03] <axw> cholcombe: if you share I'll see what I can work out. also the juju machine agent log for the machine that's trying to create the loop devices please
[00:04] <cholcombe> axw: http://paste.ubuntu.com/24093220/ here's the juju-default profile
[00:04] <cholcombe> axw: haha!  I spoke too soon.  It worked this time.
[00:05] <cholcombe> it took a long time but it eventually went through when i wasn't looking
[00:05] <axw> cholcombe: :p I thought you would need loop-control though TBH
[00:05] <cholcombe> loop-control?
[00:05] <stormmore> OK I am going offline for a little bit... only 108 lines of bash code to help me narrow down why another piece of software is messing up!
[00:05] <axw> cholcombe: /dev/loop-control. I thought losetup used it to allocate loop devices
[00:05] <cholcombe> oh right
[00:05] <cholcombe> yeah it seemed to work without it
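The loop-device mechanics under discussion can be sketched from the shell (the file path is illustrative; the attach step needs root plus the /dev/loop-control device axw mentions):

```shell
# Create a sparse backing file of the sort a loop storage provider allocates
fallocate -l 64M /tmp/loopdisk.img
echo "backing file created"
# Attaching it requires root and /dev/loop-control:
#   sudo losetup --find --show /tmp/loopdisk.img   # prints the allocated device, e.g. /dev/loop3
#   sudo losetup --detach /dev/loop3               # release it again
```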
[00:06] <cholcombe> gluster/2  brick/2  /dev/loop3  attached
[00:06] <cholcombe> that's the juju storage output
[00:06] <axw> cholcombe: cool. does it work? :)
[00:06] <cholcombe> gluster says no bricks found so something screwed up but i think it's on my side
[00:07] <cholcombe> axw: yeah i see it.  i tried to format /dev/loop3 with zfs and it messed up.  it's my issue
[00:07] <axw> cholcombe: there's a new experimental storage provider for LXD on develop that uses the new LXD storage API. probably a bit early to be usable yet though
[00:08] <axw> okey dokey
[00:08] <cholcombe> axw: thanks for the rubber duck debugging haha
[00:08] <axw> cholcombe: :)
[01:25] <hatch> kwmonroe :D
[03:01] <Budgie^Smore> lazyPower so I found out that for some reason, yet to be determined, MaaS doesn't like VirtualBox VMs created via Vagrant today
[08:52] <kjackal> Good morning Juju world!
[09:03] <disposable2> is there a way to select a specific maas node to become the juju cloud controller? i have several physical servers and 1 kvm one. i want the kvm one to be the controller.
[09:10] <disposable2> i'll answer my own question, somebody correct me if i'm wrong please - "juju bootstrap --to kvmnode.maas mymaas controller"
[09:34] <cnf> morning
[09:35] <cnf> so, still kind stuck with juju :/
[10:00] <zeestrat> disposable2: That works. We have multiple kvm's setup for controllers so we tag them in MAAS. Then to bootstrap we use --bootstrap-constraints tags=[your-tag-here]
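The two placement approaches from this exchange, side by side (the cloud and node names are from the conversation; the tag name is a placeholder for whatever you set in MAAS):

```shell
# Bootstrap to a specific MAAS node by hostname...
juju bootstrap --to kvmnode.maas mymaas controller

# ...or tag the candidate nodes in MAAS and constrain the bootstrap to that tag
juju bootstrap mymaas controller --bootstrap-constraints tags=controller-tag
```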
[11:24] <jamespage> any charmers around? need a review of https://code.launchpad.net/~james-page/charm-helpers/flush-after-grant/+merge/318750
[11:27] <disposable2> zeestrat: thanks. unfortunately, i get a rather unhelpful error message at the end: """ERROR failed to bootstrap model: bootstrap instance started but did not change to Deployed state: instance "rfarht" is started but not deployed""". i have no idea where to find more details about what went wrong.
[11:35] <jamespage> ditto on https://code.launchpad.net/~james-page/charm-helpers/percona-tuning-level/+merge/318755
[13:08] <cnf> so any way to use juju if you don't have direct ssh access to the ip range it's deploying to?
[13:30] <kjackal> cnf: can you ssh to the controller and from there to the units deployed?
[13:31] <cnf> no
[13:31] <cnf> well, i can through jumphosts
[13:31] <cnf> but not directly
[13:31] <cnf> kjackal: but i can't even get the controller going
[13:31] <cnf> because juju wants to ssh to it, and fails when it can't do that
[13:32] <magicaltrout> i think its a fair enough requirement to have access to the boxes it wants to control ;)
[13:32] <cnf> magicaltrout: not directly
[13:32] <cnf> it's just not a realistic scenario
[13:33] <cnf> the controller can access whatever it needs to directly
[13:33] <cnf> but i can't start a controller, because i have no direct ssh access to the machine it is deploying to
[13:33] <magicaltrout> can't get creative with an socks proxy?
[13:34] <cnf> well, juju doesn't support SOCKS
[13:34] <cnf> that's another issue i have open
[13:35] <cnf> and it won't use http / socks proxies for ssh connections
[13:35] <cnf> I do have proxied / jumphost access to everything
[13:35] <cnf> just no way to get juju to use them
[13:35] <magicaltrout> well no, but you could forward the port to your local machine and do a fake local deploy
[13:35] <cnf> https://bugs.launchpad.net/juju/+bug/1668727 as a ref
[13:35] <mup> Bug #1668727: juju does not support socks5 as a proxy <juju:Triaged> <https://launchpad.net/bugs/1668727>
[13:36] <cnf> magicaltrout: how? juju does a lookup of the controller ip, and decides what to connect to itself
[13:36] <cnf> and i'm not going to start spoofing ip's etc just to get juju working
[13:36] <magicaltrout> in that case i'm out of ideas :)
[13:37] <kjackal> cnf, you have a cloud where you can get VMs from, right?
[13:37] <cnf> kjackal: MAAS, so it's metal and not vm's, but yes
[13:38] <cnf> which i need a socks proxy to connect to (which i already have to trick juju into using through a local proxy)
[13:38] <kjackal> cnf: but you do not have ssh access to the nodes when they come up because the entire rack is behind a firewall blocking ssh. Do I get this correctly?
[13:38] <cnf> kjackal: before the firewall, the range isn't even routed
[13:39] <cnf> kjackal: i have firewalled access to the MAAS controller
[13:39] <cnf> (2 or 3 jumps between me and the maas controller)
[13:40] <kjackal> Would it be possible to have your juju client in a node (kvm/lxd/physical) inside the rack?
[13:40] <cnf> in theory, but that's a lot of extra work
[13:41] <cnf> and not a way i'd want to work in production
[13:41] <cnf> (which this thankfully isn't)
[13:46] <cnf> if i could tell it to use specific names, or a proxy defined in my ssh config or something, i'd be golden
[13:53] <cnf> maybe https://bugs.launchpad.net/juju/+bug/1669180 is relevant
[13:53] <mup> Bug #1669180: proxy-ssh/juju ssh --proxy is ignored <juju:Triaged> <https://launchpad.net/bugs/1669180>
[13:53] <cnf> though i guess that already assumes a controller
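What cnf is asking for, expressed as a plain OpenSSH client config that juju does not currently honor (the subnet and jumphost names here are hypothetical):

```
# ~/.ssh/config
Host 172.20.20.*
    ProxyJump jumphost1,jumphost2
    User ubuntu
```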
[13:55] <magicaltrout> conversely cnf you could clone the code from github and add to the ssh module to pick up proxies :)
[13:56] <cnf> i have 5 other projects that need patches to work, already
[13:56] <cnf> queue is kinda full
[14:36] <lazyPower> Budgie^Smore: Interesting. What seems to be the trouble there?
[14:38] <cnf> hmm, and sshuttle doesn't work, either
[14:46] <lazyPower> jamespage: what if FLUSH PRIVILEGES fails? it's in a try block that doesn't seem to care about any exceptions. Do we not care if it raises?
[14:47] <lazyPower> pardon me if thats a dumb question... i'm still working on my first cuppa
[14:47] <magicaltrout> I care :'(
[14:48] <jamespage> lazyPower: the try/finally is really to ensure the cursor gets cleaned up
[14:48] <jamespage> if the flush fails, we'll propagate the exception up to the calling process
[14:48] <lazyPower> ok so we're less interested if it fails and more interested we dont leave a dangling connection
[14:48] <jamespage> but doing the cleanup on the way
[14:48] <lazyPower> sgtm, just making sure that was the intent
[14:48] <jamespage> lazyPower: +1 yah
[14:48] <lazyPower> lookin @ the innodb bits now
[14:48]  * jamespage spent a lot of time fixing java code which did not do that....
[14:49] <lazyPower> jamespage: do you mind if i just approve? are you cool with doing the merge + push?
[14:49] <jamespage> lazyPower: works for me
[14:49] <jamespage> lazyPower: if you want to see the innodb tuning stuff in context - https://review.openstack.org/#/c/440333
[14:49] <lazyPower> http://pad.lv/318750 is approved
[14:51] <jamespage> lazyPower: muchas gracias
[14:51] <lazyPower> http://pad.lv/318755  is  approved as well. tests + passing test results
[14:51] <lazyPower> easy A on this one jamespage
[14:51] <jamespage> lazyPower: ta
[14:52] <lazyPower> thanks for being patient while i drug my bum out of bed :)
[14:59] <jamespage> lazyPower: thank you for the reviews - having a bug day targeted at percona-cluster
[15:22] <skayskay> I want to help document to my teammates how to restore a missing secgroup rule for ssh for juju, so I've removed it manually in my environment. it's taking longer than I expected for juju to lose contact with agents
[15:22] <skayskay> how long should it take?
[15:31] <kwmonroe> petevg: here's the libjuju issue, in case you didn't see the number on my screen:   https://github.com/juju/python-libjuju/issues/67
[15:31] <petevg> kwmonroe: thx
[15:40] <chrome0> Is there a way to change a machine's AZ post-deploy? I'm deploying via MAAS and had deployed with AZ=default. Changing it in MAAS doesn't seem to propagate to juju
[15:46] <rick_h> chrome0: no, the info goes from maas to the machine during install/setup but then it's static
[15:47] <rick_h> chrome0: so you'd have to either manually change the info on the host machines or to redeploy them
[15:47] <chrome0> rick_h : Ah, I'd like to avoid redeploy, but if there's a way to plant that info on the host machine that'd be awesome -- how can I do that?
[15:49] <rick_h> chrome0: oh heh, I assumed you "changed it in maas" meant that you were looking for some file/output on the systems that said it was AZ=default
[15:50] <chrome0> rick_h : I'd like to make use of ceph-{mon,osd} failure domain distr. feature, which uses juju AZ info
[15:50] <rick_h> chrome0: I see, there's no way other than monkeying with the jujudb to change that after deploy since the machine is in an AZ and they don't tend to move much (usually)
[15:51] <rick_h> chrome0: so no knob for that unfortunately
[15:51] <chrome0> Ack, was afraid that'd be the case -- thanks
[15:53] <lazyPower> arosales cory_fu https://hub.docker.com/r/jujusolutions/charmbox/builds/bbohvddrrcadljfmfthwmzg/ -- new charmbox build including the matrix component
[15:53] <chrome0> FTR, it's no biggie -- the ceph-* charms have an availability_zone setting which allows customization, will use that. Just making sure there's no more "standard" way
[15:55] <arosales> lazyPower: woot!
[15:55] <arosales> lazyPower: thanks for doing that. I'll give it a test run today :-)
[15:55] <lazyPower> lmk if anything goes funk in there and i'll give it some love
[15:56] <arosales> cory_fu: petevg ^ be great if you guys give it a poke as well.
[15:56] <petevg> arosales, lazyPower: will try to set aside some time to take a look today.
[15:56] <lazyPower> thanks petevg
[15:57] <petevg> np
[16:10] <cnf> kjackal, lazyPower  so i'm trying to use sshuttle to bypass my restrictions
[16:10] <cnf> both http and ssh
[16:10] <cnf> http works, ssh isn't playing ball yet
[16:10] <lazyPower> cnf: err i'm not sure i have context here
[16:11] <cnf> lazyPower: yesterday ( and the day before), I was failing to connect to my MAAS, because juju doesn't understand socks5
[16:11] <cnf> lazyPower: and today, i was failing because juju can't connect directly to the hosts
[16:12] <lazyPower> cnf: this sounds like you're stitching together a ball of work-arounds because of networks segments?
[16:12] <cnf> so i'm trying to bypass both using sshuttle
[16:12] <cnf> lazyPower: yep
[16:13] <lazyPower> cnf: i would probably advise to setup an OpenVPN service in your lab, that way you can join the network space directly instead of relying on sshuttle, which historically has been really flakey for me.
[16:13] <cnf> bah, openvpn :(
[16:13] <lazyPower> well you're seeing the result of sshuttle here where it half works
[16:13] <cnf> lazyPower: and i'd need openvpn to openvpn
[16:13] <cnf> because i'm 2 or 3 hops away form it
[16:13] <cnf> from*
[16:14] <lazyPower> cnf: without having a clear view of your network setup, it's hard to recommend a proper fix here. Networking can be difficult when you have no context of the topology of the dc
[16:14] <cnf> i'd like juju to support socks5, and ssh config/wrappers
[16:14] <cnf> would solve everything
[16:14] <lazyPower> cnf: have you filed bugs for this?
[16:14] <cnf> for the socks one, yes
[16:14] <cnf> https://bugs.launchpad.net/juju/+bug/1668727
[16:14] <mup> Bug #1668727: juju does not support socks5 as a proxy <juju:Triaged> <https://launchpad.net/bugs/1668727>
[16:14] <lazyPower> fantastic, and its been looked at / triaged.
[16:15] <lazyPower> step 1 is good.
[16:15] <cnf> for ssh, i'm not quite sure what i want to ask for
[16:15] <cnf> i _think_ juju just wraps my system ssh?
[16:15] <cnf> not sure
[16:17] <lazyPower> cnf: if you're referring to juju ssh - it should proxy through the controller and reach the node. you can set this on a model-by-model basis or on the entire controller by setting it as a model default
[16:17] <lazyPower> proxy-ssh                   default  false
[16:17] <cnf> lazyPower: i'm trying to install the controller :P
[16:17] <lazyPower> plot thickens
[16:18] <cnf> hmm, i _think_ i'm mostly there?
[16:18] <cnf> juju is just using the wrong ssh key
[16:18] <lazyPower> cnf: yeah juju ssh is using the system ssh agent. afaik we aren't shipping anything custom there. it just hands off some flags to the ssh client to do the connections.
[16:18] <cnf> i think
[16:18] <lazyPower> cnf: it's going to use whatever is in $HOME/.local/share/juju/ssh
[16:19] <cnf> lazyPower: right, and i use assh to wrap my ssh config
[16:19] <lazyPower> juju does generate a client keyset for the controller that lives there, vs using your user credentials.
[16:19] <cnf> so how does it put the ssh key on the controller during bootstrap?
[16:19] <cnf> the pub one, that is?
[16:20] <lazyPower> I'm not as familiar with the intricacies of the bootstrap process. I presume it's just an scp or cloud-init operation, but i would urge you to ping the juju mailing list about that so a core dev can chime in
[16:25] <jrwren> afaik its cloud-init, but controller may do something different than other machine units.
[16:26] <cnf> ok, so atm networkwise i can connect
[16:27] <cnf> but it is rejecting the auth, from what i can tell
[16:43] <cnf> hmm
[16:52] <kwmonroe> tvansteenburgh: minor comment on your release-term hint in https://github.com/juju-solutions/review-queue/pull/77.  otherwise, lgtm.  want me to click the button?
[16:52] <tvansteenburgh> kwmonroe: looking...
[16:53] <tvansteenburgh> kwmonroe: yeah, click it
[16:53] <tvansteenburgh> kwmonroe: thanks
[16:53] <kwmonroe> clicked.  thank you tvansteenburgh!
[16:53] <cnf> hmm, it stays stuck on "Attempting to connect to 172.20.20.16:22"
[16:53] <cnf> but i can ssh manually
[17:08] <jrwren> cory_fu: please take another look at those two haproxy charm MR. I think I've cleaned them up and corrected them. Thank you.
[17:09] <cnf> hmm, weird
[17:16] <cnf> hmm, all manual ssh connection attempts work
[17:16] <cnf> but juju isn't making it
[17:17] <cnf> anyway, time to go home
[17:17] <cnf> i'll try again tomorrow
[17:28] <cholcombe> with the latest juju-2.1 i'm still unable to bootstrap a localhost lxd cloud
[17:28] <admcleod> cholcombe: ?!
[17:28] <cholcombe> admcleod: http://paste.ubuntu.com/24097100/
[17:29] <cholcombe> i tried an apt-get remove juju juju-2.0 --purge, deleted all local/share files and reinstalled.  didn't change anything
[17:30] <bdx> https://aws.amazon.com/message/41926/
[17:30] <admcleod> cholcombe: juju show-cloud localhost ?
[17:31] <cholcombe> admcleod: http://paste.ubuntu.com/24097115/
[17:31] <bdx> ansible playbooks and fat fingered techs .... a lesson learned
[17:32] <admcleod> cholcombe: remove-cloud and add-cloud?
[17:32] <admcleod> bdx: hah
[17:32] <cholcombe> admcleod: lets try
[17:32] <cholcombe> admcleod: no personal cloud called "localhost" exists lol
[17:32] <cholcombe> 2.1 really hates me
[17:33] <admcleod> add?
[17:33] <cholcombe> it won't let me add it either
[17:33] <cholcombe> it's not a cloud type
[17:33] <cholcombe> is there something i can purge to get this back to a clean slate?
[17:36] <admcleod> you could try ..
[17:37] <admcleod> cholcombe: put this in lxd.yaml : https://pastebin.canonical.com/181327/
[17:37] <admcleod> cholcombe: then 'juju bootstrap lxd lxd.yaml'
[17:37] <admcleod> er sorry
[17:37] <admcleod> cholcombe: juju add-cloud lxd lxd.yaml
[17:37] <admcleod> then bootstrap
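The paste itself is on a private pastebin; a minimal clouds file of the shape `juju add-cloud` accepts would look something like this (an illustrative guess at the shape, not the actual paste):

```yaml
# lxd.yaml
clouds:
  lxd:
    type: lxd
```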
[17:38] <cholcombe> admcleod: ok lets give it a spin
[17:38] <cholcombe> admcleod: seems to be going with the juju bootstrap lxd lxd.yaml
[17:39] <cholcombe> yeah now i see it under bootstrap.  cool
[17:39] <admcleod> cholcombe: cool - no idea what you needed to clean up otherwise...
[17:40] <cholcombe> i'm not sure.  i rm'd everything i could find but something is remaining
[17:40] <admcleod> cholcombe: mongo?
[17:41] <cholcombe> admcleod: yeah i prob need to poke mongo
[17:41] <cholcombe> i actually don't see a running mongo process
[17:41] <lazyPower> cholcombe: i've been using the snapped stack since pre 2.1-GA and its been rocking and auto-updating for me.
[17:41] <Budgie^Smore> lazyPower something to do with the way the hardware presents itself during enlistment from what I can tell
[17:42] <cholcombe> lazyPower: nice
[17:42] <lazyPower> Budgie^Smore: thats odd, why would it matter if its a manually created image vs a scripted image?
[17:42] <Budgie^Smore> lazyPower the VM PXE Boot's but fails to enlist
[17:42]  * lazyPower throws things at vagrant
[17:43] <Budgie^Smore> lazyPower that is what I am trying to determine, nothing in the VM config suggests where to look next
[17:43] <lazyPower> Budgie^Smore: perhaps instead of vagrant use vboxmanage to create the vms?
[17:43] <lazyPower> give it the pepsi challenge
[17:44] <lazyPower> headed out for lunch, bbiaf
[17:44] <Budgie^Smore> lazyPower I just know that if I hand build a VM using the VB cli it works, about the only thing I haven't tested is using vboxheadless vs vboxmanage to start the VMs. that is what I am planning on doing for the other VMs than the MaaS system
[17:45] <cholcombe> admcleod: i think lxd is wedged
[17:51] <cholcombe> purge and reinstall seems to have fixed it
[17:54] <lazyPower> Budgie^Smore: it might be a difference in the image?
[17:55] <lazyPower> is vagrant using the exact same image that your hand rolled vm is?
[17:55] <Budgie^Smore> by image what do you mean? the software of the virtual hdd?
[17:56] <Budgie^Smore> or are we talking the whole 9 yards - virtual machine config, etc.?
[17:56] <lazyPower> yeah, your base vm image. vagrant uses boxfiles right?
[17:56] <lazyPower> is that the same as what you're launching by hand?
[17:57] <Budgie^Smore> yeah I ruled that out when I manually pxe booted the VM as the maas-enlist cli just wasn't working and not very helpful in figuring out why
[17:58] <Budgie^Smore> lazyPower that rules out the software image being the prob
[17:59] <Budgie^Smore> lazyPower as far as MaaS could see, it was a machine with "old" software on the drive
[18:00] <Budgie^Smore> lazyPower but you are right, not hand building the nodes using vboxmanage is just me being lazy
[18:07] <Budgie^Smore> lazyPower when / if I get time I will troubleshoot further as I still have questions around vboxheadless or the fact that the vagrant vm uses vmdk instead of vdi format but I have a solution to that issue which should be just as clean.
[18:08] <zeestrat> cholcombe: see https://bugs.launchpad.net/juju/+bug/1665056 for localhost issue
[18:08] <mup> Bug #1665056: interactive boostrap nor 'juju regions'  recognise localhost as a valid cloud <juju:Fix Committed by anastasia-macmood> <https://launchpad.net/bugs/1665056>
[18:09] <cholcombe> zeestrat: looks like it's going to land in 2.2
[18:29] <cory_fu> lazyPower: I gave https://review.jujucharms.com/reviews/96 my +1 but I know you've been helping with the design of that as well.  Can you give it a quick look?
[18:54] <catbus1> Hi, I tried to launch conjure-up, but it just went back to the command prompt. It may have something to do with installing conjure-up via apt before, but I have removed it and installed it via snap.
[19:05] <catbus1> what else do I need to clean before I can launch it successfully?
[19:07] <cory_fu> catbus1: I haven't heard of it doing that before.  To rule out apt, what do you get when you do `which conjure-up`?
[19:07] <catbus1> cory_fu: nothing. it returns nothing.
[19:08] <catbus1> wait
[19:08] <cory_fu> catbus1: Try doing `hash -r`
[19:08] <catbus1> I just removed it again via snap. give me a sec.
[19:08] <cory_fu> Ah
[19:10] <catbus1> cory_fu: /snap/bin/conjure-up
[19:11] <cory_fu> catbus1: And what's the exact command you're running that just drops back to the CLI?
[19:11] <catbus1> 'conjure-up'
[19:11] <cory_fu> catbus1: That's very strange.  You should at least get some sort of message or the UI for choosing a spell.
[19:12] <cory_fu> catbus1: Maybe stokachu has some insight
[19:12] <cory_fu> catbus1: Also, can you check if ~/.cache/conjure-up/conjure-up.log has any info?
[19:13] <catbus1> cory_fu: the last info from conjure-up.log is dated 2017-02-07 15:26:10.
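The diagnostic steps from this exchange, collected into one sketch (the log path is from the conversation; the fallback messages are illustrative):

```shell
# Clear bash's cached command lookups after swapping the apt package for the snap
hash -r && echo "command cache cleared"
# Confirm which binary would run; the snap install lives at /snap/bin/conjure-up
command -v conjure-up || echo "conjure-up not on PATH"
# Look for a fresh traceback in the log
tail -n 20 ~/.cache/conjure-up/conjure-up.log 2>/dev/null || echo "no conjure-up log found"
```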
[19:14] <lazyPower> cory_fu: looking now
[19:20] <lazyPower> cory_fu: scanned it and it looks good at first glance. Disclaimer: I have not run deployment tests on this charm
[19:21] <lazyPower> however i see its using all the latest versions of the componentry and looks well thought out. The last time i looked at this was a much much earlier version. I left an abstain vote comment on the review.
[19:22] <cory_fu> lazyPower: Fair enough.  I was able to run the tests and they worked with the caveat that I put in my comment (https://review.jujucharms.com/reviews/96?revision=244) but that was apparently based on the tests in the kubernetes charm?
[19:23] <lazyPower> cory_fu: yeah that sounds like the correct recommendation
[19:23] <lazyPower> that or default to it, and allow it to be overridden by env config or something like that for cases of local testing
[19:24] <lazyPower> SimonKLB: i'm pretty sure the elastisys layer source is available as foss though right?
[19:24] <cory_fu> lazyPower: Yeah, something like that.  I think updating the test would be ideal, but I'm not sure if I think it's worth holding up promulgation.  What do you think?
[19:24] <lazyPower> s/elastisys layer/elastisys autoscaler layer/
[19:24] <lazyPower> So long as you were able to validate, i think its fine to file a bug against the layer/charm and allow a pass for now. Simon's pretty responsive on feedback
[19:25] <lazyPower> i'm unsure of his TZ though, he may be out for the remainder of the day
[19:28] <lazyPower> cory_fu: i can spend some time tomorrow running the gamut of tests on this if you want me to be a voting contributor. i see it's only got +1 and is blocked on having a second +1
[19:29] <lazyPower> the only reason i abstained is that i had not run the functional tests on the charm
[19:29] <cory_fu> lazyPower: Yeah, that would be great.
[19:29] <lazyPower> ok i'll pencil this in tomorrow and get some time on it.
[19:29] <cory_fu> Thanks
[19:29] <lazyPower> np np
[19:30] <SimonKLB> lazyPower: still here!
[19:30] <lazyPower> SimonKLB: *snap* darn ;)
[19:30] <lazyPower> hehe, hey is the autoscaler layer source publically available?
[19:31] <SimonKLB> not right now, but i could push it to github if you guys want
[19:32] <lazyPower> SimonKLB: one of the requirements for promulgation is that the source be licensed under FOSS, which in turn means the source should be easily accessible
[19:32] <lazyPower> i think we were looking for the layer source more for bug tracking reasons in this context, but the store requirement plays nicely into that ask :)
[19:33] <SimonKLB> right, should i push it straight away or do you want me to put it off until tomorrow? ;D
[19:34] <lazyPower> SimonKLB: whatever works for you. I wont +1 it until i've gotten the link
[19:35] <lazyPower> i'll piggyback off cory_fu's comment and file a bug for the test case so it doesn't get lost in the shuffle
[19:36] <SimonKLB> lazyPower: alright! how are you handling it btw? i think i grabbed the amulet helper functions from you or mbruzek ?
[19:36] <lazyPower> SimonKLB: i'm not certain i understand. What would I be handling?
[19:37] <SimonKLB> how the resource is downloaded in the amulet test
[19:37] <SimonKLB> (when the charm is local)
[19:38] <lazyPower> ah, so i would do this per cory's approach.  1) specify the resource path as the default fallback behavior, and 2) allow the user to override that payload using something like os.getenv('SCALER_IMAGE'), which would point to either a filepath on disk or a remote. The idea is that it's not a pre-req to have the image locally before attempting the test, as any ci subsystem is not likely to have that image, ever.
[19:38] <lazyPower> so you have to account for operator overrides
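lazyPower's fallback-plus-override pattern, sketched in shell (the layer itself would do this in Python with os.getenv; SCALER_IMAGE is the variable named above, and the default path is hypothetical):

```shell
# Use the bundled default resource unless the operator points somewhere else
RESOURCE="${SCALER_IMAGE:-./resources/scaler-image.tar.gz}"
echo "using resource: $RESOURCE"
```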
[19:40] <cory_fu> Really, this should be handled at the Amulet level, but it would be more difficult to do it there in a clean but generic way, so working around it in the charm is easier.
[19:41] <SimonKLB> lazyPower: okok! i know you did something similar in the kubernetes bundle test to what i'm doing in the test atm
[19:41] <cory_fu> But since it's really an Amulet issue, I wouldn't be against pushing back against adding more work-arounds to the charm.  Hence, I don't consider it a blocker
[19:41] <SimonKLB> the function still seems to be in there but is unused https://github.com/juju-solutions/bundle-canonical-kubernetes/blob/master/tests/amulet_utils.py#L5
[19:42] <lazyPower> cory_fu: i think this is yet another case for using libjuju native testing vs amulet.
[19:44] <cory_fu> lazyPower: Well, this would still be an issue, honestly.  The problem is if you want to test a local copy of the charm it still needs to honor the resources in the store, but the testing framework needs to be smart enough to remember the link and to manage those resources
[19:44] <lazyPower> ah, good point
[19:44] <lazyPower> cory_fu: yeah matt and i were grumpy about this when we were piloting it in k8s
[19:44] <lazyPower> that much i do remember
[19:44] <SimonKLB> lazyPower: how is it handled now a days?
[19:45] <SimonKLB> are you working around it somehow that you dont need to use resources in the tests anymore?
[19:45] <lazyPower> SimonKLB: scripted work-arounds doing a juju-attach in jenkins.
[19:46] <lazyPower> SimonKLB: we're kind of divergent from existing tooling and have a handful of bash to lend a hand, we opensourced our build routines over in the juju-solutions namespace... 1 sec and i'll grab a link
[19:46] <lazyPower> https://github.com/juju-solutions/kubernetes-jenkins
[19:46] <lazyPower> SimonKLB: https://github.com/juju-solutions/kubernetes-jenkins/blob/master/jenkins-deploy-local-charms.sh
[19:46] <lazyPower> as well as https://github.com/juju-solutions/kubernetes-jenkins/blob/master/juju-attach-resources.sh
[19:50] <SimonKLB> lazyPower: ah i actually do our charm CI tests kind of similar, packaging the resources as a part of the test - that way i have access to it locally and also can test it running the latest version of the docker images
[19:50] <SimonKLB> but ill add the default path to the charmstore archived version of the resource so that it can be tested out of the box like you guys said!
[19:50] <lazyPower> sounds like there's overlap there. I know that there's work being done to cwr-ci to support resources as well
[19:51] <lazyPower> SimonKLB:  there might be new tooling in the pipeline to help ease this. Have you been introduced to cwr-ci yet?
[19:51] <lazyPower> kwmonroe: need your link to your prose on the subject sir ^
[19:51] <SimonKLB> lazyPower: i've seen it being mentioned but i haven't deployed it and tried it out myself
[19:52] <lazyPower> SimonKLB: we are alike in that regard. matt has been the primary driver on our side for that effort. I'll be looking into it in short order for the etcd refactors
[19:52] <lazyPower> plus we just landed matrix in charmbox this morning, which is a component of that stack
[19:52] <lazyPower> so its starting to surface in places  :)  almost no excuse now
[19:53] <SimonKLB> hehe :)
[19:53] <SimonKLB> i would really like to try out matrix, it sounds like a really cool tool
[19:53] <lazyPower> then cory_fu can yell at me for submitting stuff even though its broken in cwr-ci :P :P
[19:53] <lazyPower> <3
[19:53] <cory_fu> :)
[19:54] <lazyPower> because thats totally something i would do
[19:54] <lazyPower> "critical path, fix tests later"
[20:30] <kwmonroe> cory_fu: what do you think about me revving a cwrbox lxd image to pull in bt-11?
[20:30] <cory_fu> kwmonroe: +1.  You should be in the approved signers list
[20:30] <kwmonroe> cool
[20:30] <cory_fu> kwmonroe: We should probably add petevg and kjackal to that list
[20:30] <cory_fu> And ses
[21:25] <SimonKLB> lazyPower, cory_fu: https://github.com/elastisys/layer-charmscaler
[21:27] <SimonKLB> also, i pushed a new version that falls back on the latest revision of the resource if it's not already on the controller or specified using an env var
[21:28] <lazyPower> nice SimonKLB, thanks for the quick turnaround
[21:31] <SimonKLB> lazyPower: if you have time it would be really sweet if you tried it out a bit :) would be fun to hear a user-story from someone that is not already familiar with our software!
[21:31] <lazyPower> SimonKLB: tomorrow :)
[21:31] <SimonKLB> awesome :)
[21:33] <SimonKLB> lazyPower: do you know how to set the Bugs link? i've tried `charm set cs:~elastisys/charmscaler bugs-url=https://github.com/elastisys/layer-charmscaler` but it doesn't seem to have any effect
[21:34] <lazyPower> SimonKLB: its in there. charm show cs:~elastisys/charmscaler bugs-url
[21:34] <rick_h> SimonKLB: looks like it's set: charm show cs:~elastisys/charmscaler bugs-url
[21:34] <rick_h> SimonKLB: but that the apache web cache and such might take it a bit to update on the webui
[21:34] <SimonKLB> lazyPower: ah, so that is different from the Bugs link on jujucharms.com?
[21:34] <lazyPower> the web front end may be a little behind if thats what you were looking for. the page fragment cache has to invalidate before it updates.
[21:34] <rick_h> SimonKLB: no, but it'll influence that link but it's cached
[21:35] <SimonKLB> right! thanks guys :)
[21:35] <lazyPower> rick_h: we have truly become one
[21:35] <rick_h> normally charms don't change much w/o a new revision so very cache-able...unless you do something like change the bugs-url
[21:35] <rick_h> lazyPower: I feel like I've grown super powers
[21:35] <lazyPower> O_O
[21:35] <lazyPower> this implies you think i'm super :D
[21:35] <lazyPower> <3
[21:36] <rick_h> you see what I did there :P very sneaky
[21:51] <cory_fu> tvansteenburgh, petevg: Thanks for the comments on https://github.com/juju/python-libjuju/pull/63  I've added integration tests to the PR
[22:57] <catbus1> I gave up fixing conjure-up on the maas node. I tried launching conjure-up on a medium-sized openstack 16.04 instance, and I lost the connection to the instance.