[08:54] <kjackal> Hello Juju world!
[10:30] <lazyPower> o/
[11:29] <lazyPower> Odd_Bloke - ping
[11:31] <Odd_Bloke> lazyPower: Pong.
[11:31] <lazyPower> hey there, i was taking a look through the revq and landed on https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/error-message-fix/+merge/292622
[11:31] <lazyPower> i noticed there was no feedback, and i'm curious if there is a way we can scope those tests to sync a smaller subset of the archive, so it would be feasible to gate on automated testing
[11:32] <lazyPower> as it stands right now, i'm sad that we're marking this merge as needs fixing, because the work is great, it's the automated test result that's giving it grief :(
[11:33] <Odd_Bloke> lazyPower: Yeah, it does suck. :(
[11:34] <lazyPower> Odd_Bloke - i'm of half a mind to open an issue against the charm re: the timeout in automated testing, and approve this mp, as it has zero to do with why it's failing in CI
[11:34] <Odd_Bloke> Yeah, that's probably worthwhile.
[11:34] <lazyPower> i dont think this ever passed in CI because i've been pretty lenient landing changes against it while we sorted our charm testing
[11:34] <Odd_Bloke> It's something we want to fix, but it's not high enough priority for us to actually get to it any time soon. :(
[11:35] <lazyPower> well, thats a problem :|
[11:35] <lazyPower> any chance we could get that bumped in terms of priority? maybe "soonish" rather than "not anytime soon"
[11:36] <Odd_Bloke> Honestly, probably not; the charm is only a small part of what our team works on, and everything else we're working on is to fulfil contractual obligations.
[11:37] <lazyPower> Odd_Bloke - do yinz run this in any kind of automated capacity on your team? mojo specs or anything like that?
[11:37] <lazyPower> if you've got passing results for me, i'll give it as pass
[11:37] <Odd_Bloke> lazyPower: IS deploy it using mojo etc., but I don't think we have anything running from that branch.
[11:38] <lazyPower> ok
[11:43] <lazyPower> Odd_Bloke - ok, i think i have enough information here to move forward. thanks for the feedback. I'll take an item to follow up with the ~charmers about scenarios like this one as well. This is known to me, having touched the charm extensively in the past, and it's not fair at all to gate on. It sounds like this MP will wither on the vine due to time constraints... we need to come up with something better, both from the tests standpoint and from
[11:43] <lazyPower>  our gating standpoint.
[11:45] <Odd_Bloke> lazyPower: Ack, thanks for spending time on it. :)
[11:45] <lazyPower> np. keep submitting quality work :)
[11:49] <lazyPower> Odd_Bloke - ah one additional line item here, once approved, we'll need you or a member of your team to sort out proper publishing of the charm (charm push). We can sort access with a launchpad team so anyone there can push, but moving forward as LP ingestion is disabled, we're left in limbo for publishing.
[11:53] <rick_h_> lazyPower: huh? how are we in limbo for charm publishing?
[11:53] <lazyPower> rick_h_ - once i approve this and push to the targeted lp branch, it isn't in the store. Its not anywhere that i can reasonably charm push.
[11:53] <lazyPower> i dont want to own it, its not my charm
[11:54] <Odd_Bloke> lazyPower: I believe rcj was working with... someone to handle that.
[12:00] <lazyPower> rick_h_ - the short story here is: there's about 30 some odd merges left in the queue, that are following the old practice of pushing into lp, and targeting the ~charmer lp branches. If the author doesn't charm push to either their namespace, or a team namespace, we're left creating teams in the store which will further continue to promote non maintainership of charms. I believe Marco/Cory brought this up in Holland
[12:01] <lazyPower> so i'm trying to circle back and help those 30 some odd merges find a proper home, and once the new queue launches (any day now i hear), this will start to just go away. the docs already reference the new publish model, so its a time-limited problem that should solve itself in short order.
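For reference, the "charm push" publish flow lazyPower is describing looks roughly like this. The ~example-team namespace and the revision number are illustrative, and the exact subcommand for releasing to a channel varied across charm-tools versions in this era (charm publish was later renamed charm release):

```shell
# Sketch of publishing a charm to the store under a team namespace
# (namespace and revision are hypothetical).

# Push the local charm directory into the store:
charm push . cs:~example-team/trusty/ubuntu-repository-cache

# Make the pushed charm readable by everyone:
charm grant cs:~example-team/trusty/ubuntu-repository-cache everyone

# Release a specific pushed revision to a channel so deploys can see it:
charm publish cs:~example-team/trusty/ubuntu-repository-cache-0 --channel stable
```

Any member of the owning Launchpad team can then push new revisions, which is the access model discussed above.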
[12:12] <shruthima> Hello Team, I am trying to upgrade a charm with fixpacks which is deployed from the charm store. But it is showing this error: root@ubuntu:~/charms# juju upgrade-charm ibm-im --resource ibm_im_fixpack=/root/repo/agent.installer.linux.gtk.x86_64_1.8.4001.20160217_1716.zip ERROR already running latest charm "cs:~ibmcharmers/trusty/ibm-im-5" Could anyone please suggest on the same?
[12:13] <lazyPower> shruthima - correct me if I'm wrong, but all you're wanting to do is update the resource correct?
[12:14] <shruthima> no, i want to update the charm installation with fixpacks
[12:15] <shruthima> i am providing the actual package from my local machine
[12:15] <lazyPower> shruthima - see juju attach -h
[12:16] <lazyPower> you don't need to upgrade the charm if you're only updating the resource. if you have an updated charm locally, you will need to additionally specify the --path to use to upgrade the charm code itself
[12:16] <lazyPower> the default behavior if you've deployed from the charm store is to poll the charm store when you issue juju upgrade-charm; as there haven't been any additional revisions published to that channel, there's no new revision for the charm to upgrade to.
[12:16] <shruthima> ya juju attach is working fine
[12:17] <shruthima> oh k we are with the wrong assumption
[12:18] <lazyPower> shruthima - common misconception :) happy i could lend a hand though
[12:20] <shruthima> ya, locally juju upgrade-charm is working fine with the --path option. thank you
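To recap the distinction lazyPower draws above: attaching a resource and upgrading the charm are separate operations. The application and resource names below are the ones from shruthima's paste; the new fixpack filename is made up:

```shell
# Update only the resource on a deployed application (no charm upgrade needed):
juju attach ibm-im ibm_im_fixpack=/root/repo/agent.installer.linux.gtk.x86_64_1.8.4001.20160217_1716.zip

# Upgrade the charm code itself from a local copy, optionally shipping a
# new resource at the same time (./ibm-im and new-fixpack.zip are hypothetical):
juju upgrade-charm ibm-im --path ./ibm-im \
    --resource ibm_im_fixpack=/root/repo/new-fixpack.zip
```

upgrade-charm without --path polls the store, which is why it reported "already running latest charm" when no new revision had been published.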
[12:44] <lazyPower> stub - are you still around?
[12:45] <stub> lazyPower: yes
[12:45] <lazyPower> stub - you broke ubuntu-repository-cache with revision 212 :(
[12:45] <lazyPower> by moving to python3, and not moving series to xenial, the charm will never work
[12:46] <stub> really?
[12:47] <lazyPower> yeah, the install hook fails 100% of the time. if you branch the current tip of what's in https://code.launchpad.net/~charmers/charms/trusty/ubuntu-repository-cache/trunk and run a deploy, it's beyond broke, due to moving to python3
[12:47] <stub> I get python3 on my trustys.
[12:47] <lazyPower> well, i've verified it's broken by default on aws and on the lxd provider
[12:48] <lazyPower> looks like its missing python3-six, python3-yaml, and jinja
[12:48] <stub> So how do all the other charms using py3 work?
[12:48] <stub> Oh... so it needs a charmhelpers sync
[12:48] <lazyPower> more than likely
[12:48] <lazyPower> i'm trying to figure out a path to fix the CI timing out, as thats holding up a whole slew of merges for this charm
[12:49] <lazyPower> stumbled into that in the process
[12:49] <stub> I have an uncommitted merge here adding multiseries support... I might have messed up landing.
[12:50] <lazyPower> https://bugs.launchpad.net/charms/+source/ubuntu-repository-cache/+bug/1609594  was filed by jose, and i followed up with the findings
[12:50] <mup> Bug #1609594: Charm does not install python3 modules yaml and jinja2 <ubuntu-repository-cache (Juju Charms Collection):New> <ubuntu-repository-cache (Charms Xenial):New> <https://launchpad.net/bugs/1609594>
[12:50] <stub>   Robert C Jennings 2016-07-05 Add multiseries support
[12:50] <lazyPower> to be fair, i looked for the r=  in the merge message
[12:50] <lazyPower> so, i dont mean to point a finger
[12:50] <lazyPower> <3
[12:51] <stub> yeah, I'm just trying to track down the mps.
[12:51] <stub> The fix is just resync charmhelpers, assuming no incompatibilities show up. I've seen it before.
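The resync stub describes is conventionally done with the sync script from the charm-helpers tree. A rough sketch; the script path follows the upstream layout, but the sync config filename inside the charm (charm-helpers-sync.yaml here) varies per charm:

```shell
# Grab the charm-helpers source tree (path inside it is the upstream layout):
bzr branch lp:charm-helpers /tmp/charm-helpers

# From the charm's root, re-sync the modules listed in the charm's sync
# config into the charm's vendored copy of charmhelpers:
python /tmp/charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
    -c charm-helpers-sync.yaml
```

This pulls in the current charmhelpers modules (which would bring the py3 dependency handling for yaml/jinja2 mentioned in the bug), after which the charm tree is committed and re-published as usual.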
[12:52] <lazyPower> let me finish this test run bumping the tests from series=trusty to series=xenial and i'll back out those changes and give a re-sync a go
[12:53] <stub> I committed it, but didn't publish it.
[12:53] <stub> Or I can prepare a MP with new charmhelpers.
[12:53] <lazyPower> I'll defer to your judgement, i trust ya
[12:53] <stub> https://jujucharms.com/ubuntu-repository-cache/20 has it listed as both trusty and xenial
[12:54] <stub> I think that branch might be out of date now... but i can't see the repo in the store.
[12:56] <lazyPower> yeah it looks like the charmstore metadata is missing repo and bugs-url
[12:56] <lazyPower> s/repo/homepage
[12:57] <stub> https://code.launchpad.net/~cloudware/cloudware/ubuntu-repository-cache seems to be the branch
[12:57] <lazyPower> stub - ok, so we have a few merges in the queue targeting the wrong branch :|
[12:57] <lazyPower> and you're 2 revs ahead of whats in lp:~charmers
[12:58] <stub> lazyPower: Right. But it is still in bzr, so the resubmit button should do the right thing,.
[12:58] <lazyPower> s/you're/that branch is/
[12:58] <lazyPower> hmm ok #TIL theres a resubmit button
[12:58] <lazyPower> let me see if i can help that along...
[12:59] <stub> And ~cloudware might want to review it too since they are maintaining things
[13:00] <lazyPower> lp:~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/error-message-fix is not mergeable into lp:~cloudware/cloudware/ubuntu-repository-cache
[13:00] <lazyPower> welp, that didnt help anything
[13:00] <stub> I've been updating the branch descriptions for moved branches and setting the branch status to Merged.
[13:02] <stub> The branches might need to be pushed to the new namespace or something equally annoying :-(
[13:02] <stub> lazyPower: ok if I edit the branch description?
[13:02] <stub> Of the ~charmers branch?
[13:03] <lazyPower> I'm fine with that
[13:03] <lazyPower> the one thing i see as potentially problematic is that the cloudware branch is private, which means no bugs
[13:04] <lazyPower> stub thanks for lending a hand here, i see i've stumbled into a rabbit hole :)
[13:04] <stub> lazyPower: punt it to ~cloudware? ;)
[13:05] <stub> Dan and Colin should both be happy enough to shuffle branches and resubmit themselves.
[13:05] <lazyPower> works for me, i'll follow up on these MPs and wrap this up. I've got over an hour invested in it already
[13:07] <stub> The description on the old branch says it is moved anyway. In small print, but it is there. https://code.launchpad.net/~charmers/charms/trusty/ubuntu-repository-cache/trunk
[13:10] <lazyPower> gah, i missed that too
[13:10] <stub> Because I just added it
[13:10] <lazyPower> oh!
[13:11] <lazyPower> well, perfect :)
[13:11] <lazyPower> thanks stub
[14:29] <Prabakaran> Hello Team, Could someone please advise me on how to set up the bundle branches for pushing the charms in Launchpad? For example, if our product is supported on both Trusty and Xenial, how do i name the branches... is it like this https://code.launchpad.net/~<group name>/charms/bundle/<charm name>/trunk
[14:34] <rick_h_> Prabakaran: I'd suggest moving to the new publish method and not keeping the charms in that path any more
[14:35] <rick_h_> Prabakaran: https://jujucharms.com/docs/stable/authors-charm-store
[14:35] <Prabakaran> k thanks rick_h_
[15:02] <tvansteenburgh> anyone have an example of setting frontend config options with the haproxy charm?
[15:11] <tvansteenburgh> never mind, looks like you just set all your options and the charm is smart enough to know which are frontend vs backend
[15:13] <lazyPower> tvansteenburgh "magic" *hand wavey*
[17:23] <jhobbs> how much ram do i need for a controller machine 0?
[17:24] <jhobbs> I'm creating a VM to host it.. is 2GB enough? 4GB?
[17:26] <jrwren> jhobbs: I am not a good person to answer, but IIRC EC2 m3.large is default using that provider because of the memory usage.
[17:26] <jhobbs> jrwren: cool that's helpful, thanks
[17:27] <jhobbs> i'll go with 8
[19:25]  * D4RKS1D3 hi everyone
[19:29] <D4RKS1D3> Does anyone know if it is possible to upgrade a charm from a local copy when you installed it from the repos?
[19:30] <D4RKS1D3> Or do I need to remove and reinstall the unit?
[19:47] <beisner> hi D4RKS1D3, i think for that you can use --switch <foo> with charm upgrade.  https://jujucharms.com/docs/1.25/authors-charm-upgrades
[19:47] <beisner> or rather upgrade-charm --switch  :)
[19:49] <D4RKS1D3> thanks beisner
[19:50] <D4RKS1D3> I will check
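A sketch of beisner's suggestion, using the juju 1.25 syntax from the linked docs. The service name and local repository path here are made up for illustration:

```shell
# With a local copy of the charm laid out under ~/charms/trusty/mycharm,
# switch a store-deployed service over to the local charm:
juju upgrade-charm mycharm --switch local:trusty/mycharm --repository ~/charms
```

--switch crosses from the store charm to the local one in place, so the unit does not need to be removed and redeployed.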
[21:01] <mwenning> hi, did juju-gui change somehow? I'm deploying juju-gui and wiki-simple using juju 2.0 on my local system - juju-gui shows nothing
[21:02] <mwenning> juju status shows all units idle and ready and I can point my browser at mediawiki fine.
[21:03] <mwenning> i can log in ok, but it shows 0 machines