#juju 2012-05-21
<ihashacks> SpamapS: is the mysql-oneway-replication horribly broken like Nagios was or is it just me?
<ihashacks> errors: http://paste.ubuntu.com/998416/
<ihashacks> running: juju add-relation skydb-master:master skydb-slave:slave
<ihashacks> juju status: http://paste.ubuntu.com/998419/
<ihashacks> I know that there are settings for "user password hostname port dumpurl" referenced here: https://juju.ubuntu.com/Interfaces/mysql-oneway-replication
<ihashacks> Not clear to me if the burden of setting those is on me or the relation scripts
<JoseeAntonioR> Hi guys! I have a question: if you deploy mysql, do you get a web interface or something?
<ihashacks> "deploy mysql" gives you a mysql server that you can "add-relation" to other services such as wordpress for the web part.
<JoseeAntonioR> great, thanks
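For reference, the deploy-then-relate flow ihashacks describes looks roughly like this with the juju of that era (service names are just examples; assumes an already-bootstrapped environment):

```shell
# Deploy the database and a web app, then wire them together.
juju deploy mysql
juju deploy wordpress
# The relation hands wordpress its database credentials:
juju add-relation wordpress mysql
# Open the web front end to the outside world:
juju expose wordpress
# Watch the services come up:
juju status
```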
<SpamapS> ihashacks: mysql-oneway-replication should work in theory.. but like the nagios charm, it may have bitrotted since I first created it almost a year ago
<jcastro> hazmat: good afternoon!
<jcastro> how we looking wrt. the queue page?
<SpamapS> jcastro: heh.. negronjl did a nice job producing a cmdline version :)
<jcastro> !
<jcastro> donde?
<SpamapS> jcastro: I'm about to +1 the merge proposal into charm-tools
<negronjl> SpamapS: nice :)
<jcastro> I was going to ask that next, heh
<hazmat> greetings jcastro
<hazmat> jcastro, wip
<SpamapS> negronjl: threading buys me about 0.5s ... the other 3.5s is all waiting for the bug search :-P
<SpamapS> negronjl: merge away, sir
<negronjl> SpamapS: thx
<jcastro> hey so, by default for ubuntu sponsorship each person does 4h a month
<jcastro> obviously we're much smaller
<jcastro> so I was thinking going for something like 2 hours a week?
<jcastro> for people in ~charmers
<jcastro> is that too much/little?
<SpamapS> jcastro: I don't think we all have 2 hours a week to give
<SpamapS> jcastro: nor does the influx of sponsorship demand 2 hours a week
<SpamapS> jcastro: 4 hours, in one block, is more valuable than 2 hours in 4 blocks, IMO.
<jcastro> ok so you're thinking once we get past the hump we should just go with that?
<SpamapS> jcastro: we don't need 24/7 coverage either.. we just need "most of the day on most days" coverage
 * jcastro nods
<SpamapS> jcastro: a quick link to the calendar in topic or just on the Charms page would be nice tho.. so you can see when there is coverage
<negronjl> jcastro, SpamapS: For the time being, we could concentrate on number of reviews ( say one per person per week )
<SpamapS> negronjl: I'm hesitant to change the plan we came up with
<jcastro> we have about ~7 people in charmers that have been regular reviewers
<SpamapS> negronjl: patch pilot works because it is simple and focused on using peoples' time to maximum effect
<jcastro> SpamapS: the topic and calendar and stuff will be easy
<negronjl> SpamapS: I'm ok with that.  Let's just pick something.  I threw that suggestion out there because I saw some hesitation to move forward with the current plan.
<jcastro> we'll be fine I think
<SpamapS> negronjl: only hesitation from jcastro.. ;)
<SpamapS> who listens to *that* guy? ;)
<jcastro> I have no hesitation, I was just wondering if the # of hours was right
<negronjl> SpamapS: not many people :)
<SpamapS> jcastro: we have no baseline.. so we can try 4 hrs and measure the effect
<jcastro> SpamapS: I was thinking of proposing "everyone all in until we get it under control, then go nice and easy"
<SpamapS> jcastro: I've tried that before. Doesn't seem to have much effect. The usual people do their usual awesomeness, then return to not having enough time to address the queue.
<SpamapS> jcastro: lets just light the fire of 4 hours per month.. assign people days.. nag them.. and if things aren't getting touched enough, bug people who are in ~charmers to do more.
<SpamapS> We have, what, 27 people!?
<negronjl> SpamapS, jcastro:  I agree that, with 27 people and no baseline, 4 hours is as good as anything so, for now ... start there.
<SpamapS> I would hope that those 27 would be a little more involved with a known time to plan for
<negronjl> SpamapS: that makes me think that, the same way that there is a process to become a charmer, there should be one to remove people that are inactive.  Just a thought
<SpamapS> negronjl: I'm fine with developing an inactivity report.. we can simply scan the bugs commented on for each user, and if there are none in /charms for the past 3 months, warn then remove.
<SpamapS> negronjl: Lets make that a TODO for something to add to policy after we get policy in the bzr repo.
 * SpamapS opens a charms bug
<jcastro> I was just going to mention it in the sponsorship initial mail
<jcastro> "if you're in ~charmers and haven't been reviewing you have a few days to get out, otherwise I'll start assigning you and annoying you."
<SpamapS> Yeah start seeding now "please participate or flag yourself as not participating so we can set expectations appropriately"
<negronjl> jcastro:  Have you created the calendar yet ?
<jcastro> nope
<jcastro> daniel's script just adds your assignment to your work calendar
<SpamapS> jcastro: would be much better if it was a single calendar that people can see
<negronjl> jcastro:  I think I like that better as well
<jcastro> k
<jcastro> I'll figure something out
<SpamapS> Ok, filed bug 1002406 for the ~charmers policy
<_mup_> Bug #1002406: Add policy to discuss when charmers members should be automatically removed <policy> <Juju Charms Collection:New> < https://launchpad.net/bugs/1002406 >
<twobottux> Launchpad bug 1002406 in charms "Add policy to discuss when charmers members should be automatically removed" [Undecided,New]
<_mup_> Bug #1002406: Add policy to discuss when charmers members should be automatically removed <policy> <Juju Charms Collection:New> < https://launchpad.net/bugs/1002406 >
<SpamapS> Uhhh
<SpamapS> twobottux: you need to go away
<twobottux> SpamapS: Error: "you" is not a valid command.
<twobottux> SpamapS: Error: I am only a bot, please don't think I'm intelligent :)
<bkerensa> =o
<SpamapS> whose bot is that?
<hazmat> twobottux, its an askubuntu bot afaik
<twobottux> hazmat: Error: "its" is not a valid command.
<twobottux> hazmat: Error: I am only a bot, please don't think I'm intelligent :)
<hazmat> SpamapS, -> <twobottux> 07:52:48> Announcement from my owner (amithkk): Hey! I'm 2bottuX, A bot by Amith KK. I'm on 2 ubuntu channels and #2buntu. My Function is to provide AskUbuntu Integration. If you want me in any of your channels watching a tag, msg amithkk
<hazmat> marcoceppi, ^
<hazmat> not sure why its doing lp stuff
<SpamapS> Yeah it needs to ignore bugs
<SpamapS> otherwise bugs will be 3 lines of spam every time
<marcoceppi> hazmat: I'll talk to amithkk
<jrgifford> marcoceppi: SpamapS so you need it to ignore bugs?
<marcoceppi> jrgifford: yeah, the _mup_ bot already does that
<jrgifford> gotcha
<marcoceppi> if it's to do anything it should only follow the juju tag on Ask Ubuntu
<jrgifford> ok, let me see if amithkk left the tmux session running
<marcoceppi> At least until that functionality is merged into the _mup_ bot
<jrgifford> i'm going to ctrl-c it at the console, it'll be (hopefully) right back
<jrgifford> do you really want it in #juju-dev ?
<marcoceppi> Juju questions should end up in this room, not juju-dev
<jrgifford> ok
<jrgifford> i think i fixed it
<jrgifford> lets try it in a moment
<jrgifford> Bug #1002406
<_mup_> Bug #1002406: Add policy to discuss when charmers members should be automatically removed <policy> <Juju Charms Collection:New> < https://launchpad.net/bugs/1002406 >
<marcoceppi> cool
<jrgifford> looks like i fixed that
<marcoceppi> jrgifford: is the source for this public?
<jrgifford> marcoceppi: um, no. at least, not right now as far as i know
<jrgifford> i think amithkk's going to submit a merge request like, next week or something with his changes
<SpamapS> jrgifford: *thank you*
<jrgifford> SpamapS: no problem.
<jrgifford> it's on my server, i'd be really disappointed if i couldn't stop it. :P
<marcoceppi> negronjl SpamapS jcastro with the "Review Queue" stuff merged in to charm-tools do we still want to pursue that web-based thing?
<SpamapS> marcoceppi: I'm fine w/ the cmdline tool. :)
<marcoceppi> Guess I'll learn Django another day
<marcoceppi> :)
<jcastro> yes
<jcastro> let's keep it web based
<jcastro> I mean, having a companion CLI tool is nice
<jcastro> but it'd be nice to have it part of the web UI, so anyone can see what's going on, etc.
<marcoceppi> Django is back on, hazmat if I get you code for this in the form of a Django project will you be able to integrate it into the current Charm World thing?
<hazmat> marcoceppi, negronjl already provided it
<marcoceppi> oh
<marcoceppi> makes that easy
<SpamapS> :)
<hazmat> negronjl, http://jujucharms.com/review-queue
<marcoceppi> hazmat: sexy
<marcoceppi> Should we drop the Charm Needed: stuff?
<hazmat> marcoceppi, yeah.. probably, the need has been fulfilled
<hazmat> although that's a matter of perspective i suppose
<hazmat> its still needed till its in 'main'
<marcoceppi> IMO, if it's a bug in the charm project, it's for a charm being worked on; Since we can open bugs directly against promulgated charms, etc
<SpamapS> I'd leave the Charm needed
<SpamapS> For the Proposals I'd rather see the URL there than "Proposal"
<SpamapS> Though I think we talked about grabbing the summary of any linked bugs
<hazmat> SpamapS, fixing that right now
<hazmat> a couple of other minors as well
<SpamapS> Another way to go would be Proposal: $(charmname)
<hazmat> SpamapS, i'm doing Merge proposal for %{charmname/branch_name}
<hazmat> hm.. although perhaps i should just do %series/charm_name
<hazmat> yeah.
<SpamapS> Yeah perfect
<hazmat> SpamapS, should be good now
<hazmat> let me know if you think of any other tweaks
<SpamapS> hazmat: its sorted with newest on top
<hazmat> SpamapS, you want inverse?
<hazmat> er. normal sort
<SpamapS> well I think we do
<hazmat> fifo
 * SpamapS is once again spinning too many plates to recall which direction this plate should be spinning
<hazmat> cool
<hazmat> SpamapS, fixed.. you'll have to ctrl-r for your browser to force a fetch
<SpamapS> hazmat: looks good
<hazmat> its setup for a 3m http cache, and a 10m cron update
<SpamapS> I think we may want to remove In Progress tho
<SpamapS> hazmat: thats mighty fine. :)
<hazmat> SpamapS, most of those have branches attached re 'in progress'
<hazmat> your call though
<hazmat> SpamapS, for example.. https://bugs.launchpad.net/charms/+bug/983339  this one wouldn't be in the queue otherwise
<_mup_> Bug #983339: New Charm: munin-node <new-charm> <Juju Charms Collection:In Progress> < https://launchpad.net/bugs/983339 >
<hazmat> well i guess it would for being a new charm tag
<SpamapS> hazmat: the merge proposals In Progress are fine. The bugs, are not.
<SpamapS> a bug in progress means charmers is not expected to do anything
<SpamapS> though perhaps instead, we should just unsubscribe charmers and let the user ask for attention again when its time.
<hazmat> SpamapS, so the munin charm isn't ready for review?
<SpamapS> hazmat: the munin merge proposal is
<SpamapS> err
<hazmat> SpamapS, :-)
<SpamapS> no never mind
<SpamapS> hazmat: so james page set it back to 'In Progress' to suggest that its not ready for any further review. I think.
<hazmat> SpamapS, work in progress works well on a merge proposal.. on a bug there isn't a clear way to indicate ready for review outside of a tag
<hazmat> right now pretty much all bug states outside of committed, released with a 'new-charm' tag are considered part of the queue
<SpamapS> hazmat: yeah, I think we just need to think about how we want to manage the queue a bit
<SpamapS> hazmat: simplest is to just have new-charm be the clear "I need help" flag
<SpamapS> or, I think we'll change it to subscribing ~charmers
<hazmat> SpamapS, new-charm works for the initial point of contact, but as things progress, it's unclear that it remains an accurate reflection of the current state of the charm branch
<hazmat> SpamapS, looking over https://bugs.launchpad.net/charms/+bug/806044 for example, the author has incorporated review feedback, but there's really no way of knowing it from the bug per se.
<_mup_> Bug #806044: Charm needed: Moodle <new-charm> <Juju Charms Collection:In Progress by rkather> < https://launchpad.net/bugs/806044 >
<dpb_> hi folks.  I had a 75GB log file after just a few days of running a local lxc 2-service test bed.  Is this a known issue?  (machine-agent.log)
<hazmat> dpb_, yes.. its fixed, and awaiting an sru
<hazmat> SpamapS, any updates on that getting pushed out?
<dpb_> hazmat: cool, thx
<dpb_> hazmat: in the ppa already?
<SpamapS> hazmat: I'll upload to precise-proposed tomorrow
<SpamapS> wow.. charm getall takes a *long* time
#juju 2012-05-22
<hazmat>  SpamapS why do you need to use getall?
<hazmat> SpamapS, its just not going to scale, so if there are use cases...
<hazmat> we should come up with some alternatives
<hazmat> dpb_, yeah.. its been in the ppa for a while
<SpamapS> hazmat: to look at every charm
<SpamapS> hazmat: same problem exists in Ubuntu
<hazmat> SpamapS, define look at every charm? what sort of things do you want to look for?
<SpamapS> hazmat: right now, searching for all charms w/o maintainers
<hazmat> SpamapS, i'm just trying to think if we can setup some exposed interfaces against the charm farm to do m/r
<SpamapS> hazmat: and generating stats to help assign them :)
<SpamapS> hazmat: also we have to check them all out for charmtester
<hazmat> SpamapS, charmtester does them in minimal sets
<SpamapS> hazmat: well we don't.. we could lazy-branch them whenever a test wants a different charm
<hazmat> er.. charmrunner does
<SpamapS> explicit tests are allowed to ask for any charm
<SpamapS> and its expected that it will be from the same place as the testing charm, not the charm store
<hazmat> better to do that lazy, or just get the store version
<SpamapS> yeah, NACK to the store version, but ACK to just grab it as needed
<SpamapS> hazmat: there are also times where we just want to dig through them and find out who is doing what in charms. Those types of analysis problems are problematic in any project at scale.
<hazmat> SpamapS, right.. which is why i'm thinking about a map/reduce interface for doing them
<SpamapS> hazmat: ubuntu could benefit if you keep it bzr-pure.
<hazmat> hmm
<hazmat> that might be nice, but depends on the interface.. effectively it should just look like run these commands/scripts against this directory
<hazmat> where directory is charm or deb
<hazmat> and distribute to a cluster of servers
<imbrandon> m_3 / jcastro / SpamapS / hazmat ( as you all were present in 80% of those few i got work items from ) y'all mind helping me find all/any of mine? I seem to be listed as I should be, but with nada items from ANY tracks, and i know that's just flat wrong ( and/or poke me to a better solution, as I only generally remember the ideas of them, not the specifics of any really )
<imbrandon> and not tonight, like over the next days etc
<imbrandon> pwease
<SpamapS> hazmat: charm getall, btw, is only really slow because it connects and disconnects from launchpad's SSH about 5 times per branch
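The per-branch SSH connect/disconnect overhead SpamapS describes is a generic problem with repeated `bzr+ssh` operations, and can be mitigated with OpenSSH connection sharing. A hedged `~/.ssh/config` sketch (the host comes from Launchpad's bzr+ssh URLs; the timings are illustrative, not a recommendation from the discussion):

```
Host bazaar.launchpad.net
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h-%p
    ControlPersist 10m
```

With this in place, the first bzr operation opens a master connection and subsequent branch fetches within the persist window reuse it instead of renegotiating SSH each time.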
<SpamapS> imbrandon: none of the BP's have been approved just yet
<SpamapS> imbrandon: so you won't see them on status.ubuntu.com
<imbrandon> ohhhh whew, i thought i was screwed
<imbrandon> rockin, ok perfect i'm just premature here
<imbrandon> ok not the first time, i can deal
<hazmat> SpamapS, sure, but effectively that's already been done by the charm browser.. its keeping up to date copies of the entire charm universe to support hook browsing
<hazmat> SpamapS, just thinking it would be nice to leverage that and distribute across multiple servers
<SpamapS> hazmat: eeexxxcccellent smithers
<SpamapS> hazmat: sounds like the makings of a 'charm-mirror' tool :)
<imbrandon> ( and yea that was totally a terrible joke, I seriously apologise now for those that read it as intended )
<hazmat> SpamapS, perhaps.. not really needed in this context, gluster or ceph would give multi-server access
<hazmat> SpamapS, getall is a mirror tool ;-)
 * imbrandon gets back to some other crap code due in the morning that is not so sexy but is paying some electric bills in a few months or something along those lines
<imbrandon> yea just needs a mirror alias in jitsu
<imbrandon> :)
<hazmat> but why?
<SpamapS> hazmat: I know, a crappy craptastic one
<hazmat> we can create a tool for the analysis
<hazmat> SpamapS, what sort of analytics are you looking for?
<hazmat> basically just grep with a file filter?
<SpamapS> hazmat: grep
<SpamapS> yep
<imbrandon> hazmat: no real reason, from me at least, except i can see it being only slight help via sugar syntax when using a local offline-only maas or something
<SpamapS> Running offline is an absolute requirement for real production work.
<SpamapS> Many places will be using juju to bring up the whole infra.. perhaps before it is connected to the internets
<imbrandon> SpamapS: btw i got to eat my cake and keep it too, as i kept the new iPad AND just after i dropped off that macbook at the local non-profit that refurbs em for battered women and a few other seemingly very worthy causes, and gave my 32 year old kid brother the other MBP in kindness instead of selling it on craigslist ( he's trained better than me tho as he had ubuntu on it before i left his driveway lol ) when i got home my boy john came over and for a
<imbrandon> so now mini with ubuntu as main and osx cloned in a vm of my old install, the new mini 10 with unity but only 1gb ram and atom N270 proc but very much capable for mobile work, and no others here to take my time maintaining un-needed
<imbrandon> :)
<imbrandon> oh and not spending an extra dime AND have the ipad :) so yea good monday so far
<imbrandon> now if i could get the ipad to read my mind and share nice with ubuntu on the mini custody of the attached bluetooth keyboard when i say triple-tapped or something, then i think i would have a nerdgasm not having to lift a hand to use both
<imbrandon> ipkvm client software is available on the ipad so i bet its doable if i look up the api's and compile a lil app, and i can even sign it still, hell yea, i know what i'm gonna do next weekend
<SpamapS> imbrandon: Hey does the newer mini w/ multi-monitor work in Linux?
<SpamapS> imbrandon: I had heard tell that the thunderbolt stuff didn't work
<imbrandon> yea no it works perfect
<SpamapS> imbrandon: and I need a desktop machine so I can drive two bigger monitors.
<imbrandon> full res and all
<imbrandon> yea i bought the cheapest brand-new model, the $599 one or whatever it was
<imbrandon> with an in-store 50 off
<imbrandon> and then i took the $50 instant off and bought 8gb of ram
<imbrandon> and it can take 16gb total but not upgraded to that yet, and it comes with both the hdmi --> dvi and the displayport --> dvi dongles, and both work perfect out of the box in linux
<imbrandon> and has room for a second hard drive or first ssd that will be my next purchase, and it's very very simple to add in now with the bottom that opens with no tools
<imbrandon> the ONLY downside so far i found is like most apple things i run out of usb quickly and have 2 powered hubs on my desk
<imbrandon> and the only other tiny thing is if you get the "non server" e.g. normal os and only-one-drive configured option
<imbrandon> you have to buy an apple-only sata cable to add the second drive later like i wanna do
<imbrandon> that they only sell online, no picking up at the apple store :(
<imbrandon> it's cheap comparatively tho at like $12 or something, just the hassle
<imbrandon> everything else, it's by far the quickest mac desktop between the imacs and other minis i've owned, and that 3rd party usb-to-dvi was the only thing, including magic trackpad gestures, that didn't work out of the box in 12.4
<imbrandon> 12.04
<SpamapS> I may actually go with a system76 box tho
<imbrandon> i run 2 vm's with 2gb each ram, and photoshop and chrome with tons of tabs, plus mail.app and other minor crap like adium, with no jitter etc
<SpamapS> I want to have more muscle than the mini
<imbrandon> well it's becoming blasphemy but i do at least like the idea of being able to run bare metal osx if wanted
<SpamapS> yeah I couldn't care less about OS X anymore
<imbrandon> and will still likely make sure i have the option to do so for a while to come, until the withdrawals quit after detox :)
<imbrandon> but mostly to compile to iOS and Mac App Store.app
<imbrandon> and those i'm using html5 + js + adobe phonegap more and more
<imbrandon> so wont even need it for that if i jumped head first there
<imbrandon> would only have photoshop to cling to that i'm betting a month with gimp 2.8 and inkscape could fix up
<imbrandon> not a year ago, but now possibly
<SpamapS> anyway, time for family stuff
<imbrandon> cheers yea, i promised myself to get this code done tonight too
<imbrandon> so afk on irc for now anyhow
<imbrandon> l8tr
<imbrandon> SpamapS: i'll send you all the lsproc lspci etc etc etc if you do consider it more later
<imbrandon> fyi
<imbrandon> oh and know the damn thing runs HOT all the time, i bought a desk fan i keep running 24/7 pointed at mine
<imbrandon> probably a dealbreaker for most
<imbrandon> like original MBP 15inch hot
<bkerensa> imbrandon: http://i.imgur.com/CS7HQ.jpg <-- sexy
<imbrandon> bk yea its nice hardware
<imbrandon> i do give them that, and ubuntu is the closest thing to an Apple hardware / OS X marriage; that's the real magic, right beside the iOS and OS X integration that really makes an apple an apple
<imbrandon> on a pure hardware level they are identical chipsets and all to any XPS-series dell, just cheaper and no windows tax :)
<imbrandon> ( and yes I said the Dell was the expensive one, on avg $100 more when compared 3 months or so ago, but neither company has had major releases since, so it should still be about right )
<imbrandon> and HP and Lenovo were both nearly the same price as the Apples, but not fair to compare as they only matched specs and used cheaper chipsets and other hardware that was not as nice or required an addon
<imbrandon> :)
<imbrandon> bkerensa: Y U NO SEND ME A MOUSE YET
<bkerensa> I went to UDS and all I got was a free System76 to play with :D
<bkerensa> oh yeah let me e-mail them and then I will fedex after I snap some photos and do a mediocre review
<bkerensa> :D
<JoseeAntonioR> bkerensa: you got a system76?!
<imbrandon> np, i'll even do the proper review too if you want, i'm fairly certain I have as much readership :)
<imbrandon> bkerensa: it was a wired one you mentioned correct ?
<imbrandon> actually IDK .me crack whip on himself , u no get distracted with shineys
<hazmat> there's something broken with lp's aliases
<SpamapS> bkerensa: I actually don't find System76 hardware very attractive at all.. I wish they would spend a little more time on the case design. :-P
<SpamapS> To me they look like Toshibas from 2004
<SpamapS> I may treat myself to one of these though: https://www.system76.com/desktops/model/leox3 ... mostly because the water cooling makes it quiet. :)
<victorp> does anyone know if I can set up a default environment in juju so I don't have to keep appending -e blah to all my commands?
<ikk> alias juju='juju -e' in your .bashrc >
<ikk> not sure where the > came from ignore that bit!
<jamespage> victorp, put 'default: <environmentname>' at the top of ~/.juju/environments.yaml
<victorp> jamespage,  Thanks!
<victorp> jamespage,  worked like a... well... charm :)
<jamespage> victorp, I remember discussion about being able to use an environment variable as well but I don't think thats been implemented yet
<jamespage> victorp, np
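A minimal sketch of what jamespage describes (the environment name `sample` and the provider type are placeholders, not from the discussion):

```yaml
# ~/.juju/environments.yaml
default: sample
environments:
  sample:
    type: ec2
    # ... provider-specific settings (credentials, control-bucket, etc.) ...
```

With `default:` set, plain `juju status` etc. acts on that environment, and `-e othername` still overrides it per command.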
<_mup_> juju/trunk r537 committed by kapil.thangavelu@canonical.com
<_mup_> [merge] dont-proxy-https, update local provider apt-proxy config to passthrough for https repos. [a=davidpbritton][r=hazmat][f=993034]
<jcastro> hazmat: buenas mornings!
<hazmat> jcastro, Buenos días
<jcastro> hazmat: I can haz queue?
<jcastro> marcoceppi: wordpress charm question
<jcastro> when you backport the fixes you made for OMG into the official charm
<jcastro> how do we handle upgrades? Is it just a juju upgrade wordpress or do we need to do anything special?
<hazmat> jcastro, its up.. http://jujucharms.com/review-queue
<jcastro> \o/
<jcastro> hazmat: want me to file bugs on the other details or are you still WIP?
<jcastro> (needs a link somewhere on the homepage, needs an "8 days old" column, etc.)
<hazmat> jcastro, pls file bugs
<hazmat> jcastro, i've moved on to other items at the moment
<jcastro> this is awesome, enough for us to get working
<jcastro> thanks dude!
<mhall119> jcastro: if I install LXC again, will cgroups break my ability to suspend again?
<jcastro> no clue
<jcastro> suspend works with my LXC
<mhall119> :(
<mhall119> Someone (smoser maybe?) told me at UDS that my issue most likely came from a known bug in cgroups
<jcastro> though the other day it filled my disk with machine logs because I left it running for like a week
<smoser> mhall119, your issue was cgroups, yes.
<smoser> do not install cgroups
<smoser> install cgroup-lite
<hazmat> its the dep for lxc afaicr (cgroup-lite)
<mhall119> ah, ok
<spidersddd> Does anyone know of a way around the S3 dependency in juju?
<mhall119> jcastro: where is the "Get LXC setup and start deploying to it with juju" tutorial?
<spidersddd> I have a swift install that does not have compatibility enabled and want to use it with openstack.
<jcastro> mhall119: https://juju.ubuntu.com/docs/getting-started.html#configuring-a-local-environment
<jcastro> I have a WI to update that
<mhall119> jcastro: what's with all the java dependencies?
<hazmat> spidersddd, we'll have a better story for openstack providers for alpha testing by the end of the week
<hazmat> mhall119, zookeeper
<hazmat> mhall119, its pretty minimal deps for a java server
<hazmat> libjline-java, liblog4j1.2-java (>> 1.2.15-8), libnetty-java, libxerces2-java
<hazmat> although transitive deps blow that up to near a dozen i think
<jrgifford> amithkk: hey, i don't think endeavor can handle twobottux anymore, for some reason it's swapping really bad and is almost unresponsive. so... idk whats going on, i'll let you know once i've figured it out.
<mhall119> do I need to generate random strings for control-bucket and admin-secret when using LXC?
<jrgifford> mhall119: yes
<mhall119> can they be random, or do they have to match something specific?
<hazmat> m_3, ping
<hazmat> m_3, i think your migration script broke all the oneiric charms
<jcastro> mhall119: mine are random
<jcastro> I just paste in the example and jumble the numbers
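Since jcastro's advice is that the values just need to be unique, a hedged way to generate them instead of hand-jumbling (openssl is one option of many):

```shell
# Generate opaque random values for the local provider's
# control-bucket and admin-secret; any unique strings work.
CONTROL_BUCKET="juju-$(openssl rand -hex 16)"
ADMIN_SECRET="$(openssl rand -hex 16)"
echo "control-bucket: $CONTROL_BUCKET"
echo "admin-secret: $ADMIN_SECRET"
```

Paste the two printed lines into the environment's stanza in ~/.juju/environments.yaml.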
<jcastro> m_3: negronjl: SpamapS: proposed charm process mail is on the list!
<mhall119> jcastro: docs aren't very clear on this
<jcastro> I know
<jcastro> I have a WI to fix up the entire LXC part
<jcastro> mhall119: it's in lp:juju/docs if you want to toss in some quick fixes
<jcastro> (busy with charm process crap right now)
<mhall119> are there specific lengths to these keys?
* jcastro changed the topic of #juju to: Reviewer: everyone || Review Queue: http://jujucharms.com/review-queue || Charms at http://jujucharms.com || Want to write a charm? http://juju.ubuntu.com/Charms || OSX client: http://jujutools.github.com/"
* jcastro changed the topic of #juju to: Reviewer: ~charmers || Review Queue: http://jujucharms.com/review-queue || Charms at http://jujucharms.com || Want to write a charm? http://juju.ubuntu.com/Charms || OSX client: http://jujutools.github.com/
<marcoceppi> jcastro: When the charm is updated I'll make sure it doesn't break older charms
<marcoceppi> Well this is a weird BZR error: http://paste.ubuntu.com/1000957/
<mgz> what's your local config?
<mgz> looks like a slightly borked locations rule or similar.
<marcoceppi> http://paste.ubuntu.com/1000974/
<jcastro> @pilot in
<jcastro> hmm, guess that doesn't work
<mgz> marcoceppi: as in, run `bzr config` in that dir
<marcoceppi> mgz: http://paste.ubuntu.com/1000977/
<mgz> hm, is server side.
<marcoceppi> \o/
<imbrandon> jcastro: i dont think you can do it in here, must be in a query, and then also say its name there too
<jcastro> http://jujucharms.com/review-queue
<jcastro> in case anyone missed it. :)
<imbrandon> and there is a fair amount of manual setup on the ical server ( gmail ) too if you hadn't done that; it's not automated in daniel's process. i was going to try and script it later when i had time
<imbrandon> nice, me adds it as the first bookmark on the always showing bar
<imbrandon> btw did anyone find out what red tape needs to be gone through to get this GPLv3'd, or so it's listed as on LP code? I still can't figure out the politics involved, but it's like 4 weeks now and I still don't even know whom at canonical would actually need to bless it for hazmat or whoever still has the physical bits. and actually I'm busy now anyhow, but still gonna say it so maybe it won't be another 4?
<jcastro> it's in progress
<jcastro> I'd say find something else to do for now. :-/
<imbrandon> i have, for the last month, and will continue, but soon enough i won't and won't ask again; i'll just rip the css and then make a clone fork
<hazmat> marcoceppi, that's related to the mail i just sent
<hazmat> marcoceppi, all the official oneiric charms got broke
<hazmat> er.. branches are broken
<marcoceppi> hazmat: ah, I'm at like inbox 800 still. Need to catch up
<imbrandon> jono let the news out
<imbrandon> wow ok maybe not, bad title to blog post /me is going back to bed before foot goes into mouth 3rd time in an hour
<mhall119> juju bootstrap isn't working for me: http://paste.ubuntu.com/1001009/
<jcastro> imbrandon: we have the queue now
<imbrandon> ?
<jcastro> see topic
<imbrandon> i did , i'm looking them over to review some now, just wonder what you were meanign if more than that
<imbrandon> :)
<mhall119> jcastro: http://paste.ubuntu.com/1001009/ when you can
<jcastro> jimbaker`: don't forget to add your show to juju.ubuntu.com/Events pls
<jcastro> mhall119: never seen that one before
<mgz> hazmat: which list did you send that mail about broken charms to?
<jcastro> did you log out after you installed all the LXC stuff?
<hazmat> mgz, there's one on both lists
<mhall119> jcastro: yeah, rebooted
<hazmat> separate threads though
<mhall119> jcastro: it was doing this before reboot too
<mhall119> I was hoping reboot would fix it
<jcastro> hmm not sure.
<mgz> okay, I see the reply on the juju list.
<jcastro> imbrandon: any ideas on mhall119's problems?
<imbrandon> sorry was detached , ummm
<imbrandon> is that lxc , hrm looks like not rebooted after the package was installed
<imbrandon> or logged out, not sure a full reboot is needed
<mhall119> jcastro: imbrandon: http://paste.ubuntu.com/1001017/ is my ifconfig output
<mhall119> imbrandon: I rebooted already
<imbrandon> ok try to just destroy the envirnment and bootstrap it a second time clean
<imbrandon> that clears up a zk error, not sure if it will help here
<mhall119> do I need to run bootstrap as root?
<imbrandon> i think localy yes as sudo
<imbrandon> but not on anything else
<imbrandon> eg aws or osapi
<mhall119> heh, yeah,  sudo juju bootstrap goes a lot better
<mhall119> jcastro: ^^ would be nice to have in the docs
<jcastro> wait what?
<jcastro> since when?
<imbrandon> yea, i forgot that was an issue in the charm room 205 sometimes, that probably needs to be fixed ( iirc jamespage is on it ) or updated in docs that lxc containers need sudo prefix for juju
<jcastro> for the bootstrap or everything?
<imbrandon> everything, well most everything
<imbrandon> cuz really you sudo the bootstrap but then perms are wrong for it all
<imbrandon> so you gotta keep using it
<jamespage> hrm - you should not run any juju commands via sudo
<imbrandon> it was brought up as a bug
<marcoceppi> imbrandon: you should never use juju with sudo
<imbrandon> jamespage: only on lxc or bootstrapping dont work
<imbrandon> marcoceppi: yes i know this
<imbrandon> i said the same thing
<marcoceppi> There's a merge on the docs branch which has a fix in the documentation so you won't need sudo
<jamespage> imbrandon, no - not with lxc - should just work
<marcoceppi> It's related to the issue with libvirtd group not being added to the current users group
<imbrandon> jamespage: right , we are all saying the same thing just diffrent ways
<jamespage> great
<imbrandon> you SHOULD NOT NEED TO, but atm some do due to that nic stuff adding to grp you was fixing
<imbrandon> or agreed to look into
<jcastro> after this meeting I'll review the incoming doc request
<imbrandon> yea , not sure why a reboot did not fix him then though
<jamespage> imbrandon, https://bugs.launchpad.net/juju/+bug/997667 I think
<_mup_> Bug #997667: local provider makes assumptions about libvirtd default network setup <juju:New> < https://launchpad.net/bugs/997667 >
<imbrandon> yup yup thats the one
<jamespage> marvellous!
<marcoceppi> imbrandon: the install sometimes doesn't even add the user to the group, so no matter how many times you reboot it won't work.
<imbrandon> and yea we're all on the same page, just all had it worded in our own ways differently :)
<imbrandon> marcoceppi: ahh
<marcoceppi> You'll need to usermod -aG libvirtd <user>
<marcoceppi> jcastro:  <3
<jamespage> imbrandon, marcoceppi: and https://bugs.launchpad.net/juju/+bug/997213
<_mup_> Bug #997213: Unhelpful error message when doing a local bootstrap if user is not in libvirtd group <juju:New> < https://launchpad.net/bugs/997213 >
<imbrandon> :)
<imbrandon> mhall119: ^^ see what marcoceppi said to run, that should remove the sudo req for lxc then you should be "normal" at that point
 * marcoceppi can't wait for the document merge
<imbrandon> might need to log out/in after ?
<marcoceppi> imbrandon: you can just run `newgrp libvirtd` for that terminal session
<marcoceppi> http://askubuntu.com/a/65360/41
<marcoceppi> That question has everything we learned from UDS charm school, and the pending docs merge has all that information updated in the Getting Started section for LXC
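The group fix discussed above can be sketched as a small check script; the group name `libvirtd` is taken from this discussion and may differ on other releases, and the `usermod` line is only printed (not run), since it needs root:

```shell
# Check for the local-provider (LXC) permissions issue discussed above:
# juju's local provider talks to libvirt, so the current user must be
# in the libvirtd group (group name assumed from the discussion).
user=$(id -un)
if id -nG | grep -qw libvirtd; then
    echo "ok: $user is already in libvirtd"
else
    # Requires root; afterwards log out/in, or run `newgrp libvirtd`
    # in the current terminal session, for the change to take effect.
    echo "fix: sudo usermod -aG libvirtd $user"
fi
```

The install sometimes skips this group add entirely (per marcoceppi), so checking actual membership with `id -nG` is more reliable than assuming the package postinst did it.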
<jcastro> ok after this meeting we can merge it
<marcoceppi> cool
<jcastro> you're in ~charmers you should be able to just merge
<marcoceppi> jcastro: I proposed the merge :)
<jcastro> brandon is in ~charmers now too
<jcastro> imbrandon: there you go, your first review queue item. :)
<jcastro> get that badboy in there!
<imbrandon> heh trying man, got a phone call i just got off of, a code push to github going, and a 101 temperature
<imbrandon> can i start the week over >?
<jimbaker`> marcoceppi, nice doc in askubuntu on lxc, the only thing i saw is that there's an inconsistency in describing using a local charm repo, vs apparently going against cs
<imbrandon> oh that queue has the merge proposals and such too
<imbrandon> nice
<SpamapS> imbrandon: take care of brandon first
<imbrandon> oh i am, i'm literally in bed today, got a zpack from the doc yesterday
<imbrandon> but i get too stir crazy w/o laptop
<imbrandon> ok how does the new process work, i see you pulled the new-charm tags off, is that not needed anymore ?
<jcastro> hazmat: hmm, so I added ~charmers to a few bugs
<imbrandon> and i'm thinking the needs-review status means it needs review from whoever submitted it, not the ~charmers right ? we're looking/re-looking at the fix-committed and confirmed ones correct ?
<jcastro> and removed the tag
<jcastro> and they got removed from the queue!
<mhall119> marcoceppi: so I just need to add myself to libvirtd group?
<hazmat> jcastro, the queue is based on tags and merge proposals
<hazmat> specifically the 'new-charm' tag for new charms, and merge proposals for changes to existing official charms
<jcastro> hazmat: anything ~charmers is subscribed to is the new process
<jcastro> 'new-charm' is going away, that's why we're using charmers
<hazmat> jcastro, news to me
<jcastro> ? we mentioned it during the call
<hazmat> jcastro, ic, i'll reply on list, but i've got some other meetings to get to
<hazmat> atm
<jcastro> that's why we're reusing charmers, because that's how ubuntu does it
<hazmat> gotcha
<jcastro> with subscribing a group to get something in the queue, not a tag. K, talk to you on the list.
<imbrandon> ok gotta get my head round this error, detached from irc a bit
<negronjl> 'morning all
<jcastro> yikes
<jcastro> 7 MPs for docs!
<jcastro> maybe we should add juju/docs to the ~charmers review queue
<jcastro> since we can work on those
<hazmat> jcastro, noted
<hazmat> negronjl, g'morning
<m_3> hazmat: pong
<jcastro> hazmat: filing
<hazmat> m_3, greetings
<negronjl> hazmat: yo :)
<jcastro> hazmat: from now on I'll file on LP so we don't mix up anything
<m_3> hazmat: been moving... fun fun
<hazmat> m_3, no doubt
<hazmat> m_3, so bit of a fire drill. but afaics the migration script that was run killed all the oneiric charm branches
<hazmat> jcastro, awesome, thanks
<hazmat> m_3, i wanted to review the logic of that script, but also we need to figure out how to get them back
<m_3> hazmat: hmmm... lemme look
<hazmat> m_3, there's messages on both lists regarding this
<m_3> hazmat: should be easy enough to add back those versions under oneiric branches
<hazmat> m_3, with history?
<m_3> hazmat: sure... none of that was really killed
<hazmat> m_3, great i was worried
<mhall119> jcastro: I tried deploying mysql and wordpress to my LXC, but agent-state for both is still "pending"
<jcastro> it takes a while the first time
<jcastro> it has to DL like 300mb of stuff
<mhall119> it's been 20 minutes
<mhall119> oh...
<m_3> the branches were renamed by the script... realize they should've been copied instead.  perhaps we shouldn't use the `branch-distro` tool from lp to do this in the future
<SpamapS> mhall119: ps auxfw .. you should see lxc-create or lxc-clone or a chroot doing stuff
<mhall119> SpamapS: I see a couple lxc-start
<mhall119> but no -create or -clone
<mhall119> ah, because they appear to be started now :)
<mhall119> sweet, it's working!
<mhall119> juju destroy-environment takes down all the instances, right?
<mgz> yup.
<marcoceppi> jimbaker`: thanks, I'll update that inconsistency
<mhall119> ok, so now how do I get a charm I'm writing deployed?  Do I need to copy it somewhere?
<marcoceppi> mhall119: you'll want to create a local repository
<mhall119> docs?
<marcoceppi> mhall119: let me find them
<jcastro> juju deploy --repository=~/wherever local:charm-name
<mhall119> is a repository just a directory,  or something special?
<jcastro> it needs to be /precise/charmname
<mhall119> ok
<marcoceppi> mhall119: it's pretty straightforward. Create like a charms folder, put the series name inside of it (~/charms/precise) then each charm is its own directory inside of the series folder (~/charms/precise/foo) to deploy just `juju deploy --repository ~/charms local:foo`
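The layout marcoceppi describes can be sketched end to end; the charm name `foo` is a placeholder and a temp dir stands in for `~/charms`:

```shell
# Build the <repository>/<series>/<charm> layout that juju's local:
# scheme expects; "foo" and the temp dir are placeholders.
repo=$(mktemp -d)           # stands in for ~/charms
mkdir -p "$repo/precise/foo"
cat > "$repo/precise/foo/metadata.yaml" <<'EOF'
name: foo
summary: placeholder charm
description: illustrates the repository layout only
EOF
# Point --repository at the repository root, not the series directory:
#   juju deploy --repository "$repo" local:foo
ls "$repo/precise"          # prints: foo
```

Note that `--repository` takes the top-level folder; juju picks the series subdirectory itself based on the environment's default series.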
<mhall119> do I need to bootstrap after I destroy-environment?
<marcoceppi> mhall119: yup
<jcastro> imbrandon: you don't need new-charm anymore
<marcoceppi> I can't find local deployment details in the doc anywhere
<imbrandon> jcastro: i put it back until hazmat fixed the bug so it would not get overlooked
<jcastro> I got you bro
<jcastro> one sec
<mhall119> ok, deploying summit test charm, where do I see the output/logs?
<imbrandon> ty ty
<imbrandon> juju status and juju debug-log are the starters
<mhall119> does it have to download the 300mb every time I bootstrap and deploy?
* jcastro changed the topic of #juju to: "Reviewer: ~charmers || Review Queue: http://goo.gl/pnUaL  || Charms at http://jujucharms.com || Want to write a charm? http://juju.ubuntu.com/Charms || OSX client: http://jujutools.github.com/"
<jcastro> no
<jcastro> it caches for you
<jcastro> it's just the first time sucks
<mhall119> ok, cool
<jcastro> ok that review queue is Good Enough for now
<jcastro> that looks like the 15 charms I removed from the real queue. :-/
<imbrandon> yea
<jcastro> <Spamaps> I told you to not go so fast, it can wait.
<imbrandon> lol
<jcastro> I know I know Clint, I couldn't help myself
<negronjl> jcastro: look, for the review-queue, should we still track the new-charm tag as well as the bugs assigned to charmers?  It appears that by just subscribing charmers and removing the new-charm tag, we broke the review-queue....Am I missing something ?
<jcastro> negronjl: bug filed, hazmat is on it
<imbrandon> negronjl: right
<negronjl> jcastro:  give me the bug number so I can fix the CLI version of it as well
<jcastro> https://bugs.launchpad.net/charmworld/+bug/1002976
<_mup_> Bug #1002976: Review queue needs to account for ~charmers <charmworld:New> < https://launchpad.net/bugs/1002976 >
<negronjl> jcastro: thx
<jcastro> hey when you are up to date, can you add a link to your tool in the Reviewers section? https://juju.ubuntu.com/CharmsProposedProcess
<jcastro> dang, sorry about the queue fellas, it was looking so wicked this morning and then I got ahead of myself
<jcastro> oh well, at least we found a bug!
<negronjl> I'm on the CLI fixes now
<negronjl> jcastro:  We'll have it all fixed soon :)
<jcastro> oh hey
<jcastro> I have another one
<jcastro> what's the cli project on lp?
<jcastro> I'll just affects you
<marcoceppi> jcastro: it's charm-tools
<jcastro> negronjl: this one too: https://bugs.launchpad.net/charm-tools/+bug/1002977
<_mup_> Bug #1002977: Review queue should list lp:/juju/docs <Juju Charm Tools:New> <charmworld:New> < https://launchpad.net/bugs/1002977 >
<negronjl> jcastro:  got it.  working on it so the CLI gets that
<jcastro> ok I'll affects charm-tools if I file a bug that needs parity in the CLI tool from now on
<negronjl> jcastro: ok
<negronjl> dist-upgrading ... I may or may not be back in a minute :)
<jcastro> :)
<jcastro> negronjl: when he's all done we need to set up the graphs too
<imbrandon> robbiew: Y U NOT SEND ANY CANONICAL PPL
<imbrandon> http://cloudslam.org/cloud/conference-program
<robbiew> b/c it's called "Cloud Slam"
<robbiew> lol
<imbrandon> heh it's the oldest one in the country tho, Intel & everyone else is there :)
<robbiew> if we sent people to every cloud conference, we'd have no one to work on ubuntu server/cloud ;)
<imbrandon> woulda been nice to have juju in the Cloud PaaS DeathMatch deploying some PaaS :)
<imbrandon> true i was thinkin that was a big one tho, really guess not
<imbrandon> i mean it is but different big
<imbrandon> more like the Ubuntu Cloud Summit, in fact looks like most of the same attendees
 * negronjl is back 
<jcastro> jamespage: nice work on a promulgation!
<jcastro> up to 81 official folks!
<jamespage> jcastro, a little tardy....
<jamespage> but a nice charm - thanks nathwill!
<nathwill> thank you for your help in getting it to good quality, jamespage :)
<jamespage> nathwill, np - you're welcome!
<mhall119> the waiting is the hardest part..
<nathwill> take it on faith
<ejat> jcastro : Bug 997339 … u removed the new-charm tag, does that mean it's being reviewed ?
<_mup_> Bug #997339: Charm needed : Openbravo <Juju Charms Collection:New for fenris> < https://launchpad.net/bugs/997339 >
<jcastro> ejat: see the list, subscribing ~charmers is the new review process
<jcastro> I just updated the bug
<jcastro> so it's still in the queue
<jcastro> nathwill: how many charms do you have in the store now?
<SpamapS> jcastro: did you make sure charm-tools and jujucharms.com also respect that new process before changing it?
<ejat> ok noted
<nathwill> jcastro, just the one for now, got another one pending. (precise/varnish). still contemplating what rocking apps i'm familiar with that aren't already charmed, before i start my next one :)
<SpamapS> nathwill: have a look at all the existing stuff.. there's a need to make them better/morescalable/etc.
<nathwill> SpamapS: ok, i can do something like that... are we not worried about stepping on people's toes, then?
<SpamapS> nathwill: definitely not! If you have an improvement, just make a merge proposal
<jcastro> indeed
<nathwill> SpamapS: okey doke :)
<jcastro> owncloud could use an update if you're looking for an easy one
<jcastro> big new upstream release, would be nice to have it shiny in the store.
<nathwill> jcastro, i love the owncloud, so i'll definitely take a gander tonite.
<jcastro> <3
<SpamapS> make sure upgrade-charm goes smoothly :)
<nathwill> kk. looks like there's some room to add the db-relation-joined hook as well...
<jcastro> marcoceppi: don't forget to self document your maas adventures!
<negronjl> I need a review on https://code.launchpad.net/~negronjl/charm-tools/review-queue-charmers-group-juju-docs/+merge/106861
<negronjl> marcoceppi, SpamapS, jcastro, <anyone> : ^^
<jcastro> on it!
<jcastro> nuts from the branch name I thought that was docs, not code
<twobottux> aujuju: What is the best way to use the mysql charm in Juju with dynamic database credientials? <http://askubuntu.com/questions/140818/what-is-the-best-way-to-use-the-mysql-charm-in-juju-with-dynamic-database-credie>
<negronjl> jcastro:  This is the fix so the CLI review-queue get bugs from charmers and the MPs from lp:juju/docs as well
<jcastro> negronjl: dang, I don't see how I can unassign myself from the review
<negronjl> jcastro:  someone else can claim it
<negronjl> s/can/has to/
<jcastro> clint will save us
<negronjl> lol
<negronjl> jcastro: or... you can actually review this :)
<jcastro> <--- codes as much as he can speak spanish
<negronjl> ahhh ... SpamapS will save us then ... or marcoceppi :)
<hazmat> jcastro, so what do you want to show up in the review queue re bugs?
<negronjl> hazmat:  I modified the CLI review queue ... it should probably look like it ( but of course prettier ).
<negronjl> hazmat: The updated code for the cli is here: https://code.launchpad.net/~negronjl/charm-tools/review-queue-charmers-group-juju-docs
<hazmat> negronjl, i'm looking at it.. but that doesn't look right to me
<hazmat> negronjl, its still relying on tags for bugs
<negronjl> hazmat:  I updated the code in the above link
<hazmat> negronjl, also use python-dateutil
<hazmat> negronjl, dateutil.parser.parse(some_date_string)  == python_datetime
<jcastro> hazmat: anything ~charmers is subscribed to
<jcastro> that is an open bug
<hazmat> ah.. its getting both
<hazmat> negronjl, ic it now
<negronjl> hazmat: :)
<hazmat> negronjl, itertools.chain(bugs.entries, charmer_bugs.entries)
<hazmat> one block then.. cause its pretty ;-)
<negronjl> hazmat: hey .. i never said my code was pretty xD
<m_3> hazmat: talking with lp folks about the oneiric charms... think I understand what's going on with it now... long-term fix involves bugs #1003016 #1003017 and #991980 ... short-term fix is still in the works
<_mup_> Bug #1003016: branch-distro stacks on unique_name of new branches, not allowing renames <codehosting-ssh> <Launchpad itself:Triaged> < https://launchpad.net/bugs/1003016 >
<_mup_> Bug #1003017: Allow changing the new name of the branches produced by branch-distro <Launchpad itself:Won't Fix> < https://launchpad.net/bugs/1003017 >
<_mup_> Bug #991980: Oneiric official branches are all locked <Juju Charms Collection:New> < https://launchpad.net/bugs/991980 >
<negronjl> hazmat: That's actually a good idea so, I'll change that for the next iteration
<SpamapS> whoa
<SpamapS> I'm impressed by _mup_ there
<m_3> yeah... surprised that worked
<hazmat> m_3, okay that's the restore fix.. but afaics we shouldn't be renaming in anycase..
<hazmat> for the next release
<hazmat> m_3, that doesn't look promising
<hazmat> re restore fix
<hazmat> m_3, what's the short term fix discussion?
<m_3> hazmat: long-term it's the branch-distro tool we used.  it wasn't actually renaming, it was creating new branches based on the old
<hazmat> m_3, but in the process it altered the old branches?
<m_3> hazmat: it just created them with branch names we can't use... so we had to rename to /trunk
<bkerensa> jcastro: see I got my peeps writing charms for you now (nathwill) :P
<jcastro> \o/
<jcastro> bkerensa: are you sure you're not his peep?
<bkerensa> lol
<m_3> hazmat: yes, as I understand, it changes what those branches are stacked on top of
<hazmat> m_3, but shouldn't it be stacking the copy, not the original
<hazmat> the precise branch copy should be stacked on oneiric
<m_3> hazmat: not sure... I'll dig through the actual branch-distro tool itself... don't really understand what it's for... upgrading series, but the branches have very specific names
<hazmat> not the other way around
<hazmat> m_3, yeah... i just re-read your email now with proper context
<m_3> perhaps we need to just do our charm promotion from scratch and not use branch-distro at all
<hazmat> but its still unclear why it modified the oneiric branches at all.
<bkerensa> SpamapS: let me know when you guys need more reviewers ;P
<m_3> agree
<hazmat> the stacked on branch is for the new target to pull revs from the old
<hazmat> but we have the old pulling from the new
<hazmat> which suggests something is wrong with how its being done
<hazmat> m_3, are the scripts you used for this available?
<m_3> hazmat: lp ran `branch-distro` when clint told them to
<m_3> then I ran the script that's in the email for renaming
<m_3> hazmat: the renaming broke the stacking
<m_3> hazmat: but it's a good question to ask why the old oneiric branches get suddenly stacked on top of the new precise one
<SpamapS> hazmat: I believe the reason for the stacking from precise <- oneiric is because the distro model has the oneiric branch *frozen*
<hazmat> m_3, so none of these scripts are necessary imo.. we can just script bzr to push the branches to the new location and promulgate them
<hazmat> SpamapS, but that's not the case with something like an lts
<SpamapS> hazmat: its always the case
<hazmat> SpamapS, frozen is fine.. but not accessible i find very hard to believe
<SpamapS> release pocket does not change
 * m_3  hanging with antonio... one sec
<SpamapS> and it works for crap
<SpamapS> lp:ubuntu/oneiric/anything should be "the newest upload"
<SpamapS> not "the one we froze"
<hazmat> SpamapS, that's what tags are for ;-)
<SpamapS> instead we have lp:ubuntu/oneiric-updates/anything
<hazmat> SpamapS, wait you're saying you can't access the ubuntu/oneiric/anything?... or is that workflow in combo with the rename that killed everything
<SpamapS> hazmat: I'm going to see if we can just write our own fork of branch-distro that doesn't do this
<SpamapS> hazmat: you can read it, but you can never change it
<hazmat> SpamapS, where does that script live incidentally?
<SpamapS> hazmat: lp:launchpad
<hazmat> k
<hazmat> SpamapS, its not a concern atm, we've got some time, but yeah.. that we could come up with a juju charm specific one that matches our use case better
<SpamapS> hazmat: scripts/branch-distro.py
<hazmat> because they're definitely not frozen for juju
<SpamapS> hazmat: but it calls some lib/... stuff
<SpamapS> hazmat: I just think we should use tags, as you say, and updates should be able to go into the branch
 * hazmat knows better than to branch lp:launchpad
<marcoceppi> negronjl: sorry, mid-maas adventure. I can take a look in an hour or so if it hasn't been touched yet
<negronjl> marcoceppi: thx
<SpamapS> 228M	/home/clint/src/launchpad/.bzr
<SpamapS> there's actually 88M of code.. wtf?!
<hazmat> SpamapS, for the distro, the problem with tags is that they're per branch, whereas the distro wants to create an illusion of a group tag.. and individual tags still means the branch can be overwritten..
<hazmat> SpamapS, lp is ginormous
<SpamapS> Ahh there's a ridiculous amount of test data
<SpamapS> hazmat: on to an unrelated issue. Did you have a branch to enable proposed?
<SpamapS> hazmat: we need that for the SRU
<james_w> hazmat, it stacks old-on-new otherwise every six months a link (and hence a round-trip) would be added to the common case
<james_w> it's not so much that the old version is frozen
<hazmat> james_w, ah.. its an optimization
<james_w> charms could probably get away without any stacking
<hazmat> indeed
<james_w> ubuntu can't as it is *large*
<hazmat> they're really independent
<james_w> so we stack to reduce the disk usage
<james_w> which charms remain small and relatively few that shouldn't be an issue
<james_w> but if bug 1003016 is fixed then stacking shouldn't really affect you
<_mup_> Bug #1003016: branch-distro stacks on unique_name of new branches, not allowing renames <codehosting-ssh> <Launchpad itself:Triaged> < https://launchpad.net/bugs/1003016 >
<hazmat> james_w, well that's true for the most part.. but some charms have binaries or src in them.. and the number will grow larger.
<hazmat> right now a full checkout of all charms runs near ~1G afaicr
<james_w> right, and Ubuntu is something like 200G, so it will be a few years until not stacking charms would get close to the usage of Ubuntu
<hazmat> james_w, indeed
<james_w> but I don't see a problem with keeping the stacking
<james_w> it's not the stacking that makes lp:charms/oneiric/* immutable
<hazmat> as long as its done per your bug with numeric ids, it shouldn't be an issue
<james_w> yeah
<hazmat> james_w, what makes it immutable?
 * hazmat bites the bullet and checkouts lp
<james_w> hazmat, there's a check in LP that ties mutability to the status of the seres
<james_w> series
<james_w> but having said that https://launchpad.net/charms/+series says that both oneiric and precise are "Active Development", so perhaps it only allows one to be mutable
<hazmat> james_w, i dunno that we've really tested it
<mhall119> so close
<hazmat> james_w, yeah.. the ppa oneiric charms are still mutable
<hazmat> http://jujucharms.com/~yellow/oneiric/buildbot-slave has a recent commit
<james_w> hazmat, but not the official?
<hazmat> james_w, the official ones are all unavailable
<hazmat> james_w, they can't be checked out
<SpamapS> james_w: ultimately, I think we want to not repeat the mistake of having pockets for updates/security in lp:charms/* .. we just want a branch, which is the latest one that everyone should have.
<hazmat> because of the stacking issue
<james_w> hazmat, right
<james_w> but I wonder if they are immutable too
<james_w> SpamapS, yeah, as long as you don't have pockets you shouldn't have pocket branches for sure
<SpamapS> james_w: well, IMO ubuntu shouldn't have pocket branches either.
<SpamapS> :-P
<james_w> well, IMO it needs them as long as it has pockets
<james_w> we could get rid of pockets
<negronjl> m_3:  Are you still reviewing https://bugs.launchpad.net/charms/+bug/898714 ?
<_mup_> Bug #898714: Charm Needed: The Locker Project <Juju Charms Collection:Fix Committed by bkerensa> < https://launchpad.net/bugs/898714 >
 * negronjl is going through the review_queue now
<SpamapS> The archive needs them, because apt needs them.. but for branches, you really only want one mainline of "what should be shipped to users right now"
<SpamapS> Anyway, we've diverged
<SpamapS> The fix seems to be to not use branch-distro as-is and fork its behavior somewhat for charms..
<SpamapS> don't make the branches immutable.. don't rename them to $series, and that will eliminate the problems we have
 * m_3 back
<m_3> negronjl: I grabbed that the night of the contest, but then we decided to use a different triage process
<negronjl> m_3: is it up for grabs then ?
<negronjl> m_3: so I can review it
<m_3> negronjl: sure
<negronjl> m_3:  ok.  I am reviewing it now.
<m_3> negronjl: lemme know if I have to do something to the bug to release it
<mhall119> m_3: I've almost got a working summit charm
<mhall119> without puppet
<m_3> mhall119: cool!
<negronjl> m_3: ok
<m_3> mhall119: lemme know when it's ready to deploy and we can swap it out for linux plumbers
<m_3> mhall119: linux plumbers needs a little love anyways... the environment's pretty stale
<james_w> SpamapS, I don't think branch-distro makes them immutable
<mhall119> m_3: this might take some more testing before it's ready for real-world use
<SpamapS> james_w: its entirely possible that I thought they were immutable, but they were actually just stacked improperly
<mhall119> but the good news is, 90% of the charm is being auto-generated from the django setup
<mhall119> which means we'll be able to do the same for any django site
<james_w> SpamapS, sounds possible, but they may well be immutable
<mhall119> django-openid-auth is a pain though, how were you installing 0.4 ?
<james_w> SpamapS, if they are then I don't think it's because of branch-distro, I think it's just a piece of logic in Launchpad somewhere
<m_3> mhall119: ah, that's right... I remember you were working on a generator for django charms
<m_3> mhall119: I'll have to look... one sec
<SpamapS> james_w: ahh, like "newest active dev series wins" ?
<james_w> SpamapS, possibly
<james_w> the rule used to be "if you can upload the package to that pocket you can push to the branch"
<james_w> I'm not sure how that has changed
<m_3> mhall119: I do a pip install from requirements.txt first and then install django packages for django openid and python openid to catch anything missing
<james_w> and I'm not sure what the rules are for that when there are two active development releases, as Ubuntu never does that
<jimbaker`> mhall119, like the idea of the django charm generator - i wonder if it can be genericized to just use some juju cleverness, but best step is to get this sort of stuff working first
<james_w> jimbaker`, what juju cleverness would that be?
<SpamapS> james_w: perhaps more confusing is whether or not ~charmers == ~ubuntu-core-dev
<jimbaker`> james_w, as in config settings or some of the relation orchestration support
<SpamapS> jimbaker`: IMO juju needs 'extends: other-charm-name'
<SpamapS> jimbaker`: if juju had single inheritance, I'd be a very happy camper. :)
<mhall119> jimbaker`: what kind of juju cleverness?
<jimbaker`> SpamapS, i think that's a charm splice sort of thing for now
<SpamapS> jimbaker`: charm splice is more like mixins.. I want real inheritance.
<SpamapS> so you can have a django app that inherits from the django abstract charm.
<mhall119> jimbaker`: I'm already looking at things like settings.py, setup.py, and requirements.txt to generate custom charms
<jimbaker`> mhall119, without looking at your charm, rather impossible to  say specifically
<jimbaker`> SpamapS, how would you use charm inheritance on top of splicing? i believe it makes sense, just want to understand better what we are losing
<SpamapS> jimbaker`: splicing is a hack.. just look at how it does relations between spliced charms.. bad bad bad idea IMO
<mhall119> jimbaker`: you can bzr branch lp:~mhall119/+junk/django_juju if you're interested
<jimbaker`> mhall119, thanks!
<mhall119> jimbaker`: simply add 'django_juju' to your settings.py, and run 'python manage.py charm'
<SpamapS> jimbaker`: inheritance would just allow us to have abstract charms for things like django and nodejs
<jimbaker`> SpamapS, seems very reasonable to me. i do wonder if there's an intermediate step that's less hacky but doesn't require this getting into core
<jimbaker`> SpamapS, which i find to be very interesting questions!
<SpamapS> jimbaker` the reason it needs to be in core is I want it to be easy to use.
<SpamapS> jimbaker`: I don't want an intermediate "build" step
<james_w> I think a subordinate for a django app would likely work reasonably well too
<SpamapS> jimbaker`: I want to be able to say 'extends: mysql' and have juju look in the current namespace, and then maybe also in the default namespace
<SpamapS> james_w: yeah thats the current way to do this
<marcoceppi> Anyone have any experience bootstrapping MAAS?
<SpamapS> but Its clunky IMO
<SpamapS> ok, lunch time
<m_3> yikes... yeah, I'd better eat too
<m_3> we'll chat about how to immediately fix the oneiric charms... I'm not happy that it might require us to rename (break) the precise ones as a step in the process
<jimbaker`> mhall119, looks interesting. my first reaction is, it still looks like it could be done with config settings. but i need to delve more into it. regardless, your efforts would not be wasted, just a matter of changing when things are done
<marcoceppi> Getting the following error when trying to bootstrap MAAS: http://paste.ubuntu.com/1001423/
<m_3> marcoceppi: looks like dns stuff... maybe check that you can dig stuff using your maas server for dns?  dunno
<marcoceppi> m_3: I'm using an IP for the MAAS master
<marcoceppi> I just realized I haven't "commissioned" the machines
<marcoceppi> trying that first
<negronjl> marcoceppi: lol
<negronjl> marcoceppi: now you just need to trip over the ethernet cable ;)
<marcoceppi> negronjl: they're all virtual machines :)
<hazmat> SpamapS, ideally we'd have generic charms for those things imo
<hazmat> SpamapS, even more ideally we just heroku packs ;-)
<hazmat> which are all mit licensed
<SpamapS> hazmat: but the point is that I want a charm for *my* service, not some generic django thing
<mhall119> jimbaker`: maybe, I'm not too familiar with configs
<hazmat> SpamapS, but then its just a matter of configuration of the generic django thingy
<SpamapS> hazmat: django can generically talk to databases, but only my app knows what context each db connection is in, or how I also talk to rabbitmq
<SpamapS> hazmat: no, unless you have dynamic relationships.. I need explicit relations for each thing I want to relate to
<mhall119> ok, I think I have it working on a local sqlite db
<hazmat> SpamapS, there are fairly standard settings for all those things, and yes the generic needs to support all of the possible rels, and inject into std settings
<SpamapS> hazmat: lets say I start using neo4j .. django might not support that. Or two instances of the database, one for highly transactional database stuff, and one as a readonly slave of public data that I need... I doubt there are standard settings that can handle this. Plus when I relate to the databases.. how do I give that context without separately named relations?
<mhall119> I need to make it convert bzr+ssh branch urls to their corresponding http ones
<mhall119> \o/ it works!
<bkerensa> jcastro: you need to do some heavy juju evangelizing up here in July
<bkerensa> ;)
 * bkerensa is getting bogged down by the "Puppet is Win" crowd
<jcastro> juju works awesome with puppet. :)
<SpamapS> bkerensa: indeed.
<SpamapS> jcastro: so, just when we got a "big list" .. you go and move the way to calculate the big list. :-/
<mhall119> mhall@mhall-laptop:~/projects/juju-local/mhall-local$ juju add-relation postgresql summit
<mhall119> No matching endpoints
<jcastro> yeah dude, I'm sorry. :(
<mhall119> :(
<SpamapS> jcastro: fixing now in charm-tools
<jcastro> SpamapS: hazmat is going to rescue me
<SpamapS> heh, who cares about the web version? ;)
<jcastro> I thought juan just fixed it
<mhall119> what am I doing wrong with this relation?
<SpamapS> oh if he did then fantastic :)
<jcastro> hey look, what will your backport policy be for charm tools in 12.04?
<jcastro> I want the text one too
<jcastro> but I don't want a PPA
<SpamapS> jcastro: charm-tools is a great candidate for precise-backports
<SpamapS> just have to do the work of requesting it
<bkerensa> jcastro: yeah well the OSU LUG folks are pro-Puppet and think it's futile to use Juju at all
<marcoceppi> m_3: I needed to specify a port for the URL :\
<bkerensa> :P also they seem to have beef with the juju documentation
<bkerensa> said its hard to find info
<jcastro> we all have beef with juju docs.
<SpamapS> jcastro: I kind of like that we're all subscribed to the new charm bugs now. Getting email about them is actually going to make people look at them more I think.
<imbrandon> everyone is working on them
<SpamapS> The docs aren't that bad IMO
<SpamapS> I mean, there's a getting started
<SpamapS> it works reasonably well
<jcastro> SpamapS: indeed, I feel stupid that we didn't do it like this in the first place
<SpamapS> Most people get stuck on the "you need an amazon account"
<marcoceppi> I hope to make the docs super this cycle
<jcastro> but oh well, there was something nice about the wild wild west of the last cycle, heh
<mhall119> jcastro: http://paste.ubuntu.com/1001510/
<mhall119> I need help
<imbrandon> bkerensa: and you dont have to convert everyone :) let them be puppet fans, competition is good
<jcastro> They don't compete!
<imbrandon> just mention once in a while that Wint^Juju is comming
<negronjl> SpamapS: I fixed the review-queue subcommand in charm-tools but it needs review ( https://code.launchpad.net/~negronjl/charm-tools/review-queue-charmers-group-juju-docs/+merge/106861 )
<jcastro> If I said "Screw dpkg, apt is the better tool" you would make fun of me for comparing things the wrong way
<bkerensa> jcastro: I will wear my PuppetLabs shirt to your talk :P
<jcastro> mhall119: never saw that one before.
<imbrandon> jcastro: if you insisted at the freak level i do when i get excited, aka your typical lug, yea i'd try, then let them learn the hard way
<marcoceppi> mhall119: does summit use postgresql?
<negronjl> bkerensa:  I'll be working on puppet modules that will use SpamapS puppetmaster and puppet client charms ... you will be able to deploy your puppet modules with Juju and _without_ having to change the way you code your puppet modules
<mhall119> marcoceppi: yeah
<marcoceppi> mhall119: I don't know where the summit repo is, can you pastebin each metadata.yaml file?
<mhall119> ah, my summit metadata.yaml had:
<mhall119> requires:
<mhall119>   db:
<mhall119>     interface: sqlite3
<mhall119> do I have to re-deploy summit to use interface: pgsql?
<bkerensa> imbrandon: do you want two smartfish mice?
<jcastro> man that error sucks
<imbrandon> are they the same ?
<bkerensa> negronjl: I dont use puppet.... I just use their monies :P
<SpamapS> negronjl: review in progress....
<negronjl> bkerensa: ... just giving you something to tell the puppet minions :)
<negronjl> SpamapS: thx
<bkerensa> :)
<SpamapS> negronjl: btw thanks for fixing the tabs :)
<negronjl> SpamapS: lol ... np
<SpamapS> negronjl: though I'd rather have seen those fixed as a separate change.. it's hard to see what you changed
<negronjl> SpamapS: I hate them as much as the next guy ... was just trying a new editor ...
<SpamapS> been there, done that, got those scars
<bkerensa> imbrandon: I have a fullsize and mini still in the box in my swag closet
<imbrandon> ahhh kk i guess
<negronjl> SpamapS: I guess my thoughts on trying _not_ to complicate things with multiple MPs and more process actually made things more complicated ... :/
<SpamapS> negronjl: merge away
<bkerensa> imbrandon: Im also getting some of those sexy mice soon though
<negronjl> SpamapS: thx
<bkerensa> :D
<SpamapS> negronjl: meh, I just did the diff w/ -b
<marcoceppi> mhall119: yeah, if it knows how to use pgsql then add another relation db: pgsql
<marcoceppi> that way you can keep the sqlite3 there
<marcoceppi> after you update the metadata run an upgrade-charm
<jcastro> and in the README you'll want to say "We use sqlite and postgres, but we recommend using $whichever" so people know how to deploy it
<marcoceppi> jcastro: +1
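<editor's note> To make marcoceppi's suggestion concrete, here is a minimal metadata.yaml sketch — the names and summary are illustrative, not summit's actual file — declaring both backends under distinct relation keys (a second `db:` key would be invalid YAML):

```yaml
# Illustrative metadata.yaml for a charm that can use either backend.
# Each requires entry needs its own relation name; repeating "db:"
# twice in the same mapping would not be valid YAML.
name: summit
summary: Summit scheduling app (example)
requires:
  db:
    interface: sqlite3
  pgsql:
    interface: pgsql
```

With that in place, `juju upgrade-charm summit` should pick up the new relation, and `juju add-relation summit:pgsql postgresql:db` should wire it to the database.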
<imbrandon> man, a README should be a README. can juju charms grow a proper doc that's not 1000000000 miles long due to it being best to put everything in there?
<mhall119> marcoceppi: ok
<SpamapS> With the "use a local db" vs. "use a remote db"
<mhall119> jcastro: that's going to be project-specific
<SpamapS> I'd like to think that we make the charms default to using a remote db, and make a setting to use the local db
<SpamapS> It causes a lot of problems if you default to local db
<SpamapS> its neat for demos..
<mhall119> SpamapS: I have it set to use the local sqlite until a db relation is added
<marcoceppi> SpamapS: I'd like to think that charms can interchange local to remote
<mhall119> django doesn't like running without a db
<SpamapS> but ultimately its hard to migrate away from the local db to the remote db.. but its not hard to just "turn on" local db before any relations are made.
<SpamapS> mhall119: yeah, I find that problematic
<SpamapS> mhall119: so defer django until you have the remote db or have the setting to use the local db.
<SpamapS> marcoceppi: sure, but you're left with this orphaned local db... who knows what state it was in..
<marcoceppi> SpamapS: wouldn't the charm clean it up?
<SpamapS> marcoceppi: I would hope not!
<SpamapS> data is precious
<marcoceppi> SpamapS: it's using remote now!
<marcoceppi> dump to a consumable format, clean the database
<marcoceppi> save the data, but remove the moving parts
<SpamapS> marcoceppi: right, but think of the instance where you set things up 90% of the way, then realized "wait we have to scale" .. you don't want the relationship to delete your old DB
<imbrandon> you can remove the unit and the data will stay, i dont see why moving to remote would be any more likely to trigger a cleanup
<SpamapS> Oh ok you meant something way more sane than I thought ;)
<marcoceppi> SpamapS: no, it should push the data out to the remote db!
<SpamapS> Another thing to consider, maybe sqlite should be a subordinate
<SpamapS> Then the workflow is the same, it just ends up local to the unit
<imbrandon> SpamapS: i dunno, if its absolutely required as part of the code on a django app
<imbrandon> why
<mhall119> do I have to expose the postgresql before my summit unit can see it?
<imbrandon> e.g. that sub is not good elsewhere, is it?
<SpamapS> mhall119: no
<mhall119> or is there an internal private IP it can connect to?
<SpamapS> imbrandon: actually it would be useful for openstack
<SpamapS> imbrandon: nova also defaults to using sqlite, which causes problems because things register with the sqlite db, then the database is changed to mysql.. and the sqlite db registrations are not migrated
<imbrandon> hrm well i intentionally left the sqlite in the drupal charm, as its more like a flatfile fallback
<imbrandon> not a use case
<marcoceppi> mhall119: the relation settings will provide the internal hostname for the hook to use
<SpamapS> mhall119: the address that the pgsql charm returns is private and should be freely accessible from the summit unit
<mhall119> I have db_host=`relation-get host` in my db-relation-changed hook on the summit charm
<imbrandon> SpamapS: no thats the apps problem as far as data integrity , every app already needs to deal with that as is
<mhall119> which gives me 192.168.122.168
<marcoceppi> mhall119: right
<mhall119>   File "/usr/lib/python2.7/dist-packages/psycopg2/__init__.py", line 179, in connect
<mhall119>     connection_factory=connection_factory, async=async)
<mhall119> but I get that
<mhall119> psycopg2.OperationalError: could not connect to server: Connection refused
<mhall119>         Is the server running on host "192.168.122.168" and accepting
<mhall119>         TCP/IP connections on port 5432?
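<editor's note> As a side note, a runnable sketch of the kind of db-relation-changed hook mhall119 describes — `rel_get` is a stub standing in for juju's `relation-get` hook tool so the script can run outside a hook context, and the key names (host, user, password, database) are assumptions based on the pgsql interface discussed here:

```shell
#!/bin/sh
# rel_get stubs juju's relation-get so this sketch runs standalone;
# in a real charm hook you would call relation-get directly.
rel_get() {
    case "$1" in
        host)     echo "192.168.122.168" ;;
        user)     echo "summit" ;;
        password) echo "sekrit" ;;
        database) echo "summit" ;;
    esac
}

db_host=$(rel_get host)
db_name=$(rel_get database)

# Relation settings arrive asynchronously: if the remote unit has not
# published anything yet, exit cleanly and wait for the next invocation.
if [ -z "$db_host" ]; then
    exit 0
fi

echo "configuring $db_name on $db_host:5432"
```

A "Connection refused" like the one above generally means the address resolved but nothing was accepting connections on port 5432 — e.g. the server not yet listening on that interface — rather than bad relation data in the hook itself.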
<imbrandon> SpamapS: but yea, honestly doesn't that sound like a bug in Nova ( corner case rarely thought of on the dev side, but still one nonetheless )
<imbrandon> btw, drupal does the same and has the same issue if you dont check for it, and most modules do
<imbrandon> if they use the db directly
<SpamapS> imbrandon: err, no. Its a bug in the charm.
<SpamapS> imbrandon: the charm should migrate the data if its going to change the database target
<imbrandon> i want cross environment relation hooks , please  please
<SpamapS> imbrandon: nova is doing exactly what it should. Writing data into the database it is configured for.
<SpamapS> imbrandon: as is keystone, and everything else.
<SpamapS> imbrandon: you can do cross env with a proxy charm
<imbrandon> yea i considered that
<SpamapS> Its not even a horrible idea :)
<SpamapS> leave a 'juju open-tunnel' running so its not slow
<imbrandon> nope, would sanitise it too
<imbrandon> god why do i get so cranky/irritated at every little thing when i am sick, ugh, even irritated at that itself
<imbrandon> bleh
<imbrandon> back in a bit , gonna try to eat something an see how that goes
<hazmat> SpamapS, its a 95/5 rule re django generic ;-)
<imbrandon> SpamapS: you got some time to show me how to make mysql create a db (or really a table is fine) using the MEMORY backend and then run a quick grant and schema creation + dataset on it? well thats assuming i dont just need to do it externally etc, kinda hoping its something i dont know about internally that will do it so i dont have to manage it separately
<imbrandon> if its just "run blah.sql" on server start then i can do that already
<SpamapS> hazmat: but what happens when you cross from 95 to 5 and you're using the generic django charm? Fork it?
<SpamapS> hazmat: because ultimately, those 5 percenters are the most successful users.
<mhall119> SpamapS: every django site will most likely have it's own charm
<SpamapS> yeah that sucks
<SpamapS> I agree, but I think we can fix that
<hazmat> exactly
<SpamapS> with inheritance
<mhall119> yeah, but that's because Django is a framework, not a service
<SpamapS> not asking for it now
<SpamapS> just saying, thats how you fix that
<mhall119> every django site would still have it's own child-charm
<hazmat> SpamapS, fair enough
<hazmat> mhall119, 95% of them shouldn't need it
<mhall119> in my experience, 100% would
<hazmat> they need a requirements.txt for deps, and db settings injection, and the rest is just what?
<mhall119> either that or a custom config so large that it may as well be a custom charm
<hazmat> mhall119, the charm is going to pull from git/vcs.. the user already has their specific settings configured
<hazmat> mhall119, what else do they really need?
<mhall119> hazmat: environment setup, db bootstrapping, etc
<SpamapS> I have to agree that everybody *starting out* will just need a db and requirements
<SpamapS> but even that case is *so* much smoother with inheritance
<hazmat> mhall119, ignoring django admin commands, which can be handled in a number of different ways.. environment setup.. is just config of settings.py typically
<SpamapS> If I could write a django-myapp charm that just extends: django and drops a requirements.txt file in the right place.. bazinga, thats hot
<imbrandon> SpamapS: you can make a sub do that :)
<hazmat> not really
<SpamapS> Indeed, but thats clunky :)
<imbrandon> hazmat: sure, it wouldn't have the implied checking inheritance would, but it would surely drop the half of the app you have in the sub anywhere, including the right place
<mhall119> hazmat: for summit we have other branches we need to pull in
<imbrandon> mhall119: yea thats broken
<SpamapS> I can write subs that drop files places. Thats the point of subs. Whether you *want* people to use them that way, thats how they'll be used.
<hazmat> mhall119, summit is a bit of a custom job i'd say
<mhall119> LTP is the same
<avoine> this is the code I use if your curious: https://code.launchpad.net/~patrick-hetu/+junk/python-django
<SpamapS> what django app isn't custom? :)
<imbrandon> mhall119: i saw what chris did there, and hrmmmm yea bad juju
<SpamapS> LIke if you don't need a custom app, use a CMS
<mhall119> heck, most of ISD's projects have various post-install things that need to be done
<avoine> also checkout the *_site in my account to see how I deploy apps
<mhall119> then you have the question of is it using django 1.2 or a later version
<mhall119> is it using South or not
<imbrandon> you dont HAVE to , i cleaned it up for him
<marcoceppi> YES!
<hazmat> SpamapS, mhall119 okay.. you've convinced me.. although the version and extensions can be done generically i think.. there are all sorts of things that aren't.
<mhall119> probably using celery will require some other customizations
<hazmat> mhall119, not really.. celery config is pretty standard
<mhall119> no special db setup for it?
<hazmat> mhall119, no more so than for the django's rdbms db
<mhall119> ok
<hazmat> so maybe its 80%/20% ;-)
<imbrandon> and btw, mentioning that ISD does something in a certain way pertaining to web app engineering is not a real heavy argument with me :)
 * hazmat nods
<mhall119> imbrandon: it is to me
<SpamapS> hazmat: I think 80/20 is definitely realistic in terms of one-db-simple-setup/customized-in-crazy-ways
<m_3> mhall119: did you get your connection to pgsql worked out?  you need to disambiguate the endpoint `juju add-relation summit postgresql:db`
<imbrandon> mhall119: but yea the unholy mess that was summit's base theme checkout, that was 3 layered bzr forks at different directory roots but overlaid, and THEN more unholiness
<imbrandon> yea its bad
<mhall119> I'm working on a 95% solution, that requires every project to add their own extra 5%
<mhall119> m_3: still no
<SpamapS> hazmat: whats the story on bug 926550 ? Without it, we can't do proper testing of a precise SRU
<_mup_> Bug #926550: No way to test proposed updates to juju <rls-mgr-p-tracking> <juju:In Progress by hazmat> <juju (Ubuntu):Triaged> <juju (Ubuntu Oneiric):Triaged> <juju (Ubuntu Precise):Triaged> < https://launchpad.net/bugs/926550 >
<m_3> imbrandon: I'd expect submodules could work for themes if necessary
<m_3> mhall119: try using `postgresql:db` instead of just `postgresql`
<imbrandon> m_3: indeed, i actually have a branch in LP where i had it all cleaned up like that and cjohnston did not have time to "learn a new way" before UDS
<m_3> imbrandon: gotcha
<imbrandon> given it was only like 2 weeks
<imbrandon> but still not a hard change
<m_3> right
<imbrandon> brb
<imbrandon> btw if anyone mentions i actually admitted to cleaning up django code i will deny it
<SpamapS> hazmat: Ugh.. http://askubuntu.com/questions/140818/what-is-the-best-way-to-use-the-mysql-charm-in-juju-with-dynamic-database-credie .. bad answer!
<SpamapS> hazmat: the mysql charm *already* has that capability
<SpamapS> hazmat: and we don't want to be encouraging forks!
<hazmat> SpamapS, then please correct away, it didn't have that ability to my knowledge
<mhall119> m_3: I ran juju add-relation summit:db postgresql:db
<mhall119> didn't work
<imbrandon> drop the :db bits
<hazmat> SpamapS, it can create multiple dbs per rel?
<mhall119> imbrandon: I tried it without the :db bits
<mhall119> didn't work
<SpamapS>   db-admin:
<SpamapS>     interface: mysql-root
<imbrandon> ok what didn't?
<mhall119> I tried it with *just* the :db on postgresql
<mhall119> didn't work
<SpamapS> hazmat: gives you root, which is what he needs.
<hazmat> SpamapS, good  point
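<editor's note> For context, the fragment SpamapS pasted above goes in a client charm's metadata.yaml; a sketch of what that looks like, assuming the mysql charm's `db-admin`/`mysql-root` relation behaves as described in this discussion:

```yaml
# Client charm metadata.yaml fragment (illustrative): relating over
# mysql-root hands the client credentials with GRANT privileges, so it
# can create databases and users dynamically instead of forking mysql.
requires:
  db-admin:
    interface: mysql-root
```

This is the existing-charm answer to the askubuntu question about dynamic database credentials.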
<imbrandon> add-relation summit postgresql
<mhall119> imbrandon: it gives the same error
<mhall119> it sends connection info to my summit hook just fine
<imbrandon> ok then the metadata.yaml is likely off
<mhall119> it's just that the connection info doesn't work
<marcoceppi> SpamapS: +1
<imbrandon> can you pb the error and full metadata.yaml please :)
<m_3> mhall119: strange... can you paste your metadata on each one?
 * SpamapS answered, and -1'd
<mhall119> m_3: one minute
<hazmat> SpamapS, updated my answer as well
<mhall119> m_3: http://paste.ubuntu.com/1001655/ is the summit metadata.yaml
<imbrandon> SpamapS: while i 1000% agree about no need for the shell etc, why are we discouraging forks, i missed that bit cuz everyone used to be all "everyone should have their own custom local charms" iirc
<hazmat> mhall119, that's not valid yaml
<hazmat> i hope not anyways.. multiple keys of the same value?
<mhall119> hazmat: what?
<hazmat> db:  && db:
<SpamapS> imbrandon: because forks mean only the users of the new fork get the new functionality.
<m_3> mhall119: with http://paste.ubuntu.com/1001657/ and http://paste.ubuntu.com/1001658/, I can use http://paste.ubuntu.com/1001666/ to drive it (note the postgresql:db in the add-relation)
<SpamapS> imbrandon: people should have their own custom charms for the bits that are custom to their infra
<imbrandon> SpamapS: and thats gonna happen if we fight it or use the energy elsewhere :)
<hazmat> SpamapS, forks are the lifeblood of evolution :-)
<hazmat> the dna even
<SpamapS> Well I'm not saying people should just accept what is there as-is
<m_3> mhall119: I see what you're trying to do, but we need to use a different interface to connect to both interchangeably
<SpamapS> I'm saying, people should be encouraged to *submit a patch*
<SpamapS> not to just go off in the weeds with their own custom thing
<SpamapS> the whole point of the charm store is collaborating around best practices
<imbrandon> sure but what he had was not patch worthy and a local fork would have been just fine
<m_3> mhall119: the trick would be to extend the mysql and pgsql interfaces to be generic relational-db with an extra db-type parameter
<imbrandon> is what i mean
<SpamapS> imbrandon: it was patch worthy, and we already had that patch, when we added phpmyadmin, we added it to mysql.. and now everybody has access to it.
<SpamapS> m_3: ew
<imbrandon> not the "problem" of dynamic credentials
<m_3> mhall119: then the client charm can do different things with different db types
<SpamapS> imbrandon: that is the problem. He wants dynamic creds.. thats what a user with the grant option is for.
<m_3> mhall119: otherwise, stick to a single type of backend
<SpamapS> m_3: wouldn't you rather have a strong "I speak to these DB types" in metadata.yaml ?
<imbrandon> ok maybe this is a bad example cuz we're on the tech bits a little much, i was generalizing a little more
<imbrandon> SpamapS: then juju would need to learn every new dbms and not just a new charm that way, wouldn't it
<SpamapS> imbrandon: You can have a requires: interface: db-with-no-charm and when that db gets charmed, voilà, you can relate to it. :)
<imbrandon> hrm
<SpamapS> imbrandon: but I see the point
<m_3> SpamapS: depends... sometimes no.  in particular for a framework like django, I might lean towards a more generic interface
<SpamapS> a generic one is not totally useless
<SpamapS> anyway, I have to go afk for a bit
<imbrandon> esp largely interchangeable ones in common webapps that abstract them with pdo anyhow on the code side
<SpamapS> m_3: Yeah I do see that value.. where django can at least *try* to talk to anything its ORM can talk to, and we might not have charms for all of those yet.
<hazmat> jcastro, negronjl the review-queue web thing should be good now
<imbrandon> and would just flip flop or be a real mess like drupal 8 and use all 4 types at the same time for different tables and data
<m_3> SpamapS: but yes, in general I do agree that your app should be opinionated about the relational store it uses... then it can optimize accordingly.  Probably not a place to be super general
<imbrandon> lol
<m_3> hazmat: url path to review queue?
<hazmat> m_3, http://jujucharms.com/review-queue
<m_3> thanks
<hazmat> m_3, see channel title ;-)
<jcastro> hey, fixed!
<jcastro> I changed it when I broke it
<jcastro> fixing
<m_3> hazmat: doh
* jcastro changed the topic of #juju to: Reviewer: ~charmers || Review Queue: http://jujucgarns.com/review-queue  || Charms at http://jujucharms.com || Want to write a charm? http://juju.ubuntu.com/Charms || OSX client: http://jujutools.github.com/
<imbrandon> hrm wtf am i gonna do with drupal 8, thinking about that its gonna be a real mess SpamapS
<hazmat> m_3, its tricky.. i never read that..
* hazmat changed the topic of #juju to: Reviewer: ~charmers || Review Queue: http://jujucharms.com/review-queue  || Charms at http://jujucharms.com || Want to write a charm? http://juju.ubuntu.com/Charms || OSX client: http://jujutools.github.com/
<jcastro> ok so now we have a queue, if you guys could post on the list on how we should resolve getting this down hardcore at first and then move to a more normal mode
<jcastro> (see my post about the queue from early today)
<imbrandon> as it will seriously use mysql and pgsql and sqlite and memcache all at the same time for what in d6 and 7 are one single database
<imbrandon> if you tell it to
<imbrandon> bah i worry about that in 2 years when its released
<m_3> imbrandon: specific interfaces (mysql,pgsql) can live right alongside more generic ones... we just have to keep the docs/examples/readmes straight so people know what to use
<imbrandon> yea, plus i hope drupal learns of their madness before release on that one
<imbrandon> they were trying, like a lot of apps do, to solve a problem inside the code that needed to be done at another level of the stack
<imbrandon> e.g. db HA in this case
<imbrandon> makes me cringe every time i see stuff like that
<jcastro> https://tahoe-lafs.org/trac/tahoe-lafs <-- I just sent a mail to this guy for a charm
<jcastro> I can't believe we didn't think of it before, it's pretty clever
<jimbaker`> jcastro, sounds good about a charm for tahoe-lafs - one of my friends works on that project (zooko)
<jcastro> jimbaker`: he's the guy I just mailed!
<jcastro> jimbaker`: it's interesting because if you have multiple cloud providers ... it's like what Tahoe is for!
<jcastro> ok so we're down to 24 items now
<jcastro> I removed a bunch of mostly dead ones where the person wasn't responding
<jcastro> and told them to add the group if they need a review
<jcastro> so these should be the real deal
<marcoceppi> Why does GlusterFS show up if it's "In Progress"?
<imbrandon> Libcloud is composed of multiple components, currently those are:
<imbrandon> Compute - libcloud.compute.*
<imbrandon> Storage - libcloud.storage.*
<imbrandon> Load balancers - libcloud.loadbalancer.*
<imbrandon> DNS - libcloud.dns.*
<jimbaker`> jcastro, very cool
<m_3> marcoceppi: yeah, it shouldn't be... we've had problems in the past with removing things once they were in such a list... perhaps that's what's going on
<jcastro> is it promulgated?
<jcastro> it's showing up under ~marcoceppi in the web UI
<marcoceppi> jcastro: it's not promulgated because it's not done yet :)
<jcastro> oh ok so you don't need a review?
<marcoceppi> well, the glusterfs-server one is, the client still needs a few tweaks
<marcoceppi> nope
<jcastro> unsubscribe ~charmers
<marcoceppi> that's why it's in progress
<marcoceppi> Ah, just unsubscribed charmers
<jcastro> ah I see what I did there, I just swapped out the tag with the group
<jcastro> sorry about that
<bkerensa> imbrandon: http://i.imgur.com/XKj2d.jpg <-- look at that sexy beast
<m_3> jcastro: wait, so why'd you remove new-charm from Gluster?
<jcastro> m_3: we have the group now
<jcastro> we don't need the tag
<m_3> jcastro: oh, sorry I'm behind
<jcastro> no worries
<jcastro> the post this morning on the juju list should explain it all
<m_3> but note that it still shows up in the review queue
<jcastro> the TLDR is, if it needs a review, subscribe ~charmers
<jcastro> yeah
<jcastro> we just did that so we wouldn't accidentally lose one
<jcastro> but I went through and just fixed all the open bugs
<MarkDude> Which direction will the weighting of resources go?
<m_3> jcastro: so should I close #1003116 as invalid?
<_mup_> Bug #1003116: bugs that're In Progress shouldn't show up in the review queue <charmworld:New> < https://launchpad.net/bugs/1003116 >
 * MarkDude was reading the ML and sees this piece as important to getting it in Fedora repos
<jcastro> m_3: yeah that was just me messing it up
<jcastro> MarkDude: what do you mean?
<m_3> jcastro: ok, thanks... now I'll go read the mail :)
<jcastro> m_3: though it probably shouldn't do that, dunno
<jcastro> now I'm thinking otherwise
<imbrandon> bkerensa: do you really need me to take a pic of my ipad2 and ipad3 side by side here on the desk :)
<MarkDude> prioritizing resources
<jcastro> m_3: "let's discuss next call" for that one
<m_3> jcastro: ha!
<jcastro> I don't feel strongly about it, I mean, adding the group is fine I think
<jcastro> and if the person wants more time to work on it without being in the queue just remove the team and tell him, readd it when you're done
<bkerensa> imbrandon: your ipad2 and ipad3 cost how much? I got like $5k worth of free stuff last year alone :P
<jcastro> that's how distro does it
<jcastro> MarkDude: I need more specifics, you mean as far as the python version?
<imbrandon> bkerensa: maybe true but i got $500 that i actually use daily :)
<MarkDude> resource maps
 * imbrandon runs from those
<imbrandon> libcloud
<MarkDude> remembering that I am a bit like Jono on technical matters :D
 * imbrandon runs from everyone 
<imbrandon> MarkDude: think of it like an internal spreadsheet that juju will keep to itself with cross refs of all the cloud providers and what they provide etc etc
<jcastro> SpamapS: bikeshed, our own calendar or just share the calendar with distro? i'd rather not proliferate more calendars in the project
<imbrandon> past that it doesn't matter to anyone but the core dev peeps
<jcastro> they have at most 3 people per day
<MarkDude> Well from what I have gathered - this will make it a bit easier to *sell* to Fedora
 * MarkDude understands linking it to USD is a method that will help management folks
<MarkDude> And that this is a debate that has devs on one side, and managers on the other
<jcastro> I still have no idea what you want, do you mean like an easy explanation to explain to them why they'd want juju?
<MarkDude> Well from my read of recent emails, it looks like there is a debate
<MarkDude> and that those wishing to tie resource maps to USD will win
 * MarkDude has the simple explanation understood
<MarkDude> sorta at least
 * MarkDude is just not sure if this debate will be over soon
<MarkDude> Or if you guys will drag it out a bit
<MarkDude> Like Fedora is doing with naming debate..... still
<imbrandon> MarkDude: oh, some of that too is just having it hashed out on this side of the fence; putting it to any supported world bank currency, even if we choose another, would be dead simple
<jcastro> MarkDude: ok I need a link so I can catch up. :)
<MarkDude> https://lists.ubuntu.com/archives/juju/2012-May/001542.html
<MarkDude> towards the end
<imbrandon> programmatically, currency conversion is easy MarkDude, so whatever is decided, if fedora needed to change it to something else its dead simple once the hard bits of calc are in place, and thats a given no matter what they choose unless its monopoly money
<MarkDude> That's what I figured
<MarkDude> The devs can be sold on juju
<MarkDude> this part will help me get the manager types
<MarkDude> or at least those that listen to them ;)
<imbrandon> btw when you send that out or blog post it PLEASE cc me :)
<imbrandon> not meant in a bad way but that is sooo going to be my post-uds train wreck i GOT to watch happen :P
<MarkDude> Sure wooooowoooo trainwreck >>> https://docs.google.com/document/d/1h5kHxrn-DxotB-YCNHbdgfdhewkwVXCJ0D6WilkiDVU/edit
<imbrandon> it will fly but still be very fun to see :)
 * MarkDude has been given a blessing by default from some in leadership over this
<imbrandon> MarkDude: a group of 11, when there is 12 in the group someone is always Lazarus
<imbrandon> :)
<MarkDude> lol
 * MarkDude has had 10+ viewers of the document, and NO COMMENTS
<MarkDude> Those with common sense are going to stay away. Hopefully for me to write my reality based objections
 * MarkDude just assumes a few of you might find the hotdog debates funny
<imbrandon> just change it to "Maddog" instead of Hotdog, in honor of Jon Hall
<MarkDude> You like how I included my best link?
<MarkDude> LUGOD, Maddog is the 1st video
 * MarkDude is second
<MarkDude> Better than ANYTHING else I have done in FOSS
<SpamapS> jcastro: I know it seems like "moar calendars" but I'd rather have one calendar for ~charmers people to subscribe to, since I imagine a lot of ~charmers have no interest in Ubuntu dev
<SpamapS> jcastro: but, yeah, green, red.. just pick a color for the shed :)
<imbrandon> anyone seen a kfreebsd/ubuntu in the wild after debian started support for it ?
<SpamapS> imbrandon: no, but upstart would be the major hurdle there
<SpamapS> inotify *and* ptrace
<imbrandon> zfs
<imbrandon> mmm
<imbrandon> threads mmm
<imbrandon> so fast webserver mmmm
<imbrandon> and m_3 is probably cursing me for highlights
<imbrandon> lol
<imbrandon> SpamapS: would dtrace and a userspace inotify take their place ?
<imbrandon> given i have no real idea
<imbrandon> er wait, dtrace is solaris
<imbrandon> bah i give up
<SpamapS> imbrandon: I've been told that dtrace != ptrace so no
<SpamapS> imbrandon: and the inotify bits will likely just need ifdefs and then some assumptions in places after writing jobs will have to be replaced with 'initctl reload-configuration'
<imbrandon> SpamapS: ok i JUST realized something that puts to rest in my mind that resistance is futile and we should give in to libcloud and wrap it in twisted ( there is even an example to do it in the docs of libcloud ). lead dude on libcloud == cloudkick founder, cloudkick == rackspace now, rackspace == openstack
<hazmat> dtrace is awesome.. but osx, freebsd, solaris only
<hazmat> well.. oracle has a port in progress to linux.. but its not there yet
<imbrandon> ahh yea i knew it was killer on solaris and osx i thought it was something else on bsd
<SpamapS> imbrandon: zfs is meh.. there are ways to get that on Linux IIRC. Threads are not any better from the measurements I've seen. The webserver stuff I'm not aware of.
<hazmat> crossbow networking, zones containers, lets pick apart the corpse ;-)
<imbrandon> SpamapS: yea they are, i've seen real scientifically done tests, not just back-of-the-napkin ones, showing its MUCH better on bsd
<imbrandon> not zfs, the threads specifically, for serving
<imbrandon> but yea first hand
<imbrandon> and really look no further than ftp.cdrom.com
<imbrandon> one lone machine
<imbrandon> or was for years
<SpamapS> Hah that still exists?
<imbrandon> heh
<imbrandon> no idea but i did do a shit ton of research into it first hand about 3 years ago and its all still relevant
<imbrandon> web serving on bsd is by far the best hands down, its just everything else that makes it impractical to do on a scale we need
<imbrandon> thus a bsd+ubuntu would rock for that
<SpamapS> imbrandon: you'd need *at least* a 10% improvement to justify the amount of work necessary
<imbrandon> i'll put it like this i was saturating a 100MB core switch with a 600mhz celeron 1u server on a single ide drive on release day for gutsy
<imbrandon> serving ISO's as fast as i could on a bsd box to test some :)
<m_3> imbrandon: nope, I've turned off mmm highlights _long_ ago :)
<SpamapS> is that really important at this point?
<imbrandon> SpamapS: well yes and no
<imbrandon> SpamapS: for most no, for things the scale of wikimedia , very much so
<imbrandon> and yes it was well above 10%
<imbrandon> i do need to redo those control tests though on modern systems
<imbrandon> maybe when it gets cold outside and i need a heater in my garage like sabdfl, i'll see if i can peek out the newly installed google fiber :)
<SpamapS> imbrandon: I'm betting nginx closes the gap and makes it a lot less about the kernel
<SpamapS> imbrandon: given the scale of.. Facebook.. Google.. et. al. I think the linux kernel *probably* has scale figured out
<imbrandon> btw you ever had a competent ccna dude tell you that you was killing the whole row of racks in the DC's core sw? hahah interesting moment it is
<SpamapS> imbrandon: I mean, FB rewrote PHP before they tried to run on FreeBSD
<imbrandon> SpamapS: actually you're likely right, that makes a huge diff and thats why, cuz nginx does it more like bsd and apache uses builtins
<imbrandon> ahhh
<imbrandon> SpamapS: ahahahahhahhahha
<SpamapS> imbrandon: yeah, nginx and varnish both do things "the new way" not just loading everything off to libc
<imbrandon> just a php to c compiler :) roadsend is rewriting php, and quercus already did it in java for php 5.4 with 100% coverage AND supposedly performs identically to php + apc
<imbrandon> err php to c converter, still need gcc to compile hiphop code
<imbrandon> :)
<imbrandon> roadsends php llvm compiler is native tho
<imbrandon> but not done, only one done is the java one, and hiphop is "done enough"
<imbrandon> to use for most things except wordpress
<imbrandon> ( wordpress uses a few newer apis it hasn't grown yet )
<imbrandon> they even released a jit hiphop runtime too recently, not had time to even fire it up tho
<imbrandon> but its said to work "inplace" like the php-cgi kinda
<imbrandon> i wonder if i'm too damn old to go back and turn a cs degree into a doctorate and do some cool unholy php compiler stuff
<imbrandon> :)
<SpamapS> hazmat: bug 926550 ? I absolutely have to upload that in the SRU or it will never pass verification
<_mup_> Bug #926550: No way to test proposed updates to juju <rls-mgr-p-tracking> <juju:In Progress by hazmat> <juju (Ubuntu):Triaged> <juju (Ubuntu Oneiric):Triaged> <juju (Ubuntu Precise):Triaged> < https://launchpad.net/bugs/926550 >
<SpamapS> hazmat: you are marked as IN Progress on that
<hazmat> SpamapS, indeed i am
 * hazmat checks the branch
<SpamapS> hazmat: got something? I seem to recall a very simple change :)
<hazmat> SpamapS, it is pretty simple http://paste.ubuntu.com/1001944
<hazmat> SpamapS, needs another test i suppose.. but how to test it functionally?
<hazmat> oh.. i guess just try it and see
<hazmat> un momento
<_mup_> juju/proposed-support r487 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<SpamapS> hazmat: you can actually fully test it, you will just have "proposed" enabled completely
<hazmat> SpamapS, yup
 * SpamapS sets maintainer on all his charms...
<SpamapS> I have a challenge for any bored charmers
<SpamapS> I want to see 3 services tied together in a way their authors never intended, but that is useful
<SpamapS> like, vsftp <-> mysql <-> ??? that shows uploading to vsftp and then the file is owned by the same user who can log in to the web app
<SpamapS> like, maybe a pam_mysql subordinate or something
<MarkDude> imbrandon, for your amusement http://lists.fedoraproject.org/pipermail/advisory-board/2012-May/011612.html
<MarkDude> SpamapS, that sounds like a great challenge. That sortof thing is what draws me to juju :)
<SpamapS> MarkDude: shall I assign the bug to you then? ALLLLLLRIGHTYTHEN
<SpamapS> ;)
<MarkDude> Well not so much
 * MarkDude is good at talking, bbq, and wearing a penguin suit
<MarkDude> sometimes talking while wearing a penguin suit
<SpamapS> MarkDude: that sounds like 3 services that weren't intended to be integrated
<MarkDude> Think Jono, but less charming
<MarkDude> lol
<SpamapS> juju deploy markdude-in-a-penguin-suit
<SpamapS> juju deploy bbq
<MarkDude> good point
<SpamapS> juju add-relation markdude-in-a-penguin-suit bbq
<MarkDude> awesomeness
<SpamapS> I imagine you'll call that relation 'spatula-zipper' or something of that sort
<MarkDude> that or things that *just sorta happened*
<SpamapS> We'll call that the "unholy" stack
<Daviey> juju destroy-environment  # party killer
<MarkDude> Never was my intent to be known like this. Minus the bbq thing ;)
<SpamapS> Daviey: every party needs a pooper thats what we invited juju fer
<Daviey> heh
<SpamapS> Daviey: perhaps MaaS could grow support for match-light charcoal?
 * MarkDude wishes there were a way I could juju deploy sense-of-humor to some Fedorans
<Daviey> SpamapS: sounds like a power control provider.. so sure.
<SpamapS> MarkDude: closest thing you will get is a subordinate for 'sl'
<SpamapS> Daviey: it is, though cooling is an issue on systems plugged into it. :)
<MarkDude> maybe next versions
<MarkDude> time to leave the coffee shop, later folks
<imbrandon> maas match-light --constraint=196-arms
<Daviey> SpamapS: if match power control is bucket of water, it's less idempotent.. which will be an issue.
<Daviey> maybe bucket o water + hair dryer.
<SpamapS> imbrandon: did you see that Debian may drop Wordpress
<SpamapS> Daviey: I think in order to make charcoal idempotent, you have to be omnipotent
<imbrandon> no but not surprising and it's been talked about before
<SpamapS> imbrandon: how's that "pull from upstream" charm going then? ;)
<imbrandon> SpamapS: imho probably for diff reasons i think it should too, drupal, wordpress, phpmyadmin et al really move too fast for debian's pkg model
<imbrandon> SpamapS: it's done and on github, i'm doing some more "cool stuff" before i get it promulgated
<Daviey> hrm, i'm not sure everyone will be happy if phpmyadmin went to juju only.
<SpamapS> imbrandon: its really about upstream commitment.. if the security patches aren't going to be backported by upstream to anything.. then its not a good fit for distro
<imbrandon> Daviey: yea i dont think juju only is the way either, dont know a great way, but i can agree with their arguments
<SpamapS> Even though mysql isn't disclosing their security bugs, they're at least releasing them as patch-only releases
<imbrandon> SpamapS: right
<SpamapS> thats really the issue w/ wordpress et.al
<Daviey> SpamapS: i hoped the mysql issue was making progress?
<SpamapS> Daviey: the only progress is that we're crossing our fingers that they don't break stuff
<imbrandon> and even if they did no one would use it, the web changes too much even for a secure wp to be used over 5 years for the 80% market
<SpamapS> Daviey: *everybody* is in the same boat
<imbrandon> SpamapS: not really, we can choose to do ci and make the commitment to make sure the newest is best, not just dput and pray, but that's against a lot of history too
<imbrandon> i think pulling it from the archive with an easy installer like composer to get apps ( e.g. npm for nodejs ) is gonna be the way to go
<imbrandon> drupal has already moved to composer as has zendframework and symfony and such
<SpamapS> imbrandon: I'm saying we're crossing our fingers on the mysql stuff
<imbrandon> oh yea
<imbrandon> mysql is ALMOST as much magic to me as X
<SpamapS> CI is actually exactly what we'll be doing with the juju charms
<imbrandon> i can configure MYSQL tho
<imbrandon> :)
<imbrandon> SpamapS: right that's what i was getting at, i'm just still sick as hell and i'm sure making even less sense than i do most days
<imbrandon> i am feeling better tho and on my 4th gatorade today
<imbrandon> heh
<imbrandon> no caffeine for me is kickin my rear
<SpamapS> yeah I get a bad headache when I try to ignore the siren call of that sweet sweet adenosine disruptor :)
<imbrandon> btw i tell yea i snagged a lil dell insp 1101
<SpamapS> imbrandon: are you just a swirling vortex of moderately powered hardware or what?
<imbrandon> not very much horsepower at all but i'm pleasantly surprised and unity isn't half bad on it
<imbrandon> nah, i just got rid of that macbook i had at uds, it was a pos
<imbrandon> then gave my kid brother the other MBP
<imbrandon> so i only have my mini no and the lil dell + ipad
<imbrandon> now*
<imbrandon> less management and not too bad off, not an mba, but i can take the $$ trade off considering i got it basically free for helping a friend out the other day for a half hour
<imbrandon> keeping 2 os's updated on 3 machines, and that was just my daily workstations not servers etc, was quickly a pita
<imbrandon> archived my osx into a vm that is off till needed, and lil-dude ( yup that's the hostname ) and my mini are running my interpretation of the dell dev distro  :)
<imbrandon> whatever it was called
 * m_3 likes "swirling vortex of moderately powered hardware"... thinks SpamapS is feeling poetic or something
<imbrandon> i still think a "dev" desktop machine in the cloud would be an ok charm
<imbrandon> hehehe
<imbrandon> i have noticed though unity is far less annoying on the 10.1 screen
<imbrandon> well apart from the icon launcher is HUGE if not hidden
<imbrandon> but not making me want to kick a cat like on my desktop
<imbrandon> that and the only hotkey i need to remember is super
<imbrandon> heh
 * m_3 remaps super
<imbrandon> SpamapS: what list is dd talking about it on
<imbrandon> m_3: i did on my desktop so it would work with my muscle mem of OSX
<m_3> imbrandon: yup
<imbrandon> i flip ctl and super
<imbrandon> actually looking over at the windows keyboard on there it would be alt and super
<imbrandon> but on mac its ctl and super
<m_3> I actually have a mac keyboard on my pugetsystems.com desktop
<m_3> so it matches my laptop
<m_3> and I don't have to confuse the poor muscles
<imbrandon> so bottom left of my keyboard is ctl->alt->super->space  mapped in all os's
<imbrandon> yup
<imbrandon> thats exactly why i buy the bluetooth mac keyboards they work with everything
<imbrandon> and the reason i had it separate at uds was the old macbook keymap was different but the bt keyboard and the mbp and the usb all match
<imbrandon> heh
<imbrandon> otherwise it woulda been kinda silly to have a sep keyboard, i'm sure it looked silly now that i think about it
<imbrandon> heh
<m_3> yeah, I really like these little mac bt kbds
<imbrandon> yea i'm on like #5 now, 3 still in full time use
<imbrandon> 2 have seen better days and are donor boards for keys and such
<SpamapS> imbrandon: debian-devel
<imbrandon> the one i use on my desktop in the other room has a bar that connects it to the magic trackpad and gives it induction recharging
<m_3> cool
<imbrandon> so its the same form as the full usb one
<imbrandon> when connected
<imbrandon> only trackpad and no number pad
<SpamapS> m_3: I emerged from my mothers womb laden with the sorrows of the world on my back, each one crying out for prose and poem that might transform them to joy like so many caterpillars turn to butterflies
 * SpamapS farts
<imbrandon> m_3: yea if you have a trackpad as well , try one , they are very nice
<imbrandon> http://www.amazon.com/Twelve-South-12-1101-MagicWand-Connects/dp/B004L9M0AO/ref=sr_1_3?ie=UTF8&qid=1337727556&sr=8-3
<imbrandon> mine is actually another brand but almost identical, it just is a bar that snaps on the back at the top over the battery bar on both of them
<imbrandon> hahah and their pictures of it down below have the trackpad hooked on the left and mouse on the right, like i do anyhow but just not hooked up on this desktop with a caption of "photoshop with two hands" ... thats too funny
 * m_3 snaps repeatedly
<m_3> oh wait... need to find the beret
<m_3> imbrandon: nope, haven't tried the trackpad
<m_3> was hoping to get the little mouse with the trackpad on its back working... but no love last I tried
<imbrandon> yea i got it before i got a real mbp so i could have multi-touch on the older white macbooks and was hooked
<imbrandon> oh it is
<m_3> yeah, multi works great on the mbp
<imbrandon> and its the whole top of it
<imbrandon> imho i would get the pad before the mouse and stick with a mighty mouse or normal roller mouse
<imbrandon> cuz the magic mouse refuses to click AT ALL if your hands are dirty in the slightest
<imbrandon> sweaty , anything
<imbrandon> i'm constantly needing to wipe the top glass off for it to fully work
<SpamapS> I was shocked the other day when 3 finger drag on Ubuntu sent my firefox window flying off the screen
<m_3> man that was right over the plate... I'm gonna let it pass tho :)
<imbrandon> heh try 3 finger push then SpamapS you'll love that
<imbrandon> its the expose iirc , it is on osx at least
<imbrandon> i think unity too
<SpamapS> whats this osx you speak of?
<SpamapS> ;)
<imbrandon> :) the thing that unity got all the muti touch goodness from
<imbrandon> :)(
<imbrandon> hehe
<m_3> the thing my wife makes me keep on the tv
<m_3> I should switch it while she's out of town... see how long it takes to notice
<imbrandon> no but seriously think about how you drag in the browser with two fingers without thinking now, when you learn them all even the 4 finger combos , man its very very nice, almost like an emacs mouse user
<SpamapS> I try to ignore that the mouse even exists really
<imbrandon> like i drag 4 down and it shows "Launchpad" aka the same thing you get in unity when you press super, whatever thats called
<imbrandon> aka all my app icons
<imbrandon> and then up to see all my running apps and all their windows at once
<m_3> imbrandon: how'd you launch the unity launcher?
<imbrandon> yea i use as many mouse and trackpad gestures as most linux peeps traditionally use in emacs keyboard shortcuts to exit when they are done :)
<imbrandon> m_3: on osx the equiv is 4 fingers down
<imbrandon> pull
<m_3> oh, sorry I thought you meant on unity
<imbrandon> it launches "Launchpad.app"
<imbrandon> i did
<imbrandon> i hit super to do that
<imbrandon> just super
<m_3> I've been trying to figure out the command or signal to get the launcher to launch
<m_3> besides the super key
<imbrandon> oh thats all i know so far
<imbrandon> not dug enough,if i find it tho i'll holler
<imbrandon> but i've begun to just hit super and type, kinda annoying i can't then just arrow down to the icon i need, have to tab then arrow
<m_3> I'd like command-space for that, a key-combo and not a single key
<imbrandon> but i'm getting used to it
<imbrandon> oh yea, cmd-space spotlight is killer
<m_3> the one that's killing me right now is the timing for ctrl-A
<imbrandon> but imho the dash and launchpad are more alike than cmd+space spotlight although technically you're right
<m_3> it's like it's not engaging the control key half the time I press ctrl-A
<imbrandon> ctl+a bckspace ?
<m_3> this is for tmux
<imbrandon> ahh hrm, might be your key too, tried another ?
<m_3> ctrl-a to prefix everything
<SpamapS> m_3: there's something wonky with keyboard + trackpad on the macs
<m_3> but with the capslock mapped to control, I'm a _lot_ faster in doing ctrl-a
<imbrandon> yea , that's what i use it in too, ctl+a then backspace == one window back
<imbrandon> over and over till i get where i want
<SpamapS> m_3: my Air especially has something weird with timing.. left-Alt-tab shouldn't bring up HUD, but it does unless I hold it down for a while
<m_3> it's like it's not getting the timing right... but it's not consistent
<m_3> ha!
<imbrandon> hahah
<m_3> yes, I've noticed timing issues in general for hud and dash
<SpamapS> I want to turn off HUD.. its not configurable tho :-/
<imbrandon> ^5 SpamapS
<m_3> it's like I've got to press alt as 'ta-dah' to get the hud up
<imbrandon> bout time
<imbrandon> lol
<m_3> win 16
<imbrandon> m_3: esc+y
<imbrandon> better than /win N
<imbrandon> esc+N
<imbrandon> 11 starts with q
<imbrandon> err not esc+N , esc N
<imbrandon> separate
<imbrandon> but yea, i've grown very used to that key combo too and less wrong window messages that way too :)
<m_3> imbrandon: I don't have that bound in irssi that way
<imbrandon> ahh i thought it was default that way
<m_3> could be :)
<imbrandon> ive had the same irssi since breezy tho
<imbrandon> configs that is
<SpamapS> my irssi goes back to when bitchX's upstream did like online hara-kiri and removed all trace of bitchx
<imbrandon> heh
<m_3> ha
<SpamapS> I even re-did my bitchx theme .. osbxwannabe
<imbrandon> i flirted with xchat and bip for a few weeks
<SpamapS> http://irssi.org/themefiles/osbxwannabe.png
<SpamapS> hahaha.. wow.. look at that
<m_3> thought about quassel, but you just really can't beat irssi at this point
<imbrandon> but cant shake irssi no matter how much i try
<imbrandon> nice
<SpamapS> Last-Modified: Sat, 17 May 2008 15:39:06 GMT
<SpamapS> I bet thats when irssi.org migrated, I think I did that theme way before 2008
<m_3> :)
<JoseeAntonioR> hi guys! I'd like to know if any of you is interested in driving a session in the Ubuntu User Days, which is for users who are very new to Ubuntu. it can be about juju (as long it's very very basic), or what you want to do it. the User Days are going to take place between June 23-24.
 * SpamapS dives into the nearest trashcan
<imbrandon> funny thing is i think i saw one yesterday on flickr of mine from close to the same time, /me looks
 * m_3 scatters
<m_3> JoseeAntonioR: jk... sure, I can do one
<JoseeAntonioR> :P
<m_3> JoseeAntonioR: this is an IRC event right?
 * m_3 checks what he's committing to (for once)
<imbrandon> yea in the classroom like normal
<JoseeAntonioR> m_3: yep!
<JoseeAntonioR> imbrandon: hey, and what about you?
<imbrandon> a second juju one ?
<JoseeAntonioR> m_3: is there any particular topic you're interested in?
<m_3> JoseeAntonioR: just call it charmschool or intro to juju or something similar
<JoseeAntonioR> imbrandon: you can choose another topic, if you want to
<imbrandon> or did you mean something else ?
<imbrandon> hrm , i'll do one on going from not having juju installed to a full working drupal install with custom themes and all
<imbrandon> that should be different enough
<JoseeAntonioR> imbrandon: remember it's for new users in Ubuntu, they're just familiarizing with the environment
<imbrandon> oh right
<imbrandon> let me think on it a day
<imbrandon> and i'll poke ya tomorrow
<JoseeAntonioR> imbrandon: there are some suggestions here: https://wiki.ubuntu.com/UserDaysTeam/CourseSuggestions
<imbrandon> if you still have a spot
<JoseeAntonioR> great, thanks!
<imbrandon> kk
<JoseeAntonioR> we sure will :)
<imbrandon> JoseeAntonioR: how about "command line basics", aka for the ultra new users
<JoseeAntonioR> imbrandon: that would be great, that session always has a great audience
<imbrandon> kk cool
<JoseeAntonioR> imbrandon: when you're ready, just tell me the session and the time you want it to be
<imbrandon> JoseeAntonioR: i normally just tell daniel or whomever to pencil me in as a floater so others can have their preferred times, as any will work ok for me
<imbrandon> and i'll fill in with one thats left
<JoseeAntonioR> imbrandon: if that's fine for you, that'll work for us too
<imbrandon> kk yea , i'll just check it the day before and see where ya put me
<imbrandon> ty
<JoseeAntonioR> thanks to you!
<JoseeAntonioR> command line basics, right?
<imbrandon> yup
<JoseeAntonioR> great, thanks!
<imbrandon> zomg my music went from "Halestorm - I Miss the Misery" to "Alabama - Lady Down on Love", banshee needs to grow a brain like iTunes :)
<imbrandon> man thats not even cool, i was rockin out too
<imbrandon> lol
<imbrandon> got two 4x8 sheets of holed pegboard and some 1x1's to add a tall cube-like backing on the right side and behind my monitors, "walls for hangin stuff", that i wanna get put up once i really feel better
<imbrandon> my pseudo desk frankenstein is quickly becoming a fortress
<JoseeAntonioR> imbrandon: I printed some ubuntu posters to cover my empty walls
<JoseeAntonioR> they look pretty cool
<imbrandon> m_3: one other thing i would add is if you do want a mouse and not the pad, get one with the blue led's in them if you are mobile more than not, it's not all hype and those BlueTrack lights really do pick up on most any surface, even shiny ones, MUCH better
<imbrandon> JoseeAntonioR: :)
<m_3> imbrandon: yeah, I haven't been doing much blender recently... that's my sole reason to mouse really
<imbrandon> MS calls theirs BlueTrack and logitech's is TrackAnywhere or something, but they are all blue leds
<imbrandon> yea and they aren't much more, if any, than the other mice
<imbrandon> maybe 5$ diff or something
<SpamapS> doh, I just realized the need-maintainers.txt thing was pulling from the wrong dir.. a static copy of the charm store
<imbrandon> sometimes they are branded as "couch mice" too
 * SpamapS updates
<imbrandon> oops
<imbrandon> someone tried to explain that it was because the blue light wave is larger than red, and while i know that's technically true i'm not sure mice are that sophisticated, but maybe
<SpamapS> even more oops was fork bombing the t1.micro that I have it running on with 81 bzr pulls
<SpamapS> oooooops
<imbrandon> HAHAHAHHAHA
<imbrandon> oh man, then from Beastie Boys to Alan Jackson, this thing is gonna make me make my own playlists isn't it
<imbrandon> bleh
<SpamapS> Heh, I just heard Coldplay's crappy ass "You gotta fight"
<imbrandon> oh i've avoided that so i would not want to murder them
<imbrandon> there are a few covers that are as good or better, but man coldplay should not have even considered Beastie Boys, and stuck with like a Vanilla Ice song or somethin
<imbrandon> i rag on them a lot tho as they were my ex's fav bands
<imbrandon> heh
<imbrandon> :)
<imbrandon> now Chris Cornell: Billie Jean, better than MJ and that's hard to do. Johnny Cash: Hurt, is so much win i don't even wanna hear the NIN version ever again
<hazmat> m_3, looks like the oneiric branches are back
<hazmat> m_3, who did the magic?
#juju 2012-05-23
<SpamapS>  00:01:44 up 163 days,  9:12,  1 user,  load average: 0.42, 13.39, 16.99
<SpamapS> doh
<SpamapS> [14115521.537845] drizzled invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
<SpamapS> why is it always the database that gets oom'd first!
<SpamapS> http://fewbar.com/charms/need-maintainers.txt
<SpamapS> Looking much better!
<negronjl> SpamapS: when was the last time you updated need-maintainers.txt ?  AFAIK  all of my charms have maintainers but, they still show on your black list :)
<SpamapS> negronjl: I have a few that aren't showing right either
<negronjl> SpamapS: ... the list of shame is not working right ... :)
<SpamapS> negronjl: something in LP or bzr is not working right
<negronjl> SpamapS: I've been having random issues with bzr today ...
<SpamapS> negronjl: AHA
<SpamapS> charm getall did bzr checkouts
<SpamapS> so I have to do 'bzr update'
<SpamapS> doh
<SpamapS> negronjl: the next run should fix most of that
<negronjl> SpamapS: nice
<SpamapS> so to be clear, bzr and lp are working right.. SpamapS just had a bug :)
<negronjl> SpamapS: lol ... clearing the air :)
<hazmat> m_3, spoke too soon not fixed quite yet
 * SpamapS wonders how much faster this would be if he were using http: bzr instead of ssh+bzr
<hazmat> SpamapS, probably not much
<hazmat> SpamapS, it takes about 7m to updates across all the charms for me atm
<hazmat> without ssh
<hazmat> that's across oneiric and precise charms though
<hazmat> which is about 365 charms
<hazmat> ~8m30s to be more precise
<imbrandon> oh noes, so my head was not producing good code so i took a walk, only to the other side of the desk, went to fix one loose cable and have now torn down my full setup, figured it needs a good cleanup anyhow
<imbrandon> 5 min turns to 2 hours
<imbrandon> m_3 / SpamapS : let's see if i can ask this in a short way but get my full meaning ... ok so what about a distinction between a service charm and a data charm that really only wants to dump its code that's different or in addition to, say, the wordpress service charm
<imbrandon> but not really provide an interface, just a data/code dump in reality
<imbrandon> gotta head out, i'll poke yall later
<mgz> is the version of juju exposed inside the python package anywhere?
<mgz> I only see it in setup.py
<marcoceppi> mgz: `dpkg -l | grep juju` will give you the version :)
<mgz> that's not much use for using within juju itself in a user agent string though :)
<marcoceppi> mgz: Versions (should) hit when the go port is finished. Not sure if they're going to put them in the Python version
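The version question mgz raises above (setup.py was the only place the python juju package recorded it) can be worked around generically by asking the installed distribution metadata. This is a hedged sketch, not anything juju shipped: it uses the modern stdlib `importlib.metadata` (Python 3.8+), and `user_agent` is a hypothetical helper name.

```python
# Build a user-agent string from an installed distribution's version,
# falling back gracefully when the distribution isn't installed.
from importlib.metadata import version, PackageNotFoundError

def user_agent(dist: str, product: str = "juju-client") -> str:
    """Return e.g. 'juju-client/0.5', or '.../unknown' as a fallback."""
    try:
        v = version(dist)            # reads installed package metadata
    except PackageNotFoundError:
        v = "unknown"                # dist not installed: don't crash
    return f"{product}/{v}"
```

Something of this shape is all a client needs to stamp requests without hard-coding a version string in two places.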
<marcoceppi> Hey hazmat, what's the conversation happening on the mailing list about the merge for docs?
<hazmat> marcoceppi, its the thread you started.. https://lists.ubuntu.com/archives/juju/2012-May/001568.html
<hazmat> marcoceppi, my comments are here https://lists.ubuntu.com/archives/juju/2012-May/001607.html
<marcoceppi> hazmat: that merge doesn't touch the other section of the docs. Would you like me to just update the other section as well to include the new information (I'm confused about what's holding what up)
<hazmat> marcoceppi, yeah.. i'd prefer the getting-start doc not have a bunch of local provider specific information, and that it just link to provider specific docs where that information should reside
<hazmat> the information added for local provider will need to grow for other corner cases issues..
<hazmat> such as ufw usage or existing dns server on the host
<marcoceppi> hazmat: cool, I'll move it over to that section. In that respect should we also move getting started EC2? Actually I'll reply back to that thread...
<hazmat> marcoceppi, yeah.. i'd like to see it start of with a configuring a provider section, which just looks to individual provider pages, and then continues forward with keygen and bootstrap
<hazmat> s/looks/links
<marcoceppi> good point
<marcoceppi> I'll also add in MAAS configuration since it seems to be missing
<hazmat> marcoceppi, awesome, thanks
<m_3> marcoceppi: hey... btw, how'd the maas environment work out?  that was pure kvm VMs right?
<marcoceppi> m_3: I used VirtualBox and it went pretty well: http://marcoceppi.com/2012/05/juju-maas-virtualbox/ It's not the *best* MAAS experience because WOL doesn't seem to work, but it was a fun starting point
<m_3> marcoceppi: awesome man!
<jcastro> woo, queue is 19!
<marcoceppi> m_3: I look forward to trying again with some real metal and maybe Xen
<m_3> marcoceppi: yeah I'm itching to get a little more hardware right now... should have some extra boxes after the move
<jcastro> I think I will get another hp microserver
<jcastro> see what you guys started?
<m_3> ha!
<m_3> I really like mine
<marcoceppi> I wonder how hard it would be to create a "public" MAAS service
<m_3> I think the extra $$ for the ilo sucks... but it's worth it
<m_3> marcoceppi: just for demo purposes?  or as a real service
<marcoceppi> demo purposes at first, but ideally a public multi-tenant MAAS setup
<m_3> hmmm... sounds hard
<m_3> I'd love to see a simple way to try juju in general though (not nec maas)
<marcoceppi> I'd have to look into how the users work in MAAS, suppose that would be the real limiter. If you could assign units from a larger pool to users directly in MAAS I don't think it'd be that hard, if not it would probably be an uphill battle
<marcoceppi> m_3: LXC :P
<jcastro> what I want to see
<jcastro> is a workload, one on MAAS, and one on Openstack
<jcastro> and see if there's a difference
<jcastro> and them compare/contrast the pros/cons of doing certain workloads on certain configs
<m_3> jcastro: roger... that's planned if we can scrounge the hw
<m_3> jcastro: we discussed waiting until constraints could handle "rack-local" deployments... but we really wouldn't have to wait if it's just one rack :)
<jcastro> m_3: hey so, assuming you're out this week for gluecon, what's your ideal review day?
<m_3> it's all rack local
<jcastro> I think I'll start penciling in a schedule
<m_3> jcastro: Tuesday
<m_3> jcastro: ranked preferences T, Th, W
<jcastro> perfect
<m_3> jcastro: man thanks for setting that stuff up... it'll help out a bunch
<jcastro> it's all juan and kapil
<jcastro> I just knew what to steal
<jcastro> unless I get more comments on the proposal I'll just implement the new wiki page today
<jcastro> and then we'll be good to go
<m_3> cool
<jcastro> after that we have some governance changes
<jcastro> which sound dumb but we need them, and we'll likely never use them (a council for the charm store etc.)
<jcastro> but jono and I will do all that work
<m_3> hazmat: I'll follow up with clint and the lp peeps later today and try to get a non-interruptive fix for the oneiric charms... not happy with their current suggestions for restacking, but don't have anything better
<m_3> jcastro: understood
<m_3> time to run to gluecon... catch y'all later
<mhall119> m_3: do you know of any problems with the postgresql charm and relations for precise?
 * nathwill is working on updated owncloud charm... 
<nathwill> i think i'm going to try and deal w/ the sqlite vs. mysql by setting a bool standalone config option...
<avoine> mhall119: I do, you must be careful that it's not a hostname that goes into your pg_hba.conf
<avoine> because it must be an IP address
<mhall119> avoine: thanks, I'll check on that
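avoine's point above is that pg_hba.conf host entries take an IP address or CIDR, not a hostname, so a charm hook should resolve whatever the relation hands it before writing the file. A minimal sketch of that fix; `hba_address` is a hypothetical helper, not code from the actual postgresql charm.

```python
# Normalize a relation-provided host into a CIDR usable on a
# pg_hba.conf 'host' line (IPv4 -> /32, IPv6 -> /128).
import ipaddress
import socket

def hba_address(remote_host: str) -> str:
    """Return a CIDR string suitable for a pg_hba.conf 'host' entry."""
    try:
        ip = ipaddress.ip_address(remote_host)      # already an IP literal?
    except ValueError:
        # Hostname: resolve it to an address first, as pg_hba.conf expects.
        ip = ipaddress.ip_address(socket.gethostbyname(remote_host))
    return f"{ip}/32" if ip.version == 4 else f"{ip}/128"
```

Resolving at hook time like this avoids the silent auth failures you get when a bare hostname lands in the file.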
<nijaba> SpamapS: just left the charmers team to join the inactive one...
<SpamapS> nijaba: cool hah.. you're way ahead of me. :)
<nijaba> SpamapS: not that I am that proud of it, but it's the right place for me in all fairness
<SpamapS> nijaba: indeed, we need to be *realistic*
<SpamapS> nijaba: I just set it up so that inactive-charmers retains write access to the charms though, so that you can keep doing work on the charms you care about
<nijaba> SpamapS: cool, thanks
<SpamapS> nijaba: so.. you going to keep working on limesurvey and roundcube?
<nijaba> SpamapS: trying to.  hoping that the madness of the past few months will go down soon
<SpamapS> nijaba: http://fewbar.com/charms/need-maintainers.txt ...
<nijaba> SpamapS: k
<marcoceppi> SpamapS: just updated my charms
<SpamapS> marcoceppi: woot! thanks
<negronjl> 'morning all
<SpamapS> negronjl: good morning. Note that you are off the wall of shame completely now. :)
<SpamapS> ^5
<negronjl> SpamapS: ahh ... I have regained some of my honor ... :)
<SpamapS> negronjl: most importantly, you have avoided my nagspam
 * jcastro fixes his charm maintainership
<jcastro> SpamapS: ok I see you created inactive charmers
<jcastro> should I move people there?
<marcoceppi> jcastro SpamapS what's the inactive criteria?
<jcastro> marcoceppi: not wanting to review charms
<jcastro> basically, ~charmers are people who want to review charms and promulgate
<marcoceppi> Ah, gotchya
<jcastro> marcoceppi: for canonical employees I'll assign them days, for you and brandon you guys can just work on it whenever you want like you do now
<jcastro> negronjl: mire, which day is good for you for reviews?
<SpamapS> hm
<SpamapS> we need --force to be able to force even E:'s
<jcastro> SpamapS: your preferred day is friday still?
<negronjl> jcastro:  In order of preference ... Mon, Tues, Wed
<SpamapS> jcastro: err, no way
<SpamapS> jcastro: friday is *slammed*
<SpamapS> jcastro: I already have patch pilots on friday :)
<negronjl> SpamapS: Are you talking about --force in promulgate ?
<SpamapS> negronjl: yeah
<SpamapS> negronjl: it needs to ignore everything
<jcastro> SpamapS: ok what days work for you?
<negronjl> SpamapS: I don't think we should --force on errors
<SpamapS> negronjl: I need to promulgate pictor, so that Dustin can commit/push to it for instance
<jcastro> jamespage: what days work for you for charm reviews?
<SpamapS> negronjl: its necessary
<SpamapS> negronjl: shouldn't, but need the ability to do things wrong sometimes.
<negronjl> SpamapS: My thoughts are that if we have an E: and you still want to promulgate it then maybe the proof script needs to re-think what an E: is
<SpamapS> negronjl: not having a maintainer *is* an E
<jcastro> lynxman: are you going to be able to do 4h a month on charm reviews? lmk.
<SpamapS> but I'm promulgating *so somebody else can fix it*
<lynxman> jcastro: sure :)
<SpamapS> negronjl: keep in mind that promulgate is also to *change* the branch, not just to add a new one
<negronjl> SpamapS: A thought is .... why don't _we_ fix it and then promulgate it
<jcastro> lynxman: what days work for you?
<negronjl> SpamapS: for maintainer that is
<SpamapS> negronjl: because I'd rather the commit that says Dustin is the maintainer come from Dustin. :)
 * SpamapS just downgrades promulgate and handles it
<lynxman> jcastro: just assign me whichever days you need more people :)
<SpamapS> negronjl: --force should mean --force. Not --ignore-warnings
<jcastro> SpamapS: ok, just need what days are good for you and we're done
<SpamapS> negronjl: its wrong, but we're not children, we should have powers to override policy when its getting in the way of GTD :)
<negronjl> SpamapS: ok ... I'm convinced
<SpamapS> jcastro: M,W,Th
<negronjl> SpamapS: Are you fixing it or should I ?
<SpamapS> negronjl: Heh, I've moved on. Perhaps we can fix it if it ever comes up again? :)
<negronjl> SpamapS: lol ... all that convincing for nothing :)
<SpamapS> negronjl: I'm a real s*** like that.
<SpamapS> negronjl: I trust you to prioritize it appropriately. :)
 * SpamapS translates: DO IT NOW OR SPAMAPS WILL CRY
<negronjl> SpamapS: lol ... I'll get it done in a minute
<SpamapS> You know what we need tho? We need an audit log of promulgate's activities.
<SpamapS> Like, I'd like to see who promulgated what
<negronjl> SpamapS: good idea ... when can you have it ready :)
<marcoceppi> SpamapS: how would you audit that?
<SpamapS> marcoceppi: I just want it available. I don't know how I'd use it.
 * SpamapS is like Johnny 5.. need iiiinnnpput
<marcoceppi> Right, how would you track that though? Webservice?
<marcoceppi> I don
<marcoceppi> I don't think there's a way to track that in lp currently
<SpamapS> LP probably has it in some way
<SpamapS> just not exposed
<SpamapS> Even if its just "last touch by..x"
<SpamapS> marcoceppi: launchpad logs almost everything that the distro package publisher does
<SpamapS> I want similar logs for charms
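Launchpad doesn't expose such a log today, but the idea above can be sketched client-side: wrap promulgate so each run appends who did what, and when, to a local file. The log path and function name here are hypothetical, not part of charm-tools.

```shell
#!/bin/sh
# Hedged sketch: a client-side audit trail for promulgate, since Launchpad
# does not expose one. AUDIT_LOG and log_promulgation are made up for
# illustration.
AUDIT_LOG="${AUDIT_LOG:-$HOME/.charm-promulgate.log}"

log_promulgation() {
    # Record who promulgated which branch, with an ISO 8601 UTC timestamp.
    branch="$1"
    printf '%s %s promulgated %s\n' \
        "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$(whoami)" "$branch" >> "$AUDIT_LOG"
}

# Usage: log the entry, then run the real tool, e.g.
#   log_promulgation lp:~charmers/charms/precise/mysql/trunk && charm promulgate ...
log_promulgation "lp:~charmers/charms/precise/mysql/trunk"
tail -n 1 "$AUDIT_LOG"
```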
<negronjl> SpamapS: do we want warnings to abort promulgate by default ?
<negronjl> SpamapS: you could always use --force to override them
<jamespage> jcastro, wednesdays would be good for me
<SpamapS> negronjl: I like that it aborts on warnings
<SpamapS> negronjl: but I wonder if we should separate the two concerns
<SpamapS> negronjl: E's are definitely "never do this unless you know what you're doing" .. but warnings might just be "I'm moving the ownership of a branch so a team can improve it"
<negronjl> SpamapS: for now ... let's leave it aborting on warnings as well as errors .... it gives you the opportunity to decide whether you want to fix the warnings or not.
<negronjl> SpamapS: you can always just use the --force switch and override it all but, promulgate gives you a chance to decide
<SpamapS> negronjl: right, I'm thinking --ignore-warnings should have more friendly language than --force
<negronjl> SpamapS: I can just put them both
<SpamapS> negronjl: I think it makes sense. --force would ignore both, --ignore-warnings would naturally just ignore warnings.
<negronjl> SpamapS: ok .. I'll have it done in a minute
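The flag semantics being agreed on here can be sketched in a few lines: warnings and errors both abort by default, --ignore-warnings skips only warnings, and --force skips both. This is an illustrative sketch, not the actual charm-tools code; the function and variable names are invented.

```shell
#!/bin/sh
# Sketch of the agreed semantics: abort on W and E by default;
# --ignore-warnings overrides W only; --force overrides both.
IGNORE_WARNINGS=0
FORCE=0
for arg in "$@"; do
    case "$arg" in
        --ignore-warnings) IGNORE_WARNINGS=1 ;;
        --force)           IGNORE_WARNINGS=1; FORCE=1 ;;
    esac
done

check_result() {
    # $1 is the severity letter a lint pass reports: W (warning) or E (error).
    case "$1" in
        E) [ "$FORCE" -eq 1 ]           || { echo "aborting on error";   return 1; } ;;
        W) [ "$IGNORE_WARNINGS" -eq 1 ] || { echo "aborting on warning"; return 1; } ;;
    esac
}

# With no flags passed, a warning aborts:
check_result W || echo "would exit here"
```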
<SpamapS> heh.. doh.. my wall of shame hasn't been updating for about 19 reasons
<SpamapS> now I just figured out I was still using bzr+ssh so the cron job that did bzr pulls was broken :-P
<negronjl> SpamapS: shame on you :)
<SpamapS> indeed
<negronjl> SpamapS: https://code.launchpad.net/~negronjl/charm-tools/ignore-warnings-modified-force/+merge/107082
<SpamapS> negronjl: approved, thanks for being my bit^H^Hest friend. :)
<negronjl> SpamapS: lol ... no worries f^H^H^H^^Hriend :)
<lynxman> lots of ^H going around eh? :P
<SpamapS> lynxman: yeah, make sure you have all your shots ;)
<lynxman> lol
<hazmat> negronjl, please don't reject the branches
<hazmat> for oneiric
<hazmat> negronjl, this is a hosting problem not an mp issue
<negronjl> hazmat: ahh ...
<hazmat> negronjl, there's discussion on the lists about
<negronjl> hazmat: I'll leave them alone for now then ...
<negronjl> hazmat: what list ?
<hazmat> negronjl, all of them ;-)
<negronjl> hazmat: damn ....
<negronjl> There's going to be a lot more ^H flying around here
<negronjl> I rejected two MPs already because of that ...
<hazmat> negronjl, yeah.. i saw, i'm putting them back into needs review.. we'll hopefully have the branches back this week
<negronjl> hazmat: can you also put a comment as to why you are putting them back on ... that way it will remind me of what happened ...
<negronjl> hazmat: It will also prevent another poor soul from doing what I just did .. :)
<hazmat> negronjl, these are the branches with problems fwiw http://paste.ubuntu.com/1003347/
<hazmat> negronjl, yup, i'm putting comments in them as well (re mp)
<hazmat> personal charm branches are fine
<negronjl> hazmat: thx
<SpamapS> hazmat: isn't that like.. *all* of the official branches?
<hazmat> SpamapS, just oneiric
<hazmat> SpamapS, and most of them yes
<hazmat> SpamapS, these 6 seem to be okay.. http://jujucharms.com/charms/oneiric
<SpamapS> hazmat: hazmat mumble-server was added after the distro branch
<hazmat> SpamapS, yeah.. i suspect that was a reason
<SpamapS> hazmat: glance, rabbit, and nova* were all already in precise.. so that must have protected them
<hazmat> SpamapS, namely they were pushed to after the distro series stuff
<hazmat> SpamapS, well there are some that were in precise already that are showing up here
<SpamapS> hazmat: so is the problem just the bad stacking right now?
<hazmat> zookeeper comes to mind
<hazmat> SpamapS, yes
<hazmat> er.. aren't showing up
<SpamapS> hazmat: so IIRC, we just need to rename all the precise branches, then co/reconfigure all the oneiric branches, then re-rename the precise branches
<hazmat> SpamapS, yeah.. i've been talking to m_3 about it
<hazmat> the problem arose in renaming the precise branches
<hazmat> SpamapS, it would be easier to avoid the distro tools and just push the branches to a new series location, and promulgate the new ones
<SpamapS> hazmat: we *must* use at least some of the tools in LP because the REST API does not allow marking a series as "Active Development"
<hazmat> SpamapS, that's distinct from the branch mangement
<SpamapS> hazmat: it is, but its completely wrapped up in branch-distro at this point
<SpamapS> so, we'll need to submit patches to LP to decouple the two
<hazmat> SpamapS, easy to just have a separate charm specific tool for it imo
<SpamapS> hazmat: right, but it has to be run inside LP, by a LOSA.. so we can't just do it w/o getting patches into LP
<hazmat> SpamapS, ugh. oh
<hazmat> SpamapS, its a one line fix to launchpad/lib/lp/codehosting/branchdistro.py to switch from stacking on branch name to id is my understanding
<hazmat> so that we can rename the branch afterwards
<mhall119> jcastro: do you know if anybody is working on any 3d rendering charms?
<jcastro> not afaict
<jcastro> 83 official charms now, queue down to 15.
<jcastro> nice work fellas, that's a spicy meatball!
<mhall119> nice
<negronjl> SpamapS: https://code.launchpad.net/~clint-fewbar/charm-tools/add-proper-help/+merge/107118  ... Approved
<SpamapS> negronjl: danke
<negronjl> SpamapS: np
<SpamapS> negronjl: now to add bash completion :)
<negronjl> SpamapS: heh ... cool
<SpamapS> tired of hitting charm rev tab tab tab tab
<negronjl> jcastro: ping
<negronjl> jcastro:  about patch pilot
<mhall119> woot!  Got my summit charm working
<nathwill> so would juju-log be the best way to communicate an autogenerated password to the admin during a CMS setup?
<SpamapS> $ charm get n
<SpamapS> nagios                 nfs                    nova-cloud-controller
<SpamapS> newrelic-php           node-app               nova-compute
<SpamapS> mmmmmm... tab completion
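The completion being shown off above boils down to filtering the known charm names by the typed prefix. A toy, POSIX-sh sketch of that step (the name list is hard-coded here; the real completion would pull it from `charm list` or the local repository):

```shell
#!/bin/sh
# Toy sketch of prefix completion: print every name that starts with the
# prefix the user has typed so far.
complete_prefix() {
    prefix="$1"; shift
    for name in "$@"; do
        case "$name" in
            "$prefix"*) echo "$name" ;;
        esac
    done
}

# 'charm get no<tab>' would offer:
complete_prefix no nagios newrelic-php nfs node-app nova-cloud-controller nova-compute
```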
<SpamapS> hazmat: sorry to be a pest, but whats the status on an 'enable proposed' option?
<hazmat> SpamapS, i tried out the branch and its working
<hazmat> SpamapS, i'll propose it for merging
<SpamapS> hazmat: sweet, thanks :)
<negronjl> SpamapS: what's the option ( enable proposed ) for ?
<SpamapS> negronjl: so we can test updates in precise-proposed
<negronjl> SpamapS: ahh... thx
<hazmat> enabled via juju-origin: proposed
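Concretely, that means adding a `juju-origin` key to the environment definition in environments.yaml. The environment name and other keys below are illustrative only; written to /tmp here just so the sketch is self-contained.

```shell
#!/bin/sh
# Hedged sketch of enabling the proposed pocket via juju-origin, per the
# exchange above. Only the juju-origin line is the point; the rest is filler.
cat > /tmp/environments.yaml <<'EOF'
environments:
  precise-proposed-test:
    type: ec2
    # Install juju on new machines from precise-proposed instead of the
    # release pocket, so updates can be tested before release.
    juju-origin: proposed
EOF
grep 'juju-origin' /tmp/environments.yaml
```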
<negronjl> SpamapS: https://code.launchpad.net/~clint-fewbar/charm-tools/add-bash-completion/+merge/107127 ... approved
<SpamapS> negronjl: we are on a roll. :)
<SpamapS> 35 charms out of 78 need maintainers.
<negronjl> SpamapS: heh ... we are .  We should now be off the hook for reviewing for a week :)
<SpamapS> Whoo! halfway there
<SpamapS> negronjl: you should. I haven't reviewed hardly anything ;)
<negronjl> SpamapS: I get bored waiting for CloudFoundry to finish ( and then do it again and again and again ) so, I review ( apparently sometimes I break reviews )
#juju 2012-05-24
<mhall119> jcastro: any idea why django wouldn't be able to connect to postgres when db-relation-changed is called, but can connect after that?
<mhall119> I'm getting the right username and password now
<mhall119> and I can ssh into the django unit and manually run the manage.py commands from there and they connect to postgres just fine
<mhall119> but running the same commands inside the db-relation-changed hook and they fail
<mhall119> http://paste.ubuntu.com/1003969/ is my db-relation-changed hook
<hazmat> mhall119, what's the output?
<koolhead17> SpamapS, ping
<mhall119> hazmat: output of which?
<mhall119> hmm, ok, it looks like postgresql's db-relation-joined creates the user account, then passes the login info to summit's db-relation-changed which tries to connect and failed, then postgresql's db-relation-changed actually gives the summit unit access to connect with those credentials
<mhall119> so I need to run summit's db initialization calls *after* postgresql's db-relation-changed hook, not before, how do I do that?
<mhall119> hazmat: jcastro ^^ any ideas for me?
<hazmat> mhall119, output of the db-rel-change hook
<mhall119> hazmat: see above, I see what's happening now
<hazmat> gotcha
<hazmat> mhall119, i'd fix postgres charm to not put the info into the rel till its ready to be used
<hazmat> that sounds like the fundamental issue
<hazmat> its telling other things about accounts to use, that its not ready to accept
<mhall119> what's the difference between -joined and -changed for hooks?
<hazmat> mhall119, joined is executed when a remote/related unit is first seen (for each remote unit).. changed is execute after join, and after a remote unit changes its settings
<mhall119> so really postgres should be giving access to the remote unit during its -joined hook, not its -changed
<mhall119> unless there's something in my summit charm that would be called after -changed on postgresql
<hazmat> mhall119, well.. have you verified/echo'd the username/password.. they might not be set the first time the summit's changed hook is invoked?
<hazmat> yeah.. you are..
 * hazmat read the hook again
<mhall119> hazmat: yeah, they get into the juju_settings.py
<hazmat> hmm.. so the problem seems to be
<hazmat> http://jujucharms.com/charms/precise/postgresql/hooks/db-relation-joined
<mhall119> but postgres has pg_hba.conf that acts like a firewall, and until that allows my summit unit access it can't run syncdb or migrate
<hazmat> creates the user
<hazmat> but it waits till changed to give net access
<hazmat> http://jujucharms.com/charms/precise/postgresql/hooks/db-relation-changed
<hazmat> which is decidedly odd imo
<hazmat> and a bug
<hazmat> and it should be reloading instead of restarting sighup style
<hazmat> alternatively it should set some sentinel value on the rel
<hazmat> that the creds are really usable
<hazmat> hmm.. ic
<hazmat> this wants an ip dance
<hazmat> oh.. it used to, yeah.. this is just odd
<mhall119> I'll ask m_3 in the morning, it looks like he is the original author
<hazmat> mhall119, try using this postgresql charm.. juju deploy cs:~hazmat/precise/postgresql
<hazmat> i fixed it to give the access in db-relation-joined
<mhall119> hazmat: running now, I'll let you know
<hazmat> its at lp:~hazmat/charms/precise/postgresql/trunk if the store hasn't bundled it yet
<mhall119> 2012-05-23 21:52:31,555 unit:postgresql/0: hook.output ERROR: /var/lib/juju/units/postgresql-0/charm/hooks/db-relation-joined: line 41: syntax error near unexpected token `newline'
<mhall119> /var/lib/juju/units/postgresql-0/charm/hooks/db-relation-joined: line 41: `  echo "host $(get_database_name) ${user} ${remote_host}/32 md5" >> '
<mhall119> hazmat: ^^
<hazmat> ugh.. copy n paste error
<hazmat> mhall119, fixed in new rev
<mhall119> 2012-05-23 22:34:46,010 unit:postgresql/0: hook.output ERROR: /var/lib/juju/units/postgresql-0/charm/hooks/db-relation-joined: line 57: syntax error: unexpected end of file
<mhall119> hazmat: ^^
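The first error in the pastes above was an `echo ... >>` with no redirection target. A hedged reconstruction of what the line presumably intends, written against a temp file so it runs outside a unit (the variable values are illustrative; the real hook uses `$(get_database_name)` and appends to pg_hba.conf, then reloads postgres):

```shell
#!/bin/sh
# Hedged reconstruction of the broken db-relation-joined line: append a
# host-based access rule for the remote unit to pg_hba.conf. HBA_FILE and
# the values below are stand-ins for this sketch.
HBA_FILE=$(mktemp)
database_name=summit
user=summit_user
remote_host=10.0.0.5

# host <db> <user> <addr>/<mask> <auth-method>
echo "host ${database_name} ${user} ${remote_host}/32 md5" >> "$HBA_FILE"
cat "$HBA_FILE"
```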
<surgemcgee157> Need to find me lucky charms. Also, has anyone found a "user friendly" way of putting config files on the server with juju?
<surgemcgee157> Or must they be packaged with the content repo and moved around with the charm.
<surgemcgee157> ohh ya, charm name --> golden-crisp
<koolhead17> i wish we could use juju 4 trystack
<SpamapS> koolhead17: why can't we?
<SpamapS> koolhead17: no S3?
<koolhead17> SpamapS, trystack account is cool. we can easily provision and show demo up and running
<koolhead17> SpamapS, https://trystack.org/ a testbed for openstack
<SpamapS> koolhead17: I know what trystack is
<SpamapS> koolhead17: why can't we use it?
<koolhead17> SpamapS, so we can use it? waoo
<SpamapS> koolhead17: I don't know!
<SpamapS> koolhead17: you said we can't
<SpamapS> I'm asking why you say hat
<SpamapS> that rather
<SpamapS> koolhead17: if they expose S3 and EC2, we can use them today
<koolhead17> SpamapS, well let me ask the guys maintaining it.
<koolhead17> HP folks i suppose
<SpamapS> koolhead17: I believe there are two regions and they are both somewhat different
<SpamapS> anyway, I'm out
<koolhead17> SpamapS, will keep you updated. laters
<hazmat> mhall119, there's a new version up if you want to give it another whirl... :-(
<hazmat> SpamapS, trystack has no storage
<hazmat> at least not of lastweek
<koolhead17> hazmat, you're correct, it's just a compute node
<hazmat> i'm out as well
<twobottux> aujuju: Creating volume group in nova-volume Juju charm <http://askubuntu.com/questions/141552/creating-volume-group-in-nova-volume-juju-charm>
<vrturbo> I've been trying an openstack deployment using MAAS server and juju charms, I've spent the last 3 days trying to get it to work and there is alway something within openstack that doesn't work
<vrturbo> any one had any success with openstack deployed via juju ?
<victorp> jcastro, jamespage, trying to transfer a css file to a wordpress instance as part of a charm, but cant find how to do it. Can you point me to somewhere that gives example/how to?
<victorp> I would like not to have to upload the css file to a repo or something, but to bundle it in
<victorp> the charm
<m_3> mhall119: hey... so the db-relation-changed in pg... remove the netmask from the hba config.  The real fix is for the charm to make sure it's always using ip... this has been a pain b/c it's switched back and forth between dns name and ip addr w/ netmask
<mhall119> m_3: my problem was that the pg_hba settings weren't being set until *after* my summit charm was trying to run syncdb against it
<m_3> mhall119: ah... yeah just move it to joined then
<m_3> mhall119: the orig thinking was that you exposed the db _after_ it was sufficiently created by the changed cycles
<m_3> mhall119: but understand the need
<mhall119> m_3: I believe that's what hazmat has done, so you'll just need to merge his changes
<m_3> mhall119: sorry pg still needs some tlc... it's been on the list but below the pain-threshold :)
<m_3> mhall119: needs pg9 replication too
<jcastro> victorp: if you include it as part of the charm the entire charm gets copied over
<jcastro> so you can stick it in like /stuff or something
<jcastro> and then as part of the charm copy it over to the WP dir
<mhall119> jcastro: hey, if I'm generating an admin password for Summit, how do I let the deployer know what it is?
<jcastro> mhall119: http://jujucharms.com/charms/precise/limesurvey
<jcastro> you can make a default
<jcastro> and then have a config option to change it
<jcastro> (see the end of that readme)
<mhall119> jcastro: hmm, default passwords aren't ideal
<mhall119> I suppose I should learn config.yaml
<jcastro> can you not set one and then just make the user set one via a config option?
<mhall119> I don't know, I don't know how the config options work yet
<mhall119> I'm currently setting a randomly generated password
<jcastro> oh dude, config options are the best!
<jcastro> yeah that's fine
<jcastro> and then just say "set a password via config"
<jcastro> let me show you an example
<jcastro> http://jujucharms.com/charms/precise/solr/config
<jcastro> actually, using limesurvey: http://jujucharms.com/charms/precise/limesurvey/config
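The pattern jcastro is pointing at boils down to: read the config option, and if the deployer hasn't set one, fall back to a random password. A sketch under stated assumptions: the option name and function are invented, and the value a real hook would read with `config-get` is passed in as an argument here so the sketch runs anywhere.

```shell
#!/bin/sh
# Sketch: empty-by-default admin password in config.yaml, set later via
# 'juju set summit admin-password=...'. In a real config-changed hook the
# first line would be: password=$(config-get admin-password)
set_admin_password() {
    password="$1"
    if [ -z "$password" ]; then
        # No password configured: generate a random one. A real hook would
        # juju-log it or store it on disk for the deployer to find.
        password=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
        echo "generated:$password"
    else
        echo "configured:$password"
    fi
}

set_admin_password ""        # nothing set yet -> random password
set_admin_password s3cr3t    # deployer ran: juju set summit admin-password=s3cr3t
```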
<victorp> jcastro,  thanks
<jcastro> SpamapS: ping me when you're around
<jcastro> SpamapS: I'm only on for half a day today (leaving for the weekend) and I'd like a quick catch up with my bro
<melmoth> Hi there. Playing with maas and juju to see how it works. I'm wondering... How do you guys manage name resolution ?
<melmoth> like, i have installed maas-dhcp, i add 2 nodes in maas, i juju bootstrap
<melmoth> it will use one of those nodes, and juju status needs to be able to resolve the name of this node.
<melmoth> i have to guess its name by looking in /var/log/maas (and finding some stuff that looks like ip) until i find the right ip of the node
<melmoth> and then set it in /etc/hosts before i could juju status. I bet there are more convenient ways to do that. no ?
<melmoth> hmmm, the maas server itself can resolve the name... I guess the idea is to use this one as a dns then ?
<hazmat> melmoth, you have to point juju to the maas server..
<melmoth> hmmm. makes sense. but does not make me happy :)
<melmoth> the maas server is a kvm vm, juju runs on the hypervisor
<melmoth> so maas is actually using the machine juju runs on to resolve names itself.
<melmoth> i wonder if there is a way to have specific dns resolution path for a given user...
<hazmat> melmoth, so you're saying juju on the host can't resolve the vm that maas is running on?
<melmoth> not exactly.
<melmoth> when i run juju status, juju try to ssh on one of the box i have addedd in maas
<melmoth> and this is the box it cannot resolve the name of
<hazmat> juju asks maas for the host names
<melmoth> well, it should.
<melmoth> but it does not here
<melmoth> if i remove my entry i added in /etc/hosts, juju status fail
<melmoth> 2012-05-24 16:25:19,238 ERROR Invalid host for SSH forwarding: ssh: Could not resolve hostname node-000023560101.local: Name or service not known
<hazmat> sounds like your network is setup quite correctly
<hazmat> er.. isn't
<hazmat> .local is a zeroconf address
<melmoth> the actual name is localdomain, i guess this is just a "formating cut"
<melmoth> hmmm, funny. on the maas server the domain name is localdomain
<melmoth> but from my hypervisor, i need to put only local (as it appears in the error message) in /etc/hosts
<melmoth> is it a problem if i associate ip with nodes myself directly with cobbler ?
<melmoth> this way i could put stuff in /etc/hosts and be sure it s the same once and for all.
<vrturbo> reboot the juju zookeeper, mans should fix your problems
<vrturbo> then look in /var/lib/misc/dnsmasq {tab} {tab}
<vrturbo> also don't use .local breaks stuff
<vrturbo> ops replace mans with mdns (stupid auto correct)
<vrturbo> or while logged onto your maas server run
<vrturbo> tcpdump -npi eth0 not port 22 host {node ip that doesn't resolve}
<vrturbo> then reboot the node with dns issues
<vrturbo> watch the tcp dump as it comes up
<vrturbo> should see some traffic on port 5353
<melmoth> it s not a node that has dns issue, its the main hypervisor. If i reboot this one, all the world reboot :)
<melmoth> but i ll try to fiddle with cobbler manually to be sure each node always has the same ip
<melmoth> this way, it should work as i want, as long as maas does not fiddle with this itself.
<vrturbo> hmm I run maas and juju zookeeper on virtual nodes
<vrturbo> didn't want to blow away a full hardware nodes for their services
<melmoth> do you run juju on a vm or the hypervisor itself ?
<vrturbo> vm
<melmoth> ah, ok.. that would be another option
<vrturbo> in fact you can run juju from any node
<vrturbo> but when you boot strap it deploys the zookeeper
<vrturbo> without the zookeeper you can't connect to the environment
<vrturbo> there ain't much doco on MAAS
<vrturbo> it's been a big pain for me
<vrturbo> dnsmasq + MAAS + cobbler is a little buggy
<vrturbo> I've seen a launchpad blueprint to take cobbler out of the MAAS server, maybe things will be better then
<twobottux> aujuju: Juju error: Server refused to accept client <http://askubuntu.com/questions/141687/juju-error-server-refused-to-accept-client>
<surgemcgee> So many questions...
<surgemcgee> What is the url for the entire charm repo? Fill in the blank --> charm getall _____
<imbrandon> SpamapS: btw, i dident get my golden-star for the Ubuntu charm idea :)
<surgemcgee> Does the above statement mean that my comments are being considered/noticed? (Not trying to be crass)
<surgemcgee> 3--  A clover, haha
<jcastro> hmm, "charm getall" used to work didn't it?
<jcastro> I distinctly remember a command to snag them all at once
<SpamapS> imbrandon: Hah, true.. that was your idea. :)
<SpamapS> jcastro: it "works"
<SpamapS> but its so bloody slow
<SpamapS> I think we may need to maintain a tarball somewhere with all of them so people can wget the whole archive
<negronjl> 'morning all
<SpamapS> I bet we can maintain that in charm-tools as a download file actually
<imbrandon> SpamapS: how ?
<imbrandon> the last bit
<SpamapS> launchpad has an API for uploading files
<jcastro> negronjl: I'm going to assign you a card for the graphs measuring review queue performance if that's ok with you?
<imbrandon> ahh ok i got ya, i was thinking actually IN charm-tolols
<imbrandon> tools*
<negronjl> jcastro:  Sure ... I have some questions when you get a chance re: patch pilot
<imbrandon> i see what ya mean though, works kinda like a reverse debian watch ?
<jcastro> negronjl: yeah,  mira we should have a quick charmers G+ if we can
<negronjl> jcastro:  ok ... invite me
<jcastro> SpamapS: imbrandon: you guys got time for a quick pow wow?
<imbrandon> yup let me login real fast
<imbrandon> been going back and watching all the juju sessions i missed and some i dident as refreshers, good good stuff
<jcastro> handy, having the important sessions recorded. :)
<imbrandon> yea
<imbrandon> would be cool if we could get all of them even if audio only
<imbrandon> as i just have them in a minimized browser anyhow ignoring the vid
<jcastro> invites sent
<surgemcgee> I feel like I've wandered here. You all going to a hangout?
<surgemcgee> surgemcgee has stolen the blue moon!
<SpamapS> jcastro: yes b.r.t
<SpamapS> jcastro: curses.. something requires a reboot. 2 minutes
<hazmat> surgemcgee, why do you want to do charm getall?
<imbrandon> SpamapS: https://groups.google.com/forum/?fromgroups#!topic/ansible-project/x_0gDQv4co4
<imbrandon> SpamapS: thats the email chain i spoke of
<SpamapS> right, Ansible
<imbrandon> and thats the one wheere i was saying "hey whats next?" he asked
<imbrandon> negronjl: ^^
<imbrandon> jesus, i think i used all my energy for the rest of the week on that call tho, i seriously think i need to go back to the doc's ( already had one apt since uds )
<surgemcgee> I will just grab the ones I need, no biggie. Also, the charms are installing puppet. Shouldn't puppet be installed on the dev client and puppetmaster be used in the install script?
<surgemcgee> And is that DNS entry nessessary?
<imbrandon> SpamapS: is charm getall smart enough to update the local cs too ?
<SpamapS> imbrandon: it will run 'mr update' if it sees a .mrconfig in the dir already
<imbrandon> kk
<SpamapS> imbrandon: I think it might also run 'charm update' in that case too
<imbrandon> that would be helpful
<imbrandon> lol
<imbrandon> incase the maint dident inc the rev
<imbrandon> i guess
<SpamapS> no it just fetches the list again
<imbrandon> ahh
<SpamapS> so any new charms will get checked out
<SpamapS> the whole thing is really crap, just use charm get :)
<imbrandon> kk, yea between that and the series , i dont see a need ( well anytime even semi soon but not gonna say never ) for cs instructions
<imbrandon> like i thought
<imbrandon> SpamapS: crappy but exists > nothing
<imbrandon> SpamapS: e.g. my php in enterprise + world theory :)
<SpamapS> alright, txzookeeper is in Debian NEW queue.
<SpamapS> on to txaws
<jkyle> I have a question for juju+maas. Can you declare a specific target for a juju deployed service?
<jkyle> For example, say I have a "monitoring services" charm and I want to deploy to a specific class of machine or a specific machine
<james_w> jkyle, it sounds like constraints are what juju provides for what you are wanting to do
<jkyle> cool, I'll search the docs :)
<james_w> https://juju.ubuntu.com/docs/constraints.html
<james_w> jkyle, https://juju.ubuntu.com/docs/constraints.html#provider-constraints
<marcoceppi> jcastro: are we getting rid of this or what? https://docs.google.com/spreadsheet/ccc?key=0AoW1nhI7IMt3dFRvSFdkZmNqQ0t3RjZ2QTR2Z19teWc#gid=0
<jkyle> james_w: I see "orchestra" referred to. I thought that MaaS + Juju was the successor to Orchestra. can you use cobbler classes directly?
<james_w> jkyle, I'm guessing that's just a leftover artefact
<jkyle> I think using cobbler management classes would be stellar
<jkyle> as a sort of "pool" specifier
<marcoceppi> jkyle: I don't think you can use cobbler classes with MAAS + Juju yet
<marcoceppi> to my knowledge, only maas-name can be used by juju constraints as of now
<imbrandon> sweet, more swag ... email subject of "blitz.io: You just got a tshirt!" is always cool :)
<MarkDude> imbrandon, just shared the zombie comic and the pinup albums- let me know what you think.
<imbrandon> nice , i'll check em out in a few
<imbrandon> i think i have enough cough medicine in me to make it till bedtime :)
<imbrandon> well and actually be productive somewhat
<MarkDude> jcastro, right now Fedora is having its elections. Next week is townhalls. I was hoping to get a sentence or two from you as far as how cool it will be to have Juju in the repos
<MarkDude> This will be easier with some board members on it
<MarkDude> No big hurry tho :)
<imbrandon> MarkDude: he said something about flying home today a few hours ago so he is likely mid-ait
<imbrandon> air*
<MarkDude> Oh cool
<MarkDude> I will email him then
<MarkDude> is it jcastro@ubuntu.com?
<marcoceppi> MarkDude: according to LP it's jorge@ubuntu.com
<MarkDude> Cool, that was my 2nd guess
<MarkDude> ty marcoceppi
<mhall119> hazmat: your postgresql changes seem to have done the trick, thanks!
<hazmat> mhall119, awesome, thanks
<jcastro> http://news.ycombinator.com/item?id=4020455
<jcastro> upvotes please!
<mhall119> jcastro: AWS has ARM hardware now?
<jcastro> we simulate
<jcastro> so when this hits the market
<jcastro> people can be ready
<jcastro> it's a simulated Calxeda box
<mhall119> oh, ok
<mhall119> there's something almost zen-like about watching my instances install and configure themselves in debug-log
<jcastro> I know
<jcastro> can you believe people used to launch instances like this by hand?
<jcastro> mhall119: on tuesday you should link up with m_3, he's been at a conference but worked with chris on the django/summit stuff
<jcastro> I am pretty sure you guys could sort out your issues in a few minutes of facetime
<mhall119> jcastro: hazmat sorted out the last of my issues
<jcastro> ah, rock and roll then
<hazmat> mhall119, thanks for your patience on those round trips..
<hazmat> jcastro, oh.. that's what its for
<hazmat> i was wondering why we have amis for that stuff
<jcastro> hit the ground running when they ship!
<mhall119> jcastro: check it out: http://bazaar.launchpad.net/~mhall119/summit/charm/revision/2
<mhall119> that's all I had to change between the initial charm generated, and what summit needed to work
<mhall119> and most of that is because Summit does stuff in a funny way
<hazmat> mhall119, do you need to do the init-summit and pullapps against the real db?
<mhall119> hazmat: no
<hazmat> cool
<hazmat> just checking
<mhall119> those just setup the local disk env
<imbrandon> oh crap, i just learned some new nginx magic SpamapS , 4x the current ludicrous numbers avail now ... with just a bit of creative request queuing upstream
<imbrandon> hrm , time to get this on one of my sites and really test it more than local, only bad part is it requires a patch to nginx
<imbrandon> :(
<SpamapS> imbrandon: does nginx have some kind of request de-duper for when you have to fetch from the backend?
<SpamapS> imbrandon: have always thought caches should do that
<SpamapS> but I've never pushed optimization far enough where that would have been enough of a win
<imbrandon> SpamapS: not sure, but this is basically making all the requests pool in nginx then only handing php 1 at a time
<imbrandon> and man, its MUCH better at queueing than even php-fpm
<SpamapS> imbrandon: interesting
<imbrandon> and to be honest i thought thats how it worked internally to begin with but i guess not
<imbrandon> that and i think i figured out why i always peak around 500 concurrent connections and could never figure out why
<imbrandon> i was doing a bit of math wrong and assumed 8192 connections * 4 worker processes == win
<imbrandon> but it actually == about 500 concurrent due to some wonky math
<SpamapS> GWB style math :)
<imbrandon> sooooo i may have found a few bits that will make some nice additions to the base config :)
<imbrandon> lol
<SpamapS> Fool my servers you can't get fooled again.
<imbrandon> still need to test it all out tho
<surgemcgee> What is the best method of moving templates around? I want a simple vhost in sites-available. Can I do this natively?
<surgemcgee> Also, is there a way to package two charms together? The one is more of a generic template for the main one and I need people to know this.
<SpamapS> ugh, looks like txaws is incompatible with twisted 12 ...
<SpamapS> surgemcgee: There are two ways to get that done
<SpamapS> surgemcgee: there's a very experimental 'charm splice' command that I developed, but its a bit crackful as it can't really relate the two charms together (well it can, but it emulates this in a really weird way)
<SpamapS> surgemcgee: the other way is to turn the detailed part into a subordinate charm, and have the generic service charm as the primary..
<SpamapS> hazmat: ugh, lp:txaws is a mess
<SpamapS> hazmat: no tags.. any idea what revision 0.2.3 was taken from?
<SpamapS> hazmat: I *think* it is r139
<SpamapS> Ahh.. txaws works fine w/ twisted 12 .. missing files in the tarball
<hazmat> SpamapS, that looks about right
<hazmat> SpamapS, agreed its inanely difficult to tell, i had to resort to a diff
<SpamapS> hazmat: I just used log to find commits before/after release
<SpamapS> hazmat: but yeah, diff confirms.. sort of (since some files are not in sdist)
<SpamapS> aaaaannd txaws is now in Debian NEW queue
 * SpamapS moves on to juju
<SpamapS> man this is so much easier when I don't need a debian sponsor. :)
#juju 2012-05-25
<hazmat> m_3, gluecon looks awesome
<hazmat> m_3, pls do a write up
 * hazmat is wishing he had gone
<m_3> hazmat: yup will do... it was great
<m_3> wish they would've accepted my talks...
<hazmat> m_3, "RT â@adrianco: If your instance AMIs depend on an external repo to configure post boot you have a big SPOF.â <- Bingo.."
<m_3> hopefully that'll be different next year though... juju came up in a couple of presentations... and I talked to folks a lot
<hazmat> m_3, right on
<hazmat> s3 repos hopefully ftw
<m_3> that guy's talk rocked!
<m_3> they manage lots of static amis though... they're quite ripe for something like juju
<m_3> and it was nice to realize we'd been just playing with more scale than they usually use for one app
<m_3> (most of their load is really in big CDNs... not on ec2 proper)
<hazmat> m_3, did you see derek richardson's talk?
<hazmat> er.. collison
 * hazmat googles for all the slides he can find
<JoseeAntonioR> guys, is there an indirect way to deploy joomla through juju?
<bkerensa> JoseeAntonioR: My cousin might bring me some Brazillian Pisco ;)
<JoseeAntonioR> bkerensa: what the... I think there's no brazilian pisco!
<bkerensa> :D
<bkerensa> we shall see
<SpamapS> JoseeAntonioR: somebody wrote a joomla charm once.. its been a while though
<imbrandon> SpamapS: happen to know a list of deps for charm-tools ? finally getting it ported to osx today ( or trying ) and seems to be missing a few things, works over all but like proof gets this
<imbrandon> bholtsclaw@ares:~/Projects/local/charms/precise/newrelic$ charm proof
<imbrandon> abort: Unknown script hl
<imbrandon> usage: charm [ --help|-h ] [ script_name ]
<imbrandon> script choices are:
<imbrandon> /usr/bin/charm: line 14: exec: list_commands: not found
<imbrandon> most of it looked to be bash when i perused the source, but i dident inspect everything
<imbrandon> ohh nevermind
<imbrandon> its fskin bsd's readlink is broken
<imbrandon> and thus OSX's
<imbrandon> i always forget that
 * imbrandon patches
<SpamapS> imbrandon: the readlink thing can be solved by installing GNU coreutils IIRC
<SpamapS> imbrandon: we need to get rid of the readlinks I think
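The breakage here is that BSD/OSX readlink lacks GNU's `-f` canonicalize flag. Until the readlinks are gone, a portable substitute can be sketched in plain sh; this is an illustrative one-level resolver (it does not chase chains of nested symlinks), not anything shipped in charm-tools.

```shell
#!/bin/sh
# Portable stand-in for 'readlink -f' on systems whose readlink lacks -f
# (BSD, OSX). Resolves the directory physically and one level of symlink.
resolve_path() {
    target="$1"
    dir=$(cd "$(dirname "$target")" && pwd -P) || return 1
    base=$(basename "$target")
    if [ -L "$dir/$base" ]; then
        # Parse the 'name -> target' form of ls -l; crude but portable.
        link=$(ls -l "$dir/$base" | sed 's/.* -> //')
        case "$link" in
            /*) printf '%s\n' "$link" ;;          # absolute link target
            *)  printf '%s/%s\n' "$dir" "$link" ;; # relative to the link's dir
        esac
    else
        printf '%s/%s\n' "$dir" "$base"
    fi
}

# Demo: create a symlink in a temp dir and resolve it.
tmp=$(mktemp -d)
touch "$tmp/real"
ln -s "$tmp/real" "$tmp/link"
resolve_path "$tmp/link"
```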
<SpamapS> imbrandon: I was thinking I'd switch charm-tools to work like juju-jitsu, with autotools setting the dirs, rather than trying to be all clever :)
<imbrandon> yea, that would rock, there is a bit more too but i'm on the phone
<imbrandon> one secd
<imbrandon> and yea i installed coreutils via brew
<imbrandon> to semi solve it
<imbrandon> anyhow yea, sed is different etc. too, but GNU sed is installed via brew too
<imbrandon> problem is, without autotools it's a PITA to switch everything cuz the GNU coreutils are prefixed with g on osx
<imbrandon> so ginstall gsed etc
<imbrandon> you CAN put them in the path before the BSD ones but it breaks other things
<imbrandon> sooo i'd rather not do that, esp in a package i plan to distribute
<imbrandon> :)
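The workaround imbrandon describes can be sketched as a small shim that prefers the g-prefixed GNU tools when present, without reordering PATH (a sketch; the `resolve` helper name is made up):

```shell
#!/bin/sh
# Prefer the g-prefixed GNU coreutils (from `brew install coreutils`)
# when they exist, falling back to the system tools otherwise, so the
# GNU versions never have to shadow the BSD ones in PATH.
if command -v greadlink >/dev/null 2>&1; then
    READLINK=greadlink
else
    READLINK=readlink
fi
if command -v gsed >/dev/null 2>&1; then
    SED=gsed
else
    SED=sed
fi

# BSD readlink lacks -f; GNU readlink (and greadlink) canonicalize with it.
resolve() {
    "$READLINK" -f -- "$1"
}
```

On Linux this resolves to the plain tools; on OSX with brew coreutils installed it picks up the GNU variants.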
 * imbrandon is trying to think of the best way, but i think a ./configure that has some if/else for the platform will likely be the best
<imbrandon> then i don't have to maintain a patch, and it can be "upstreamed" except for the install formula, like juju
<imbrandon> ideally i've been working on a way to upstream it ALL if possible, but in a way that the current ubuntu devs / maintainers don't have to care about it or really even realize it's there
<imbrandon> etc
<imbrandon> still a WIP :)
<imbrandon> oh hey SpamapS , thought of something new: with the Author spec just now going in, would it be wise to differentiate the Author and Maintainer like git ( and i assume others like bzr, but do not know ) separates the author and committer for commits ( e.g. a sponsored commit ) in the metadata? i think that would allow for something akin to what we're wanting to do with per-charm-commit access to be a little clearer too that way, kinda like the diffre
<imbrandon> not 10000% sold on it, but figured now would be the time to bring it up rather than later
<imbrandon> hrm , ok totally separate train of thought here, back on my coreutils and other gnu/bsd tools "issue": how far-fetched do you think it would be to have the juju and related tools, on install, create a chroot env or something like the python virtualenv that puts the gnu utils and the many other things i've come across so far that would be ok to hack on MY machine but not good for long term or wide use, so a terminal could be fired up on osx and then 
<imbrandon> sounds complicated but i think it would be a lot less complicated than the current hacks i'm contemplating and would work on other distros too with very little care for their current setup that way
<imbrandon> because really in every context juju cares about OSX is just a BSD* distro
<hazmat> Author spec?
<imbrandon> err the maintainer field
<imbrandon> got myself mixed up
<hazmat> i'm not sure if a separate author spec is adding significant value
<imbrandon> yea
<imbrandon> not really a whole spec
<imbrandon> just mean, while we're adding the maintainer field to the metadata, it might be a good idea to add author also
<imbrandon> as to have a clear separation
<imbrandon> and really it was just a passing thought; it might not be all that great, but it is done in other things like RCS and debian packaging, so not too far fetched
<imbrandon> but figured now instead of a month from now would be the time to think about it
<marcoceppi> I think, for the most part (and quite some time) the author will be the maintainer
<imbrandon> likely so for 99% of them
<imbrandon> was more to not have to change it later and go through a change again, since we're making one similar change now
<imbrandon> tis all
<imbrandon> but yea you're likely correct
<imbrandon> btw heya marcoceppi , hear you had some of that nasty bug too, hope ya feelin better
<marcoceppi> much
<imbrandon> :)
<imbrandon> i got it pretty bad for a few days but i dont think near as bad as you from what jorge said
<imbrandon> btw that tillo mashup thing is slick , nice job :)
<imbrandon> twillo*
<m_3> trello?
<m_3> :)
<imbrandon> oh yea Twilio is the sms thing
<imbrandon> i always mix them up
<m_3> marcoceppi: yeah, I agree, your mashup is cool
<marcoceppi> strapello?
<m_3> yup
<imbrandon> yea
<marcoceppi> thanks :) need to push up some caching so Jorge's page doesn't take three years to load
<imbrandon> hahaha
<imbrandon> today's goal for me is to finish wrangling the kickstrap ubuntu theme into a sphinx theme and base django app for the ubuntu-community-themes pkg. i'm like 99% done; well, i am done with the django one, sphinx is close but it's a weird templating lang
<m_3> hazmat: latest plan on fixing the oneiric stacked branches was to rename the base (precise) branches back, then fixing the stacking, then renaming the precise branches again.  This sounded like a complete clusterquack to me... I'm gonna explore other options. nothing new from the lp team, they were mostly focused on lp changes to prevent problems in the future.  I'll chase clint down and brainstorm once there's more caffeine in the mix.
<m_3> SpamapS: please budget a little time to discuss ^^ later today
<hazmat> m_3, hmm.. yeah.. that seems like a reasonable solution, cluster quacky indeed.. but we need to restore access to fix.. er break.. and then rename.
<hazmat> that's all kinds of retarded i agree
 * hazmat dons a PC hat
<hazmat> no offense intended
<SpamapS> m_3: I think thats actually an ok plan.
<SpamapS> Its just a result of the weird distro model which we've inherited. We won't repeat the problems with quantal
<m_3> this weekend then
<hazmat> SpamapS, you mean do the invalid restack, to only temporarily make the branches inaccessible prior to renaming..
<m_3> maybe we can even arrange to have a lp superpower person available too
<hazmat> the ideal solution james_w pointed out was stacking on the id
<hazmat> er. the branch id
<hazmat> so it doesn't die on rename
<m_3> hazmat: bugs are in to fix that going forward
<m_3> but it won't help now
<m_3> well... maybe it would
<james_w> there is a way to avoid breaking all the precise branches while doing it
<james_w> but it's more work
<m_3> oh?
<m_3> james_w: do tell
<SpamapS> hazmat: yes, that would be good. Another good solution would be to fix the charm store/charmworld to not care about the branch name for official branches
<james_w> the stacking could be fixed directly, but someone would have to work out the branches they were supposed to stack on, and find out the ids of them
<m_3> james_w: oh... that doesn't seem too bad then
<SpamapS> james_w: is the ID not accessible via LP API/ssh+bzr ?
<james_w> SpamapS, hmm, maybe it is
<m_3> james_w: and a _whole_ lot safer
<m_3> i.e., can be done right now
<james_w> not directly that I know of, but it's possible it's available
<hazmat> m_3, its not something we can use right now
<hazmat> m_3, even if it was available
<james_w> what would renaming the precise branches break?
<m_3> james_w: access via the lp:charms/precise aliases... as well as access via the store ( juju deploy mysql )
<hazmat> james_w, long term nothing, temporarily it would break anything looking for charms, and existing branches that want to pull/push
<hazmat> er.. on the push side it might get uglier
<hazmat> if somebody pushes in the middle of this op..
<hazmat> james_w, m_3, it won't break the store
<m_3> scheduling it over the weekend is just a crapshoot
<m_3> huh?
<hazmat> the store assembles zipfiles for download
<m_3> sure it will... the store is looking for
<hazmat> it doesn't serve directly from bzr
<m_3> oh, you mean the store has it cached
<m_3> ok
<james_w> the store won't see anything new, but it's not going to delete all the charms is it?
<hazmat> it polls bzr to look for new revs
<hazmat> james_w, no the store and charm browser never delete things
<james_w> and it would presumably pick up again when they were renamed back
<james_w> the lp:charms/precise aliases won't break
<hazmat> hmm.. actually the charm browser will.. on the assumption that the upstream branch changed.. but it will fetch them again when they're back in place.. the metadata would remain though
<james_w> and modern bzr uses those I believe, so most pulling and pushing would still work I think
<m_3> well maybe I'm just being a pantywaist then
 * m_3 puts on _his_ PC hat
<james_w> not renaming would be better
<james_w> but likely needs help from someone with lots of LP internals knowledge, and probably a web-ops too as it will probably require some db/fs surgery
<SpamapS> the rename is API doable
<SpamapS> as is the stacking change
<SpamapS> so we'd have, what, 15 minutes of exposure?
<SpamapS> I say do it
<hazmat> yup
<hazmat> it's not ideal as a long term solution for other releases, but it works for restoration
<m_3> ok, I'll test it out on a single branch and then go from there if that works
<james_w> SpamapS, the stacking change can be done over the API?
<hazmat> james_w, so if we wanted to do this in the future without running branch-distros.. could we just push the branches and then set some api flag to mark the series active?
<hazmat> ie.. bypass most of branch-distros
<james_w> hazmat, yeah, you could
<james_w> you can't manipulate the series, but that's disjoint anyway
<james_w> once the series is created, you can push the branches, then use the API to make the official branch links
<james_w> but it's a one-line fix to branch-distro to avoid this
<james_w> it doesn't get the new branch names "right" but I understood that was going to be changed in the charm store anyway
<SpamapS> james_w: well not API, but we can bzr reconfigure and push, right?
<james_w> SpamapS, I don't know if that will change it to stack on the id rather than the name
<james_w> we can experiment
<m_3> SpamapS: that's my understanding of the current plan... still needs to be tested
<SpamapS> Oh I was thinking of just changing the stacked on name.
<james_w> there are a couple of scripts we can run to definitely fix it up from the LP size while the precise branches are renamed
<m_3> we rename the base, then bzr reconf, then push that back up, then rename the base back to trunk
<SpamapS> the ID is a nice idea, but if I can't get it, I'd rather have a solution for today and then fix the net distro branching.
<james_w> you can't reconf to a name that doesn't exist
<james_w> hmm
<SpamapS> s/net distro/next distro/
<james_w> how about this
<m_3> right, so we rename /trunk back to /precise (the base braches)
<james_w> 1. branch from the correct name to the wrong name for the branches for precise
<SpamapS> james_w: heh right, so we rename, branch, rename, reconf, push
<james_w> 2. done
<m_3> _then_ we should be able to reconfigure
<SpamapS> james_w: Ahh so just fake it basically?
 * SpamapS tries that
<james_w> SpamapS, it's perfectly valid from bzr's point of view, just a bit messy
<james_w> might confuse a few people
<james_w> it could be cleaned up, but there would be working branches with no interruption
<m_3> oh, so you mean we create new /precise's instead of renaming from /trunk?
 * SpamapS tries it w/ hadoop-slave
<m_3> cool
<james_w> m_3, yeah, they will stack on trunk, which will make for slower access to the oneiric branches, but it should fix them
<SpamapS> since that one is borked in precise anyway
<SpamapS> works
<SpamapS> james_w: I knew we gave you  a laptop for a reson.
<SpamapS> reason even
<m_3> so we still bzr reconfigure those branches too?
<SpamapS> somebody give me a new keyboard ugh
<james_w> heh
<james_w> m_3, that would be good
<m_3> or just leave the fake-out in place?
<SpamapS> lp:charms/oneiric/hadoop-slave works
<m_3> whoohoo!
<m_3> well that was a lot easier than we were making it :)
<james_w> I don't know if it will stack on the id at that point, so no-one is allowed to rename the precise  branches again
<SpamapS> all I did is push lp:charms/precise/hadoop-slave -> lp:~charmers/charms/precise/hadoop-slave/precise
<m_3> right
<james_w> bzr branch -d lp:charms/precise/$charm lp:~charmers/charms/precise/$charm/precise
<m_3> but we really should bzr reconf, then we can remove the extra /precise branch
<james_w> someone in ~charmers can put that in a for charm in $charms loop
<james_w> m_3, +1
<SpamapS> james_w: does that pull/push automatically or does it happen all on the other side?
<SpamapS>   stacked on: /~charmers/charms/precise/hadoop-slave/precise
<SpamapS> still stacked on the name :)
<james_w> SpamapS, it's client side, it's just easier to type :-)
<SpamapS> james_w: gotchya. Doing that now
<james_w> err, I probably got the parameters the wrong way round
<james_w> bzr branch lp:charms/precise/$charm lp:~charmers/charms/precise/$charm/precise
<james_w> or whatever works, you get the idea :-)
<james_w> bzr reconfigure --stacked-on lp:charms/precise/$charm lp:charms/oneiric/$charm is the next step I think
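james_w's two steps, wrapped in the `for charm in $charms` loop he mentions earlier. Since these commands modify Launchpad-hosted branches, this sketch only prints what would run (the charm list and the `restack_cmds` helper name are made up):

```shell
#!/bin/sh
# Dry-run sketch of the recovery loop: recreate the renamed-away
# "precise" stacking target, then restack each oneiric branch on it.
# Echoes the bzr commands instead of executing them.
restack_cmds() {
    charm=$1
    echo "bzr branch lp:charms/precise/$charm lp:~charmers/charms/precise/$charm/precise"
    echo "bzr reconfigure --stacked-on lp:charms/precise/$charm lp:charms/oneiric/$charm"
}

charms="hadoop-slave alice-irc"   # placeholder list of affected charms
for charm in $charms; do
    restack_cmds "$charm"
done
```

Dropping the `echo`s (and running as someone in ~charmers) would perform the actual restack.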
<SpamapS> james_w: yeah the -d was wrong
<SpamapS> Ok thats running
<SpamapS> the oneiric branches should be resurrecting one by one :)
<james_w> then deleting all the /precise branches
<SpamapS> m_3: can you verify alice-irc works?
<james_w> which can be scripted, but last I looked it wasn't directly available from Launchpadlib
<m_3> james_w: or maybe --stacked-on lp:~charmers/charms/oneiric/$charm/trunk?
<m_3> SpamapS: checking
<james_w> m_3, yeah, that's probably better, you're right
 * james_w -> lunch
<m_3> james_w: thanks!
<james_w> np
<m_3> SpamapS: can branch lp:charms/oneiric/alice-irc, which shows parent_location = bzr+ssh://bazaar.launchpad.net/+branch/charms/oneiric/alice-irc/
<m_3> but don't know how to see what that's stacked on
<james_w> m_3, bzr info lp:charms/oneiric/alice-irc should show it
<SpamapS> m_3: Stacking is unchanged at this point
<m_3> ok, as expected, still stacked on /precise
<m_3> right
<SpamapS> its slow going
<SpamapS> I'm up to hadoop-mapreduce
<negronjl> 'morning all
<imbrandon> is there an hdfs charm yet? i saw a nice addon for that that adds an external REST api to do crud ops on it
<imbrandon> moins
<SpamapS> imbrandon: well really the hadoop charm is deploying HDFS
<m_3> so how do we tell http://paste.ubuntu.com/1006653/ to stack upon itself?
<SpamapS> m_3: I think we want it to stack on precise
<SpamapS> m_3: just.. the precise we have.. not the precise we renamed away from. :)
<m_3> ohhh...
<imbrandon> SpamapS: nice, i may poke that rest api later then and see if it will make a good sub charm for hadoop
<m_3> ok, so what happens in q?  we change everything to stack on top of ....../trunk for quantal?
<m_3> so the base is following series going forward?
<m_3> anything different for LTS?
<SpamapS> m_3: I'm not sure. We need to think about this. We haven't really given thought to how we'll do updates.
<SpamapS> m_3: My thinking is we'll just evaluate changes to make sure they won't break existing deployments, and then just push them into the branches.
<imbrandon> SpamapS / m_3 : this is where i said the drupal branch model would work better
<imbrandon> let me see if i can find a good infographic and real example and show yall. cuz i'll never sell/explain it right
<imbrandon> and it's really just a minor change but a significant one
<SpamapS> imbrandon: yeah, we need good ideas
<SpamapS> otherwise we'll put our brains on it..
<SpamapS> and then.. well.. thats going to get ugly ;)
<imbrandon> lol
<imbrandon> well its one of the things that came from drupal with their "not invented here" mentality that actually is better than most anywhere else
<imbrandon> unlike most of the stuff thats half baked
<m_3> at this point I'm leaning towards let's backport changes manually and not use any _help_ that stacking might give us for this process
<imbrandon> tl;dr it's basically: flip the charm name and the series in the url ( that also means the series is a branch )
<m_3> tried bzr reconfigure --stacked-on lp:~charmers/charms/precise/alice-irc/trunk lp:~charmers/charms/oneiric/alice-irc/trunk
<imbrandon> but there is lots of good reasons why
<m_3> bzr: ERROR: Lock not held: LockableFiles(lock, bzr+ssh://bazaar.launchpad.net/~charmers/charms/oneiric/alice-irc/trunk/.bzr/repository/)
<m_3> maybe
<m_3> i need to do it locally then push
<imbrandon> so all the charms become like this: lp:~charmers/charms/alice-irc/precise , and trunk is the current DEV series, like right now Q, but each series gets a branch of the charm, not an LP series
<imbrandon> make sense ?
<m_3> nope, still could not acquire remote lock when trying to push the changes... locally restacked though
<m_3> might need lp for that
<m_3> http://paste.ubuntu.com/1006670/ looks like exactly what we want
<imbrandon> that's how the few thousand modules on drupal.org do their thing, each one has http://code.drupal.org/<project>/<module-name>/<drupal-target-major-release>
<SpamapS> m_3: stacking doesn't really help you. Its just an efficiency play for Launchpad IIRC
<imbrandon> so like drupal.org/views/views/7.x
<m_3> nice
<SpamapS> imbrandon: so there's a branch per charm, and the one people push to (lp:charms/foo) is the current dev series?
<SpamapS> imbrandon: thats *exactly* how it works now.
<m_3> alias bbranch='bzr branch --dont-you-dare-stack-anything-ever'
<SpamapS> imbrandon: sorry, there's a branch per charm, per series
<SpamapS> m_3: don't crush LP just because you don't like stacking. :)
 * m_3 is just a hater :)... need more coffe
<m_3> e
<imbrandon> SpamapS: http://drupal.org/node/1015226
<imbrandon> SpamapS: no, reversed from now
<imbrandon> SpamapS: where the series is set by the charm as a branch
<SpamapS> imbrandon: but its conceptually identical
<imbrandon> SpamapS: yes but fixes the problems like this
<m_3> seriously though, I'm talking out of ignorance and need to take more time to learn the tool
<imbrandon> that link is their policy
<imbrandon> SpamapS: http://drupal.org/node/1015226
<SpamapS> imbrandon: the only problem we have now is that the LP tools and the charm store don't jive
<imbrandon> take a gander real fast
<imbrandon> SpamapS: right, but there are other small nuances that make it nicer too
<imbrandon> like being able to share merges easier
<imbrandon> between series
<SpamapS> You can do that w/ the current system too
<m_3> SpamapS: so when your run is done, we should get #lp to unlock the series so we can push up the restacking changes, then remove the extra /precise branches
<imbrandon> sure, i don't mean you can't, just a nicer workflow that happens to fix this series lp issue too
<imbrandon> is what i meant
<imbrandon> worth a look at least
<SpamapS> Like, how hard is it really to type 'bzr merge lp:charms/precise/foo' to get the latest goodness from precise in your branch?
<SpamapS> What you want is git
<SpamapS> we've talked about this
<SpamapS> its up for discussion
<imbrandon> well yea but it could be done here too
<imbrandon> when i thought about it later
<m_3> manual backport by the maintainers is cool for now
<SpamapS> whether I type bzr merge lp:charms/foo/precise or lp:charms/precise/foo .. matters not. Right?
<SpamapS> As long as they share ancestry
<SpamapS> so it goes smoothly
<imbrandon> sure sure, i am not trying to say the way we have is totally or even in any way broken
<SpamapS> then we can automate it
<imbrandon> just that there is "this other way thats cool and not tooo much of a change for some added stuff"
<SpamapS> I'm suggesting that their policy, and ours, are not really different at all.
<imbrandon> they arent
<SpamapS> Theirs is just optimized for git
<imbrandon> thus why i said a small change, but overall it really is significant
<imbrandon> well the way they use named branches it's really a lot like bzr too
<imbrandon> as it WAS based on cvs
<imbrandon> hehe
<imbrandon> or feature branches , howeever you call them, same thing
<imbrandon> anyhow it boils down to one main thing, trunk or master or whatever is never the branch that is published from
<imbrandon> ever
<imbrandon> thus declaratively saying in the branch what it's for
<imbrandon> not just what series on LP its pushed to
<imbrandon> thus would work on any hosting service
<imbrandon> and in our situation that would make branchname === series name
<SpamapS> m_3: Ok, I'm going to bzr reconfigure all the stacking now in oneiric, MMkay?
<m_3> I think what james_w put out should work without having it locally
<SpamapS> imbrandon: branchname is basically immaterial in the current system. The fact that the charm store chose 'trunk' was arbitrary. Its really "the series+charm official branch"
<m_3> we just gotta get the series open for pushing
<imbrandon> SpamapS: ok, i'll stop now, but please do read over that link a little sometime, even in passing, just cuz you know i don't explain things the best on irc either
<imbrandon> SpamapS: i fully realize that
<imbrandon> SpamapS: i'm saying how it COULD be, and the good things it brings if done that way
<m_3> SpamapS: i.e., `bzr reconfigure --stacked-on lp:~charmers/charms/precise/alice-irc/trunk lp:~charmers/charms/oneiric/alice-irc/trunk`
<imbrandon> like i said i dont mean its broken now
<SpamapS> imbrandon: so what's the advantage of swapping series and charm name in the path, which is all I see?
<imbrandon> SpamapS: i've been trying to explain that, but you're equating it to what's there now instead of looking at all it would mean, i think
<imbrandon> and yes, that's technically all it would be
<SpamapS> Because to me, there is a psychological benefit to grouping all of the charms together as a single entity per series, something that we make sure all works together.
<imbrandon> well you still have that
<imbrandon> you don't lose it
<SpamapS> what do I gain? I see nothing in this policy that changes anything we're doing
<imbrandon> you just may need a different view to see it
<imbrandon> you gain the control over the series in the charm branch itself AND you lose the headache of LP not jiving
<imbrandon> think about it like this
<imbrandon> when you fix a bug in juju core
<imbrandon> you push it to lp as lp:juju/some-named-fix
<imbrandon> because that has value in the name, same thing for the series instead of trunk becoming rolling
<imbrandon> and then the headaches now
<imbrandon> we have charm/series-aka-branch
<imbrandon> not trunk
<imbrandon> and again i dont mean the way we have is broken, just that there really is value in the other small change
<imbrandon> but yea i'm terrible about explaining concepts on irc , so glance over the link as you have time and i'll slap a quick image together in gimp that i think will illustrate it better too; it's not an urgent thing, it could be done anytime in the next 6 months before q releases and still have the same impact, no need to rush it
<imbrandon> that way i can present it to the list too , hopefully in a coherent manner
<SpamapS> imbrandon: no, I push the fix to lp:~clint-fewbar/juju/some-named-fix
<SpamapS> imbrandon: and with charms, I push to lp:~clint-fewbar/charms/precise/foocharm/some-named-fix as well.. and then propose for merging into lp:charms/precise/foocharm
<SpamapS> imbrandon: and then if that fix is good, and introduces no regressions/etc, and the branches have not diverged, it can be directly pushed to lp:charms/oneiric/foocharm
<SpamapS> imbrandon: if they have diverged, at worst, I must bzr merge it into oneiric.. a workflow that I do not love the mechanics of, but works fine in terms of how straightforward it is
<imbrandon> umm you just said the same thing i thought i just had
<imbrandon> but if thats not how i said it thats exactly how i mean, yes , thats the current flow and is exactly right and exactly shows what i mean with the other but ... yea , i need to get better at abstract --> real value conversion articulation
<SpamapS> Sure, a picture may help. :)
<SpamapS> james_w: when I tried using bzr reconfigure on one of the branches I got this: If your kernel is 2.6.30+, you don't need to follow this guide anymore as the kernel includes a proper driver.
<SpamapS> dohhh
<SpamapS> james_w: take-2 .. when I tried using bzr reconfigure on one of the branches, I got THIS: bzr: ERROR: Lock not held: LockableFiles(lock, bzr+ssh://bazaar.launchpad.net/~charmers/charms/oneiric/hadoop-slave/trunk/.bzr/repository/)
<james_w> SpamapS, heh, I was going to say that was a very unexpected message from bzr!
<james_w> SpamapS, would you run again using "bzr -Derror -Dlock" please?
<james_w> it should spit out a traceback, and the last stanza of your ~/.bzr.log should have some information on the locks that are being taken and released
<SpamapS> james_w: http://paste.ubuntu.com/1006739/
<james_w> SpamapS, try "bzr break-lock bzr+ssh://bazaar.launchpad.net/~charmers/charms/oneiric/hadoop-slave/trunk/" and then try again
<SpamapS> Break held by clint-fewbar@bazaar.launchpad.net on crowberry (process #1853), acquired 11 minutes, 15 seconds ago? ([y]es, [n]o):
<SpamapS> Broke lock bzr+ssh://bazaar.launchpad.net/~charmers/charms/oneiric/hadoop-slave/trunk/.bzr/branch/lock
<SpamapS> james_w: lock broken, same issue
<james_w> SpamapS, do you have a traceback?
<SpamapS> http://paste.ubuntu.com/1006767/
<SpamapS> james_w: ^^
<SpamapS> james_w: almost like its a deadlock or something
<james_w> SpamapS, if you break-lock again and do the bzr info, does it show the old or new stacked location?
<james_w> SpamapS, ah, it may work better if you call "reconfigure" with the sftp:// URL rather than the lp: or bzr+ssh: one
<Urocyon> Any good examples of how people are integrating juju and chef?  My google-fu isn't working for me right now.
<m_3> Urocyon: not yet... we've got tasks for charms that call chef-solo recipes, and a subordinate charm that might be able to register a service-unit as a node in a role with a chef server
<m_3> Urocyon: also probably a charm to help deploy chef-server
<m_3> Urocyon: it should be easy to call recipes from within charm hooks... the trick is splitting up the logic into "installation" logic -vs- "relation" logic
<Urocyon> I don't see any chef related charms in jujucharms.com/charms/precise  is there someplace else I should look for those?
<m_3> Urocyon: no, they don't exist yet... was just mentioning the todos :)
<SpamapS> Urocyon: we have tons of ideas, but very little time. Are you interested in experimenting?
<m_3> yes, please help!
<Urocyon> i've just started getting my feet wet here.  But I have some time today to experiment.
<m_3> Urocyon: you can see somewhat similar things done for puppet... lp:~michael.nelson/charms/oneiric/apache-django-wsgi/trunk, and lp:charms/puppetmaster
<SpamapS> Urocyon: do you have an existing chef system that you're interested in hooking up to juju?
<SpamapS> james_w: weird, sftp does seem to have worked.
<SpamapS> james_w: though I think stacking is now borked
<m_3> ha! nice
<james_w> SpamapS, it's a bzr bug then, it's missing some required handling for smart servers
<SpamapS> sftp://clint-fewbar@bazaar.launchpad.net/~charmers/charms/oneiric/hadoop-slave/trunk/ is now stacked on ../../../precise/hadoop-slave/trunk
<james_w> double bug!
<SpamapS> but now I can't 'bzr info -v lp:charms/oneiric/hadoop-slave
<m_3> actually, that's correct right?
<m_3> relative path for the ~charmers user
<SpamapS> hadoop-slave may be a bad example
<SpamapS> since we sort of half unpromulgated it
<m_3> bzr info lp:~charmers/charms/oneiric/hadoop-slave/trunk looks fine though
<SpamapS> m_3: hm
<SpamapS> so, the other ones don't stack on ../../..
<m_3> and I can branch it from there, so it's behaving ok
<SpamapS> they stack on an explicit /~...
<SpamapS> m_3: can you branch from lp:charms/oneiric/hadoop-slave?
<SpamapS> bzr: ERROR: Not a branch: "bzr+ssh://bazaar.launchpad.net/+branch/precise/hadoop-slave/trunk/".
<SpamapS> I get that because the stacking is relative
<SpamapS> instead of absolute
<m_3> nope... no love from lp:charms/oneiric/hadoop-slave
<SpamapS> yeah thats borken
<m_3> others look to be stacked on /~charme...
<m_3> but I'm having problems with alice-irc...
<m_3> bzr info lp:~charmers/charms/oneiric/alice-irc/trunk looks fine (abs path on base)
<m_3> but bzr info lp:charms/oneiric/alice-irc is borked
 * m_3 more confused now
 * SpamapS considers just going --unstacked
<m_3> oh!
<m_3> didn't know that existed
<m_3> of course
<SpamapS> Yeah
<SpamapS> its simpler
<SpamapS> for this particular conundrum
<m_3> plus like a hundred million
<SpamapS> eventually we'll have to make it work stacked
<SpamapS> but Yeah, let me try that
<m_3> I'd ask why, but...
<SpamapS> m_3: it will be important when we have 1000+ charms with 1000+ revs each
<SpamapS> m_3: but we'll probably be using git by then. :-P
<SpamapS> ;)
 * m_3 no comment
 * SpamapS hates relying on solutions that don't exist yet
<m_3> I know!
<SpamapS> alright
<SpamapS> unstacked works
<SpamapS> F it, I'm unstacking all oneiric branches
<SpamapS> and then we can delete the extraneous 'precise' branches
<m_3> ok
<SpamapS> and then we can actually *MOVE ON WITH OUR LIVES*
<SpamapS> hrm, we need to fix charm list to show the non-dev-focus branches as aliases, that would be cool
<m_3> oh, lp:charms/oneiric/<blah>?
<SpamapS> yeah
<m_3> does it matter?
<SpamapS> yeah
<SpamapS> so I can say "list all the oneiric charms"
<SpamapS> right now I can't really
<m_3> ugly multiple grep pipes
<m_3> but yeah
 * m_3 likes charm list a lot
<SpamapS> even the grep pipes aren't "official"
<SpamapS> since in theory we could have a branch owned by ~charmers that has been unpromulgated
<SpamapS> (which at this point is only ceremonial since the charm store does not delete.. but still)
<m_3> oh, I see
<SpamapS> sftp://clint-fewbar@bazaar.launchpad.net/~charmers/charms/oneiric/appflower/trunk/ is now not stacked
<SpamapS> m_3: try appflower
<m_3> yikes... no info from the alias
<SpamapS> wtf
<m_3> long-form gave info fine
<m_3> branching now
<SpamapS> perhaps have to repromulgate?! that looks really weird
<SpamapS>   parent branch: bzr+ssh://bazaar.launchpad.net/~4-ja-d/charms/oneiric/appflower/trunk/
<m_3> branched fine from both
<imbrandon> SpamapS: but you could if you just said list all oneiric charms in lp:charms/ with an oneiric branch :)
 * imbrandon stops
<m_3> SpamapS: the orig author is still in the set of branches (not saying stack)... jorge was in alice
<SpamapS> imbrandon: you're smarter than that. :)
<imbrandon> heh
<SpamapS> m_3: ah, so maybe that branch got deleted?
<m_3> hmmm... lemme dig
<SpamapS> m_3: I don't think info works on any aliases
<SpamapS> bzr: ERROR: Parent not accessible given base "bzr+ssh://bazaar.launchpad.net/+branch/charms/precise/alice-irc/" and relative path "../../../../../~jorge/charms/oneiric/alice-irc/trunk/"
<m_3> works perfectly well on lp:charms/hadoop
<SpamapS> yeah
<SpamapS> but I think thats only by mistake
<SpamapS> m_3: I think thats just because the relative path matches
<Urocyon> sorry, got distracted there for a minute.
<m_3> SpamapS: yeah, I'm getting the same results for other oneiric aliases
<m_3> Urocyon: np... nature of the medium :)
<m_3> SpamapS: I guess it doesn't matter if we can branch
<m_3> actually, bzr info lp:charms/oneiric/jenkins looks fine
<SpamapS> charm list | awk -F/ '/lp:~charmers\/charms\/oneiric/ {print $4}'
<SpamapS> best I can do for a "list of oneiric charms"
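SpamapS's filter can be sanity-checked against fabricated `charm list`-style output (the branch paths below are illustrative):

```shell
#!/bin/sh
# With / as the field separator, field 4 of an official branch path
# like lp:~charmers/charms/oneiric/wordpress/trunk is the charm name,
# so the regex keeps only ~charmers oneiric branches and prints that.
sample='lp:~charmers/charms/oneiric/wordpress/trunk
lp:~charmers/charms/precise/mysql/trunk
lp:~charmers/charms/oneiric/hadoop/trunk'
echo "$sample" | awk -F/ '/lp:~charmers\/charms\/oneiric/ {print $4}'
# prints:
#   wordpress
#   hadoop
```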
<Urocyon> Yes, I have a chef server running currently.  I was looking at juju as a way to provide some orchestration for my deployments... like making sure servers came up in a particular order, or that existing servers could react automatically to, say, a new webnode coming on line, or to integrate a new database server into an existing cluster.
<m_3> SpamapS: good enough
<m_3> Urocyon: sounds like a perfect integration case
<m_3> Urocyon: not nec the easiest _start_, but...
<SpamapS> Urocyon: ordering is hard, and juju can't really help you with that..
<SpamapS> Urocyon: juju can help you make your servers wait for their dependencies, so they in effect end up in the right "order"
<Urocyon> that seems sufficient, and maybe even preferable to specifying order.
<SpamapS> Urocyon: Right, it usually is. :)
<m_3> right... ordering between different services, no prob... within the units of a single service is hard
<SpamapS> Urocyon: simplest thing you could probably do would be to have charms which feed data into databags and run chef in response to the changes in configurations.
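The "wait for dependencies" pattern SpamapS describes looks roughly like this inside a charm's relation-changed hook. `relation-get` is juju's hook tool; the gating logic is pulled into a function here so it can be exercised outside a hook environment, and the relation key is illustrative:

```shell
#!/bin/sh
# Sketch of a relation-changed hook that waits for its dependency.
# Doing nothing and exiting 0 is the idiom for "not ready yet":
# juju fires the hook again when the remote unit's data changes.
ready_to_configure() {
    # $1 is the value the remote unit published (e.g. its hostname)
    [ -n "$1" ]
}

# Outside a real hook environment relation-get simply fails, leaving
# $host empty, which the guard treats as "not ready".
host=$(relation-get hostname 2>/dev/null)
if ready_to_configure "$host"; then
    echo "configuring against $host"
    # ... real configuration (e.g. a chef-solo run) would go here ...
else
    echo "waiting for relation data"
fi
```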
<SpamapS> Urocyon: one thing that may be difficult to get over is that juju wants to provision everything
<Urocyon> m_3 I thought maybe juju's various hooks and event handling could help there.    When x happens, then add role y  to server Z .. sort of thing.
<SpamapS> IMO, juju should decouple its provisioning parts from its event handling parts.. but thats a large discussion. :)
<m_3> Urocyon: yeah, it's easier to add a juju machine to chef, than the other way around atm
<m_3> Urocyon: so perhaps start with the specific case of adding a new db to an existing app
<SpamapS> m_3: FYI, unstacking progressing on all branches
<Urocyon> I was looking previously at cloudformation, which has a similar limitation.
<m_3> SpamapS: nice... you are a god among mere mortals
 * SpamapS deflects that comment to james_w
<SpamapS> Urocyon: the reason for the provisioning "limitation" is that there is great power in knowing that you will have a clean box at the start of things. :)
<m_3> you're just jealous of his new laptop
<SpamapS> pretty much
<SpamapS> carbon fiber > aluminum
<SpamapS> Urocyon: eventually I think we'll have a mode for juju where you just install it on existing servers and they phone back home and start doing their orchestration duties.
<m_3> yeah, I was pretty sad they didn't have extras for the judges
<SpamapS> m_3: I'm just tired of being a walking billboard for Apple. ;)
<SpamapS> But damn.. System76 and ZaReason make, frankly, f'ugly laptops
<m_3> SpamapS: that might not be too hard to get rigged up... enroll an existing machine with juju
<SpamapS> m_3: we might even be able to do it w/ jitsu :)
<SpamapS> m_3: just convince a machine it is running the local provider.. ;)
<m_3> I'm tactile, so the aluminum just feels good
<m_3> ha!
<SpamapS> m_3: or maybe orchestra/maas
<m_3> hmmmm
<SpamapS> m_3: and then just plug in machine ids using our handy dandy zk admin privileges
<SpamapS> jimbaker: ^^ get on it
<m_3> really though, what's in the way of installing juju packages and passing it credentials in a databag
<SpamapS> m_3: haha.. "credentials"
 * m_3 sigh
<SpamapS> meaning, the address of ZK? ;)
<m_3> yes, that
<SpamapS> m_3: don't worry, we'll have ACLs soon
<SpamapS> I think
<m_3> zk's gotta get a new node put in, but jitsu can do that really
<m_3> Urocyon: fun project :)
<m_3> SpamapS: what sort of battery life are you getting nowadays on your little one?
<m_3> I really was wishing I had that the past couple of days... we were short on juice at gluecon
<SpamapS> m_3: 5.5hrs usually
<jimbaker> SpamapS, unprovider sounds like a good idea to me
<SpamapS> m_3: sometimes only 5 if I watch too many videos :)
<m_3> wow
<SpamapS> m_3: yeah, precise added the "deep" sleeping where it basically turns everything off if all I'm doing is staring at the screen for a few seconds
<SpamapS> deep C6 or whatever its called
<m_3> oh nice
<SpamapS> which realistically, we all do quite a bit of
<m_3> was that a tweak or out of the box
<SpamapS> its standard
 * m_3 looks the other way and whistles
<SpamapS> the only tweak I did was choose Unity 2D
<SpamapS> seriously :)
<m_3> my mbp13's still a hog... less than half that time
<Urocyon> Where is the code for existing charms stored?
<m_3> Urocyon: jujucharms.com has browsable links to code.launchpad.net/charms
<m_3> Urocyon: sorry, the lp:charms/hadoop repo urls are actually "launchpad" shortcuts
<m_3> we just get used to throwing "lp:" around in here without warning or explanation
<SpamapS> Urocyon: the charms are stored/developed in bzr branches on launchpad. You can also use the built in "charm store" by just typing 'juju deploy foo' .. but there are issues with doing that in production (you can't change the charm for instance)
<m_3> apt-get install charm-tools && charm getall && grab coffee
<Urocyon> I haven't ever done any work with launchpad, but I'm guessing that's a bit like github, and that I should fork the charms I need and deploy using those?
<m_3> Urocyon: in short, yes
<SpamapS> m_3: ugh.. charm getall sucks man.
<SpamapS> <-- author of charm getall
<m_3> afaik, the biggest difference between launchpad and github is that lp has a lot of test/build-related stuff to make sure ubuntu packages are all green
<m_3> otherwise, you can pretty much assume common features
<m_3> Urocyon: btw, github.com/charms
<SpamapS> Urocyon: we'd rather that you use the charms as-is, and feed back any changes you see necessary so that all can benefit
<SpamapS> Urocyon: unlike chef, juju is designed for encapsulation so you don't have to fork (though you can of course)
<SpamapS> m_3: I'm still not convinced that having a github mirror is a good thing. Just confuses people when they're ready to start doing real charm store dev.
<SpamapS> m_3: think how hard its going to be for somebody who forks them on github to actually get that change back in to LP
<m_3> SpamapS: true, and we haven't had that much call from users to have them hosted on github
<m_3> SpamapS: pull requests were gonna be manual until we decided if it was worth it
<m_3> I'm happy to just turn them into redirects whenever people think we should
<m_3> happy with that being a ~charmers decision... and not just mine
<Urocyon> sure, just trying to get the swing of launchpad here.
<m_3> Urocyon: oh right... we're just off on a tangent :)
<SpamapS> *DOH* .. just noticed that txzookeeper's packaging states MIT/X/Expat but txzookeeper is LGPL3
 * SpamapS is undaunted by the Debian NEW rejection.
<Urocyon> when contributing, is there an official way your hooks should be written? i.e. write hooks in bash.
<SpamapS> Urocyon: nope
<SpamapS> Urocyon: thats intentional. Write them how you want. :)
<SpamapS> wow unstacking is *slow*
<SpamapS> Just realized its still going.. rabbitmq-server
<m_3> I saw something about cycles in stacks being a problem... hope we haven't created any of them
<SpamapS> m_3: the problem with unstacking is that now each branch gets bigger for launchpad to deal with. But thats ok in this instance where everything is just all screwed up
<Urocyon> SpamapS: hmmm, just getting my feet wet, I see what you mean about decoupling provision and event handling...
<SpamapS> Urocyon: yeah, it comes from the goal being to take advantage of the cloud's magic ability to give you lots of shiny clean servers. :)
<SpamapS> Urocyon: but there's value in being able to use juju on dirty old servers too. :)
 * SpamapS wonders if that is a euphemism for waiters in tiki lounges
 * SpamapS heads to lunch
<Urocyon> SpamapS: well I enjoy taking advantage of the magic shiny clean servers, I just am doing it with chef at the moment.   Most of the charms it seems overlap with chef though - with a focus on configuring a service.
<Urocyon> I have chef to do that.  I just want to better manage relationships and add on event handling.
<Urocyon> SpamapS: just seems like I'm not getting as much use out of the existing charms that way.
<hazmat> SpamapS, yeah.. it basically has to copy the history back
<imbrandon> SpamapS / m_3 : wait , did i miss something , i thought the plan was to move in the other direction ?
<hazmat> SpamapS, ugh.. i thought i'd fixed that ..
<SpamapS> imbrandon: whats the other direction?
<SpamapS> hazmat: you thought you had fixed what?
<hazmat> SpamapS, the licensing stuff in txzk
<SpamapS> hazmat: oneiric branches should all be fixed now
<hazmat> SpamapS, sweet
<SpamapS> hazmat: Yeah I think you fixed it in trunk, but I didn't fix it in the packaging
<SpamapS> hazmat: so it failed NEW review in Debian. :-P
<SpamapS> I'll fix it. Should have double checked anyway
<hazmat> SpamapS, james_w, m_3 thanks for fixing those branches
<SpamapS> hazmat: hey did you say you have sup working with ruby 1.9 ?
<hazmat> SpamapS, yup
<SpamapS> hazmat: I had to make some.. evil symlinks to get it to start up
<hazmat> SpamapS, i just run it from git .. with a -Ilib bin/sup and alias
<SpamapS> hazmat: Ok so "run it as crack"
<SpamapS> good idea :)
<SpamapS> hazmat: I'm running from an old git snapshot that I patched to work with gpgme2
<hazmat> SpamapS, well i'm on stable branch.. so not quite
<hazmat> SpamapS, yeah.. that integration seems a little borked still to me
<hazmat> i've got the gem installed, but its not detecting it.. haven't really investigated
<SpamapS> hazmat: it is. Its just useful for reading encrypted and verifying signed messages. Sending, I still just pipe to clearsign
<hazmat> SpamapS, by crack you mean running mostly unmaintained stuff like sup ;-)
<SpamapS> hazmat: gpgme2 has an entirely different API and sup has never been updated to use it
<hazmat> gotcha
<SpamapS> hazmat: I keep meaning to try heliotrope
<hazmat> SpamapS, i'm going to bite the bullet and do gmail
<hazmat> i like sup.. but having my email on all my devices and easy to check from anywhere is a huge win still
<SpamapS> hazmat: having *access* on all your devices you mean. ;)
<hazmat> SpamapS, fair enough
<SpamapS> hazmat: I have to agree though. The web interface I do have is basically total crap.
<hazmat> SpamapS, only downside is loss of gpg
<james_w> I thought about setting up heliotrope the other day, but it looked a bit too rough
<james_w> e.g. I didn't see a mention of authentication
<SpamapS> Perhaps the sup guy just gave up and started using gmail too? ;)
<james_w> he doesn't appear to have committed for a month :-)
<hazmat> or maybe its just done ;-)
<SpamapS> Software that is "done" is so boring ;)
<imbrandon> software is never done
<SpamapS> wooot! txaws in Debian unstable. :)
<imbrandon> sweet
<nathwill> is there any way to inspect relation settings similar to the config settings accessed with juju get service? something like juju relation-get service1 service2 ?
<nathwill> i know you can see them being set in debug-log, but not aware of a more ondemand kind of way to inspect...
<SpamapS> nathwill: yes you can inspect all of them
<SpamapS> nathwill: you'll need to tell relation-get where to find them, and which context you mean
<SpamapS> nathwill: so you need to pass -s /path/to/agent/socket and -r relation:id
<SpamapS> nathwill: you can get the relation:id from the 'relation-ids' command
<nathwill> SpamapS, this is inside a hook?
<SpamapS> nathwill: if you're already inside a hook, don't need to pass -s
<nathwill> right. ok...
<SpamapS> nathwill: and if you're in a *relation* hook, then you don't even need to pass -r
<nathwill> um, how can i find the socket?
<SpamapS> nathwill: lsof works :)
<SpamapS> python  4597 root   11u  unix 0xffff880065374000      0t0  10917 /var/lib/juju/units/vivo-1/.juju.hookcli.sock
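Put together, the inspection flow SpamapS describes looks roughly like this. The unit name `vivo-1` and relation name `db` are placeholders, and the juju commands in comments are shown as described above, not re-verified here:

```shell
#!/bin/sh
# Pull the agent socket path out of lsof output: it is the last field of
# the line mentioning .juju.hookcli.sock.
find_hook_sock() {
  awk '/\.juju\.hookcli\.sock/ {print $NF; exit}'
}

# On the unit's machine you would then run (outside a hook, -s is required):
#   SOCK=$(sudo lsof -U | find_hook_sock)
#   relation-ids -s "$SOCK" db           # prints relation ids like db:0
#   relation-get -s "$SOCK" -r db:0 -    # dump the remote unit's settings

# Demonstration against the sample lsof line from the log;
# prints: /var/lib/juju/units/vivo-1/.juju.hookcli.sock
printf '%s\n' 'python  4597 root   11u  unix 0xffff880065374000      0t0  10917 /var/lib/juju/units/vivo-1/.juju.hookcli.sock' | find_hook_sock
```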
<nathwill> haha, ok. sugarless solutions work :)
<SpamapS> hmm, dunno how to set JUJU_CLIENT_ID actually
<jimbaker> nathwill, an alternative is to use jitsu run-as-hook
<SpamapS> nathwill: you can also use 'jitsu run-as-hook'
<SpamapS> JUJUJITSUJINX
<jimbaker> ;)
<SpamapS> say that 5 times fast :)
<nathwill> jitsu you say... i've heard about it, guess i'll have to go fiddle w/ it
<imbrandon> heh
<jimbaker> SpamapS, i gave up just saying juju-jitsu
<SpamapS> hrm looks like 0.6 never made it to the PPA
<jimbaker> nathwill, jitsu run-as-hook is good for inspecting settings. so you can do stuff like $ jitsu run-as-hook mysql/0 relation-get -r db:0 -
<jimbaker> and if you had a service unit of mysql/0 with a relation of relation id of db:0, you would get those settings
<SpamapS> ahh right, they need to update launchpad so I can use '{latest-tag}' in bzr recipes
<nathwill> k, thanks folks. i'm trying to ascertain the best way of setting a password for a default user in owncloud.
<nathwill> owncloud doesn't separate the db config from the default admin config like WP
<jimbaker> you could do $ jitsu run-as-hook mysql/0 relation-ids db to get a relation id; juju status of course to get the service unit you wish to inspect; etc
<jimbaker> where again i'm just using some standard examples like db for a relation name
<nathwill> so there's no automagical way to hide the db business if trying to use remote mysql host... i've got some ideas, just want to look at what the users are going to have to go through to get the pwd out of the settings
<jimbaker> nathwill, yes, the flip side is that this stuff is in ZK
<nathwill> i'm almost thinking a config option is best, and forget the autogenerating bit
<SpamapS> nathwill: this is a common enough problem, I think we need to solve this in juju core. type: password would be cool, where it just generates a random password on the first deploy, and then you can 'config get' to see it
<nathwill> SpamapS, that would be amazing.
<SpamapS> nathwill: another option is to just put it somewhere that the admin can access it through juju ssh
<nathwill> yeah, i thought about writing it to a tightly perm'd file... it just feels kludgy to do that type of thing
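A minimal sketch of that tightly-perm'd-file workaround: generate the password once, keep it in a 0600 file so repeated hook runs are idempotent and the admin can read it back over `juju ssh`. The file path is illustrative, not a juju or owncloud convention:

```shell
#!/bin/sh
# Generate-once admin password kept in a tightly perm'd file.
# PWFILE is a hypothetical location; a real charm would use something
# persistent like a file under /var/lib and root-only ownership.
PWFILE=${PWFILE:-${TMPDIR:-/tmp}/owncloud-admin.passwd}

if [ ! -f "$PWFILE" ]; then
  umask 077                                          # file is created 0600
  tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16 > "$PWFILE"
fi
cat "$PWFILE"   # admin retrieves it with e.g.: juju ssh owncloud/0 sudo cat <path>
```

Because the generation step is skipped whenever the file already exists, re-running the hook never rotates the password out from under the deployed app.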
<SpamapS> I think juju needs more ways of feeding data back from charm->admin though
<SpamapS> debug-log is a really awful way of doing it
<nathwill> maybe a config-set option to create config parameters during hook execution?
<SpamapS> config-set has come up a lot
<SpamapS> and its my favorite option
<SpamapS> because then charms can generate passwords that they know will work with the intended software
 * nathwill nods
<hazmat> as long any hook set values are namespaced away from service config, sounds good
<hazmat> SpamapS, nathwill its basically 'constant' as i recall
<hazmat> have a good weekend folks
<nathwill> ditto hazmat
<SpamapS> aye you too hazmat
<bkerensa> nathwill: someday you will be a jedi :)
<imbrandon> nice, drizzel + nginx natively talking to it + nodejs taking the json output and turning it pretty with express templates
<imbrandon> i think i might have found a reason to give up php
<nathwill> oh dear. another one drank the koolaid
<imbrandon> heh the only new thing in there that i havent been useing is express templates ( well and drizzel but right now its the same as mysql in my setup )
<imbrandon> i just like the fact that nginx talks directly to drizzle/mysql, i dont mean proxying to it, i mean talks to it, then spits out csv/json/etc
<imbrandon> :)
<imbrandon> oh and does it in a non-blocking way so the eventloop keeps going , meaning 10k+ connections and not blink
<nathwill> i was referring to the nodejs koolaid. the other business sounds cool
<imbrandon> ahh well js koolaid maybe, node though is fskin FAST, only thing i've seen close to it is nginx and it still cant keep up
<imbrandon> nathwill: trust me i was right with you about js/nodejs/jsp crap until about ohhhhhh 2 months ago, then i did something with a co-worker by accident and had to give it a real look again. its a totally diff landscape than 3 years ago, let alone 12 or 15 when i wrote off JS
<imbrandon> :)
<nathwill> imbrandon: i hope you're right. i fear what the js monkeys will do to the web when they have control of the backend. my experience to date has confirmed this; though i admit i have not investigated very deeply
<imbrandon> i dont think its a good app platform, but its a good async utility / api one if each bit is kept self contained
<nathwill> imbrandon, that makes sense.
<imbrandon> oh and did i say fast
<imbrandon> lol
<imbrandon> i was pulling sql rows with select * limit 0, 1000 earlier then spit it out as json and ran siege on it just for an unscientific test
<imbrandon> and my cpu gave out before i could make it cause errors in the requests
<imbrandon> and that was just with like 5 minutes and no optimizations etc etc
<imbrandon> curious to see how the disk io is on it tho
<imbrandon> something to try out the next few days
<imbrandon> but i'm fairly sure all my php REST stuff is going to be replaced with small node apps over the next few months the way it looks
<imbrandon> on the back end at least
<imbrandon> pair that with nginx rev proxing all that apps from diff ports into one interface and microcache it , might be onto something big there :)
<imbrandon> SpamapS: you realize that nginx will even act as a mongo or mysql proxy , you basically just list the upstreams the same way we did the php-fpms
<imbrandon> this thing is starting to become a hammer and everyones got nails i need to pound on
<imbrandon> i'm starting to think nginx is a tcp proxy first and httpd second
<nathwill> we use nginx for proxying imap. works pretty well
<SpamapS> imbrandon: it doesn't really grok the mysql protocol tho
<SpamapS> imbrandon: also, Drizzle has an HTTP+JSON plugin, so you don't really need nginx to translate
#juju 2012-05-26
<imbrandon> well the translation was not that biggie
<imbrandon> but the poling and proxying was
<imbrandon> and it groks it enough to do sql queries for r/w
<SpamapS> imbrandon: I think I'd rather just have drizzle do the HTTP/JSON
<imbrandon> server {
<imbrandon> location /mysql {
<imbrandon> set $my_sql 'select * from cats';
<imbrandon> drizzle_query $my_sql;
<imbrandon> drizzle_pass backend;
<imbrandon> }
<imbrandon> e.g. no middle man like php etc , just nginx and database for some things, even does escaping of user input etc
<imbrandon> and is non-blocking
<imbrandon> but yea its not for everything, but i can soo see a few uses for this and with a little creative thinking some really cool stuff
<SpamapS> imbrandon: nginx is the middle man there. You can just do an HTTP target with the HTTP+JSON plugin
<imbrandon> SpamapS: sure then you lose the cluster though
<imbrandon> that you may have behind backend
<SpamapS> imbrandon: also I'm not sure I like the direct DB access anyway. SOA arches tend to evolve more sanely.. and you want to be able to do things like insert a cache there.
<imbrandon> and also its for mysql and mongo and all not just drizzle
<imbrandon> thats exactly what i was doing with it, looking up a cache hit, if not send to php , if php puts it in mongo then next round nginx gets from mongo and no php hit
<imbrandon> but i'm sure there is soooo much more it could do with a bit of thought
<imbrandon> https://github.com/simpl/ngx_mongo
<imbrandon> SpamapS: ^^
<imbrandon> drizzle is kinda a bad example with the json as it does it already, but see how utterly simple that makes db clustering
<imbrandon> well simple clustering
<imbrandon> and can use some nifty sql tricks in locations too
<imbrandon> :)
<imbrandon> hell make it understand python and your nginx/wsgi/webpy would all become one thing :)
<imbrandon> lol
<SpamapS> imbrandon: you're.. caching.. in mongo?!
<SpamapS> wtf?!
<SpamapS> imbrandon: *drizzLE*
<imbrandon> full page caching just as a poc testing out mongo
<imbrandon> not like for real
<SpamapS> imbrandon: we have this thing.. memcached.. its like.. awesome. :)
<imbrandon> hahah yup and redis even better
<imbrandon> ( for that )
<SpamapS> imbrandon: and membase for when you want really long lived cache items. :)
<imbrandon> never tried that one
<SpamapS> its memcachedb but not t3h suck
<SpamapS> anyway, time to sign off for the weekend
<imbrandon> but yea, it was just a proof of concept thing, trying out all this new stuff i found
<imbrandon> cool cool , later
<SpamapS> imbrandon: sometimes I'm jealous that you get to work on actual scaling architectures and not just pretend to work on them in a metaverse like I do. :-P
<imbrandon> heheh trust me , i'll drag you in enough ;)
<imbrandon> i start getting rich and might even have to bribe ya too
<imbrandon> :)
<imbrandon> seriously tho, enjoy the weekend, i'll be round i'm sure
<SpamapS> woot! txzookeeper accepted in Debian
<m_3> SpamapS: awesome
<SpamapS> indeed
<SpamapS> juju should be next
<_mup_> Bug #1004937 was filed: juju should provide more useful error when no ssh keys avaliable <juju:New> < https://launchpad.net/bugs/1004937 >
#juju 2012-05-27
<nathwill> if i make a merge proposal to update an existing charm, does the merge proposal stand alone, or should i open a bug as well?
<marcoceppi> nathwill: it will stand alone and should show up in the review-queue
<nathwill> ok, cool. thanks marcoceppi
<nathwill> of course right after doing so, i realize that i overlooked something o_O
<marcoceppi> :D
<nathwill> i got the db portion (users, etc) for owncloud setup so that units can share a backend, but... owncloud also needs to share files in this config, so now i need to add an nfs backend :P
<marcoceppi> nathwill: can the charm just re-create those config files for each unit?
<nathwill> marcoceppi, it's the user data like uploaded files, calendars etc that's problematic. i've already got the config-file re-creation taken care of
<marcoceppi> nathwill: ah, yeah you'll want to implement the "shared-fs: mount" relation: interface
<nathwill> k. i put it down as "storage: \n interface: mount", is shared-fs an established name?
<marcoceppi> shared-fs is an established name
<nathwill> k. i'll switch it up then :)
<marcoceppi> all the shared filesystem charms should be using it
<marcoceppi> sooner or later ;)
<nathwill> makes sense.
<nathwill> thanks for the heads-up
<stayarrr> Hi all, I get an error while trying juju on ubuntu 12.04 lts
<stayarrr> I installed everything for local dev
<marcoceppi> stayarrr: what's the error?
<stayarrr> and if I try to execute juju bootstrap
<stayarrr> i get
<stayarrr> internal error Network is already in use by interface virbr0
<stayarrr> Do you know how I can reset virbr0 interface?
<marcoceppi> stayarrr: could you run `groups` and paste the output?
<stayarrr> marcoceppi: oliver adm cdrom sudo dip plugdev lpadmin sambashare
<marcoceppi> stayarrr: the problem is you're not in the libvirtd group, http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage covers the details of how to fix this (See "error: Failed to start network default"); however, you just need to run the following:
<marcoceppi> sudo usermod -aG libvirtd oliver
<marcoceppi> newgrp libvirtd
<marcoceppi> then try to juju bootstrap
<marcoceppi> newgrp will make it so you don't have to log out then back in again for the group modification to take place. Though it'll only affect the terminal window you run it in. Each new terminal window will need to execute that command until you restart or log out/back in again
<stayarrr> ok, thx
<stayarrr> as i know, the bootstrap exec should be finished or is it a daemon?
<stayarrr> I ran juju bootstrap
<marcoceppi> stayarrr: bootstrap command will finish almost instantly, however it does several things in the background that can take up to 20 minutes to complete depending on your internet connection
<stayarrr> output: 2012-05-27 10:04:02,477 INFO Bootstrapping environment 'local' (origin: distro type: local)...
<stayarrr> 2012-05-27 10:04:02,478 INFO Checking for required packages...
<stayarrr> 2012-05-27 10:04:03,525 INFO Starting networking...
<stayarrr> okay...
<stayarrr> thx so far
<marcoceppi> It should say bootstrap finished
<marcoceppi> if it doesn't then the bootstrap is hanging somewhere'
<marcoceppi> stayarrr: http://paste.ubuntu.com/1009298/
<stayarrr> hm... okay aborts without feedback after STARTINg Networking
<marcoceppi> stayarrr: try running juju -v bootstrap
<marcoceppi> for more verbose output
<stayarrr> marcoceppi: no output so far, hanging on INFO Starting networking...
<marcoceppi> interesting.
<marcoceppi> Do you have any KVMs or Virtual Machines setup on your computer?
<stayarrr> marcoceppi, this OS runs in a VM
<stayarrr> if it helps
<marcoceppi> Ah, that might be the issue. What type of virtualization?
<stayarrr> a standard hardware virt.?
<stayarrr> I run ubuntu in vmware, installed via an iso; its a completely virtualized os with virt. hardware config
<stayarrr> network is NAT
<stayarrr> marcoceppi: Okay, I tried several (maybe all) vm ware configs, seems not working. I found this bug-report: https://bugs.launchpad.net/juju/+bug/920454
<_mup_> Bug #920454: juju bootstrap hangs for local environment under precise on vmware <local> <juju:New> < https://launchpad.net/bugs/920454 >
<anos> i have a question about load average?
<SpamapS> stayarrr: I went ahead and marked that bug Confirmed/High. the local provider uses libvirt's networking, and I would suspect this is not working well with the vmware guest additions.
<hazmat> stayarrr, could you pastebin ifconfig on the virt machine?
#juju 2013-05-20
<jamespage> mthaddon, hey - do you guys have a python based snippet for install/upgrade hooks for the exec.d/*/charm-pre-install stuff (as added to rabbitmq)
<jamespage> I'm rebasing lp:charms/rabbitmq-server on the python based version we want to land for openstack ha support
 * hazmat notices that haproxy charm extends and breaks the website interface
<mthaddon> jamespage: sure, one sec
<mthaddon> jamespage: http://paste.ubuntu.com/5683534/ is from the postgresql charm (so in this case, run_pre is set to true by default in the function in question)
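The `exec.d/*/charm-pre-install` convention that paste implements can be sketched as below. The charms in question do this in python, so treat this shell version as a behavioural sketch only; `CHARM_DIR` defaults to the current directory for demonstration:

```shell
#!/bin/sh
# Run every executable exec.d/*/charm-pre-install drop-in under a charm
# directory, in glob (sorted) order, before the real install work begins.
run_execd_preinstall() {
  for script in "${1:-.}"/exec.d/*/charm-pre-install; do
    [ -x "$script" ] && "$script"
  done
  true   # an unmatched glob leaves a failed -x as the last status; clear it
}

run_execd_preinstall "${CHARM_DIR:-.}"
```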
<jamespage> mthaddon, thanks
<mthaddon> np
<jcastro> rick_h_: heya, had a look at your blog post
<ahasenack> jcastro: I missed the last charm school, is there a recording somewhere?
<jcastro> yep
<jcastro> one sec I am updating the page as we speak actually. :)
<ahasenack> :)
<jcastro> I'll post the new stuff on the list in ~10min
<ahasenack> ok, thanks
<jcastro> marcoceppi: actually I need that youtube link of your charm school again. :)
<marcoceppi> jcastro: np
<marcoceppi> jcastro: http://www.youtube.com/watch?v=yRcqSjOGweo
<jcastro> ahasenack: ^^
<ahasenack> jcastro: thanks
<rick_h_> jcastro: cool, I've not looked at it in a couple of days. Plan on revisiting to try to simplify and shorten a bit.
<rick_h_> jcastro: any feedback appreciated.
<bbcmicrocomputer> do we have any PostgreSQL charm maintainers around, the charm store PostgreSQL charm appears to be undeployable?
<bbcmicrocomputer> bug #1164202
<_mup_> Bug #1164202: Postgresql charm hangs in "pending" during installation <postgresql (Juju Charms Collection):New> <https://launchpad.net/bugs/1164202>
<bbcmicrocomputer> looks like it may have been in this state for a while
<marcoceppi> bbcmicrocomputer: I've deployed the postgresql charm "recently" without issue
 * marcoceppi tries again
<marcoceppi> bbcmicrocomputer: what version of Juju are you using?
<bbcmicrocomputer> marcoceppi: ah, not ppa
<bbcmicrocomputer> marcoceppi: you're going to say I need ppa right?
<marcoceppi> bbcmicrocomputer: no, I just want parameters to recreate the incident :)
<bbcmicrocomputer> marcoceppi: ah, standard package version, EC2
<marcoceppi> Are you using pyjuju or gojuju?
<bbcmicrocomputer> marcoceppi: PyJuju
<marcoceppi> cool, let me switch and give it a shot
<bbcmicrocomputer> marcoceppi: thanks
<bbcmicrocomputer> marcoceppi: I always get the error - 'Hook error: /var/lib/juju/units/postgresql-0/charm/hooks/install The relation 'replication' has no unit state for 'postgresql/0''
<marcoceppi> bbcmicrocomputer: It deploys and moves to started on gojuju, still waiting for my pyjuju environment to finish. See if I can't patch it really quickly
<bbcmicrocomputer> marcoceppi: same error?
<marcoceppi> bbcmicrocomputer: it doesn't error out with juju-core 1.10.0, still waiting for aws to give me a machine so I can verify with juju 0.7
<marcoceppi> bbcmicrocomputer: It works in juju 0.7 as well. Moves to started. The log is filled with "Ignoring partially constructed relation: replication:0" but I don't get any errors
<bbcmicrocomputer> marcoceppi: ok, that's mildly annoying :)
<marcoceppi> Could be something that was fixed between 0.6 and 0.7 (if you're not using ppa)
<bbcmicrocomputer> marcoceppi: yep, I'll give the ppa a try
<bbcmicrocomputer> marcoceppi: thanks
<marcoceppi> bbcmicrocomputer: hopefully that'll sort that
<bbcmicrocomputer> marcoceppi: is our official support status ppa only now?
<marcoceppi> Uh, I don't think we have an official support status. To my knowledge we've always told people use the PPA and now we're kind of herding people in the juju-core direction
<marcoceppi> Given that's where the development focus is
<JoseeAntonioR> hey guys! I don't know if any of you would like to give a Juju session for OpenWeek tomorrow, at 13 UTC
<jcastro> our official support status is what's in distro for 13.04
<jcastro> 12.04 is pending an SRU for -core
<jcastro> we need to sort that actually
<bbcmicrocomputer> marcoceppi: ok, yep, ppa solved it, thanks
<jcastro> Daviey: heya, would now be a good time to bother you about SRUing juju core?
<marcoceppi> bbcmicrocomputer: np, glad it worked out
<Daviey> jcastro: no, in 1hr would be better
<jcastro> ack
<hazmat> bbcmicrocomputer, there was a fix in 0.7 for this
<hazmat> jcastro, given we're doing etc alternatives i think we should sru 0.7 as well (its backwards compatible)
<hazmat> its basically just bug fixes
 * jcastro nods in agreement
<jcastro> evilnickveitch: around?
<Daviey> jcastro: He is on UK time and has a life.
<jcastro> oh sorry, wasn't paying attention to the time
<jcastro> but I see you're around. :)
<evilnickveitch> jcastro, hey, was just getting something to eat
<jcastro> no worries, sent you an email
<evilnickveitch> cool
<jcastro> hey marcoceppi
<marcoceppi> jcastro: hey
<jcastro> any objections to removing drupal6 from the store?
<marcoceppi> nope and nope
<marcoceppi> I can unpromulgate it now if you want
<jcastro> lets do it
<jcastro> Daviey: should I catch you tomorrow wrt. backports to 12.04?
<marcoceppi> jcastro: it should be unpromulgated
<Daviey> jcastro: wfm
<marcoceppi> jcastro: we've got a problem with the unpromulgation. ~charmers doesn't own the branch, lynxman does
<jcastro> :-/
<marcoceppi> I've unpromulgated, so as long as the gui respects what's in the store now it *should* work
<arosales> marcoceppi, jcastro on the drupal charm is there a bug report on the error?
<jcastro> https://bugs.launchpad.net/charms/+source/drupal6/+bug/1181219
<_mup_> Bug #1181219: Install hook fails <drupal6 (Juju Charms Collection):New> <https://launchpad.net/bugs/1181219>
#juju 2013-05-21
<kyhwana> hrm, going through the juju getting started at https://juju.ubuntu.com/docs/getting-started.html in 12.04, trying to use lxc, in "In ~/.juju/environments.yaml, add a section for type "local":" I get error: environment "sample" has an unknown provider type "local" when I run juju bootstrap
<kyhwana> (i've got LXC already setup with network bridging)
<sidnei> kyhwana: the go port (1.x) does not support the local provider yet, you have to use the python version (0.7.x)
<jcastro> Daviey: want to talk juju SRUs with me?
<Daviey> jcastro: no.  But can do.
<jcastro> hah
<jcastro> G+?
<Daviey> jcastro: OK
<Daviey> jcastro: you initiate ?
<jcastro> it's ringing
<Daviey> is it?
<jcastro> mgz: ping
<fdehay> Hi Do you know if juju1.10 supports ec2-uri parameter in environments.yml for ec2 type cloud that are not ec2 (ie cloudstack)?
<mgz> jcastro: hey
<jcastro> heya
<jcastro> so ... let's talk backport/SRU/whatever of everything to LTS
<jcastro> https://news.ycombinator.com/item?id=5739061
<jcastro> users are running into problems
<jcastro> so I'm going to chase down the SRU for maas.
<jcastro> and I'm thinking everything should be 12.04-able
<mgz> so... we can probably take the juju-0.7 and juju-1.10 packages to backports, I don't see the value in working off older versions
<jcastro> I agree
<jcastro> Daviey had some concerns about the update-alternatives
<jcastro> how would it work
<jcastro> is it like the PPA where we'd still default to .7 but allow the switch to 1.10?
<mgz> well, it currently leaves it on juju-0.7 by default
<jcastro> that's fine
<mgz> and requires a user running a command to switch, so should be fine for the install from distro case with pyjuju
<jcastro> that seems fine to me
<Daviey> mgz: I was thinking it might make sense to backport co-installability aswell
<mgz> yeah, we should just base off the raring work
<jcastro> is .7 as an SRU and 1.10 as a backport a good idea? Or stick both in backports?
<mgz> either sounds possible, whatever the distro guys prefer
<jcastro> mgz: do you have time to drive this?
<jcastro> I really have nothing to offer other than "we need to fix this, someone help me oh please god."
<mgz> probably, but I can't do much past the putting a branch up for review, someone else with powers needs to do the kicking
<jcastro> if you can get me a branch and CC me on it, I can hunt down and kick
<fdehay> hi again, can someone help me with the differences between pyjuju and juju-core in terms of functionality coverage?
<fdehay> ie: does juju-core support non-EC2, EC2-compatible clouds? (cloudstack) - I get an error using the ec2-uri parameter in a new environment of type ec2
<mgz> fdehay: don't think anyone has tested cloudstack specifically, but the ec2 and s3 endpoints are hardcoded to aws, yeah
<mgz> if you want to install from source it would be reasonably easy to play with that, at least you can report a bug against juju-core
<fdehay> thanks, we need to use cloudstack (internal cloud for now) so we will test juju using 0.7 for now I suppose
<fdehay> I can try to report a bug against juju-core (never tried before :))
<mgz> you have a launchpad account? just go to https://bugs.launchpad.net/juju-core/+filebug
<fdehay> ok, submitted https://bugs.launchpad.net/juju-core/+bug/1182508 hope it's clear enough :)
<_mup_> Bug #1182508: juju cannot connect to cloudstack (non-ec2 ec2 cloud) <cloudstack> <ec2> <juju-core:New> <https://launchpad.net/bugs/1182508>
<wedgwood> marcoceppi: I remember at UDS we had someone offer to help with charm-helpers refactoring
<wedgwood> marcoceppi: I can't recall who
<marcoceppi> wedgwood: Was it one of the gui guys?
<wedgwood> marcoceppi: may have been. if you don't know either I'll go back to the video
<marcoceppi> wedgwood: Oh, during UDS. yeah, I can't recall but it should be in the video
<wedgwood> marcoceppi: it's in the notes, thankfully. it was bac
<marcoceppi> ah, perfect
<wedgwood> bac: I'm ready to land a pile of changed in charm-helpers if you're interested in reviewing
<wedgwood> s/changed/changes/
<bac> wedgwood: otp.  be with you shortly
<arosales> jcastro, I think roaksoax was motivated after yesterday's chat and wrote up
<arosales> http://www.roaksoax.com/2013/05/getting-started-with-maas-and-juju
<mattyw> in juju-core is there somewhere I can view the whole debug-log? I was supposed to watch my env come up using "juju debug-log" - but forgot and now it's bringing everything up
<mattyw> ^^ /var/log/juju/all-machines.log
<AskUbuntu> How can I see the juju-core debug-log without tailing it? | http://askubuntu.com/q/298273
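The two log-viewing approaches from the exchange above, side by side (the all-machines.log path is quoted from mattyw; treating machine 0 as the bootstrap node is an assumption):

```shell
# Stream log entries live as the environment comes up:
juju debug-log
# Or read the accumulated log after the fact, from the bootstrap node:
juju ssh 0 'cat /var/log/juju/all-machines.log'
```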
<marcoceppi> mattyw: thanks for documenting that on AU!
<mattyw> marcoceppi, no problem, I was sure I'd been told to put stuff like that on AU
<mattyw> marcoceppi, didn't realise there was a bot to put it here - that's cool
<mattyw> AskUbuntu, I guess you're a bot?
<marcoceppi> It is, but it's nice to have it in a "more visible" place, since Google doesn't index irclogs very well
<sarnold> irc's only twenty years old, give them some time..
<jcastro> sarnold: yeah but we also get metrics on AU
<jcastro> so in say, 6 months if that question has 10k views, we know we suck at making that feature visible, etc.
<sarnold> jcastro: heh, maybe, I've viewed a lot of questions just cause they sounded neat. :)
<sarnold> 50 upvotes, yeah.. :)
<jcastro> well then, a mindshare win on top of that, awww yeah!
<mattyw> jcastro, you got a moment?
<jcastro> I'm on a call but I can listen on IRC. :)
<jcastro> hey marcoceppi
<jcastro> with tinytiny rss almost done
<marcoceppi> jcastro: yo
<jcastro> all we're missing is a jabber charm
<jcastro> "Big company shutting down services you use? No worries, I got your back."
<marcoceppi> ejabberd would make a nice charm
<sidnei> jcastro: did you see this one? http://www.holovaty.com/writing/aws-notes/
<sidnei> jcastro: also unrelated, but i'd love to see a mention of juju on http://devopsweekly.com/
<sidnei> marcoceppi: ^
<marcoceppi> sidnei: Yeah, there was a HN article about it https://news.ycombinator.com/item?id=5738252 jcastro was all over it
<sidnei> marcoceppi: what about the devops weekly one?
<marcoceppi> sidnei: I'm not familiar with them, but it would probably be good to be mentioned on there
<marcoceppi> Might be a good place to get mentioned when we have a stronger puppet/chef story since they seem to favor those a lot
<adam_g> hmm
<adam_g> hazmat, 0.7+bzr628+bzr629~raring1 from PPA is installing an empty package, similar to txzookeeper last week
<hazmat> ugh
<hazmat> adam_g, i don't think that's the issue here though, but noted, i'll take a look
<adam_g> hazmat, nah, i just noticed a raring deployment failed because it was missing the config-get binary
<adam_g> hazmat, actually maybe its not entirely empty (the agent at least is running) but the binaries are missing
<adam_g> bins are going into /usr/lib/juju-0.7+bzr628+bzr629~raring1/bin/ instead of /usr/bin/
<hazmat> looks like mgz tweaked the recipe...
<hazmat> er.. the packaging. http://bazaar.launchpad.net/~juju/ubuntu/raring/juju/0.7/revision/41
<hazmat> adam_g, i'm redoing the build now.. for precise since it failed..
<hazmat> but i might need to revert on the packaging changes as well.. tbd
<adam_g> hazmat, hmph, can't imagine how that packaging change would cause that
<hazmat> adam_g, well it was previously doing 0.7+revno.. so there are some changes in there from the last 8hrs
<hazmat> agreed its strange
<hazmat> greetings KyleMacDonald
<KyleMacDonald> hey dude
<jcastro> marcoceppi: hey so
<marcoceppi> jcastro: yeah?
<jcastro> did the stack problem with removing drupal6 from the store get resolved?
<marcoceppi> jcastro: no, I need lynxman to help out but he's not on IRC
<marcoceppi> Since it's stacked on /his/ branch
<jcastro> anyone have his contact info?
<marcoceppi> jcastro: I don't, but based on his lp account I can assume his jabber is his email. If not I can just open a bug and assign it to him in hopes he gets it
<jcastro> can you fire off an email?
<jcastro> I think he'd just Do It.
<marcoceppi> jcastro: yeah, firing now
<jcastro> <3 thanks
<arosales> marcoceppi, I don't see a drupal bug for the removal for the charm store
<arosales> https://bugs.launchpad.net/charms/precise/+source/drupal6
<arosales> marcoceppi, could you confirm
<marcoceppi> arosales: created
<arosales> marcoceppi, ok, perhaps it is taking some time to propagate to https://bugs.launchpad.net/charms/precise/+source/drupal6
<marcoceppi> arosales: this is the actual link: https://bugs.launchpad.net/charms/+source/drupal6
<arosales> ah ok so not specific to the precise branch
<marcoceppi> the specific precise branch is "gone" due to unpromulgation, but it's still "technically" there because of the stacking issue
<arosales> marcoceppi, gotcha. perhaps going forward we make the bug report and give x time for the maintainer to respond before pulling?
<wedgwood> marcoceppi: bac: https://code.launchpad.net/~mew/charm-helpers/refactor-to-core/+merge/164980
<marcoceppi> arosales: ack, though there are outstanding bugs since november ;)
<arosales> marcoceppi, I think jcastro may also propose a similar idea to the list (for folks not here)
<bac> wedgwood: thanks
<wedgwood> it's enormous. for that I am sorry, but the individual commits are reasonable.
<arosales> marcoceppi, true the meta name has been there for some time.
<marcoceppi> wedgwood: I'll take a look as well for <standard peanut gallery> remarks
<wedgwood> marcoceppi: you'll probably be more interested at the next round of commits where I add in the command-line tool
<marcoceppi> wedgwood: definitely
<arosales> marcoceppi, the main idea is to give maintainers a warning and an opportunity to respond.
<marcoceppi> arosales: right, I understand as much
<arosales> marcoceppi, thanks :-)
<marcoceppi> Eager to start the store cleanup :)
<arosales> marcoceppi, its all in the name of quality which is good.  :-) jcastro is tracking this particular work in https://trello.com/c/j2F2zsbY
#juju 2013-05-22
<_mup_> Bug #1182774 was filed: JuJu not found after having installed juju packages. <apt-get> <juju> <juju:New> <https://launchpad.net/bugs/1182774>
<jamespage> could one of the charmers team +1 https://code.launchpad.net/~james-page/charms/precise/ceph/fix-notify-clients/+merge/161835
<jamespage> its quite a trivial fix and resolves a common race condition
<bbcmicrocomputer> jamespage: k, will take a look
<bbcmicrocomputer> jamespage: k, done
<jamespage> bbcmicrocomputer, ta muchly
<jamespage> bbcmicrocomputer, if you are feeling brave - https://code.launchpad.net/~openstack-charmers/charms/precise/mysql/ha-support/+merge/165059 :-)
<bbcmicrocomputer> jamespage: hmm, I think this would take me a day (at least)
<jcastro> marcoceppi: so tldr
<jcastro> jono started with juju
<jcastro> and was on pyju, installed charm-tools
 * jcastro makes explosion sounds
<jono> will file a bug to provide context
<marcoceppi> Okay, well I woke up this morning, upgraded my computer, and now pyjuju has disappeared from update-alternatives for juju
<marcoceppi> But that has nothing to do with charm-tools
 * marcoceppi awaits bug
<jcastro> hey so from your email, it seems you changed the region after you had deployed something?
 * jcastro will just wait for the detail in the bug
<marcoceppi> jcastro: OH HE INSTALLED CHARM-TOOLS and that depends on pyjuju and everything borkd?
<jcastro> yeah, I believe that's the case
<jcastro> he had installed goju but I don't think he did the update-alternatives
<jcastro> and from looking at the log he was using pyju the whole time
<jcastro> unless we added timestamps to goju recently?
<jcastro> the console output I mean
<marcoceppi> no, only pyjuju does console timestamps
<jcastro> right
<jcastro> that awkward phase, where we still tell people about pyju in the docs.
<marcoceppi> So if you install juju-core, then juju-0.7, update alternatives re-configures to use juju-0.7 as it has higher weight in update-alternatives
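The switching marcoceppi describes is driven by update-alternatives; a sketch of inspecting and changing the selection (this assumes both packages register a "juju" alternative, as the packaging discussed above does):

```shell
# Show which implementation /usr/bin/juju currently resolves to,
# and the priorities of the registered alternatives:
update-alternatives --display juju
# Interactively pick juju-core over the higher-weighted juju-0.7:
sudo update-alternatives --config juju
```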
<jcastro> marcoceppi: this is why I pray for the local provider every day
<jcastro> yeah
<jcastro> what is unexplained is how he was working fine until he installed charm-tools
<marcoceppi> jcastro: we really just need to change charm-tools to suggest juju, it's not a dependency by any means to run
 * marcoceppi double checks packaging
<jono> https://bugs.launchpad.net/ubuntu/+source/charm-tools/+bug/1182905
<_mup_> Bug #1182905: charm-tools requires extra environments.yaml config <apport-bug> <i386> <saucy> <charm-tools (Ubuntu):New> <https://launchpad.net/bugs/1182905>
<jono> jcastro, why doesn't juju-core just default to using the go version?
<jono> brb
<jcastro> because the go version doesn't have the local provider yet
<jcastro> and it should, I suspect you had juju installed already and installing core installed goju, but by default we don't switch you from pyju to goju unless you are explicit about it
<jcastro> the ideal is of course, we have a local provider ready for goju and that removes like all the complications you just had, other than the -tools thing
<jono> jcastro, dude, as I have said twice, I didn't have it installed :-)
<jono> this is a new machine running saucy, I haven't installed juju on it before
<marcoceppi> jcastro: based on the bug, it looks like the flow was "Install juju-core, deploy stuff, install charm-tools, charm-tools magically installs juju (0.7), try to deploy some more, juju 0.7 envs not compat with juju-core, weep"
<ahasenack> yeah, if I try to install charm-tools here, I get
<ahasenack> The following NEW packages will be installed:
<ahasenack>   charm-tools juju juju-0.7 mr python-cheetah
<ahasenack> I don't even have a juju-core package installed, I use the trunk built version in GOPATH
<marcoceppi> ahasenack: yeah, charm-tools recommends Juju
<ahasenack> marcoceppi is right, and then it defaults to pyjuju, and env file and actual environment is not compatible
<jono> evilnickveitch, started using your docs by the way
<ahasenack> does charm tools work with gojuju?
<marcoceppi> I can move juju from Recommends to Suggests, but it really *does* recommend juju, it just needs to recommend juju-core instead
<jono> evilnickveitch, I am testing juju by using them, is there a place you want me to file docs bugs/
<jono> ?
<ahasenack> I don't even remember what it is
<marcoceppi> ahasenack: charm-tools doesn't rely on juju at all, so yes :)
<evilnickveitch> jono, cool - for the moment just email me
<marcoceppi> I'm going to just add juju-core | juju to the recommends, should resolve this for future users
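The packaging change marcoceppi describes would look roughly like this in charm-tools' debian/control (illustrative fragment; the surrounding fields are elided):

```
Package: charm-tools
Recommends: juju-core | juju
```

With an alternation like this, apt satisfies the recommendation with whichever of the two is already installed, so installing charm-tools on a juju-core machine no longer drags in juju-0.7.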
<evilnickveitch> jono - and don't listen to anything the constraints section says, it's all wrong
<jono> evilnickveitch, ok
<jono> thanks
<evilnickveitch> jono - btw are you checking out the branch or using the online version?
<jcastro> marcoceppi: <nod> that sounds like a plan
<jono> evilnickveitch, online version
<evilnickveitch> jono, ok, well, it is currently updating quite often, so refresh before you flame me :)
<jono> evilnickveitch, np :-)
<jcastro> marcoceppi: hey, moving forward maybe we should test the charm-tools/juju installation stuff in a container or something
<marcoceppi> jcastro: we can make it part of charm-testing since we have the jenkins infrastructure already
<marcoceppi> err, charmtester*
<jcastro> yeah
<marcoceppi> Could also push it in to qa.ubuntu.com
<marcoceppi> Though we have plenty of good testers already like jono :P
<jono> not sure I am a good tester
<jono> but I am a tester :-)
<jcastro> you find things that advanced users won't
<jcastro> like I suspect your findings will be different from what say, kapil would find.
<jcastro> evilnickveitch: I send mattyw your way wrt. doc inline videos
<marcoceppi> jcastro: I've patched it in the ppa, waiting for it to be merged for saucy
<evilnickveitch> jcastro, yeah, I saw that thanks :)
<evilnickveitch> another sucker... er, I mean helper...
<mattyw> evilnickveitch, jcastro I plan to do a test video at some point over the next few days to see if it's good enough
<evilnickveitch> mattyw - cool! I'll mail you the specs we are using and a suggestion for something to record :)
<jcastro> yeah, make the test video count for something. :)
<nextrevision> is there a way for a charm to set config values when running a hook, say config-changed?
<marcoceppi> nextrevision: no, charms can't set configuration, only "humans" can
<nextrevision> marcoceppi: so then best practice around providing, say a 'port' and 'ssl' config option, would be to trust the user to set the port to '443' and 'ssl' to true
<marcoceppi> nextrevision: You can just have them set "ssl" to true then have the charm make an opinion on what the port should be, so if ssl is true, port will just be 443 (then remove port from configuration)
<nextrevision> marcoceppi: ok, have you found that it is better to include a port configuration parameter or to leave that logic up to the charm itself? has there been much demand for non default port configuration settings?
<marcoceppi> nextrevision: depends on the service, I think it's expected most web services will run on 80/443 but it really depends on what people's experience is when setting up a service. Some services default to things like 8080, etc
<nextrevision> marcoceppi: ok sounds good, thanks!
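The opinionated approach marcoceppi suggests can be sketched as a tiny hook helper: expose only an "ssl" option and let the charm decide the port, instead of trusting the user to keep separate "port" and "ssl" options consistent. `choose_port` is a hypothetical helper; `config-get` and `open-port` are the real juju hook tools.

```shell
#!/bin/sh
# choose_port: map the charm's ssl option to an opinionated port.
choose_port() {
    # $1 = value of the charm's ssl option ("true" or "false")
    if [ "$1" = "true" ]; then
        echo 443
    else
        echo 80
    fi
}

# In hooks/config-changed this would be driven by the hook tools:
#   port=$(choose_port "$(config-get ssl)")
#   open-port "${port}/tcp"
choose_port true    # prints 443
```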
<marcoceppi> ahasenack: did you install charm-tools yet?
<ahasenack> marcoceppi: no
<marcoceppi> Cool
<mattyw> jcastro, what happens in the weekly charmers meeting?
<jcastro> status updates, syncing with charm work
<stub> juju packaging seems to have gone wonky. pyjuju no longer gets installed to /usr/bin/juju (which might just be update-alternatives problems), and gojuju now just dumps itself into /usr/bin/juju so the docs on https://juju.ubuntu.com/get-started/ are now out of date
<marcoceppi> stub: yeah, I just noticed that this morning. Not sure what broke it or where to report it.
<stub> Not to be confused with the other getting started docs at https://juju.ubuntu.com/docs/getting-started.html ;)
<jcastro> Weekly Charm meeting hangout will be here: https://plus.google.com/hangouts/_/d0ea0bcfb82ba886057c147953ed10eaabff6831?authuser=0&hl=en
<jcastro> arosales: marcoceppi m_3 mattyw and anyone else ^^
<arosales> jcastro, thanks
<mattyw> jcastro, arosales  where can I find the notes? / agenda
<arosales> mattyw http://pad.ubuntu.com/7mf2jvKXNa
<mattyw> arosales, thanks
<arosales> mattyw, sure, np. We'll note that next time to paste with the hangout URL.
<mattyw> marcoceppi, https://code.launchpad.net/~mattyw/charms/precise/mongodb/auth_experiment/+merge/162887
<sinzui> hazmat, I think juju 0.7+bzr628+bzr630~raring1 is on crack. I got the update and now have no commands. Well I do, they are in /usr/lib/juju-0.7+bzr628+bzr629~raring1/bin. How do you propose I solve this? downgrade?
<mgz> that's not good...
 * sinzui could change PATH
<mgz> what happens if you run update-alternatives?
<mgz> I'd expect the install to have borked on the version, rather than happening then not linking
<mgz> I did keep that stuff out of the ppa semi-deliberately, but it was asked for...
<mgz> probably need to look at the scripts again
<mgz> looking at the apt log might be enlightening
<hazmat> mgz, it's been crack for two days.. it looks like the packaging changed?
<hazmat> mgz, mostly atm i see intermittent test failures on the builders, but not able to reproduce locally
<marcoceppi> hazmat: I'm in the process of writing a bug report for it, since my install is "broken" as well
<sinzui> I just downgraded to raring's universe package.
 * sinzui hopes lxc-ls still works with this version
<marcoceppi> Actually, there's already a bug: https://bugs.launchpad.net/juju/+bug/1182774
<_mup_> Bug #1182774: JuJu not found after having installed juju packages. <apt-get> <juju> <juju:New> <https://launchpad.net/bugs/1182774>
<hazmat> marcoceppi, sinzui origin back to distro ?
<sinzui> I did switch back to distro to get a working juju
<marcoceppi> hazmat: I've just been using absolute path
<mgz> m_3 asked why the ppa wasn't using update-alternatives, which is because I didn't want the ppa actually tracking a branch daily any more... but I switched it, and apparently it also breaks things
<mgz> was trying to fix the breakage from hazmat applying my distropatch on the branch
<hazmat> mgz, where it should have been..
<mgz> I don't really agree, but it was too late at that point anyway
<hazmat> marcoceppi, sinzui other option is specify the branch directly
<hazmat> for origin, till the ppa is working again
<sinzui> thanks hazmat, I locked to old version. I'll unlock when I see a fixed version
<koolhead17> jcastro, grrrrrrrrr. you never replied back to me
<koolhead17> :(
<jcastro> hmm?
<jcastro> oh, wrt. that openstack guy?
<jcastro> so I responded to him with like 3 emails of options for him
<jcastro> but he never responded. :-/
<koolhead17> jcastro, haha. ok
<koolhead17> Daviey, poke poke
<koolhead17> did you see my email
<Daviey> koolhead17: hey
<Daviey> koolhead17: i did indeed.
<Daviey> And i fully agree
<koolhead17> Daviey, well you want me to write a big RANT if you think that will work
<koolhead17> am telling you clock is ticking
<Daviey> tick tock
<Daviey> i know :)
<koolhead17> if that will put some noise in ear of our big bosses in canonical
<koolhead17> grrrrrrrr
<koolhead17> i will be most unhappy to see all our hard work going down the drain, that's it
<koolhead17> :P
<jono> jcastro, filing more bugs:
<jono> https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1183109
<_mup_> Bug #1183109: 'juju destroy-environment' missing from 'juju help' <apport-bug> <i386> <saucy> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1183109>
<jono> https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1183110
<_mup_> Bug #1183110: Handshake error when first bootstrapping with AWS <apport-bug> <i386> <saucy> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1183110>
<jcastro> yeah!
<thumper> hi jcastro
<jcastro> thumper: heya
 * thumper looks for jono too
<jcastro> jono's done like 6 bugs today dude, you will be busy!
<thumper> jcastro, jono: did you two want to have a quick hangout?
<jono> most of it small fry stuff I think :-)
<jcastro> I am literally out the door in 5
<thumper> jcastro: np
<jcastro> I just wanted you to be aware of the incoming papercuts
<thumper> :)
<marcoceppi> jono: your second one might be related to https://bugs.launchpad.net/juju-core/+bug/1042106
<_mup_> Bug #1042106: environs/ec2: certain operations should be retried if possible <juju-core:Confirmed> <https://launchpad.net/bugs/1042106>
<thumper> I've not actually looked at the bug list for some time
<thumper> probably should
<jono> very possibly
<jcastro> that sounds like a good idea thumper
<jono> marcoceppi,
<thumper> :)
<jcastro> I think in general
<jcastro> when running a status
<thumper> it is very easy to get stuck into new stuff
<jcastro> we suck at telling the user what is up
<thumper> and forget what is there
<jcastro> it's like ... some weird error
<thumper> that I agree with
<jcastro> when we can just say "Deploying the instance and the juju agent, hang tight yo."
<thumper> heh, is there a locale for that?
<thumper> man that would be funny to implement
<jcastro> "Deploying with Juju ... reticulating splines."
<thumper> EN_st for "street english"
<jono> so, running juju status after I have destroyed my env gives me:
<jono> jono@forge:~$ juju status
<jono> error: The specified bucket does not exist
<jono> is that normal?
<jcastro> yes
<jono> lol
 * jono files bug
<thumper> heh
<jcastro> it should say "You've destroyed the environment, there's nothing here." and so on
<jono> that makes no sense
<jono> you guys can wishlist it :-)
<thumper> jono: what it means is "oi, dumbass, there is nothing there"
<marcoceppi> aka "Dat bukkit's gone, da env probs destroyed yo"
<jono> you should run all errors through gizoogle's engine :-)
<sarnold> hunh, 'apt-cache search jive' returns nothing..
<thumper> marcoceppi: fyi, plugins should land today
<jcastro> well, we're too far along for these to be wishlist, it's getting to the time where we should really start polishing these up
<marcoceppi> thumper: Awesome, can't wait for the next release
<jono> jcastro, totally
<thumper> jono: can you tag bugs "cli feedback" or something?
<jono> thumper, sure
<jcastro> hey
<thumper> ta
<jcastro> papercuts
<jcastro> that's the perfect tag
<thumper> that too
 * thumper realises that he should integrate the new logging shizzle
<jcastro> then when local lands, and he sees juju switch
<jcastro> our lives will be complete
<thumper> jcastro: local is third on my list
 * thumper double checks
<thumper> yeah, third
<thumper> need to get general containers first, and fix machine addressability
<thumper> then comes local provider
<jcastro> right
<jcastro> can't have local without containers!
<marcoceppi> thumper: so, we can expect local provider in a week then ;)
<thumper> so general containers should give us a better co-location story
<thumper> pfft
<marcoceppi> containers will be awesome, I'm really looking forward to that as well
<jono> https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1183116
<_mup_> Bug #1183116: 'juju status' output when no env is non-intuitive <apport-bug> <cli-ui> <i386> <papercut> <saucy> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1183116>
<jono> tagged with cli-ui and papercut
<thumper> jono: awesome
<kyhwana> marcoceppi: i'm hanging out for user namespace support in the kernel
<jcastro> man, a status bug just reminds me of how badly I want juju-top
<marcoceppi> jcastro: you mean juju watch?
<sarnold> juju-top sounds neat :)
<jcastro> marcoceppi: sure
<marcoceppi> jcastro: http://i.imgur.com/OhvuaEg.png we have the power now!
 * thumper sighs
<marcoceppi> hah
<thumper> I want to implement top (aka watch, observe) correctly
<thumper> and plugins, while hacky, and may work...
<thumper> is icky
<marcoceppi> thumper: you could, it'd just supersede the plugin :P
<thumper> aye
<jcastro> robbie was pretty clear to me not to distract the core team with my crazy juju top idea.
<thumper> jono: while messing around, I'd love it if you could think about what you'd like to see as 'help topics'
<jcastro> thumper: but if we find someone _new_, mwahahaha
<thumper> jono: like yesterday, I went 'juju help constraints' and was frustrated that there wasn't anything there
<marcoceppi> jcastro thumper actually, it'd probably be better to write it as juju top and leave juju watch as something for core to land
<thumper> jcastro: roger tells me that most of what we need is implemented already...
<thumper> just the plumbing that needs to be sorted out
<dpb1> thumper: I was told to contact you as well, any chance we can get a quick look at: #1182224  -- from the outside looking in it seems easy and it blocks charms migrating from pyjuju.
<_mup_> Bug #1182224: relation-list returns null with json-output <juju-core:New> <https://launchpad.net/bugs/1182224>
<jcastro> thumper: yeah, I'd rather have containers first, heh
<thumper> dpb1: interesting...
<jono> thumper, sure
<thumper> dpb1: triaged, but not sure how soon I can get someone to it
<jono> one thing that is bugging me is 'juju status'
 * thumper isn't really in charge
<jono> what an incomprehensible pile of nonsense :-)
<thumper> haha
<jono> I am working on an alternative way of presenting that information :-)
<thumper> jono: file a bug with what you want to see :)
<jono> thumper, will do :-)
<thumper> jono: trust me, status is only going to show you more shit
<jono> all I care about with status is:
<dpb1> thumper: I was told you *were* in charge.  believe in it!
<jono>  * are my machines there
<jono>  * is my shit connected
<thumper> dpb1: hmm, by whom?
<jono>  * where is it exposed
<thumper> jono: 'juju status --summary'
<thumper> jono: it isn't there yet, but might be an idea
<dpb1> thumper: oh, just kidding with you.  Just was trying to get a hold of mark and was told you could fill in.
<marcoceppi> +1 to a --summary flag, but I truly love the verbosity of the output from status
<dpb1> thumper: thanks for looking at it.  I'll be patient. :)
<thumper> dpb1: heh
<jono> thumper, I think juju status should be the summary and juju status -v gives you the detail
<jono> :-)
<jono> filing bug now
<jcastro> I don't even like running juju status
<jcastro> what I want is to run juju watch or whatever
 * thumper agrees with jcastro
<jcastro> and have a realtime representation of status
<thumper> just think of the charm schools
<jcastro> instead of "watch juju status"
<jcastro> right
<jcastro> so basically, a CLI version of the GUI
<thumper> anyway...
 * thumper goes to land code
<jcastro> but for charm schools it doesn't matter, we show the real gui
<marcoceppi> so many tools and so much code would break if it was required to be juju status -v
<jono> thumper, jcastro, marcoceppi: https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1183129
<_mup_> Bug #1183129: 'juju status' difficult to read and provides unneccessary information <apport-bug> <cli-ui> <i386> <saucy> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1183129>
<jono> tagged with cli-ui
<marcoceppi> jono: I wonder if this would be better served as juju summary, or something similar. juju status has always been traditionally presented in a way that was designed for both humans and machines to consume
 * marcoceppi collects thoughts for the bug report
<jcastro> but with the API
<jcastro> that handles the machine consumption
<jono> marcoceppi, I don't think machine friendly output is human friendly
<jono> I recommend that the primary way we tell people to check status provides a people-friendly format
<jcastro> right
<jono> no reason why juju status -yaml or something could not present another format
<jcastro> so now that we have an api that machines can consume
<jcastro> make status nice for us!
<marcoceppi> Is the API actually out for juju-core?
<marcoceppi> I mean, other than the websockets thing that kapil posted to the list a few weeks ago
<thumper> one difference right now, is the format param doesn't control the volume of information
<thumper> just how it is presented
<jono> marcoceppi, I agree that exposing json or yaml formatted status would be awesome so when I subprocess it from python I can consume it
<jono> but that should be a switch, not the default UI
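The split jono is asking for already exists as a switch in pyjuju 0.7, which is the pattern under discussion (flag spelling here is pyjuju's; juju-core's equivalent may differ):

```shell
# Human-readable status (the default):
juju status
# Machine-readable status, for scripts to consume:
juju status --format json
juju status --format yaml
```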
<gQuigs> I'm getting: ERROR [('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')] whenever I try to run juju deploy
<gQuigs> any ideas how I should troubleshoot?  What is the url that "etherpad-lite" for instance actually tries to load?
<gQuigs> ^ in juju deploy etherpad-lite
<marcoceppi> jono: thumper I think git does a good job of this when viewing the juju log. There's things that control format, then things that control data volume "prettyprints" that might be interesting to have in a status command
<marcoceppi> gQuigs: Can you try juju deploy with a -v flag for more verbose output?
<marcoceppi> (then put it in a pastebin somewhere like http://paste.ubuntu.com)
<gQuigs> marcoceppi: oops, I forgot to post that: http://pastebin.ubuntu.com/5691834/
<marcoceppi> gQuigs: Are you behind a firewall or proxy?
<gQuigs> it seems like it must be an SSL issue on my machine... because connecting with openssl to store.juju.ubuntu.com works fine
<gQuigs> marcoceppi:  technically yes, but not anything that interesting (standard NAT/Firewall box)
<thumper> https://blueprints.launchpad.net/juju-core/+spec/s-cloud-juju-core-plugin complete
<kyhwana> gQuigs: O.o your SSL is being MITMed?
<marcoceppi> gQuigs: I wasn't able to replicate the error on my machine using juju-0.7, I'm not sure what would be causing that error except twisted bad certs locally or an actual mitm
<gQuigs> If I'm being mitm it's juju specific.. so I guess that leaves bad certs..
<jono> marcoceppi, yeah, I think there are a few options for improving it
<marcoceppi> gQuigs: There might be something else, but that's the first time I've encountered that error in particular
<gQuigs> yup, a ca-certificates reinstall fixed it. sorry for the noise
<marcoceppi> gQuigs: no problem! Glad that's all it took to resolve
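For reference, the fix gQuigs applied amounts to the standard Ubuntu commands below (not verified against this exact failure, but the usual way to rebuild a broken local CA store):

```shell
sudo apt-get install --reinstall ca-certificates
sudo update-ca-certificates --fresh
```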
<paraglade> any ideas why juju-0.7 errors with this during deploy:  ERROR [('PEM routines', 'PEM_read_bio', 'no start line')]
<paraglade> verbose output here: http://pastebin.ubuntu.com/5691915/
<sarnold> bah, why couldn't it at least include the filename that failed..?
<thumper> dpb1: found the source of your bug
 * thumper considers
<dpb1> thumper: ohh?  easy fix?
<thumper> still looking
<thumper> interestingly, the HasLen check in the tests succeeds, as (nil, HasLen, 0)
<marcoceppi> paraglade: you're the second person to be having SSL issues trying to connect to the store today
<thumper> dpb1: yeah, one line fix, and I found the line
<thumper> not sure if anything else will break
<marcoceppi> paraglade: Though the errors are different, it looks like txaws is having problems loading CA certs
<paraglade> marcoceppi: awesome :)
<marcoceppi> paraglade: If it's anything like the last user's issue, try reinstalling ca-certificates, as that resolved the issue for them
<paraglade> well I just deleted the .juju/cache and my local data-dir and I am not able to deploy
<paraglade> s/not/now/
<paraglade> sorry
<dpb1> thumper: ya, that is a problem. It strikes me as wrong to put the fix in the charm since it's not proper json text.  But I can see the quandry.
<marcoceppi> paraglade: so, it's resolved?
<kyhwana> https://www.ssllabs.com/ssltest/analyze.html?d=jujucharms.com < welp
<thumper> dpb1: I'm looking at it now
<marcoceppi> kyhwana: jujucharms.com isn't the charm store
<sarnold> marcoceppi: hrm, http://paste.ubuntu.com/5691944/
<sarnold> oh. what -is- the charm store? :)
<dpb1> thumper: k.  if you need anything just ping.  I'll be afk in a bit
<thumper> dpb1: I think it'll be fine
<marcoceppi> sarnold: I believe it's store.juju.ubuntu.com but I'm not 100% certain. I'd have to dig through the source code again to verify
<paraglade> marcoceppi: yup.  my mysql node is starting up now on a local provider
<marcoceppi> paraglade: ah, didn't realize you were using the local provider
<kyhwana> https://www.ssllabs.com/ssltest/analyze.html?d=store.juju.ubuntu.com&s=91.189.95.66
<marcoceppi> kyhwana: I don't personally run the store, so I can't comment, but this line "This server supports insecure suites (see below for details). Grade set to F." seems to be why they've decided to give it an F ranking
<marcoceppi> kyhwana: if you'
<marcoceppi> d like you can open a bug against the charm store with that information. You'd at least get a response from those maintaining it
<kyhwana> marcoceppi: yeah, the anon cipher suites don't provide any authentication
 * marcoceppi looks for charmstore project
<marcoceppi> kyhwana: if you want to open a bug about it, do it against the juju-core project. It looks like the charmstore is housed inside that project.
<kyhwana> marcoceppi: alright, will probably have to wait till I get home
<marcoceppi> kyhwana: at the very least they'll be able to respond with why it is as it is
<thumper> dpb1: fix submitted for review
<mwhudson> juju-core on arm ping
<hazmat> kyhwana, thanks
<hazmat> mwhudson, better ping luck with that on #juju-dev
<mwhudson> oh i didn't know about that channel
<hazmat> mwhudson, ping cheney (aussie) about it.. he's got a bank of arm test runners
<hazmat> tz delta isn't quite good for it atm
<mwhudson> ok
<mwhudson> i'll email again then
<mwhudson> cheers :)
<hazmat> kyhwana, marcoceppi fixed re ssl
<hazmat> clear cache on page to verify
<kyhwana> hazmat: hmm?
<hazmat> kyhwana, https://www.ssllabs.com/ssltest/analyze.html?d=jujucharms.com
<kyhwana> hazmat: oh, awesome
<kyhwana> tho https://www.ssllabs.com/ssltest/analyze.html?d=store.juju.ubuntu.com&s=91.189.95.66 is still showing F
<hazmat> kyhwana, sadly i don't have access to that one, i'll raise a flag to those that are
<kyhwana> hazmat: ahh ok, cheers. :)
#juju 2013-05-23
<marcoceppi> I found a weird regression
<hazmat> marcoceppi, how so?
<marcoceppi> in juju-0.7 I can do `rsync -avz -e "juju ssh -e <juju_env>" charm/0:/path/to/file ./` but that fails in juju-core (logging bug) since it's trying to parse everything after the unit is given in the command chain
<sarnold> charm/0:/foo  works? wow
<marcoceppi> sarnold: when you set -e to juju ssh, it does, it basically computes to something like this: juju ssh mysql/0 rsync --server --sender -vvlogDtprze.iLsf . "/var/log/juju/*"
<sarnold> marcoceppi: very neat :)
<marcoceppi> hazmat: not sure how likely this is to get fixed in core, looks like a discrepancy with the way arguments are parsed compared to 0.7
 * marcoceppi considers a juju-rsync plugin as an alternative
<hazmat> yeah.. if the latter form works i'd consider it an oddball
<hazmat> ie rsync after the unit spec
<marcoceppi> Should I not bother a bug report then? Given this appears to be "expected" behavior in juju-core?
<hazmat> marcoceppi, for ssh compounding to the unit name to me seems like a non feature.. if it also applies to juju scp where the common form is host:/path then its worth a bug.. else not imo.
<marcoceppi> hazmat: ack
<hazmat> hmm
<hazmat> marcoceppi, by charm/0 you meant unit/0 ?
<hazmat> marcoceppi, you ever head out to lost dogs?
<marcoceppi> hazmat: yes, so something like mysql/0 - what I was saying is that rsync command works with 0.7 pyjuju but not juju-core
<marcoceppi> hazmat: I live near it, but I've not traveled that way
<marcoceppi> I hear they have good pizza and beer though
<hazmat> marcoceppi, indeed they do.. and good sandwiches.. i hit them up on a regular basis.. was debating a journey out this evening
<hazmat> marcoceppi, this form.. `rsync -avz -e "juju ssh -e <juju_env>" charm/0:/path/to/file ./` working isn't worth a bug.. it is if this one doesn't though imo..  juju ssh mysql/0 rsync --server --sender -vvlogDtprze.iLsf . "/var/log/juju/*"
<hazmat> omg .. -vvlogDtprze.iLsf
<marcoceppi> hazmat: neither works, because juju says: error: flag provided but not defined: --server
<hazmat> marcoceppi, so the latter one is worth a bug.. anything after the unit/machine should be passed through to ssh
<marcoceppi> hazmat: ah, that sounds lovely, if it was an invitation I sadly have plans tonight. Any other day I would be down for beer and sandwiches
<marcoceppi> hazmat: ack, so if the latter is fixed, the former will also be repaired, as the former expands to the latter
 * marcoceppi files a bug
<hazmat> marcoceppi, cool, and icc
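The pyjuju-0.7 trick discussed above can be sketched as follows; the environment name, unit, and paths are placeholders, and the expansion shown in the comments is the one marcoceppi quoted earlier:

```shell
# Hedged sketch of the 0.7-era trick: use "juju ssh" as rsync's remote
# shell, so a unit name stands in for the host. <env>, mysql/0, and the
# paths are placeholders, not real values from the discussion.
rsync -avz -e "juju ssh -e <env>" "mysql/0:/var/log/juju/*" ./logs/
# rsync then invokes its transport roughly as:
#   juju ssh mysql/0 rsync --server --sender ... . "/var/log/juju/*"
# juju-core tries to parse the trailing rsync command itself and fails
# ("flag provided but not defined: --server"), which is the regression
# being reported here.
```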
<dpb1> thumper: awesome, thanks
<mwhudson> hmm
<mwhudson> how do you deploy onto quantal?
<thumper> mwhudson: whazzup?
<thumper> mwhudson: simplest way is to set default series in environments.yaml
<mwhudson> done that
<thumper> you also need quantal charms
<mwhudson> aah ok
<mwhudson> and there aren't many?
<thumper> right
<thumper> most are lts only
<thumper> I guess you could branch the precise charms locally
<mwhudson> fun fun fun when the lts kernel doesn't boot with the firmware you are using
<thumper> then use a local charm to push to quantal
 * thumper waves his hands
<thumper> then magic happens
<thumper> sorry, not push
<thumper> deploy local:foo
 * mwhudson sighs
<thumper> I don't know the details...
<thumper> but seems possible
<mwhudson> yeah, must be possible
<thumper> with not too many hoops
<marcoceppi> mwhudson: you can also deploy the precise version of a charm to a quantal environment (not recommended). I believe if you do juju deploy cs:precise/mysql with a quantal default-series you should get the precise charm on a quantal machine
 * marcoceppi double checks
<mwhudson> marcoceppi: don't bother, you don't
<mwhudson> because that's what i did earlier
<marcoceppi> mwhudson: juju-core or juju-0.7?
<mwhudson> 0.7
<mwhudson> because i'm deploying to armhf, see earlier :)
<marcoceppi> ah, fun fun
<marcoceppi> yeah, you're going to need to branch the charms, something like mkdir ~/charms/quantal; bzr branch lp:charms/mysql ~/charms/quantal/; juju deploy --repository ~/charms local:mysql; unfortunately
<marcoceppi> okay /me really needs to go
<mwhudson> ta
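marcoceppi's workaround above, cleaned up as a sketch; the charm name and paths are examples, and it assumes the target environment has `default-series: quantal` set in environments.yaml:

```shell
# Sketch: deploy a precise charm branch on quantal via a local
# repository. "mysql" and the paths are example values.
mkdir -p ~/charms/quantal
bzr branch lp:charms/mysql ~/charms/quantal/mysql
# environments.yaml for the target environment should contain:
#   default-series: quantal
juju deploy --repository ~/charms local:mysql
```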
<sarnold> having done two (very simple) charms so far, it seemed like it'd be easy to have precise and quantal versions.. but now there's also raring, and soon saucy, all before the next lts.. On the one hand, having versions for all of those sounds a bit kind, on the other hand, doing the work to make them happen is enough work that I haven't yet...
<sarnold> .. and I'd feel bad if someone found problems with e.g. a quantal version six months after I'd stopped even using it..
<sarnold> is there a suggested best practice here?
<mwhudson> would one expect juju (0.7) to survive the bootstrap node unexpectedly rebooting?
<mwhudson> 2013-05-22 22:35:55,402: twisted@ERROR: Unhandled Error
<mwhudson> Traceback (most recent call last):
<mwhudson> Failure: zookeeper.NodeExistsException: node exists
<mwhudson> in machine.log doesn't look good :(
<sarnold> mwhudson: I think I heard that's not supported..
<mwhudson> yay
 * mwhudson runs destroy-environment
<mwhudson> one more time can't hurt right?
<sarnold> I assume they go to environment heaven, where the clouds are .. cloudy, and zookeepers have a lot of cool animals to tend
<AskUbuntu> -bash: metrik@Metrik-Corp-Server1-MASS-Control:~/.juju$: No such file or directory | http://askubuntu.com/q/298941
<mattyw> I keep getting the "cannot get latest charm revision: no charms found matching" when deploying a local charm, any idea how to debug this, I'm sure my yaml files are valid. It seems similar to this but I even get it when removing the symlinks I have https://bugs.launchpad.net/juju-core/+bug/1129319
<_mup_> Bug #1129319: Local charm deployment not working if symlinks are used <juju-core:New> <https://launchpad.net/bugs/1129319>
<hazmat> mattyw, perhaps there's an error on the charm itself
<hazmat> mattyw, really simple go program to verify a charm.. http://paste.ubuntu.com/5693306/
<mattyw> hazmat, looks like it was my metadata.yaml having an error
<mattyw> hazmat, that's a great little tool thanks
<mattyw> hazmat, it would be useful for that tool to be available in juju somehow - like a juju verify-charm command or a plugin
<hazmat> mattyw, agreed.. plugins are a little problematic..
<hazmat> for go ironically
<hazmat> no compilation/distribution model
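hazmat's verifier lives behind the paste link above; as a rough stand-in, a shell check along these lines catches the most common metadata.yaml mistakes. The field list is illustrative, not juju's full validation:

```shell
# Build a toy charm directory, then check that its metadata.yaml
# declares the basic required fields. This only approximates what a
# real verifier (or juju itself) validates.
mkdir -p /tmp/toycharm
cat > /tmp/toycharm/metadata.yaml <<'EOF'
name: toycharm
summary: a toy charm
description: demonstrates a minimal metadata.yaml
EOF
ok=1
for key in name summary description; do
    grep -q "^$key:" /tmp/toycharm/metadata.yaml || { echo "missing: $key"; ok=0; }
done
[ "$ok" -eq 1 ] && echo "metadata OK"
```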
<mattyw> it doesn't seem like doing juju destroy-service or juju remove-unit on a charm which has failed works, anyone else seen this?
<sidnei> mattyw: you have to mark it as resolved first, so that the stop hook gets called
<mattyw> sidnei, ok thanks
 * hazmat prefers juju-deployer -T -W
 * hazmat files a bug report for force
<hazmat> bug 1089289 and bug 1183309 fwiw
<_mup_> Bug #1089289: remove-unit --force <juju-core:Confirmed> <https://launchpad.net/bugs/1089289>
<_mup_> Bug #1183309: destroy-service should have a force option <juju-core:New> <https://launchpad.net/bugs/1183309>
<mattyw> marcoceppi, ping?
<marcoceppi> mattyw: pong
<mattyw> marcoceppi, thanks for the legacy_juju review, it's already in charm helpers as the juju-gui guys are using it as well, shall I still submit and MP to get it into the main part of helpers?
<marcoceppi> mattyw: If the code already exists in the new charm-helpers project I wouldn't worry about it. If not, then I'd say open a merge request for it (if you're interested in seeing it live on)
<mattyw> has anyone tried deploying the postgresql charm using juju-core? I'm seeing an error during it's call to relation-list saying -r isn't defined as a flag
<marcoceppi> mattyw: it's a known bug in juju-core
<marcoceppi> mattyw: https://bugs.launchpad.net/juju-core/+bug/1172895
<_mup_> Bug #1172895: relation-list incompatibility with pyjuju: -r <juju-core:Fix Committed by fwereade> <https://launchpad.net/bugs/1172895>
<mattyw> marcoceppi, ok cool, just wanted to know if I should be raising it or was it already captured
<marcoceppi> mattyw: looks like it's actually targeted for the next patch release of juju-core, (1.10.1) not sure when that will be though
<sinzui> hi ~charmers. I have a branch that needs review: https://code.launchpad.net/~sinzui/charms/precise/mongodb/restore-from-dump/+merge/165408
<sinzui> maybe wedgwood should look at it ^ since webops want the feature
<wedgwood> sinzui: did they?
<wedgwood> sounds like something webops would ask for
<wedgwood> I'll have a look today
<sinzui> wedgwood, mthaddon reported the issue in regards to jujucharms.com
<wedgwood> sinzui: I suppose I should say I'll *try* to get to it today. I've got a sprint next week that I'm prepping for. If I don't get to it today, it'll be a while.
<mthaddon> sinzui: ftr, wedgwood is no longer in webops, is now in IS projects, but is obviously still involved in charm reviewing generally
<sinzui> wedgwood, thank you for trying. Does this sprint involve updates to an elasticsearch charm (I saw a reference to it in an rt)
<sinzui> mthaddon, thank you. I did not know that
<mthaddon> sinzui: we're in squads now, rotating between web0ps, projects and operations
<sinzui> oh
<wedgwood> sinzui: and next week's sprint regards that reorg. No elasticsearch work that I'm aware of
<mthaddon> wedgwood: it may be one of the targets for paired juju programming, but I can't confirm that at the moment
<sinzui> okay. I have a todo to update our version of the charm to release 0.90.0. Once done, I think I will propose it to replace the current charm as it addresses most of the issues that have been brought up about the current charm
<paraglade> marcoceppi: looks like I am not getting the SSL issue again, but this time I am getting the error when trying to bootstrap an EC2 instance ( http://pastebin.ubuntu.com/5694293/ )
<paraglade> did you happen to have a fix for this?
<marcoceppi> paraglade: not this specific error, as I mentioned yesterday someone was having "similar" ssl issues, they reinstalled ca-certificates and that resolved it for them. I'm honestly not sure about this error as I've not encountered it in Juju before
<marcoceppi> paraglade: outside of filing a bug, that would be my only suggestion :\
<paraglade> ack
<sarnold> paraglade: you could use strace or fatrace to try to find out _which_ file causes that error.
<sarnold> paraglade: knowing that might get you to a point where you could investigate ..
 * paraglade shakes head
<paraglade> well turns out that I have some *.pem files in my .juju dir that juju does not like.  thanks sarnold strace helped.  I am now able to bootstrap.
<sarnold> paraglade: woo! :)
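The strace approach sarnold suggested can look something like this; the flags are standard strace, while the bootstrap command and grep pattern are placeholders for whatever juju operation is failing:

```shell
# Trace file opens during bootstrap, then look for the certificate or
# key files juju touched; the exact paths that show up will vary per
# setup (here, paraglade's stale *.pem files in ~/.juju).
strace -f -e trace=open -o /tmp/juju-bootstrap.trace juju bootstrap
grep -E '\.pem|ca-certificates' /tmp/juju-bootstrap.trace
```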
<bac> hi marcoceppi, there is a new team ~juju-gui-charmers that has ~juju-gui and ~charmers in it.  we'd like the branch of the GUI that team owns to be the official charm.  can you do that or is that an m_3 thing?
<marcoceppi> bac: I should be able to, let me take a look
<bac> thx
<marcoceppi> bac: juju-gui has been re-promulgated to https://code.launchpad.net/~juju-gui-charmers/charms/precise/juju-gui/trunk
<marcoceppi> I'm deleting the ~charmers branch
<bac> marcoceppi: cool, thanks
<bac> marcoceppi: not yet, please
<marcoceppi> bac: ack, np
<bac> marcoceppi: there is a bug i'm working with orange to resolve.  they only look at ~charmers-owned branches for the charm browser, not the official one
<bac> marcoceppi: i'd like the ~charmers branch to remain until that bug is fixed
<marcoceppi> bac: Oh, that's right. No problem - not sure how you'll want to handle keeping that updated (if at all) but I'll leave that up to you
<bac> marcoceppi: hopefully by resolving the bug v. soonish
<bac> marcoceppi: the upside of this change is now ~charmers can push to that location, which was one of the motivations (he said stating the obvious).
<m_3> bac: marcoceppi: the ~juju-gui team branch has been the official one used when `juju deploy juju-gui`
<m_3> the problem is that jujucharms.com points only to ~charmers branches and needs to be fixed
<m_3> there's a bug on that somewhere
<m_3> marcoceppi: we can't delete the ~charmers stuff with all the junk stacked on top of it still... at least I don't know how to do that without getting lp help from #IS
<bac> m_3, orange has a fix to that bug about to land.
<bac> m_3, hmmm, hadn't considered stacking...
<marcoceppi> m_3: yeah, I tried to delete it before bac said to wait, and was met with newman saying "no no no, you didn't say the magic word"
<m_3> lp:charms/juju-gui was matching lp:~juju-gui/charms/precise/juju-gui/trunk and that was the one deployed by the CLI
<hazmat> m_3, unfortunately the code base has changed so greatly from the version on jujucharms.com  i'm reluctant to do a quick fix..
<hazmat> i guess its not the end of the world..
<m_3> hazmat: yeah, that ones been sitting there for a while... I wanted to get together over UDS-time to hash out a decent soln
<m_3> so current situation is that `juju deploy juju-gui` deploys lp:~juju-gui-charmers/charms/precise/juju-gui/trunk, but jujucharms.com still points to lp:~charmers/charms/precise/juju-gui/trunk
 * m_3 back to talks
<gary_poster> marcoceppi, very cool about juju-test.  I'll see if we can switch to it (may require switching our charm tests to juju core, which we ought to do anyway, but makes testing your work farther away)
<marcoceppi> gary_poster: you can use juju-test with pyjuju :)
<gary_poster> marcoceppi, oh cool!  I thought we would need the plugin stuff
<marcoceppi> gary_poster: naw, if it's in the path, just run juju-test intead of juju test
<gary_poster> awesome marcoceppi I'll pass that along
<gary_poster> thanks
<marcoceppi> gary_poster: plugins haven't even been "released" yet, they just landed in trunk yesterday
<marcoceppi> gary_poster: no problem. I'm all ears to feedback, what sane defaults should be, etc. just let me know!
<gary_poster> cool, will do :-)
<arosales> marcoceppi, +1 on the testing progress. Thanks for the work and update.
#juju 2013-05-24
<bkerensa> marcoceppi: ping
<bkerensa> 2013-05-23 17:51:54,962 INFO Starting networking...
<bkerensa> status: Unknown job: lxc-net
<bkerensa> 2013-05-23 17:51:54,967 ERROR Problem checking status of lxc-net upstart job.
<bkerensa> any ideas?
<marcoceppi> bkerensa: make sure you have 0.7 installed
<bkerensa> marcoceppi: I do looks like I still had the ppa too
<bkerensa> marcoceppi: ok still doing it with 0.7 and ppa purged
<bkerensa> :D
 * marcoceppi tries local provider with pyjuju
<marcoceppi> bkerensa: raring? I'm not able to replicate
<bkerensa> marcoceppi: saucy
<bkerensa> :)
<bkerensa> whats with people still using raring ;p
<marcoceppi> Ah, I'd recommend opening a bug. I'll spin up a saucy vm to give it a shot but I wouldn't be surprised if lxc in saucy has changed
<bkerensa> kk
<bkerensa> marcoceppi: this would be a juju bug though
<bkerensa> right?
<marcoceppi> bkerensa: yeah, though I don't think it'll be fixed as local provider should (is scheduled to) be completed this cycle for juju-core. So that'd be what people use
<marcoceppi> But it won't hurt to file a bug
<bkerensa> marcoceppi: and until such time as its fixed how can i deploy locally?
<bkerensa> :D
<bkerensa> I need to fix my charm
<bkerensa> ;p
<marcoceppi> bkerensa: Raring :P
<marcoceppi> Or, you could spin up a raring ec2 machine and deploy locally on that. Half joke/serious suggestion.
<bkerensa> heh
<bkerensa> if only rackspace was working
<bkerensa> =/
<bkerensa> I got free rackspace for two years and no juju support... first world problem
<marcoceppi> bkerensa: You could spin up a free rackspace machine then deploy using the local provider on that. We do something similar on EC2 to test the local provider in our automated testing
<bkerensa> hmm
<marcoceppi> Or any machine that supports LXC in the kernel
<bkerensa> ahh a raring machine yeah
<bkerensa> I guess I'll do that
<mwhudson> um, using the haproxy charm where do log messages go?
<ehg_> hi guys, i'm using go juju, but would like to switch to pyjuju to set ec2-zone constraints - is it possible to switch between them like that?
<marcoceppi> ehg: You can switch if you have juju-0.7 installed. However, you can't switch on an already deployed environment. juju-0.7 and juju-core deployments aren't compatible but they can be installed side-by-side and managed using `update-alternatives --config juju`
<ehg> marcoceppi: thanks - that's a shame, gojuju seems to be much more reliable - the only things i need from pyjuju are zone constraints
<ehg> do you know if they're in the roadmap anywhere? if not, i might try and implement them myself :)
<marcoceppi> ehg: While I'm sure they're aware of it (as they're always moving to make up for features that haven't been ported yet), if you want to open a bug for it it'll help prioritize that feature
<ehg> cool, will do. i'm really liking juju btw, thanks!
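The side-by-side setup marcoceppi mentions is managed with Debian's alternatives system; a minimal sketch, assuming both packages are installed and registered under the alternative name `juju`:

```shell
# List and switch the active juju binary between juju-0.7 and juju-core.
update-alternatives --list juju          # show the registered alternatives
sudo update-alternatives --config juju   # pick one interactively
juju --version                           # confirm which is active (flag syntax may vary by version)
```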
<paraglade> is it possible to install the keystone (openstack) charm with the grizzly release using Precise series (12.04 LTS)?  I have tried several different settings for the 'openstack-origin' config setting and have not been able to figure it out.
<marcoceppi> jamespage: ahasenack ^
<jamespage> paraglade, yes
<jamespage> paraglade, but the updated version is still in final testing
<jamespage> paraglade, lp:~openstack-charmers/charms/precise/keystone/ha-support
<paraglade> cool thanks I will give that a try.  I have been having fun showing my peers how juju can stand up openstack in 15 minutes as opposed to weeks :). Now I am getting the "now let's see you do that using grizzly"
<jamespage> paraglade, hoping to get the grizzly updates landed this week
<jamespage> paraglade, just working through upgrade testing (folsom->grizzly)
<AskUbuntu> cannot get latest charm revision when ttying to deploy a local charm | http://askubuntu.com/q/299502
<dpb1> Hi m_3: do you think you could finish up a review on: https://code.launchpad.net/~davidpbritton/charms/precise/landscape-client/add-landscape-relation
<marcoceppi> dpb1: He's at a conference this week
<dpb1> marcoceppi: thx, is there anyone else around that can review?  It's been sitting a month, and it's a pretty easy change. :)
<marcoceppi> dpb1: Yeah, if you assign it to "charmers" for review it'll jump in the review queue. I'll see if I can give it a look over today though, given how old it is
<dpb1> marcoceppi: done
<dpb1> marcoceppi: and thx. :)
<nfoata> Hi everyone, i'm trying juju (0.7 python) on Amazon EC2. I notice that the tag name stays empty and I do not see any juju constraint for tags. By any chance, does someone know if this is possible or not? Thanks in advance
<jamespage> nfoata, I don't think the ec2 provider supports tags
<nfoata> thanks jamespage for your answer. indeed, i am seeing the python source code and I do not see anything concerning tag (/usr/share/pyshared/juju/providers/ec2/launch.py)
<nfoata> so it really seems not to be provided for now (maybe later)
<jamespage> nfoata, ec2 has no concept of tags
<jamespage> tags are a MAAS provider concept
<nfoata> when you create on amazon ec2 manually a VM you can add tag (key,value pairs)
<jamespage> nfoata, ec2 supports the other constraints such as memory, cpu, instance-type etc....
<jamespage> nfoata, hmm
<jamespage> nfoata, that does not work so well with juju as its juju that creates the instances on ec2
<nfoata> Thanks jamespage, for the other ones (cpu, instance-type, etc) I use the constraints, and for the localization (region and default instance type) the environment.yaml, and it works well. The tag name was just for improving visibility in the Amazon console, but in fact it's not really important, so maybe another time. I have to go. Have a good weekend
<rideh> what is this madness
<jcastro> marcoceppi: got a sec?
<marcoceppi> jcastro: finishing lunch. be with you in a few
<jcastro> starting lunch
<jcastro> https://juju.ubuntu.com/Events/
<jcastro> basically  I added some text, but it's in the wrong spot, I need you to doublecheck my horrible tables on the wordpress page
<marcoceppi> jcastro: np. will review in a min
<marcoceppi> rideh: which madness do you speak of?
<marcoceppi> jcastro: man you've really got to close your tags up
<jcastro> hey so I inherited this page!
<jcastro> was not my idea to use tables.
<jcastro> but yeah I probably crufted it up
<marcoceppi> tables are absolutely correct for displaying data in a grid
<marcoceppi> but I still shame youuuuu
<jcastro> that's fine, I'm not ashamed of not knowing how to use tables. :)
<rideh> marcoceppi: the entirety of this project, i just heard of it for the first time today… awesome project
<jcastro> rideh: \o/
<marcoceppi> rideh: welcome! Let us know if you have any questions!
<wedgwood> marcoceppi: unless you've got anything to add to the reviews, I'm going to merge charm-helpers
<marcoceppi> wedgwood: fire away
<wedgwood> marcoceppi: thanks. done.
<wedgwood> marcoceppi: m_3: where (if anywhere) is the integration testing work being done?
<marcoceppi> wedgwood: what do you mean by that?
<wedgwood> I mean is there any code yet to run integration tests?
<marcoceppi> wedgwood: there's the juju-test (jitsu test) replacement, and there's charmtester which does our jenkins runs
<marcoceppi> Which are you looking for exactly?
<wedgwood> marcoceppi: just some idea of how things are shaping up.
<wedgwood> marcoceppi: specifically, something to tell my team about organizing tests and how they'll run
<marcoceppi> wedgwood: the charmtester is "stable" but there's lots of updates to be made to it this cycle. I just posted the first revision of juju-test to the list, and will have an update about the whole testing harness framework thing next week - though that doesn't yet live in a repo. For the most part, a test is just an executable in the tests/ directory; it can be whatever you want
<marcoceppi> https://lists.ubuntu.com/archives/juju/2013-May/002478.html
<wedgwood> ah. I hadn't gotten down to that mail folder yet
<marcoceppi> wedgwood: it's more or less the same as the jitsu test stuff, very open ended. When your team starts using it I'm all ears for feedback
<marcoceppi> I'm working on a testing "harness" which is basically exactly what charm-helpers is, just with functional testing in mind. So it's a python library with a shell interface that simplifies and abstracts a lot of the tedious testing stuff that currently lives in lib/test-helpers.sh
<wedgwood> marcoceppi: thanks. we stayed away from jitsu to avoid depending on something that we knew wouldn't be maintained. I'm sure we'll have some feedback quite soon.
<wedgwood> marcoceppi: awesome. I recommend you pull in the bits of the python helpers that were targeted at the client side.
 * wedgwood gets the link
<wedgwood> marcoceppi: marked "NOT IMPLEMENTED" http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/charmhelpers/__init__.py
<marcoceppi> wedgwood: fantastic
<marcoceppi> thanks for the link
#juju 2013-05-25
<kyhwana> soo, dumb question. After I deploy a juju charm, how do I ssh into it?
<kyhwana> If I do "juju debug-hooks smokeping/0" I get "INFO Connecting to remote machine 10.0.3.225..." then  ssh "Permission denied (publickey)".
<kyhwana> (I assume it uses id_rsa..
<kyhwana> nvm, got it. Must not have been copied properly
<kyhwana> well, after all that, that was pretty easy.
<kyhwana> Now I guess I should get it to use a smokeping template that you can configure. Hmm
<kyhwana> hmm, so I assume I can use the hook directory to copy a config to the newly deployed charm thing and the install script in the hook dir to do that.
#juju 2013-05-26
<kyhwana> There, submitted my first charm, hope I did it right x.x
<AskUbuntu> OpenShift charm | http://askubuntu.com/q/300457
#juju 2014-05-19
<sebas5384> someone there?
<sebas5384> :)
<didrocks> hum, is it me or juju debug-hooks is broken (on local provider), using the stable ppa?
* mbruzek changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewer: mbruzek
<lazyPower> didrocks: i can't say i'm seeing the same behavior. I've been in debug hooks for the last 40 minutes working with the wordpress charm
<didrocks> lazyPower: are you using a local provider?
<lazyPower> i am, but i just verified i'm not on -stable. i'm running the 1.19 series :(
<didrocks> ah, that might explain
<didrocks> I have an install hook at fail
<lazyPower> when you debug-hooks in what do you see? just a plain jane tmux screen?
<didrocks> exactly, running as root with like: root@didrocks-local-machine-5:~#
<lazyPower> thats expected
<didrocks> I ran beforehand: $ juju debug-hooks vanilla/0 install
<didrocks> hum, isn't the prompt modified?
<lazyPower> did you attempt to resolved -r the hook?
<lazyPower> you need to re-run the failed hook to regain the hook context that you're looking for
<didrocks> lazyPower: ah, so I debug-hooks
<didrocks> and from outside
<didrocks> I do resolved -r?
<lazyPower> yeah, that's something we mentioned at our last sprint: it would be nice to do all that from within the debug-hooks session rather than open yet another terminal and re-execute the failed hook
<lazyPower> but for now, thats the workflow. yep yep
<didrocks> lazyPower: indeed, that works!
<lazyPower> cheers! :)
<didrocks> thanks a lot, I probably misread the documentation :)
<lazyPower> didrocks: keep that in mind as you go through the process and file any bugs for things that are unclear to you. We can always use the feedback of fresh eyes.
<didrocks> lazyPower: right, I may amend and propose a pull request after rerunning the debug hooks one
<didrocks> I wasn't expecting it to work like that at all
<lazyPower> thanks :) I look forward to ack'ing your PR
<didrocks> yw! thanks to you :)
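The two-terminal workflow lazyPower walks through above, as a sketch; the unit name is didrocks' example:

```shell
# Terminal 1: attach a debug session; each hook execution opens a tmux
# window with the hook's environment available.
juju debug-hooks vanilla/0
# Terminal 2: re-queue the failed hook so it runs inside that session
# ("-r" is short for --retry).
juju resolved --retry vanilla/0
```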
<rbasak> sinzui: make-release-tarball.bash is giving me: "godeps: cannot parse "/home/ubuntu/juju-core/juju-release-tools/tmp.s9oYZdPuOc/RELEASE/src/launchpad.net/juju-core/dependencies.tsv": cannot find directory for "github.com/errgo/errgo": not found in GOPATH" today.
<rbasak> sinzui: any help, please? This worked last week.
<rbasak> I see that github.com/errgo/errgo does appear to exist, and the commit hash mentioned is one behind HEAD.
<rick_h_> rbasak: that was moved to github this last week I believe
<rbasak> rick_h_: I'm using the 1.18 branch. "./make-release-tarball.bash 2291 lp:juju-core/1.18" worked last week, but doesn't work now (so dependencies.tsv hasn't changed I guess?)
<rbasak> 2293 is current which is what I was attempting to start with, but since that failed with the same message, It tried 2291 instead.
<rick_h_> rbasak: yea not sure. I just know in the last week in trunk that moved over to github and the dep is relocated. github.com/juju/errgo in trunk
<lazyPower> natefinch: I'm having trouble reproducing https://bugs.launchpad.net/charms/+source/wordpress/+bug/1317644 I issued a request for more info if you have time.
<_mup_> Bug #1317644: Can't install wordpress on local provider with trusty host <wordpress (Juju Charms Collection):Incomplete by lazypower> <https://launchpad.net/bugs/1317644>
<rick_h_> rbasak: so I'm betting they pulled the old location in errgo/errgo
<rick_h_> rbasak: oh hmm, maybe not https://github.com/errgo/errgo
<rick_h_> rbasak: so ignore me, no idea. Just know there's been some stuff crossing the commit wire right around your trouble
<sinzui> rbasak, when that happens, the juju devs have broken juju.
<rbasak> sinzui: but I tried the old revision that previously worked
<natefinch> lazyPower: I'll try to repro it again today... can't right now.
<lazyPower> ack. Thanks for taking another look natefinch
<sinzui> rbasak, CI doesn't see it broken though? yes, and the devs delete/move repos and GO knows that is bad
<sinzui> rbasak, get the latest scripts from lp:juju-release-tools which may have a fix
<rbasak> sinzui: I only checked out the branch ten minutes ago.
<sinzui> rick_h_, that project was removed a few weeks ago, I asked the developers to put it back
<sinzui> rbasak, I suspect the issue is caused when "go get" gets the current packages for juju, then the script pins juju to the older revision, which has a different set of packages that were not gotten.
<sinzui> rbasak, which branch and revision are you building?
<rbasak> sinzui: is that a bug in the script then?
<rbasak> sinzui: "./make-release-tarball.bash 2291 lp:juju-core/1.18"
<rbasak> (2292 and 2293 also fail)
<sebas5384> hello :)
<rbasak> sinzui: I filed https://bugs.launchpad.net/juju-release-tools/+bug/1320891
<_mup_> Bug #1320891: make-release-tarball.bash fails with godeps failure <juju-release-tools:New> <https://launchpad.net/bugs/1320891>
<sinzui> rbasak, I am still building that to identify the issue
<rbasak> OK, thanks
<sinzui> ah, developers moved errgo to juju/errgo
<sinzui> they broke the build
 * sinzui ponders a shim
<rbasak> sinzui: but CI is still green?
<sinzui> CI wont break until a developer commits to 1.18, then they would see the deps are wrong
<rbasak> Perhaps CI should run periodically then, for processes that have external dependencies?
<sinzui> rbasak, The landing bot has deps managed by people, so there would be an undetected mismatch between dependencies.tsv and what Go sees
<sinzui> rbasak, the devs are supposed to tell me when they fuck this up, otherwise I make them fix a critical bug
<rbasak> sinzui: it sounds like this should really be in the CI loop, so it doesn't need your manual intervention. Should I file a bug for that?
<sinzui> no
<sinzui> rbasak, I have 9 months of work. I wont get to it because the devs have to fix the regressions they introduce
<sinzui> They can always avoid the regression by telling me.
<sinzui> rbasak, I am adding a shim now
<sinzui> rbasak, pull again to get the fix
<rbasak> Trying...
<rbasak> sinzui: that worked. Thanks!
<mhall119> jcastro: ping
<jcastro> mhall119, yo!
<mhall119> jcastro: hey, I'm setting up the new Ubuntu Online Summit, which will be a mix of UDS, Developer Week and Open Week
<jcastro> ok
<mhall119> I've combined most of the development tracks from previous vUDS into a single "Platform Development" track, including core cloud stuff
<jcastro> ok
<jcastro> so you need content from us I take it?
<mhall119> jono suggested we could have a DevOps track too, to focus on the consumers of the cloud, would you guys be willing to lead that?
<jcastro> for sure
<jcastro> I just need dates, # of slots, etc and I can get to work
<mhall119> June 10-12
<mhall119> as many slots/hour as you need
<mhall119> 1400 UTC to 2000 UTC
<jcastro> ok
<jcastro> Are they hour slots or shorter?
<mhall119> still an hour
<mhall119> jcastro: I'll give you 2 rooms for now, so 2 slots per hour
<mhall119> I can easily add more rooms if you need them
<jcastro> ack, let me know when summit opens up
<mhall119> jcastro:
<mhall119> http://summit.ubuntu.com/uos-1406/
<mhall119> LP sprint is https://launchpad.net/sprints/uos-1406 for filing BPs
<jcastro> ok
<mhall119> note that it's 'uos' not 'uds' now
<jcastro> mhall119, do we have an announcement email I can copy and paste with all the info?
<mhall119> jcastro: only http://www.jonobacon.org/2014/04/03/ubuntu-online-summit-dates/ I still need to update the docs on uds.u.c to point ot the new name and tracks
<jcastro> ack
<stub> jcastro: Do charms that support both precise and trusty need to be pushed to separate precise and trusty branches, or just one or t'other? Also curious about merge proposals and stopping divergence if multiple branches in LP.
<jcastro> stub, it is my understanding that the multiple branches is a workaround until Juju supports series itself
<jcastro> but I lost track of the conversation on series and who is winning that argument, heh
<jcastro> but if you support both already then I would say don't push it
<jcastro> marcoceppi, WDYT? ^^^
<stub> I won't do anything  until it is made clear :)
<marcoceppi> stub: so, postgresql has been promulgated for both trusty and precise, for example
<marcoceppi> because it had tests and they succeeded
<marcoceppi> but they're now in two different branches, the trusty and precise versions
<marcoceppi> stub: I'm working on a tool that will fix this sync issue, should be ready later this week
<marcoceppi> as a stop gap for several things that will hopefully land in core and the store this cycle
<stub> I can manually sync in worst case. Did the syslog tests actually pass? I thought it was broken, and I couldn't find any documentation on the syslog interface to know how to fix it and needed to poke people.
<marcoceppi> stub: I ran the integration tests that were in the charm, whatever those test worked
<stub> Cool. Maybe I did get it right then :)
<stub> pasted the right magic incantations from the rsyslog 'docs'
<syannalfo> juju --debug -v status
<syannalfo> 2014-05-19 22:53:20 INFO juju.cmd supercommand.go:302 running juju-1.18.3-trusty-amd64 [gc]
<syannalfo> 2014-05-19 22:53:20 DEBUG juju api.go:179 no cached API connection settings found
<syannalfo> 2014-05-19 22:53:20 DEBUG juju.provider.maas environprovider.go:30 opening environment "maas".
<syannalfo> 2014-05-19 22:53:20 ERROR juju.cmd supercommand.go:305 Unable to connect to environment "maas".
<syannalfo> Please check your credentials or use 'juju bootstrap' to create a new environment.
<syannalfo> Attempting to connect to ntexc.maas:22
<syannalfo> Attempting to connect to 192.168.0.101:22
<syannalfo> 2014-05-19 02:14:52 ERROR juju.provider.common bootstrap.go:123 bootstrap failed: waited for 10m0s without being able to connect: /var/lib/juju/nonce.txt does not exist
<syannalfo> Stopping instance...
<syannalfo> Bootstrap failed, destroying environment
<syannalfo> Please help... :-( I have been at this for days
<syannalfo> My nodes are in ready state
<davecheney> syannalfo: can you ssh to either of those addresses directly ?
<syannalfo> please hold while i try
<syannalfo> no - i need my bub keys on the nodes?
<syannalfo> err public
<syannalfo> I can not ssh into the nodes
<syannalfo> my environments.yaml has the maas-oauth though
<davecheney> it is because of permissoins
<davecheney> or because those names do not resolve or are not routable ?
<syannalfo> both names ping ok
<syannalfo> hostnames that is
<syannalfo> ping edmgh.maas
<syannalfo> PING 192-168-0-104.maas (192.168.0.104) 56(84) bytes of data.
<syannalfo> 64 bytes from 192-168-0-104.maas (192.168.0.104): icmp_seq=1 ttl=64 time=1.12 ms
<syannalfo> etc
<davecheney> syannalfo: thta is not the machine juju is trying to contact
<syannalfo> juju should be trying to contact MAAS right?
<syannalfo> not the nodes
<davecheney> syannalfo: no
<davecheney> juju configures one of your maas notes as a bootstrap now
<davecheney> note
<davecheney> node
<davecheney> that node runs the database and api server
<davecheney> at that point all juju commands talk to that machine
<davecheney> which will itself talk to maas if required
<syannalfo> ok - so juju gets the dns names of the nodes from maas... then talks to the nodes and procures on of the nodes for itself
<syannalfo> one
<syannalfo> ok - so why is it not communicating ?
<syannalfo> does the user ubuntu have to be able to log into the nodes?
<syannalfo> I am using my own name as the administrator on my maas box
<syannalfo> so i can not connect to the nodes without setting up a keyboard and monitor on each node and setting up ssh keys
<syannalfo> and users
<syannalfo> is that right?
<syannalfo> Attempting to connect to ntexc.maas:22 (as who?)
<syannalfo> juju?
<davecheney> always the ubuntu user
#juju 2014-05-20
<AskUbuntu> Juju and no default VPC on AWS | http://askubuntu.com/q/469473
<AskUbuntu> juju deployment error on manually provisioned machine | http://askubuntu.com/q/469618
<AskUbuntu> Glance and ceph hook-failed "ceph-relation chaged" | http://askubuntu.com/q/469661
<gnuoy> So, adding structure to amulet tests for charms.
<gnuoy> I started out and ended up with something that looks a lot like nose I guess
<gnuoy> so maybe I should in fact be using that
<mattyw> marcoceppi, are you on reddit?
<marcoceppi> mattyw: I am, I saw your email
<marcoceppi> mattyw: my only concern is, why not post to r/ubuntu or other established communities?
<mattyw> marcoceppi, I  guess there's no reason - I just thought I'd start it and see what happened
<marcoceppi> I mean, it's cool don't get me wrong
<marcoceppi> not sure if we're /that/ big yet
<marcoceppi> so we should be cross posting still to larger communities that would be interested in juju
<mattyw> that totally makes sense
<jamespage> gnuoy, https://code.launchpad.net/~openstack-charmers/+activereviews
<jamespage> woser
<gnuoy> jamespage, I can grab those jacken ones in the morning for a start
<jamespage> gnuoy, just wondering how many of those should target /next instead of stable
<gnuoy> jamespage, the top 4 might be suitable for stable but the others aren't bug fixes so I don't think they're eligible ?
<jamespage> gnuoy, not sure your top4 are the same as mine
<gnuoy> jamespage, on reflection just the louis-bouchard ones look like fixes I think
<jamespage> gnuoy, I think tribaals are as well but they sweep alot of charm-helpers in
<gnuoy> ah, ok. I thought they were housekeeping
<jamespage> gnuoy, he fixed up apt local caching and block device detection
<jamespage> gnuoy, this is where stable is tricky
<jamespage> we really need to branch charm-helpers on release of the charms
<jamespage> and then backport selected fixes
<gnuoy> yeah
<jamespage> gnuoy, I added a "publish" target to the makefiles in most openstack charms btw
<jamespage> it pushes to precise and trusty branches
<gnuoy> ah, good to know.
<coreycb> gnuoy, jamespage, beisner:  I think we would benefit a lot if we find a way to deploy an environment once with amulet, and run several individual tests on that env where, if one test fails the rest will continue to run.
<coreycb> I'm not sure amulet is designed to do that though.  it looks to be designed to run one test per file, and raise a condition on failure.  e.g. amulet.raise_status(amulet.FAIL, ...)
<lazyPower> coreycb: Kind of. Amulet is great at deploying a topology and making assertions about whats transmitted over the wire, and validating system assertions (Eg: did this vhost get deployed, is this mounted, is a service responding when i query this port)
<lazyPower> coreycb: there's a pattern to build a test suite in a single file per topology - its all about method encapsulation. cory_fu wrote a great test template that exhibits that in the Apache Allura charm.
<lazyPower> coreycb: http://bazaar.launchpad.net/~johnsca/charms/precise/apache-allura/refactoring-with-tests/view/head:/tests/101-deploy.py
<coreycb> lazyPower, ok yeah that looks nice but I'm not sure it continues to execute all the tests if one fails
<lazyPower> coreycb: it will halt if you specify --fail-fast on the command line, otherwise it continues as expected in nose-test style.
<lazyPower> and --set-e may also be used to halt on first failure
<coreycb> lazyPower, hmm, even if amulet.raise_status(amulet.FAIL, ..) is called?
<lazyPower> correct. that should print to stderr and continue executing unless the default behavior has been changed since the 1.3 series.
<coreycb> lazyPower, awesome sauce
<coreycb> lazyPower, thanks, that's great
<lazyPower> np, happy to help
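[Editor's note: the single-file, deploy-once pattern lazyPower describes can be sketched without a live juju environment. This is an illustrative skeleton of the structure, not amulet's actual API; the class and method names are invented, and a real suite's `setup()` would do the topology deployment and `amulet.raise_status` calls.]

```python
import traceback


class TopologyTestSuite:
    """Deploy-once, many-assertions pattern (a sketch of the structure
    discussed above, not amulet's API)."""

    def setup(self):
        """Stand the topology up once; a real suite deploys here."""

    def run(self, fail_fast=False):
        """Run every test_* method in name order, collecting failures
        and continuing past them unless fail_fast is set (mirroring
        the --fail-fast behaviour mentioned in the discussion)."""
        self.setup()
        failures = []
        for name in sorted(dir(self)):
            if not name.startswith("test_"):
                continue
            try:
                getattr(self, name)()
            except Exception:
                failures.append(name)
                traceback.print_exc()  # report to stderr, keep going
                if fail_fast:
                    break
        return failures


class ExampleSuite(TopologyTestSuite):
    def test_a_service_responds(self):
        pass  # would assert on a port or vhost here

    def test_b_relation_data(self):
        raise AssertionError("simulated failure")

    def test_c_still_runs(self):
        self.ran_after_failure = True
```

Running `ExampleSuite().run()` reports `test_b_relation_data` as failed but still executes `test_c_still_runs`; with `fail_fast=True` it halts at the first failure.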
<sebas5384> hey lazyPower! O/
<lazyPower> o/ sebas5384
<lazyPower> how goes the vagrant journey?
<sebas5384> fine! but I didn't continue using the juju trusty box
<sebas5384> that error is still happening for me :(
<lazyPower> about cloning a running container?
<sebas5384> but anyways i have the other one hehe
<sebas5384> yep
<lazyPower> Interesting... I'm otherwise preoccupied with other items in my queue but I'll make sure I circle back to that before the end of the week. I'd like to compare notes between what i've done and what you've done
<sebas5384> sure!! :)
<lazyPower> sebas5384: yeah our Precise box is pretty solid though - i use it every day
<sebas5384> ping me :)
<lazyPower> Sure thing :)
<sebas5384> yeah probably i should use it (the precise)
<sebas5384> oh! something that i'm doing, deploying openstack with juju all-in-one, but i was wondering if you have some neat bundle that I can use to use with juju after
<lazyPower> There's a few openstack bundles in the charm store you can model off of
<sebas5384> i planning to change the --to of the nova-compute-node to 0
<lazyPower>     bundle:~makyo/openstack/2/openstack  is the one i use for demo deployments.
<sebas5384> oohh great lazyPower! thanks
<sebas5384> i will give it a look
<lazyPower> sebas5384: make sure you pass a config option to cinder for the block device, otherwise it'll red out on your deployment and cause the bundle to not fully deploy.
<lazyPower> thats my word of advice :)
<sebas5384> hmmm get it, thanks! but i don't know what to set
<sebas5384> yet
<sebas5384> hehe
<lazyPower> niedbalski: (rehashing) Greetings. Have a moment to talk about your python_debugger merge for charm helpers? Great!
<lazyPower> The use case here is to basically eliminate breaking your install hook to debug that first leg of the charm run, right?
<niedbalski_> lazyPower, right. Actually i'm using that simple helper for jumping into a trace remotely without breaking the install hook
<niedbalski_> also could be a decorator @break, or something like that.
<lazyPower> niedbalski: interesting. So it spins up a remote debugger, what about in the instance of public clouds where that port 12345 isn't open by default?
<lazyPower> Ah wait, i see it imports open_port
<niedbalski_> yep, it first opens the public port, then register a close_port callback @atexit
<lazyPower> interesting. Let me import this and give it a run - what would i need to execute on my workstation to connect? Is this provided by pdb out of the box or do i need a supporting package?
<niedbalski_> should works out of the box
<lazyPower> niedbalski: pdb hostname:port?
<niedbalski_> lazyPower, telnet, netcat also works
<lazyPower> OH! I don't even need to invoke a debugger, this spins up a server for any TCP capable consumer?
<niedbalski_> lazyPower, yep.  In fact, i'm using emacs TcpClient
<lazyPower> this sounds pretty spiffy
<mbruzek> niedbalski_, wow
<lazyPower> niedbalski: dude, this is *awesome*
<niedbalski_> lazyPower, cool. I would like to integrate that on my soon-to-be-released emacs charms minor mode
<lazyPower> niedbalski: I'm going to +1 this
<lazyPower> niedbalski: it shifts you a bit lower in the queue, but i'll make it a point to circle back to this if nobody has gotten to it by end of week
<lazyPower> mbruzek: https://code.launchpad.net/~niedbalski/charm-helpers/python-set_trace/+merge/217956 <-- LGTM. Its got a marco assignee, so depending on if you want to cowboy this or not :)
<mbruzek> Thanks lazyPower I am in the middle of a review right now, but I will add that to my list.
<niedbalski_> mbruzek, :]
<mbruzek> niedbalski_, What is the difference between pdb and ipdb?
<mbruzek> nevermind, google found it
<rick_h_> mbruzek: yea, ipython ftw (although I like bpython but there's no bpdb
<rick_h_> )
<mbruzek> Thanks rick_h_
<mbruzek> niedbalski_, This pdb is great because it does not need a specific debugger to attach.  A generic TCP client will do!  Impressive.
 * mbruzek has only used ipdb before, so was not sure.
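[Editor's note: the helper under review serves a pdb prompt over plain TCP, which is why any TCP client can attach. A minimal standalone sketch of that pattern is below; it is not the actual charm-helpers merge. In particular, the real helper also calls juju's `open_port()` and registers a `close_port()` callback with `atexit`, which an example outside a charm can't do.]

```python
import pdb
import socket
import sys


class RemotePdb:
    """Serve a pdb session over TCP so any client -- telnet, netcat,
    even an editor's TCP client -- can attach. Sketch only; the names
    here are illustrative, not the charm-helpers API."""

    def __init__(self, host="127.0.0.1", port=0):
        self._listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._listener.bind((host, port))
        self._listener.listen(1)
        # port=0 asks the kernel for a free port; record what we got.
        self.port = self._listener.getsockname()[1]

    def set_trace(self):
        # Block until a client connects, then speak pdb over the socket.
        conn, _ = self._listener.accept()
        handle = conn.makefile("rw")
        debugger = pdb.Pdb(stdin=handle, stdout=handle)
        # Break in the caller's frame, not inside this helper.
        debugger.set_trace(sys._getframe().f_back)
```

A hook would call something like `RemotePdb(port=12345).set_trace()`, then the charm author runs `telnet <unit-address> 12345` and gets the `(Pdb)` prompt; typing `c` resumes the hook.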
<mrjazzcat> marcoceppi:  Hey.  Is the charm-helper-sh package just for Precise?  I can't find it from my
<mrjazzcat> machine :)
<marcoceppi> mrjazzcat: yes, it's only precise, it's been obsoleted by lp:charm-helpers instead
<mrjazzcat> marcoceppi:  cool thanks.  But, that means there must be a Trusty version of the Wordpress charm, as it relies on that package
<marcoceppi> mrjazzcat: the latest version of the charm no longer relies on that, the pieces it did need were moved inside the charm
<mrjazzcat> marcoceppi:  ok, thanks.  I just have to go find that version.
<mrjazzcat> marcoceppi:  strange housekeeping.  It looks like this is the trusty WP code:  lp:~justin-fathomdb/charms/trusty/wordpress/trunk
<mrjazzcat> marcoceppi:  sorry, I'm wrong.  That's the same version.  Arg.  Do you know where the Trusty WP charm source is?  Sorry to be a dork :)
<AskUbuntu> Juju bootstraping gomaasapi timestamp error | http://askubuntu.com/q/469778
<marcoceppi> mrjazzcat: well, it's not been moved to trusty yet, so there might still be a few broken things
<mrjazzcat> marcoceppi:  ah, that's why I can't find it!  OK, if it's not released, I'll live with what I have.
<marcoceppi> mrjazzcat: file bugs when you get errors!
<mrjazzcat> marcoceppi:  ok, will do.
<AskUbuntu> JUJU MAAS Bootstrap all on VM | http://askubuntu.com/q/469810
<AskUbuntu> Can't Deploy Wordpress | http://askubuntu.com/q/469848
#juju 2014-05-21
<AskUbuntu> juju sync-tools on maas 502 bad gateway error | http://askubuntu.com/q/469869
<thumper> o/ stub
<thumper> stub: I'm looking at the postgresql charm
<thumper> and considering two main things:
<thumper> * what is the minimum reasonable things I need to configure for ec2 so the db exists beyond the machine
<thumper> * how do I restore a backup that has been taken
 * thumper is looking at python-django, postgresql, and gunicorn
<thumper> probably going to use a subordinate charm for django for the app...
<stub> thumper: two (or more) units means you have a replica of all your data. pg_restore is the command to restore a database, and the charm doesn't help you do it.
<jose> hey guys, do you think a charm for an archive mirror would be useful? I know it wouldn't be able to run in a t1.micro aws instance, for example, but I think it would be a good idea to have one for, as an example, private clouds
<stub> The backup design and scripts predates me - I need to go over it, adding in PITR options.
<lazyPower> hey stub, did you place test_hooks.py in hooks to work around stupid path munging?
<lazyPower> (wrt the postgresql charm)
<stub> lazyPower: No, it ended up in there because that is where the person who wrote it put it :)
<lazyPower> stub: have a moment to lend your eyeballs to my path munging woes? I'm trying really hard in an area i just dont understand.
<stub> lazyPower: I tend to be an 'accept improvements, and polish them later' type gateway.
<stub> sure
<lazyPower> http://paste.ubuntu.com/7495968/
<lazyPower> wait, thats an outdated pastebin
<lazyPower> http://paste.ubuntu.com/7496027/ - there's my dir tree
<lazyPower> and the only reason i'm using nose, is because i've used it in context of a python project and it worked pretty well. it just 'knew' what was up. It was a similar layout - instead of hooks/ it was module_name/*  -- now the issue. When I run nosetests against tests, it fails under charmhelpers, importing stuff - but only in the context of testing
<lazyPower> when i'm running the hooks, they have no such import errors, so i can only assume its nose's path munging
<stub> 10_test_common.py?
<lazyPower> yeah, it gets to import hooks/common - from there it starts tree loading teh dependency chain and blows up
<lazyPower> 1 sec, let me push this branch to bzr and get you some output for context
<lazyPower> lp:~lazypower/+junk/bind
<lazyPower> http://paste.ubuntu.com/7496033/
<stub> At the top of your '10_test_common.py', you probably want something like 'import sys; import os.path; sys.path.append(os.path.join(os.path.dirname(__file__), os.pardir, 'hooks'))
<lazyPower> stub: i have hooks, and hooks/charmhelpers being added implicitly in the test file
<stub> With an absolute path, or a relative path?
<stub> If you use a relative path, the current working directory will hurt you
<lazyPower> it'll resolve to an abspath
<lazyPower> ah i stripped that in the version i committed
<lazyPower> what i had was; http://paste.ubuntu.com/7496035/
<lazyPower> same error
<stub> You really want to calculate the path from __file__, as that is the only fixed absolute path you can start from IIRC
<stub> 'from hooks import' means you need the parent directory of 'hooks' in the path. You are putting the hooks directory itself in the path.
<lazyPower> ooohhhh
 * lazyPower looks shifty
<lazyPower> i knew that
<lazyPower> well that'll resolve something later down the road, still failing. I think I'm going to follow the pattern of the pgsql charm and just move it to hooks
<lazyPower> i dont know what I did wrong, but there's something terribly wrong here.
<stub> try: [imports] except ImportError: print repr(os.getcwd()); print repr(sys.path); raise
<lazyPower> trying to get it to trigger, Its raising OSError which i've changed, but still not catching in the try/except block. Which means its probably raising from before this?
<lazyPower> stub: thanks for the help, its been food for thought. Doesn't seem to be working though
<stub> lazyPower: Yup. I'd need to see the traceback
<stub> I can have another look in about an hour
<lazyPower> stub: i never got the path to print. whats up in the +junk repo is pretty much the same status of the code save for the try/except block.
<lazyPower> I'm calling it a night.. its extremely late
<lazyPower> thanks again for taking a look o/
<jamespage> gnuoy, looking at tribaals resync branches now
<gnuoy> ak
<stub> lazyPower: btw. Make sure your hooks directory has a __init__.py or Python won't think it is a package and 'import hooks' will fail.
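[Editor's note: stub's three pieces of advice -- compute paths from `__file__`, put the *parent* of `hooks/` on `sys.path`, and give `hooks/` an `__init__.py` -- combine into a small demo. The layout below is a throwaway stand-in for a real charm checkout; in an actual test file you would derive the root from `__file__`, e.g. `os.path.join(os.path.dirname(__file__), os.pardir)`.]

```python
import os
import sys
import tempfile

# Build a throwaway charm-style layout to demonstrate the rules above.
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "hooks"))
# hooks/ must contain an __init__.py to be importable as a package.
open(os.path.join(root, "hooks", "__init__.py"), "w").close()
with open(os.path.join(root, "hooks", "common.py"), "w") as f:
    f.write("ANSWER = 42\n")

# 'from hooks import common' needs the *parent* of hooks/ on sys.path,
# as an absolute path so the current working directory can't hurt you.
sys.path.insert(0, root)

from hooks import common
print(common.ANSWER)
```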
<jamespage> Tribaal, is there a bug for the apt race thing you fixed via charm-helpers?
<rbasak> axw: around?
<rbasak> axw: it should work with just juju-mongodb as it does in Trusty.
<rbasak> axw: we need to push this back in an SRU to Trusty, so I'm not sure we can just bring in a new dependency like that.
<rbasak> axw: any idea why the requirement changed?
<jamespage> gnuoy, I made the publish target in Makefile run lint and test first
<jamespage> if they fail you can't publish
<jamespage> gnuoy, does that make sense? should help stop lint and test failures getting promoted
<gnuoy> completely
<jamespage> gnuoy, tribaal's branches all landed
<gnuoy> jamespage, landed to stable ?
<jamespage> gnuoy, yes
<jamespage> gnuoy, I've not enacted 'next' yet
<jamespage> :-)
<gnuoy> kk
<jamespage> gnuoy, nsx fixups queued for review - https://code.launchpad.net/~openstack-charmers/+activereviews
<jamespage> gnuoy, if you like I can show you how to deploy that on serverstack
<gnuoy> that would be great
<jamespage> marcoceppi, just so you know I've branched the ganglia, ganglia-node and ntp charms precise->trusty under charmers - but I've not promulagted
<jamespage> needed charm store accessible trusty versions
<axw> rbasak: it works in Trusty without mongodb-server? ok then - I can take another look in my morning
<axw> rbasak: why does it need an SRU in Trusty, if it's only an issue in Utopic?
<rbasak> axw: I believe so, yes - we have a dep8 test that passed when juju-core was last uploaded to Trusty, and it worked then.
<axw> mk
<rbasak> axw: there are other fixes that we can't fix in Trusty, because the SRU process expects them to be fixed in Utopic first.
<axw> ah I see
<rbasak> Really, juju needs to be fixed so that a copy-forward to the next development release Just Works without needing any code changes.
<rbasak> I'm addressing that separately.
<axw> rbasak: understood. if I get a chance I'll look tonight, otherwise in my morning. if it's urgent, better get onto someone else in juju-dev
<rbasak> axw: OK, thanks. This has been delayed by weeks now, so I don't think another day will really make any difference.
<axw> okey dokey
 * axw bbl
<marcoceppi> jamespage: are they trusty ready? do they need to be promulgated?
<marcoceppi> typically branches shouldn't live under ~charmers that aren't promulgated
<jamespage> marcoceppi, meh - probably
<jamespage> I've been using them on trusty for ages
<lazyPower> stub: so, i had a nose plugin that was causing 90% of my headache.
 * lazyPower burnt it with fire
<hackedbellini> hey guys
<hackedbellini> something strange happened here: I cannot access juju-gui with the password on local.jenv
<hackedbellini> it was working until yesterday. Today I tried to login, using the password on the file, but it says "Unknown user or password."
<hackedbellini> anyone has any idea of what it might be? Or even where can I get the "real" passowrd, since the one on local.jenv is not working?
<hackedbellini> worse. "juju stat" is giving me this: http://pastebin.ubuntu.com/7498090/
<hackedbellini> And no, I can't use "juju bootstrap" to create a new environment since I'm on a production machine
<rick_h_> hackedbellini: did you change the password in the jenv file from when the environment was created?
<rick_h_> hackedbellini: and that looks like a local mongodb timeout?
<rick_h_> hackedbellini: and the gui talks over the api to juju so if there's issues there it could be more a juju failure than the login
<hackedbellini> rick_h_: no, I didn't change the password
<hackedbellini> and yes, I'm googling a little and found that it has something to do with mongodb. Do you have any tips on how can I debug this?
<rick_h_> hackedbellini: can you start by explaining a bit on the environment here. It seems odd that it's timing out mongodb on localhost. Maybe that is from the state server machine 0? Can you ssh to that machine and see if mongo is running?
<rick_h_> hackedbellini: maybe provide some info on the version and such of ubuntu, juju, etc this environment is on?
<hackedbellini> rick_h_: well, I think we fixed it. We restarted juju-agent-juju-local and everything worked again
<hackedbellini> really strange though. In our logs, the service restarted itself at 10am (UTC-3). Nobody did nothing for it to happen
<rick_h_> hackedbellini: well yay and hmmmm, wonder what that was
<hackedbellini> rick_h_: hahaha exactly my reaction to this, "yay and hrmmmm". If I find something, I'll report here or open a bug ;)
<rick_h_> hackedbellini: thanks, and glad it's working and not a bug in the GUI :)
<lazyPower> Guest80317: o/ Hey Ryan
<Guest80317> lol well i guess im guest
<lazyPower> jcastro: Guest80317 is the ILOC guy I was telling you about that may or may not be able to assist us in this endeavour
<jcastro> hi!
<Guest80317> hey
<Guest80317> i need to get my nick registered but ill get to that
<ILOC_guy> that will have to work for a min
<lazyPower> hah, nice handle ILOC_guy
<ILOC_guy> i figured that would work
<lazyPower> So, we have some new stuff coming down the pipeline, and one of which is a HyperV virtualization solution for people interested in Juju. How do you feel about pioneering VM image creation with us on HyperV?
<ILOC_guy> Well i think i could help ya out
<ILOC_guy> i will of course need some detail
<lazyPower> jcastro: ^ drive my man. drive away.
<jcastro> ILOC_guy, yeah that would be awesome!
<jcastro> basically, we want hyperv users to have an out of the box box (heh) so they can turn around and deploy onto hyper-v if they want
<ILOC_guy> that would be nice, since VMM isnt exactly the best software that i have used
<ILOC_guy> i dont know if you have used it but its about as much fun as a sharp stick in the eye
<lazyPower> that snap in based management console for the hyp-v stuff?
<lazyPower> i'd rather herd carts....
<lazyPower> *cats
<ILOC_guy> Yea its part of M$'s system center
<jcastro> well, as long as it's repeatable ...
<ILOC_guy> it can be but its not fast or flexable .... as im dealing with currently
<lazyPower> ILOC_guy: hows the overarching experience with building machine images from powershell?
<lazyPower> That's where we're going to see this go I would imagine. Building a base image with powershell and letting cloud-init handle all the heavy lifting inside the base box.
<ILOC_guy> PS is better....
<lazyPower> ILOC_guy: we should have a group sync at some point in the near future and talk about a path to success and get some feedback on how you would tackle this problem
<lazyPower> jcastro: thoughts?
<jcastro> yeah sounds good to me!
<lazyPower> ILOC_guy: pick a date thats not the 30'th through june 2nd.
<lazyPower> and a timeslot. I'll make a calendar invite and get everyone on board and we can go from there. give you a chance to talk to your boss if you want to do this on office hours or not.
<themonk> marcoceppi: hi
<themonk> Hello everybody
<ILOC_guy> Well my boss is in the M$ camp so i dont know if i can use HER work time to do it.
<lazyPower> ILOC_guy: no worries, if it's a hobby its a hobby :) Just a thought.
<ILOC_guy> well im not saying that i wont push the subject but i need to have ammo to take to this fight
<lazyPower> ILOC_guy: i'm crafting an email to the group of us that's responsible for our VM story. This way you can get answers out of band without tracking us all down on here. Thanks for agree'ing to help - we certainly appreciate the time to talk to us today.
<ILOC_guy> of course any time, its nice to be distracted from the every day roll
<themonk> how to restart unit agent?
<lazyPower> ILOC_guy: you've got mail.
<lazyPower> o/ sebas5384
<sebas5384> hey lazyPower o/
<ILOC_guy> well look at that i do... lol
<l1l> I have a region controller and I am trying to bootstrap node0 to precise, is that not possible?
<themonk> marcoceppi: need your help. in config-changed hook, i need to restart my server after configuration changed but do not want to run restart code for the first time like, install->config-changed-start
<themonk> marcoceppi: how to block restart code when config-changed runs after install hook?
<sebas5384> themonk: whats the problem with restarting?
<sebas5384> themonk: if you have, you can always make an intelligent function that knows about that rule
<themonk> sebas5384: problem is how can i restart before start hook because it is not started yet after install->config-changed
<sebas5384> themonk: you can divide in others scripts
<themonk> sebas5384: how do i know that confir hook is running for the first time?
<sebas5384> you can touch a file :)
<sebas5384> touch .installed
<sebas5384> if you don't started but you are doing: stop, start
<sebas5384> will show something like a warning, but not an error
<themonk> sebas5384: :) i thought about touch like solution. but is there any think juju provide?
<themonk> sebas5384: *anything
<sebas5384> themonk: well if juju can give you that i didn't know
<themonk> sebas5384: thanks :)
<sebas5384> themonk: o/
<lazyPower> themonk: juju doesn't have a concept of if a service is started or not. That's more the domain of whatever you're using to handle your process
<lazyPower> eg: upstart
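[Editor's note: the touch-file idea sebas5384 suggests can be made concrete. Since juju itself doesn't track whether the service has ever started, the hook records that fact itself; the flag path is an arbitrary choice made by the charm, not anything juju provides.]

```python
import os


def restart_needed(flag_path):
    """Return True when config-changed should restart the service.

    On the very first run (install -> config-changed -> start) the
    service has never been started, so we only drop a marker file and
    skip the restart; every later run finds the marker and restarts.
    flag_path is whatever location the charm chooses (hypothetical).
    """
    if os.path.exists(flag_path):
        return True
    open(flag_path, "w").close()  # remember that the first run happened
    return False
```

A config-changed hook would then guard its restart, e.g. only calling `service myapp restart` when `restart_needed(...)` returns True.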
<Djgerar123> HELLO
<Djgerar123> WHERE IS JOJO?
<Djgerar123> JUJU
<zerick> Is juju for Openstack reliable as Packstack ?
<zerick> (the juju charm)
<sebas538_> lazyPower and jcastro alpha draws: https://s3.amazonaws.com/uploads.hipchat.com/45897/307411/beJXOwF8csdcR9C/info_taller_juju_maas-01.jpg
<lazyPower> thats an awesome infographic
<lazyPower> i wish i read spanish
<sebas538_> nothing finished yet, we are going to do more series, etc...
<sebas538_> lazyPower: portugues !!!
<sebas538_> hehe
<lazyPower> that ^
<sarnold> cool :)
<sebas5384> :D
<sebas5384> jose maybe you understand hehe
<jose> sebas5384: spanish? :P
<lazyPower> haha
<sebas5384> hahaha
<lazyPower> jose: dont pull a lazypower mistake
<lazyPower> its protugues with three exclamation points.
<sebas5384> hahaha
<sarnold> portuguese usually is with three exclamation points
<lazyPower> sarnold: ever the ambassador of the obvious :P
 * sarnold shines his Capt Obvious badge
<sebas5384> https://www.youtube.com/watch?v=t85OkP9bQEw#t=407
<sebas5384> hehe
<jcastro> sebas5384, dude that is so cool!
<sebas5384> thanks jcastro!! :)
<sebas5384> jcastro: those are not the finals
<sebas5384> so any feedback will be great!
<sebas5384> :D
#juju 2014-05-22
<mattyw> morning all - is anyone working on a trove charm?
<rbasak> jamespage: good news! 1.18 rev 2294 passes dep8 in packaging in utopic in my local test.
<rbasak> Thanks axw!
<rbasak> jamespage: so next: do we want to upload that, or wait for an upstream 1.18 release?
<jamespage> rbasak, whens it due?
<rbasak> jamespage: and I don't see an MRE for juju-core, so what's the SRU plan for Trusty?
<rbasak> No idea when it's due. Nothing is listed on https://launchpad.net/juju-core/1.18
<rbasak> Is there a release schedule?
<axw> rbasak: sweet :) no worries
<therealmarv> Hello, Iâm trying to get juju running with vagrant on my mac. I used this guide: https://juju.ubuntu.com/docs/config-vagrant.html but my vagrant machine is not doing anything. Last log message is:
<therealmarv> Taking a nap to let Juju Gui get setup
<therealmarv> screenshot: https://www.evernote.com/shard/s6/sh/599d7275-386e-4673-a6aa-b980bdfb9674/47cfe11653f99b1cab31cb3f6234e6c2/deep/0/Vollbild-22.05.14-14-27.png
<rick_h_> lol
<rick_h_> who knew we were nap worthy
<rick_h_> therealmarv: do you know if you used the 14.04 or 12.04?
<therealmarv> 12.04. It is indeed a very long nap… and seems without an end
<rick_h_> hmm, I've not used the vagrant workflow. I wonder if it's loading the first image for the lxc machine or somethng
<rick_h_> marcoceppi: or jcastro any idea on the delay at the gui deploy in this vagrant workflow? ^
<marcoceppi> therealmarv: rick_h_ that's a utlemming thing, but it's basically just deploying the GUI and it takes anywhere from 30s to 5 mins
<rick_h_> marcoceppi: yea, sorry I was going to ping him but don't see him around atm
<rick_h_> therealmarv: guessing based on your irc timing that it's well past that timeframe?
<therealmarv> Iâm sure Iâm waiting longer than 5 minutesâ¦. seems really like a nap without end ;)
<rick_h_> your vagrant image is tired!
<rick_h_> marcoceppi: is there a place to file bugs on the images?
<therealmarv> hehe
<marcoceppi> rick_h_: not yet, utlemming is going to be publishing the build process as open source by end of week. lazyPower is the one who knows most about this process
<rick_h_> marcoceppi: ty much, I'll move to bugging lazyPower and utlemming
<rick_h_> sorry for the trouble therealmarv, you're on the bleeding edge, which we greatly appreciate, and we'll have to work out some debugging tips and such
<therealmarv> no problem. I will try local lxc install now on a fresh virtualbox image.
<sander^work> Do anyone know if it costs anything to borrow the new "cloud in a box" for 2 weeks?
<sander^work> Or who might know the details about that..
<sander^work> Or what channel is most appropriate to ask in? :-)
<jcastro> sander^work, kirkland is the person to talk to, though I am pretty sure he's just going to say to use the form, heh
<lazyPower> therealmarv: are you using the trusty or the precise vagrant box?
<therealmarv> lazyPower: precise
<therealmarv> did not tried out trusty yet
<lazyPower> therealmarv: we're seeing intermittent issues with the trusty box. Precise is still the recommended basebox pending a fix for some fringe issues
<lazyPower> therealmarv: The Vagrant box is pulling down an intermediary redirector to place on the Vagrant image that will forward your requests to the JujuGui instance being deployed via LXC in the container. Did you tweak any of the settings in the Vagrantfile?
<therealmarv> no nothing. I followed strictly https://juju.ubuntu.com/docs/config-vagrant.html
<therealmarv> btw I wonder if something in environment.yaml is needed because it is still default to amazon
<lazyPower> ok. If you destroy the box and bring it back up does it progress past the gui setup nap?
<lazyPower> it should set the default to local during the boot sequence.
<therealmarv> Iâm guessing it is doing so (inside the vm). As I said Iâve done nothing special.
<therealmarv> Recreating the same VM with precise had the same result: nap without end ;)
<therealmarv> I also used the 64bit version every time, if this helps
<lazyPower> Hmm.. Ok.
<sander^work> jcastro, is there some online version of the courses involved with it? as I have hardware I could run it on.
<lazyPower> therealmarv: i'm booting, just a moment. Let me see if this is a wider problem than we are aware of
<therealmarv> Iâm testing trusty in the meantimeâ¦ also booting now
<lazyPower> therealmarv: when you get a chance can you pastebin me the output from /var/log/juju-setup.log?
<therealmarv> ok will doâ¦ my machine is now waiting again on the critical nap (GUI)â¦ letâs see if it goes furtherâ¦
<lazyPower> relevant output would also be fetching the output of juju status, and $HOME/.juju/local/log/all-machines.log
<therealmarv> lazyPower: oh wow. Trusty worked! Strange… but I double checked precise which was napping forever
<therealmarv> Output http://pastebin.com/yL8cygDn
<lazyPower> Interesting. If Trusty works for you, tally ho then :)   You may encounter an issue with more than 2 or 3 machines deployed with juju complaining about being unable to clone running containers
<lazyPower> it's intermittent though, so maybe it'll do fine
<lazyPower> therealmarv: thanks for the output. Looks like the redirector completed and the GUI didn't stand up - output from all-machines would be insightful if you've still got the instance up.
<therealmarv> I will rerun precise setup… now
<jcastro> sander^work, yep: https://insights.ubuntu.com/2014/05/21/ubuntu-cloud-documentation-14-04lts/
<jcastro> we're working on the html documentation now, but basically it's Juju/MAAS/OpenStack
<sander^work> jcastro, a few years ago, I got stuck with a particular error targeted at the bios/hardware.. I guess it's fixed now :-)
<sander^work> Cool.. documentation looks nice.
<jcastro> it really depends
<jcastro> how well the IPMI MAAS stuff will work
<jcastro> but there's been a ton of fixes this cycle, so I am willing to bet it will work
<jcastro> if not, at worst you'll have to manually power on the machines, but you can cross that bridge when you get there
<sander^work> No problem.. as i'm using some remotely controlled bladeservers to test with.
<marcoceppi> sander^work: jcastro it's also pretty trivial to make a new power module for MAAS
<marcoceppi> it's a plugin based system
<marcoceppi> so if the blade server has an API you could make a new power type in MAAS that knows how to talk to it
<sander^work> Cool.
<sander^work> Does it integrate with powering it up, for testing, on vmware or hyper v?
<lazyPower> sander^work: there's incoming work to integrate it with HyperV, i'm unsure of the status of Maas with VMWare
<sander^work> Ok.
<lazyPower> sander^work: there's been some discussion over VMWare on the mailing lists. I would encourage you to join the mailing list and ask the community at large about the status and whether anyone has any insights for you with regard to your specific requirements.
<therealmarv> lazyPower: juju-setup.log from precise: http://pastebin.com/t6EDmHTx
<therealmarv> lazyPower: Precise is napping forever. Triple checked now.
<lazyPower> therealmarv: thats the same output from the juju-setup log, can you vagrant ssh in and get me the output from $HOME/.juju/local/log/all-machines.log?
<lazyPower> sorry for the multi-log request, and thanks for helping!
<therealmarv> @lazyPower: here it is http://pastebin.com/qxi0ygSA np
<lazyPower> therealmarv: Brilliant, thank you!  Looks like the LXC container creation failed, and that caused the script to hang.
<lazyPower> I'll take this output and move from here. Thanks again therealmarv - you've been a big help
<therealmarv> lazyPower: look at the last line  ubuntu-cloudimg-query trusty released amd64 --format '%{url}\n'; confused by argument: trusty; + url1=; container creation template for vagrant-local-machine-1 failed; Error creating container vagrant-local-machine-1
<therealmarv> glad I could help :)
<whit> hey all, I'm having some issue with juju upgrade-charm and need a sanity check whether I'm seeing a bug or I'm just doing it wrong
<whit> I run it like this: juju upgrade-charm --switch=local:trusty/mycharm --repository=file/repo myservice
<whit> and it tells me it's updating the charm (and increments the version by 1)
<whit> when I look on the unit ala cat /var/lib/juju/agents/unit-uaa-0/charm/.juju-charm
<whit> the version number is several behind what was reported by upgrade-charm
<whit> all service units are resolved
<marcoceppi> whit: what does juju status show?
<whit> hey marcoceppi :), https://gist.github.com/whitmo/7acc7d87f7351df6cede
<whit> marcoceppi, maybe I jumped the gun.  I see the upgrading there
<marcoceppi> whit: yeah, it may take a hot seccond
<marcoceppi> whit: also you don't need to do --switch when just upgrading the charm
<marcoceppi> switch is more like "I want to go from charm store to local" or vice versa
<marcoceppi> it's just as crazy clobberful as the --to flag
<whit> marcoceppi, but local to local is fine?
<marcoceppi> whit: yeah, so you can do upgrade-charm and point to a different --repository and everything without using --switch
<marcoceppi> just as long as the revision file number is greater than what is currently deployed
<whit> marcoceppi, how is a revision tracked on a local charm?
<marcoceppi> the revision file
<marcoceppi> it holds an arbitrary number that represents its "revision"
<whit> ah
<whit> marcoceppi, is bumping that necessary sometimes?
<marcoceppi> whit: never since 1.18
<marcoceppi> it'll automatically do a +1 bump on upgrade-charm
<marcoceppi> in past versions you had to do a -u flag, or manually increment it
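The revision bookkeeping marcoceppi describes can be sketched in a few lines of shell (the repo path and charm name here are made up for illustration; since juju 1.18 the bump happens automatically on `upgrade-charm`, so this just mimics what the tool does):

```shell
# Hypothetical local charm repository with a "revision" file, as used by
# juju upgrade-charm --repository=... local:series/charm
repo=$(mktemp -d)
mkdir -p "$repo/trusty/mycharm"
echo 1 > "$repo/trusty/mycharm/revision"    # arbitrary starting revision

# What upgrade-charm effectively does since juju 1.18: read the number and
# bump it by one, no -u flag or manual editing needed.
rev=$(cat "$repo/trusty/mycharm/revision")
echo $((rev + 1)) > "$repo/trusty/mycharm/revision"
cat "$repo/trusty/mycharm/revision"         # prints 2
```

The only rule marcoceppi states is that the deployed revision must be lower than the repository's, which the automatic +1 guarantees.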
<whit> marcoceppi, not seeing that
<whit> this is stuck at 001 (though juju is adding a -+1 to each description it pushes out)
<whit> anyway, thanks marcoceppi !
<whit> marcoceppi, I had some old juju hanging around. thanks again...
<rbasak> jamespage: do you have a minute? I've been testing "future" releases of juju so that the current Utopic bugs won't happen again.
<jamespage> rbasak, bit busy right now
<rbasak> jamespage: OK, I can leave it for now.
<jamespage> rbasak, tomorrow am?
<rbasak> jamespage: sure
<s3an2> I am doing a test deployment of openstack icehouse on 14.04, I am trying to use a dedicated network for ovs gre comunictions, however it is currently using the wrong NIC interface. I can see inside ./plugins/ml2/ml2_conf.ini the value local_ip that looks to be responsible for this but can not see how this can be changed to another IP on another NIC within juju
<s3an2> I did notice that a chef build had this issue but it was patched, you can see this here https://community.rackspace.com/products/f/45/t/3245
<arosales> jose: hello
<bodie_> I'm trying to figure out whether I can use Juju to deploy CoreOS units.
<AskUbuntu> Cannot Download Charm From juju-gui | http://askubuntu.com/q/470730
<sebas538_> bodie_: yeah that would be awesome
<sebas5384> I saw a talk about the future of containers
<sebas5384> and I think topology problems won't be the problem, because we will have something like "cluster of process"
<sebas5384> like openshift do with cartridge x gears
<niedbalski__> hey mruzek , Did you have a chance to review the remote pdb merge?
<bodie_> sebas5384, yeah, have you checked out deis.io?  I think it runs on fleet
<sebas5384> bodie_: hmm i will take a look
<bodie_> fleet being a container manager for coreos
<sebas5384> bodie_: yeah! Exactly that
<bodie_> basically open source heroku
<bodie_> looks soooo cool
<AskUbuntu> juju bootstrap time-zone 7:00 hours front of server | http://askubuntu.com/q/470751
<sebas5384> yeah
<sebas5384> bodie_: theres always missing the relation hooks, that are so good in juju
<bodie_> definitely great stuff
<sebas5384> jcastro: how juju's vision is aligned with paas using things like deis.io ?
<jcastro> we don't really do paas we're a tool for people to deploy paas'es
<sebas5384> jcastro: hmmmm that give me a lot to think
<lazyPower> sebas5384: with a bit of configuration several of our juju charms (see: rails) would work with some paas aspects of continuous deployment - however juju by itself is not a PAAS, its IAAS, you deploy PAAS's over IAAS to handle the scale of your PAAS
<lazyPower> which is a long winded way of repeating what jorge just said... sorry for the duplication.
<sebas5384> hmm thanks lazyPower
<bodie_> jcastro, I'm interested in thinking about that too
<sebas5384> i'm thinking if for a team of developers, juju is needed, because production can be exactly paas, because i can scale and do redundancy too
<bodie_> jcastro: do you think it would be possible to abstract a CoreOS cluster into a deploy target such that containers could be deployed on them as services?
<bodie_> or are you saying that's definitively not the goal
<jcastro> bodie_, I think that would be totally awesome
<bodie_> aye
<jcastro> as far as juju is concerned, that's just another cloud
<jcastro> the same as "AWS" or "Rackspace"
<bodie_> perhaps Fleet could be used as the deploy API
<bodie_> I'm still trying to wrap my head around some of this
<bodie_> basically, I'd really like to put together a thing where I can add CoreOS instances as needed
<bodie_> and then those are deploy targets for... perhaps juju, perhaps something else
<jcastro> I think maybe(?) hazmat has looked into stuff like this, not sure though
<bodie_> I think the folks at deis.io are trying something similar
<bodie_> cool, I'll have to ping him
<jcastro> I asked them on HN why they didn't just use juju for the heavy lifting but they didn't like the license
<jcastro> there was another similar PAAS that was doing some of the same things, the name escapes me though
 * hazmat steps out of charm authoring
<hazmat> bodie_, yeah.. deis.. pivoted heavily in the last few months.. from chef + docker + django.. to coreos + docker + django.
<hazmat> bodie_, re coreos as a target.. its conceivable.. in a couple of forms, though perhaps not what you're thinking of... the cleanest separation with orchestration would be a set of charms.. one per app (set of containers) that upon instantiation would drive the core os containers, and reconfigure their sysd conf based on relation changes.
<hazmat> bodie_, more natively is tough since the docker full os container support is pretty primitive
<hazmat> i was playing around with the ubuntu-upstart images last night
<hazmat> on the public registry, but they have some issues.. if you install some packages in them... plymouth hangs and sleeps.. looks like some events are missing for full os container startup.
<bodie_> hmm
<sebas5384> hazmat: should be more like, app container, not full os
<sebas5384> or service container
<hazmat> bodie_, the primary issue for orchestration is  that a typical docker container is a brick... you get to cli and env params only at runtime. if you want to orchestrate you need to be able to change that, and the only way to do that with app containers is to restart them round-robin.
<hazmat> sebas5384, so for app containers.. we have a docker based charm in rethinkdb that illustrates using juju to orchestrate via restart of nodes on changes
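hazmat's point that an app container is "a brick" you can only reconfigure by restarting can be illustrated as a rolling restart. The `docker` function below is a stub echoing what it would run, so the sketch is self-contained, and the container names are invented:

```shell
# Stub standing in for the real docker client; a real script would call it.
docker() { echo "docker $*"; }
containers="web-1 web-2 web-3"        # hypothetical app containers

# App containers only read cli/env parameters at start, so pushing a config
# change means restarting each one in turn (round-robin, never all at once).
for c in $containers; do
  docker stop  "$c"
  docker start "$c"   # comes back up reading the new parameters
done
```

Running more than one container is what makes the round-robin safe: while one member restarts, the others keep serving.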
<bodie_> interestnig
<bodie_> ting*
<hazmat> making containers work in a general way (and play nice with non containers) also means handling first class networking for containers
<bodie_> I was thinking perhaps if you want to alter things you simply plop down a new container and kill the old one once it's up
<hazmat> which is  something we're working on atm
<hazmat> bodie_, fundamentally for app containers you need to run multiples.. new one up and old one dead.. is risky around data volumes
<hazmat> sebas5384,  bodie_, what's the use case.  are you interested in containers or image based workflows or?
<sebas5384> hazmat: so docker runs into a lxc container (or not), and then to scale you can add more machines?
<hazmat> sebas5384, or add more containers on other extant machines
<sebas5384> hazmat: good question ;)
<hazmat> sebas5384, bodie_ you can do containers today with juju .. both lxc and kvm
<hazmat> even nest them :-)
<sebas5384> yeah!! that's one of the main things why i'm using it
<hazmat> bodie_, also fwiw. i just pushed out at an etcd charm
<sebas5384> juju with openstack using docker images is interesting too
<hazmat> cs:~hazmat/trusty/etcd
<hazmat> sebas5384, it is.. although it gets really weird there.. a full os container makes more sense at that layer/level.
<hazmat> ie.. doing things like block volume attach from cinder doesn't map to docker..
<hazmat> whereas a full os container would have no issues with that
<hazmat> sebas5384, the docker heat integration seems a bit better suited to single process app containers.
<sebas5384> hazmat: didn't know about that docker x heat integration
<hazmat> honestly.. i think lxc containers (full os) map better as nova compute instances... but i do like the image workflow and portability that docker brings to the table
<sebas5384> hazmat: yeah, exactly
<hazmat> i'll be at dockercon in a few weeks if either of you are around
<hazmat> there's a new openstack working group on containers as well
<sebas5384> now i'm thinking how i can do paas with juju(not just for deploying)
<hazmat> sebas5384, so full enterprise paas.. we've got some cloudfoundry charms in the works (pretty early stages)
<sebas5384> hazmat: nice! well i won't be there, but will be nice to have a hangout to discuss this kind of things :)
<hazmat> but it does basic app deploys into warden (cloud foundry's container impl ;-)
<hazmat> sebas5384, sounds good.. jcastro would you be up for organizing something like that?
<hazmat> hangout on air style
<sebas5384> nice hazmat, taking a look at it
<sebas5384> maybe i was using juju for the wrong thing after all hehe
<sebas5384> yeah! hazmat that would be great
<hazmat> sebas5384, there are definitely folks that treat juju as a paas.. and that can work.. but there's a pretty good blog post on what paas should mean from the cf folks
 * hazmat digs
<hazmat> sebas5384, http://blog.cloudfoundry.org/2013/10/24/essential-elements-of-an-enterprise-paas/
<sebas5384> thanks hazmat, would be my reading for later :D
<sebas5384> hazmat: Buildpack support and relation between each one
<sebas5384> is what i'm concerned about
<sebas5384> because building an app with some custom configurations of the nginx service or php for example
<hazmat> sebas5384, in twelve factor apps.. buildpack is generally independent of service usage. ie. relations  inject env /conf variables for the built app.
<hazmat> re twelve factor app .. per heroku paas methodology though a bit more general than that .. http://12factor.net/
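hazmat's twelve-factor point (the buildpack produces one artifact; relations just inject env/conf variables into it) can be sketched like this. `DATABASE_URL` and the connection string are illustrative conventions, not a juju API:

```shell
# What a relation would inject into the app's environment (made-up value).
export DATABASE_URL="postgres://dbhost:5432/app"

# The "built" app never hard-codes its backing service; it only reads env
# at startup, so the same artifact runs unchanged in any environment.
cat > app-start.sh <<'EOF'
#!/bin/sh
echo "connecting to ${DATABASE_URL:?DATABASE_URL not set}"
EOF
chmod +x app-start.sh
./app-start.sh   # prints: connecting to postgres://dbhost:5432/app
```

Swapping environments (dev, staging, production) then means changing only the injected variables, never rebuilding the app.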
 * hazmat dives back into charm authoring
<sebas5384> hazmat and bodie_ thanks! great talk, I'm going to think more about this
<sebas5384> jcastro: please please can we schedule some hangout on air about this topic? i think is really interesting
<jcastro> sebas5384, like an openended discussion about paas/docker/containers, etc?
<jcastro> sure, I'll do it after the troubleshooting I and II and LXC debugging
<sebas5384> jcastro: yeah! and how Juju can be used as PaaS or just as IaaS
<sebas5384> great :)
<sebas5384> because of the fact that drupal or wordpress charms exist, paas (not completely) can be achieved
<sebas5384> with juju
<bodie_> I think I would see juju more as the tool that the cluster admins / paas provider would use to deploy the services that would act as the PaaS, managing containers
<bodie_> but I'm not sure I have all that straight in my head
<sebas5384> yeah bodie_ i'm confused now about that
<sebas5384> because I have a startup implementing devops in the company, and in our clients
<sebas5384> so, i have to choose very well the tools we are going to use
<sebas5384> to automate things
<sebas5384> hey lazyPower http://awesomescreenshot.com/02e2unug39
<sebas5384> hehe
<lazyPower> i'm everywhere
<sebas5384> haha spooky
<jose> arosales: pong
<jose> arosales: sorry, was taking an exam :) what's up?
<arosales> jose: hi. I hope your exam went well.
<jose> kinda :P
<arosales> :-)
<arosales> jose: wanted to thank you for all the work you are doing on improving quality in the charm store
<arosales> and also wanted to see if you would be interested in project that is very near and dear to me.
<jose> well, what is it about?
<arosales> The Great Charm Audit of 2014
<arosales> https://lists.ubuntu.com/archives/juju/2013-December/003331.html
<arosales> its also become important as we try to move charms forward into trusty
<arosales> it would be solid to audit charms and also get them moved into Trusty during that audit
<arosales> The thought is with amulet tests for each charm we can then test against Trusty and see if we can actually move it forward
<jose> audit is something I will try to work on, I think I have the checklist :)
<jose> about amulet tests... I'm not a python person, but I can write a basic deployment test, not sure how I would be able to handle service verification
<arosales> good to hear you are intrested in the audit :-)
<arosales> jose: we could work with you to get ramped up on amulet. A lot of the work would be looking at relations and config testing per charm
<jose> arosales: well, I'll be more than happy to help wherever I can, that's for sure
<arosales> jose: its a good opportunity to learn how different charms work and get some good merge proposals
<jose> as well as learn some python in between!
<arosales> jose have you seen https://docs.google.com/a/canonical.com/spreadsheet/ccc?key=0Aia4W3c4fbL-dGs4SVBJMGRIdnlSMWhzSmo3WE1mZ1E&usp=drive_web#gid=0
<jose> yeah, it's open in my browser
<jose> there's *lots* to do in there
<arosales> ah great, so that list is sorted with highest priority at the top
<arosales> anything that is of interest that is in yellow needs done :-)
<jose> I think 'passes proof' is the easiest to get done :P
<hazmat> jose, tests don't have to be amulet
<hazmat> jose, its any executable just like hooks
#juju 2014-05-23
<Term1nal> Does it make sense to deploy an ubuntu charm??
<davecheney> Term1nal: sure
<Term1nal> Could I say............. deploy it to a container? :D
<Term1nal> Or do I just add a container and just do whatever it is I needed in that?
<Term1nal> Want to deploy a VestaCP setup, I suppose I could make it into a charm...
<arosales> Term1nal, you can deploy to a container, and we do have an Ubuntu charm
<arosales> Term1nal, http://manage.jujucharms.com/charms/trusty/ubuntu
<arosales> Term1nal, re deploying to containers https://juju.ubuntu.com/docs/charms-deploying.html#deploying-to-specific-machines-and-containers
<arosales> Term1nal, but VestaCP may make a good subordinate charm and should be pretty straightforward
<arosales> given this is the install method for VestaCP, http://vestacp.com/#install
<bradm> anyone about who knows about the python-django charm?
<AskUbuntu> Can I have multiple local juju environments? | http://askubuntu.com/q/470917
<lazyPower> bradm: i have limited knowledge of it, what would you like to know?
<bradm> lazyPower: I'm trying to get it to deploy a bzr based project, and its failing on the install hook
<bradm> lazyPower: ansible ends up barfing with a "fatal: [localhost] => /etc/ansible/host_vars/localhost: Expecting property name: line 1 column 2 (char 1)"
<bradm> lazyPower: and the task listed is "get mercurial source", which is odd given its a bzr repo
<lazyPower> sounds like it may have interpreted the resource incorrectly. Can you file a bug with the output?
<lazyPower> + charm version
<bradm> lazyPower: I'm actually writing one up now :)
<lazyPower> that's brilliant, thank you!
<lazyPower> Fixing problems, one bug at a time.
<bradm> its entirely possible my bzr repo isn't right or something, the docs on it are limited
<bradm> but I doubt its causing this issue
<lazyPower> bradm: did you set the vcs config options to bzr?
<bradm> lazyPower: indeed I did
<lazyPower> boo, i was hoping for an obvious fix
<bradm> lazyPower: my yaml file set vcs to bzr, and repo_url to my bzr branch on lp
<lazyPower> hmm, lp:~user/project/branch?
<bradm> lazyPower: yup!
<lazyPower> can you try giving it the bzr+ssh url?
<lazyPower> that may make a difference.
<bradm> lazyPower: should there be quotes around the strings?
<lazyPower> wouldn't hurt.
<bradm> I've tried both with and without quotes
<lazyPower> yeah, its not required
<bradm> I didn't think so, but the examples aren't consistent
<lazyPower> but in the off chance the yaml parser is getting funky during interpretation its best to quote them
<bradm> the weird bit is if you look at the ansible playbook, it shouldn't be doing the mercurial task, it should be doing bzr
<bradm> which means the variables aren't being set right it seems
<lazyPower> do me a favor and run juju run --service python-django config-get and paste me the output
<lazyPower> scrub any sensitive details first of course. *friendly reminder*
<bradm> lazyPower: http://pastebin.ubuntu.com/7504234/
<bradm> lazyPower: I'm learning django, the app I'm deploying is from the django tutorial :)
<lazyPower> right on. Exciting stuff learning new frameworks.
<lazyPower> I just recently got started with flask, and i'm loving it
<lazyPower> the sinatra of the python universe
<bradm> oh, interesting, I've heard good things
<bradm> I'm a sysadmin though, not a dev, so I pick frameworks to learn most relevant to what I'm doing workwise
<lazyPower> ok i just kicked off a deployment with relevant configs, 1 moment while my environment stabilizes.
<lazyPower> Yeah, i ride that line between dev and ops every day. I'm a developer at heart, op-smith by trade.
<lazyPower> its good to see others picking up the interest in developer <=> ops communication and mindset.
<bradm> lazyPower: well, you know I'm from Canonical, right? :)
<lazyPower> Nope, but I do now
<lazyPower> http://i.imgur.com/A85WCCz.png
<bradm> oho, I'm using trusty, that's probably a difference
<lazyPower> http://paste.ubuntu.com/7504242/
<bradm> and juju 1.18.3, but I doubt that'll make a huge difference either
<lazyPower> i got a 500 response code, i didn't remote in to see if its the code or a tanked deployment
<lazyPower> but it made it past deployment
<bradm> ok, let me redo this with precise and see
<bradm> lazyPower: I'm not sure I did the bzr branch right, it wasn't exactly clear what it needed, but thats another thing to debug once I get the install right
 * lazyPower nods
<lazyPower> I'll be around for another couple of minutes. its pretty late here and I should get some sleep
<lazyPower> i'll stick around to see if you hit paydirt though. would be nice to see you unblocked.
<bradm> cool, thanks
<bradm> just waiting for lxc to catch up to my juju commands now :)
<bradm> lazyPower: lxc seems upset at me for some reason, might take a few to fix
<lazyPower> ack.
<lazyPower> if you need anything further closer to 9am EDT I'll be back at it.
<lazyPower> best of luck to you bradm
<bradm> lazyPower: thanks heaps for your help, its definitely pointed me the right way I think
<bradm> lazyPower: huh, I've tried using precise, I've tried using the dev juju version, I still get the same error
<bradm> lazyPower: filed LP#1322449
<_mup_> Bug #1322449: python-django charm fails to deploy bzr project <python-django (Juju Charms Collection):New> <https://launchpad.net/bugs/1322449>
<jamespage> marcoceppi: I made gnuoy a member of the charmers team - he was until he changed teams in Canonical and he's doing lots of work on charms and charm-helpers
<jamespage> gnuoy, welcome back!
<gnuoy> jamespage, thanks :-)
<caribou> Q: when the upgrade-charm hook is fired, should it have all the relations available ?
<caribou> I'm calling the function that handles 'relation-changed' from the upgrade-charm hook and apparently it doesn't see any of the relations that 'relation-changed' sees
<marcoceppi> jamespage: for future reference there's an application process. if gnuoy needs  access to stuff to work faster we can build teams to do so
<marcoceppi> jamespage: ie, if we need to create a charm-helpers maintainers team, etc
<jamespage> marcoceppi: oh - the document I read must be wrong then
<jamespage> marcoceppi: charmers is appropriate - he's working on charms and will be helping with reviews etc..
<jamespage> plus he's lots of good history
<marcoceppi> jamespage: https://juju.ubuntu.com/docs/reference-reviewers.html#join-us!
<jamespage> marcoceppi: I actioned under "Upon getting involved with these activities, we'll probably ask you if you'd like to join charmers"
<marcoceppi> jamespage: ah, well it's ask to apply, wording is vague there
<jamespage> marcoceppi: indeed
 * marcoceppi updates docs
<jamespage> marcoceppi: apologies if I subverted the normal process :-)
<marcoceppi> jamespage: no worries! gnuoy gets a +1 for charmer from me so process remains intact
<jamespage> marcoceppi: excellent
<gnuoy> marcoceppi, thanks, much appreciated
<marcoceppi> gnuoy: I'll send you a welcome email in a bit
<gnuoy> thanks
<sebas5384> hello o/
<sebas5384> did some beer thinking
<sebas5384> about the paas thingy with juju
<bodie_> sebas5384, aye?
<sebas5384> bodie_: oye = ?
<bodie_> I was reading the log, and you mentioned you'd done some thinking about juju-paas.  :P
<sebas5384> oohh yeah
<sebas5384> hehe
<sebas5384> sorry
<sebas5384> so php, nginx charms, etc...
<sebas5384> can make possible the service scale
<lazyPower> sebas5384: go on...
<bodie_> he named the language that must not be named
<lazyPower> I don't see pearl mentioned anywhere in the recent history...
 * lazyPower hides
<lazyPower> s/pearl/perl
<sebas5384> so we can actually use juju as a paas, using some CD tool
<bodie_> you'd need a proper deploy target
<bodie_> I don't think you necessarily want to spin up a new cloud instance for each app, for example
<sebas5384> bodie_: thats not for always
<sebas5384> but for example
<sebas5384> when we make some deploy to production or stage
<sebas5384> 80% of the problems are because of some difference between the environments
<sebas5384> and thats clearly a waste
<lazyPower> sebas5384: as was covered yesterday you can use juju to some semblance of a PAAS
<sebas5384> nice lazyPower
<lazyPower> well that wasn't supposed to have been sent yet, stupid itchy enter key
<lazyPower> anywho
<lazyPower> we're getting a proper PAAS in the coming months with CloudFoundry - its under active dev now.
<sebas5384> so making php charms is not a crazy thing
<lazyPower> Well, depends on how it fits in your over-arching architecture
<lazyPower> There's a long standing todo item to charm up a PHP Framework like Cake, Symfony, Zend, and provide that to the community as well.
<sebas5384> i saw some issues about the php-app charm
<lazyPower> so, it stands to reason that building PHP Application charms are completely within the vision and scope of juju
<lazyPower> but it may not offer the same benefits you would get from a PAAS like cloud foundry where you deploy CF, and then allocate app pools within CF and deploy like a madman
<lazyPower> heroku kind of set the bar for that, and PAAS's are all the rage these days
<lazyPower> sebas5384: However, if you have an idea about using juju as a paas - i'd love to see it.
<sebas5384> but for me offering paas using juju is more friendly to a customer that has their own datacenter and builds every machine from the ground up
<sebas5384> lazyPower: great! i'm working on that
<lazyPower> sebas5384: I hear ya. I had the same thought a few months ago and dove head first into learning it. My first thought was "Putting the gui overtop of DO would be pure fire for their business model. They sell more VM's and I get a dead simple management interface for SOA"
<lazyPower> because lets face it, rick_h_'s team has a really awesome face for juju. The GUI gets more people interested in the product than anything else i've ever done to 'sell it' at talks.
<sebas5384> lazyPower: exactly, and the relation thing
<sebas5384> for example
<sebas5384> openshift had already good stuff happening
<sebas5384> but
<sebas5384> doesn't have gui, relation and topology portability that juju brings to the table so well
<lazyPower> That's true.
<sebas5384> so for me buildpack = bundle
<sebas5384> and i can have more than one bundle, of the same project
<sebas5384> always talking about one app is not reality actually, because in the same project you have more than one app running
<sebas5384> we have projects with node and drupal at the same time
<lazyPower> Right. You get into the territory where SOA principles apply. We're running services that utilize messaging to achieve a goal.
<sebas5384> lazyPower: nice
<sebas5384> i would love to see more of that
<lazyPower> sebas5384: you're already seeing that every day
<sebas5384> ooh right, service oriented arch
<lazyPower> yep
<sebas5384> hehe duhh ¬¬
<lazyPower> I bought into that methodology a long time ago when i started learning the ruby ecosystem. SOA fits really well within those confines. My web apps were speedy and relied heavily on workers and worker farms to do the heavy lifting
<lazyPower> the end user sees a zippy website, while my infrastructure was beaming data bits around to different areas to do different things, such as compute metrics, send emails, etc. None of the heavy lifting was done from the same machine that was presenting the site to the user.
<sebas5384> lazyPower: thats what i want for my projects :)
<sebas5384> lazyPower: congrats! do you have some slides or post talking about that experience?
<lazyPower> MAAS has the same concept using celery and django. Celery is the work horse while django gives you the UX. To the user it doesn't matter what's happening under the hood so long as its fast, and easy to use.
<sebas5384> exactly
<lazyPower> Just the slides I linked you already for my juju talk.  My last job didn't allow me to talk about our tech much. I was under some pretty deep contracts.
<sebas5384> hmmm creepy
<sebas5384> oh! i'm going to lunch over here :)
 * lazyPower shrugs
<lazyPower> marketing
<lazyPower> o/ have a good lunch sebas5384. Catch you 'round
<sebas5384> alwayssss
<sebas5384> hehe
<sebas5384> lazyPower: thanks! later :D
<sebas5384> sshuttle is just making my mac kernel panic hehe
<AskUbuntu> juj status can no lookup nodes by name | http://askubuntu.com/q/471224
<sebas538_> kernel panic again!! wtf! hehe
<AskUbuntu> Charm Creation General question | http://askubuntu.com/q/471276
#juju 2014-05-25
<AskUbuntu> juju bootstrap tries to get juju tools from amazon when in a maas enviroment | http://askubuntu.com/q/471718
<AskUbuntu> maas does not report to juju that the node is ready | http://askubuntu.com/q/471740
<AskUbuntu> juju spends bootstrap-timeout with a final message it cannot find /var/lib/juju/nonce.txt | http://askubuntu.com/q/471990
<AskUbuntu> juju handshake node error..why (juju.state open.g:93 TLS handshake failed: x509:certificate has expired or is not yet valid | http://askubuntu.com/q/472017
<bradm> lazyPower: fwiw the bug I was hitting with the python-django charm was LP#1318036, not sure why you weren't seeing it
<_mup_> Bug #1318036: ansible thinks /etc/ansible/host_vars/localhost format is json <Charm Helpers:Fix Committed> <https://launchpad.net/bugs/1318036>
<bradm> lazyPower: there's even a merge waiting to fix it, https://code.launchpad.net/~patrick-hetu/charms/precise/python-django/charmhelpers/+merge/216351
#juju 2015-05-18
<lazyPower> mbruzek: when you get settled in this morning can I get a review on https://github.com/chuckbutler/flannel-docker-charm/pull/16
<jcastro> rick_h_: hey when you do the new bundle spec
<jcastro> will old bundles stop working?
<rick_h_> jcastro: no
<rick_h_> jcastro: we'll do like we do now and probably go through a transform step
<jcastro> ok
<rick_h_> jcastro: but atm the new work is around just 'synchronized uncomitted changes in the gui' so we'll work with thumper and his planning work on that
<rick_h_> these bundles will be around a long while
<jrwren> is there a way to remove a container - which already had the unit which it was running removed - from a machine, other than force remove the machine running the container?
<lazyPower> jrwren: when you destroy the service, it should clean up the container as well as part of the machine reap process. if that's not the case - i'd file a bug.
<lazyPower> jrwren: oh wait i misread that - this is a problem with precedence, as the machine itself went away and the container is still registered on the environment?
<jrwren> lazyPower: no. I expected exactly what you just said. maybe it is bug.
<lazyPower> wwitzel3: (migrating here as its juju specific) - There are a lot of OpenStack networking vendors coming aboard the Juju OpenStack ecosystem
<lazyPower> see: midonet, plumgrid, etc.
<lazyPower> wwitzel3: it may be worthwhile to link akanda to our ISV Onboarding team to do outreach/contact post ODS
<wwitzel3> lazyPower: yeah, that's a good idea
<thumper> lazyPower, marcoceppi: who knows most about python django charms? looking for celery stuff
<rick_h_> thumper: it does celery ootb?
<thumper> I noticed that django-python relates to rabbitmq-server through amqp
<thumper> rick_h_: but it doesn't set up the celery workers, no?
<thumper> rick_h_: I noticed the celery config
 * marcoceppi shys away quickly
<rick_h_> thumper: oh, I didn't realize django shipped with that
<thumper> but I want the time based checker on master
<rick_h_> thumper: I'd guess that it might write out to settings.py but can't imagine what else it would do
<thumper> rick_h_: it does write out settings
 * rick_h_ pokes at the charm hooks
<lazyPower> thumper: iirc django needs a maintainer :)
<lazyPower> thumper: you seem to be really familiar with the codebase
<thumper> but I was wondering if someone else has already set up the worker bits
 * lazyPower nominates thumper
<thumper> lazyPower: maybe...
<thumper> lazyPower: at least I'd learn how to do it properly
<rick_h_> thumper: might ping the other end of the company as they use django more I think. SSO is a django app if I recall.
<lazyPower> I'm +1 on having a core maintainer owning a single charm
<lazyPower> if nothing else, it gives you cause/effect to dig your hands in deep in the guts of charming
<rick_h_> lazyPower: greedy aren't we? :P
<thumper> lazyPower: perhaps while I'm waiting for lxd to implement the needed functionality, I should look more at django-python
<thumper> lazyPower: would really like virtual env support
<thumper> and python 3
<thumper> and django 1.8
<thumper> I'm using the current one with django 1.7 and it is fine
<rick_h_> thumper: oh interesting it does install the package for you heh
<rick_h_>   pip_install('django-celery')
<thumper> yeah...
<thumper> which I'd have thought would be in the requirements.txt anyway
<thumper> or at least, I was going to do that
<rick_h_> hmm, so yeah it uses django-celery, sets up the config, and the relation bits.
<lazyPower> thumper: we have virtualenv support
<thumper> lazyPower: not in the python-django charm I'm using
<lazyPower> thumper: we're using that in a few charms, where the charm drops in the .venv and does the bits it needs to do
<lazyPower> oh that, yeah man, maintainer needed :)
 * lazyPower nudges thumper closer to agreeing to take it on
<thumper> lazyPower: let me poke around before you throw it at me fully
<lazyPower> too late, already signed you up
<lazyPower> incoming MP marking you as the active maintainer
<lazyPower> #dealwithit
<thumper> lazyPower: what facility does charm helpers give me for writing service scripts?
<lazyPower> thumper: meaning upstart/systemd jobs?
<thumper> like: a celery worker
<lazyPower> thumper: we have jinja and cheetah support....
<thumper> lazyPower: ack
<thumper> lazyPower: so... write a template file, and put it in the right place...
<lazyPower> charmhelpers.contrib.templating iirc
<lazyPower> ye
<thumper> hmm...
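The templating approach lazyPower points at (write a template file, render it into place) can be sketched without charmhelpers at all. The real charm would use charmhelpers' jinja support (`charmhelpers.contrib.templating`); stdlib `string.Template` stands in here so the sketch is self-contained, and the template text and target path are illustrative.

```python
# Rendering a service script (here an upstart job for a celery worker)
# from a template. A hook would write the result to /etc/init/<app>.conf;
# string.Template is a stand-in for the charm's jinja templating.
from string import Template

UPSTART_TEMPLATE = Template("""\
description "celery worker for $app"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec su -s /bin/sh -c 'celery worker -A $app' $user
""")

def render_upstart_job(app: str, user: str) -> str:
    """Fill in the template; the hook writes this out and starts the job."""
    return UPSTART_TEMPLATE.substitute(app=app, user=user)

job = render_upstart_job("mysite", "www-data")
```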
<thumper> lazyPower: what's the status of a stand alone nginx charm?
<thumper> I currently have that smashed into my payload charm
<lazyPower> since python-django is a framework charm, meaning its providing scaffolding, there are some conventions you can follow - like making an implicit templates/contrib/etc. dir in your project that the charm scans and renders.
<lazyPower> thumper: there's one that exists... let me find it. its not ~recommended tho
<lazyPower> it was in progress then other things surfaced
<thumper> lazyPower: it didn't exist july last year when I needed it
<lazyPower> https://bugs.launchpad.net/charms/+bug/1356856
<lazyPower> thumper: ^
<mup> Bug #1356856: New Charm: NGiNX <Juju Charms Collection:Incomplete by marcoceppi> <https://launchpad.net/bugs/1356856>
<marcoceppi> plz don't use that
<thumper> rick_h_: who are the main users of django internally?
<thumper> rick_h_: who are using charms?
<marcoceppi> psh, who isn't using charms!
<thumper> I should poke
<thumper> marcoceppi: well... maas uses django
<thumper> marcoceppi: but I don't think we have a maas charm
<marcoceppi> thumper: we do have a vmaas charm iirc
<thumper> marcoceppi: for local?
 * marcoceppi searches
<marcoceppi> ODS wifi is a bit slow
<marcoceppi> thumper: https://jujucharms.com/q/maas
<lazyPower> https://jujucharms.com/q/vmaas
<lazyPower> ninja'd by marcoceppi
<marcoceppi> lazyPower: and mine includes all maas charms <3  ;)
<lazyPower> LOL - beat by 1 charm that wasn't even maas related
<lazyPower> you win
 * thumper wanders into a meeting
#juju 2015-05-19
<jw4> rick_h_: crazy how hard it is to successfully post something on linkedin :)
<rick_h_> jw4: :)
<nevermam> Hello All.. I am writing a charm for my product and trying to set/unset a config.yaml parameter from within a charm.Can anyone please tell me, how it can be done ??
<nevermam> basically I want to set/unset the option from within a hook
<nevermam> I am using bash shell scripts for writing the charm/hook
<marcoceppi> nevermam: you can't
<marcoceppi> nevermam: you can only get values, you can't set them
<marcoceppi> nevermam: you can get a value with `config-get <key>` inside a hook
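marcoceppi's point in code form: inside a hook, config is read-only via the `config-get` hook tool, and `config-get --format=json` emits the settings as JSON. A hedged sketch follows; the subprocess call only works inside a running hook, so the JSON-parsing helper is the part exercised here, with illustrative sample values.

```python
# Reading charm config inside a hook: values can be fetched, never set.
import json
import subprocess

def parse_config(raw: str) -> dict:
    """Turn `config-get --format=json` output into a dict."""
    return json.loads(raw)

def config_get() -> dict:
    # Only callable from inside a running hook, where the tool exists.
    raw = subprocess.check_output(["config-get", "--format=json"])
    return parse_config(raw.decode())

# What the hook tool's output might look like (illustrative values):
sample = '{"port": 8080, "debug": false}'
cfg = parse_config(sample)
```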
 * Zetas waves to lazyPower
<lazyPower> Mornin Zetas
<Zetas> morning
<ejat> hi .. if i tried to run openstack using juju on aws, how can i link it to landscape ?
<lazyPower> cory_fu: tvansteenburgh: quick question for you re: charmhelpers. I can use the config() object to store machine state as well correct? Eg: i have a slew of incoming relation data I need out of relation context, and i can just stuff those in the config object and have it persist thanks to the data-backend in config now right?
<tvansteenburgh> lazyPower: yes
<lazyPower> ejat: I dont think OpenStack would run properly on AWS - as nova-compute would be running in a hypervisor. hypervisor in a hypervisor...
<lazyPower> ejat: but in the spirit of answering your question - so long as the units you register w/ landscape-client can reach your landscape-server, no crazy config / concerns need apply.
<cory_fu> lazyPower: You should use charmhelpers.core.unitdata.kv().set / get instead
<lazyPower> ok so unitdata is a different config store that wont bleed into config-get versioning?
<lazyPower> > "versioned, transactional operation" -  exactly what i was looking for
<cory_fu> They are different implementations, and my opinion is that we should move entirely to unitdata
 * lazyPower hattips @ cory_fu
<cory_fu> I think the config tracking should be refactored to use unitdata as well
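What `charmhelpers.core.unitdata.kv()` gives lazyPower here is a persistent key/value store for hook state, so relation data survives outside relation context. A rough, self-contained stand-in is below: the real implementation adds hook-scoped transactions and the change history ("versioned, transactional operation"); this sketch only mirrors the basic `set`/`get` API shape, and the key names are illustrative.

```python
# Minimal sqlite-backed key/value store mimicking unitdata's kv() shape.
import json
import sqlite3

class KV:
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, data TEXT)")

    def set(self, key, value):
        # Values are JSON-serialized so structured data round-trips.
        self.conn.execute(
            "INSERT OR REPLACE INTO kv (key, data) VALUES (?, ?)",
            (key, json.dumps(value)))
        self.conn.commit()

    def get(self, key, default=None):
        row = self.conn.execute(
            "SELECT data FROM kv WHERE key=?", (key,)).fetchone()
        return default if row is None else json.loads(row[0])

store = KV()
store.set("relation.private-address", "10.0.0.7")   # stash relation data
addr = store.get("relation.private-address")        # read it back later
```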
<ejat> i managed to up the horizon
<ejat> but connection timed out to login
<ejat> lazyPower: i just want to test the openstack charms
<ejat> testing purposes
<ejat> can anyone help me?
<Zetas> ejat: have you tried using devstack or are you tied to using aws?
<Zetas> devstack works pretty well for simple openstack charm testing
<ejat> [Tue May 19 17:23:08.034535 2015] [:error] [pid 19918:tid 140668694456064] Login failed for user "Admin".
<Zetas> plus unless you make modifications to the charms deploying the openstack bundle to AWS will spin up a dozen fairly expensive users
<Zetas> s/users/servers
<ejat> $ keystone role-list
<ejat> +----------------------------------+----------+
<ejat> |                id                |   name   |
<ejat> +----------------------------------+----------+
<ejat> | 150a57a6033d41978068798b25b99631 |  Admin   |
<ejat> | 26b0725903ea49fbb1530d336a587e86 |  Member  |
<ejat> | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
<ejat> +----------------------------------+----------+
<ejat> tried to update password
<ejat> No user with a name or ID of 'Admin' exists.
<Zetas> hmm, im very new to openstack so im not sure what the issue would be.
<Zetas> you are able to login to the dashboard though right?
<Zetas> set the password with juju set keystone admin-password="mypass" ?
<ejat> i managed to change the password... case sensitivity ... but i still can't login to the horizon dashboard ;(
<ejat> still getting the same error ; [Tue May 19 17:50:54.594643 2015] [:error] [pid 19919:tid 140668694456064] Login failed for user "admin".
<Zetas> ejat: when you login to the juju-gui is the openstack-dashboard charm green and have all green relations?
<Zetas> (you did setup the relations right)
<ejat> yups
<ejat> all green
<ejat> ive check the horizon setting
<ejat> #OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
<ejat> is it correct ? or i need to put the full URL
<Zetas> hmm, thats strange
<ejat> of the keystone
<Zetas> run this command: juju status openstack-dashboard
<Zetas> get the public IP and in your browser go to https://ipaddress/horizon
<Zetas> for me its https://10.0.3.44/horizon
<ejat> http://paste.ubuntu.com/11229418/
<Zetas> yea i just hit your dashboard heh
<Zetas> https://54.238.21.206/horizon
<Zetas> works great
<Zetas> might want to add a constraint to the AWS security group to only allow connections to 443/80 from your home IP
<Zetas> if this is gonna be something you want private
<ejat> this is for testing purpose
<ejat> horizon up
<ejat> but i cant login the admin
<Zetas> what did you enter in the set password line?
<ejat> default .. i've tried to change it a few times
<ejat> including manually change in console just now
<ejat> apache log still shows login failed
<Zetas> hmm, well from your main juju box you should be able to use the command juju set keystone admin-password="mypass" and then login with admin/mypass
<ejat> is it because of the security group ?
<Zetas> If you deployed the bundle to aws without changing anything it should be fine. If you can access the horizon dashboard it wouldn't miraculously stop you from logging in
<Zetas> if there was a security group stopping communication between keystone and the dashboard that might happen but then your relation would be red probably
<Zetas> thats kinda the whole point of the relation thing, if it's green it should be fine. I haven't had your issue before so im not sure how to fix it
<Zetas> also i've not deployed openstack to aws fully
<Zetas> once i saw it spinning up a dozen servers i killed it lol
<Zetas> have you changed any of the default config values?
<ejat> nopes
<Zetas> have you watched the debug log while logging in? It might have some helpful info
<Zetas> i'll brb
<wwitzel3> lazyPower: ping
<lazyPower> wwitzel3: pong
<wwitzel3> lazyPower: got time for a quick hangout?
<lazyPower> wwitzel3: In the middle of something - can you give me some time to wrap up what i'm working on?
<wwitzel3> lazyPower: absolutely
<lazyPower> if its emergent i can bounce in
<lazyPower> ack, thanks - i'll ping when i'm at a stopping point
<Zetas> back
<Zetas> any luck ej
<Zetas> oh he left
<Zetas> hope he figured it out
<bdx> openstack-charmers, charmers: Does there exist anyone who has successfully implemented vlan networking using a juju charm deployment?
<bdx> ??
<bdx> openstack-charmers, charmers: Any chance at all??
<marcoceppi> bdx: you mean like using openvswitch?
<bdx> marcoceppi: totally
<bdx> marcoceppi:
<bdx> marcoceppi: It seems the charms support vlan networking to some degree......I haven't been able to figure it out....
<bdx> I have spent weeks trying to get it working with charm based deployment
<marcoceppi> bdx: the openstack-charmers would be able to answer this the best, but they're all at ODS this week
<marcoceppi> bdx: sorry to hear that! I'd recommend mailing the juju mailing list (juju@lists.ubuntu.com) if you haven't already about what you've tried and the issue you're seeing
<bdx> marcoceppi: I feel like the charms support this use case, but it is treated like a corner case in the sense that there seems to be little to no support for vlan networking
<bdx> marcoceppi: thanks man. will do
<marcoceppi> bdx: we are beefing up support for that, that much I know, but if there are any special measures the openstack-charmers can help. As I mentioned earlier they're all here in Vancouver at ODS so the email is the best way to start that conversation
<bdx> marcoceppi: Thanks, I'll get started on an email to your guys list concerning this......what im getting at though is that vlan networking shouldn't be considered a special measure right?
<marcoceppi> bdx: no, and it won't in the future, I think it's just an issue with getting the charms updated which I'm sure we'll be doing over the next few months
<bdx> marcoceppi: I see. Thanks for the insight on this
<bdx> marcoceppi: should vlan functionality be removed from the charms seeing as they don't support it.....this is very misleading and can be very frustrating from an opperators point of view.
<marcoceppi> bdx: well vlan does work from what I recall
<bdx> *sp: operator
<bdx> marcoceppi: Yea....
<bdx> lol
<bdx> marcoceppi: vlans work when I handroll a stack
<bdx> marcoceppi: I have performed introspection and post deploy configuration on a juju deployed stack and am just hitting my head on how this is expected to work using juju.....some documentation on networking configs using juju would be a huge plus++ that's all
<bdx> marcoceppi: thanks for your insight none the less
<marcoceppi> bdx: I get your frustration, sorry it's not quite there yet. I'll make sure someone from the OpenStack Charmers team follows up with your email soon!
<bdx> marcoceppi: Thanks again
<thumper> lazyPower, marcoceppi: are either of you willing to help me on my Friday afternoon to look at improving the python-django charm?
#juju 2015-05-20
<natefinch> rick_h_: I don't suppose you're actually online?
<schkovich> hi guys! where could i find more information on state-port and syslog-port? im setting firewall on juju state-server. a bit more understanding on services exposed on those two ports would be helpful.
<stub> Design question. A client charm is related to the PostgreSQL charm, and needs to use a PostgreSQL extension. eg. it needs to tell the unit to install a package from a PPA.
<stub> Should the client charm be allowed to instruct the PostgreSQL charm to add the PPA and install the package? Or do I need to also have a subordinate charm that does the package installation?
<stub> subordinate seems messy (although trivial, as it would be a 3 line install hook and a pile of boilerplate to keep charm-proof happy)
<stub> But allowing a client service to instruct the db to install arbitrary packages from arbitrary locations? That seems a security problem.
<stub> Well, the third option is to have the operator specify the ppa and package to install in the PostgreSQL service's config. I'm guessing that is the best option.
<stub> (and I can now block the client until the operator has actually done it)
<lazyPower> cory_fu: ping
<cory_fu> Hey
<lazyPower> greetings, can you join me in #juju-gui for a minute? Frankban is asking about some specifics w/ SF that i'm not familiar with
<marcoceppi> lazyPower: tell him to join us over here ;)
<schkovich> is there a way to set up which interface on machine n `juju ssh n` will connect to, eg configure an ssh proxy on the state server?
<lamont> I have a subordinate-charm question, which I suppose I could just code up and see, but it's not clear from the docs..
<lamont> if I have subordinate charms A and B, and A needs some info from B, is there a relation there?
<marcoceppi> lamont: both are subordinates?
<marcoceppi> presumably attached to the same primary?
<lazyPower> lamont: you can write a relationship that will allow them to exchange data, but you cannot deploy a subordinate to a subordinate.
<lamont> yes, same primary, both subord
<lamont> lazyPower: so I need to create a relationship for them to share info, just like they're real charms and everything. (DOH)
<lamont> that would go a long way to explaining WHY THE CHARM ISN'T DOING WHAT THE AUTHOR THOUGHT
<lazyPower> lamont: correct - adding a relationship/interface to do that data-exchange should be the same process as any other charm :)
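The exchange lazyPower describes happens through the `relation-set` and `relation-get` hook tools: one side publishes settings from its relation hook, the other reads them. A hedged sketch follows; the hook tools only exist inside a running hook, so the argument-building helper carries the testable logic, and the setting names are illustrative.

```python
# One subordinate publishes data on the relation; the other reads it.
import subprocess

def relation_set_args(**settings) -> list:
    """Build the relation-set command line for a dict of settings."""
    return ["relation-set"] + [f"{k}={v}" for k, v in sorted(settings.items())]

def publish(**settings):
    # Run from e.g. the relation-joined hook on the publishing side.
    subprocess.check_call(relation_set_args(**settings))

def read(key: str) -> str:
    # Run from relation-changed on the consuming side.
    return subprocess.check_output(["relation-get", key]).decode().strip()

args = relation_set_args(nagios_hostname="web-0", port=5666)
```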
<lazyPower> lamont: can i ask what you're trying to do so i have more context?
<lamont> hp-health thinks it's getting data from nrpe-external-master
<lamont> and it's all lies
<lamont> (it shows up as blank hostnames in the nagios files that are exported)
<lazyPower> thats fun
<lamont> I can see we're going to need to work on your understanding of the definition of "fun" :p
<lazyPower> yeah if you need to proxy information through - i ran into this w/ the docker/flannel-docker stuff
<lazyPower> i'm using flannel-docker to proxy info directly to docker (docker manages the docker config - no exceptions) - so the subordinate is acting as an information proxy and a payload delivery mechanism
<lazyPower> the mechanics are a bit alien at first glance
<lazyPower> it sounds like you have some of the same requirements for behavior in this particular setup
<lamont> hrm... and which is the primary charm there?
<lamont> properly, the main charm could proxy the nagios hostname from n-e-m and just populate it into all the other subordinate relations
<lamont> or n-e-m could write it to a file :D
<lamont> even though that feels wrong
<lamont> n-e-m needs a good rewrite from scratch
<lazyPower> lamont: thats highly dependent on context, and which is the manager of the concerns
<lazyPower> http://blog.dasroot.net/2015-charm-encapsulation.html
 * lamont will do some reading
<lamont> and, of course, lp:charms/trusty/nrpe is the perfect place to begin this quest
<tvansteenburgh> jw4: you around?
<jw4> tvansteenburgh: yes, but OTP for a bit
<jw4> tvansteenburgh: feel free to ask and I'll try to multitask :)
<tvansteenburgh> jw4: afaict, the Action.ListAll api does not return info about actions run on subordinates
<tvansteenburgh> jw4: wanted to check with you before filing a bug
<tvansteenburgh> jw4: subordinate actions also don't seem to send any updates to the AllWatcher
<jw4> tvansteenburgh: hmm - I don't know of a specific concept of subordinate actions, but I assume you mean actions on subordinate units?
<tvansteenburgh> jw4: yes
<rick_h_> natefinch: heads up I'm around now if you still need anything
<jw4> tvansteenburgh: actions and the actions watcher don't know about subordinates, so if you query actions on a unit it doesn't know to look for subordinates
<jw4> tvansteenburgh: seems like a viable candidate for a bug report
<natefinch> rick_h_: was wondering why there's no link to the juju blog on the top of jujucharms.com
<natefinch> rick_h_: most of core didn't even know there was a blog
<tvansteenburgh> jw4: thanks for confirmation, will file bug
<rick_h_> natefinch: because the blog isn't updated/big enough to warrant a link at the top :) and the homepage/community shows recent posts
<jw4> tvansteenburgh: cool
<rick_h_> "Latest from the blog"
<natefinch> rick_h_: fair enough. It looks like it's all external posts... do we have the ability to have our own posts hosted on jujucharms.com?
<rick_h_> natefinch: so this is from the aggregator with things tagged juju I thought? /me dbl check. https://jujucharms.com/community/blog
<rick_h_> natefinch: you'd have to check with the web team where/how the aggregation is pulled in.
<natefinch> rick_h_: ok, thanks
<natefinch> gah, I hate exceptions
<lazyPower> wwitzel3: hey have you ever seen this happen before?
<lazyPower> http://i.imgur.com/7CGtl8k.png
<lazyPower> wwitzel3: it appears I managed to trap 2 executing hooks at the same time
<bac> tvansteenburgh: ping
<tvansteenburgh> bac: pong
<lamont> what does relation-ids want for an argument?  interface name?
<lamont> yeah.  relation-name
 * lamont goes back to headscratching
<tvansteenburgh> jw4: fyi https://bugs.launchpad.net/juju-core/+bug/1457205
<mup> Bug #1457205: Subordinate charm Action data not reported by API <juju-core:New> <https://launchpad.net/bugs/1457205>
<jw4> thanks tvansteenburgh
#juju 2015-05-21
<schkovich> i am working with manual provider. each machine has 3 interfaces, public, multi-tenant service and private one. is there a way to configure juju to communicate on the private interface?
<schkovich> if i edit agent.conf and set apiaddresses to private one or ad private one to array am i at risk that setting might be overwritten?
<schkovich> is there a preferable way to denote the correct apiaddress? juju api-info gives me a list of all state server IP addresses
<schkovich> i see, apiaddresses will be overwritten therefore state-server is setting it
<schkovich> not having an option to set preferred network makes manual environment truly manual
<schkovich> based on non-transparent rules juju is picking multi-tenant network to communicate on
<schkovich> which in turn makes setting firewall rules hard
<schkovich> instead of selecting private network and just allowing connections from all machines in subnet
<schkovich> firewall rules must be updated whenever a new machine is added
<schkovich> anyone on manual provider and selecting private network to use? state-servers, logging, storage...
<schkovich> there is a similar bug report https://bugs.launchpad.net/juju-core/+bug/1303204
<mup> Bug #1303204: manual provider allow specifying private address for connecting back to state-server <improvement> <manual-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1303204>
<schkovich> juju does use local-cloud but in my case i have two interfaces: one on rackspace multi-tenant service network which juju  would use and another one on the private network which i would like juju to use :)
<schkovich> there is also a similar question on askubuntu having 6 votes http://askubuntu.com/questions/611564/how-does-juju-get-the-private-address-of-a-node
<schkovich> i guess that in all cases logic implemented in juju/network/address.go will be used e.g. the first ip matching condition will be used :(
<Johncr1> Hello Folks, I am trying to deploy a service in an lxc-container but the container is stuck in pending state?
<Johncr1> I have stopped ufw already
<Johncr1> containers:
<Johncr1>       2/lxc/0:
<Johncr1>         agent-state: pending
<Web> Big thanks to Juju team for all your work.  Without you my graduate project wouldn't have been such a success!
<jcastro> Web: oh really?
<jcastro> that'd be an awesome post to put on the list, sounds awesome
<Web> jcastro: Thank you directly.  You've been a huge help.  I will post a detailed development blog article soon on mastersproject.info but the project was a questionnaire service with developer access http://themindspot.com
<jcastro> cool, let me know when it's up, we should be able to syndicate posts on juju.u.c soon
<Web> Cool. I will do.  Focusing on getting a job this week now that I have time but get to it soon.
<Web> PLus want to make sure that final grade is an A+++++++++++++.
<Web> first
<jcastro> heh, awesome
<schkovich> can someone explain me why in one case juju is deciding to connect to the state server on the public ip and in other to the private ip
<schkovich> machine agent.conf
<schkovich> apiaddresses:
<schkovich> - 10.181.139.18:17070
<schkovich> unit on the machine above agent.conf
<schkovich> apiaddresses:
<schkovich> - 162.13.183.96:17070
<schkovich> it does not make sense :(
<jcastro> I need a ~charmer to action on this if they have a minute: https://bugs.launchpad.net/charms/+bug/1457263
<mup> Bug #1457263: Remove galera charm from personal namespace <Juju Charms Collection:Confirmed> <https://launchpad.net/bugs/1457263>
<cjwatson> Hi, is there any way I can get juju's private mongod to not sit there spinning and calling select 100 times a second?  It's quite literally causing me to have to open a window to dissipate laptop heat any time I'm using juju
<jcastro> cjwatson: you probably want the guys in #juju-dev for that one
<cjwatson> Mkay
<lazyPower> jcastro: as the LP repository has been removed, the only leftover from that would actually come from UIEngineering. rick_h_ ^
<jcastro> yeah I wasn't sure if we still need to file an RT ticket?
<lazyPower> i dont think so
<lazyPower> i'm pretty sure that with the new stuff they can do it in-house.
<jcastro> yeah, rick_h_ I'd like to catch up with you today anyway if you have like a 10 min window for G+
<rick_h_> jcastro: sure thing
<rick_h_> jcastro: sooner is better for the next two hours I've got a window
<rick_h_> lazyPower: rgr, will look. not had a chance yet today
<jog> mgz, sinzui, do you know where I can find information on how to request a windows instance with juju?
<sinzui> jog: the series eg local:win2012/dummy-source
<rick_h_> jcastro: lazyPower replied to the bug
<rick_h_> lazyPower: glad you had the bug there because I thought it was the ~codership one.
<sinzui> jog: if you are using add-machine, you can use maas tags. I think like this juju add-machine --constraint tags=win2012
<jog> sinzui, ok
<lazyPower> rick_h_: happy to help :)
<jcastro> rick_h_: how about now?
<rick_h_> jcastro: sure
<thumper> lazyPower: you around?
<thumper> marcoceppi: you are at ODS?
<tvansteenburgh> thumper: marcoceppi flying home atm
<thumper> tvansteenburgh: ta
<lazyPower> thumper: i'm here
<mattrae> is it possible with juju gui and the local provider, to choose which machines i deploy a bundle to? when i click 'deploy this bundle' it asks if i'm sure, but doesn't give me an opportunity to choose which machines i want to use
<lazyPower> mattrae: that's an incoming feature in a future juju gui release.
<lazyPower> mattrae: aiui its been prototyped, and is in the alpha testing phase
<lazyPower> hatch: may have more information about this for you
<mattrae> lazyPower: ahh thanks :)
<hatch> lazyPower: it's actually beta, soon to be RC'd :D
<thumper> lazyPower: oh hai, I'm back here now
<thumper> lazyPower: do you have 10-15 minutes for a hangout?
<lazyPower> thumper: sure
<thumper> lazyPower: https://plus.google.com/hangouts/_/canonical.com/charming?authuser=0
<miken> Is there a way now to be able to upgrade a charm and set a (new) config option in one step?
<lazyPower> miken: afaik - no. you upgrade-charm, then juju set.
<lazyPower> juju upgrade-charm service && juju set service foo=bar - would be a hacky way to 'do it in one step' - but you'll still get the hooks executing as such. so 2x runs of config-changed in that sequence.
<miken> k, thanks. Yeah, I'd ideally like to have something set for the install hook, which I assume that won't.
#juju 2015-05-22
<redelmann> hi!, still remains this bug http://askubuntu.com/questions/602527/provide-data-doesnt-send-data-when-required-keys-not-satisfied-juju-charm-usin in services framework
<redelmann> ??
<mattyw> evilnickveitch, ping?
<evilnickveitch> mattyw, pong
<evilnickveitch> what's up?
<mattyw> evilnickveitch, still thinking about the documentation thing.... It occurred to me that unit numbering is going to go through a slight change in juju 1.25
<mattyw> evilnickveitch, and I wonder where I should put the docs
<evilnickveitch> mattyw, oh goodie
<mattyw> evilnickveitch, basically unit numbering will be like machine numbering - it will always go up, and be unique. so if you have mongo/0 and mongo/1 then you destroy a unit and add a new one instead of still having mongo/0 and mongo/1 you'll have mongo/0 and mongo/2
<evilnickveitch> mattyw - hmmm, okay. That doesn't sound too complex
<evilnickveitch> however, the best thing to do is to create a new page
<evilnickveitch> and put all the stuff in that
<evilnickveitch> PR it to the docs
<mattyw> evilnickveitch, will do
<evilnickveitch> and then it will be in the dev docs and we can work out which bits need to go where when it lands
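The 1.25 numbering change mattyw describes can be sketched as a monotonic allocator: unit numbers become like machine numbers, drawn from a counter that only ever goes up, so a destroyed unit's number is never reused. The class below is purely illustrative, not juju's implementation.

```python
# Unit numbers always go up and stay unique, like machine numbers.
from itertools import count

class Service:
    def __init__(self, name):
        self.name = name
        self._next = count()        # monotonic: never rewinds
        self.units = set()

    def add_unit(self):
        unit = f"{self.name}/{next(self._next)}"
        self.units.add(unit)
        return unit

    def destroy_unit(self, unit):
        self.units.discard(unit)    # the number is retired with the unit

mongo = Service("mongo")
mongo.add_unit()                    # mongo/0
u1 = mongo.add_unit()               # mongo/1
mongo.destroy_unit(u1)
u2 = mongo.add_unit()               # mongo/2, not mongo/1 again
```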
<schkovich> when filing a bug report at launchpad i got: "sstmp" does not exist in Juju Charms Collection; there is no doubt that the charm exists https://code.launchpad.net/charms/trusty/+source/sstmp
<schkovich> what shall i do? play dumb and say that i don't know the package name?
<evilnickveitch> mattyw - I forwarded you a mail which may help
<schkovich> uf
<schkovich> Sorry, something just went wrong in Launchpad. We've recorded what happened, and we'll fix it as soon as possible. Apologies for the inconvenience. (Error ID: OOPS-6b0d7c3b791acfb2e06289ac42f97c68)
<mattyw> evilnickveitch, much obliged
<schkovich> and to be even more convenient ssmtp charm does not exist in github mirror :)
<mattyw> schkovich, I'd probably just report a bug here https://bugs.launchpad.net/charms
<schkovich> mattyw: i already tried that
<schkovich> mattyw: if i answer ssmtp to "In what package did you find this bug?" i am getting an error that such a package does not exist; if i opt to play dumb and say that i don't know i am getting Error ID: OOPS-6b0d7c3b791acfb2e06289ac42f97c68
<mattyw> schkovich, sounds like you're doing everything right. can you pastebin the details you want to enter and I'll see if I can get it in. Otherwise I'll ping a charmer when some are around
<schkovich> mattyw: thanks! http://pastebin.com/5XmrqX7N
<mattyw> schkovich, https://bugs.launchpad.net/charms/+bug/1457860
<mup> Bug #1457860: sstmp charm: Configuration cannot be changed/updated neither on installing nor later on using GUI/CLI. <Juju Charms Collection:New> <https://launchpad.net/bugs/1457860>
<mattyw> schkovich, if you can post a comment on the bug just so we can remember it's you we need to talk to :)
<schkovich> mattyw: done. thanks. coffee break. :)
<lufix> I've just tried the demo web UI of Juju, but I can't seem to find Actions in the UI (for example, mysql backup). Are they exposed in the UI?
<turicasto> hi guys
<turicasto> I'm not able to deploy local bundle on juju... I got this error: "No charm metadata @ %s', 'trusty/e-commerce/metadata.yaml". can someone help me?
<turicasto> obviously the metadata is in the e-commerce directory!
<schkovich> turicasto: most likely either you don't have metadata.yaml, it is misconfigured or you submitted the wrong path to the local charmstore
<schkovich> turicasto: command should look like juju deploy --repository=opt local:trusty/puppet --to 4 where opt is the root of the local charmstore
<turicasto> schkovich, for the bundle too?
<schkovich> turicasto: sorry, i don't know, i guess that the command should be similar
<turicasto> schkovich, ok thank you!
<schkovich> turicasto: don't mention it. i just didn't evolve to using bundles yet ;)
<tvansteenburgh> turicasto: pastebin the cmd and output you're seeing
<mattyw> schkovich, make sure you're subscribed to that bug so you get updates about it
<schkovich> mattyw: i already did it :)
<mattyw> schkovich, awesome, thanks
<schkovich> mattyw: thank you Matthew. :) i truly don't get why i was not able to file the bug report. never happened to me in almost 10 years of being on launchpad :(
<mattyw> schkovich, np - it's happened to me once or twice before
<lazyPower> skay: did you notice the python-django charm is geting some additional love? :)
<lazyPower> skay: https://code.launchpad.net/~thumper/charms/trusty/python-django/support-1.7/+merge/259889
<skay> lazyPower: I did not! I'm subscribed to the repo but my emails got lost in all my other launchpad emails
<lazyPower> i know those feels skay
<skay> lazyPower: I am happy!
<lazyPower> Thumper and I had a quick hangout lastnight over the future of python-django
<lazyPower> it looks like he's going to start paving a path forward for co-maintainership
<skay> cool
<skay> hopefully I will have time freed up so I can start making merge requests with some suggestions
<turicasto> guys juju-quickstart is the only way to deploy bundle?
<evilnickveitch> turicasto - not the only way, you can use the juju-gui
<evilnickveitch> or juju deployer
<turicasto> evilnickveitch, there is a way to specify the local repository of my bundle?
<evilnickveitch> if you are using the gui, you can just drag and drop it
<turicasto> evilnickveitch, via command line?
<evilnickveitch> turicasto, if you don't want to use the gui OR quickstart then you will need to use juju-deployer
<evilnickveitch> and yes, you can specify a local bundle
<turicasto> evilnickveitch, how?
<evilnickveitch> http://pythonhosted.org/juju-deployer/config.html
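For the record, a local bundle for juju-deployer (per the config docs linked above) looks roughly like this — the deployment name, charm name, and option keys are illustrative guesses, so check the linked docs before relying on them:

```yaml
# bundles.yaml -- hypothetical sketch of a deployer config for a local charm
e-commerce-stack:
  series: trusty
  services:
    e-commerce:
      charm: local:trusty/e-commerce   # resolved against JUJU_REPOSITORY
      num_units: 1
  relations: []
```

Then, assuming the deployer CLI shape is unchanged, something like `JUJU_REPOSITORY=/opt juju-deployer -c bundles.yaml e-commerce-stack` would deploy it.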
<evilnickveitch> ^ docs for deployer - i can't say if they are up to date, we are phasing out the use of deployer
<turicasto> evilnickveitch, because I have problem with the deploy of my bundle,
<evilnickveitch> 'juju help deployer' would probably give better usage info
<turicasto> evilnickveitch, from gui i got this error:  "error occurred while deploying the bundle: ('No charm metadata @ %s', 'trusty/e-commerce/metadata.yaml')"
<evilnickveitch> turicasto, is this a charm/bundle you have written yourself
<evilnickveitch> ?
<turicasto> evilnickveitch, yep
<evilnickveitch> do you know if the *charm* itself works?
<turicasto> yeah it works
<evilnickveitch> how did you make the bundle?
<mbruzek> Hey lazyPower http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/67/console
<mbruzek> This flannel-docker test passed on LXC?
<lazyPower> wat
<lazyPower> welp, nothing to do here
<mbruzek> I don't know what is going on
 * lazyPower whistles and wanders off
<nodtkn> howdy... I am running into issues bootstrapping vivid nodes via maas and manual using juju 1.23.2-trusty-amd64.  Maas without juju is able to install nodes that I can log in to, but after bootstrapping the same node with juju I cannot ping the instance.
<nodtkn> If I deploy the node via maas, set up a manual environment specifying the node, and then attempt to bootstrap the node it appears to work, but running juju status on the environment hangs
<lazyPower> nodtkn: juju status is reaching out to the unit on port 17070 - a couple things may have happened that would cause status to hang... 1 being that port is blocked, or the jujud service came online and tanked shortly thereafter
<nodtkn> Is anyone having good experiences with a stable release of juju and vivid on a node.
<lazyPower> nodtkn: admittedly we typically stick with LTS releases on juju - but i know there is some work in the beta repository wrt vivid and the new systemd init components.
<lazyPower> i'm not positive those made it into 1.23.3
<lazyPower> the fact its having an issue standing up w/ the 'maas provider' on juju leads me to believe thats not the case.
<lazyPower> natefinch: i know you had run into some differences w/ juju and vivid - can you confirm that the 1.24 branch has the fixes in place w/ the systemd init stuff?
<mgz> nodtkn: if you can get into one of the borked nodes, look at /var/log/cloud-init-output.log and /var/log/juju/*
<mgz> it may be an init system issue that you need a newer juju for the fix
<nodtkn> lazyPower: The last time I attempted that it did not end well... Do you have a link for how to do this?
<lazyPower> nodtkn: how to upgrade juju to the currently released -devel branch?
<mgz> nodtkn: for instance bug 1452511
<mup> Bug #1452511: jujud does not restart after upgrade-juju on systemd hosts <regression> <systemd> <vivid> <juju-core:Fix Released by ericsnowcurrently> <juju-core 1.23:Fix Released by ericsnowcurrently> <juju-core 1.24:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1452511>
<mgz> and various other systemd related issues
<nodtkn> lazyPower: How to set a password via preseed or how to connect to the node that failed to bootstrap.
<nodtkn> mgz: thanks
<lazyPower> ah im not that fluent with the specifics of the preseed process, i can however point you to #maas and ping for assistance in doing so
<lazyPower> let me see if anyone from the maas team is around atm to help troubleshoot nodtkn
<lazyPower> nodtkn: as soon as i get a response i'll ping again - i have an open request w/ the maas team for help.
<nodtkn> lazyPower:  thanks.  I have node that I first deployed via maas, then attempted to bootstrap directly.  The cloud-init.log is not very interesting
<lazyPower> nodtkn: i'm fairly certain the issues you're running into are due to running on stable, there's a ton of fixes inc. w/ the 1.24 release and beyond that are specific to vivid/systemd bugs
<nodtkn> lazyPower: ok... I could have sworn I got further about a month ago on older cloud init images.  I must not have actually been running on vivid.
<lazyPower> really sorry you ran into this snag, but the mailing list has been fairly active the last month with discussion on this very subject
<blake_r> lazyPower: I am here
<lazyPower> nodtkn: can you relay to blake_r the situation when attempting to connect to the maas unit post juju bootstrap? he can lend a hand in getting you into the unit so we can collect the logs
<nodtkn> blake: thanks
<nodtkn> blake: If I deploy the node via maas I can ping and ssh into the node.  If I release and bootstrap the node via juju I can not ping the node or ssh into it.  What can I do via preseeds or modifying the cloud image to set the root password so that I can log into the node and look at the juju logs?
<lazyPower> blake_r: ^
<nodtkn> blake_r: sorry, I referred to you as blake - please read my comments above
<blake_r> sorry
<blake_r> taking a look now
<blake_r> nodtkn: hm that is weird, so deploying with maas works fine
<blake_r> nodtkn: can you view the console of the machine? does cloudinit show an error when running userdata
<blake_r> nodtkn: because the only difference is the userdata from MAAS point of view
<blake_r> nodtkn: if you want to set the root passwd you can do that with some preseed modification
<nodtkn> blake_r: it hangs during the sending of the ssh keys.  If I hit enter the console continues to scroll to a login prompt. So hangs is not the best description... lingers?
<nodtkn> blake_r: run_parts has claimed to finish restarting the juju bridge interface, and the sending of ssh keys has started.  I hit enter a few times and it will show the login prompt
<blake_r> nodtkn: okay so the only real way to get in is using password as it might be a bridge issue
<blake_r> nodtkn: give me a moment and I will try to come up with a preseed modification to do what you want
<nodtkn> blake_r: thank you
<hatch> In the new bundle format are all container technologies treated with the same syntax? i.e. lxc:new kvm:new
<nodtkn> blake_r: ping
<thumper> lazyPower: ping
<thumper> hmm later for lazyPower than I thought
#juju 2015-05-23
<jw4> anastasiamac: 2 AM on a Sunday Morning... bored much?
<anastasiamac> jw4: well isn't sunday for u too? :D
<jw4> haha
<jw4> no saturday morning for me
<jw4> 9 AM
<anastasiamac> :D
<anastasiamac> yes. m going...
<jw4> :)
<anastasiamac> i wish it was 9am on Saturday for me too though :D
<jw4> anastasiamac: lol.  Time waits for no-one... unless you hop on a plane across the date line
#juju 2016-05-23
<blahdeblah> Hi. Anyone know of a good example of combining multiple charms.reactive states? It's driving me a bit mental at the moment.
<blahdeblah> I've got these reactive handlers: https://pastebin.canonical.com/156991/ both executing in the same hook invocation. o.O
<blahdeblah> even if I remove the @when_all part on the 2nd method, it still executes
<blahdeblah> Or even a way to debug what reactive's doing when I call main, so I can work it out?
<rick_h_> blahdeblah: sorry, the reactive experts are cory_fu and marcoceppi and bcsaller who aren't around atm.
<rick_h_> blahdeblah: the big data charms are the latest biggest reactive things to try to peek at, those and the docker/swarm observable bundles
<rick_h_> as far as ones to look at
<rick_h_> blahdeblah: if those don't help I'd shoot off a layers email to the list as there's a growing community of folks that know this stuff pretty well there
<rick_h_> and I can make sure to poke you a reply from the experts
<blahdeblah> rick_h_: Yeah - wasn't expecting many of those guys to be around. :-)
<rick_h_> blahdeblah: heh, just covering my "hmm, sorry I don't know" :P
<blahdeblah> Is there a separate list for charms, or do you mean just the main juju users list?
<rick_h_> blahdeblah: the main juju@ one
<blahdeblah> rick_h_: Thanks - much appreciated; I'll check out the charms you mentioned.
<blahdeblah> (and do a bit more debugging to see if I can work out what's going on)
<veebers> Is the juju user features complete or WIP? The web docs mention the command 'user add' which doesn't exist, the help for the command 'add-user' mentions the command 'switch-user' which also doesn't exist. Anyone have pointers?
<rick_h_> veebers: it's in 2.0
<veebers> rick_h_: I'm running 2.0 from source
<rick_h_> veebers: you can add-user, send a register command to another user, etc
<rick_h_> veebers: I've run through it a couple of times
<rick_h_> user add I think was the pre 2.0 one
<rick_h_> veebers: and I think switch user was replaced with logout/login (to do a switch)
<veebers> rick_h_: ah aye, sorry wasn't very clear,  I can add user and reg. But no such command 'switch-user'
<veebers> ah ok
<veebers> the help strings for the command seem out of date then
<rick_h_> veebers: right, I think in testing that we decided to replace that with just logout/in. Docs still need updating there.
<veebers> (in multiple places).
<rick_h_> veebers: rgr
<veebers> rick_h_: Should I file a bug somewhere?
<rick_h_> veebers: sure, https://bugs.launchpad.net/juju-core
<veebers> rick_h_: sweet, will file that now
<rick_h_> veebers: <3
<rick_h_> and thanks for trying it out
<veebers> rick_h_: heh, I'm writing tests as I'm on the jujuqa team ;-)
<rick_h_> veebers: still ty
<veebers> nw
<rick_h_> veebers: and welcome, we've not had a chance to meet up yet
<veebers> rick_h_: Cheers, no not yet. Looking forward to it
<veebers> rick_h_: rats, how do I log back into the system's 'admin' user? (i.e. the user I used by default when I wasn't messing with user stuff)
<lazyPower> blahdeblah if you're still around i can lend a hand
<blahdeblah> \o lazyPower - I sure am
<blahdeblah> I was just about to have lunch, but now that you're here... ;-)
<lazyPower> o/ - so, i see some new decorators here that i was unaware of
<lazyPower> @when_all and @when_not_all -- those are wholly new to me. @when_all seems like a direct clone of @when - as its going to fail if any of the conditions are not true
<lazyPower> and @when_any  is a if any of these are true, do your thangggg
<blahdeblah> I've been going by the doc at https://pythonhosted.org/charms.reactive/charms.reactive.decorators.html
<blahdeblah> that is pretty much my understanding of it
<blahdeblah> when is apparently just an alias to when_all
<blahdeblah> ditto for when_not -> when_none
<lazyPower> yep
<lazyPower> looks to be the case. #TIL
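The aliasing being described can be pinned down with a tiny pure-Python sketch — this is an illustration of the documented decorator semantics (when/when_all run only if all states are set, when_any if at least one is, when_not/when_none only if none are), not the charms.reactive implementation:

```python
# Illustration of the charms.reactive decorator truth tables, NOT the
# library itself. "active" is a made-up set of currently-set states.
active = {"config.set.user", "db.connected"}

def when_all(*states):
    """@when / @when_all: handler runs only if ALL listed states are set."""
    return all(s in active for s in states)

def when_any(*states):
    """@when_any: handler runs if ANY listed state is set."""
    return any(s in active for s in states)

def when_none(*states):
    """@when_not / @when_none: handler runs only if NO listed state is set."""
    return not any(s in active for s in states)

assert when_all("config.set.user", "db.connected")      # both set -> runs
assert not when_all("config.set.user", "db.available")  # one missing -> skipped
assert when_any("db.available", "db.connected")         # one set -> runs
assert when_none("db.available", "cluster.joined")      # neither set -> runs
```

If a handler fires when its @when_all condition looks unmet (as in the pastebin above), the suspect is the state machinery setting the states, not the decorator logic itself.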
<blahdeblah> So I think there's actually a bug in whatever provides config.set.VAR
<lazyPower> Thats possible
<lazyPower> I know we've had some issues with the @when_file_changed decorator in the past
<blahdeblah> Because if I dump the corresponding hookenv config variables in the 2nd handler (which shouldn't execute), they are definitely set.
<veebers> rick_h_: ah I think I got it, I just had to change to a controller that 'I owned' i.e. not one created with a different juju user
<veebers> ok I take it back, I'm not 100% sure how that works. I was able to switch controllers, which automatically logged me in as admin? At any rate, when I tried to log in as the new user again it gave me a message about setting a passwd
<lazyPower> blahdeblah - i know that i've used the config.changed state bits which are provided by layer-basic... its entirely possible there's a bug lingering around in some of that
<blahdeblah> I might have a bit of a dig into layer-basic and see what I can find.
<lazyPower> https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/basic.py#L126
<lazyPower> that should help make it easy, there's the init / changed methods handling those states
<blahdeblah> Looks pretty simple, TBH
<lazyPower> blahdeblah - if your layer is public i'd be happy to build it and give it a go at debugging the behavior
<blahdeblah> lazyPower: It will be public once I make it do something. :-)
<blahdeblah> But maybe I'll just have no shame and put it out there and ping you with the location
<blahdeblah> I'm pretty sure my peer relation hook could use some work as well - it seems to execute 20-30 times before stabilising.
<lazyPower> blahdeblah - i had that issue before i started gating on the data being available. This is why in the ~containers charms you'll see a lot of use of .connected and .available states (connected is as it sounds, available only gets set when we have all the actionable data)  - without seeing the code, i'm assuming that is the root cause.
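A hand-rolled sketch of that connected/available convention (plain Python, not the charms.reactive API; the relation name and required fields here are invented for illustration):

```python
# Sketch of the ".connected" vs ".available" state convention described
# above: connected as soon as the relation joins, available only once every
# field the charm actually needs is present. Field names are made up.
REQUIRED_FIELDS = ("host", "port", "password")

def states(relation_data):
    """Return the states a handler could gate on for this relation."""
    s = set()
    if relation_data is not None:
        s.add("db.connected")                 # remote unit has joined
        if all(relation_data.get(f) for f in REQUIRED_FIELDS):
            s.add("db.available")             # all actionable data present
    return s

assert states(None) == set()
assert states({"host": "10.0.0.2"}) == {"db.connected"}
assert states({"host": "10.0.0.2", "port": "5432", "password": "s3cret"}) \
    == {"db.connected", "db.available"}
```

Handlers that gate on `db.available` rather than `db.connected` stop re-firing on every partial data exchange, which is the repeated-execution symptom described above.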
<lazyPower> but i dont want to hold you up. I'll be around ~ another hour and a half if you want to run and grab lunch. I'll be around to riff when you get back
 * lazyPower is catching up on Sunday Night TV before the spoiler tweets land
<blahdeblah> lazyPower: thanks - will let you know how I go
<blahdeblah> lazyPower: So here's what I did after your last comment: https://pastebin.canonical.com/156993/ ; then I changed the other methods to gate on dynect.config-available, and it all worked as expected.
<blahdeblah> Personally, I think that's still a bug in config.set.* handling, since those states (AFAICT) are supposed to work for exactly this purpose.  But at least I've unblocked that path.
<ejat> rick_h_: juju mlist or askubuntu ?
<veebers> Surely if I add a user and grant them 'read' they should be able to execute 'status' on the controller? (i.e. they can deploy a charm, I want to be able to monitor its status)
<ejat> rick_h_: bugs no 1584582
<ejat> https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1584582
<mup> Bug #1584582: ERROR juju.worker.uniter.operation runhook.go:107 hook "config-changed" failed (dns resolver) <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1584582>
<jamespage> morning all
<jamespage> happy monday :-)
<Rajith> Hi, while using the juju set command I'm getting an "unknown option" error
<Rajith> this is happening while deploying service
<jamespage> gnuoy, https://review.openstack.org/#/c/319779/
<jamespage> can you take a peek? working through Juju 2.0 related bugs this morning - bdx ^^ that should fixup your dvr issue - charm store accessible version in the bug report
<jamespage> gnuoy, two more for you:
<jamespage> https://review.openstack.org/#/q/topic:bug/1572575
<mup> Bug #1572575: Charm looks for JUJU_ENV_UUID but that does not exist in juju 2 models <bug-squad> <canonical-bootstack> <juju-2.0-api> <landscape-client-charm:New> <landscape-client (Juju Charms Collection):New> <swift-proxy (Juju Charms Collection):In Progress by james-page> <swift-storage (Juju
<mup> Charms Collection):In Progress by james-page> <https://launchpad.net/bugs/1572575>
<gnuoy> jamespage, tinwood if you get a chance can you take a look at https://github.com/openstack-charmers/charm-layer-openstack/pull/4 . It adds ha support to the Openstack layer. I think the method for passing kwargs to the config adapter within OpenStackRelationAdapters  and the implicit adding of the cluster adapter are the more controversial bits.
<tinwood> gnuoy, we're slightly pulling in opposite directions here.  I've pulled the code OUT of charm-layer-openstack into a charms_openstack module to make layered openstack charms easier to test.  Thus, all the layer has is the templates and a reference to charms_openstack in the wheelhouse.txt -- thus, I'll need to merge this code back into that at the appropriate time?
<tinwood> gnuoy, it's currently sitting at https://github.com/ajkavanagh/charm.openstack (but we'll hopefully (if agreed on approach) pull it into an openstack-charmers/charms-openstack git repo.
<gnuoy> tinwood, I don't think we're pulling in different directions. I think it will be straightforward to apply my changes to the module rather than the layer. I just did it againt the layer as the module work has not landed yet.
<tinwood> gnuoy, kk - I was just making sure *I* wasn't going in the wrong direction! :)
<stub> Is anyone else able to login to jujucharms.com ?
<stub> I'm getting a generic error page after the return from login.ubuntu.com
<gnuoy> stub, I'm able to login to it
<stub> gnuoy: ta
<jamespage> gnuoy, ok https://review.openstack.org/#/c/319787/ is OK
<jamespage> its counterpart failed - looking at that
<gnuoy> ok
<jamespage> and the change to n-ovs failed to bootstrap one test...
<lazyPower> blahdeblah - thats great! It may be worthwhile to file a bug against layer-basic to start the discussion around that behavior.
<jamespage> gnuoy, swift-storage review tests fail for mitaka due to out-of-date swift-proxy branches in LP - fixing that now...
<gnuoy> jamespage, can you make 17:00 UTC on a wednesday for charmers IRC meeting?
 * jamespage thinks
<jamespage> yah
<lazyPower> wait, we're having a meeting on wednesday? or is this specifically for the ~openstack charmers?
 * lazyPower doesn't see anything on the calendar for wednesday
<lazyPower> gnuoy ^
<gnuoy> lazyPower, openstack charmers
<lazyPower> Ok, phwew
<lazyPower> I thought I missed the memo
 * magicaltrout hands lazyPower the memo
<gnuoy> lazyPower, you'd be more than welcome if you fancy it
<lazyPower> i mean, if you need me i'll be there buddy
<lazyPower> gnuoy - whats the focus of the meeting? i may crash regardless so i can lend a hand while running support in here during off-hours.
<lazyPower> if there's going to be a knowledge dump or something :)
<gnuoy> lazyPower, Rap about all things Openstack charm related really
<gnuoy> https://wiki.ubuntu.com/ServerTeam/OpenStackCharmsMeeting
<gnuoy> jamespage, it's all booked with the fridge and I've created a basic wiki area https://wiki.ubuntu.com/ServerTeam/OpenStackCharmsMeeting
<jamespage> \o/
<gnuoy> jamespage, I've pushed hacluster up to github and have a project-config change ready to propose, is there anything else I need to do to get hacluster in?
<jamespage> gnuoy, I don't think so
<gnuoy> kk, ta
<jamespage> gnuoy, oh wait - does it have a stable branch?
 * jamespage looks
<gnuoy> jamespage, it does not have a stable branch
<jamespage> gnuoy, OK we can cut that once we move over if need be
<jamespage> gnuoy, I can't remember what I did to each repo prior to proposing
<jamespage> gnuoy, I think it was a) add tox and testr configuration stuff and b) add .gitreview so that git review dtrt once moved
<gnuoy> ah, I haven't checked those, thanks
<jose> gnuoy: only 30m for that meeting?
<gnuoy> jose, I was going by the server team meeting which is usually done within 30mins tbh
<jose> ah, ok :)
<gnuoy> jamespage, tinwood, did either of you do some mocking of apt_pkg recently? I'm sure I saw that fly by in a review
<tinwood> gnuoy, not that I specifically remember, but I'm pretty good at mocking things out now (properties notwithstanding)
 * tinwood shakes fist at code.
<jamespage> gnuoy, erm yes - in an action unit test
<gnuoy> jamespage, ah, I think I've got it, heat
<jamespage> gnuoy, ggnnnnnn
<jamespage> gnuoy, keep hitting that juju bug where the private/public address disappears
<jamespage> twice today
<gnuoy> yeah, it's a real pain
<jamespage> gnuoy, I'll poke the juju team again, but so far...
<jamespage> grag
<coreycb> jamespage, this is fixed up: https://code.launchpad.net/~corey.bryant/charm-helpers/systemd/+merge/287110
<cory_fu> lazyPower: The filebeat and topbeat charms were promulgated from ~containers, right?  And they're up to date with what's in the ~containers namespace?
<lazyPower> cory_fu - uhh, no. there's some slight changes that landed in repo that need to make it to the store
<lazyPower> but correct. promulgated sources point at ~containers
<lazyPower> hey dimitern o/   Re: your questions about amulet with juju2.  From what I remember, tvansteenburgh has landed some new code and PPA's to make getting at this easier. If you add ppa:tvansteenburgh/ppa and apt-get install python-jujuclient from there, that should be all you need to get amulet working under juju2.
<lazyPower> if i'm missing anything, i'm sure he will correct me :)
<dimitern> lazyPower: thanks, I'll give that a try and come back with feedback then :)
<tvansteenburgh> dimitern: you want juju-deployer from that ppa also
<dimitern> lazyPower: e.g. it will be much faster to use lxd + zfs backing to run amulet tests, rather than old lxc even with btrfs backing
<dimitern> tvansteenburgh: because amulet uses it?
<tvansteenburgh> dimitern: yeah
<dimitern> tvansteenburgh: (I haven't specifically needed the deployer yet)
<dimitern> tvansteenburgh: right, good to know, thanks!
<dimitern> tvansteenburgh, lazyPower: so by the end of the week I might poke you guys for an official review :) \o/
<dimitern> once I get the clustering working and a few other things, and ofc lots more tests
<lazyPower> dimitern i look forward to it :) good luck man, and hi5 on the progress
<dimitern> lazyPower: thanks! I was kinda scared by the apparent complexity of the layers, but it only took me a couple of hours following various guides to grok all the awesomeness..
<dimitern> indeed the interface layers I got almost immediately
<lazyPower> dimitern - we're highly interested in feedback in that dept. Such as how good our docs are, what was missing/good. etc.
<dimitern> lazyPower: well, as for feedback, some docs / guides need updating.. but that's true for most of juju's docs heh
<dimitern> lazyPower: I'll keep a list of comments / thoughts to share at the end of it
<lazyPower> dimitern dont tell me, go file bugs <3
<dimitern> hehe will do
<lazyPower> nick reassigns them to us when he identifies its an eco-wheelhouse thing
<lazyPower> so the more feedback the better off our docs will be. ta in advance :D
<dimitern> sure, no probs :)
<cory_fu> lazyPower: What's the status on the kibana charm support for beats?  I see that the promulgated version doesn't have the dashboard?
<lazyPower> cory_fu pending a review as pointed out last week https://code.launchpad.net/~lazypower/charms/trusty/kibana/add-dashboard-loader-action/+merge/295359
<cory_fu> Ah, you got the config change in there, I see.  Nice
<lazyPower> i didnt know if you wanted to wait for this to come up to the top in the revq or get it triaged sooner as its a dependency for your work.
<cory_fu> I'll review that now, since I need to publish the Spark bundle using it
<lazyPower> ok, i have a rev increment on filebeat/topbeat as well
<lazyPower> let me finish this isv work and i'll switch feet to make sure that i've got latest in the store. there were some fixes that made the layers easier to digest with the apt_layer
<cory_fu> Sounds good
<cory_fu> lazyPower: Please see review comments on https://code.launchpad.net/~lazypower/charms/trusty/kibana/add-dashboard-loader-action/+merge/295359
<cory_fu> If you can inform me what the correct fixes are, I'm happy to work on them as part of merging, since I need this for our bundle.
<cory_fu> The first couple seem self-evident, but I'm not sure how to handle the elasticsearch relation requirement
<lazyPower> cory_fu - i'm checking for the relationship in the action and failing if its not present
<lazyPower> is that not good enough?
<cory_fu> There are some gaps
<lazyPower> cory_fu - but thats bs
<lazyPower> config-changed is executed after the relationship hooks
<cory_fu> I don't believe that is true, and even if it's true for bundles, it's not true for manual deploys
<lazyPower> bundles have zero bearing over hook sequence
<lazyPower> cory_fu - did you deploy to verify the gaps?
<lazyPower> i did and it worked as expected. i didn't need to force a config-changed event
<cory_fu> I did manually test all of the issues I noted
 * lazyPower sighs heavilly
<lazyPower> i dont know why this worked for me and not for you then
 * lazyPower bootstraps and gets ready to sink more time in this
<cory_fu> Of possible relevance is that I'm testing on 1.25
<lazyPower> i dont think there was any change to hook sequencing of that nature between 1.25 -> 2.0
<lazyPower> i guess the easy fix is to implicity call config-changed from the relation-changed hook
<lazyPower> s/implicitly/explicitly
<cory_fu> lazyPower: Case one: I left the config at default value, added the relation, then changed the checksum option to trigger a config-changed hook.  That config-change hook failed with http://pastebin.ubuntu.com/16640068/
<cory_fu> The config-changed hook definitely did not fire after the relation hook until I manually changed a config value
<cory_fu> I can believe that relation-ids returns values during config-changed in a bundle deploy because the relations are established early.  That won't even depend on the hook order
<lazyPower> there's no good way to validate this then
<cory_fu> o_O
<cory_fu> I can replicate this entirely consistently
<lazyPower> i'm saying if ES is there
<lazyPower> i cant reasonably do this without some kind of state mechanism letting me know the data is there
<lazyPower> i've been spoiled with reactive
<lazyPower> and this is not reactive
<cory_fu> True
<cory_fu> But you can always use relation-get
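The guard being debated could look roughly like the following. The juju `relation-ids` hook tool is stubbed out (and renamed with an underscore so this runs under plain sh); in a real action the juju-provided binary would be on PATH instead:

```shell
# Stub standing in for the juju `relation-ids` hook tool, renamed with an
# underscore so the sketch runs outside a hook context. Here it returns
# nothing, simulating the elasticsearch relation not being established yet.
relation_ids() { echo ""; }

if [ -z "$(relation_ids elasticsearch)" ]; then
    status="blocked: elasticsearch relation required"
else
    status="loading dashboards"
fi
echo "$status"
```

The point cory_fu raises still stands: relation-ids only proves the relation exists, so a robust action would additionally relation-get the data it needs (or validate the vhost) before proceeding.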
<lazyPower> cory_fu - contributions welcome <#
<lazyPower> <3
<cory_fu> But I was wondering if you even care about the relation data, because it doesn't appear to be used in load.sh.  That just defaults to localhost anyway
<cory_fu> lazyPower: I'm perfectly happy to make the changes and run them by you.
<lazyPower> that vhost isn't created until elasticsearch is joined
<lazyPower> so it'll fail either way
<cory_fu> The action should be validating the presence of the vhost, then
<lazyPower> hmmm
<cory_fu> But should load.sh be using the relation data?  That was my main question
<cory_fu> Or is localhost ok?
 * lazyPower shrugs
<lazyPower> what makes more sense?
<lazyPower> its both going to the same place, just depends if its going through the proxy or directly to ES
<cory_fu> I don't know anything about kibana, so I have no idea
<cory_fu> Ok, so either will work?  That's fine, then.
<cory_fu> I just saw that it accepted the host as a param that wasn't being provided and figured it was a bug
<lazyPower> yeah, that proxy was an addon a while back so you dont have to juju expose es to the world
<lazyPower> i really really really want to re-write this charm
<cory_fu> :)
<lazyPower> i just dont have the time :/
<cory_fu> Yeah, I understand
<cory_fu> Ok, I'll make the changes that I think are appropriate and run them by you
<lazyPower> kk thanks man. sorry i dont know juju hook-execution as well as i used to
<lazyPower> i blame you for this though... or credit. Either way, its a positive thing
<cory_fu> ha
<cory_fu> Indeed
<cory_fu> lazyPower: Was the intent of the load-dashboard action to support custom (manually uploaded) dashboard?  Or only pre-baked?
<lazyPower> both
<cory_fu> Ok, that's what I thought
<lazyPower> so long as it lives in $CHARM_DIR/dashboards/$DASH  and has a load.sh
<cory_fu> Yep
<lazyPower> which i think is outlined in the readme
 * lazyPower crosses fingers
<lazyPower> phwew, yeah i did cover that.
<cory_fu> Oh, good, it even talks about the localhost proxy
<cory_fu> I guess I should have read that before bugging you.  :p
<lazyPower> nah :D its all good. thats a common gotchya because we do this, its not necessarily required
<lazyPower> artifact of the charm being smarter than the deployment guide
<lazyPower> also, man, dang i see what you meant by it failing with action-get. you removed the config value and it still triggered calling the loader.
<lazyPower> thats a hairy assumption
<lazyPower> mbruzek - just landed tls/k8s. \o/
<lazyPower> ready for you to rev at your leisure sir
<lazyPower> cory_fu - did you want me to get the other concerns aside from the elasticsearch relation sniffing? I can fix the logic errors in the action/loader.
<cory_fu> No, I'll address them
<lazyPower> ack
<cory_fu> lazyPower: Does Kibana do anything useful w/o ElasticSearch?
<lazyPower> negative
<lazyPower> it no-ops without ES
<cory_fu> That's what I thought.  It should probably just report blocked pending the relation
<lazyPower> i mean, it does deploy kibana
<lazyPower> but kibana directs you to a status page that points out its unhealthy
<lazyPower> and doesn't tell you what it really needs, it just reports an error in the "ES Driver"
<cory_fu> Hrm.  Why wasn't it made subordinate to ES, I wonder?
<cory_fu> *shrug*
<lazyPower> yeah, its just a webapp that does very little server side processing.... it makes sense to make it a sub
<cory_fu> lazyPower: I assume that calling a dashboard's load.sh is idempotent and there's no significant cost to calling it again?
<lazyPower> correct, it will overwrite what was there before
<lazyPower> so if you load up a visualization and manually edit it, then re-run that config-changed block, wave bye to your custom visualization that was a derivative of what was included.  (this was the original reason i made it an action)
<cory_fu> And it's not terribly expensive to re-load it?
<cory_fu> Wait, by manually edit it... Do you mean via the GUI or such?  Is that expected to happen?  If so, I'll add checking to not reload them automatically
<cory_fu> Actually, I think I'll do that anyway
<lazyPower> right. so you load it, and then you go into kibana and modify one of the "beats visualizations" - like the mongodb connection visualizer to include latency. it will forfeit that visualization modification. They have the same ID unless you explicitly make a copy of the visualization.
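A minimal sketch of the sentinel-file guard that kind of change implies: hash the dashboard sources, reload only when they change, so manual edits in Kibana are not clobbered by a no-op re-run. File names and layout here are invented, not taken from the charm:

```python
# Sentinel-file idempotence sketch (illustrative; not the charm's code).
import hashlib
import os
import tempfile

def load_dashboard(dash_dir, loader):
    """Run `loader` (e.g. the dashboard's load.sh) only if sources changed."""
    digest = hashlib.sha256()
    for name in sorted(os.listdir(dash_dir)):
        if name.startswith("."):          # skip the sentinel itself
            continue
        with open(os.path.join(dash_dir, name), "rb") as f:
            digest.update(f.read())
    sentinel = os.path.join(dash_dir, ".loaded")
    new = digest.hexdigest()
    if os.path.exists(sentinel):
        with open(sentinel) as f:
            if f.read() == new:
                return False              # unchanged: keep manual edits intact
    loader()                              # overwrites whatever is in kibana
    with open(sentinel, "w") as f:
        f.write(new)
    return True

# Demo: first call loads, an identical second call is skipped.
d = tempfile.mkdtemp()
with open(os.path.join(d, "dash.json"), "w") as f:
    f.write("{}")
calls = []
assert load_dashboard(d, lambda: calls.append(1)) is True
assert load_dashboard(d, lambda: calls.append(1)) is False
assert calls == [1]
```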
<cory_fu> lazyPower: This look good to you?  https://bazaar.launchpad.net/~johnsca/charms/trusty/kibana/dashboard-review/revision/27
<lazyPower> will TAL in a sec, verifying k8s
<lazyPower> cory_fu omg the return of sentinel files
<lazyPower> but man, you really gave this some TLC
<cory_fu> I know.  Without charmhelpers, there's no alternative
<cory_fu> I wonder if it wouldn't have been faster to convert it to layers.  ;)
<lazyPower> hooks/install may want a mkdir -p in the event it exists. (no idea why, but better to be ok with it existing than fail)
<cory_fu> mkdir -p on what?
<cory_fu> Oh yeah
<lazyPower> this looks great. i would +1 this as is. that comment was a nit
<cory_fu> Ok, if you're ok with that, I'm ok with the original review, so I can go ahead and merge it
<lazyPower> SGTM
<lazyPower> +1000 for the contribution, thanks for being a fixer
<cory_fu> Hrm.  That charm is still depending on ingestion.
 * lazyPower awards cory the fixer badge
<cory_fu> :)
<lazyPower> nah i just kept that up to date as its the only canonical source i could find for kibana
<cory_fu> Should I re-promulgate that using the new workflow?
<lazyPower> we still need to charm push it
<lazyPower> yeah
<lazyPower> i was going to take it, but if you're ok with owning it, i'm +1 to you owning it
 * lazyPower has ~ 20 charms he's responsible for already... 5 of which i'm actively maintaining
<cory_fu> Are you guys taking ownership of it (i.e., should I promulgate from ~containers)?
<lazyPower> i mean, you can :D
<lazyPower> we'll take it in the namespace so it lives with logstash and the beats
<lazyPower> i suppose that makes the most sense, keep em all together
<cory_fu> Though it's not really anything to do with containers
<lazyPower> right
<lazyPower> it has more to do with little data
<cory_fu> I feel like this is a slight rough edge to the new process
<lazyPower> i've been pointing people at ~containers to deploy kibana for beats since step 1, its probably fine to push it there and re-point the promulgated copy
<lazyPower> i'll shepherd it
<lazyPower> pretty sure mbruzek will too
<cory_fu> lazyPower: Actually, I don't have access to ~containers
<lazyPower> cory_fu - adding you individually. If you like you can supplant that with the bigdata team so we can jointly maintain it
<lazyPower>   Write:
<lazyPower>   - containers
<lazyPower>   - johnsca
<lazyPower> should be g2g now
<cory_fu> lazyPower: Hrm.  I can't push to LP, though.
<lazyPower> hrm
<lazyPower> you cant push to lp:~charmers/charms/trusty/kibana/trunk?
<cory_fu> β bzr push lp:~containers/charms/trusty/kibana/trunk
<cory_fu> bzr: ERROR: Permission denied: "~containers/charms/trusty/kibana/trunk/": : Cory Johns is not a member of Charm Containers
<lazyPower> oh, ooooohhhhh
<lazyPower> i see what you did there
<lazyPower> hang on i can fix that as well
<cory_fu> I could push it to ~charmers, though, but you'll want to re-own it, right?
<lazyPower> well, i'm just now catching up to what you're concerned with. that the source branch is not obvious living in ~charmers, and it's only further compounding old behavior of who owns what that we don't want to persist
<cory_fu> Yeah, we should promulgate from the same namespace that the code lives in.
<cory_fu> Unless we move it to GitHub, of course
<cory_fu> I mean, I guess it doesn't technically matter where the source lives
<lazyPower> as the meme implies "Sooooooon"
<lazyPower> oh we care
<lazyPower> or we should
<lazyPower> when we go to patch it, if we have no idea where it came from we're kinda asked out
<cory_fu> Well, that's what the homepage and bugs-url extra-info are for, right?  ;)
<lazyPower> oh, yeah
<lazyPower> i suppose thats right
<cory_fu> But it will be easier if the namespaces match up
<lazyPower> yep i'll add ya
<lazyPower> or i would if marcoceppi didn't own everything i ever did ever
<cory_fu> :)
<lazyPower> marcoceppi <3 can you add cory to ~containers?
<cory_fu> lazyPower: Why don't I just push it to ~charmers and let you re-own it?
<lazyPower> i'm ok with this too
<lazyPower> pretty sure the bugs url points to that location anyway
<cory_fu> lazyPower: Ok, pushed
<cory_fu> If you can re-push it to ~containers, I can re-promulgate from there
<cory_fu> Actually, I guess I can go ahead and do the store updates and you can do the code move at your leisure
<cory_fu> Or not.  Still unauthorized to `charm push` to ~containers
<lazyPower> cs:~containers/trusty/kibana-4
<lazyPower> i just pulled and pushed
<lazyPower> weird that you're denied when you're implicitly added, are you johnsca on launchpad?
<cory_fu> lazyPower: Oh, I'd need read & write perms to the unpublished channel as well
<lazyPower> hrm, i didn't know permissions were by channel. thats news to me
<cory_fu> Yep
<lazyPower> i thought it was ACL'd on the entity itself
<lazyPower> like, one step below that
<cory_fu> Nope.  Do `charm show cs:~containers/trusty/kibana id perm --channel=unpublished`
<lazyPower> yeah i see this now in the charm grant help output
<lazyPower> #TIL
<cory_fu> Makes sense, too.  You may well want different perms on unpublished, development, stable
<cory_fu> Might want unpublished to not be publicly visible, for instance
<lazyPower> yar, i just updated the unpublished channel. Shouldn't be an issue next time around
<lazyPower> warning: bugs-url is not set.  See set command. -- i do need to plug that warning though
<lazyPower> ok cory_fu, last bit thats needed that i see is re-pointing the promulgated link
<cory_fu> lazyPower: Ok, done
<lazyPower> boom ^5 on the teamwork
<cory_fu> lazyPower: Are topbeat and filebeat updated?
<lazyPower> thats also 2 closed bugs, great success
<lazyPower> doing that now
<cory_fu> Those will both be -2 right?
<lazyPower> i believe so yes
<lazyPower> yep, current promulgated revisions are -1 on both filebeat and topbeat, so next push should rev to -2
<lazyPower> cory_fu ok both are pushed and published
<cory_fu> Thanks!
<lazyPower> should be all good in the hood for CWR now
<ryebot> tvansteenburgh, does amulet handle deploying different services to different series in the same deployment? e.g., service A -> trusty & service B -> xenial?
<cholcombe> lazyPower, are you good with sphinx docs for python?
<tvansteenburgh> ryebot, yes
<ryebot> tvansteenburgh: thanks, that's also true under juju2?
<tvansteenburgh> ryebot, yep
<ryebot> tvansteenburgh: thanks
<tvansteenburgh> ryebot, see the 'series' kwarg to Deployment.add()
<petevg> I have a newbie question: what does a "series is empty" error mean? (The context is that I'm doing "juju deploy hadoop-processing", and it fails on openjdk with 'cannot add service "openjdk": series is empty')
<petevg> "juju deploy openjdk" does work, as a standalone command. Am I doing the right thing if I deploy openjdk, then try redeploying hadoop-processing?
<petevg> (This is on an Ubuntu Xenial box, deploying locally to lxc containers. My juju version is 2.0-beta3-xenial-amd64)
<tvansteenburgh> petevg: i think that was a bug in the bundle handling in beta3, it works with beta7
<tvansteenburgh> js
<petevg> tvansteenburgh: Cool. I didn't realize that I was that far behind. I'll update. Thank you :-)
<ryebot> tvansteenburgh: ah, needed to update the amulet lib to get it to work right; thanks again!
<cholcombe> i uploaded the ceph_api to pypi: https://pypi.python.org/pypi/ceph_api and docs: http://pythonhosted.org/ceph_api/.  Let me know what you guys think!
<petevg> Aha. I had the juju2 package installed from juju/stable. Removing it and installing juju-2.0 from juju/devel made everything a lot happier.
<blahdeblah> lazyPower: It actually turns out I was doing something bone-headed, which is not using the hook template from layer-basic.
<blahdeblah> Not sure if that requirement is documented anywhere, but I certainly haven't come across it in my travels.
<blahdeblah> Anyway, stub pointed out where I was wrong and next time I get a chance to hack on it I'll try going back to the original state-based way once I get my hooks right.
#juju 2016-05-24
<jamespage> gnuoy, https://review.openstack.org/#/c/319790/ if you please :-)
<jamespage> gnuoy, beisner: we really need to start putting series in metadata
<jamespage> means min version is 1.25.5 for folks not deploying via the charm store but I'm willing to face up to that
<leon324> hello!
<leon324> I am having trouble here
<neiljerram> Morning.
<leon324> :)
<neiljerram> With charms?
<leon324> with juju installation
<leon324> I have installed maas and I want to install juju on one node
<neiljerram> Juju 1 or Juju 2?
<leon324> 1.25
<neiljerram> (By the way, I'm just another user - but I may be able to help.)
<leon324> my server runs ubuntu 14
<leon324> ok
<leon324> i always get this error: "bootstrap instance started but did not change to Deployed state: instance"
<neiljerram> OK, so what's the problem?
<leon324> and I have found tons of references on google but I cannot find a solution
<leon324> when i run "juju bootstrap"
<leon324> I got failed deployment on maas
<neiljerram> Right.
<neiljerram> So that's probably a MAAS issue, not Juju.
<leon324> no i dont think so
<neiljerram> Can you use MAAS on its own to deploy Ubuntu Trusty onto one of your MAAS-managed nodes?
<leon324> the nodes behave as they expect on other things
<leon324> yes they are on ready state
<leon324> its where they supposed to be
<neiljerram> So if you go into the MAAS web UI, and deploy Ubuntu Trusty on one of the nodes, does that work?
<leon324> i am not sure I understand you right
<leon324> they have deployed Ubuntu Trusty because they have passed the commission state
<neiljerram> Commission is not the same as deploy, I believe.
<leon324> the nodes pxe boot found maas and have deployed on them ubuntu and now they are on ready state
<neiljerram> 'Ready' means that MAAS knows about the hardware specs of the target node, but that there isn't any OS installed on it.
<leon324> I think the nodes are ok I have done this before but I made a format and now I got stuck here ;/
<leon324> yeah
<neiljerram> When there is an OS installed, the state should be 'Deployed'
<leon324> aaa
<leon324> ok then
<neiljerram> You need to click the Action button, select Deploy, select Ubuntu Trusty as the OS, then Go ... and see if that works.
<neiljerram> jamespage, or other Juju experts, could I trouble you with some general questions, about charms for Trusty vs charms for Xenial...?
<jamespage> neiljerram, hello
 * jamespage reads backscroll
<neiljerram> jamespage, no need for backscroll, this is a new question.
<jamespage> oh ok
 * jamespage stops reading 
<neiljerram> Apart from the upstart to systemd transition, I believe there's not really any general reason for charms to differ between Trusty and Xenial - is that right?
<neiljerram> And yet, (1) charm URLs explicitly encode the series, and (2) I think there are lots of charms that exist for Trusty that haven't yet been made available for Xenial.
<neiljerram> So I wonder if I'm misunderstanding something.  It seems to be that it would be a better overall design for charms not to be series-specific by default.
<leon324> where is this awesome Action button to install ubuntu on maas :D
<neiljerram> leon324, if I remember correctly, it's at the top right of the page for a Node
<neiljerram> *It seems to me ....
<neiljerram> jamespage, Would appreciate your thoughts/comments on those trusty vs xenial points.
<jamespage> neiljerram, you are quite right in that there is not a huge diff between trusty and xenial charms - all of the 26 core openstack charms support all series right back to 12.04 as well
<jamespage> 1) yes charm URL's do encode series - however it is possible to embed the series supported by a charm in the metadata as of 1.25.5
<jamespage> well 2.0 actually but 1.25.5 will not bork on them
<jamespage> series:
<jamespage>   - trusty
<jamespage>   - xenial
<jamespage> for example
<jamespage> when a charm that declares this is published to the store using charm push/publish, the store will sort out all of the series oriented URL's for backwards compatibility with 1.25.5 etc..
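A minimal sketch of what such a metadata.yaml might look like, with placeholder values for everything except the series block jamespage quotes above:

```yaml
# illustrative metadata.yaml fragment; name/summary/description are
# placeholders, only the series list comes from the discussion above
name: mycharm
summary: Example charm declaring multi-series support
description: |
  One charm URL, deployable on any of the declared series.
series:
  - trusty
  - xenial
```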
<neiljerram> Ah, nice.  And in the charm store, does the charm then get a series-independent URL?  Or does 'charm push' create two URLs, one for each series?
<jamespage> 2) yes that's quite true - a number of charms are on trusty but not xenial as yet - its really up to the maintainers of the charms to sort that out
<jamespage> neiljerram, I think the charmstore creates the URL's on the fly for different series, but there is also a series independent URL as well
<jamespage> neiljerram, the openstack charms don't do the series in metadata just yet but will do soon
<neiljerram> jamespage, Could you give me an example of a series-independent URL?
<jamespage> neiljerram, https://jujucharms.com/u/james-page/aodh/9
<neiljerram> jamespage, Thanks, this has been really helpful.
<jamespage> neiljerram, np
<magicaltrout> if you kid pulls the keys off your keyboard, you buy replacements, get them shipped from the US and they are the wrong type
<magicaltrout> is it then acceptable just to throw the laptop in the bin and buy a new one?
<stub> a new kid?
<magicaltrout> lol
<magicaltrout> both
<jamespage> gnuoy, sparkiegeek does not like my changes to the unit tests on https://review.openstack.org/#/c/319790
<jamespage> dosaboy, I still don't like the fact that rmq actively manages its /etc/hosts file with peer IP/host information, but it just saved my bacon fixing things up for Juju 2.0
<lazyPower> jamespage - a lot of erlang/java apps need the hostname hacks :/
<jamespage> lazyPower, indeed - but I don't think it unreasonable that Juju should provide a minimum viable environment for this type of thing
<lazyPower> its unfortunate that we dont always get FQDN, resolveable hostnames, etc.  - seems like such an oversight on any given cloud, but its more prevalent than i care to admit.
<jamespage> like the hostname of each unit should always be resolvable
<lazyPower> right, there was talk about folding DNS into juju
<jamespage> for example
<jamespage> dosaboy, gnuoy: https://review.openstack.org/#/c/320450/
<lazyPower> i would assume hostname would be part of that
<jamespage> bdx, ^^ you'll need that one as well
<jamespage> lazyPower, I really hope so
<lazyPower> jamespage - ready for the real fun bits? GCE generates hostnames > 64 characters and tanks just about any java service thats not running a java-8 jre
<lazyPower> \o/ yay
<jamespage> lazyPower, lol
<jamespage> lazyPower, wait - who's not running on java-8?
<jamespage> ;)
<jamespage> oh yeah - everyone...
<lazyPower> anyone deploying java5-7?
<lazyPower> which as you said, is everyone
<stub> lazyPower, jamespage : I get the impression juju-core thinks /etc/hosts is the responsibility of cloud-init, and vice versa (Cassandra is bitten by this too, so add Java to the list)
<jamespage> maybe I'll email list about this
<jamespage> stub, point is that juju control's cloud-init right?
<jamespage> stub, oh yeah the ol 127.0.1.1 java problem
<jamespage> I remember that well
<tinwood> gnuoy, jamespage the new charms.openstack module is now (temporarily) at https://github.com/ajkavanagh/charms.openstack along with the barbican layered charm that uses it at: https://github.com/ajkavanagh/charm-barbican -- please take a look and see what you think when you get a moment?
<stub> I would think juju, as it also needs to handle the ip address changing
<gennadiy> hello everyone, time to time juju-gui is stuck and doesn't display new changes in env. some relation are missed some services too. how to restart it?
<rick_h_> gennadiy: just reload the page
<rick_h_> gennadiy: hatch or kadams54 might be able to help if it's more than that ^
<hatch> Hello
<hatch> gennadiy: have you tried refreshing the page?
<gennadiy> yes. i tried
<gennadiy> can we restart process?
<hatch> what version of Juju?
<gennadiy> 2.0
<hatch> hmm
<gennadiy> also it doesn't display icons of local charms
<hatch> that is a known issue
<gennadiy> as i remember, previously we had the possibility to restart the service
<hatch> gennadiy: can you open the browser console to see if there are any errors? ctrl+shift+i and then refresh
<hatch> gennadiy: also are you using the gui charm or the gui which ships with juju 2? `juju gui`
<gennadiy> a lot of
<gennadiy> charm
<gennadiy> does juju 2 have its own?
<hatch> can you paste an example of what one of the errors are?
<hatch> gennadiy: yes simply run `juju gui` from the CLI :)
<gennadiy> combo?app/assets/javascripts/yui/datatype-date-format/datatype-date-format-min.js&app/views/utils-m…:40 Unknown delta type: actionInfodefaultHandler @ combo?app/assets/javascripts/yui/datatype-date-format/datatype-date-format-min.js&app/views/utils-m…:40(anonymous function) @ combo?app/assets/javascripts/yui/promise/promise-min.js&app/models/models-min.js&app/store/endpoint…:5onDelta @ combo?app/assets/javascripts/yui/promise/promise-min.js&app/models/
<gennadiy> charms:1 GET https://user-admin:b9827e013d95a78059cb80be4c3471c1@10.9.8.246/juju-core/charms?url=local:trusty/ixia-load-module-1&file=icon.svg 500 (Internal Server Error)
<gennadiy> seen the last - it's icon issue
<hatch> ahh
<hatch> hmm
<hatch> gennadiy: can you try the version of the gui that ships with Juju? To see if it's working, I believe this might be a regression
<hatch> I'm just investigating right now
<hatch> gennadiy: I'm assuming that this issue happened after you either a) deployed a charm with an action or b) performed an action on an active service - can you confirm either of these?
<gennadiy> i think we have charms with actions.
<hatch> gennadiy: yeah, judging from that error message you do. :)
<gennadiy> the same result with embedded juju gui
<hatch> gennadiy: ok thanks, this is a regression and we'll have to get working on a fix right away
<hatch> gennadiy: would you be able to create an issue on our GitHub repository? https://github.com/juju/juju-gui/issues/new That way you will be notified as we work on it?
<gennadiy> may i fix it locally?
<gennadiy> i think i can edit js script on juju-gui server manually
<hatch> gennadiy: I'm actually still investigating - are there any other console errors that aren't what you've already pasted?
<gennadiy> no, only these
<hatch> gennadiy: the GUI should have been able to self-recover from both of those errors, so I'm looking for other places in which an issue may have been introduced
<stub> marcoceppi: So how do I get from cs:~postgresql-charmers/postgresql to cs:postgresql now days?
<marcoceppi> stub: it needs to be promulgated, which is a new process, is postgresql ready to be updated to this?
<marcoceppi> stub: ie, does it upgrade cleanly, etc?
<jamespage> gnuoy, revised - https://review.openstack.org/#/c/319790/2
<jamespage> tinwood, looking
<gnuoy> jamespage, happy to +2 post-smoketest
<jamespage> gnuoy, okies
<gnuoy> jamespage, well, Jenkins lint + unit is enough
<jamespage> gnuoy, probably :-)
<stub> marcoceppi: yes, this is the best version.
<stub> marcoceppi: it has upgrade tests, is being used internally by several teams, and is what we are using to deploy production servers (the SSO uses this charm)
<jamespage> tinwood, https://github.com/openstack-charmers/charms.openstack forked from your repository
<marcoceppi> stub: cool, didn't doubt just wanted to double check
<jamespage> tinwood, setup.py has some deadwood from whereever you copied it from :-)
<marcoceppi> stub: so, going to pm
<tinwood> thanks jamespage; re setup.py -- oh yes, so it does!  I'll fix that.
<tinwood> That's one I wrote a while ago ...
<jamespage> tinwood, https://github.com/openstack-charmers/charms.openstack/pull/1
<jamespage> tinwood, I think that brings us inline with charms.reactive and oslo.*
<tinwood> jamespage, yes, that looks right.  just to check my head's on right; that changes the import from 'import charms_openstack' -> 'import charms.openstack', or does it?
 * tinwood is getting confused.
<jamespage> tinwood, no pe
<jamespage> nope rather
<jamespage> tinwood, example: https://github.com/openstack/oslo.messaging
<jamespage> import oslo_messaging
<jamespage> module name is oslo.messaging
<tinwood> jamespage, so the change only alters the name in pypi, and setup, pip, etc.
<jamespage> tinwood, yes that's correct
<tinwood> confusing as hell.  It explains why I've always kept them the same in the past.
<sg> Hi, can anyone please tell me when config.changed.<option> of the basic layer will be set to true? I'm not setting any config option using 'juju set', but this state is still set as true and triggers the decorated function.
<tinwood> jamespage, https://github.com/openstack-charmers/charms.openstack/pull/2 touché :)
<sg> I have given some default values in config.yaml and i'm not changing that value using 'juju set' command
<gennadiy> one more issue with juju - if it can't create machine it will not allow to remove service at all. i deployed service with wrong constraints value and now i can't delete it. " cannot assign unit "sipp/0" to machine: cannot assign unit "sipp/0" to new machine or container: cannot assign unit "sipp/0" to new machine: invalid constraint value: instance-type=m3.medium valid values are: [m1.tiny m1.small m1.medium m1.large m1.xlarge m1.t_tiny m1.t_small m1.demo]"
<lazyPower> gennadiy : juju remove-machine --force  is an option
<gennadiy> machine is removed
<lazyPower> and with no "machines" backing a service it should be trivial to juju destroy-service foo
<sg> when will the config.changed.<option> reactive state of the basic layer be set to true? I'm not changing any of the default config values using 'juju set', but this state is still set as true and triggers the decorated function.
<lazyPower> sg - i believe that state will be true on the first-run of the charm. I may be incorrect.  The backing code managing those states can be found here: https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/basic.py#L126
<sg> I'm not understanding when exactly config.changed.<option> state will set as true
<lazyPower> sg - so it appears to me that on first execution, those states will trigger true, then if the value is != '', it will toggle the .set state, and if the value has changed from a prior context run, it toggles the .changed state
<cory_fu> sg: lazyPower is correct.  config.changed.<option> is always set during the first hook execution and thereafter only if the option changes from the previous value.  config.set.<option> is set as long as the value is non-empty and not false, even if that non-empty or true value is the default value.
<lazyPower> cory_fu - why/how does this work? https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/basic.py#L144 it seems to me that it should be guarded by an if statement checking if the value is none/empty and if its already doing that, its not-obvious
<cory_fu> lazyPower: config.get(opt) will return None if the value doesn't exist, or the value otherwise.  The value is then treated as a bool and if "truthy", the state will be set, otherwise it will be removed
<lazyPower> ah ok
<lazyPower> i figured there was some magic going on there
<cory_fu> bool(None) is False, bool('') is False, bool(False) is False obviously
<cory_fu> bool(0) is False
<lazyPower> so the toggle is a bit of a misnomer as its toggle_if_true()
<cory_fu> Everything else is True
<lazyPower> s/misnomer/potentially-misleading-me-because-i-dont-know-better/
<cory_fu> If you leave off the second arg, then it does a toggle.
<lazyPower> yeah
<lazyPower> ok, cool!
<lazyPower> the more you know
<cory_fu> lazyPower: Actually, that's wrong.  The second arg to toggle_state is actually required
<sg> cory_fu, Then if we provide default value for config.option and if we chnage that default value using 'juju set' command then only config.set.<option> state will be true right?
<cory_fu> sg: config.changed.<option> will be set for a single hook execution, just after it's changed.  config.set.<option> will be set indefinitely, just as long as the value is True-ish
<cory_fu> But if the default is True, and you change it to False, then config.set.<option> will *not* be set
<cory_fu> I considered adding a config.default.<option> that would just tell you if it's non-default
<lazyPower> blahdeblah   ^
<cory_fu> Would that be useful?
<lazyPower> blahdeblah - you'll want the last 15 minutes of scrollback, as this is directly proportional to your discovery the other night
<cory_fu> lazyPower: proportional?
<lazyPower> yeah, we were riffing over these states on sunday evening
<lazyPower> i was 80% right
<lazyPower> that last 20% though... i'd rather correct my oversights with evidence of the truth than leave it incorrect.
<cory_fu> lazyPower: http://www.prdaily.com/Uploads/Public/i-do-not-think-it-means-what-you-think-it-means.jpg
<cory_fu> ;)
<tvansteenburgh> lol, knew that was coming
<lazyPower> > corresponding in size or amount to something else. - Merriam-Webster tells me it means what i thought it meant
<cory_fu> tvansteenburgh knows me so well.
<sg> ok cory_fu, If I want to check the value other than default value I should use config.default.<option> state right?
<cory_fu> Oh hey, that state is already there, even.  sg: Yes, it sounds like you want the config.default.<option> state
<sg> I'm bit confused of these config states.
<sg> cory_fu: yes, I want the function to trigger when default value changed to some other value and even that value changes.
<jcastro> lazyPower: anyword on those fixes for that bundle for kibana?
<lazyPower> jcastro - ah charms have been revv'd, cory_fu  revv'd the bundle yesterday i do believe
<lazyPower> wrt the big d bundle
<lazyPower> i still need to rev the beats-core bundle, as i spaced that off until you just drew my attention to it
<cory_fu> sg: So, config.default.<option> will be set as long as the value is the default value.  You can use @when_not('config.default.<option>') to trigger when the value is something other than the default.  But be aware that the state will be set / removed for *every* hook based on the value.  If you only want to trigger a handler once and only when the value is non-default, you can use something like this: http://pastebin.ubuntu.com/16657316/
<cory_fu> lazyPower: I rev'ed the big data bundle, but not the belk stack bundle
<cory_fu> jcastro: https://jujucharms.com/apache-processing-spark/ is up to date
<cory_fu> kwmonroe: I updated the apache-bigtop-base PR with the corrected patch
<kwmonroe> ack cory_fu - gracias
<sg> ok thanks cory_fu and lazypower.
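The state semantics cory_fu describes above can be sketched in standalone form. This is a plain-Python illustration of how layer-basic's config.changed.<option> and config.set.<option> states behave, not the actual charms.reactive API; function and parameter names are invented for the example:

```python
# Illustrative sketch of layer-basic's config-state behavior, as discussed
# above: changed states fire on first run or on a value change; set states
# track truthiness of the current value.
def config_states(previous, current, first_run=False):
    """Return the reactive-style states active for a config dict."""
    states = set()
    for opt, value in current.items():
        # config.changed.<opt>: set during the first hook execution,
        # and thereafter only if the value differs from the prior run
        if first_run or previous.get(opt) != value:
            states.add('config.changed.%s' % opt)
        # config.set.<opt>: set whenever the value is "truthy";
        # bool(None), bool(''), bool(False), bool(0) are all False
        if bool(value):
            states.add('config.set.%s' % opt)
    return states
```

So a default of False never produces config.set.<option>, even though config.changed.<option> still fires on the first run, which matches the behavior sg was seeing.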
<lazyPower> cory_fu https://github.com/juju-solutions/bundle-beats-core/pull/1
<cory_fu> lazyPower: No pretty picture?  :(
<lazyPower> oo 1 sec
<lazyPower> i forgot all about that
<cory_fu> :)
<cory_fu> It actually can be useful, because it can catch a charm rev that doesn't exist (though not an out-of-date charm rev, unfortunately)
<cory_fu> Plus it looks really neat.  :p
<lazyPower> cory_fu - refresh the comments and *bam*
<cory_fu> Hrm.  Your circles overlap
<cory_fu> Is that intentional?
<lazyPower> are you nit picking the svg?
<lazyPower> really?
<lazyPower> hang on i'll add another 50 px to the annotations
<cory_fu> This is how your bundle will look in the store.  Don't you want it to look nice?
<lazyPower> I'm the reason we cant have nice things
<cory_fu> :)
<lazyPower> cory_fu give it another whirl?
<cory_fu> Looks really nice
<cory_fu> +1
<lazyPower> ta
<DavidRama> hello, is it possible to use juju + lxd with a bridge that is directly connected to my local lan (ie no private LAN + NAT) ?
<cory_fu> DavidRama: I'm afraid I don't really know anything about lxd bridging.  Can you perhaps explain more about your use-case?
<jamespage> gnuoy, thedac: could either of you take a look at https://review.openstack.org/#/q/topic:bug/1572575
<mup> Bug #1572575: Charm looks for JUJU_ENV_UUID but that does not exist in juju 2 models <bug-squad> <canonical-bootstack> <juju-2.0-api> <landscape-client-charm:Triaged> <landscape-client (Juju Charms Collection):Triaged> <swift-proxy (Juju Charms Collection):Fix Committed by james-page>
<mup> <swift-storage (Juju Charms Collection):Fix Committed by james-page> <https://launchpad.net/bugs/1572575>
<jamespage> (the open ones)
<thedac> jamespage: will do
<jamespage> they both pass smoke and I don't really think a full recheck is worth the CPU time
<thedac> jamespage: done
<jamespage> thedac, ta
<stokachu> so im trying to update my dokuwiki layer and running into this issue: https://paste.ubuntu.com/16660726/
<stokachu> using charm-tools 2.1.2
<stokachu> do i still have to export all the LAYER_PATHS etc?
<stokachu> lazyPower: ^?
<lazyPower> stokachu - right, if you're trying to build with a layer that doesn't exist in the registry, you'll need to ensure you have $LAYER_PATH and $INTERFACE_PATH set
<stokachu> lazyPower: ok
<stokachu> lazyPower: so i set them all and still same error
<stokachu> heres the latest https://paste.ubuntu.com/16660929/
<lazyPower> stokachu - looks like thats bubbling up from the installer tactic, aka pip install directives/ package install directives
<lazyPower> link / paste to your layer.yaml?
<stokachu> https://github.com/battlemidget/juju-charm-dokuwiki/blob/master/layer.yaml
<stokachu> in the process of upgrading this layer
<stokachu> well charm i mean
<lazyPower> weird, none of the layers define extra packages than i can see
<lazyPower> stokachu - i'm not certain, can you file a bug against charm-tools?
<stokachu> sure
<cory_fu> kjackal: You asked if we would have a bundle that included both Spark and Hadoop, in particular for the Bigtop charms.  I think we will; it'll probably be something like the current realtime-syslog-analytics bundle, though I think we were thinking about renaming that (kwmonroe?)
<cory_fu> kwmonroe: Also, I assume we want to rename the apache-bigtop-spark charm to bigtop-spark to match the Hadoop charms?
<kwmonroe> yeah cory_fu kjackal, for the spark bundle, we should consider whether we want spark workers or nodemanagers.  i'm assuming workers with no yarn at all.. so maybe 'spark-processing-hdfs' for a reference bundle, or something that more closely matches whatever it is that bundle does.
<cory_fu> Wouldn't something like bigtop-processing-spark-hdfs better match our current convention?
<kwmonroe> our current convention is hadoop-processing
<kwmonroe> http://jujucharms.com/hadoop-processing
<cory_fu> Oh, right
<kwmonroe> how about engine-pillar[-storage] where engine: {hadoop, spark, flink}, pillar: {ingestion, processing, visualization}, storage: {None (implied), cassandra, s3, etc}
<kwmonroe> my shed is too red for this right now.  mull on it, we can vote later.
<cory_fu> :)
<hatch> how do I deploy a namespaced bundle from the CLI?
<kwmonroe> hatch: pretty sure it's juju deploy cs:~<user>/bundle/<name>
<hatch> ahhh
<hatch> didn't try that format
<hatch> that was the one
<kwmonroe> cool
<hatch> tried all the others I guess
<hatch> haha
<hatch> thanks!
<kwmonroe> np
<kwmonroe> cory_fu: new bigtoppers looking good with that updated patch.  lots of "2.6 GB of 2.1 GB virtual memory used", but not cry babies.
<cory_fu> kwmonroe: Nice.
<kwmonroe> cory_fu: todos are then to merge the base layer, build/push/publish the charms, update bundle with revnos, build/push/publish the bundle, and then fixup PR 108?
<cory_fu> Sounds  right
<kwmonroe> cory_fu: https://github.com/juju-solutions/layer-apache-bigtop-base/pull/10
<cory_fu> You want me to click Merge?  :)
<cory_fu> Done
<cory_fu> kwmonroe: Oh, I need to create a JIRA for that patch, too
<kwmonroe> yeah cory_fu, and you may want to split the patch.. one for datanode ip reg and one for nodemgr vmem.  up2you if you think they'll enforce a 1-fix-per-jira policy.
<cory_fu> I think it would be good to split them
<kwmonroe> i think it would be good for you to split them as well
<cory_fu> kwmonroe: Can you walk me through how you created the patch JIRA?
<kwmonroe> of course cory_fu!  it's your birthday
<cory_fu> :)
<kwmonroe> dbd HO?
<cory_fu> Yeah man
<jcastro> lazyPower: http://paste.ubuntu.com/16663827/
<jcastro> so pretty
<lazyPower> Yeahhhh buddy
<lazyPower> That looks great. awesome work cory_fu and team
<lazyPower> jcastro - and thats deployed on the lxd provider right?
<jcastro> yeah
<jcastro> I did a pagerank action on spark, 10.6 seconds is my score
<lazyPower> not bad brobama
<jamespage> thedac, if you have a moment - https://review.openstack.org/#/c/319779/
<thedac> jamespage: I'll take look
<mpjetta> anyone know if the openstack-charmers hang out somewhere on irc?
<jcastro> in here
<mpjetta> ahh ok, thanks
<jcastro> though a few are on holiday, so the list might get you a faster response, depends
<mpjetta> ty
<magicaltrout> and time of day, more seem to be around earlier
<magicaltrout> marcoceppi: when i messed up my lp username a while ago you said I could create an organisation and push charms to that, was that correct?
<magicaltrout> or jcastro or someone who might know
<jcastro> you should be able to yeah
<jcastro> charms can live anywhere now
<magicaltrout> yeah but charm publish will still need a namespace in LP somewhere even if they live in github or somewhere, I believe?
<mpjetta> I've been trying to get https://jujucharms.com/u/openstack-charmers-next/openstack-base-xenial-mitaka/bundle/4 working for a few days now. I can bootstrap juju and get the openstack services UP and talking to each other. However, I can't seem to properly create the internal and ext_net networks. I see from the description of the charm that the ext_net should be on the maas nodes' eth1 interface but don't see the mappings or config to actually
<mpjetta> that network on the NIC
<lazyPower> magicaltrout - only if you wish your charms to be owned by a group in launchpad, vs your personal namespace
<magicaltrout> yeah lazyPower thats pretty much the idea
<lazyPower> oh i just read the supporting lines to that question/statement
<lazyPower> yeah you're on point.
<magicaltrout> that said i can't find how to create an org
<magicaltrout> even google fails me :(
<lazyPower> 1 sec i'll send you a link
<lazyPower> magicaltrout https://launchpad.net/people/+newteam
<magicaltrout> ta
<lazyPower> np, its hidden on "http://launchpad.net" as "register a new team"
<lazyPower> just fyi if you need to do this in the future
<magicaltrout> thanks
<magicaltrout> I need to start pushing some Apache stuff, so I wanted an org/team as a holder that was outside of my namespace to try and make it more inclusive
<lazyPower> makes sense
<magicaltrout> so we now have /apachefoundation for some charms we're working on
<petevg> cory_fu, kjackal, kwmonroe: I'm thinking of adding a README.md to layer-apache-bigtop-base, noting that the layer will run Bigtop.install for you, and then you want to run Bigtop.render_site_yaml and Bigtop.trigger_puppet explicitly in your own layer, to make it do its magic.
<petevg> Do we have any docs that I can base the rest of the README off of?
<petevg> Or should I just drop in my current understanding, based on the code that I've read today, and push it for review so y'all can tell me all of the things that I got wrong?
<magicaltrout> from an outsiders point of view, this is one of the better readmes https://github.com/juju-solutions/layer-hadoop-client/blob/master/README.md
<petevg> magicaltrout: Cool. I will give that a read-through, and steal its good ideas :-)
<cory_fu> +1 to magicaltrout's suggestion.
<magicaltrout> lazyPower: charm push . cs:~apachefoundation/trusty/joshua-full
<magicaltrout> ERROR cannot post archive: unauthorized: access denied for user
<magicaltrout> got any clue?
<magicaltrout> works when i push to my local namespace
<magicaltrout> oh hold on
<magicaltrout> good old google and aisrael https://github.com/juju/docs/issues/993
<magicaltrout> hmm no difference
<lazyPower> jrwren halp
<lazyPower> oh wait
<lazyPower> magicaltrout - try doing a charm whoami to refresh your groups and try again
<lazyPower> i dont know why this would work, but stranger things have happened
<lazyPower> mainly i want to verify you have apachefoundation in your user groups
<magicaltrout> yeah doesn't make a difference https://github.com/juju/charmstore-client/issues/33 i've landed here
<magicaltrout> so I'll dig up an update if there is one
<magicaltrout> see if it makes a difference
<cory_fu> aisrael: I'm not sure that issue is valid.  We have no trouble pushing to ~bigdata-charmers which is restricted
<lazyPower> magicaltrout - that says log out / log in and it should be fixed
<lazyPower> its due to SSO handing over the membership details, and it didnt exist at the time of initial login
<magicaltrout> yeah its a lie
<magicaltrout> logout does nowt
<lazyPower> crap :/
<lazyPower> herrings of redness abound
<magicaltrout> hehe
<magicaltrout> how does one go about upgrading charm tools
 * magicaltrout heads over to his favourite search engine
<magicaltrout> there's ppa's and stuff for this now, right?
<lazyPower> i was told never to reveal teh secrets
<lazyPower> everyone and anyone should be installing from archive per marcoceppi
<magicaltrout> last time i did any major juju installation it was from trunk, but i'm on a fresh-ish xenial install and just apt-got it ages ago
<lazyPower> but between you and i, if you fire up a virtualenv, you can rev charm-tools in that venv safely without polluting your environ.
<lazyPower> you'll have to pip install ./ from the charm-tools tree... but we dont recommend that *notice*
<magicaltrout> hmm
<magicaltrout> well marcoceppi says in that gh issue that 2.1.2 has the fixes and james can see the group
<magicaltrout> so something else is messed up as well
<magicaltrout> where does charm tools cache its login and stuff?
<magicaltrout> can I flush it?
<lazyPower> i think thats in .go-cookies
<lazyPower> its using a token-based auth mechanism
<magicaltrout> oh hold on, there's a major/major logout
<magicaltrout> not just a charm logout
<magicaltrout> bleh I've logged out of everything and i'm still not in my own group
<magicaltrout> even though i'm an owner
<magicaltrout> okay, so here's another angle
<magicaltrout> as a lowly user
<magicaltrout> if I login to the jujucharmstore.com
<magicaltrout> should it work?
<magicaltrout> jrwren says in that ticket that charm tools gets the groups from the charm store
<magicaltrout> "Sorry for the inconvenience, please pop back soon."
<arosales> magicaltrout: what charm tool version are you on?
<magicaltrout> arosales:
<magicaltrout> bugg@tomsdevbox:~/charms/trusty/joshua-full$ charm version
<magicaltrout> charm 2.1.1-0ubuntu1
<magicaltrout> charm-tools 2.1.2
<magicaltrout> latest afaik
<arosales> yup, that looks like the latest and what I have on my machine
<arosales> magicaltrout: and reading the backscroll sounds like you can't push to a group
<magicaltrout> yeah, but I also can't login to jujucharms.com (I don't know if users are supposed to or not)
<magicaltrout> so I'm wondering if they are related
<arosales> magicaltrout: you should be able to login to jujucharms.com
<magicaltrout> oh in that case my account is screwed up somewhere
<arosales> its via openid
<magicaltrout> yeah, i changed my id
<magicaltrout> i suspect jujucharms.com can't cope
<magicaltrout> woop
<arosales> magicaltrout: so https://jujucharms.com/login/ does not work for you?
<magicaltrout> no it tells me to pop back soon
<arosales> :-/
<magicaltrout> the USSO login works
<magicaltrout> it's post auth
<arosales> https://help.launchpad.net/YourAccount/OpenID?action=show&redirect=OpenID
<magicaltrout> so I guess charm tools can't grep my groups because it will also get redirected under the hood
<arosales> under "Can I change my OpenID"
<arosales> magicaltrout: did you change our lp id recently?
<magicaltrout> months ago
<magicaltrout> probably 6
<arosales> hmm well that should have only affected sites you were already logged into
<magicaltrout> but i've only ever pushed stuff to my namespace and never logged into jujucharms.com
<arosales> and sounds like you have tried to log in/out a couple of times
<magicaltrout> yeah 2 different browsers, 2 servers with charm tools installed
<magicaltrout> same result
<arosales> given jujucharms.com/login does not work for you it feels like an OpenID issue
<magicaltrout> well openid works on LP
<magicaltrout> so I know it works in some domains
<arosales> good data point
<magicaltrout> actually arosales, full disclosure, a while ago some devs hacked my account around on jujucharms.com because of my LP name change, so it may be that whilst the oauth stuff works on the push side to my namespace, its finding multiple users with the same ID or something
<arosales> magicaltrout: possibly
<arosales> magicaltrout: lets open an issue to track this
<arosales> magicaltrout: can you collect the error you see when pushing
<arosales> along with charm whoami
<magicaltrout> sure, where do you want me to file it?
<arosales> @ https://github.com/juju/charm-tools/issues
<magicaltrout> ta
<arosales> magicaltrout: if you add me to a ~apachefoundation I can try to replicate the problem
<arosales> but I am guessing its on the auth/SSO side
<arosales> magicaltrout: also be sure to mention you have done charm login/logout
<arosales> and jujucharms.com/login also fails for you
<magicaltrout> yup
<magicaltrout> added, i'm sure you'll have more success than me
<arosales> magicaltrout: and we should also ping in #canonical-sysadmin and see if they have any insights
<arosales> magicaltrout: https://launchpad.net/~arosales528 isn't me
<magicaltrout> oh well
<magicaltrout> i shall remove that imposter
<arosales> but https://launchpad.net/~arosales is  :-)
<magicaltrout> how dare someone share a similar name
<arosales> I know, right
<magicaltrout> alright i submitted #211
<arosales> magicaltrout: interesting whoami doesn't show apachefoundation for me atm
<arosales> I wonder if I push  . . .
<arosales> what to push though
<arosales> magicaltrout: interesting . . .
<arosales> $  charm push . cs:~apachefoundation/trusty/apache-boilerplate
<arosales> ERROR cannot post archive: unauthorized: access denied for user "arosales"
<magicaltrout> hmm
<magicaltrout> and whoami doesn't show you in it?
 * arosales will try login/logout since it was a new addition to the team
<arosales> magicaltrout: correct whoami hasn't shown me to be in apachefoundation group yet
<arosales> magicaltrout: still no luck on my side with pushing to that group, or seeing it my list of groups via whoami
<magicaltrout> weird
<arosales> magicaltrout: I don't see it in your whoami either
<magicaltrout> nope
<magicaltrout> all in all, i think i can safely say its not working :)
<arosales> alas, yes :-/
<magicaltrout> oh well, i'll leave the ticket lurking and see what someone says
<magicaltrout> thanks for validating it though arosales
<magicaltrout> its a bad day when a canonical employee can't do something on LP ;)
<arosales> magicaltrout: I created https://launchpad.net/~ai-charmers
<arosales> and I don't see that in my group list
<arosales> so this may be an issue of groups being updated.
<arosales> I'll update the issue with this
<magicaltrout> ta
<arosales> but its a good day when I see stuff like "cs:~apachefoundation/trusty/joshua-full" :-)
<arosales> just super  frustrating you can't share it :-(
<magicaltrout> hehe, well its no big rush, its not completely done, but i did want to push it to a development branch so I could demonstrate it to the guys on the project. That said, there's no huge rush, so I'll just sit it out.
<magicaltrout> I'm supposed to be talking to Alejandra at the end of the week about the apache page as well
<magicaltrout> which is good
<arosales> magicaltrout: sorry about the issue, I did add a comment to https://github.com/juju/charm-tools/issues/211
<magicaltrout> yeah its no problem arosales thanks for looking into it
<arosales> magicaltrout: ya ale mentioned that today, be great to get a nice landing page for apache projects
<magicaltrout> i'll file a ticket on GH jujucharms.com about my login as, even if the group suddenly appears, I should be able to login to the website
<arosales> magicaltrout: correct, I didn't see that issue
<arosales> I am able to login to jujucharms.com
<magicaltrout> yeah, i'm sure that must be LP nameswitch/devs hacking the database for me/ related
<arosales> but mainly we need to get folks to easily create an lp team and then start pushing to it
<arosales> magicaltrout: we should see a reply on the list when folks in the UK come back online during normal business hours
<arosales> not magicaltrout 24 hours :-)
<magicaltrout> bleh. And on that note, I'm off for at least 5 hours :P
<magicaltrout> thanks again arosales
<arosales> good night magicaltrout
#juju 2016-05-25
<jamespage> urulama, morning - I'm working through https://github.com/juju/charmstore-client/issues/61 whilst beisner is on leave
<jamespage> urulama, is there a nice way I can login once, and then propagate the usso credentials to a number of machines so that they can all push and publish charms?
<urulama> jamespage: hm. if you copy the store-usso-token in ~/.local/share/juju across machines, this should do the trick
<urulama> we need to provide you with better instructions how to hook CI
<jamespage> urulama, ok tried that, but charm whoami still thinks I'm not logged in on machines other than the one I generated that file on
<urulama> jamespage: hm. seems like ~/.go-cookies are used still. try copying that file
<jamespage> urulama, trying that now
<jamespage> urulama, +1 that fixed me up
<jamespage> thanks for the help
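The fix urulama and jamespage worked out above — copy both `store-usso-token` and `~/.go-cookies` to the other machine — can be sketched as below. Temp directories stand in for the two home directories; on real machines you would scp the same two paths to the CI host.

```shell
# Simulate a source machine and a CI machine with temp dirs (placeholders);
# on real machines these are the two users' home directories.
src=$(mktemp -d); ci=$(mktemp -d)

# Files created by 'charm login' on the source machine (fake contents here)
mkdir -p "$src/.local/share/juju"
echo "usso-token" > "$src/.local/share/juju/store-usso-token"
echo "cookies"    > "$src/.go-cookies"

# The key point from the conversation: copy BOTH files -- the token alone
# is not enough, since charm tools still reads ~/.go-cookies for its
# token-based auth.
mkdir -p "$ci/.local/share/juju"
cp "$src/.local/share/juju/store-usso-token" "$ci/.local/share/juju/"
cp "$src/.go-cookies" "$ci/"

ls "$ci/.go-cookies" "$ci/.local/share/juju/store-usso-token"
```

After copying both files, `charm whoami` on the target machine should report the same logged-in user.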
<jamespage> tinwood, thedac, gnuoy, beisner: OK charm push on change landing for master and stable branches is now live.
<tinwood> jamespage, excellent!
<jamespage> thanks to urulama for helping get the auth sorted out
<jamespage> +1 ta
<tinwood> urulama, excellent too! :)
<urulama> i think the proper way would be to create a "bot" user to be used by CI
<urulama> tinwood: ty, it has some rough edges still, but, we'll get there :)
<jamespage> gnuoy, https://review.openstack.org/#/c/320817/
<jamespage> urulama, agreed - we already have a bot user
<jamespage> gnuoy, if you have time :-)
<godleon> Hi all, I have a question about openstack charms, do you have any plan to develop charm for projects in big tent other than core projects? e.g. magnum, murano ... etc
<jamespage> godleon, we're working on a way to make it easy to charm said projects - and the current team may pick off a few of those, but we'd love to have other contributors who know and use those projects working on the charms :-)
<jamespage> godleon, we find that works best rather than the 'read the docs -> write the charm' approach :-)
<godleon> jamespage: thanks! I will spend some time to read the docs.
<jamespage> godleon, still wip but worth a look - https://github.com/openstack-charmers/openstack-community/blob/master/openstack-api-charm-creation-guide.md
<godleon> jamespage: And I have another question about nova-compute with LXD: is it possible to have two virt-types (KVM & LXD) simultaneously on the same openstack platform?
<jamespage> godleon, yes - but not on the same servers
<jamespage> godleon, one sec - letme pick out the reference for that
<godleon> jamespage: wow, you are so kind. :)
<jamespage> godleon, http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/README-multihypervisor.md
<jamespage> np :-)
<godleon> jamespage: can I manage multiple hypervisor's workload in one Horizon portal?
<jamespage> godleon, yes
<godleon> jamespage: How about the performance comparison between LXD and docker, havr you ever done this kind of test?
<jamespage> no
<jamespage> godleon, they target different spaces ...
<jamespage> system containers vs application containers
<jamespage> mutable vs immutable
<godleon> jamespage: ok, I didn't have this concept, sorry about that.
<godleon> jamespage: ok, I will dig into the LXD and multiple hypervisor architecture to evaluate if it can help me solve the problems in my project. Really appreciate ur information. :)
<jamespage> godleon, I did a talk about this in austin - letme dig out the video
<godleon> jamespage: great
<jamespage> godleon, https://youtu.be/u511z0BGnw4
<jamespage> godleon, enjoy my shirt :)
<jamespage> godleon, the demo is missing due to some video problems - I'll ping gsamfira to see if he's re-recorded the demo for us yet...
<godleon> jamespage: haha, will do. Many thanks!
<godleon> jamespage: wow, good! Thanks!
<godleon> godleon: should I leave my email here?
<gnuoy> jamespage, do you happen to know if the juju native bundle unit placement syntax is documented anywhere?
<jamespage> gnuoy, probably but not sure where
<jamespage> gnuoy, what are you trying to do?
<gnuoy> lxc:nova-compute=1
<gnuoy> jamespage, fwiw I know it's lxd now
<jamespage> gnuoy, that does not work
<jamespage> gnuoy, you have to target the actual machines
<gnuoy> jamespage, ok, I can live with that, whats the syntax for targetting actual machines
<gnuoy> ?
<jamespage> gnuoy, read this - https://api.jujucharms.com/charmstore/v5/openstack-base/archive/bundle.yaml
<gnuoy> jamespage, ta
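A minimal sketch of the machine-targeted placement jamespage is pointing at. Charm names and machine numbers here are illustrative; the exact shape should be checked against the openstack-base bundle.yaml linked above.

```yaml
machines:
  "0":
    series: trusty
services:
  nova-compute:
    charm: cs:trusty/nova-compute
    num_units: 1
    to: ["0"]        # target the actual machine
  rabbitmq-server:
    charm: cs:trusty/rabbitmq-server
    num_units: 1
    to: ["lxc:0"]    # container on machine 0; "lxc:nova-compute=1" is not valid
```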
<jamespage> gnuoy, could you take a peek at https://code.launchpad.net/~james-page/charm-helpers/newton-opening/+merge/295693
<jamespage> ?
<coreycb> jamespage,  I fixed up the cinder.conf commit message.  I never realized you could edit it directly within gerritt.
<jamespage> gnuoy, https://review.openstack.org/#/c/320817/
<jamespage> all good to go
<gnuoy> jamespage, approved your ch change too
<jamespage> gnuoy, ta
<gnuoy> tinwood, I really like the clean look of the barbican charm. The only things that jar a little for me are the setup_amqp_req, setup_database and setup_endpoint being part of barbican.py. they seem standard and not specific to barbican at all. Can we push them down into the layer or module?
 * tinwood hadn't considered that.
<tinwood> gnuoy, I'll take another look.  I'm sure there's a dependency on one of them in the charm, but the other two could probably move.
<gnuoy> tinwood, do you agree in principle to moving them?
<tinwood> gnuoy, setup_amqp_req() and setup_database() are both independent of the charm and could happily go elsewhere.  setup_endpoint() seems more problematic, in that it needs access to some of the charm's properties.  I'm wondering whether that might be better elsewhere?
<gnuoy> tinwood, it was originally part of the charm class wasn't it? do you feel uncomfortable with it going back there?
<tinwood> gnuoy, I'm not sure.  The `keystone` object is an interface class instance.  It would be possible to have a `register_endpoints` with a sig register_endpoints(keystone : `keystone-interface`) which then turns around and calls the register_endpoints() on the interface, assuming all OpenStackCharms will have service_type, region ... admin_url.
<gnuoy> tinwood, It's safe to say all charms calling register_endpoints will have those attributes
<tinwood> I'm actually in favour of pushing the setup_amqp_req() and setup_database() back to the handlers file in the charm.
<tinwood> gnuoy, but I did want to keep all the handlers together.
<tinwood> hmm.  It's because reactive forces us to put some functions in the reactive directory, but we're putting charm code in lib/
<jamespage> gnuoy, we need a better way to do feature discovery in the cloud - https://review.openstack.org/#/c/320972/1
<jamespage> I have this type of config option...
<jamespage> have/hate
<lazyPower> godleon - i've done some benchmarking in terms of starting containers but thats not much of a telling story. we both launch containers silly fast if you have the images cached. but jamespage was spot on with them targeting different spaces so its comparing apples to pineapples.
<gnuoy> jamespage, I take it cinder-backup doesn't register an endpoint in keystone?
<jamespage> gnuoy, hmm
<jamespage> it might
<jamespage> gnuoy, however...
<gnuoy> if only there was some sort of service catalogue...
<jamespage> gnuoy, I'd not want to query the service catalog for this; it should be done using charm semantics
<petevg> cory_fu, kjackal, kwmonroe: I've got a question about Bigtop automagic, using the layer-apache-bigtop-spark charm as an example: Does Bigtop know to tell puppet to install spark simply because there is a "spark" entry in the "hosts" dict that we pass to render_site_yaml? Or is there additional configuration info in the charm that I'm missing?
<cory_fu> petevg: It's actually the roles that define what gets installed: https://github.com/juju-solutions/layer-apache-bigtop-spark/blob/master/lib/charms/layer/bigtop_spark.py#L28
<petevg> cory_fu: got it. That makes sense, now that I think about it.
<petevg> Thank you :-)
<cory_fu> petevg: The Bigtop Puppet scripts have two methods for selecting what gets installed: components or roles.  Roles are more fine-grained, and let you specify things more precisely, while components infer a lot more based on what hosts are provided
<cory_fu> petevg: https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/hieradata/site.yaml has a list of the components you can choose from, while https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/manifests/cluster.pp has a (not 100% complete, it seems) listing of roles
<petevg> Ooh, useful. Will bookmark that, and also link in the README.
<cory_fu> Unfortunately, there doesn't seem to be much in the way of documentation of this stuff outside of the Puppet scripts themselves.
<cory_fu> Adding some would be a good patch to submit, I think
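For the README petevg is writing, the two selection methods cory_fu describes might be contrasted with a site.yaml fragment like this. The key names follow bigtop-deploy puppet conventions but are an assumption here, and should be verified against the site.yaml and cluster.pp files linked above; the hostname is a placeholder.

```yaml
bigtop::hadoop_head_node: "head.example.com"   # hypothetical FQDN

# Components: coarse-grained; puppet infers which daemons to run
# on each host from the host lists provided
hadoop_cluster_node::cluster_components:
  - hdfs
  - spark

# Roles: fine-grained; when enabled, selects daemons explicitly
# instead of inferring them from components
bigtop::roles_enabled: true
bigtop::roles:
  - spark-master
```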
<deanman> Is it possible to deploy an openstack bundle on a Juju with manual provider? I can see from store that the default bundle requires MAAS.
<magicaltrout> marcoceppi stop spamming me! :P
<magicaltrout> arosales: they've fixed my CS login, if it makes you feel any better, I'm also listed in 0 teams ;)
<marcoceppi> magicaltrout: stop opening bugs on the wrong project ;)
<arosales> magicaltrout: glad to they got you sorted
<magicaltrout> just doing what arosales told me :P
<marcoceppi> arosales: stop opening bugs on the wrong project ;)
<arosales> magicaltrout: I hear the ~charmer group is a good group to be in
 * marcoceppi labored over that issue template
<arosales> marcoceppi: ?
<magicaltrout> Ctrl -a <backspace>
<marcoceppi> magicaltrout: :sadpanda:
<magicaltrout> hehe
<marcoceppi> arosales: charm-tools is only for proof, build, inspect, layers, create, and a few other things. Everything else (login, whoami, push, pull, grant, publish) is charmstore-client
<arosales> magicaltrout: you should at least be in charm contributor and apachefoundation
<arosales> marcoceppi: how is any normal human suppose to know that?
<arosales> marcoceppi: I install charm tools
<marcoceppi> arosales: well, because I created an issue template that tells people this
<arosales> I look at charm tools
<magicaltrout> templates are for wimps
<arosales> Version
<marcoceppi> arosales: https://github.com/juju/charm-tools/issues/new
 * marcoceppi tries so hard
<magicaltrout> marcoceppi: you know that rule about website loading speeds
<magicaltrout> it also applies to placeholder text ;)
<marcoceppi> okay, so trim the fat, got it
<magicaltrout> remove the first para for a start
<magicaltrout> I know you like thanking people, but there is a time and a place... namely a juju charmer summit
<marcoceppi> yeah, I was just thinking that as well
<magicaltrout> you also have a typo in para 2
<magicaltrout> "agains" isn't a word
<arosales> marcoceppi: how hard is it for us to move the bug?
<marcoceppi> arosales: I already did
<magicaltrout> hyper links don't work, so just remove the hyperlink markup
<arosales> marcoceppi: so not very hard in general then
<marcoceppi> arosales: it is very very annoying, because gh doesn't have a way to "move" issues
<marcoceppi> and only admins of both repos can do it, so a select few in general
<arosales> But something a person can do
<marcoceppi> arosales: it's super SUPER dirty, ugly, and not friendly
<arosales> I am +1 for the template
<arosales> But
<marcoceppi> magicaltrout: I've been mulling over the idea of `charm bug` or something that can collect 90% of this from the command line
<arosales> Let's not make it cumbersome on someone giving feedback
<marcoceppi> magicaltrout: not sure if people would actually use it
<marcoceppi> I updated the issue template as well
<arosales> ideally we have one place like Juju to submit all bugs and then triage from there
<arosales> Make it easy for
<arosales> Contributors
<arosales> But that's ideal
<magicaltrout> I don't think charm bug is a bad idea, if i'm already in the command line, copying that stuff out of my terminal is clearly harder than me typing charm bug
<marcoceppi> arosales: sure, if we do that on Launchpad.
<marcoceppi> because you can not move issues around in gh
<marcoceppi> but we can in lp, but now you're subjecting people to lp
<arosales> Well gh repo admin can I think
<arosales> But that's technical
<arosales> All I am saying is take the burden off someone trying to give feedback
<magicaltrout> don't subject people to LP, most people will have a GH login, not so with LP. Finding anything in LP, including your own code is a pain in the ass :)
<arosales> +1 on the template helping them
<marcoceppi> arosales: you can not move issues between repos, pretend I ever said that
<arosales> Get to the right spot
 * marcoceppi spins up a third, bugzilla site ;)
<magicaltrout> juju deploy cs:bugzilla juju-charm-website-massive-bug-aggregator
<arosales> My feedback is make it easy on contributors even if it is more back end work
<arosales> +1 for templates to help folks get to the right place, but if that fails let's just take care of it on the back end, even if copy paste
<arosales> magicaltrout: would you mind if I demo'ed your mesos bundle?
<magicaltrout> not at all arosales
<magicaltrout> just bear in mind currently it doesn't support > 1 master
<magicaltrout> which i appreciate defeats the point slightly, but there is a fix in the works for that, its just wiring as opposed to anything technical
<arosales> magicaltrout: attending mesoscon in Denver next week and would like to present your bundle at a lightning talk
<magicaltrout> yeah i was thinking about showing up to MesosCon europe with a talk if they fancied it
<arosales> magicaltrout: could you point me at your latest bundle?
<magicaltrout> ah yeah i've not published it yet :P
<magicaltrout> should probably do that
<magicaltrout> https://github.com/buggtb/dcos-master-charm / https://github.com/buggtb/dcos-agent-charm / https://github.com/buggtb/dcos-interface
<magicaltrout> currently
<arosales> magicaltrout: thanks
<magicaltrout> i'll try and get the multi-master finished this week and get it published
<arosales> magicaltrout: oh no worries on working on it this weekend on my account
<arosales> https://www.irccloud.com/pastebin/7DJdv9mG
<magicaltrout> its like you're talking to me in a webpage....
<arosales> magicaltrout: have you seen the work SaMnCo and data art have done on mesos
<magicaltrout> I've not seen it, but he tried to hook us all up pre-apachecon and we all said hi then it went quiet
<arosales> Dang phone client
<magicaltrout> i mailed SaMnCo the other day to try and reboot it, but i've noticed if I mail him with more than one issue a day, the others don't get a response, so that went unanswered ;)
<arosales> I'll try to follow up with SaMnCo
<arosales> magicaltrout: could you
<arosales> Add marcoceppi and I to cc?
<magicaltrout> I'm not sure what they are up to, but it would be cool to get all of this stuff aligned, hipster tech and all that
<magicaltrout> will do
<magicaltrout> i also need to get a talk submitted to Oslo Devops Days, so I'll probably submit something similar to what I did in that ApacheCon talk with some more hadoop-y stuff to pad it out
<arosales> magicaltrout: good stuff, thanks
<kjackal> petevg: Thank you for the README PR. Docs is something (at least) I have to pay more attention to. Good work!
<petevg> kjackal: thanks. Nice to hear that it's appreciated :-)
<kwmonroe> cory_fu: what's wrong with me?  why can't i see an issues tab here? https://github.com/juju-solutions/layer-apache-bigtop-nodemanager
<kwmonroe> and why does going to https://github.com/juju-solutions/layer-apache-bigtop-nodemanager/issues redirect me to PRs?
<cory_fu> That repo doesn't have issues enabled, apparently.  Probably some weirdness because of how it was forked
<cory_fu> We should probably re-own it anyway, so it's easier to make PRs
<cory_fu> kwmonroe: Anyway, issues are enabled now
<kwmonroe> gracias!
<bdx> openstack-charmers: when using nova-lxd, am I confined to using local storage, or does ceph work in some way as a backend for nova-lxd that I'm unaware of?
<tinwood> gnuoy, you about or eod?
<hatch> Is there a way to get the full log from a unit from the Juju CLI?
<lazyPower> hatch - yep
<lazyPower> juju debug-log -i --replay
<lazyPower> you'll need to pass the unit in the -i flag
<lazyPower> hatch if you dont want to tail after its done, pass -F
<lazyPower> or, juju debug-log --help for the full list of awesome
<hatch> lazyPower: ahhh that's it - I didn't read the docs for --replay because I didn't want to 'replay' them, I just wanted to see them
<hatch> heh
<hatch> thanks :)
<lazyPower> ye :) that'll get ya sorted
<lazyPower> cheers
<hatch> how do I destroy a service with units in error?
<lazyPower> hatch juju remove-machine --force #
<lazyPower> hatch then juju destroy-service will cleanly remove the service/charm from the controller
<lazyPower> that or juju resolved mything/0    all the way down as it fail-cycles until its removed
<hatch> thanks lazyPower
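The two escape hatches lazyPower describes above, collected as a command sketch. Service, unit, and machine identifiers are placeholders, and the commands need a live juju controller to do anything.

```shell
# Option 1: force-remove the machine under the errored unit, then
# the service can be destroyed cleanly
juju remove-machine --force 3
juju destroy-service mything

# Option 2: keep marking the failed hooks resolved; the dying unit
# fail-cycles until it is finally removed
juju resolved mything/0
```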
<hatch> saying to destroy a service and getting no feedback is not very good
<lazyPower> i dont know that i'm following what you're calling attention to. How did you not get feedback?
<hatch> I can spam destroy-service and because units are in error nothing happens
<hatch> and I get no feedback
<hatch> so I just sit here wondering why Juju is broken :)
<lazyPower> ah, well, do you expect the command to block until its removed?
<hatch> at the very least I'd expect a message telling me why it's not doing what I told it to do
<lazyPower> its doing the right thing in my mind. its a one-shot declaring a state. "Remove this thingy!!" and its trying its best to get there, and if it fails to do so it reports that.
<lazyPower> thats a disconnect between blocking commands and the fire/forget style state change you've defined.
<hatch> no it doesn't
<hatch> it doesn't report anything
<lazyPower> juju status sure does
<hatch> I guarantee it doesn't
<hatch> https://gist.github.com/hatched/9a374d20d007e019d3ec2045cf7edc1f
<lazyPower> thats funny. workload-status: error
<hatch> where there says that I have destroyed the service about 10 times?
<lazyPower> message: hook failed 'install'
<hatch> yep the hook failed - so why can't I destroy the service?
<natefinch> lazyPower: I'm with hatch.  juju destroy-service, IMO, should just take down errored units.  who cares if a unit is in an error state, I'm explicitly removing it
<lazyPower> you're making an assumption about what it should be doing
<lazyPower> and i dont agree with your assumptions
<lazyPower> "i said destroy service, and its still here D:"
<lazyPower> what if you want to debug that while its in life: dying?
<hatch> so you prefer for the command to just return with no message to the user
<lazyPower> and root out what the cause was?
<lazyPower> no, i want you to admit that you're conflating two issues
<natefinch> I definitely agree that if we KNOW that destroy-service is not going to work, we tell the user
<hatch> I said to Juju to destroy the service - I want it to destroy the service
<lazyPower> thing is
<lazyPower> juju status --format=yaml
<lazyPower> the life is going to be dying
<hatch> but it's not dying
<lazyPower> it received and is working towards that destructive state
<hatch> it's going to sit there forever
<natefinch> juju destroy-service foo
<lazyPower> the fact there's a hook error is regardless of what state change you just told it to take
<natefinch> ERROR: can't destroy service, unit in error state: foo/0
<hatch> ^^^ this
<hatch> 100x this
<natefinch> if we want to be pedantic and conservative and not destroy an errored unit automatically.... at least tell the user we're not going to do it
<lazyPower> thing is, it IS going to do it if you resolve it and it doesn't further error
<hatch> if you're not doing to do what the user intends to do then at least tell them why
<lazyPower> its not "uncommitting" that destroy directive
<natefinch> WARNING: unit foo/0 in errored state, will not be destroyed until resolved.
<lazyPower> that makes more sense
<lazyPower> i'm +1 to that
<natefinch> throw me a bone, FFS.  The user shouldn't need to know the internal details of exactly how everything works... we should help them to use juju
<hatch> YES!
<lazyPower> but i still stand that its 2 sep. issues.
<natefinch> I'm fine with it being two separate issues - do we or do we not destroy errored units automatically?   and, if we don't, we need to tell the user explicitly.
<natefinch> And honestly, I wish destroy-service were synchronous, like destroy-model is now
<natefinch> maybe with a flag to make it async... or vice versa....  I shouldn't ever need to type watch juju status in order to figure out WTF is going on in juju
<hatch> +1
<lazyPower> i'm going to leave this alone
<hatch> lol
<mbruzek> what?
<lazyPower> this is going off the rails into a really bad gripe session
<natefinch> lol true, sorry :)
<hatch> I'm going to file a bug about the destroy-service messaging
<mbruzek> natefinch: I watch juju status all the damn time
<mbruzek> It is the only way to figure out what is going on behind the curtain
<mbruzek> That wizard is up to some crazy stuff folks, I constantly watch juju status and tail the debug log
<natefinch> mbruzek: that's my point.  You shouldn't have to.  Like the way destroy-model is synchronous and actually tells you what it's doing.
<lazyPower> what's fun is when something errors, you destroy it, then figure out it was like a race condition and the charm can recover... it goes to started, then immediately destroys itself
<lazyPower> thats the best
<hatch> lazyPower: sorry I told it to do something - I expect it to listen :)
<hatch> I don't much care what it wants
<lazyPower> again. it listened
<lazyPower> you weren't prepared for the sequence of papercuts afterwards
<hatch> this is like going to a restaurant, placing an order with the server and then the server walks away
<hatch> you sit there forever waiting for your food
<hatch> it was sitting in the kitchen, but you forgot to tell them to deliver it
<hatch> not that you knew you had to say that
<hatch> because the server didn't tell you that
<lazyPower> look i'm not saying you're wrong, i'm saying the way you're conveying it and griping is wrong, because you're telling me it didn't do something that it is clearly doing
<hatch> how is it doing it? It isn't destroying the service
<lazyPower> did you juju remove-machine # --force?
<hatch> I did
<lazyPower> the service should have gone then
<lazyPower> its dead
<lazyPower> has no units
<hatch> it did
<lazyPower> is in state: dying
<lazyPower> then how did it not destroy?
<hatch> so destroy-service isn't destroy service
<hatch> it's "mark for destroy sometime in the future when a predetermined list of requirements have been satisfied - oh but you don't know what those are"
<hatch> that's quite the command ;)
<natefinch> it seems to me that if destroy-service had a --force flag, it would help a lot.
<lazyPower> thats not true either
<lazyPower> hatch - this. x1000 is this conversation
<lazyPower> https://xkcd.com/386/
<hatch> lol except this _is_ important
<hatch> users need to be informed about what's going on
<hatch> and if it's not doing what they said to do they need to know why
<mbruzek> Has anyone opened a bug about this problem? With the problem statement, and expected result. I think others would like to see this and possibly comment on either it is working as designed or a problem that will get fixed next release.
<hatch> mbruzek: here is one I filed earlier https://bugs.launchpad.net/juju-core/+bug/1568160
<mup> Bug #1568160: destroying a unit with an error gives no user feedback <destroy-unit> <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1568160>
<hatch> I actually totally forgot I filed that one
<hatch> haha
<hatch> I file a lot of bugs
<hatch> :)
<hatch> mbruzek: I think there are really two points here - 1) why errors block removal actions and 2) lack of user feedback in any case
<mbruzek> hatch: natefinch: lazyPower: I commented on the bug ^.  I think the actual problem is the charm code had a bug, and would not destroy because a hook returned non-zero, and that is literally how Juju hooks work.
 * lazyPower smirks
<hatch> https://bugs.launchpad.net/juju-core/+bug/1585750
<mup> Bug #1585750: Destroying a service in error gives no feedback <juju-core:New> <https://launchpad.net/bugs/1585750>
<hatch> mbruzek: your comment doesn't really apply
<hatch> mbruzek: if you want to fix the error then fix the error
<hatch> you wouldn't destroy it if you wanted to fix it would you?
<mbruzek> in the charm code rather than fix juju
<lazyPower> stop isn't executed without first destroying the service or removing a unit
<magicaltrout> marcoceppi: ping
<lazyPower> you may not know its a problem until you've already issued the destroy stanza, and by your proposal - there is no way to really debug it. its just LOL bye.
<marcoceppi> magicaltrout: yo
<mbruzek> Actually hatch that is a valid test case when developing a charm. I want to make sure that the charm goes down cleanly
<magicaltrout> hey, 2 things
<magicaltrout> a) warning: bugs-url and homepage are not set.  See set command.
<magicaltrout> hmm nm
<mbruzek> hatch: There is no other way to call the stop hook that I know of.
<magicaltrout> ignore that one
<magicaltrout> b) https://jujucharms.com/u/apachesoftwarefoundation/ why did the charm I publish earlier vanish?
<marcoceppi> magicaltrout: are you logged in
<magicaltrout> i am logged in
<hatch> mbruzek: This is the first time I've ever heard of any reason to not clean up on error
<mbruzek> hatch: If the charm *-broken relations, or stop hooks, did something really important, I would TOTALLY want to know if they worked or not.
<hatch> mbruzek: but put yourself in the users shoes - how are they supposed to know any of this?
<mbruzek> Like backing up data to S3 or doing other important clean up things
<magicaltrout> i fully had a charm there earlier that I went to look at and stuff
<lazyPower> magicaltrout - its likely permissions
<hatch> mbruzek: there is literally 0 feedback or help given to the user
<mbruzek> hatch: I *am* thinking of the users. I would propose that you are using the wrong command.
<lazyPower> magicaltrout - pastebin the output of charm show cs:~apachefoundation/mything
<hatch> mbruzek: so what command am I supposed to use?
<mbruzek> If there are errors in the hook, remove-machine
<mbruzek> or resolve the errors
<natefinch> mbruzek: In the long term, 99.9999% of users will not be charm authors
<hatch> ok so I'm supposed to know that when I try to destroy a service, if there is an error I need to remove the machines
<hatch> but I have to actually check juju status
<hatch> often
<hatch> to know what commands to run
<magicaltrout> marcoceppi: http://paste.ubuntu.com/16691257/
<lazyPower> magicaltrout   Read:
<lazyPower>   - apachesoftwarefoundation
<lazyPower> thats the issue
<lazyPower> magicaltrout charm grant cs:~apachesoftwarefoundation/trusty/joshua-full --acl=read everyone
<lazyPower> the reason you saw it was because you were logged into the store, and had access to "read" the charm
<lazyPower> if you checked it in private mode, you would not have seen it.
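The visibility behaviour lazyPower describes above can be sketched as a toy read-ACL check: a charm is visible if the viewer (or "everyone") appears in its Read list. This is an illustrative model only; the real check happens server-side in the charm store, and the function name here is made up.

```python
def can_read(read_acl, user=None):
    """True if `user` (None = anonymous/private mode) may see the charm."""
    if "everyone" in read_acl:
        return True
    return user is not None and user in read_acl

acl = ["apachesoftwarefoundation"]                      # before `charm grant`
assert can_read(acl, user="apachesoftwarefoundation")   # logged in: visible
assert not can_read(acl)                                # private mode: hidden

acl.append("everyone")          # effect of --acl=read granted to everyone
assert can_read(acl)            # now visible to anonymous viewers too
```

This is why the charm "vanished": the browser session wasn't sending credentials the store recognized, so the anonymous path was taken.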
<mbruzek> hatch and natefinch: when a hook returns a non-zero return code, Juju pauses to let the admin fix the stuff. If the stop hooks were doing some legit backup sequence, the admin would totally want to know if that is not working
<magicaltrout> you are correct lazyPower it is perms related.... that said jujucharms.com says I'm logged in
<lazyPower> ooo
<lazyPower> weird
<hatch> mbruzek: fine, but how do they know that?
<magicaltrout> and once I granted permissions it appeared, but i'm on the homepage and can see my face
<magicaltrout> which tends to indicate i am logged in
<hatch> how do they know to resolve the units?
<magicaltrout> anyhoo, weird, but works, thanks a lot
<lazyPower> hatch - clairvoyance obviously
<hatch> lazyPower: this is how my experience has been, yes
<lazyPower> as i said, feedback is fine
<mbruzek> hatch: Charms throw all kinds of errors, resolve is the mechanism to force Juju to proceed
<natefinch> I think that if the unit is in an error state *before* calling juju destroy-unit, we should warn the user and ask them if they want to destroy it anyway
<lazyPower> but i do take issue with wholesale deletion of a service in error state
<lazyPower> unless you make it flag based where thats not default behavior
<lazyPower> as cmars suggested which is nice middle ground
<lazyPower> cmars thanks for being the voice of reason in this
<cmars> reason? me? what? :)
<lazyPower> you're the only one who piped in with adding --i-mean-it
<cmars> :)
<lazyPower> which is nice middle ground
<lazyPower> keep default behavior, give the hatches of the world an option to wholesale delete
<hatch> I really don't understand why there is so much resistance to working hard on user experience with the CLI
<hatch> there is a problem, lets work on a fix
<hatch> not drag our heels because this is how it is
<lazyPower> hatch - because you're campaigning for something that's really destructive and ops people will not appreciate it
<cmars> i think mbruzek has a good point, but i've certainly screwed up a charm with repeated `juju upgrade-charm --force-units ... ; juju resolved --retry` to the point that I have no confidence its state reflects any kind of debuggable reality
<lazyPower> yup
<lazyPower> we've all been there but thats an edge case
<cmars> there's also the case where you're trying out different versions of charms, not really developing on them, just evaluating them
<natefinch> I think the problem might just be that a hook error can mean two wildly different things - one is "something external is wrong, you have to do this concrete step to fix it before I can continue" and the other is "there's a bug in the hook, good luck!"
<cmars> oops, i got the old precise version.. nooo
<hatch> lazyPower: I strongly disagree that leaving the user to "just have to know" what to do is a good idea
<cmars> with reactive its easier to be resilient in the face of externalities
<hatch> it should always do what they expect it to do
<lazyPower> hatch - i didnt say dont give the user feedback
<lazyPower> i said dont wholesale delete my thing unless you are absolutely sure thats what i want
<lazyPower> again, conflating 2 issues
<lazyPower> please stop and re-read my exact statement of my stance in the 3 lines above
<lazyPower> because its really exhausting repeating myself on this
<cmars> hmm
<hatch> lazyPower: I just want user feedback with instructions on how to do what the user is intending
<hatch> lazyPower: what do you think the user wants to do when they type 'destroy-service' ?
<hatch> it's certainly not sit there
<natefinch> ERROR: unit is in an errored state, can't be removed. To remove anyway, use juju destroy-unit foo/0 --force.
<hatch> ^ this
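The pre-flight feedback natefinch proposes above could look roughly like this: before queuing a destroy, surface any units in an error state to the user instead of silently marking the service "dying". This is a hypothetical sketch; the function and state names are illustrative, not juju-core's real internals.

```python
def destroy_service_feedback(service, units, force=False):
    """Return the messages a destroy-service call should print.

    `units` maps unit names (e.g. "foo/0") to their agent state.
    """
    errored = [name for name, state in units.items() if state == "error"]
    if errored and not force:
        # Refuse and explain, rather than leaving the service stuck in "dying".
        return ["ERROR: unit %s is in an error state, can't be removed. "
                "To remove anyway, use juju destroy-service %s --force."
                % (u, service) for u in errored]
    return ["service %s marked for removal" % service]

msgs = destroy_service_feedback("foo", {"foo/0": "error", "foo/1": "started"})
print(msgs[0])
```

The `--force` branch mirrors cmars's "--i-mean-it" middle ground: default behaviour stays conservative, but the user is told explicitly why nothing happened and how to override it.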
<godleon>  lazyPower, good! thanks for your information. And..... is the image cache for LXD or for both?
<lazyPower> lxd requires you to import the image before you can even launch the container, so to be fair, both
<godleon> lazyPower, hmm...........it makes sense.
<ryebot> marcoceppi, lazyPower - what's the right way to fix this? https://github.com/juju-solutions/layer-basic/pull/70
<marcoceppi> ryebot: why are we just seeing this now?
<ryebot> marcoceppi still trying to figure that out - best guess is some package deps changed and are pulling in 8.1.1 now
<ryebot> cynerva, you experimented with this - can you comment?
<Cynerva> marcoceppi ryebot - not much info, i'm seeing the issue too, on xenial but not trusty
<marcoceppi> Cynerva: sure, but when I did a xenial deploy last week I didn't get this error
<Cynerva> marcoceppi yeah - seems like something changed today
<marcoceppi> so weird
<ryebot> marcoceppi yeah, last few hours, happened to both of us independently at roughly the same time
<marcoceppi> xenial has always had 8.1.1
<ryebot> marcoceppi yeah, but I think it wasn't being pulled in before
<ryebot> marcoceppi so we pulled in 7.1.2 first
<lazyPower> perhaps you used virtualenv: true, and that locked your pip version in a virtualenv instead of using system pip?
<marcoceppi> charm build and layer-basic haven't changed in a bit
<lazyPower> or perhaps thats the path forward?
 * lazyPower is not positive, but thinks that would have some good results, isolating from system deps and creating your own python tree of goodness
<marcoceppi> lazyPower: it's annoying to have to do that, but a possible workaround
<lazyPower> well we are getting kind of funky with python across series these days
<lazyPower> considering xenial is py3 default, trusty is py2 default, and we're somewhere in the middle of all that
 * marcoceppi does tests
<marcoceppi> lazyPower: py3 is on both trusty and xenial
<marcoceppi> that's not a problem, it's the versions of pip that's problematic
<marcoceppi> I can't remember why we had to upper bound pip for trusty, but we did
<marcoceppi> huh, maybe not. It looks like I just upper bound it for no good reason
<marcoceppi> ryebot: I think your pull request is OK actually
 * ryebot reopens his PR victoriously!
<lazyPower> \o/
<lazyPower> marcoceppi - i think at the time there was a dependency issue like the frequent path.py woes. i dont recall the exact details, but it was a temporary solution to a temporary problem
<marcoceppi> lazyPower: possibly
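The version-bound logic being debated above can be sketched with a minimal comparison: layer-basic pinned pip with an upper bound (something shaped like "pip>=7.0.0,<8.0.0"), so a dependency change that started pulling in pip 8.1.1 broke xenial deploys while trusty's 7.1.2 kept working. The exact bound string is an assumption; the parsing here is a toy, and real packaging tools use `pkg_resources` or `packaging` instead.

```python
def parse(version):
    """Parse a dotted version like '7.1.2' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version, lower, upper):
    """True if lower <= version < upper (the shape of an upper-bounded pin)."""
    return parse(lower) <= parse(version) < parse(upper)

assert satisfies("7.1.2", "7.0.0", "8.0.0")      # trusty's pip: accepted
assert not satisfies("8.1.1", "7.0.0", "8.0.0")  # xenial's pip: rejected by the pin
```

Removing the upper bound (as ryebot's PR does) makes both versions acceptable, which is why the pin only bit once the newer pip started getting pulled in.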
<magicaltrout> marcoceppi: whats the latest beta?
<marcoceppi> beta7
<magicaltrout> is there a ppa for that?
<lazyPower> ppa:juju/devel
<magicaltrout> aye ta
<lazyPower> or grab charmbox:devel
#juju 2016-05-26
<stub> marcoceppi: I vaguely recall issues around newer versions building platform specific extensions
<godleon> Doesn't juju-gui support xenial yet?
<jamespage> gnuoy, https://github.com/openstack-charmers/charm-specs/blob/master/specs/newton/approved/sriov-support.rst
<axino> heh
<bbaqar> Hey guys .. while deploying a node using juju with maas .. cloud-init gets stuck at add juju-br0 (/usr/bin/python2 /tmp/add-juju-bridge.py)
<bbaqar> any thoughts?
<marcoceppi> bbaqar: that's odd, is this juju 1.25?
<gnuoy> jamespage, did you in disable 'issues' in someway for openstack-charmers repos?
<bbaqar> marcoceppi: yes 1.25.5
<bbaqar> marcoceppi trusty
<bbaqar> marcoceppi so i got past that issue. Now i have my nodes in pending state .. i can ssh into then with ubuntu@ip
<bbaqar> marcoceppi: i can also ping the outside world from there
<bbaqar> marcoceppi but no juju process is running ..
<bbaqar> marcoceppi can i run the juju tools manually over there
<marcoceppi> bbaqar: not really, cloud-init is responsible for getting tools setup, I don't know if that runs before or after bridge creation
<bbaqar> marcoceppi i got past the bridge creation part .. cloud init has finished successfully
<bbaqar> marcoceppi still state of nodes are pending
<marcoceppi> bbaqar: is there an upstart job for juju* on the machines
<bbaqar> marcoceppi: there is only juju-clean-shutdown.conf
<marcoceppi> bbaqar: can you pastebin /var/log/cloud-init* logs?
<bbaqar> marcoceppi it seems like a connectivity issue ..
<bbaqar> let me sort that out
<aisrael> TIL if you're disconnected from a debug-hooks session (like the ssh connection drops), you can re-run the debug-hooks command and it'll drop you right where you left off.
<magicaltrout> the wonders of screen/tmux ;)
<aisrael> yep! I've used screen a lot, but not so much tmux. It impresses me every time I do, though.
<magicaltrout> every box I have, the first 2 things I install are Byobu and Mosh
<SaMnCo> marcoceppi: hi
<marcoceppi> cory_fu: SaMnCo is writing a bash layer but needs to connect to an interface layer. Is this how you would send data over to an interface layer? http://bazaar.launchpad.net/~ibmcharmers/charms/trusty/ibm-java/source/view/head:/reactive/ibm-java#L174
<marcoceppi> relation_call ?
<aisrael> magicaltrout: Ohh, I've never seen mosh before
<magicaltrout> aisrael: i live in the sticks and also just for when I'm moving my laptop around, saves me having to reconnect 100 terminals
<magicaltrout> saves me endless amounts of time
<jamespage> gnuoy, not that I was aware of
<aisrael> *nod* same here
<cory_fu> marcoceppi, SaMnCo: Yep.  Obviously bash is a bit more limited in data types that you can pass, so it's a good idea to keep the interface signatures down to simple strings.
<gnuoy> jamespage, ok, strange. There is no 'issues' tab between 'code' and 'pull requests' on https://github.com/openstack-charmers/charms.openstack for example
<marcoceppi> cory_fu SaMnCo is trying to do provide from http://bazaar.launchpad.net/~canonical-is-sa/charms/trusty/grafana/interface-grafana-source/view/head:/provides.py my guess is `relation_call --state {rel}.available provide "type" "port" "description"`
<cory_fu> Yep
<cory_fu> Replacing "{rel}" "type" "port" and "description", of course
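cory_fu's point above is that arguments crossing the bash/relation_call boundary arrive as plain strings, so interface methods should take simple string parameters. This toy class mimics the shape of the grafana-source `provide(type, port, description)` call; it is a sketch only, the real implementation lives in charms.reactive and the interface-grafana-source layer, which are assumed, not shown.

```python
class GrafanaSourceProvides(object):
    """Toy stand-in for an interface layer's 'provides' side."""

    def __init__(self):
        self.relation_data = {}

    def provide(self, source_type, port, description):
        # Coerce everything to strings: that's all that survives the
        # bash -> relation_call -> interface-layer boundary anyway.
        self.relation_data.update({
            "type": str(source_type),
            "port": str(port),
            "description": str(description),
        })

rel = GrafanaSourceProvides()
rel.provide("prometheus", "9090", "Cluster metrics")
print(rel.relation_data["type"])
```

Keeping the signature to positional strings is what makes the `relation_call --state {rel}.available provide "type" "port" "description"` invocation from bash line up one-to-one with the Python method.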
<jamespage> gnuoy, issues is turned off for that repo
<SaMnCo> hmm that's cool I didn't know about that relation_call stuff
<SaMnCo> I had seen it in the script, but didn't know how to use it
<gnuoy> jamespage, oh, ok, I failed to find whatever turns issues on and off
<jamespage> gnuoy, hit "settings"
<jamespage> for the repo itself, not the org...
<cory_fu> SaMnCo: Documentation is here: https://pythonhosted.org/charms.reactive/charms.reactive.relations.html#charms.reactive.relations.relation_call but it ought to also be mentioned in the jujucharms.com/docs if it isn't
<cory_fu> Also, that doc there is rather lacking as well
<gnuoy> jamespage, I did that and somehow failed to see the "Issues" radio button that's front and center. sorry for the noise.
<lazyPower> godleon - it does, if you're using juju 2.0 beta7 or > it ships with the controller. You can verify this by typing in `juju gui` after you've bootstrapped your cloud.
<icey> can somebody look at bumping the c-h version in pip? also, is there any chance to get that moving forward more regularly?
<marcoceppi> icey: we're looking to kill charm-helpers, what from it are you using?
<Spads> hm, it has some useful functions for idempotent operations on a system
<icey> marcoceppi: a lot of the contrib stuff, hookenv stuff
<Spads> have those been replaced?
<marcoceppi> icey Spads: we're working on pulling all the charm specific bits out into a new concise library; for everything in contrib, we expect the people who wish to continue those pieces to pull them out into their own projects. The openstack charms do this today with a new library called charms.openstack
<godleon> lazyPower: so it means I don't have to install juju gui additionally after bootstrapping controller on juju beta 7 or above?
<lazyPower> correct
<lazyPower> it ships with the controller
<godleon> lazyPower: can I install any charm on controller just like I did in juju 1.25?
<marcoceppi> godleon: if you want to, yes, but we still don't recommend it
<lazyPower> i wouldn't recommend it :)
<magicaltrout> lol `juju gui` opens in lynx
<lazyPower> marcoceppi gmta
<godleon> marcoceppi: oh? Why not?
<marcoceppi> magicaltrout: you have to have a web-browser installed on that machine, or use the --no-browser flag ;)
<lazyPower> hurray for launchy! its aggressively trying to get you to your gui
<magicaltrout> clearly I do, its called lynx :P
<lazyPower> via any browser necessary
<marcoceppi> godleon: well, it's the controller node, it's the thing managing all of Juju, and some software may conflict with that (mongodb charm as an example). If you want to co-locate services on the controller, you should put them in containers
<godleon> marcoceppi: hmm, if I remember right, the bootstrap node should be machine 0 in Juju 1.25, I just feel weird that machine 0 does not exist after bootstrapping the controller in juju 2.0
<marcoceppi> godleon: it does, it's just in a different model
<marcoceppi> godleon: Juju 2.0 allows for multiple models to be created on a single controller, this cuts down on the number of bootstrap/controller nodes to run
<marcoceppi> godleon: try `juju switch admin` then juju status to see the lonely machine 0 running
<marcoceppi> `juju list-models` shows all the current models, and the `juju gui` has a drop down to switch/create more models
<godleon> marcoceppi: oh yes I just saw the term `model` in the juju documentation today.
<godleon> marcoceppi: by the way, if I want to build an openstack with existing remote ceph, how can I do that?
<godleon> Is writing a charm to do that the only way?
<lazyPower> godleon - i would ask that question on the juju mailing list.  juju@lists.ubuntu.com - that way the openstackers can formulate a proper response. I don't have the expertise in openstack to answer that as is.
<godleon> lazyPower: ok, got it. Sorry for that.
<lazyPower> no worries! I just want to make sure you're finding answers to your questions :)
<godleon> lazyPower: thanks! :)
<godleon> Another question is.....will lxc/lxd support OCI standard?
<lazyPower> what is the OCI standard?
<gnuoy> jamespage, got a sec for https://github.com/openstack-charmers/charms.openstack/pull/4 ?
<godleon> lazyPower: https://www.opencontainers.org
<lazyPower> ah that i'm not sure of, probably another good question to post to the list
<jrwren> Open Container Initiative.  afaik the answer is "nope, we dont need it" as I heard in a number of presentations re:lxd
<lazyPower> or jrwren seems to have the science
<lazyPower> :)
<lazyPower> jrwren hey did i see that there was a re-work merge of the plugin work that deprecated the MP for plugin caching?
<jrwren> opencontainers and docker/kuber is all about process containers.   lxd is about system containers.
<lazyPower> re: caching stuff for charm vs charm-tools
<jrwren> lazyPower: yes, basically it is still slow, but ONLY for running help now, but we cleaned up a mess of code and now can move forward with further fixes.
<lazyPower> oh nice
<jrwren> lazyPower: so... we are still on it and in a better place to better support the other core modules (i'm not calling them plugins because they are not plugins.)
<lazyPower> yeah, i think that was nomenclature used in the thread
<lazyPower> well cool. \o/ i wont campaign on your PR anymore then :D
<lazyPower> "Merge this change! Make charm great again"
<jrwren> lazyPower: thanks.  It was a good discussion and we are in a much better place thanks to all the points raised.
<godleon> jrwren: ok, then speaking of kube, what's the advantages of juju if compares to kube?
<jrwren> lazyPower: on a completely diff topic. I started looking at beats + kibana and I'm wondering if you have any wisdom re: moving from old to new and setting up a dashboard.
<lazyPower> well you're in luck if you want dashboards for packetbeat (still pending) or topbeat
<godleon> jrwren: I am at a loss, too many tools now.
<jrwren> godleon: I'm not knowledgable re: that, but others here are. My limited understanding is that juju makes a great tool for deploying a kubernetes infrastructure.  Kubes need to run somewhere and juju can help manage that somewhere.
<lazyPower> i'm not so savvy at visualizations to build one for filebeat...
<lazyPower> with that said, kibana just got a rev that adds config/actions to deploy the "beats" dashboard
<lazyPower> so you get a prefabricated layout, its probably not 100% applicable for your wants, but its easy enough to edit
<jrwren> lazyPower: newer than kibana 4.5.1? I just installed kibana yesterday!
<lazyPower> well it just got revved on Monday
<lazyPower> soooo, you got it
<jrwren> lazyPower: link to post or news that tells me how? It wasn't obvious.
<jamespage> gnuoy, pass unit tests and work for you in your charm?
<lazyPower> uhhh jrwren
<lazyPower> did you not read the readme?
<jrwren> godleon: more here: https://insights.ubuntu.com/2015/07/30/juju-kubernetes-the-power-of-components/
<lazyPower> https://jujucharms.com/kibana/  "Deploy / Add Dashboards"
<jrwren> lazyPower: i probably did not read the readme. apt install doesn't display a readme :]
<lazyPower> :O you didnt use the charm?
<gnuoy> jamespage, yes and yes. fwiw the unit test reproduces the bug against the current codebase and is fixed by my mp
<jrwren> lazyPower: WHOA!!! I wasn't using this charm. juju actions to load dashboards is AWESOME!
<jrwren> lazyPower: well done!
<lazyPower> now i feel bad for all the mean things i was saying about you over here just now ;)
<jamespage> gnuoy, going to get travis setup on that repo for the moment
<jamespage> gnuoy, I feel blind with no automated CI
<gnuoy> jamespage, makes sense
<jrwren> lazyPower: lol. stop muttering under your breath ;]
<lazyPower> and to be completely fair it was a joint effort between myself and cory_fu to land that
<lazyPower> so props cory_fu we <3 you
<lazyPower> godleon - i think its easiest to put it this way. Juju at its core is a way to model open source operations. Kubernetes is an application container orchestrator.  They are two wildly and vastly different tools and disciplines
<lazyPower> godleon - for example, you can use juju to model a kubernetes deployment (the full infrastructure, including workloads you wish to deploy into k8s (needs citation)) however the same is not exactly true in reverse.
<kjackal> Hey lazyPower, do you have a minute? It's regarding the ELK stack bundle that is in the review Q.
<lazyPower> surely
<lazyPower> whats up kjackal?
<kjackal> The ELK bundle.yaml seems to be pointing to the containers namespace for logstash
<lazyPower> chicken/egg problem of which lands first
<kjackal> Did we promulgate logstash?
<lazyPower> i dont think we did, let me check
<lazyPower> nope
<lazyPower> https://jujucharms.com/u/containers/logstash/trusty/5  <- is the most recent revision, and it has not been promulgated yet.
<kjackal> It is waiting for a review?
<kjackal> it=logstash/trusty/
<lazyPower> it should be....
<lazyPower> it might have been reviewed and nack'd
<lazyPower> i submitted the entire belk stack at once
 * lazyPower goes fishing for a bug
<jamespage> gnuoy, https://github.com/openstack-charmers/charms.openstack/pull/5
<gnuoy> jamespage, merged and thanks for #4
<lazyPower> kjackal - hah! you reviewed it! https://bugs.launchpad.net/charms/+bug/1560167
<mup> Bug #1560167: New Charm Proposal: Logstash <Juju Charms Collection:In Progress> <https://launchpad.net/bugs/1560167>
<kjackal> lazyPower, awesome! So any comments I had seem to be addressed
<lazyPower> nope
<lazyPower> i'm updating the layer real fast while you're "not looking"
<lazyPower> :)
<kjackal> :)
<lazyPower> i take that back, i do see a merge here from you
<lazyPower> so i think it just needs a rebuild and a re-push
<kjackal> cool
<cory_fu> lazyPower: What happened to bundletester in jujusolutions/charmbox:devel
<lazyPower> cory_fu - i dont think it was reintroduced after a long hiatus - it just recently got a bump to add tims ppa and get the prereqs
<lazyPower> so it should be ready to be re-added assuming i've got all the right things in place
<cory_fu> I'm confused as to why it was removed.  It was working previously
<lazyPower> it stopped working, and it was removed months ago
<cory_fu> Ok, I see the bug now where it stopped working due to an upstream dep issue in charm-tools
<lazyPower> cory_fu https://github.com/juju-solutions/charmbox/commit/79a5733a6a248c0e8f91bb3fdf8e616f4a4f612d
<lazyPower> looks like that commit is the culprit
<cory_fu> lazyPower: Yeah, I saw that.  I was just confused because I had been using it daily for a while (but clearly hadn't in some time), and kwmonroe kept trying to tell me that it had never worked.  :P
<lazyPower> LOL
<jcastro> lazyPower: do you intent to promulgate ~containers/beats-core at some point?
<lazyPower> "this doesnt work"
<lazyPower> "i use it all the time"
<lazyPower> jcastro - err, we dont promulgate bundles? is this new?!
<lazyPower> can we do that now?!
<lazyPower> kjackal - pushed at cs:~containers/trusty/logstash-6  - do you mind reviewing from that instead of me hacking in a bzr repo for the purpose of that bug?
<jcastro> https://jujucharms.com/apache-processing-spark/
<cory_fu> tvansteenburgh: Is the old RQ ingestion / update stopped?  The top item in Charm Reviews is a 404
<jcastro> why wouldn't we promulgate bundles?
<lazyPower> jcastro #TIL
<lazyPower> so yeah, we can totally do that. the current beats-core bundle has been updated to deploy the latest on all the charms, and all the charms are pointed at promulgated revisions, so we should be g2g on that front, just need someone to flip the switch
<ryebot> What is RQ?
<lazyPower> ryebot Review Queue
<cory_fu> ryebot: Review Queue.  http://review.juju.solutions/
<ryebot> thanks
<magicaltrout> the place you send stuff to sit for years ;)
<lazyPower> sadtrombone.wav
<Odd_Bloke> s/sit/mature like a fine wine/
<lazyPower> rimshot.wav
<magicaltrout> lol
<tvansteenburgh> cory_fu: you'll need to ask marcoceppi, i can't ssh to it
<magicaltrout> I've got to 44 days, what upsets me the most is we're going to release 3.9 shortly, so I'll have to yank it and submit a new one ;)
<cory_fu> marcoceppi: Is the old RQ ingestion / update stopped?  The top item in Charm Reviews is a 404
<cory_fu> :)
<cory_fu> aisrael: You have two Cassandra RQ items locked; are you currently reviewing those?
<aisrael> cory_fu: No, I'm not. I'll go unlock those
<cory_fu> Ok
<cory_fu> thanks
<kjackal> hey lazyPower, the logstash charm says it is in  https://github.com/juju-solutions/layer-logstash is this accurate?
<lazyPower> yessir
<kjackal> I do not see any changes there after the 4th of April
<kjackal> Awesome!
<lazyPower> yep :) that was the last time it was touched. i can confirm
<ryebot> What's the best way to dynamically add a file or files to a service unit? Or even better, if possible, add it/them to all the units of a service?
<kjackal> Hey lazyPower, I am afraid we are going to need a new clean build of logstash before promulgating it
<kjackal> kwmonroe spotted the bug that we fixed some days ago in the elasticsearch interface https://api.jujucharms.com/charmstore/v5/~containers/trusty/logstash-6/archive/hooks/relations/elasticsearch/requires.py
<kjackal> We removed the broken hook but cs:~containers/trusty/logstash-6 still has it there
<kjackal> there is no point in promulgating this version of logstash as it will break when it gets tested in the context of the bundle
<jamespage> gnuoy, https://github.com/openstack-charmers/charms.openstack/pull/6
<lazyPower> kjackal ah, thats good feedback
<lazyPower> i think i have a stale interface in my path
<kwmonroe> lazyPower: friendly reminder to charm build --no-local-layers
<cory_fu> lazyPower: charm build --no-local-layers
<lazyPower> kjackal kwmonroe - ok rev -7 was just published. can you give that a whirl and let me know?
<kwmonroe> roger that
<lazyPower> ryebot your PR reverting pinning pip landed yesterday right?
<ryebot> lazyPower still under discussion: https://github.com/juju-solutions/layer-basic/pull/70
<lazyPower> ryebot ta, i've poked it with a stick
 * lazyPower now campaigns for ryebot - he has all the pips, all the best pips. 
<ryebot> :D
<ryebot> \o/
<rick_h_> huge pips?
<ryebot> You've never seen pips like this.
<lazyPower> and small pips, dont be a discriminator
<lazyPower> ryebot - is there a work around to this other than building from that mp? i'm failing left and right on a client deliverable due to this bug
<lazyPower> and whats funny is it happened today, not yesterday *magic shrug*
<ryebot> lazyPower: I haven't tried; the only thing I can think of is to make sure pip7.1.2 is installed on the host before installing the charm
<lazyPower> nope, unpinning and progressing forward to the future you promised me
<lazyPower> deliver me to the promised land
<ryebot> hahaha
<ryebot> I mean, I can just click the green button. :P
<ryebot> You're the guy with the rubber stamp though
 * marcoceppi narrows eyes at the mention of rubber and stamp
<jamespage> gnuoy, cargonza: https://github.com/openstack-charmers/charm-specs/pull/1
<lazyPower> ryebot DUDE how you gonna throw me under the bus like that?
<nottrobin> is there any way to search for available layers?
<ryebot> hahaha :D
<marcoceppi> nottrobin: http://interface.juju.solutions
<nottrobin> marcoceppi: nothing there for me
<marcoceppi> nottrobin: http://interfaces.juju.solutions
<jcastro> kwmonroe: I found a nitpick in the bundle
<nottrobin> ah great thanks
<jcastro> the charm for spark is named `apache-spark`
<jcastro> in the bundle you named it just `spark`
<jcastro> so in my mind I just went to jujucharms.com/spark
<jcastro> which doesn't exist
<nottrobin> marcoceppi: I see you wrote a django/gunicorn layer
<jcastro> so I decided to search for "spark"
<marcoceppi> I have
<nottrobin> marcoceppi: I don't suppose there's a layer for simply gunicorn?
<marcoceppi> nottrobin: probably not, well not yet at least ;)
<nottrobin> marcoceppi: cool. I may have a crack at it
<nottrobin> marcoceppi: the other layer I'd be interested in is a content-fetcher later, to grab and expand a tarball from a given URL
<nottrobin> do you know if anything like that exists?
<lazyPower> nottrobin oh! oh! oh! you may want to look into resources then!
<lazyPower> no need for a layer, juju exposes this to you natively in juju 2.0+
<nottrobin> yeah we were having a resources conversation a while ago
<nottrobin> I don't think it'll help, as it currently exists
<nottrobin> 'cos isn't it about pulling a specifically uploaded resource from the charm store?
<nottrobin> I'm talking about a user-specified payload
<nottrobin> lazyPower: ^
<lazyPower> ah yeah, the store hates charms that define resources and doesn't provide them..
<lazyPower> so when you go to publish it'll complain :(
<nottrobin> yeah. what I'm trying to write is a charm which runs a specific *type* of app
<nottrobin> but you provide the app
<nottrobin> (wsgi)
<gnuoy> jamespage, https://github.com/openstack-charmers/openstack-community/pull/8
<marcoceppi> nottrobin lazyPower you can upload a dummy resources to the store
<marcoceppi> then the user provide their resource either at deploy time, or later one
<marcoceppi> on*
<nottrobin> marcoceppi: I don't think what I want to do should be store-based
<marcoceppi> nottrobin: you don't have to have resources in the store at all
<marcoceppi> nottrobin: you can deploy resources for a charm from your local machine
<nottrobin> marcoceppi: okay maybe that could work then. do you have an example of what the commands for that look like?
<marcoceppi> nottrobin: mbruzek was documenting this, not sure where that is
<mbruzek> nottrobin: marcoceppi: it is currently a PR waiting to be reviewed. If you are OK with the non-reviewed copy: https://github.com/mbruzek/docs/blob/abd31c0962d73efea76a1381a857a279e27d384d/src/en/developer-resources.md
<nottrobin> mbruzek: assuming the commands more-or-less work, absolutely
<jamespage> gnuoy, merged
<gnuoy> jamespage, thanks
<jamespage> I'd still love to see some cookie cutter or charm create templates for these things...
<mbruzek> nottrobin: The commands to the Juju Charm Store are not working until juju-2.0 beta 8.
<mbruzek> nottrobin: but you can upload a resource to your controller using juju 2.0 beta 7 which is what I used
<nottrobin> sounds good
<nottrobin> I'll have a play
<nottrobin> mbruzek: thanks
<gnuoy> jamespage, yes, that'd be better, but in the meantime...
<jamespage> yah gotcha
<nottrobin> is there an expected timeline for juju 2.0 to be stable?
<nottrobin>  /released
<mbruzek> nottrobin: very soon
<bryan_att> hi, I'm looking for help on a Charm for the OpenStack Congress service. Working w/Narinder on #opnfv-joid,  going here also for support. Current error I have in deploying this charm is "hook failed: "install". Not sure how to debug that - no clear signs in the bootstrap log, pastebin'ed on #opnfv-joid.
<gnuoy> bryan_att, hi o/
<bryan_att> gnuoy: hi!
<gnuoy> bryan_att, what does the /var/log/juju/unit-congress*log say (on congress/0) ?
<bryan_att> gnuoy: is that log on the bootstrap node or on the target node?
<gnuoy> bryan_att, I've updated the guidelines at https://github.com/openstack-charmers/openstack-community/blob/master/openstack-api-charm-creation-guide.md fwiw
<gnuoy> bryan_att, the target node
<bryan_att> gnuoy: ok, that part was not clear. Let me look.
<bryan_att> gnuoy: I tried "juju ssh ubuntu@congress/0" - not found
<gnuoy> bryan_att, drop the ubuntu@ bit
<bryan_att> gnuoy: not found
<gnuoy> bryan_att, what does "juju status congress" give?
<bryan_att> https://www.irccloud.com/pastebin/J8qyCWkd/gnuoy%3A%20here%20you%20go
<gnuoy> bryan_att, ok, my bad, I assumed you were on unit 0 but you're on 3, so it should be "juju ssh congress/3"
<bryan_att> gnuoy: ok, let me look
<bryan_att> gnuoy: here's part of the log https://www.irccloud.com/pastebin/MRzFi9i8/
<bryan_att> gnuoy: seems the same as what is logged on the bootstrap node
<gnuoy> bryan_att, looks like the charm has a mix of tabs and spaces in it
<bryan_att> gnuoy: I saw that but it was not an error level issue
<bryan_att> gnuoy: picky parsers... if that's an error driver I can go fix it. but is that really a potential failure cause?
<gnuoy> bryan_att, I think it is
<bryan_att> gnuoy: ok let me check it and fix
<gnuoy> bryan_att, we did rejig the layers yesterday so if you haven't already it'd be worth going over that guide I mentioned again.
<bryan_att> gnuoy: ok, let me do that also. There were a number of things I had to change in the instructions btw
<bryan_att> gnuoy: I can pass them on here when I go thru it again if that's the best way
<bryan_att> gnuoy: or as issues in github - your choice
<gnuoy> bryan_att, git hub issues would be great
<gnuoy> bryan_att, I have a branch to go with the docs https://github.com/gnuoy/charm-congress
<gnuoy> bryan_att, thats the src of the congress charm so it should just be a case of building and deploying that
<bryan_att> gnuoy: ok, I thought I had to build it from scratch again based upon your guide
<bryan_att> gnuoy: the folder structure looks to have changed significantly so I was going to start over
<gnuoy> bryan_att, you can just clone that git hub branch, not sure if you've added anything that's not in the guide yet?
<gnuoy> bryan_att, only difference is the source charm directory was renamed from "charm" to "src"
<gnuoy> thats the only directory structure difference I mean
<bryan_att> gnuoy: yes, I saw that. one point to clarify before I fork and update the charm - I have to do this for OPNFV under the Apache 2.0 license and that needs to be explicit in the charm files, (c) AT&T etc... is that OK for you? I assume all the charm files can carry comments for licensing etc.
<bryan_att> gnuoy: if not I will continue by developing the one I have started with based upon your guide
<gnuoy> bryan_att, yes, that's fine
<bryan_att> gnuoy: ok, let me work on it
<bryan_att> gnuoy: I can't seem to get the command "juju deploy <full path>/build/congress" from the guide to work. What's in the "build" folder is "deps" and "trusty".
<gnuoy> bryan_att, are you using juju 2.0 ?
<bryan_att> gnuoy: not sure how do I tell
<gnuoy> juju --version
<bryan_att> gnuoy: 1.25.5-trusty-amd64
<gnuoy> ok, so...
<gnuoy> cd build
<gnuoy> juju deploy local:trusty/congress
<gnuoy> should work, although you'll need to specify a cloud archive if its trusty
<bryan_att> gnuoy: ok, that looks more like the command I used in the last attempt
<gnuoy> bryan_att, http://paste.ubuntu.com/16712437/
<gnuoy> bryan_att, I'm not sure that the congress packages are actually available on trusty tbh
<bryan_att> gnuoy: ok, I'm not sure because I'm just trying to get a charm developed. So far I have been building it from github directly.
<bryan_att> gnuoy: I'm trying to remove the existing congress service from the last attempt. But "juju remove-service congress" is taking a while...
<bryan_att> gnuoy: I can't try the new charm until the old congress service is removed, but the command doesn't seem to be removing it (no error response or log entries related to the action). What should I do?
<gnuoy> bryan_att, does "juju status congress" show its in an error state?
<bryan_att> gnuoy: yes
<gnuoy> bryan_att, "juju resolved congress/N" where N is the unit number
<gnuoy> bryan_att, you might have to issue that a few times till it's gone
<gnuoy> bryan_att, can you hangon before doing the redeploy ? I'm adding deploy from src atm
<bryan_att> gnuoy: ok, let me try that
<bryan_att> gnuoy: ok, let me know
<ryebot> cynerva and I are wondering - how do subordinate charms work with multiple series?
<ryebot> or do we just enforce a single series?
<bbaqar> marcoceppi: can you look at this cloud-init log http://paste.ubuntu.com/16713677/ and see if there is anything wrong .. node is still in pending state
<bbaqar> anyone else
<gnuoy> bryan_att, sorry, I've run out of day. I'll try and push an update up in  the morning
<bryan_att> gnuoy: np, I'll look for it
<bryan_att> gnuoy: thanks for your help!
<mpjetta> I changed some config and got 2x units stuck in "(config-changed) SSH key exchange" for some reason. any way to retry or debug what it is doing ?
<lazyPower> ryebot - you cannot mix/match subs across series
<lazyPower> juju wont let you, it will tell you the principal service and the subordinate must match series
<ryebot> +1 thanks lazyPower
<bbaqar_> Hey guys .. any idea why cloud-init would stay stuck for a long time on this /usr/bin/python2 /tmp/add-juju-bridge.py --bridge-name=juju-br0 --interface-to-bridge=eth2 --one-time-backup --activate /etc/network/interfaces
<bbaqar_> and after a while i lose connectivity to the node
<aisrael> In a reactive charm, how do you access code from the included layers? i.e., I include the storage layer, and want to call a method within it from my charm, but just `import storage` fails
<lazyPower> aisrael - unless it's shipping a lib, that's difficult. i don't think you can import reactive modules in/from one another.
<lazyPower> this was a problem statement raised by stub with the apt layer a while back
<aisrael> lazyPower: Hm. It'd be a really useful feature to have, though. I'll open a bug for it, unless there's already one
<aisrael> lazyPower: layer namespaces might be what I'm looking for
<lazyPower> Makes sense
<lazyPower> the ability to call the module by name, ergo: from storage import format_and_set_fire as cheers
<aisrael> lazyPower: exactly
<lazyPower> storage being reactive/storage.py
<aisrael> Adding it to storage now
<kwmonroe> cory_fu: i'm thinking we should make zeppelin standalone (vs spark subordinate).  you ok if i poke around for a rabbit down there?
<cory_fu> Yeah, as long as it's feasible (i.e., Zeppelin doesn't have some hard lib dependencies on Spark or otherwise have to be co-located) it seems like the better topology
<kwmonroe> yeah cory_fu, it's easier if it's on spark (because it can look at SPARK_HOME), but i think it would be better to discover the relevant info from the spark relation.  it's not clear to me if the zepp binaries include enough spark jars though.  i'll have to see what those debs look like.
<cory_fu> Presumably Zeppelin would then be able to work without Spark entirely?
<cory_fu> (Using some other backend)
<kwmonroe> yup cory_fu.. it's good as a general notebook (%md + %sh), but also has %hive and plenty of other interpreters that i think could be useful without spark... of course spark is super useful too (which is why we made it a sub in the first place)
<cory_fu> Yeah, +1, let's break it out
<magicaltrout> there's a bunch more always in motion on the mailing list as well
<magicaltrout> especially as its just become a TLP
<kwmonroe> what's this magicaltrout?  zepp is out of the incubator?
<kwmonroe> cory_fu: this is redundant, right?  basic: packages: would do the same thing? https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/layer.yaml#L58
<cory_fu> Yess
<cory_fu> That was a copypasta from the dist.yaml
<magicaltrout> yeah kwmonroe it graduated a couple of weeks ago
<kwmonroe> neat
<kwmonroe> dupey dupey here too cory_fu: https://github.com/juju-solutions/layer-hadoop-client/blob/master/layer.yaml#L4
<cory_fu> kwmonroe: Yep.  Well, hadoop-client and bigtop-base both originally copypasta'd it from hadoop_base, which copypasta'd it from dist.yaml.
<cory_fu> So it's in layer-apache-hadoop-base as well
<kwmonroe> mmmmm.  pasta.
<cory_fu> kwmonroe: Does the order that we apply the patches in bigtop-base matter?
<cory_fu> I guess it might
<kwmonroe> depends on how tolerant patch is to fuzz.  i can't remember if those were built on top of each other (meaning patch-1 adds 10 lines, patch-2 starts 10 lines down), or if they were independent.
<kwmonroe> regardless, i'm like 99% sure the default patch invocation will find the blocks and do the right thing.  for sure we don't force fuzz=0
<cory_fu> kwmonroe: https://github.com/juju-solutions/layer-apache-bigtop-base/pull/12
<bbaqar_> hey guys can i edit the juju cloud-init script ?
<magicaltrout> client asks for bespoke work to be done, and gives us a windows box to test on. We validate bespoke work on windows and hand it over. Client then tries to install bespoke work on Linux. We rework bespoke work to work on Linux. Client then says stuff isn't working, and I find out they're back to using windows
<magicaltrout> if lazyPower wasn't around, I'd be swearing, a lot
 * lazyPower eyes narrow as he reads backscroll
<lazyPower> in this instance, i'll allow it magicaltrout - so long as it follows the ubuntu CoC
<magicaltrout> hehe
<lazyPower> because yah, thats poopy
<magicaltrout> i dunno, clients suck
 * lazyPower sings the hot garbage song
<magicaltrout> technically its java, so you know, it shouldn't matter, but you tend to test on the target OS and then figure out the nuances later
<lazyPower> yeah, mbruzek i mean java is supposed to run everywhere
<mbruzek> It does
<lazyPower> but thats the rub, it was all marketing \o/ if your devs dont code it to be platform independent
<lazyPower> i used to be over this cloud based call center platform i inherited by the name of five9, they would get really irate when i called in for support and was on linux.
<magicaltrout> the only java issues we face cross platform are like this one, filesystem access based stuff
<jrwren> bbaqar_: I think I have an open feature request for that, but no, currently you cannot.
<magicaltrout> with paths, separators, browsers putting in different separators etc
<magicaltrout> absolute killer
<lazyPower> critical path stuff
<lazyPower> yeah
<lazyPower> you'd think there would be an abstraction for that by now, i mean it's only been what? 3 decades?
<lazyPower> even *python* has that
 * lazyPower hides
<magicaltrout> hehe
<jrwren> write once. debug everywhere.
<magicaltrout> technically c:/ on windows is fine
<magicaltrout>  / on linux is fine
<cory_fu> kwmonroe: Not saying we shouldn't test that PR, but the patch order should be the same due to sorting by ticket number (in the filename)
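[Editor's note: the deterministic ordering cory_fu describes can be sanity-checked in a couple of lines. The patch filenames below are invented for illustration; they just follow the ticket-number-first convention mentioned above.]

```python
# Hypothetical patch filenames following a TICKET-number-first
# naming convention, as described in the discussion above.
patches = [
    "BIGTOP-2432-fix-classpath.patch",
    "BIGTOP-1234-initial-port.patch",
    "BIGTOP-1999-tune-heap.patch",
]

# Applying patches in sorted() order is deterministic because the
# ticket number leads the filename.
ordered = sorted(patches)
print(ordered[0])  # BIGTOP-1234-initial-port.patch
```

One caveat: lexical sorting only matches numeric order while the ticket numbers have the same digit count (e.g. "BIGTOP-999" would sort after "BIGTOP-1000").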
<magicaltrout> but.. browsers still try and do the right thing and do c:\ ....
<magicaltrout> fail
<magicaltrout> users are worse though
<magicaltrout> they fill in all their variables incorrectly, every time ;)
<magicaltrout> never trust a user to fill a variable
<cory_fu> magicaltrout: Just use os.sep and os.path.join and you should be fine.  ;)
<jrwren> bbaqar_: i'm wrong. I never created a bug. if you want to, I will +1 it ;]
<magicaltrout> now funny you say that, we had more issues with paths by using the java filesep variable, than just hardcoding everything to /
<magicaltrout> because then you'd end up with weird stuff like c:\myfolder/path/to/my file
<magicaltrout> at which point I wanted to cry
<magicaltrout> anyhow, I digress. The real point is that clients suck.
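[Editor's note: the mixed-separator failure magicaltrout describes (c:\myfolder/path/to/my file) is easy to reproduce. This sketch uses the stdlib's concrete ntpath/posixpath modules so both platforms' behaviour shows on any OS; note that the in-path separator is os.sep, while os.pathsep is the ":" or ";" used between entries of PATH-style lists.]

```python
import ntpath     # os.path implementation used on Windows
import posixpath  # os.path implementation used on Linux/macOS

# Joining with the platform-aware API yields consistent separators.
win = ntpath.join("c:\\myfolder", "path", "to", "my file")
nix = posixpath.join("/srv/myfolder", "path", "to", "my file")
print(win)  # c:\myfolder\path\to\my file
print(nix)  # /srv/myfolder/path/to/my file

# Hardcoding "/" inside a component and then joining on Windows is
# what produces the mixed-style paths described above:
mixed = ntpath.join("c:\\myfolder", "path/to/my file")
print(mixed)  # c:\myfolder\path/to/my file
```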
<bbaqar_> jrwren: i ll definitely open one but is there anywhere i can get the script from?
<jrwren> bbaqar_: once a machine is deployed you can always get it from /var/lib/cloud/instance/cloud-config.txt  you'll notice it's very small and basic for juju managed machines.
<bbaqar_> jrwren: cool .. let me check
<x58> lazyPower: Hey, this is Bert. I'm working mattrae
<x58> with *
<x58> lazyPower: One of the things I've noticed with etcd and static clustering is that you have to bring up the followers one at a time. I have noticed with the old etcd charm, and the new one, that the followers will both attempt to cluster at the same time, which goes horribly wrong because both do member add, and things go boom.
<marcoceppi> x58: I think lazyPower has left for the day
<x58> marcoceppi: He'll get it when he gets back then :-)
<marcoceppi> x58: absolutely
#juju 2016-05-27
<arosales> Hello
<arosales> GlueCon was an interesting conference. Seems Juju fits into a lot of use cases and problem scenarios represented
<arosales> There is a similar conference in November; folks interested in presenting their work should submit CFPs to
<arosales> http://defragcon.com/
<godleon> Hi all, can I use remote LxD to be Juju's cloud provider ?
<rick_h_> godleon: no, it does not yet support remote lxd endpoints, just the local one
<godleon> rick_h: oh ok, I found juju consumes as many physical machines as the number of charms I drag and drop into the juju gui.
<x58> arosales: How's it going?
<lazyPower> x58 o/
<x58> lazyPower: Heya :-D
<lazyPower> Groovy, mattrae tipped me off you might be here :)
<x58> Yeah, I posted some commentary ^^^
<x58> Let me know if you need me to grab that backlog and repeat it.
<godleon> Does juju-deployer support juju 2.0 ?
<lazyPower> Ah i see
<lazyPower> but what i've seen out of the new layer is they reconcile when the leader sets the info after both have completed their membership add, so that initial bring-up failure is ok
<x58> On our machines we haven't had them cluster yet :-(
<lazyPower> I understand, i think its a race condition in the charms logic
<x58> The leader wins, and then one will do member add, and never seem to join the cluster, then the other will member add, and that one can't cluster because two nodes can't be in "new member" state at the same time.
<lazyPower> ah
<lazyPower> good insight!
<x58> Also, each of the new nodes has to have INITIAL_PEERS or whatever that variable is (I don't have it in front of me at the moment) of all the other nodes in the cluster
<x58> when you try to add two at the same time, they won't ever get a proper peerlist.
<x58> and fail to cluster.
<x58> You have to stagger them...
<x58> On the old charm, not the layered one, I tried adding a random sleep, and that worked about 30% of the time, mostly because I can't guarantee that one happens after the other
<x58> So while 1 might be sleeping 20 seconds, the other might sleep for 10, and that second one may have taken longer to install/bring up, and then no cluster ;-)
<x58> etcd is just a giant pain ;-)
<lazyPower> I can do a better job of coordinating the registration (new state) and when we attempt to start etcd post rendering the defaults file. Each unit knows their identifier in that string, so it makes sense for me to figure out if we have registered with the leader and are really ready to attempt turn up.
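[Editor's note: the serialization lazyPower sketches - only one follower performs its member-add at a time, gated on leader-held state - can be illustrated with a toy simulation. This is not the charm's actual code; the class and method names are invented for the example, and retries stand in for repeated hook runs.]

```python
class ToyLeader:
    """Hands out a single registration slot so followers register
    one at a time instead of racing on 'etcdctl member add'."""

    def __init__(self):
        self.cluster = ["leader"]
        self.in_flight = None  # unit currently allowed to register

    def request_slot(self, unit):
        if self.in_flight is None:
            self.in_flight = unit
            return True
        return False  # busy: the unit retries on a later hook run

    def complete(self, unit):
        # member add + join succeeded; free the slot
        assert self.in_flight == unit
        self.cluster.append(unit)
        self.in_flight = None


def converge(leader, followers):
    # Each pass over 'pending' models another round of hook
    # executions; a unit that misses the slot simply retries.
    pending = list(followers)
    while pending:
        for unit in list(pending):
            if leader.request_slot(unit):
                leader.complete(unit)
                pending.remove(unit)
    return leader.cluster


print(converge(ToyLeader(), ["etcd/1", "etcd/2"]))
# ['leader', 'etcd/1', 'etcd/2']
```

The point of the toy is that the two followers can never both be in the "new member" state at once, which is exactly the race x58 reported.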
<x58> I am getting my feet wet with Charms... so I'd definitely be interested in seeing how that is done.
<x58> but that sounds like it would potentially work.
<lazyPower> well lets start here :) https://github.com/juju-solutions/layer-etcd
<lazyPower> ill ping you on the PR so you can get some eyes on it
<x58> Oh, I'm familiar with that code :P
<x58> @bertjwregeer
<lazyPower> \o/
<x58> BTW, pip shouldn't be installed on Xenial, since it has pip3 installed by default.
<x58> I had to rm -rf pip*.tar.gz from wheelhouse
<lazyPower> pips coming from layer-basic
<x58> Ah
<lazyPower> we just merged or had a great discussion about merging something that fixes that
 * lazyPower goes and checks
<lazyPower> yep! https://github.com/juju-solutions/layer-basic/pull/70 landed
<x58> Awesome.
<arosales> x58: hello, going good just got some dinner
<x58> arosales: I'm working on Charms, helping debug a bunch of the OpenStack ones :P
<x58> Didn't even have to join your team ;-)
<x58> arosales: Mad respect for what you and your team do though. It's making my life easier :-D
<arosales> x58: we are one big team here, hopefully the charms also help you out as well :-)
<arosales> x58: glad you popped in here
<x58> arosales: mattrae asked me to, to be able to chat with lazyPower about the etcd charm =)
<arosales> Good suggestion by mattrae
<arosales> Lots of collective knowledge here
<x58> Yeah, I'll stick around :-)
<arosales> x58: also trying to make the openstack charm dev process more straightforward
<arosales> Getting some docs around the dev workflow now that one can deploy to LXD and dev on their desktop or laptop
<x58> Sounds fantastic :-D
<x58> The current openstack charms are interesting though, they spawn some insane amount of processes if you have a ton of cores on a node... overkill amount of processes :P
<arosales> x58: do you know of the openstack irc charm meeting?
<x58> I do not.
<arosales> That process spawning is a function of the openstack services though, I think
<x58> Well, the charm has a multiplier on it that is set to 2 by default, 2x the number of CPU cores on a node.
<x58> Our nodes are 80 core monsters... so 160 keystone WSGI processes get started :P
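[Editor's note: the process counts x58 describes follow directly from the multiplier. A quick sketch of the arithmetic - the function name is invented; the real charms compute this internally, and the rounding behaviour here is an assumption:]

```python
def worker_count(cores, multiplier):
    # A fractional multiplier would let big boxes run fewer workers,
    # but the charm option only accepts integers, hence the request
    # mentioned below for 0.5 support.
    return max(1, int(cores * multiplier))

print(worker_count(80, 2))    # 160 -- the keystone WSGI storm above
print(worker_count(80, 1))    # 80
print(worker_count(80, 0.5))  # 40 -- what a 0.5 multiplier would allow
```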
<x58> arosales: When is the openstack irc charm meeting?
<arosales> Fit Nova?
<x58> Fit nova?
<x58> It's the APIs right now, nova-api, glance-api, keystone-api, all of those have multipliers on them. Led to some interesting process lists ;-)
<arosales> x58:  I was just looking for the meeting time, and couldn't find it. gnuoy leads it and I'll ask him to repost to the list
<arosales> Sorry I meant is that multiplier on the nova charm ?
<x58> Yeah
<x58> I am not at work, so I can't give you the list of them, but we ended up setting them to 1, we'd like to set that to 0.5 since we are still running so many more API processes than required for the openstack cluster.
<x58> I think Matt may have entered info into Salesforce for that, since the multiplier only takes integers.
<arosales> Ok I'll take a look also
<stub> aisrael: The pull request at https://github.com/juju-solutions/charms.reactive/pull/51 fixes this and a number of other import related glitches, so you can do 'import reactive.storage' or whatever when you need
<stub> aisrael: That said, best practice seems to be to put your 'api' things in lib/charms somewhere, which can be imported just fine without the patch
<arosales> godleon: juju deployer has been updated for 2.0 but native juju deploy should just work now
<stub> aisrael: I need both, so I have a workaround in the PG charm. If you create a lib/reactive/__init__.py file like http://bazaar.launchpad.net/~postgresql-charmers/postgresql-charm/built/view/head:/lib/reactive/__init__.py, it gets invoked when you do an 'import reactive.whatever' and extends the search path to find the real reactive package (using standard mechanisms, not the horrible symlink hack I first came up with)
<stub> lazyPower: ^
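[Editor's note: the workaround stub links - a lib/reactive/__init__.py that extends the package search path "using standard mechanisms" - relies on the same effect as pkgutil.extend_path: letting one package name resolve submodules across several sys.path entries. The sketch below reproduces that mechanism in temp directories; the paths and the format_disks function are invented for the demo, not taken from any real charm.]

```python
# Self-contained demo: two directories each contribute to a single
# 'reactive' package, the way a charm's lib/ and hooks/ trees can.
import os
import sys
import tempfile

base = tempfile.mkdtemp()
lib_dir = os.path.join(base, "lib")      # stand-in for the charm's lib/
hooks_dir = os.path.join(base, "hooks")  # stand-in for where reactive/ lives

for d in (lib_dir, hooks_dir):
    os.makedirs(os.path.join(d, "reactive"))

# The __init__ found first on sys.path extends the package's search
# path to cover every sys.path entry containing a reactive/ dir.
with open(os.path.join(lib_dir, "reactive", "__init__.py"), "w") as f:
    f.write("from pkgutil import extend_path\n"
            "__path__ = extend_path(__path__, __name__)\n")

open(os.path.join(hooks_dir, "reactive", "__init__.py"), "w").close()
with open(os.path.join(hooks_dir, "reactive", "storage.py"), "w") as f:
    f.write("def format_disks():\n    return 'formatted'\n")

sys.path[:0] = [lib_dir, hooks_dir]

# The import now works even though storage.py lives in the *other*
# reactive/ directory.
from reactive.storage import format_disks
print(format_disks())  # formatted
```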
<godleon> arosales: do you mean I can just use juju deploy to replace juju-deployer?
<x58> YEs
<arosales> godleon: correct
<godleon> ok....... it seems juju-deployer will phase out in the future......?
<arosales> godleon: bundle inheritance is the one feature that didn't come across that people do ask about
<arosales> Juju deployer should phase out
<arosales> Dev will be focused on native deploy
<godleon> arosales: I got it. Thanks! :)
<arosales> godleon: np
<godleon> lazyPower: thanks for your information about juju and kube. :)
<lazyPower> godleon np
<godleon> Hi all, is it possible the ssh key didn't get injected into a machine when deploying it?
<godleon> I found I failed deploying a charm because of "hook failed: install", and when I executed "juju debug-hooks xxxx/x", I got permission denied as the response.
<godleon> and I also found all charms I tried to deploy on that machine hang for a long time, and can not be fixed by juju resolved
<jamespage> gnuoy, if you have cycles: https://review.openstack.org/#/c/320450/
<Alex_____> Hi, could anybody point me to the docs on using spot instances with juju on AWS please?
<kjackal> Hey Alex_____ , I do not think we support provisioning of spot instances. (At least couldn't find any docs for this)
<kjackal> Alex_____: How/why are you asking? I mean how did this question come about?
<kjackal> Hey gnuoy, gavriil here has some questions on Openstack and MaaS and he could use our help!
<gnuoy> hi gavriil
<gavriil> hello
<gavriil> i am interested in a "cloud platform" which will spawn and manage virtual machines for me
<gnuoy> gavriil, Openstack sounds like a good candidate
<gavriil> yes, indeed. I tried to install it manually on my 2 desktop pcs but i failed.
<gnuoy> gavriil, what problem(s) did you hit?
<gavriil> i used tutorial and the official documentation, but each time i hit different bugs that i couldn't overcome. Some of them were shared by other users and weren't resolved yet.
<gavriil> tutorials*
<gavriil> and others maybe caused by my novice level of networking skills
<gnuoy> gavriil, have you looked at conjure-up ?
<gnuoy> http://conjure-up.io/
<gavriil> no, does it have any specific requirement?
<gavriil> https://www.dropbox.com/s/gein73gkjlcoihr/systemCloud.pdf?dl=0 this is my setup.
<gnuoy> gavriil, I believe it can be used to either deploy Openstack fully in containers on your laptop or to utilise maas and give you some options about placing services. tbh I haven't used it.
<gnuoy> gavriil, tbh, I think the next steps would probably be to go through the process (with or without conjure-up) and shout when you hit an issue
<gavriil> my pcs are i5 4590 3.3ghz 16 gb ram and i7 860 2.8ghz 16 ram . Each one has 2 wired network interfaces and one wireless.
<gavriil> is it enough for a two node installation?
<gavriil> the second pc has 12 gb ram*
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charm-helpers/swift-dev-support/+merge/295918
<gnuoy> gavriil, it depends on whether those two nodes need to supply the maas server and the juju bootstrap node or if you have other machines (or kvms) for those.
<gnuoy> jamespage, +1 can I leave you to merge it?
<jamespage> gnuoy, yah
<jamespage> gnuoy, and another - https://code.launchpad.net/~james-page/charm-helpers/is-ip-ipv6/+merge/295920
<gnuoy> jamespage, +1
<jamespage> gnuoy, ta  - landed
<gavriil> gnuoy, i can try to host the maas server on my VM or buy another low end desktop PC (i3) with enough ram. Can you suggest how to set up my machines and what network configuration is needed (if any), before registering them to MaaS.
<gnuoy> gavriil, you could have the two servers maas is going to manage and maas on a dedicated network. The second nic on each of the servers maas is going to manage could be used to route traffic in and out of openstack once it's installed.
<gavriil> gnuoy, correct me if i got it wrong, i need 3 machines for maas deployment, one for maas controller and 2 for running openstack.  My router provides dhcp (internet connection) and all 3 machines should be connected to it. The first step is the installation of the maas controller which will automatically detect the dhcp of the router and start managing it. After that i will register the remaining two pcs to MaaS.
<gavriil> The last step is the use of juju to spawn containers that will host the openstack services?
<gavriil> gnuoy, can you please fill me in with any important logical steps that i probably ommited?
<gnuoy> gavriil, the logical steps seem fine. I don't think maas is going to automatically detect the dhcp of the router and start managing it, it'll manage whatever IP range you give it. Also, the nodes under maas control do need internet access but you may choose to proxy that internet access through another machine, like the maas controller.
<gnuoy> gavriil, usually I'd have maas on a dedicated vlan, something like https://docs.google.com/drawings/d/12YgpEucC0OADkVrVzwNXe1V7lwrvcsBfsTwPmz-h6Co/edit?usp=sharing
<jamespage> gnuoy, can you take a look at https://review.openstack.org/#/c/322035/ pls
<gavriil> gnuoy, Two parts are not clear to me. First, the nodes under maas control are connected to the first network, so they can have internet access through the router. So why should i proxy the internet access through other machines? Second, in your drawing there are 2 networks. Is the maas network virtual or physical, and what is its purpose? Thank you very much for your help!
<Alex_____> kjackal: sorry, was AFK. The question comes from the idea the juju can help data scientists and engineers to save money on AWS by avoiding EMR fee and using cheapest on-demand option which is spot instance one. Could you point me to the codebase wich does an AWS provisioning? I'd love to understand how hard would it be to add spot instance support
<kjackal> Alex_____ the provisioners live in juju-core
<kjackal> let me see if I can spot them
<kjackal> Alex_____: have a look here: https://github.com/juju/juju/tree/master/provider/ec2
<Alex_____> kjackal: thanks! if I think more about it - how do you think, would it be possible to work around by manually provisioning N spot instances and then using Juju just to deploy everything on running instances?
<kjackal> This is an option yes! However, I have a feeling that adding support for provisioning Ec2 spots is a contribution that the juju-core devs will find it hard to resist :)
<Alex_____> that sounds awesome :) Can I help somehow to make it even more attractive for the juju-core devs?
<Alex_____> I'm looking at https://jujucharms.com/docs/1.24/config-manual is that the right place for described workaround?
<kjackal> Alex_____ : The _not_recomended_way_ is what digital ocean is doing https://jujucharms.com/docs/1.24/config-digitalocean source in here https://github.com/kapilt/juju-digitalocean
<kjackal> Alex_____ Again Alex if you are to spend time in automating the process of spot instances you are strongly advised to do it in juju-core and offer it as a contribution
<kjackal> there are certain limitations when using the manual provider (e.g. you cannot add-unit)
<Alex_____> kjackal: got it. Thanks for letting me know, that helps a lot!
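[Editor's note: the workaround Alex_____ floats - provision spot instances yourself, then hand them to Juju via the manual provider - boils down to one "juju add-machine ssh:user@host" call per instance. A hedged sketch: the helper name and IP addresses are placeholders, it only prints the commands rather than running them, and the manual-provider limitations kjackal mentions still apply.]

```python
# Hypothetical helper: build the CLI calls for registering already-
# provisioned spot instances with Juju's manual provider.
def registration_commands(ips, user="ubuntu"):
    return ["juju add-machine ssh:%s@%s" % (user, ip) for ip in ips]

# Placeholder addresses standing in for spot instances provisioned
# out of band (EC2 console, an API client, etc.).
spot_ips = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

for cmd in registration_commands(spot_ips):
    print(cmd)  # e.g. juju add-machine ssh:ubuntu@10.0.0.11
```

Once the machines are registered, services can be placed on them explicitly (e.g. with --to), which is the "Juju just deploys onto running instances" half of the workaround.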
<kjackal> Alex_____  Actually there is an email that just landed on the juju list that you might want to keep an eye on. The title is "Juju support for Synnefo"; the people from a cloud called okeanos are interested in contributing a provider for their cloud
<kjackal> I am very curious what the recommended way to approach this kind of problem is. Perhaps Alex_____ you could also ask on the list about the proper way to extend an existing provider
<kjackal> I am sorry I cannot answer this, as I am not from the juju-core team
<Alex_____> kjackal: thank you so much for pointing this all out! I appreciate, kjackal. It is exactly the information I was looking for. I would keep an eye on the mentioned thread as well as ask this question on the list
<Alex_____> I'm building use cases for a workshop about open source data analytics tools that people can use for their hobby projects at home and cost is very important factor here
<kjackal> So Alex_____ why do you need EC2 spots?
<kjackal> Sorry go on
<Alex_____> sure! so the idea is: with recent adoption of Juju in Apache Software Foundation (BigTop especially, Zeppelin, etc) more and more people will start looking into it as an open source option for their hobby\part-time projects and those are people who love opensource and are cheapskates (in a good way), so this value proposition will be dear to their hearts (I'm one of them :) ).
<kjackal> Sound cool!
<Alex_____> And then the same people will bring it thought the doors of their organizations to daytime jobs later on. So it should help the adoption
<Alex_____> Having an AWS provisioner that supports something like `add-unit` with spot instances for adding more workers to the instance group would be dope
<gnuoy> gavriil, the first nic is used by maas to provision the servers (dhcp, pxe etc) and when the servers are installing, traffic like package updates is routed through the maas node; the second nic is not used at all. In fact the second nic could be attached to any network you like since I was thinking it would be used to route traffic in and out of the vms that you spin up within openstack (they would act as the ext-port https://api.jujucharms.com/charmstore/v5/trusty/neutron-gateway-5/archive/config.yaml)
<jcastro> Alex_____: https://bugs.launchpad.net/juju/+bug/945862
<mup> Bug #945862: Support for AWS "spot" instances <pyjuju:Confirmed> <https://launchpad.net/bugs/945862>
<jcastro> I agree 100%, I'll talk to the core team about it the next time we meet
<jcastro> Alex_____: it would help us out tremendously if you could add a comment to the bug with some of the things you've outlined here in IRC.
<Alex_____> jcastro: sure! I always fancied that nice launchpad account but was always too lazy to register :)
<magicaltrout> also point out you have a beard
<magicaltrout> you'll get more kudos
<jcastro> Alex_____: the reason I ask is it's one thing if I ask, like when I filed a bug. But it's a totally different priority when someone who uses it in real life +1's a bug
<Alex_____> jcastro: makes perfect sense, I
<jcastro> but yes, I have wanted this feature for a very long time, so I look forward to having evidence that someone would use it
<jcastro> for example, in a bundle, it would be nice to do something like:
<jcastro> servicename:
<jcastro>    allow_spot: true
<jcastro>    min_ondemand: 3
<jcastro> so if I add units past three, they go spot instances, but I want enough ondemand instances to keep the service up and reliable
<jcastro> but other than that, go as cheap as possible
<jcastro> and then of course, we could have constraints on cost, just like we do for cpu and memory
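jcastro's proposed stanza, collected into one hypothetical bundle fragment — `allow_spot` and `min_ondemand` are option names invented in this discussion, not anything Juju actually supports, and the charm and unit counts are placeholders:

```yaml
# Hypothetical bundle syntax sketched above -- Juju does not
# implement allow_spot / min_ondemand.
services:
  servicename:
    charm: cs:trusty/some-charm
    num_units: 5
    allow_spot: true      # units beyond min_ondemand may be spot instances
    min_ondemand: 3       # keep at least 3 on-demand units for reliability
```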
<Alex_____> yup, that sounds like a perfect plan
<magicaltrout> it makes sense
<magicaltrout> especially on services that can fail hard
<Alex_____> jcastro magicaltrout did my best at https://bugs.launchpad.net/juju/+bug/945862
<mup> Bug #945862: Support for AWS "spot" instances <pyjuju:Confirmed> <https://launchpad.net/bugs/945862>
<gennadiy> hi everybody. we have openstack with security groups disabled - `enable_security_group = False`. in this case juju can't create a machine because it requires a security group. is there a juju parameter to prevent it from creating security groups?
<gennadiy> another question: is it possible to specify the number of network interfaces?
<gnuoy> bryan_att, Hi, I've deployed successfully on trusty http://paste.ubuntu.com/16729946/ . Creating and listing a datasource is the full extent of my testing though :-)
<SaMnCo> marcoceppi cory_fu: ping
<SaMnCo> I used your advice from yesterday, nearly flawless victory, so thanks
<SaMnCo> but I have a problem with arguments of the provide function that are optional
<SaMnCo> I don't understand how to pass them in bash
<SaMnCo> essentially, I have 2 sets of arguments that can be used, combined or not
<SaMnCo> both optional
<SaMnCo> relation_call doesn't let me specify the names of the arguments, so I am sort of out of business
<SaMnCo> any thoughts?
<cholcombe> dosaboy, do you have experience with sphinx docs?
<dosaboy> cholcombe: not recently
<cholcombe> dosaboy, ok.  i was wondering why my :param list: blah blah isn't parsing properly with sphinx
<cholcombe> it's so picky about syntax it seems
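A guess at what tripped up cholcombe: in Sphinx's info-field syntax the type goes between `param` and the argument name, so `:param list: blah` makes Sphinx treat `list` as the parameter's *name* rather than a type. A minimal sketch (the function itself is hypothetical):

```python
def scale(values, factor=2):
    """Multiply each element of a list by a factor.

    :param list values: numbers to scale -- the type ("list") goes
        between ``param`` and the argument name in a single field
    :param int factor: multiplier applied to each element
    :returns: a new list with each element scaled
    :rtype: list
    """
    return [v * factor for v in values]
```

Alternatively the type can be given in a separate `:type values: list` field, which is equivalent.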
<gnuoy> jamespage, you've given yourself a +1 on https://review.openstack.org/#/c/322035/1 , I don't think that's really the done thing is it?
<jamespage> gnuoy, I was using that to make your life easier - those are ones I think are ready to go
<jamespage> but I'd not told you that yet :-)
<jamespage> gnuoy, https://review.openstack.org/#/q/status:open+topic:charmhelpers-resync
<gnuoy> ah, ok
<gnuoy> jamespage, what did you and beisner agree was the minimum for osci to run to approve a charm helper sync?
<jamespage> gnuoy, we've agreed it change by change - but fwiw I think these are OK with a smoke only
<gnuoy> jamespage, yep, +1
<jamespage> gnuoy, https://review.openstack.org/#/c/322035/ is ready as well + 3 more on the general sync list
<marcoceppi> SaMnCo: I was under the impression you could pass key=val to relation_call but I could be wrong
<SaMnCo> I tried that but it didn't work, I thought it would as well
<SaMnCo> Maybe I should try from scratch
<SaMnCo> let me do that
<SaMnCo> marcoceppi: would something around those lines be better?
<SaMnCo> https://www.irccloud.com/pastebin/NArzBWDD/script
<marcoceppi> SaMnCo: that wouldn't really work, I don't think, due to lack of context
<icey> we don't seem to be testing charm-tools with python3?
<jamespage> gnuoy, https://review.openstack.org/#/q/topic:charmhelpers-resync+status:open more ready to go
<cholcombe> jamespage, cinder question.  It looks like huawei needs an xml configuration file.  The HuaweiSubordinateContext that I return to Cinder is just a json blob.  Will cinder take care of writing that xml file or do I need to patch that also?
<jamespage> cholcombe, the subordinate is responsible for writing anything that is specific to it
<cholcombe> jamespage, gotcha.  ok that's fine
<jamespage> cholcombe, so for ceph - > ceph.conf and the client keys
<jamespage> cinder-ceph that is
<cholcombe> jamespage, i'm just going off of your vmware cinder driver code.  It looks like it returns a context back to cinder
<jamespage> cholcombe, it might pass some data back to cinder, but that's cause it need to be written into cinder.conf
<jamespage> sticking to the principle that only a single charm can own a file, it must be done that way
<cholcombe> ok i think i know what it needs then. Just the basic use this driver, here's the config file, etc
<jamespage> cholcombe, most likely
<cholcombe> how do we unit test layered charms that depend on packages that are installed via the wheelhouse?
<lazyPower> I'm pretty sure you can just pipe in the dependency list to tox...
<jamespage> cholcombe, by building the charm and writing amulet tests for it...
<icey> jamespage: issue is with dependencies
<jamespage> cholcombe, oh sorry - yes I see - well talk with tinwood and gnuoy - they have this figured out
<jamespage> cholcombe, icey: but broadly it involves building a tox virtual env, installing what you know is needed for unit testing and executing some unit tests...
<icey> jamespage: it gets more painful, trust me :)
<jamespage> icey, cholcombe: all I'm saying is don't repeat thinking that might have already been done in this space :-)
<cholcombe> jamespage, yup :)
<icey> jamespage: a big part of my issues seems to be that I'm migrating our python2 unit tests to python3+layers
<jamespage> icey, quite possibly
<magicaltrout> I was about to jokingly ask marcoceppi where you submit papers for the charmer summit
<magicaltrout> and then saw the button on the website.....
<marcoceppi> magicaltrout: haha already a step ahead!
<magicaltrout> :P
<magicaltrout> I'm on a talk submission afternoon
<marcoceppi> magicaltrout: well I may have one more for you
<magicaltrout> hook me up
<jcastro> marcoceppi: hey any word from design for summit.j.s?
<marcoceppi> jcastro: not yet
<jcastro> I would like to totally announce that badboy
<jamespage> gnuoy, three more to go if you have two ticks - https://review.openstack.org/#/q/topic:charmhelpers-resync+status:open
<magicaltrout> there you go, partner summit proposal submitted
<cory_fu> SaMnCo: I'm actually out today, but since I'm here for a second, I can tell you that reactive uses charmhelpers' cli module, and it looks like any params with default values ought to be treated as options that must be provided as --var_name=value
<SaMnCo> cory_fu: I'll try that
<SaMnCo> thanks, and sorry to put you out of your vacation
<cory_fu> SaMnCo: That said, the calling convention for methods via the CLI is necessarily more restricted than in Python, so there may be things that you simply cannot pass in from bash that you could from Python.  It would be up to the interface layer author to keep that in mind, I guess
<cory_fu> No worries.  :)
<cory_fu> Anyway, back to day off.  \o
<magicaltrout> jammy git
<gennadiy> hi. we have openstack with security groups disabled - `enable_security_group = False`. in this case juju can't create a machine because it requires a security group. is there a juju parameter to prevent it from creating security groups?
<gennadiy> another question: is it possible to specify the number of network interfaces?
<cholcombe> there should be an award for converting old charms to layered :)
<SaMnCo> cory_fu, marcoceppi : the --var_name=value doesn't work
<marcoceppi> cholcombe: absolutely
<SaMnCo> https://www.irccloud.com/pastebin/5h6hqAme/call_api
<lazyPower> cory_fu marcoceppi  - btw charm build -r is like my new favorite thing ever
<bryan_att> gnuoy: have some time to talk about how you did that? I'm getting an error:  https://www.irccloud.com/pastebin/7GH0InRK/
<bryan_att> gnuoy: nevermind - it's the version issue again. I used the other command format, trying now.
<marcoceppi> lazyPower: what's the -r do?
<marcoceppi> ah, the reporting
<lazyPower> marcoceppi it generates a report of what changed (the delta) and runs `charm proof` on the assembled charm
<marcoceppi> we should consider making that the default
<lazyPower> +1 that sounds good to me
<bryan_att> https://www.irccloud.com/pastebin/ltEyPcEd/gnuoy%3A%20having%20some%20issues%20with%20the%20install%20per%20the%20current%20repo
<bryan_att> gnuoy: see the previous post - I put the note in the filename field... having some issues with the install per the current repo
<x58> jamespage: Thanks for all the help with the RabbitMQ charm :-)
<bdx> big-data: Any current plans for packetbeat?
<bryan_att> gnuoy: ping
<lazyPower> bdx yes, and dockerbeat
<lazyPower> bdx - i've been swamped with this etcd rework this week but i have plans on releasing layers for both packetbeat and dockerbeat in the next 2 weeks and proposing them against the beats-core stack
<lazyPower> s/stack/bundle
<magicaltrout> it was better when the charmstore login was broken
<magicaltrout> at least i didn't have to look at my face everytime i go there
<x58> Is there a way to tell juju deploy to only deploy a single machine at a time when deploying a bundle?
<x58> Running into a bug I think is related to how fast JuJu is asking machines to be deployed in MaaS: https://bugs.launchpad.net/maas/+bug/1586540
<mup> Bug #1586540: MaaS 2.0 beta 5 fails to assign IP address to nodes when multiple nodes go into deploying at once <cpec> <juju> <maas2.0> <MAAS:New> <https://launchpad.net/bugs/1586540>
<x58> Nevermind, that wasn't the bug we thought it was. I feel bad.
<x58> Here's a new bug instead: https://bugs.launchpad.net/maas/+bug/1586555
<mup> Bug #1586555: MaaS 2.0 BMC information not removed when nodes are removed <cpec> <MAAS:New> <https://launchpad.net/bugs/1586555>
#juju 2017-05-22
<kjackal> Good morning Juju world!
<dakj> Hello guys, any idea about this issue with Landscape-Dense MAAS? https://askubuntu.com/questions/906763/haproxy-reverseproxy-relation-changed-in-landscape-dense-maas-bundle thanks
<lazyPower> dakj: have you re-used that haproxy charm by relating it to something else, then removing a relation?
<lazyPower> dakj: i'm not positive this is the case, but the error its complaining about is two vhosts with the same name.  So either the charm is coded in such a way this scenario happened, or some operation was issued against the charm that caused it to duplicate the  vhost. I'm not certain which is at fault here, but its most certainly a defect in the charm, unless you've manually edited that haproxy cfg :)
<dakj> lazyPower: to your first Q: no, I used the bundle dedicated to deploying that and nothing else, no re-use. Second one: no, I ran its default cfg without manual edits.
<dakj> lazyPower: I've re-run that bundle several times and after the commit with the Juju gui the error has always been the same.
<lazyPower> dakj: i'm deploying now
<lazyPower> give me a moment to let this deployment settle. If it errors i'll scrape the logs and file a bug and ping you with the bug id.
<dakj> lazyPower: ok, I'm waiting for your result
<lazyPower> dakj: from a basic `juju deploy landscape-dense` i received no errors
<lazyPower> dakj: https://104.197.250.70 - the haproxy unit is available at this ip, and things seem to have settled correctly
<dakj> lazyPower: did you do that via command line or via gui?
<lazyPower> dakj: cli
<dakj> lazyPower: let me try that, but you used landscape-dense; in my lab I used landscape-dense-maas
<lazyPower> dakj: ok. give the dense bundle a go and lets see if it behaves as we expect it to
<dakj> lazyPower: I think it's the same
<lazyPower> I'm going to tear down this model unless there's additional validation you'd like to perform?
<dakj> lazyPower: https://jujucharms.com/landscape-dense-maas/
<lazyPower> yeah it appears that the dense-maas bundle relies on container networking to allow multi-host scalability
<lazyPower> the -dense bundle doesn't offer that, and i did run this deployment against a public cloud as I dont readily have maas available to me
<dakj> lazyPower: I'm doing the commit of landscape-dense, five minutes and see the result.
<lazyPower> ack
<dakj> lazyPower: same issue
<lazyPower> dakj: i'm not certain what the issue is, but it would be a good first step to file an issue against the haproxy charm. https://bugs.launchpad.net/charm-haproxy/
<dakj> lazyPower: already done...no answer
<lazyPower> dakj: do you have that bug # handy?
<dakj> lazyPower: I wrote here: https://bugs.launchpad.net/landscape-bundles/+bug/1685212 and https://bugs.launchpad.net/charms/+source/haproxy/+bug/1686629 and https://bugs.launchpad.net/landscape-charm/+bug/1692061 and added a comment here https://bugs.launchpad.net/charms/+source/haproxy/+bug/1520305
<mup> Bug #1685212:  Landscape Dense MAAS: hook failed: "reverseproxy-relation-changed" for landscape-server:website <Landscape Bundles:New> <https://launchpad.net/bugs/1685212>
<mup> Bug #1686629: hook failed: "reverseproxy-relation-changed" for landscape-server:website <haproxy (Juju Charms Collection):New> <https://launchpad.net/bugs/1686629>
<mup> Bug #1692061: Landscape Dense MAAS: hook failed: "reverseproxy-relation-changed" for landscape-server:website <Landscape Charm:New> <https://launchpad.net/bugs/1692061>
<mup> Bug #1520305: reverseproxy-relation-changed fails configuration check failed <haproxy (Juju Charms Collection):New> <https://launchpad.net/bugs/1520305>
<lazyPower> dakj: thanks, i'll try to get some eyeballs on that today from the landscape team if they have any additional bandwidth
<lazyPower> dakj:  and just to be clear, you're destroying the model and running a fresh deployment every time you attempt to deploy the landscape bundle?
<dakj> lazyPower: yes I've destroyed that before to commit the bundle.
<Zico> Hi, I need some help on JUJU metrics setup. To get started, how to make it work on top of the scalable-wiki for example that is PHP and MYSQL?
<rick_h> Zico: so there's a hook you use in the charm. You'd need to add a metrics hook to one of those two charms. check out https://jujucharms.com/docs/stable/developer-metrics
<rts-sander> can multiple hooks run at the same time in the same unit?
<Zico> ~/charms/wiki-scalable/trusty/haproxy$ more layer.yaml  ---> includes: ['layer:metrics']
<Zico> in metrics.yaml --> metrics:   juju-units:
<Zico> juju metrics --all shall reflect the number of units after 5 minutes?
<rts-sander> "No more than one hook will execute on a given system at a given time. A unit in a container is considered to be on a different system to any unit on the container's host machine."
<rts-sander> so no
<Zico> Thank you Rick_H
<rick_h> Zico: np, have fun
<rick_h> rts-sander: right, only one at a time
<rick_h> rts-sander: so that the hook authors should be able to rely on things not going nuts during a hook execution
<Zico> Q: I created the layer.yaml, and added --> includes: ['layer:metrics'] in the layer.yaml and in metrics.yaml --> juju-units: ---> this supposed to give the count of units.. juju metrics --all is still empty (after 5 minutes).
<mattyw_> Zico, did you upgrade the charm when you added the layer:metrics layer?
<Zico> Mattyw_, No, I didnt upgrade, if I should do, please assist how ? :)
<mattyw_> Zico, so you added layer:metrics to an existing charm and then deployed it from scratch?
<Zico> Mattyw_, Charm was already deployed, I just added the layer.yaml and metrics.yaml... I thought it's hot deployable?!
<mattyw_> Zico, charms aren't I'm afraid. You'll have to do juju upgrade-charm
<mattyw_> Zico, but in that situation you may encounter this: https://bugs.launchpad.net/juju/+bug/1675851
<mup> Bug #1675851: juju collect-metrics reports failed to collect metrics: no collect application listening: does application support metric collection? when charm is metered <upgrade-charm> <juju:Triaged> <https://launchpad.net/bugs/1675851>
<mattyw_> ^^ so the best approach would be to rebuild the charm and then juju deploy it
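The pieces of the metrics setup discussed here, gathered into one sketch (both files sit at the charm's top level; `juju-units` is the built-in unit-count metric Zico used, so it needs no collection command of its own):

```yaml
# layer.yaml -- include the metrics layer when building the charm
includes: ['layer:metrics']

# metrics.yaml (a separate file) -- the built-in juju-units metric
metrics:
  juju-units:
```

After editing, the charm has to be rebuilt and then upgraded in place (`juju upgrade-charm <app> --path <local/path/to/charm>`) or redeployed from scratch; with the default 5-minute collection interval, `juju metrics --all` should start showing values shortly after.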
<cmars> anyone here tried using zetcd instead of zookeeper with big data charms yet? i'm interested in evaluating kafka and zetcd appeals to me...
<cmars> also, anyone here using kafka with golang clients? :)
<tvansteenburgh> cmars: well, i snapped zetcd Friday night after the announcement :)
<dakj> lazyPower: I made other tests and the results have been: using the cli with the commands juju deploy landscape-dense and juju deploy landscape-dense-maas, landscape is deployed correctly, while using both bundles via the gui I obtained the same issue
<tvansteenburgh> cmars: snap install zetcd --edge
<cmars> tvansteenburgh, awesome!
<cmars> tvansteenburgh, is there a layer for snaps, like layer:apt?
<lazyPower> cmars: layer:snap
<lazyPower> https://github.com/stub42/layer-snap
<cmars> shouldve guessed :)
<lazyPower> well poo you left
<lazyPower> dakj: so you're saying it deploys correctly via cli but not the gui?
<ChrisHolcombe> does charm-tools support py3.5?
<ChrisHolcombe> i mean charmhelpers*
<ChrisHolcombe> lazyPower, you prob know this :)
<lazyPower> charm helpers should, yeah
<lazyPower> charm-tools shoudl as well
<ChrisHolcombe> are you installing that with pip3 or what?
<lazyPower> no it ships with layer-basic.
<ChrisHolcombe> oh i see
<lazyPower> charmhelpers is in the wheelhouse
<ChrisHolcombe> alright
<lazyPower> so i guess yes i'm indirectly doing that
<lazyPower> but the tooling is doing it for me :)
<ChrisHolcombe> :)
<ChrisHolcombe> i'll run tox on the build bits then
<Budgie^Smore> o/ juju world
<Zic> lazyPower: hi, if I specify a specific revision of the canonical-kubernetes charm bundle, does it also deploy the old revisions of the charms inside the bundle? (for the same purpose as last time: build an "iso-prod" preproduction cluster and test its upgrade before doing it in production)
<Zic> (specify a specific, oh, well done...)
<lazyPower> Zic: yes, if you deploy an older charm/bundle, it should use those older resources instead of the snaps and deploy the 1.5 tarball based cluster.
<Zic> cool, will test that tomorrow :)
<Zic> lazyPower: how can I retrieve the bundle version of a already installed cluster? don't know if I really can after an installation
<lazyPower> Zic: juju status should give you the charm revisions at the top
<Zic> yep, but how to guess the charm *bundle* revision? :x
<lazyPower> Zic: then just walk backwards in the bundle revisions in the store. i can lend a hand to get you the bundle rev if you get me the version of say, kubernetes-master.
<Zic> oh, I think I can do that, I'm not at office currently but I will mount my VPN to launch a juju status at the cluster
<Zic> http://paste.ubuntu.com/24625879/
<Zic> lazyPower: ^
<cmars> what's the method for ignoring files during charm build, in layer.yaml?
<cmars> any examples of this?
<tvansteenburgh> cmars: add key -> ignore: [list, of, file, or, dir, names]
<cmars> tvansteenburgh, doesn't work :(
<cmars> tvansteenburgh, the ignore: key goes at the top-level of layer.yaml?
<tvansteenburgh> yeah
<tvansteenburgh> cmars: hrm https://github.com/juju/charm-tools/issues/312
<cmars> tvansteenburgh, ok. yeah, i can't ignore builds, deps or .*.swp
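The layout tvansteenburgh described, as a layer.yaml sketch — note that charm-tools issue #312 and cmars's report above suggest the `ignore` key may not match every pattern in all versions, and the `includes` line here is just a placeholder:

```yaml
# layer.yaml -- top-level ignore key listing file/dir names to skip
# during charm build (names taken from the discussion above)
includes: ['layer:basic']   # placeholder include
ignore: ['builds', 'deps', '.*.swp']
```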
<magicaltrout> lazyPower: is there persistent storage, cinder etc in CDK?
<bdx> JUJU_UNIT_NAME
<bdx> doesn't exist in the bash env anymore?
<lazyPower> magicaltrout: ceph rbd
<lazyPower> Zic: sorry i got sidetracked looking now
<bdx> http://paste.ubuntu.com/24626263/
<lazyPower> Zic: https://jujucharms.com/canonical-kubernetes/21/
<lazyPower> Zic: that has your master revision :)   and you can deploy this with juju deploy canonical-kubernetes-21
<tvansteenburgh> bdx: are you in a hook context when you run that?
<dakj> lazyPower: have you tryed to deploy that via juju gui?
<lazyPower> dakj: i have not
<lazyPower> dakj: were you saying it deploys as expected via the CLI but not via the GUI?
<dakj> yes I confirm that: using the cli with the command juju deploy landscape-dense / landscape-dense-maas, landscape is deployed right, while via the gui, or via the cli using the command juju deploy cs:~landscape/bundle/landscape-dense-maas-25, I get the same issue with HAproxy in both cases
<lazyPower> dakj:  ok, so you state that landscape is deployed right, you're not experiencing any issues with the haproxy on that working deployment?
<dakj> lazyPower: at the moment no issue with HAproxy. I don't know if it is a problem on my system or an issue with that bundle. My juju status is pasted here https://paste.ubuntu.com/24626715/ and here after login on landscape https://pasteboard.co/9tyPhi0Xg.png
<lazyPower> dakj: everything appears in order in that screenshot and your pasted status output
<lazyPower> so i'm not certain what's causing the issue; you've filed some detailed reports. The only thing I can think of to help anyone following up would be to describe in the parent bug against the bundle that it was deployed via the gui
<lazyPower> that way it's obvious when they attempt the deployment that it won't be `juju deploy`, it's a drag and drop operation. there are some slight differences there in how juju processes the request but i don't know enough about the gui internals to say
<dakj> lazyPower: I'll update all posts about that, indicating that the issue occurs via the gui and not via the cli.
<lazyPower> dakj: thank you for taking good care of the issues filed. it's certainly appreciated
<dakj> lazyPower: I have to thank you and the other guys who helped me resolve the issue with Openstack base via juju, and for all the support in learning more things about Juju. I hope to reciprocate with my support.
<lazyPower> dakj: :) Welcome aboard o/
<dakj> lazyPower: see you, have a nice day.
<thumper> lazyPower: who knows about the namenode charm for k8s ?
#juju 2017-05-23
<lazyPower> thumper: what is the namenode charm?
<lazyPower> thumper: you're using big data terminology here...
<thumper> lazyPower: oh hai
<thumper> I was watching a deployment and noticed that it failed to install every time
<lazyPower> thumper: the kubernetes-master charm?
<thumper> i think it was a custom bundle
<thumper> used to scale testing
<thumper> on aws
<lazyPower> hmm, ok
<lazyPower> i'm not sure what this namenode charm would be. it might be something kwmonroe has cooking up
<thumper> that's fine
<thumper> I have enough issues to chase just now :)
<Zico> Mattyw_, GM, thank you for your earlier reply. It works, I just added the layer.yaml containing (includes layer:metrics) and metrics.yaml containing (metrics: juju-units:) and upgraded the charms as "juju upgrade-charm <charm_id> --path <local/path/to/charm>" and after 5 minutes, juju metrics --all, Gives the number of units for all the charms like a CHARM :)
<mattyw> Zico, glad it worked out, if you think there's improvements to be made with the docs I'd love to hear them.
<mattyw> Zico, any any more questions feel free to ask - I'm here all the time, if I can't answer a question I'll be able to point you to someone who can
<Zico> Mattyw, Great, thank you my friend, much appreciated :)
<Zico> Mattyw, Concerning the docs improvement, yes, I think there is room for improvement: for example, compiling the steps that I mentioned into a "HELLO_WORLD" metric (the unit count), enumerating the process as 1. layer.yaml, 2. metrics.yaml, 3. upgrade-charm, 4. after 5 minutes (default), invoking the juju metrics command.
<Zico> The Call for Upgrade-charm wasn't obvious unless you informed me to :)
<mattyw> Zico, which pages did you read from the docs, was it just https://jujucharms.com/docs/stable/developer-metrics ?
<Zico> Mattyw, yes, it was https://jujucharms.com/docs/stable/developer-metrics the starting page. But as you notice, there is no mention in it for " juju-units: " and no mention to trigger the "juju upgrade-charm"
<Zico> Mattyw, Also, in the layer.yaml section, the word " includes " is not mentioned...
<Zico> Personally, I added in the layer.yaml "     includes: ['layer:metrics']     "  as copied from a manually created TEST charm
<mattyw> Zico, yeah, that page kind of assumes you know lots of stuff already; we probably need a better getting started page. How did you find out about charm metrics? (just trying to understand where would be best for us to make a change in the docs)
<Zico> Mattyw, Yes, Correct, in fact I got frustrated at first :) because I screened all the pages and couldn't grasp the whole procedure. Starting point was: https://jujucharms.com/docs/stable/developer-metrics
<Zico> then went to https://jujucharms.com/docs/2.1/charms-metrics
<mattyw> Zico, we have https://jujucharms.com/docs/stable/developer-getting-started. I guess that wasn't helpful?
<Zico> Yes, Helpful of course, but doesn't read anything about the metrics. it would be nice to add a section to it to link to metrics
<mattyw> Zico, understood, thanks very much for your feedback
<mattyw> Zico, hopefully we'll make it easier for the next person :)
<Zico> Mattyw, you are welcome, my friend, I hope so :) I am doing some research on Orchestration through JUJU and I need to get feedbacks (Metrics) to trigger optimization
<Zico> BTW, I have a problem: my laptop is able to spin up multiple machines and provision multiple services, however, when I run a large bundle, it gets to a point where it dies (sluggish responding), although free -h, top and iotop all read low load! Any hint?
<mattyw> Zico, sounds interesting, any questions or suggestions we're here to help/ listen
<mattyw> Zic, you're running lxd I guess?
<mattyw> Zico, ^^
<Zico> yes
<Zico> Mattyw, 7 applications running on 4 machines (4 applications on 1 machine and remaining is 1 app per machine).
<Zico> Mattyw, the 4 apps on 1 machines as 4 LXD
<kjackal> Good morning Juju world!
<mattyw> kjackal, morning
<mattyw> Zico, that means you'll probably have 5 lxd machines total including the controller
<mattyw> Zico, doesn't surprise me that it might be a bit slow on your laptop
<Zico> Mattyw, Correct.
<Zico> I have a new question please:
<Zico> I am struggling to make the PROXY SQUID_DEB_PROXY  HIT but all is miss :(
<Zico> I am following
<Zico> https://askubuntu.com/questions/3503/best-way-to-cache-apt-downloads-on-a-lan
<Zico> written by Jorge Castro
<Zico> All I get is TCP_MISS not TCP_HIT :(
<Zico> Although, I remove the machine and recreate, so it's supposed to be cached
<Zic> lazyPower: thanks for your reply (I just backlogged)
<Zic> (hi Juju world)
<wpk>  /wind 11
<Zico> Hi, how to purge (flush) historical logs? I mean juju debug-log --replay keeps track of several days back, which is useless. Is it safe if I ssh to the controller and empty logsink.log (ssh -m controller 0 and echo "" > /var/log/juju/logsink.log)?
<rick_h> Zico: since the logs are stored in the db that won't really work out.
<rick_h> Zico: there are tools in debug-log to help limit the time range returned
<rick_h> Zico: check out https://jujucharms.com/docs/stable/troubleshooting-logs#the-debug-log-command
<Zico> Rick_h, Yes, Right, thank you, I checked this, Much Helpful :)
<Zico> Hi, is there a way to forcibly remove a unit or an application without removing the underlying machine? (The "--force" is only application to remove-machine). This is the case when I get stuck with an app/unit with status ERROR and MESSAGE: *hook failed: "install"*
<Zico> application* shall read applicable :)
<Zico> BTW, removing it from the GUI (Destroy) says, this application is marked for destroy on next deploy. What's the catch?
<rick_h> Zico: no, there's not. If you force then there's no way for Juju to know what to remove as it doesn't track the files/installs/etc the charm does
<rick_h> Zico: is that you have to go down to the bottom-right and hit commit
<rick_h> Zico: where it'll try the same thing the cli does to remove an application, trigger the hooks, etc
<Zico> Rick_h, yup, I committed of course :) and as a result of the commit, it says marked for destroy on next deploy
<rick_h> Zico: hmm, sounds like the GUI got confused?
<Zico> Yup The exact wording is: (This application has been marked to be destroyed on next deployment.)
<Zico> and the status is (Status: error - hook failed: "install"  Agent Status: executing  Workload Status: error)
<Zico> So, basically, I am stuck in an infinite loop: neither the gui nor the cli is able to remove the app/unit which is in status error (hook failed: install). The only way I could resort to is forcibly removing the underlying machine. My question is: is there any other way to remove the app/unit without losing the machine?
<anrah> Zico: I think I have sometimes managed to resolve that by making a dummy configuration change through the cli, then marking the failed unit resolved, and after that remove-application
<Zico> Anrah, Good idea :), this is what I am thinking of. so, I go to $juju debug-hooks <app_name> and then what?
<anrah> I just said juju config <app> <config-value>=foobar
<anrah> then juju resolve <unit>
<anrah> juju remove-application application
<anrah> but hmm, you have only one app per machine?
<Zico> Anrah, Yes Only one app, per machine currently.
<Zico> Anrah, I have done the trick (Y), changed the config of passwd and then *resolved*, but again hook failed: install, and I'm stuck at the same position and cannot remove the app.
<Zic> lazyPower: can I deploy a specific bundle version through the Juju GUI? because as I use manual provisioning, I will need to redispatch charms to the "good" machines, and it's easier with drag'n'drop if I need to demonstrate it to coworkers :)
<lazyPower> Zic: yep just download the bundle to your machine and drag/drop onto the gui
<Zic> https://localhost:8080/gui/u/admin/default/canonical-kubernetes/bundle/38 <= hmm, I thought to just edit that silly '38'
<Zic> will it work?
<Zic> drag'n'dropping from local is also good to know :)
<lazyPower> Zic: you can view the specific revisin and choose deploy as well, sure
<lazyPower> *revision
<Zic> thanks, I'm impatient to upgrade this cluster to 1.7.2 to have the snap architecture
<Zic> (the other one is already at latest revision)
<Zic> my next move will be to test if Juju and our Puppet orchestrator has some conflict on some files
<Zic> with snap, normally, all will be OK
<Zic> lazyPower: before juju upgrade-charm, do you recommend upgrading the Juju client? we're not using the juju snap on the production cluster for now, I'm planning to switch to the snap version before/after the upgrade
<Zic> don't know if I choose before or after :>
<lazyPower> Zic: as far as i know the snap package has very little to do with what you wind up with in your controller, as thats all packaged and maintained on the controller itself
<lazyPower> it'll only change how you receive your client package and client updates
<lazyPower> Zic: but i would recommend the least-change method. introduce as few changes as possible, and slowly, so you can validate they haven't caused you any heartburn
<Zic> yup
<Zic> lazyPower: the kubernetes-master is stuck at "installing charm software" with no output at juju debug-log or in /var/log/juju of the kubernetes-master, where can I look to have more info? :(
<lazyPower> Zic: thats the pre-dependency bootstrapping hook for reactive. the only thing i can think of would be to juju debug-hooks that unit, and kill the process executing the existing upgrade-charm hook
<lazyPower> that way you can intercept it and run it manually
<Zic> oh, did not know debug-hooks
<Zic> python3 /var/lib/juju/agents/unit-kubernetes-master-0/charm/hooks/install /var/lib/juju/agents/unit-kubernetes-master-0/charm/hooks/install
<Zic> killing this one inside debug-hook ?
<Zic> and relaunch it manually ?
<lazyPower> Zic: yep, if you kill it, just wait a few seconds
<lazyPower> juju will auto-retry the hook and trap it in that tmux session you have open
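The recovery flow lazyPower describes can be sketched as a shell session (the unit and hook names are the ones from this log; the pid is whatever `ps` shows on your machine):

```shell
# From your workstation: attach to the stuck unit. This opens a tmux
# session on the unit's machine.
juju debug-hooks kubernetes-master/0

# Inside the session: find and kill the hung hook process.
ps aux | grep 'hooks/install'
kill <pid>   # the python3 .../hooks/install process found above

# Juju retries the failed hook after a short backoff and traps it in the
# open tmux session, where you can then run it by hand and watch output:
./hooks/install
```

If the retry doesn't come (the unit is sitting on the backoff timer), `juju resolved kubernetes-master/0` from the workstation kicks it off, as discussed below.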
<Zic> ok
<Zic> killed it 1min ago, it does not seem to respawn :(
<lazyPower> Zic: juju resolved kubernetes-master/0
<lazyPower> its likely on the backoff timer
<lazyPower> so resolving it will cause it to go ahead and retry
<Zic> ERROR unit "kubernetes-master/0" is not in an error state
<Zic> it's stuck at maintenance/executing in fact
<lazyPower> Zic: systemctl restart jujud-unit-kubernetes-master-0
<Zic> on the juju controller machine right?
<lazyPower> cycle the agent, that should kick it in the head
<lazyPower> no
<lazyPower> on the unit you're attached to for debug-hooks
<Zic> ah, on the kubernetes-master
<lazyPower> yep
<Zic> oki
<lazyPower> just to make things easier, if i dont explicitly identify another unit, i'm referring to the unit you're attached to via debug-hooks.
<Zic> it switched to error then back to maintenance/executing
<Zic> ok :)
<lazyPower> ok, in your tmux session
<lazyPower> there should be a new buffer open with the context listed
<lazyPower> "upgrade-charm" for example
<Zic> yep, I got the "This is a Juju debug-hooks tmux session."
<lazyPower> do you see the hook listed in the tmux buffers?
<lazyPower> i forget if that message prints for every buffer or not
<lazyPower> i tend to just ignore that spam now :)
<lazyPower> ok yeah, you're in the right context. i just trapped a hook to verify
<lazyPower> Zic: from here, you can execute the hook manually, and attempt to gather more information. This buffer is loaded with all the juju env bits we set to make the agent operate
<Zic> ok, so I'm trying to launch the python3 script that I killed before, right?
<lazyPower> Zic: so hooks/upgrade-charm if you're in the upgrade charm context. it may not give you an indicator as to whats actually happening, we scrape stdout from here to pipe to the logs.
<lazyPower> so if its just blank and hangs, time to start poking about to see if its network connectivity, or a locked apt daemon, or something similar
<Zic> root@mth-k8stestmaster-01:/var/lib/juju/agents/unit-kubernetes-master-0/charm# hooks/upgrade-charm <= I think I'm in the right context :)
<Zic> http://paste.ubuntu.com/24633743/
<Zic> it does not output after that
<Zic> (but does not return prompt either)
<lazyPower> Zic: so it appears that its held up attempting to install the wheelhouse
<lazyPower> :S i'm not sure what to recommend here, i haven't encountered reactive failing to bootstrap before
<lazyPower> and our resident reactive expert is out for Pycon
<Zic> same here :D it's the first time I encounter this issue
<lazyPower> Zic: not an ideal solution, but can you file a bug with the steps to reproduce and i can pass it along to cory when he's back from pycon?
<Zic> if you weren't here, I would angrily reboot the master and hope it resumes the installation :D
<lazyPower> well
<lazyPower> thats an option
<Zic> huh :)
<Zic> trying it \o/
<lazyPower> but if its halting here, i dont necessarily think rebooting will help
<lazyPower> but its worth a shot
<lazyPower> give it a go
<Zic> back when I first tried Juju, I sometimes rebooted the bad unit *and* the juju controller
<Zic> sometimes it recovered fine
<Zic> that was a long time ago, before joining here o/
<Zic> oh, the upgrade-charm just gives me a traceback
<Zic> maybe because it receives the ACPI signal to reboot
<Zic> http://paste.ubuntu.com/24633780/
<Zic> don't know if it helps
<Zic> unit-kubernetes-master-0: 13:56:02 INFO unit.kubernetes-master/0.juju-log Invoking reactive handler: reactive/kubernetes_master.py:88:install <= saw that at juju debug-log after reboot, so it restarted correctly but... no signs of life after that entry
<lazyPower> aha
<lazyPower> i have a course of action for you now
<lazyPower>   File "/usr/local/lib/python3.5/dist-packages/charmhelpers/core/hookenv.py", line 956, in resource_get <--
<lazyPower> it was locked up waiting on juju to resource_get a resource
<Zic> network issue? :o
<lazyPower> so, lets start with least invasive action
<lazyPower> can you re-attach to that unit and enter the debug-hook context again?
<Zic> yup
<lazyPower> same process, attach to unit, if its locked up, restart the agent daemon
<Zic> hmm
<Zic> debug-hooks does not put me in the right context this time
<Zic> root@mth-k8stestmaster-01:~# pwd
<Zic> /home/ubuntu
<Zic> the tmux buffer is just "bash" instead of "install" like earlier
<lazyPower> right, when you first attach you only get a bash shell
<lazyPower> so, recycle the agent now
<lazyPower> systemctl restart jujud-unit-kubernetes-master-0
<Zic> ah yup
<Zic> forgot this step sorry
<Zic> ok, it's the right context now
<lazyPower> now, lets try this manually and see if we get any further detail
<lazyPower> resource-get kubernetes
<lazyPower> i suspect its going to just be hanging like it was when invoked with python, but you never know
<Zic> for now it's stuck, yeah :)
<lazyPower> ok, in another terminal, lets attach to the controller and tail the controller logs
<lazyPower> if there's an issue we should see some serious spam in there while this resource-get is constantly polling attempting to grab the resource
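One way to follow the controller logs while `resource-get` hangs (the machine number and log file name are assumptions for a typical 2.x controller; on a manually provisioned controller you can also ssh to it directly):

```shell
# From the workstation: open a shell on the controller machine.
juju ssh -m controller 0

# On the controller: follow the machine agent's log, where charm-store
# and resource traffic errors surface.
sudo tail -f /var/log/juju/machine-0.log
```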
<Zic> 2017-05-23 14:01:28 ERROR juju.rpc server.go:510 error writing response: write tcp 10.52.128.99:17070->10.52.128.24:59540: write: broken pipe
<Zic> like this?
<Zic> (I have it repeatedly)
<lazyPower> that may be it
<lazyPower> i would have expected something more descriptive
<lazyPower> Zic: what version of the controller is this?
<Zic> 2.1.2
<lazyPower> Zic: looks like you're getting bit by this https://bugs.launchpad.net/juju/+bug/1627127
<mup> Bug #1627127: resource-get gets hung on charm store <cdo-qa> <cdo-qa-blocker> <juju:Incomplete> <https://launchpad.net/bugs/1627127>
<Zic> I feared that; as this test cluster is fully on AWS (the production one is hybrid), I recreated the AWS SecurityGroup from scratch and may have made a mistake somewhere, but normally it's wide open on the private network
<lazyPower> Zic: well there's also a defect in resource-get hanging like this
<lazyPower> it should have returned null or error by now
<lazyPower> instead of hanging indefinitely
<Zic> still stuck, not returning the prompt :(
<lazyPower> click the "this bug affects me" link at the top, and add detail that you're able to reproduce deploying the bundle revision -23 (? i think?)
<lazyPower> that'll help anastasiamac reproduce when it comes time to verify this bug again, as right now its incomplete and there's a good amount of back and forth about how to trigger this
<Zic> need to recover my Launchpad infos, wait :)
<lazyPower> Zic: and you'll get an update when its fixed too :) because ya know, you interacted with the bug <3
<anastasiamac> lazyPower: repro would be awesome \o/ midnight here now so m clocking off but will read backscroll :D
<lazyPower> anastasiamac: aw, i didn't mean to ping you to bring you in here, my bad
<lazyPower> cheers and will catch up with you tomorrow
<anastasiamac> lazyPower: u did not ping ;) but i heard it all the way from there :D
<Zic> lazyPower: added
<lazyPower> Zic: fantastic, thanks for that. Now we're in a situation where we have to wait :S
<lazyPower> Zic: correct me if i'm wrong, but this was just on the initial install of that older bundle rev yeah?
<Zic> yup
<lazyPower> ok
<lazyPower> i know how you can work around this
<lazyPower> so we dont have to wait but it wont be as clean as drag and drop
<Zic> the only special case is that I'm using manual provisionning
<lazyPower> in that bundle, it specifies individual releases of charms... eg: kubernetes-master-12
<Zic> (even if it's fully on AWS, because our internal system manages AWS instances on its own...)
<lazyPower> if you go to http://jujucharms.com/u/containers/$charm  (including revno)
<lazyPower> you can grab the resources for that charm on the right hand sidebar of that page
<lazyPower> then once you juju deploy that bundle, you can juju attach $charm kubernetes=kubernetes.tar.gz as an example
<lazyPower> so you'll need to manually attach the resources, but it will get you unblocked
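The workaround, sketched end to end (the charm name, revision, and resource file name follow the examples in this conversation; substitute your own):

```shell
# 1. Download the resource from the charm page's right-hand sidebar, e.g.
#    https://jujucharms.com/u/containers/kubernetes-master/12/

# 2. Deploy the bundle as usual, then attach the resource by hand:
juju attach kubernetes-master kubernetes=./kubernetes.tar.gz

# 3. On the stuck unit, recycle the agent so the hook retries with the
#    resource now available locally:
sudo systemctl restart jujud-unit-kubernetes-master-0
```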
<lazyPower> :( sorry that this isn't as smooth as it could be
<Zic>  https://jujucharms.com/u/containers/kubernetes-master-12/ is 404 :)
<lazyPower> gah
<lazyPower> s/-12/\/12/
<lazyPower> also i dont know that thats the release you need
<lazyPower> i was using 12 as an example
<lazyPower> i think you're on rev 21 of master in that bundle...
<Zic> lazyPower: just get the 12 from my juju status
<Zic> it's the charm revision of kubernetes-master, right ?
<Zic> or the revision of the charm-bundle?
<lazyPower> Zic: the charm revision
<Zic> lazyPower: do I need to restart the deploying of canonical-kubernetes from scratch ?
<lazyPower> Zic: you should be able to attach those resources and recycle the agent and it should unstick the deployment
<Zic> cool
<Zic> for attaching from CLI, what do I need? I think I'm mixing up "add-relation" and "attach" in my mind
<lazyPower> Zic: juju attach --help (on your workstation)
<lazyPower> Zic: and https://jujucharms.com/docs/2.0/developer-resources#adding-resources for reference
<Zic> lazyPower: oh, it's what I thought, I was mixing up attach vs. relation
<Zic> thought it was relation to do manually :)
<Zic> I'm OK with attach so :D
<Zic> lazyPower | you can grab the resources for that charm on the right hand sidebar of that page
<Zic> kubernetes.gz is not clickable :D
<lazyPower> Zic: from the revision specified in the bundle
<lazyPower> yeah
<lazyPower> make sure thats 1:1, i wouldn't recommend trying to use a different revision of the resource than what is published for the charm you're deploying
<lazyPower> eg if you have kubernetes-master-12, then grab the resources off the right sidebar from https://jujucharms.com/u/containers/kubernetes-master/12/
<Zic> yup but how can I *grab* it? you mean download locally right?
<Zic> lazyPower: in https://jujucharms.com/u/containers/kubernetes-master/ (19th revision, the latest), resources are clickable and downloadable, but for my 12th revision, resources are neither clickable nor downloadable
<lazyPower> :/ i'm not sure why that would be the case, resources are supposed to persist for the lifetime of the charm
<Zic> lazyPower: https://jujucharms.com/u/containers/kubernetes-master/12/ can you get the kubernetes.gz resource on this page?
<Zic> (I'm using Firefox)
<lazyPower> rick_h: is there an API hack i can use to view this charm resource and determine why there's no downloadable resource?
<lazyPower> Zic:  the only thing i can figure is somehow the charm got disassociated with a resource at this revision, and thats not going to be fun
<Zic> :'(
<lazyPower> as i have no clue what resource would go with that revision, its quite old
<rick_h> lazyPower: looking at what we're up to
<rick_h> lazyPower: https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master/meta/resources ?
<rick_h> lazyPower: e.g. https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master-12/meta/resources
<lazyPower> rick_h: https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master/12/meta/resources doesn't seem to work with a revision in the url
<lazyPower> oh
<lazyPower> of course :) the format changes
<rick_h> lazyPower: yea, our bad for old vs new url format mixups
<Zic> lazyPower: lurking a bit and found that https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master-12/resource/kubernetes/1
<Zic> is this good?
<lazyPower> yeah there's nothing there...
<lazyPower> its got no fingerprint on the listing
<lazyPower> whoa
<lazyPower> what kind of wizardry is this
<Zic> :>
<lazyPower> rick_h: your api skills surpass mine in every way, i have no clue how you found that
<lazyPower> oh snap and zic found it to boot
<lazyPower> i need new glasses
<rick_h> lazyPower: https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master/meta goes to list all the bits available
<lazyPower> i think thats a sign i need to take my lunch before i head into a sig meeting
<rick_h> lazyPower: so that led to /meta/resources (and then the revision trick)
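The charm store API endpoints used in this exchange, collected in one place (note the old-style `-12` revision suffix on the entity name, versus the `/12/` path segment the website uses):

```shell
# List resources for the latest revision of the charm:
curl https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master/meta/resources

# List resources for a specific charm revision (entity-name suffix form):
curl https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master-12/meta/resources

# Download a specific resource revision (resource "kubernetes", rev 9):
curl -LO https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master-19/resource/kubernetes/9
```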
<rick_h> lazyPower: :)
<Zic> lazyPower: is this "1" file, which seems to be a tar.gz archive containing the kube-* binaries, the kubernetes.gz you expect?
<Zic> if yes, I'll mv 1 kubernetes.tar.gz and continue your steps :)
<Zic> hmm, tried a ./kubectl --version of the binary inside
<Zic> it's 1.4.0-beta.10
<Zic> not my version :(
<lazyPower> Zic: time to walk through the revisions until you find the resource required. i have no clue whats going on in those older charm revs :| we were kind of a guinea pig with resources
<lazyPower> Zic:  you need 1.5.1 right?
<Zic> 1.5.3
<Zic> http://paste.ubuntu.com/24634572/
<lazyPower> Zic: try master rev 19
<lazyPower> that should have 1.5.3
<lazyPower> might be 18... but right around that range.
<Zic> what I don't understand is why I have rev 12 in the production cluster with kubernetes-master 1.5.3
<Zic> is the revision number incorrect in this juju status?
<lazyPower> Zic: did you attach a resource package post deployment?
<Zic> nop :x
<lazyPower> i have no idea how this happened
<lazyPower> i'm just as baffled as you are :S
<Zic> this cluster in 1.5.3 is "one version" before 1.7.1
<lazyPower> to my knowledge nobody on the k8s team has gone back and refreshed resource revisions attached to a charm rev
<Zic> we waited so long to upgrade it because 1.7.1 needs a maintenance outage (+ some tests on our side)
<lazyPower> so what we published with is what should be attached to the charms
<lazyPower> waiiit
<lazyPower> Zic:
<lazyPower> i know why we're seeing the version mismatch now
<lazyPower> promulgated version vs namespace version.  the promulgated charm is just a pointer to a charm rev in the namespace
<cmars> tvansteenburgh, hi, i'm trying to get that zetcd snap to work in a charm, https://github.com/cmars/charm-zetcd
<lazyPower> let me see if thats the case
<cmars> tvansteenburgh, but, i can't seem to connect to zetcd with zkctl
<lazyPower> cmars: he's afk for a bit headed to pickup his fam
<cmars> lazyPower, ack, thanks
<tvansteenburgh> just got back :)
<lazyPower> Zic: thats not it - https://jujucharms.com/kubernetes-master/  is not a promulgated charm. so we only point at the namespace
<tvansteenburgh> cmars: does this help https://github.com/tvansteenburgh/zetcd-snaps
<lazyPower> Zic: yeah i have no idea why thats the case :( i'm sorry i dont have better details.
<tvansteenburgh> cmars: i only tested with the example from the upstream zetcd readme
<cmars> tvansteenburgh, ok. might be that i'm trying to connect it to cs:~containers/etcd
<Zic> Client Version: version.Info{Major:"1", Minor:"4+", GitVersion:"v1.4.0-beta.11", GitCommit:"4b28af1232cc52da453eb4ebe3dc001314a1f99b", GitTreeState:"clean", BuildDate:"2016-09-23T22:53:01Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
<Zic> oops
<Zic> bad pasting, let my try again...
<Zic> lazyPower: Client Version: version.Info{Major:"1", Minor:"4+", GitVersion:"v1.4.0-beta.11", GitCommit:"4b28af1232cc52da453eb4ebe3dc001314a1f99b", GitTreeState:"clean", BuildDate:"2016-09-23T22:53:01Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
<Zic> RAH
<Zic> :'(
 * Zic needs coffee
<Zic> third try, I can do it
<Zic> lazyPower: https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master-19/resource/kubernetes/9
<Zic> lazyPower: found the 1.5.3 here
<Zic> don't ask me "why 9"
<Zic> but the /9 contains the 1.5.3 binary
<lazyPower> our api confuses me.
<lazyPower> i feel you
<Zic> :}
<Zic> lazyPower: can this displayed-version mismatch cause significant problems when I upgrade to the latest charms?
<Zic> or is it just a display bug?
<lazyPower> Zic: you'll be moving to snap packaging
<lazyPower> we can make more guarantees there
<lazyPower> Zic: you'll be in a track and always at tip of that track.
<Zic> ok
<Zic> I think I will give up the goal of building a "clone cluster" in 1.5.3 tho :(
<Zic> I'll directly test the upgrade in prod and be *afraid* :|
<lazyPower> Zic: whats your scheduled time to do this upgrade?
<lazyPower> Zic: i can spend some time either this evening or tomorrow morning running down the resources/revisions you have deployed and get something set up so we can test this without running a science experiment in prod
<lazyPower> Zic: but i'm neck deep in trying to vet some code for a release today and i'm close to finishing. so i need the remainder of the day to finish this work up.
<Zic> lazyPower: we can schedule that together, I don't want to impose a date on you, as it's for my work and you are just here as community help (my company declined the Canonical offers, sadly :/)
<lazyPower> Zic: you're a valuable contributing member, you file bugs. i'm willing to help
<lazyPower> but i appreciate you being respectful of my time as well
<lazyPower> so let me run down the resources you need, and we'll go from there.
<lazyPower> Zic: ill work off of http://paste.ubuntu.com/24634572/ and we can unwind from there
<Zic> ask me all you need; we are in different timezones (UTC+2 for info) and I'm in the office from 10h->19h, but I can lurk on IRC from home if needed
<Zic> lazyPower: for http://paste.ubuntu.com/24634572/ for example, since the displayed charm rev may be wrong, if there's a way of manually retrieving the right rev, tell me :)
<lazyPower> Zic: /var/lib/juju/agents/unit-kubernetes-master/charm/revision
<lazyPower> i suspect it says "12" though
<Zic> it said... "0"
<Zic> xD
<Zic> # cat /var/lib/juju/agents/unit-kubernetes-master-0/charm/revision
<Zic> 0
<vlad_> Hey guys anyone here to field a quick question?
<Zic> can I throw something violently on the wall? :)
<vlad_> If you answer my question then yes for sure
<Zic> vlad_: huhu, sorry, was not for you :)
<vlad_> Zic: ahh ok no worries
<vlad_> So I'm deploying a PoC juju/openstack cloud and want to deploy my juju controller to a specific node... is there a way to do this? (I looked but couldn't find anything obvious)
<vlad_> Also I should clarify that I'm using maas as my machine provider
<lazyPower> vlad_: you can specify a machine tag and tag that machine in maas
<Zic> vlad_: I'm on manual provisioning so it's maybe incompatible with your case, but for me, it's like that: juju bootstrap manual/host.name.of.the.machine.which.host.the.controller cdk
<lazyPower> vlad_: juju bootstrap --help   there's a notion of bootstrap constraints. and in that constraint list, you can specify the maas tag.
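A hedged sketch of the MAAS-tag approach lazyPower suggests (the profile, tag, cloud, and controller names are all examples; the tag must already exist in MAAS):

```shell
# In MAAS: tag the machine you want the controller to land on.
maas $PROFILE tag update-nodes juju-controller add=$SYSTEM_ID

# Bootstrap with a tags constraint so Juju asks MAAS for that machine:
juju bootstrap mymaas maas-controller \
    --bootstrap-constraints "tags=juju-controller"
```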
<vlad_> lazyPower: thanks that's awesome didn't realize how deep constraints went.
<vlad_> Zic: Thank you as well!
<Budgie^Smore> o/ Juju world
<hatch> o/ Budgie^Smore
<lazyPower> \o Budgie^Smore
<rick_h> Budgie^Smore: are we chatting today?
<Budgie^Smore> yeah, was just pulling your number up... thought you were calling me for a min ;-)
<rick_h> Budgie^Smore: hangout! link in the calendar invite
<rick_h> Budgie^Smore: let me know if that doesn't work out
<xModeMunx> Hi all. I'm new to juju, and just starting up a new cluster to test. This is an openstack "enterprise" local configuration.
<xModeMunx> Am I able to provision the infrastructure via the online tool? Or am I not doing it right?
<xModeMunx> I can't quite tell if this can be used as a "private" tool. I can register juju-cli as a cloud for my local maas configuration. So what are my limitations? :-/
<rick_h> xModeMunx: so you can use the Juju CLI to bootstrap to a "private" openstack, or a local maas
<rick_h> xModeMunx: from there you get a juju gui embedded into that Juju controller that you bootstrap so you get a bit of the online experience but self contained
<xModeMunx> rick_h: I have got to that stage so far. So, i've bootstrapped and via the cli I can poll some basic things to test. So, onward to "configuring" my openstack deployment, how can I go about this?
<rick_h> xModeMunx: oic, so you've used Juju to deploy openstack on top of MAAS?
<rick_h> xModeMunx: to configure the openstack you can use Juju to provide some application level configuring for each OpenStack service or go to OpenStack (horizon dashboard for instance) to manage the OpenStack from there.
<xModeMunx> rick_h: I haven't /yet/ deployed anything via juju. I have literally installed maas, added 2x "to-be" compute nodes, and 1x "to-be" controller node.
<rick_h> beisner: and others from the team that managed the openstack work would be able to help drop hints as to how to configure different bits you're interested in.
<rick_h> xModeMunx: oic, so it might be useful to try the conjure-up to help do a sample/test install. It kind of guides/walks through the process a bit more than a manual deploy
<xModeMunx> I have previously built OS from scratch, but having seen this tool, it seems I can save myself much of the effort.
<xModeMunx> rick_h: ah, awesome, i'll give that a go now :-)
<rick_h> https://www.ubuntu.com/download/cloud/conjure-up check out https://docs.ubuntu.com/conjure-up/en/#getting-started
<stokachu> and im here as well to help answer questions
<stokachu> xModeMunx: ^
<xModeMunx> stokachu: Waw, you guys are probably the friendliest irc people ever :-D
<rick_h> xModeMunx: cool, yea hit up stokachu and others if you hit anything
<xModeMunx> Much appreciated guys. Let me go have a read, and see where it leads. Is this considered production ready btw? Cos if it proves useful, it may make its way into my next "real" work lab deployment.
<rick_h> xModeMunx: yes, this is the same stack of tools we use to support our paying OpenStack customers.
<xModeMunx> Thanks Rick. Are you part of the canonical support guys?
<rick_h> Just they get a nice phone number to call and some PDF files to go with it :)
<rick_h> xModeMunx: no, we're more the dev engineer side.
 * rick_h has to run biab
<xModeMunx> ah, I see. The "real" men ;-)
<xModeMunx> ```Set automatic aliases for snap "conjure-up" (cannot enable alias "juju" for "conjure-up", it conflicts with the command namespace of installed snap "juju")```
<xModeMunx> Seems the first hurdle has presented itself.
<stokachu> xModeMunx: sudo snap remove juju
<stokachu> xModeMunx: conjure-up provides its own
<xModeMunx> stokachu: Ahh. Even this snap stuff is new to me. I must be getting old :-(
<stokachu> xModeMunx: im right there with you
<bdx> Sandbox2016!
<bdx> well
<xModeMunx> Am I correct to view conjure as a terminal-based alternative to the online juju architect design tool?
<stokachu> xModeMunx: to some extent, we go further and provide you with helpful guidance to configure your deployment
<stokachu> we can also make adjustments for deploying openstack and kubernetes on a single local machine
<xModeMunx> stokachu: I see. Thanks for the clarity.
<stokachu> bdx: are you handing out passwords again?
<bdx> the problem with being overzealous to log in to your pc before the screen turns on: you realize it wasn't sleeping and your irc window had the focus
 * bdx weeping
<stokachu> lol
<stokachu> ive done that before too
 * lazyPower starts poke-checking known accounts for bdx
 * bdx wipes all remnants of known string
<lazyPower> good plan :) I tease anyway ;)
<bdx> lazyPower: its cool ... keep it up ... as long as you enlighten me to what this "CAAS" thing is while you are at it :)
<bdx> crickets
<bdx> :)
<rick_h> bdx: you causing trouble :P
<lazyPower> bdx: its a WIP, thats what it is :)
<lazyPower> or an experiment? i forget which
<lazyPower> maybe both
<xModeMunx> if/when this conjure install finishes, and it works. I will consider it quite magical.
<xModeMunx> I think it stalled. I crtl+c'd. Maybe I shouldn't have :-|
<stokachu> depending on your hardware it could take up to an hour
#juju 2017-05-24
<kjackal> Good morning Juju world!
<bdx> http://imgur.com/a/blvN7
<bdx> fml
<bdx> MLFQ scheduler with process budgeting for anyone interested, implemented by yours truly - https://gist.github.com/jamesbeedy/f97393235a06f878655c7eeace717500
<Zic> hi Juju world, deploying a new CDK cluster (in the latest version this time /!\) and one of my 5 etcd is stuck at "waiting/idle" with message "Waiting for unit to complete registration."
<Zic> this is the only etcd unit stuck at that, the 4 others are active/idle
<Zic> I took a look at "juju debug-log" but there doesn't seem to be any particular error
<admcleod> Zic: anything anywhere else? e.g. syslog, cdk logs, juju logs on the unit itself?
<Zic> admcleod: I just checked /var/log/juju/* and nothing in error, I will check syslog also
<Zic> May 24 11:04:24 mth-k8stestetcd-03 /snap/bin/etcd.etcdctl[1584]: cmd.go:114: DEBUG: not restarting into "/snap/core/current/usr/bin/snap" ([VERSION=2.24 2.24]): older than "/usr/bin/snap" (2.25)
<Zic> I have many of it
<admcleod> Zic: not entirely sure if thats going to be related but perhaps worth updating?
<Zic> it was a fresh deployment
<admcleod> Zic: hrm ok
<admcleod> Zic: well. in any case, 4/5 etcd are ok.. underlying network issue? can 'etcd 5' communicate w/ the others?
<kjackal> Zic: I am trying the deployment  now
<admcleod> kjackal to the rescue
<kjackal> not sure admcleod
<kjackal> Zic: you deployed the two extra etcds after the initial deployment had finished? Or did you trigger the deployment with 5 units?
<Zic> kjackal: directly deploying with 5 etcd yep
<Zic> not after
<kjackal> ok , Zic, thanks
<Zic> we're deploying all our CDK cluster with N+2 redundancy
<Zic> so 5 etcd to have at least a quorum of 3
<Zic> ask me if you need further action from me to have more debug logs, I don't have that much for now :(
<Zic> I checked the status of the etcd service, it's stopped
<Zic> and cannot start:
<Zic> http://paste.ubuntu.com/24643047/
<kjackal> Zic: strange... in any case redeploying to see if I can repro. Will let you know if i need more help
<Zic> kjackal: never had this one before, I'm thinking it's just a random bug :(
<Zic> all etcd have the same connectivity configuration
<Zic> I tried to reboot the stuck etcd unit machine, with no new result
<Zic> http://paste.ubuntu.com/24643101/
<Zic> juju debug-log prints a little more after the reboot
<Zic> but nothing new compared to the local /var/log/syslog of the unit machine
<Zic> kjackal: can I tell that unit to restart the charm installation from the beginning?
<kjackal> Zic: no I do not think so. I couldn't repro this bug. My suggestion is to add another etcd unit so you have 5. Also if you could wait a bit, lazyPower and tvansteenburgh will be up shortly, they might be able to offer their opinion
<Zic> ok, I can wait, this is the test cluster instance :)
<kjackal> Many thanks Zic
<Zic> I'm afraid of losing precious debug logs which could interest the CDK team if I go further and drop that silly unit, so I can wait :)
<Zic> (I will go dinner btw)
<kjackal> enjoy
<lazyPower> o/ morning
<tvansteenburgh> Zic: i think the syslog msg is a red herring. might be a charm bug, i'm not sure
<tvansteenburgh> found that status msg here https://github.com/juju-solutions/layer-etcd/blob/ca4c14c52822b41113e7df297e99097016494e07/reactive/etcd.py#L318
<tvansteenburgh> o/ lazyPower
<tvansteenburgh> maybe you can help :)
 * lazyPower is reading backscroll
<lazyPower> hmm, smells like a race during unit registration
<lazyPower> 4/5 came up good, the 5'th refuses to start.
<Naz> Hi, I want to point to an error in the documentation about Constraints.
<Naz> https://jujucharms.com/docs/2.1/charms-constraints says and I quote:
<Naz> In the event that a constraint cannot be met, the unit will not be deployed.  Note: Constraints work on an "or better" basis: If you ask for 4 CPUs, you may get 8, but you won't get 2
<Naz> This is NOT true: I requested --constraints "cores=4 mem=16G" on my laptop, which physically has 2 cores and 8G, and it instantiated a machine with 2 cores and 8G instead of the requested 4 cores/16G RAM.
<BlackDex> Hello there
<BlackDex> i have a juju 1.25 env where a subordinate is stuck
<BlackDex> it hangs on (stop)
<BlackDex> and doesn't get removed
<BlackDex> anyone have ideas how to resolve that?
<tvansteenburgh> Naz: you are using the lxd provider?
<lazyPower> Naz: I dont think constraints like that are honored on the local/lxd provider. I know that behavior is true when using clouds like aws,gce,maas,openstack.
<BlackDex> never mind, i restarted the machine-xx jujud and it worked :S
<Naz> Tvansteen, Yes, I am working on local cloud LXD
<BlackDex> did that before and it didn't work, now it does
<Naz> LazyPower, Yup, I am working locally over LXD
<lazyPower> BlackDex: its not immediately obvious but if a relationship fails during the subordinate charms teardown, it will halt.
 * lazyPower steps away to make coffee
<Naz> @LazyPower, @TvansteenBurgh, You guessed it right, but I think it's better to mention in the docs that this applies to real clouds but not to local/LXD,...
<Naz> Is it also the case for Locally deployed OPENSTACK?
<lazyPower> Naz: well, thats a slightly different story. LXD is based around density and will do everything it can to colocate your workloads on the requested hardware. The only type of allocation you could do would be to set cgroup limits, so you can over-request a machine as it were.
<rick_h> bdx: you get he hangout link ok?
<lazyPower> now when you deploy openstack, and you use nova - you'd run into issues because nova wouldn't have enough vcpu's to -dedicate- to that requested workload and i would suspect it stays in pending
<lazyPower> but if you used nova-lxd, it would likely be happy trying to cram whatever you throw at it wherever you have nova-lxd.
<Naz> @LazyPower, I see, thank you, I have another question, please, how to upgrade the memory when working on localCloud/LXD?
<rick_h> bdx: bueller, bueller
<Naz> So in other words, If I have a machine instantiated as constraints 2G_RAM, and based on metrics, found it's struggling and want to upgrade it to 4G_RAM?
<lazyPower> Naz: so depending on your cloud provider. I know on GCE you can stop the unit and change the instance type, and once you start it back up, it will inform the controller of its new ip address (i'm unsure if it re-reports its hardware) but thats one way to do it.
<lazyPower> another option would be to add a unit with different constraints, and then remove the unit with the lower constraints (not necessarily ideal with stateful workloads)
<Naz> @LazyPower, yes, I understand, however, first option induces an OUTAGE, second is seamless from End-user perspective. However, I am interested in inspecting first option further please. what do you mean by stop? do you mean juju remove-unit?
<lazyPower> Naz: no not at all. When i say stop i refer to stopping the unit in the cloud control panel. Most clouds require you to have the instance in a stopped/power-off state in order to change hardware configuration
<Naz> or juju remove-machine?
<lazyPower> Naz: and ideally, you would be running in a highly available scenario to mitigate any outage.
<Naz> I think you meant Remove-machine and recreate another new one with higher constraints?
<Naz> @LazyPower, Agree with you on HA :)
<lazyPower> Naz: The first method would not involve any changes to the model using juju. You would be issuing these commands against your cloud provider to halt the instance and change its hardware profile.
<BlackDex> hmm i have a problem with relations
<BlackDex> they won't complete
<BlackDex> and this causes the services to restart every x seconds (if not less)
<BlackDex> the cluster-relation-changed keeps getting called
<Naz> @LazyPower, on local LXD Cloud, can I do some orchestration on resources like the ones offered for LXC?
<lazyPower> Naz: i'm not sure what you're asking me
<lazyPower> BlackDex: what charm is this?
<BlackDex> cinder
<Naz> @LazyPower, I want to do the following scenario: Start with limited resources machine , let's say 1 CPU, then during runtime, based on Real-time metrics, increase the CPU to 2. For example: in LXC: lxc config set my-container limits.cpu 1
<Naz> @LazyPower, how could I do this in JUJU?
<lazyPower> Naz: thats a bit beyond me, i dont know if we offer anything like that. I dont think juju is setting any constraints on the local provider.
<lazyPower> Naz: sorry i really dont know, i'd rather tell you i dont know than misinform you. The best i can say at this time is to try it and inspect the lxc profile post deployment. if you dont see any resource limits, its not something we support today but with a feature-request we can look into it.
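Following lazyPower's suggestion to inspect the profile post deployment, the LXD side looks roughly like this (a sketch; `juju-a1b2c3-0` is a placeholder container name, find the real one with `lxc list`):

```shell
# Dry-run by default: prints each command instead of executing it.
# Set DRYRUN= (empty) to run them on the LXD host for real.
run() { echo "+ $*"; [ -n "${DRYRUN:-1}" ] || "$@"; }

run lxc list                                    # find the container name
run lxc config show juju-a1b2c3-0               # look for any limits.* keys
run lxc config set juju-a1b2c3-0 limits.cpu 2   # raise the limit by hand
```

If `lxc config show` reports no `limits.*` entries, that matches lazyPower's guess that juju is not setting resource constraints on the local provider.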
<Zic> lazyPower: I'm back, if you need further info :)
<Zic> (for the 1/5 etcd which stuck at registration)
<lazyPower> Zic: just one unit didn't turn up?
<Zic> yup
<lazyPower> Zic: juju debug-log --replay -i unit-etcd-#  | pastebinit
<lazyPower> on the stuck unit
<lazyPower> i suspect a race during registration
<Naz> @LazyPower, Ok, I understand, Could you please point some scenario on how you think I can do orchestration in juju? (Orchestration is getting some metrics and reacting upon it to answer the demand)
<lazyPower> Zic: i didn't get back home until late last night so i haven't had a chance to fetch the resources, but i'll def. dig into that tonight for tomorrow.
<Zic> http://paste.ubuntu.com/24644010/ lazyPower
<lazyPower> Naz: elastisys has modeled an orchestrator for autoscaling. 1 moment while i grab you the link
<lazyPower> Naz: https://jujucharms.com/u/elastisys/charmscaler/
<Naz> @LazyPower, Great, I will have a look into that :)
<Zic> lazyPower: yup, I'm deploying this new test cluster in 1.7.3 at least to let the customer test his pod in 1.7.3, but I'm not giving up on testing the 1.5.3 -> 1.7.X upgrade, will wait for what you discover with our specific charm rev
<lazyPower> Zic: unit-etcd-2: 10:17:11 INFO unit.etcd/2.juju-log cluster:0: Invoking reactive handler: reactive/etcd.py:279:register_node_with_leader
<lazyPower> so it sent registration detail to the leader. can you hop over on the leader and run `etcd.etcdctl member list`
<Zic> http://paste.ubuntu.com/24644023/ <= lazyPower
<lazyPower> weird
<lazyPower> member fd9d260cab7a11dc is unreachable: no available published client urls <-- it completed AND joined at one point
<lazyPower> if it had not joined it would say (unstarted)
<Zic> hmm, and more weird
<Zic> where is 03 and 05 ?
<Zic> one of them is missing because of error, ok
<Zic> but the other one? :D
<Zic> OH
<Zic> lazyPower: did not see something, wait
 * lazyPower waits
<Zic> 2 is missing
<Zic> but one is in active/idle
<Zic> did not see the error message so...
<Zic> http://paste.ubuntu.com/24644044/
<Zic> from the beginning I spoke about etcd/2
<Zic> but etcd/4 is also in problem
<lazyPower> i would say unit 2 and 4 raced
<lazyPower> and are now in a deadlock
<lazyPower> you can juju remove-unit on those and re-add them and it should sort itself
<Zic> the two I just added to the default charm-bundle?
<Zic> (I alway scale etcd at 5 instead of 3 in CDK)
<Zic> will try that lazyPower
<lazyPower> so remove the errored unit, `juju remove-unit etcd/4`  wait for it to complete
<lazyPower> if the cluster still reports healthy, `juju remove-unit etcd/2`
<lazyPower> if cluster continues to report healthy, then you can juju add-unit etcd -n 2
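Those three steps, as a dry-run sketch (the unit numbers are the deadlocked ones from this conversation; `etcd/0` standing in for the leader is an assumption you would confirm from `juju status`; set DRYRUN= to execute for real):

```shell
# Dry-run by default: prints each command instead of executing it.
run() { echo "+ $*"; [ -n "${DRYRUN:-1}" ] || "$@"; }

run juju remove-unit etcd/4     # errored unit first; wait until it is gone
run juju run --unit etcd/0 'etcd.etcdctl cluster-health'  # still healthy?
run juju remove-unit etcd/2     # then the other deadlocked unit
run juju add-unit etcd -n 2     # back up to a five-unit cluster
```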
<Zic> on which machine "juju add-unit etcd -n 2" will add?
<Zic> hmm btw: etcd/4                    error     idle       9        mth-k8stestetcd-05.aws-us-east-1    2379/tcp        hook failed: "cluster-relation-broken"
<lazyPower> there's a marginal chance that the units during turn up will miss another unit's registration request and attempt to register, you managed to hit that
<lazyPower> its a known deficiency because the coordination relies on querying the leader for the member list before it attempts registration. it looks for a non-healthy non-ready unit in the member list, if it finds it, it halts. if its not present it will declare its registering on the peer interface and attempt self registration
<lazyPower> Zic: thats expected, the unit itself is in a broken state. the leader will deregister the unit from the cluster if it has any details in its registration data store
<lazyPower> Zic: juju resolved --no-retry until the unit is gone.
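That "resolved until the unit is gone" step can be looped from the client (a sketch, assuming the juju 2.x `juju resolved --no-retry <unit>` form, which clears the failed hook without re-running it):

```shell
# Keep clearing the failed hook until the unit drops out of status.
unit=etcd/4
while juju status "$unit" 2>/dev/null | grep -q "$unit"; do
    juju resolved --no-retry "$unit"
    sleep 10
done
```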
<Zic> ok
<Zic> it switched to "terminated" and then gone :)
<Zic> hmm, it removes also the machine from the controller :>
<lazyPower> Zic: theres a way to change that, i think its the provisioner-harvest-mode model-config option
<lazyPower> Zic: https://jujucharms.com/docs/2.1/models-config#juju-lifecycle-and-harvesting
<Zic> I can just respawn the 2 etcd bugged machine after, not important for this test cluster :)
<Zic> do I remove etcd/2 also now?
<lazyPower> the ones that are stuck in registration limbo, yeah
<BlackDex> lazyPower: The cinder charm keeps running the cluster-relation-changed hook :(
<BlackDex> sometimes it restarts haproxy and apache2, and sometimes it tells that it is already running
<lazyPower> BlackDex: pop over to #openstack-charms, they have the most experience with those charms as the community maintaining them :)
<BlackDex> oke :)
<kjackal> Question: Can you get the models name from within a running charm?
<kjackal> is ther an env variable?
<jrwren> not AFAIK and why would you want to, a charm should definitely not behave differently based on the name of hte model in which it is deployed.
<kjackal> jrwren: thanks for the quick reply. Yes you are probably right on this
<dakj_> Hello guys, I've a question for you: is the landscape-client service deployed on a node working? Because I don't know how it works after the deploy. Thanks
<Zic> lazyPower: redeploying two new etcd, it's always stuck in "waiting/idle Waiting for unit to complete registration." :o
<lazyPower> Zic: hmm, that removal should have kicked the one thats stuck
<lazyPower> Zic: can you remote into the master and issue a member list to pastebin again?
<lazyPower> i'm going to validate an assumption i have of whats blocking the other 2 units
<lazyPower> Zic: i'll also be latent, i'm in sig-onprem taking notes.
<Zic> http://paste.ubuntu.com/24644681/ <= lazyPower
<Zic> no problem, I'm on weekend (yaaaiii \o/) in two hours :)
<Zic> so I will also be latent time to go back home
<lazyPower> Zic: on the leader `etcd.etcdctl member remove fd9d260cab7a11dc`
<lazyPower> should unstick those pending units
<lazyPower> looks like the unit that biffed registration didn't actually get removed, which is a whole different issue i'm going to have to look into if i can reproduce it
<Zic> lazyPower: did that, got a new strange thing :(
<Zic> one of the 2 new is full OK
<Zic> etcd/5                    active    idle   14       mth-k8stestetcd-05.aws-us-east-1    2379/tcp        Healthy with 5 known peers
<Zic> the other one is not OK :
<Zic> etcd/6                    active    idle   13       mth-k8stestetcd-03.aws-us-east-1    2379/tcp        Errored with 0 known peers
<Zic> etcdctl cluster-health return all is healthy except this one with:
<Zic> member 34f56278a8fdd1cf is unreachable: no available published client urls
<Zic> :(
<lazyPower> Zic: ok, what happened?
<lazyPower> ah, lag on my end, 1 sec
<lazyPower> Zic: this is the latest revision of the charm?
<Zic> yup
<Zic> deployed from the latest bundle-charm
<Zic> never upgraded, all fresh
<Zic> (from this morning)
<Zic> (#38 revision of canonical-kubernetes)
<ryebot> Zic: Can we see the journalctl logs of snap.etcd.etcd on mth-k8stestetcd-03.aws-us-east-1 ?
<Zic> yup
<Zic> http://paste.ubuntu.com/24645361/
<ryebot> thanks
<ryebot> wow, no errors or anything, just implodes
<ryebot> Zic: can you see if any processes are currently listening to port 2379 on that box?
<ryebot> `netstat -plant | grep LISTEN | grep 2379`
<Zic> an old man told me that "netstat is old, use 'ss' instead", but it's offtopic
<Zic> will try that
<ryebot> haha sure whatever works :)
 * ryebot googles ss in a desperate attempt to recover lost youth.
<Zic> (was a joke at my office for an "old" coworker who always uses "ifconfig" and "netstat")
<ryebot> xD
<Zic> ryebot: you can use the same parameters as netstat
<Zic> so ss -plant will work
<Zic> the output may differ a bit
<ryebot> ah cool
<ryebot> Tried it, got ESTABBED a bunch of times. Not sure how I feel about that.
<Zic> in any case: this netstat/ss does not return anything
<Zic> :(
<ryebot> okay, well that rules port conflicts out
<ryebot> hmm let me ruminate on these logs a bit
<ryebot> Zic: I'm guessing systemctl restarting etcd results in the same failure after a few moments?
<ryebot> Zic: Could I also see the logs from a good etcd?
<ryebot> lazyPower: Is there a debug logging mode for etcd?
<ryebot> nvm, --debug true seems to do it
<ryebot> Zic: can you also edit /etc/systemd/system/snap.etcd.etcd.service to add the --debug true flag?
<ryebot> lazyPower: if you come at me with a lmgtfy, well, I deserve it. xD
<Zic> ryebot | Zic: I'm guessing systemctl restarting etcd results in the same failure after a few moments? -> yes
<ryebot> +1
<Zic> ryebot | Zic: Could I also see the logs from a good etcd? -> http://paste.ubuntu.com/24645416/
<ryebot> thanks
<Zic> http://paste.ubuntu.com/24645430/ <= for --debug
<ryebot> Zic: fantastic, thanks
<ryebot> Zic: There's an error in there I'm trying to get to the bottom of. Give me a little time to research.
<Zic> np :)
<ryebot> Zic: Can you ls the contents of /var/snap/etcd/common and /var/snap/etcd/current for me?
<lazyPower> ryebot: nah :) I'm loving the fact you stepped in to lend a hand <3
<Zic> http://paste.ubuntu.com/24645509/
<Zic> ryebot: ^
<ryebot> thanks
<ryebot> lazyPower happy to ;)
<ryebot> Zic: Can you share the contents of /var/snap/etcd/common/etcd.conf?
<Zic> http://termbin.com/pwyc
<ryebot> Zic: thanks
<Zic> (root@mth-k8stestetcd-03:~# cat /var/snap/etcd/common/etcd.conf | nc termbin.com 9999 -> it's like pastebinit but without the pastebinit client :>)
<Zic> don't know why I didn't use that before
<ryebot> cool
<ryebot> Zic: Can you get me the journalctl logs of the failing etcd again, but this time use `-o cat` in the journalctl flags?
<ryebot> Zic, can you also try replacing the ETCD_INITIAL_CLUSTER line in /var/snap/etcd/common/etcd.conf with the following, and then restart the snap.etcd.etcd service?
<ryebot> ETCD_INITIAL_CLUSTER="etcd6=https://mth-k8stestetcd-03.aws-us-east-1:2380,etcd1=https://mth-k8stestetcd-01.aws-us-east-1:2380,etcd0=https://mth-k8stestetcd-02.aws-us-east-1:2380,etcd3=https://mth-k8stestetcd-04.aws-us-east-1:2380"
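The hand-edit ryebot describes can be scripted. A sketch: the `update_initial_cluster` helper is hypothetical, the paths are the snap paths from this conversation, and the ETCD_INITIAL_CLUSTER value is the exact one posted above; the two commented lines are what you would actually run on the affected unit:

```shell
# Rewrite the ETCD_INITIAL_CLUSTER line of an etcd.conf in place.
new='ETCD_INITIAL_CLUSTER="etcd6=https://mth-k8stestetcd-03.aws-us-east-1:2380,etcd1=https://mth-k8stestetcd-01.aws-us-east-1:2380,etcd0=https://mth-k8stestetcd-02.aws-us-east-1:2380,etcd3=https://mth-k8stestetcd-04.aws-us-east-1:2380"'
update_initial_cluster() {  # $1 = path to etcd.conf
    sed -i "s|^ETCD_INITIAL_CLUSTER=.*|$new|" "$1"
}
# On the unit itself:
# update_initial_cluster /var/snap/etcd/common/etcd.conf
# sudo systemctl restart snap.etcd.etcd.service
```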
<Zic> http://paste.ubuntu.com/24645558/
<ryebot> Zic: thanks
<Zic> http://paste.ubuntu.com/24645563/ <= ryebot for the ETCD_INITIAL_CLUSTER
<ryebot> Zic: thanks, same error in journalctl?
<Zic> http://paste.ubuntu.com/24645573/
<Zic> seems so
<Zic> (need to back to home, will retrieve my backlog from there o/)
<ryebot> Zic: o/
<ryebot> lazyPower: I need to grab lunch and continue with LF stuff, but I'll try to come back and help in a bit.
<lazyPower> ty ryebot
<ryebot> lazyPower Zic: We really need the end of that error line. I thought adding -o cat to the journalctl command would grab it, but it's still cut off
<ryebot> lazyPower Zic: but the end of that line should have the actual error: https://github.com/coreos/etcd/blob/master/etcdserver/server.go#L306
<lazyPower> ryebot: journalctl -xn --no-pager
<lazyPower> should get you what you're looking for
<rick_h> reminder Juju Show in 49 minutes! lazyPower hatch jrwren beisner jamespage kwmonroe and anyone else interested
<ryebot> Zic: ^ would be super helpful
<ryebot> lazyPower: u rok thanks
 * rick_h goes to get coffee to prep
<beisner_> hi rick_h - on deck!
<lazyPower> ryebot: that was all stack overflow ;)
<lazyPower> anastasiamac: super happy to see you've got some joy on https://bugs.launchpad.net/juju/+bug/1627127. \o/
<mup> Bug #1627127: resource-get gets hung on charm store <cdo-qa> <cdo-qa-blocker> <juju:In Progress by anastasia-macmood> <https://launchpad.net/bugs/1627127>
<rick_h> Juju Show watch it from https://www.youtube.com/watch?v=VDXolq_eGkU and "join the conversation" at https://hangouts.google.com/hangouts/_/ytl/iwKTaiURK50IjCs9jO1d72S5s8ULBmDFiC7D91F9MiQ=?eid=103184405956510785630&hl=en_US
<magicaltrout> talking of ecosystem stuff
<rick_h> woot woot
<rick_h> and openstack stuff
<magicaltrout> Thomas from Tengu gave a good talk at Apachecon that included a bunch of juju slides
<rick_h> big topic of the day
<magicaltrout> and a bunch of people i bumped into were discussing snaps as a packaging format for various apache projects
<rick_h> magicaltrout: openstack folks doing a lot of snapping as well
<rick_h> magicaltrout: seems like a good thing :)
<magicaltrout> indeed
<magicaltrout> i'm gonna dive into it properly at some point soon
<rick_h> let us know if you need any help
<bdx> rick_h: that link is borked
<bdx> https://hangouts.google.com/hangouts/_/ytl/iwKTaiURK50IjCs9jO1d72S5s8ULBmDFiC7D91F9MiQ=?eid=103184405956510785630&hl=en_US
<rick_h> bdx: which link?
<rick_h> https://hangouts.google.com/hangouts/_/cfovp34gqrf2vliprctda575c4e bdx
<rick_h> bdx: others are in on the link not sure what's up
<magicaltrout> oh i also started charming up the world's fastest analytic database today as well for our data platform & it will be free to use
<magicaltrout> which is good for BI apps on the Juju ecosystem
<magicaltrout> marcoceppi: can you go through the Developer credits backlog when you get a spare slot? :)
<magicaltrout> also can someone review my gitlab charm
<magicaltrout> not cause i'm overly bothered about the promotion, i just want to get a code review to see if I'm following the process for the review queue in general
<magicaltrout> </adhoc requests>
<beisner_> #link openstack charm guide:  http://bit.ly/2rUKpnR
<rick_h> ty
<Budgie^Smore> did someone miss me?
<Budgie^Smore> o/ juju world
<lutostag> does the juju model-config no-proxy option support wildcarding or the 10.0.0.0/21 netmasking?
<anastasiamac> lazyPower: tyvm!
 * lutostag to myself, nope it doesnt juju model-config no-proxy=$(printf '%s,' 10.5.0.{1..255}; echo -n localhost) # from https://unix.stackexchange.com/a/23478
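For reference, a portable variant of lutostag's one-liner (same idea, but `seq` instead of bash-only brace expansion, since no-proxy takes neither wildcards nor CIDR netmasks, only an explicit comma-separated list):

```shell
# Build "10.5.0.1,10.5.0.2,...,10.5.0.255,localhost" explicitly.
no_proxy_list=$(printf '10.5.0.%s,' $(seq 1 255); printf 'localhost')
# juju model-config no-proxy="$no_proxy_list"   # as in the one-liner above
```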
#juju 2017-05-25
<kjackal> Good morning Juju World!
<Zic> lazyPower / ryebot : hi, a night passed for me, let me backlog and connect to the VPN of the office and I will send you that :)
<lazyPower> Zic: sounds good. Thanks for digging into this
<magicaltrout> https://www.youtube.com/watch?v=HI0x0KYChq4
<lazyPower> o/ magicaltrout
<lazyPower> did you figure out your DNS conundrum?
<rick_h> for the record, I love the word conundrum
<rick_h> and mornings and such
<jam> rick_h: i wonder if that says a lot about you
<jam> but it is a fun word
<jam> rick_h: not to be confused with corundum
<wpk> coriander?
<rick_h> lazyPower: I feel like if we sat down to cover the things that "say a lot about me" we'd be here a while :)
<rick_h> wpk: not dinner time yet, ssssh
<Mmike> lazyPower, rick_h - hello, lads. Who do I ping these days for review.charmstore.com, do you know, maybe? Tim?
<magicaltrout> i didn't lazyPower
<magicaltrout> but i'm on vacation next week so i'm over it
<lazyPower> magicaltrout: fair enough
<lazyPower> rick_h: :)
<lazyPower> Mmike: what can I help you with?
<magicaltrout> when ya'll get bored......
<magicaltrout> 19:59 < magicaltrout> marcoceppi: can you go through the Developer credits backlog when you get a spare
<magicaltrout>                       slot? :)
<magicaltrout> 20:00 < magicaltrout> also can someone review my gitlab charm
<magicaltrout> 20:00 < magicaltrout> not cause i'm overly bothered about the promotion, i just want to get a code
<magicaltrout>                       review to see if I'm following the process for the review queue in general
<magicaltrout> ta
<Mmike> lazyPower, trying to request a review for a charm, but I end up with "review.jujucharms.com didn't send any data." and "ERR_EMPTY_RESPONSE" after I click the 'Submit' button on the request-review page
<lazyPower> Mmike: ok thats interesting. Yeah tvansteenburgh would be a good person to ping on that, but i have news for both you and magicaltrout.
<lazyPower> we haven't publicly announced this yet as there's still more planning/discussion that needs to happen, but we'll be decommissioning the review queue. Which in turn is going to bring in some new process/policy for charms.
<magicaltrout> aww but this is like the first time I've ever tried to do anything properly as well....
<lazyPower> magicaltrout: you wore us down man. hows it feel? :)
 * lazyPower assigns credit/blame to magicaltrout
<magicaltrout> ironically the previous time i attempted to get some stuff reviewed was when you decommissioned the old review queue
<magicaltrout> so its plausible
<Mmike> lazyPower, yup, just had a short chat with dosaboy, he mentioned something in that manner
<Mmike> lazyPower, but in the meantime I do need to get charm changes to the charmstore! :) I'll ping tim in an hour or so, it's too early for him yet
<tvansteenburgh> Mmike: which Tim are you looking for?
<Mmike> tvansteenburgh, hello! you sire. Having issues when trying to put a charm up for a review in the review.jujucharms.com
<tvansteenburgh> Mmike, looking, one sec
<Mmike> thnx
<tvansteenburgh> Mmike: hrm, revq is getting a 503 from the lp api at the moment...
<tvansteenburgh> Mmike: asking about it in #launchpad
<Mmike> tvansteenburgh, ack, thank you for looking into it
<tvansteenburgh> Mmike: i've asked wgrant to comment over there, so i'll wait and see what he says
<tvansteenburgh> Mmike: i've seen this once before - it seems to happen to some lp accounts but not others
<tvansteenburgh> Mmike: one way forward would be to have someone else submit the review, if the charm is maintained by a team
<Mmike> tvansteenburgh, ack, thank you. It's no rush, I'd say.
<Mmike> tvansteenburgh, well, it's maintained by the mongodb-charmers team, which I'm part of... but I always pushed the charm to my personal namespace and then asked for a review
<Mmike> tvansteenburgh, you think I should try doing it from mongodb-charmers namespace /
<Mmike> ?
<tvansteenburgh> Mmike: ok, so you've never had this prob before? that gives me hope that it's temporary
<tvansteenburgh> Mmike: yeah, worth a try
<Mmike> tvansteenburgh, nope, this is a first timer. not that I did tons of reviews either
<tvansteenburgh> Mmike: or just wait to see what wgrant says about that error
<tvansteenburgh> up to you
<Mmike> i'll wait a bit, have some other things on my plate (NOT lunch, yet :D ), as I said, no rush yet :)
<Mmike> tvansteenburgh, thank you for the help
<tvansteenburgh> Mmike: k, np
<jamespage> o/
<jamespage> tvansteenburgh: hey - can we get a new charmhelpers released out? I'm working on landing support for OpenStack Pike and our reactive charms need a released version with the right bits...
<tvansteenburgh> jamespage: will do
<jamespage> tvansteenburgh: also can we talk about moving charmhelpers to github?
<tvansteenburgh> yes!
<jamespage> I'd like to help make that happen if that's useful
<tvansteenburgh> that'd be great
<jamespage> I feel we can make a lot of use of things like travis which is harder on LP
<tvansteenburgh> yes
<jamespage> tvansteenburgh: maybe the first step is to just get the code migrated over; then we can iterate from that point onwards
<jamespage> tvansteenburgh: tell you what - lemme post to that effect on #juju
<jamespage> ML that is
<tvansteenburgh> +1
<rick_h> erik_lonroth: for our party in a few feel free to just follow this link: https://hangouts.google.com/hangouts/_/canonical.com/rick-harding
<tvansteenburgh> jamespage: thanks!
<jamespage> metaphorical bull grabbed by the horns
<jamespage> shout me down if I'm sounding truly insane :-)
<tvansteenburgh> :D
<jamespage> marcoceppi: ^^
<tvansteenburgh> jamespage: charmhelpers-0.16.0 uploaded to pypi
<jamespage> tvansteenburgh: tvm
<jamespage> hmm is that actually a thing
<jamespage> ta very much
<tvansteenburgh> np
<rick_h> erik_lonroth: around for our chat?
<magicaltrout> the heat is getting to jamespage
<jamespage> magicaltrout: could be could be
<jamespage> magicaltrout: hows it on your side of the county?
<magicaltrout> Warm
<magicaltrout> although i'm off on the broads for a week next week
<magicaltrout> so hopefully it'll stay
<jamespage> magicaltrout: which part?
<magicaltrout> jamespage: Boating Stalham down to Hickling then down Acle then up to bure marshes(bewilderwood for the kids)
<jamespage> magicaltrout: nice
<jamespage> magicaltrout: my first job was working for a development company in stalham
<jamespage> meadowhouse bar lazer - I wonder what happened to them?
<magicaltrout> we'll see, never done 5 days on a small boat with the kids! ;)
 * jamespage goes to look
<magicaltrout> hey lazyPower i could see you fitting into the narrowboat lifestyle nicely ;)
<magicaltrout> cruising up and down the canals
<lazyPower> magicaltrout: its not that far of a stretch to the imagination
<lazyPower> i pretty much inhabit a narrow living space now :)
<admcleod> magicaltrout: did you just plan a trip to 'places with ridiculous names' then?
<magicaltrout> hell admcleod i'm from Yorkshire where we get used to americans pronouncing Ruswarp and Grosmont very wrong :)
<admcleod> presumably they're there because they thought they were going to lie-chester square and got the wrong train
<magicaltrout> ah you've met those people then
<admcleod> i was one :(
<magicaltrout> thats what happens with you hold a british passport but don't really like the country! ;)
<admcleod> hey, i like scotland
<magicaltrout> they don't have a passport
<magicaltrout> ... yet
<marcoceppi> jamespage: I just created the github repo, in advance of the vote passing
<jamespage> marcoceppi: awesome - url?
<marcoceppi> github.com/juju/charm-helpers
<rick_h> bdx: ping
<bdx> yo
<rahworkx> hello all, Was wondering if anyone could explain how the mongodb charm consumes storage ? any options for eps backed storage?
<rahworkx> eps=ebs
<Zic> lazyPower / ryebot : sorry, I was far away from my computer this day, it was not planned :( -> here is what you asked: http://paste.ubuntu.com/24657948/
<lazyPower> Zic: fantastic, this gives me someplace to start. That etcd has an invalid cluster config from the relationship vs what the leader is reporting
<dakj_> Hello guys, a question: I'd like to create a new charm, an example could be a video surveillance server like MotionEye, and put it on the juju store. What must I do? Which steps should I take? Is there a guide to help build a charm? Thanks
<hatch> dakj_ you should probably start here: https://jujucharms.com/docs/stable/authors-intro
<hatch> and feel free to ask questions here or on the juju mailing list
<dakj_> hatch: thanks a lot, I'll begin from that.
<hatch> dakj_ we're still working on the documentation so if you find anything lacking/missing/wrong feel free to file an issue/pr here: https://github.com/juju/docs
<dakj_> hatch: ok perfect.
<dakj_> hatch: I'm seeing that Nginx is not present on the juju store; if you look for "web" there's a bundle called "Web Infrastructure In A Box" that includes Nginx, but if you try to find it alone there is just
<dakj_> "nginx passenger", why?
<hatch> nobody has written a nginx charm, typically it's installed within a charm as an internal server
<hatch> nginx passenger is quite old, I wouldn't recommend using it as it only supports precise
<hatch> dakj_ with all that said though, nothing saying you couldn't write one :)
<dakj_> hatch: I try it!!!
<hatch> :D
<dakj_> hatch: I'm following that https://jujucharms.com/docs/2.1/authors-charm-building
<hatch> I can see an argument for nginx being a standalone charm, and a subordinate
<marcoceppi> not really a subordinate, maybe a standalone charm
<marcoceppi> it's too hard to model the intricacies required to configure nginx as a subordinate where you wouldn't ever switch out that charm for another web server
<marcoceppi> which is why NGINX exists as a layer today
<hatch> marcoceppi ohh there is a layer for it
<hatch> TIL
<hatch> :)
<marcoceppi> absolutely
<marcoceppi> http://interfaces.juju.solutions/ about half way down
<hatch> and there it is http://interfaces.juju.solutions/layer/nginx/
<marcoceppi> it's used in quite a few charms
<hatch> marcoceppi would be cool if we could create a dependency graph for layers to see what charms are using which layers
<marcoceppi> it already exists
<marcoceppi> but the charm store refuses to ingest any dot files, so they get excluded from the store quite often
<hatch> ahhh
<marcoceppi> if you want to tackle a year+ old bug: https://github.com/CanonicalLtd/charmstore-client/issues/204
<hatch> :D
<dakj> marcoceppi & hatch: I'm sorry but what does that mean?
<hatch> dakj nothing to worry about :)
<hatch> charms can be built by mixing layers, and we were just talking about a way to know which charms use them
<dakj> hatch: is it too complicated to begin? Because I don't see for example a LAMP or LEMP bundle; is it for the same reason marcoceppi explained?
<dakj> hatch: sorry, I meant to say: using NGINX as an example to build a charm is too hard as a starting point
<dakj> as the beginning
<hatch> dakj so the first thing you might want to try to do is simply creating a charm which installs something
<hatch> regardless of what it is, then expand from there
<dakj> ok
<hatch> dakj on the docs navigation there is a heading "Developer Guide" It's probably best to read at least Getting Started, Event Cycle, Charm Layers, Interface Layers first
<dakj> hatch: ok I'll begin from that to understand how it works
#juju 2017-05-26
<tychicus> is there an easy way to stop all machine agents?
<tychicus> I'm getting tons of these in the juju controller logs: manifold worker returned unexpected error: cannot resume transactions: cannot find transaction ObjectIdHex("592615a781efc905b5ad13e0")
<tychicus> from what I've read mgopurge will fix this
<tychicus> but to run mgopurge you should stop the machine agents first
<tychicus> I'm guessing that it can be done with juju run, just loop through all the machines and run systemctl stop jujud-machine-n.service
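A sketch of the loop tychicus describes (assumptions: juju 2.x, systemd-based machines with agents named `jujud-machine-<n>`; machine numbers are hard-coded here, where in practice they would come from `juju machines`):

```shell
# Dry-run by default: prints each command instead of executing it.
# Set DRYRUN= (empty) to really stop the agents before running mgopurge.
run() { echo "+ $*"; [ -n "${DRYRUN:-1}" ] || "$@"; }

for m in 0 1 2; do
    run juju run --machine "$m" "sudo systemctl stop jujud-machine-$m.service"
done
```

Note the chicken-and-egg caveat: once the controller's own agents are stopped, `juju run` itself no longer works, so controller machines may need a direct ssh instead.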
<kjackal> Good morning Juju World!
<Zic> lazyPower / ryebot: I'm not working today, but I'm fully available on IRC as usual, if you need more info. Do you know if it's just a race-condition, so if I restart the CDK deployment from scratch on Monday, will it have a chance to just finish without any incidents?
<Zic> just to know if I need to plan that I will be a little late on this delivery :)
<dakj> Hello guys: someone have had experience with the deploy of Landscape-Client charm?
<lazyPower> Zic: i've not been able to reproduce the issue
<Zic> lazyPower: ok, I will start a new install Monday, thanks :)
<lazyPower> Zic: keep me in the loop, i'm not saying that race condition isn't there :)
<lazyPower> i'm just saying i haven't been able to reproduce it so i cant debug it yet
<lazyPower> Zic: additionally i closed my browser and lost the status output i need to track down those bin/charm revs, would you mind terribly pastebinning me one last time? I'll copy it out to a note.
<Zic> lazyPower: the header of juju status with the "buggy" charm rev? I can recover that easily
<Zic> lazyPower: http://paste.ubuntu.com/24667484/
<lazyPower> Zic: thanks a bunch
<Budgie^Smore> o/ juju world
#juju 2018-05-21
<wallyworld> vino: did you see my email and did it make sense? we should chat when you are free so i can clarify various aspects
<vino>  wallyworld: i am waiting for u
<wallyworld> vino: ok, let's chat now
<wallyworld> am in standup HO
<thumper> veebers: ping
<veebers> thumper: pong, hows things?
<thumper> veebers: not too bad, can you chat now?
<veebers> thumper: I can in like 3 minutes?
<thumper> sure, ping when you're ready
<veebers> thumper: ready when you are
<thumper> veebers: coming
<kelvinliu_> veebers, would u mind taking a look at a tiny PR when u have time, thx -> https://github.com/juju/juju/pull/8733
 * veebers looks
<veebers> kelvinliu_: have you tested that change?
<veebers> kelvinliu_: if not I can kick of a run on a jenkins node for completeness sake
<kelvinliu_> it's running now, will take around 30 mins
<veebers> kelvinliu_: ah, ack. Change LGTM, will comment on the PR
<veebers> kelvinliu_: essentially if the add-k8s command works thats good enough (nothing else has changed here
<kelvinliu_> the cmd should just work.
<kelvinliu_> veebers, wow tests failed at a different failure.
<veebers> kelvinliu_: pastebin?
<kelvinliu_> https://pastebin.ubuntu.com/p/Cv6N8Cbvqz/
<kelvinliu_> there was a different error which was my fault, but now running into controller not found(event can't bootstrap)
<veebers> kelvinliu_: looks like a juju error: ERROR detecting credentials for "localhost" cloud provider: adding certificate "juju": Unknown request type
<veebers> kelvinliu_: the controller not found errors are cleanup attempts, unfort they're not always clear if it's a "this is a normal cleanup attempt, it's ok if they're not found" or "I expected to find this but it's not there"
<kelvinliu_> if self.existing_controller is None:
<kelvinliu_>             raise NoExistingController()
<kelvinliu_> is it because controller not found, then raise this error?
<veebers> kelvinliu_: no, that's a bad trace not sure why it's mentioning the NoExistingController. You can see that it's attempting to bootstrap something (well after jujupy attempts to use an existing controller) (the line: ERROR Command '('juju', '--debug', 'bootstrap', '--constraints',. . . .)
<veebers> kelvinliu_: ah I see why it mentions it, it's crappy but makes sense. There is an "except NoExistingController:" and within that we "with self.new_bootstrap(...) yield" it's within new_bootstrap that the actual error occurs. That's why we see "During handling of the above exception, another exception occurred:"
<veebers> kelvinliu_: the issue is the "juju bootstrap" command failed, as per the ERROR seen there
<kelvinliu_> weird, i can bootstrap manually.
<babbageclunk> Mechanical review for removing tags from machine server IDs, anyone? https://github.com/juju/juju/pull/8734
<veebers> kelvinliu_: what command are you using to bootstrap? (it would be interesting as well to see what's in the --config /tmp/xxx.yaml passed into the bootstrap command)
<babbageclunk> wallyworld: Could you take a look at https://github.com/juju/juju/pull/8734 plz?
<wallyworld> babbageclunk: sure, meeting will be done soon
<kelvinliu_> veebers, is this config file created by bootstrap py script?
<veebers> kelvinliu_: aye it is
<babbageclunk> wallyworld: thanks
<wallyworld> babbageclunk: done!
<babbageclunk> wallyworld: awse
<kelvinliu_> hi veebers u r right, the error is related with detecting credentials for "localhost" cloud provider: adding certificate "juju": Unknown request type
<veebers> kelvinliu_: ack, not sure why you're seeing it though
<kelvinliu_> can't remember which branch my current juju was built from. I re-installed it from snap, it works. need to investigate later about this issue
<manadart> Can I get a review of: https://github.com/juju/juju/pull/8730 ?
<manadart> Small one for review: https://github.com/juju/juju/pull/8736
<srihas> hi, how can we tell a charm to look for custom apt key while installing apt packages on the nodes?
<acwork> hello can anyone tell me how I can change system settings after reboot. For example each time I reboot a ceph monitor the process comes up on the wrong listener. I have changed the ceph.conf files but when I reboot they get overwritten.
<acwork> Where is juju storing the system settings for a charm? I changed the /etc/interfaces.d/50-cloud-init.cfg setting but this has no effect on reboot.
<bdx> srihas: layer-apt
<srihas> bdx: thanks
<manadart> Another one for review: https://github.com/juju/juju/pull/8738
<manadart> Mostly a backport of 2.4 changes for the same bug.
<frankban> hey all, is anyone available for reviewing https://github.com/juju/juju/pull/8735 ?
<manadart> frankban: Approved.
<frankban> manadart: ty!
<manadart> frankban: NP.
<bdx> rick_h_: ping
<rick_h_> bdx: pong
#juju 2018-05-22
<wallyworld> babbageclunk: can haz small review at some stage when you are free? no rush https://github.com/juju/juju/pull/8739
<babbageclunk> wallyworld: yup, looking now
<babbageclunk> wallyworld: approved
<wallyworld> yyay ty
<_thumper_> proxies suck
<_thumper_> just sayin
<manadart> jam externalreality: In the HO.
<manadart> Anyone able to take a look at https://github.com/juju/juju/pull/8736 ? It is small.
<stickupkid> manadart: looks good to me.
<manadart> stickupkid: Ta.
<srihas> hi guys, the space "aci" which is passed as 'juju deploy neutron-openvswitch --bind "data=aci" ' should have DHCP enabled?
<srihas> how can I remove a subordinate service that has "hook failed: install" status in juju 2.3
<srihas> ??
<manadart> Another PR for review: https://github.com/juju/juju/pull/8740/files - mostly mechanical.
<stickupkid> manadart: i added a quick comment to the PR...
<srihas> thank god, --no-retry saved me
<manadart> stickupkid: Ta.
<thumper> bugger...
<thumper> simon's gometalinter patch causes all the CI unit tests to fail
<veebers> thumper: ah, because gometalinter is not installed :-\
<thumper> veebers: yeah, I emailed the crew asking if it should be added to the makefile or just the jenkins jobs
<veebers> thumper: we can either update the jobs to install it as part of the process, or add it to the make file
<veebers> hah
<rick_h_> No, the linter was put in make check with what's already being run
<thumper> rick_h_: where?
<rick_h_> I know simon checked on that and already updated the makefile for make check to add the linting
 * rick_h_ is looking
<thumper> rick_h_: clearly it isn't working
<veebers> rick_h_: installing the linter, not running it
<rick_h_> ah, installing it...good question.
<veebers> running the linter is there (that's what's causing the error). It needs to be installed
<rick_h_> I guess I assumed it would be part of updating deps but ... guess not
<thumper> how are users expected to install gometalinter?
<thumper> it would have been good to get an email to the crew about the addition if it was going to cause problems
<thumper> or even the juju email list
<veebers> hindsight is 20/20 :-)
#juju 2018-05-23
<kelvinliu_> morning, veebers, would u mind taking a look at this tiny PR plz? thx -> https://github.com/juju/juju/pull/8743/files
<veebers> kelvinliu_: hey o/ What's the issue, is it not passing under CI but passing locally?
<kelvinliu_> veebers, yeah, it's passing locally but failed on jenkins on different errors.
<kelvinliu_> veebers, got error like kubeconfig file was empty when it's built from jenkins gui,
<kelvinliu_> veebers, when i ran it manually from the jenkins host, the error was the cluster (api server) had not stabilized enough for kubectl to be available
<kelvinliu_> so i have this pr to wait for workloads (this is actually required, i think) after wait_for_started.
<veebers> kelvinliu_: so looking at the latest jenkins job run, the kube config file it copies seems not to have been populated with the required data? And thus the wait for workloads should sort that?
<kelvinliu_> veebers, the kubeconfig file is generated at some point during the cluster bootstrapping process.
<veebers> kelvinliu_: how did the scp command work if the file wasn't generated at that time? Or are we talking about different things now?
<veebers> in the jenkins run I see "ERROR No k8s cluster definitions found in config", that's because the kubeconfig file is empty?
<kelvinliu_> veebers, scp seems never failed
<veebers> kelvinliu_: this is the fun part, figuring out why a test that passes on your machine doesn't pass in CI :-|
<kelvinliu_> well, another weird thing is the k8s 1.10 bundle never passes on my local when i manually deploy the bundle, so i always have to use 1.9. but it passes in citest on my local.
<veebers> wallyworld: you have a moment re: tomb v1 -> v2 I know why the tests are failing, change in tomb v1 -> expectations, not sure how to proceed to fix
<wallyworld> veebers: give me 5
<veebers> I can give you 4.9, that's my best offer
<veebers> heh, sure thing
<wallyworld> veebers: did you want a HO?
<veebers> wallyworld: sure, probably best. 1:1?
<wallyworld> veebers: ok
<wallyworld> babbageclunk: i pushed commit 2 to https://github.com/juju/juju/pull/8739
<babbageclunk> wallyworld: ok looking
<anastasiamac> a review of a very simple status filtering fix PTAL - https://github.com/juju/juju/pull/8744
<thumper> wallyworld: ping
 * thumper looks for next victim
<thumper> babbageclunk: what 'cha up to?
<thumper> anastasiamac: lgtm
<anastasiamac> \o/
<wallyworld> thumper: hey
<thumper> wallyworld: how are your reviewing muscles feeling?
<wallyworld> hmmm, a loaded question there
<wallyworld> sure, what is the pr
<thumper> just proposing now
<thumper> it is the proxy stuff
<thumper> ended up being significantly bigger than I was initially expecting
<wallyworld> always is
<thumper> heh
<thumper> +889 −253 and 27 files changed
<thumper> https://github.com/juju/juju/pull/8745
<thumper> wallyworld: we should chat about it
<wallyworld> ok
<wallyworld> now or after i have a look?
<thumper> 1:1?
<wallyworld> ok
<thumper> now
<cliu> Hi... I have deployed Openstack with juju with Telemetry #49 bundle. after deployment, how do I bring up the ceilo? I did not see that option in the horizon dashboard.
<thumper> wallyworld: the proxy PR https://github.com/juju/proxy/pull/3
<wallyworld> ok
<thumper> cliu: sorry, but I have no idea. Many of the openstack folk are europe based and are likely sleeping
<cliu> thumper: thanks. when would be a good time for me to join in the ask again/
 * thumper wonders if there is a freenode channel for canonical openstack questions...
<thumper> cliu: european morning
<cliu> thumper: thanks.
<rick_h_> thumper: cliu  #openstack-charms
<thumper> rick_h_: thanks
<rick_h_> is the freenode channel for them per https://docs.openstack.org/charm-guide/latest/find-us.html
<cliu> rick_h_: thanks
<wallyworld> thumper: that one lgtm
<thumper> wallyworld: ta
<rick_h_> cliu: most of them are out at the openstack summit this week but I'd hit up their irc channel or mailing list
<cliu> rick_h_: thanks. what is their mailing list?
<anastasiamac> cliu: according to the find-us link rick_h_ provided, the mailing list can be found on https://docs.openstack.org/charm-guide/latest/mailing-list.html
<cliu> anastasiamac: thanks
<thumper> rick_h_: where would I find the GUI code for exporting a bundle?
<thumper> rick_h_: initial juju support will just mirror the GUI
<wallyworld> thumper: doneski
<wallyworld> babbageclunk: you happy with the PR?
<babbageclunk> wallyworld: oh, sorry - got distracted by other work.
<wallyworld> no wuckers
<babbageclunk> wallyworld: approved
<babbageclunk> thumper: sorry, missed your message
<babbageclunk> thumper: I'm struggling to write a benchmarking thing for the leadership API
<thumper> wallyworld: awesome, will look
<anastasiamac> and another status fix review PTAL https://github.com/juju/juju/pull/8747 <- this time fixes inconsistent results when filtering by app name \o/
<thumper> anastasiamac: minor tweak but otherwise good
<kelvinliu_> veebers, updated the PR and the citest passed on goodra, would u mind to take a look tmr? thx
<kelvinliu_> veebers, thx for the lxd upgrade
<srihas> hi, how can we change the network configuration on machines using juju?
<srihas> the current configuration came from the interface settings in MAAS
<srihas> is there a way to do it without deleting machines?
<manadart> externalreality: Logged a bug for the peergrouper warning messages: https://bugs.launchpad.net/juju/+bug/1772895
<mup> Bug #1772895: Peergrouper should not log multiple address warnings when not in HA <juju:Triaged by manadart> <https://launchpad.net/bugs/1772895>
<externalreality> manadart, ack
<externalreality> manadart, should the message also be directing users to configure using `juju controller-config` rather than `juju config`?
<manadart> externalreality: Yes it should. I'll add that info.
<jasongarber> Hi! 👋🏻 I'm looking for some help using constraints on Rackspace, where instance types contain spaces.
<jasongarber> Creating Juju controller "atat-dev" on rackspace/iad ERROR invalid constraint value: instance-type=4GB%20Standard%20Instance valid values are: [512MB Standard Instance 1GB Standard Instance 2GB Standard Instance 4GB Standard Instance 8GB Standard Instance 15GB Standard Instance 30GB Standard Instance 15 GB Compute v1 30 GB Compute v1 3.75 GB Compute v1 60 GB Compute v1 7.5 GB Compute v1 1 GB General Purpose v1 2 GB General Purpose 
<jasongarber> (I added the %20, otherwise it says ERROR malformed constraint "Standard")
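The failure mode jasongarber hits can be illustrated with a rough sketch. This mimics whitespace-splitting of the constraint string to show why "Standard" surfaces as a malformed constraint; it is not juju's actual parser:

```python
# Illustration only, not juju's real parser: constraints are
# whitespace-separated key=value pairs, so a value containing spaces
# leaves bare tokens behind, matching the "malformed constraint" error.
constraint = 'instance-type=4GB Standard Instance'
tokens = constraint.split()
# Each token must look like key=value; bare words are "malformed".
malformed = [t for t in tokens if '=' not in t]
assert malformed == ['Standard', 'Instance']
```

This also shows why substituting %20 merely trades one error for another: the value no longer splits, but it no longer matches any listed instance type either.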
<manadart> stickupkid: Mind taking another look at https://github.com/juju/juju/pull/8740 ? Just addressed your comment.
<stickupkid> manadart: of course
<stickupkid> manadart: done, LGTM :D
<manadart> stickupkid: Ta.
<manadart> stickupkid: Do I need to rebase/merge develop into my PR to resolve: "fatal: unable to access 'https://github.com/alecthomas/gometalinter/': Could not resolve host: github.com" ?
<rick_h_> Howdy jasongarber, I'm looking to see how the instance stuff works in rackspace. Can you use the constraint of 4gb of memory?
<rick_h_> oh bah, he's gone
<stickupkid> manadart: so you have two options - 1) we wait for https://github.com/juju/juju/pull/8749 to land 2) comment out gometalinter checks in `./scripts/verify.bash`
<stickupkid> manadart: considering it's a pain, i'd just do 2 for now
<stickupkid> https://github.com/juju/juju/pull/8748 is the same tbh
<stickupkid> so we could just land it right?
<stickupkid> now *
<manadart> stickupkid: OK. Going (2) for now.
<stickupkid> manadart: ok by me
<TheAbsentOne> nice try rick_h_ x) I'll ping you if I notice it if he's back
<rick_h_> TheAbsentOne: :)
<TheAbsentOne> Besides you guys are referenced in my thesis as "everyone from irc" hope that is alright x)
<rick_h_> wooo anonymous!
<manadart> stickupkid: I don't actually have the metalinter stuff on my PR branch. I notice #8749 is merging, so I'll wait for it.
<jam> guild: anyone interested in https://github.com/juju/juju/pull/8751 ? it should handle overlapping subnets in Openstack and better support for IPv6 subnets (not trying to create FAN overlays for them)
<hml> jam: looking
<jam> hml: I was hoping you might be able to test it against a real openstack having created multiple networks there.
<hml> jam: i may be able to - have to check my permissions on the openstack
<jam> hml: even if you just test that I haven't broken things terribly without changing networks would still  be useful
<hml> jam: ack - there is a case i'm concerned about, but i need to look at the pr in more depth first
<hml> jam: verified pr on openstack where my project has two internal networks with subnets and one external network…
<jam> hml: any chance that you can give it overlapping subnets?
<hml> jam: i'll try
<jam> hml: rick_h_ asked us not to land anything on 2.3 until 2.3.8 goes out the door, but I definitely appreciate your testing.
<hml> jam: works with duplicate subnets - ran both with the change and with 2.3.7 to ensure the openstack subnets were configured to reproduce
<jacekn> is there anything I need to do in charms.reactive for it to create hooks in the hooks directory other than adding bits to metadata.yaml? I added cluster relation support but "charm build" did not realize that and created no hooks
<rick_h_> jacekn: hmm, do you have the layers?
<jacekn> rick_h_: yeah includes: ['layer:basic', 'interface:http', 'layer:snap']
<jacekn> rick_h_: but I can't see how those layers would know which hooks to create?
<rick_h_> jacekn: did you add the reactive.py to handle the events?
<rick_h_> jacekn: e.g. https://github.com/mitechie/jujushell-charm/blob/master/reactive/jujushell.py
<jacekn> rick_h_: yes and they are handled fine under update-status hook. But I want xxx-cluster-relation-{changed|joined|departed} hooks too
<rick_h_> jacekn: and metadata.yaml has the provides/requires bits added? Other than that not sure what might be missing
<jacekn> rick_h_: metadata.yaml only has "peers" section for the missing hooks
<rick_h_> jacekn: https://jujucharms.com/u/juju-gui/jujushell/ is the code that generates https://jujucharms.com/u/juju-gui/jujushell/ which does that as far as comparing
<rick_h_> jacekn: ah no, you need https://api.jujucharms.com/charmstore/v5/~juju-gui/jujushell/archive/metadata.yaml provides/requires in there like so
<jacekn> ok let me try adding it
<jacekn> I followed https://docs.jujucharms.com/2.3/en/authors-relations BTW, there is nothing in the "Peers" section about needing to add them to provides/requires
<manadart> Small one for review. Nice in that it is almost all deletion: https://github.com/juju/juju/pull/8752
<jacekn> rick_h_: no that made no difference at all. I added the relation to all 3 parts in metadata.yaml (peers, requires, provides) and charm build even knows there are no hooks: https://pastebin.ubuntu.com/p/wCwYw2TBrw/
<rick_h_> jacekn: what's alertmanager-cluster vs prometheus-alertmanager ?
<jacekn> rick_h_: prometheus-alertmanager is the relation between prometheus and prometheus-alertmanager. alertmanager-cluster is internal alertmanager clustering
<hml> jam: I hit approve and found a snag in the testing
<hml> :-)
<hml> jam: confirming now…
<kwmonroe> jacekn: sounds like alertmanager-cluster is a new interface that you're adding to prom-alertmanager for peer relations.  is that right?
<jacekn> kwmonroe: not really an interface, it's 2 lines so I added support to my alertmanager.py
<kwmonroe> jacekn: charm build creates hooks for interfaces.  so for example, prom-am provides an alertmanager-service over the http interface (https://api.jujucharms.com/charmstore/v5/prometheus-alertmanager-2/archive/metadata.yaml), so charm build knows to go create alertmanager-service hooks using the http interface that it pulls from  https://github.com/juju/layer-index
<kwmonroe> jacekn: if you are providing a peers section in the prom-am charm, you'll need to define what interface that uses, and have something in the layer-index registry (or local in your $INTERFACE_PATH) for charm build to know what to do.
<kwmonroe> jacekn: afaik, you can't just use "@when <peer>.joined" in your reactive.py without having <peer> defined in your metadata, which needs an interface.
<kwmonroe> jacekn: also, a point of semantics, i said "charm build creates hooks for interfaces", when i should have said "charm build creates hooks for relations and pulls in the provides/requires.py from the interface associated with that relation from the layer registry (or locally)".
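A minimal sketch of what kwmonroe describes: the peers section in metadata.yaml must name an interface so `charm build` can resolve it from the layer index (or a local $INTERFACE_PATH). The relation and interface names below follow jacekn's alertmanager-cluster example and are illustrative:

```yaml
# Sketch of a metadata.yaml peers section; the interface name must
# resolve to an interface layer in the registry or $INTERFACE_PATH.
peers:
  alertmanager-cluster:
    interface: alertmanager-cluster
```

Without the interface key, charm build has nothing to pull provides/requires handlers from, and no peer hooks are generated.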
<jacekn> kwmonroe: so I have entry in metadata.yaml: https://pastebin.ubuntu.com/p/f9x933whJ3/
<kwmonroe> jacekn: cool, now you need to implement the provides/requires side of the alertmanager-cluster interface
<jacekn> kwmonroe: which I did but I wanted to avoid overhead of maintaining multiple files for 3 lines function...
<kwmonroe> rick_h_: my kingdom for friggin 301 redirects in the docs!!!  or make google reindex faster :)
<jacekn> kwmonroe: so I'm just reading data from the relation in my alertmanger.py (I don't actually care about hooks any more in charms.reactive, I just want it to run my code on any hook since hooks are idempotent and properly gated anyway)
<jacekn> kwmonroe: if that's the way I think I prefer to just create hooks/* files manually in my layer
<kwmonroe> jacekn: if you need data from your peers, i don't know how to do it other than over an interface.  that doesn't have to be complicated, btw.. the spark-quorum interface, for example, simply lets each peer get a list of all the other peer addresses (https://github.com/juju-solutions/interface-spark-quorum).
<jacekn> kwmonroe: I just call context.Relations().peer.items()
<jacekn> kwmonroe: dedicated interface really looks like boilerplate for the sake of it
<jacekn> kwmonroe: but it looks like lack of hooks is expected in my case, I'll just ship them with my layer for simplicity
<kwmonroe> jacekn: i get the hesitation, but i don't know what unholy mess you're gonna have later.  i think a proper interface would be useful if you ever want to expand what your peers are doing, and will help you by having a proper endpoint to use in reactive (considering you can't know when your manual hook will fire).
<kwmonroe> you already know the idempotency and gate pitfalls, so i'm sure you'll make your hooks work -- i'm just throwing out advice for how <ahem> LITERALLY EVERYONE ELSE is doing it :)
<jacekn> kwmonroe: so if I ever have to expand beyond a few lines I'll consider splitting code to a proper interface. And I do know when my hook will fire - I'll have alertmanager-cluster-{joined,changed,broken,departed} hooks and they'll run when anything in my relation changes
<jacekn> kwmonroe: I wrote an interface myself BTW, every real relation in the prometheus charms is an interface. But in this case it's significant overhead, increased troubleshooting cost and almost zero value
<kwmonroe> jacekn: on the hook firing, i was saying you *won't* know if alertmanager-cluster-foo fires before install, or after config-change, etc.  just cautioning you that you may not know what else has happened to the charm when that peer relation fires.
<kwmonroe> jacekn: just be assimilated already.  it's nice in here.
<jacekn> kwmonroe: yes I'm aware of that. I thought reactive charms were not supposed to use @hook unnecessarily?
<kwmonroe> right jacekn - they're not.. ideally charms would react to flags.  so let's say your manual peer relation fires before anything else.  prom-am will not be installed yet because whatever does the installation hasn't happened yet.
<kwmonroe> jacekn: i think you'll make it work because you know what hooks need to do -- we don't have to keep going back-and-forth.
<jacekn> kwmonroe: that's no different from any other $random hook firing at any time. I don't use @hook so I don't rely on hook names for anything
<jacekn> kwmonroe: all I want is for the hook to call alertmanager.py, like all other hooks do
<kwmonroe> jacekn: aaah, so you *are* going to invoke the reactive loop in the manual hook.  i thought you were going to write some 3 line thing in your manual hooks that said "do_peer_things".
<jacekn> kwmonroe: ah no no. All I care about is my reactive code being called automatically
<jacekn> kwmonroe: sorry if I did not explain that clearly
<kwmonroe> no no, it's fine -- i assumed the worst.
<jacekn> kwmonroe: anyway I think I know sensible way to solve this. Thanks for your help!
<jacekn> rick_h_: and thanks to you as well
<kwmonroe> np jacekn!
<acwork> Hello can someone please tell me where I make the configuration changes to preserve network settings on reboot after a box has been provisioned with maas and juju.
<pmatulis> acwork, what kind of settings are you referring to?
<acwork> mtu specifically with bridge interfaces
<acwork> I can get them where I need them to be, but on reboot the configuration is overwritten. I am trying to understand where provisioning is occurring.
<rick_h_> kwmonroe: :( on the docs stuff. I'm curious how the new docs.jujucharms.com does.
<rick_h_> jacekn: ah glad you got it worked out
<acwork> am I asking a stupid question considering I am new to juju and maas
<pmatulis> acwork, you are looking directly on the MAAS node, or indirectly by some other means?
<pmatulis> i know you can set an MTU per VLAN for instance
<rick_h_> acwork: no, not stupid. I think that juju uses the machine as it's delivered from MAAS. In looking for setting up mtu and maas there's some bugs/etc I see on it.
<rick_h_> acwork: the other thing you can look into is customizing the setup https://docs.maas.io/2.3/en/nodes-custom
<acwork> Thank you for the response, I will try the customization.  I have built an openstack cluster amongst 11 machines and I wanted to make sure I had the proper mtu as well.  Part of this build is ceph as well.
<veebers> Morning o/
<redir> \o
<babbageclunk> hey redir! how's it going?
<KingJ> What's the best way to deploy an application to all nodes, present and future? I want to ensure that the snmpd charm (https://jujucharms.com/u/bertjwregeer/snmpd/) runs on every host.
<KingJ> At a guess, as it's a subordinate application i'd just need to link it to an application that's deployed on all hosts?
<thumper> KingJ: yeah...
<thumper> we don't have the concept of machine subordinates...
<thumper> although it was raised at some stage...
<babbageclunk> I can't spin up lxd containers in an aws model - I keep getting this error: machine-0: 10:31:30 WARNING juju.provisioner failed to start machine 0/lxd/4 (Unpack failed, Failed to run: unsquashfs -f -d /var/lib/lxd/containers/juju-dbe87a-0-lxd-4/rootfs -n /var/lib/lxd/images/08bbf441bb737097586e9f313b239cecbba9622
<babbageclunk> 2e58457881b3718c45c17e074.rootfs: .  ), retrying in 10s (10 more attempts)
<babbageclunk> I thought it was disk space at first (I'm making about 10 lxds at once), but the host has plenty
<babbageclunk> Oh, hang on - they're coming up now. Took ages though.
<thumper> babbageclunk: thoughts on http://10.125.0.203:8080/view/Unit%20Tests/job/RunUnittests-race-amd64/369/testReport/github/com_juju_juju_worker_peergrouper/TestPackage/
<thumper> rick_h_: if you come back, do you know where the bundle export code lives in the gui?
<babbageclunk> thumper: looking
<veebers> kelvinliu_: actually I just asked the question on the PR ^_^
<kelvinliu_> ah, veebers I had a new push before I had the chance to read ur comment. sorry, looking now
<veebers> kelvinliu_: hah sorry should have made sure I had the latest
<veebers> kelvinliu_: much nicer :-) question still stands, but still LGTM (actuall, LBTMNYAT, Looks Better To Me Now You Added That)
<kelvinliu_> veebers, good finding, now it always waits for echo to be terminated
<blahdeblah> Your acronym-crafting powers are impressive, young veebers.
<veebers> ^_^
#juju 2018-05-24
<thumper> anyone want a quick easy review?
<thumper> https://github.com/juju/juju/pull/8754
<acwork> Hello, can anybody explain to me what files control the system setting on a LXC container provisioned with JUJU?
<thumper> acwork: what problems are you having?
<thumper> acwork: also, which version of juju
<acwork> Version of juju 2.3, I have installed an openstack cluster and I am trying to set the bridge interface on the LXC containers to a higher MTU.
<acwork> I was hoping to be able to persistently set the mtu for the LXC containers.  Not sure how to do that from JUJU.
<thumper> I think that juju sets the MTU on the containers based on the host device MTU
<thumper> the settings for each particular container aren't touched after creation
<thumper> so the standard lxc commands on the host could be used to tweak the containers
<acwork> So if I set the mtu via dhcp then the container should inherit that on creation?
<kelvinliu_> would anyone have a quick look this tiny fix ?  https://github.com/juju/juju/pull/8755  thanks.
<acwork> not having luck with the lxdbr0 interface mtu
<thumper> acwork: I'm sorry, you are outside my networking knowledge
<thumper> I just know that juju doesn't do anything special with the MTU of the containers
<acwork> okay thanks for the response
<thumper> kelvinliu_: lgtm
<kelvinliu_> thumper, thx,
<jam> acwork: bionic or xenial? and what version of Juju? Generally when we create the bridges for containers we should set the MTU on the bridge to the same MTU as the original device, and when we create the MTU for containers we should set it to the MTU of the bridge they are being added to. But as Tim said, I don't think we update that post-create
<acwork> Xenial , juju 2.3
<anastasiamac> can i have a really tiny review https://github.com/juju/juju/pull/8756 (without this change, tabular status with only machines is not separated from controller info)...
<anastasiamac> thumper: this is the bit that i have removed from my last pr but really needs to be in :) m checking develop now...
 * thumper looks
<thumper> anastasiamac: hmm...
<thumper> how will this be in 2.3?
<thumper> the one we are releasing now?
<anastasiamac> thumper: yuck :( but so far i have only seen the machine section... i doubt many systems will have just one section in display....
 * thumper nods
<thumper> fair enough
<thumper> anastasiamac: let's get it merged
<anastasiamac> thumper: the scary bit is the one where there could be 2 lines...
<thumper> anastasiamac: I have one for you
 * anastasiamac checking what sections displayed may end up with 2-line separation
<thumper> why two lines?
<thumper> it seems like there needs to be a bit of higher level thought about who is putting in newlines and why
<thumper> https://github.com/juju/juju/pull/8754
<anastasiamac> because 'applications' ended up with a new line and say 'offers'... so if u have an output with application and offers (no machines) u'll have 2 lines... so rare but possible...
<anastasiamac> thumper: yes, i could re-work it to ensure that some central place is responsible for newlines...
<anastasiamac> thumper: files changed 156!?
<thumper> I'm ok with rare
<thumper> yeah, but look at the changes
<thumper> there are two global replaces :)
<anastasiamac> thumper: i thought u'd b... but in the long term, i'd like newlines mngmt :)
<thumper> anastasiamac: I agree, we should rework with better logic
<thumper> but it isn't urgent
<anastasiamac> thumper: k. then don't worry about my pr. I'll ping when m ready again
<anastasiamac> but i'll look at urs :)
<thumper> anastasiamac: I approved your PR
<anastasiamac> \o/
<thumper> anastasiamac: I'm saying we should add better logic later
<anastasiamac> well, 'later' will b in the next hour at most ;)
<anastasiamac> thumper: lgtm
<babbageclunk> does anyone have a nice trick for a scrollable but still refreshing `juju status` command for when you can't just watch status because there are too many applications and units in your model to fit on one screen?
<anastasiamac> babbageclunk: use smaller font on ur console to fit everything? :)
<anastasiamac> babbageclunk: but srsly, i do not :(
<babbageclunk> anastasiamac: done that already but there are limits! I'm getting old. :(
<anastasiamac> babbageclunk: yes :( the only thing that i could do was to not display empty sections... is there something in particular that u r watching? maybe u can filter status to only show what u want?
<babbageclunk> yeah, that's probably the solution, but it's a hassle in this case since I have lots of different applications
<babbageclunk> ooh, I didn't realise the filtering took wildcards too!
<babbageclunk> yay thanks anastasiamac
<anastasiamac> babbageclunk: \o/
<anastasiamac> babbageclunk: yes, we'd need to be very explicit in new docs about how powerful status actually is :D
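The wildcard filtering babbageclunk discovered behaves like shell-style glob matching. A hedged illustration using Python's fnmatch (this shows the matching semantics only, not juju's implementation; the application names are made up):

```python
from fnmatch import fnmatch

# juju status accepts glob-like filter patterns; fnmatch demonstrates
# the shell-style matching semantics (illustration, not juju's code).
apps = ['mysql', 'mysql-slave', 'wordpress', 'nagios']
matches = [a for a in apps if fnmatch(a, 'mysql*')]
assert matches == ['mysql', 'mysql-slave']
```

So a pattern like `juju status 'mysql*'` narrows a large model to one family of applications without listing each one.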
<anastasiamac> thumper: updated 8756 to have explicit start/end section methods... hopefully it's clearer as to what needs to happen...
<thumper> anastasiamac: here's one for you https://github.com/juju/juju/pull/8757
 * thumper takes a deep breath before addressing merge conflicts
<thumper> babbageclunk: you around?
<babbageclunk> thumper: yup
 * thumper waves at alexisb's autojoiner
<thumper> babbageclunk: https://github.com/juju/juju/pull/8757
<thumper> babbageclunk: very simple branch
<babbageclunk> curses!
<babbageclunk> ha, looking
<babbageclunk> ooh, that is suimple
<babbageclunk> simple
<thumper> yeah
<thumper> very
<babbageclunk> thumper: approved
<thumper> thanks
 * thumper EODs
<manadart> Anyone able to take a look at https://github.com/juju/juju/pull/8752 ?
<wallyworld> manadart: lgtm modulo a lack of expert LXD knowledge :-)
<manadart> wallyworld: Thanks.
<manadart> For review; softens the default networking verification/creation when local LXD server is clustered: https://github.com/juju/juju/pull/8760
<stickupkid> manadart: i'm looking now...
<manadart> stickupkid: OK, thanks.
<stickupkid> manadart: lgtm
<manadart> stickupkid: Ta.
<rick_h_> stickupkid: watch out, saw some other status stuff landing from anastasiamac, just a heads up.
<rick_h_> and morning party people
<stickupkid> rick_h_: thanks, will do
<stub> cory_fu: Do you know if there is a way to get a wheelhouse.txt line such as ' cassandra-driver --global-option="--no-cython"
<stub>  ' to do what I mean? I think I need to add options for when charms.reactive bootstrap is installing the contents of the wheelhouse into the venv
<stub> hmm, I look to be out of luck. The base layer is just doing pip install wheelhouse/*, rather than something like pip install -r wheelhouse.txt
<cory_fu> stub: Hrm.  I don't think so.
 * stub wonders if there is some sort of setup.cfg to override pip
<cory_fu> stub: We should probably add something like that.  It does seem like it would be needed for several packages.  I'm actually surprised it hasn't come up yet
<cory_fu> stub: A non-ideal solution would be to create your own WheelhouseTactic in your layer
<stub> I've tripped over it before, but just used deb dependencies to avoid chasing it at the time
<cory_fu> There are some bugs around custom tactics that are only fixed in edge, though
<stub> I could stick a file in lib/charms/layer/__init__.py or similar that monkey patches the base layer ;)
<need-help> I'll repost this from #conjure-up
<need-help> Hi, I'm trying to use conjure-up to deploy CDK, but I'm getting failures both on my Mac as well as using an Ubuntu 16.04 jump box. The Mac attempt fails on the kubectl step, while the Ubuntu jump box fails on the cni integrations
<need-help> The CNI integration fails due to not being able to associate an instance profile with a running ec2 instance, but it looks like this is because it takes a few seconds for instance profiles in IAM to finish provisioning
<need-help> Can't add a sleep 60s to the bash script because it's an uneditable squashfs read-only mount for snap
<stub> cory_fu: It also broke with the 'natsort' package, which declares a dependency in its setup.py of 'argparse ; python_version < 2.7'
<cory_fu> I answered need-help in #conjure-up but for posterity, the cloud integration stuff is being overhauled in https://github.com/conjure-up/spells/pull/191 which includes a fix for the profile delay
<spiffytech> cory_fu: I'm trying the updated AWS/k8s steps you suggested to needs-help earlier, and conjure-up is failing, with the enable-cni step complaining that 'juju trust' isn't a command. Do I need a newer version of juju to use conjure edge, or that spells branch?
<cory_fu> Hrm, it should fall back to setting the config
<spiffytech> I cloned the spells branch, and used the Snap conjure-up/juju to do `conjure-up --spells-dir /path/to/spells/repo --channel=edge`. That's all that's necessary, right?
<spiffytech> I didn't see any new configuration options for AWS integration either, like I see for e.g., the Helm or Prometheus spells.
<kwmonroe> spiffytech: cory_fu: i wonder if it's actually failing, or if the stderr from this is just being displayed: https://github.com/conjure-up/spells/blob/k8s-integration/canonical-kubernetes/steps/04_enable-cni/ec2/enable-cni#L10
<kwmonroe> iow, may need a 2>/dev/null if it's just leaky output
<spiffytech> Possible
<spiffytech> I do get a big red message telling me conjure failed, though.
<spiffytech> It tells me to check the generic conjure log, which has this. I can't understand the problem from this alone. http://haste.spiffy.tech/hizexudiru.coffee
<kwmonroe> spiffytech: if your deployment is still around, does "juju config aws credentials" say anything?  (don't tell us what it says, btw)
<spiffytech> Sorry, I tore it down already.
<spiffytech> I can run it again, if there are no other suggested changes to make first.
<kwmonroe> spiffytech: you have a local spells directory cloned, right?
<spiffytech> Yep, `git clone -b k8s-integration https://github.com/conjure-up/spells.git juju-spells-`
<kwmonroe> spiffytech: may as well try to gobble up the stderr if you're gonna run it again.. adjust your spells directory ./canonical-kubernetes/steps/04_enable-cni/ec2/enable-cni to add a 2>/dev/null on line 10:  if ! juju trust -m "$JUJU_CONTROLLER:$JUJU_MODEL" aws 2>/dev/null; then
<spiffytech> Okay
<kwmonroe> spiffytech: it's just a guess on my part that the stderr is causing conjure-up to bark unnecessarily
<spiffytech> Change made, running conjure-up now.
<cory_fu> Just having output on stderr wouldn't cause it to explode, but it's possible the check is not falling through as intended or that something else is blowing up and the stderr message is masking or confusing it
<cory_fu> It definitely could use the 2>/dev/null bit, though
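The guarded-fallback pattern being discussed can be sketched as below; `probe` is a stand-in for the `juju trust` capability check, and the `2>/dev/null` keeps the probe's "unrecognized command" noise off the caller's stderr while the fallback branch still runs:

```shell
#!/bin/sh
# Stand-in for `juju trust`: fails (command unknown) and complains on stderr.
probe() {
    echo "ERROR: unrecognized command: trust" >&2
    return 1
}

# Silence only the probe; the real outcome (the fallback) stays visible.
if ! probe 2>/dev/null; then
    echo "falling back to setting the config directly"
fi
```

Note the redirection applies only to the probe command, so any stderr from the fallback path itself is still surfaced.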
<cory_fu> spiffytech: What channel of the conjure-up snap are you using?
<spiffytech> GA
<spiffytech> GA download, plus the --channel=edge flag when running it on the command line
<cory_fu> Ok, mine failed on the AWS charm, which died with "AWS was not able to validate the provided access credentials"
<cory_fu> I've not seen that before
<cory_fu> Wait, that's odd.  I have another model from another run, and that's the one that failed
<cory_fu> Oh, ha.  Now they've both failed.  Lovely
<cory_fu> It looks like it's not setting the credentials correctly via the config options
<spiffytech> kwmonroe: I reran conjure-up and got the same failure. `juju config aws credentials` prints out a single line, looks like a token.
<cory_fu> spiffytech: Yes, that's expected.  It's actually base64 encoded.  If you run it through base64 -d, it will have "null" as the key and secret
<cory_fu> Which is not expected
<kwmonroe> yup spiffytech - me too.  cory_fu, from ~/.cache/conjure-up/canonical-kubernetes/deploy-wait.err:
<kwmonroe> DEBUG:root:aws/0 workload status is error since 2018-05-24 18:13:38Z
<kwmonroe> ERROR:root:aws/0 failed: workload status is error
<kwmonroe> cory_fu: and my aws status log: https://paste.ubuntu.com/p/wpHGm5nJqv/
<kwmonroe> and the hook error: https://paste.ubuntu.com/p/BVvSTw5JV2/
<cory_fu> spiffytech, kwmonroe: Damn.  It looks like the format of `juju credentials --format=json` changed between stable and beta
<cory_fu> Anyone know how to tell jq to pick one key or another, whichever has a value?
<kwmonroe> sorry, i always have to search stackoverflow anytime i get near jq
<jhobbs> two jq's and a grep :)
<kwmonroe> yas!
<cory_fu> kwmonroe, spiffytech: PR updated, I'm testing the fix now
<spiffytech> Great
<cory_fu> Seems to be working.  Feel free to pull the branch and test yourself as well
<kwmonroe> jq if then else?!?!  i've seen it all.
<cory_fu> kwmonroe: :)
<cory_fu> I'm sure there's a nicer way to do that, but I like that it's all in one command
<kwmonroe> cory_fu: updated branch and juju config aws credentials is 2legit2quit
<cory_fu> kwmonroe: I'm glad that spiffytech helped catch this before stokachu's merge finger got too twitchy.  ;)
<kwmonroe> lool, true dat
<kwmonroe> cory_fu: remind me the order of step scripts.. it's before-config, before-wait, and after-deploy, right?  and the before-wait happens when you click the final Deploy button?
<cory_fu> kwmonroe: Yep.  It's a little confusing, because before-wait is after the deploy is started but before juju-wait is called.  after-deploy is after juju-wait finishes, and is a legacy name
<cory_fu> kwmonroe: This list is in order of the phases, though it's not explicit about exactly when each is run: https://github.com/conjure-up/conjure-up/blob/master/conjureup/consts.py#L41-L46
<cory_fu> I should at least add comments there
<cory_fu> Have to run, changing locations again
<cory_fu> bbiab
<kwmonroe> cool cory_fu! TIL more phases
<kwmonroe> cory_fu: my deployment completed with the updated spells branch. nice job!
<kwmonroe> spiffytech: ^^ fyi
<spiffytech> Excellent! Thanks a bunch!
<bdx> kwmonroe: supsup
<bdx> kwmonroe: cs:~omnivector/slurm-node and cs:~omnivector/slurm-controller
<kwmonroe> roger that bdx
<bdx> kwmonroe: thanks
<bdx> Oooo a shiny https://paste.ubuntu.com/p/nCcvwKjfMZ/
<kwmonroe> bdx: took 'em both to the prom
<bdx> kwmonroe: I see that, thank you!
<zeestrat> Yay
<cory_fu> kwmonroe: I made a small update to the logging in the spell PR; mind taking a look?
<thumper> rick_h_: not sure if you are around, but know status of https://bugs.launchpad.net/juju/+bug/1770051 ?
<mup> Bug #1770051: ERROR detecting credentials for "localhost" cloud provider: adding certificate "juju": Unknown request type  <lxd> <juju:Fix Committed by manadart> <https://launchpad.net/bugs/1770051>
<rick_h_> thumper: per the bug that's fixed last week?
<thumper> rick_h_: yeah... got an email from the bug from ryan saying he is still seeing it
<thumper> just looking up the hash
<thumper> rick_h_: hmm... the comment mentions c173 which is tip of develop
<thumper> bollocks
<veebers> kelvinliu: ah, that seems like the issue you where hitting the other day ^^
<rick_h_> thumper: where's the email?
<rick_h_> thumper: I don't see anything in the bug that ryan commented
<rick_h_> thumper: do you mean this one? https://bugs.launchpad.net/juju/+bug/1771885
<mup> Bug #1771885: bionic: lxd containers missing search domain in systemd-resolve configuration <bionic> <network> <juju:Fix Committed by ecjones> <juju 2.3:Fix Released by ecjones> <https://launchpad.net/bugs/1771885>
<rick_h_> that one has some back/forth right now as we collect more details on the failure from the OS folks
 * rick_h_ steps back away and will keep an eye out for thumper replies
<thumper> rick_h_: no, I was getting emails on that bug
<thumper> rick_h_: although ryan is now saying it is resolved...
<thumper> I'm confused
<thumper> but looks now like we are good
<kelvinliu> veebers, yes, i got this error before on lxc 3.
<wallyworld> vino: just a few small tweaks. the main one is to re-use the existing addService struct in the status test, instead of making a new one. plus the handing of endpoint bindings in the status formatter - the value can be assigned directly rather than making a map and copying values
<vino> wallyworld: ok let me take a look.
#juju 2018-05-25
<raub> Can I use vsphere without vcenter with juju?
<thumper> raub: not entirely sure, but I *think* so
 * thumper goes to make a coffee
<raub> thumper: the only info I found was either with juju 1.X or if 2.X vcenter
<thumper> raub: I may well be wrong, I've never tried using vsphere
<blahdeblah> When I last looked at it, vcenter was a requirement, but that was several years back.
<thumper> race fix review anyone? https://github.com/juju/juju/pull/8766
<thumper> wallyworld: https://github.com/juju/juju/pull/8766
<wallyworld> looking
<wallyworld> thumper: lgtm, so long as tests pass :-)
<thumper> they do... see the merge check passes
<kwmonroe> >&2 echo "cory_fu, what does stderr redirect at the front of a line do?"
<kwmonroe> cory_fu: i know this works (https://github.com/conjure-up/spells/pull/191/files#diff-adf6301122077a979448abe4928806c0R12), but never knew why it was at the start of the line
<kwmonroe> hmph.. maybe $(echo "foo" >&2) is not the same as $(>&2 echo "foo")?  to the terminals!
<cory_fu> kwmonroe: I don't know why that syntax works, but I can't ever remember the right order and just google "bash echo stderr" and get this result: https://stackoverflow.com/questions/2990414/echo-that-outputs-to-stderr/23550347#23550347
<kwmonroe> fair enough cory_fu, https://paste.ubuntu.com/p/WX8KQwCSCB/ seems to show front or back doesn't matter.  i respect your individuality.  +1 remains on that pr ;)
<kjackal> Hello  do we know who updates these: https://streams.canonical.com/juju/images/releases/streams/v1/com.ubuntu.cloud.released-aws.json
<cory_fu> kwmonroe: heh.  The "more info" page which that answer links to has an "A note on style" section that says "never precede a command with a redirect".  :p
<cory_fu> But that's just, like, his opinion, you know?
<kwmonroe> :)
<Cynerva> Anyone know how Juju decides what the public address is for a MAAS machine, when the machine is attached to multiple networks?
<Cynerva> Is there a way to tell it to use a specific address / network interface / network space for the public address?
<pmatulis> Cynerva, that's within MAAS network configuration
<Cynerva> ok, thanks, i'll look around and see if i can find it
<zeestrat> pmatulis: How does that work in MAAS? Link to MAAS docs? I've had some issues with this before in some older versions of juju&maas and would be great to see how it should work now.
<pmatulis> Cynerva, zeestrat: https://docs.maas.io/2.3/en/nodes-commission#post-commission-configuration ?
<bdx> Cynerva: --via
<bdx> https://docs.jujucharms.com/2.3/en/developer-network-primitives
<Cynerva> pmatulis: i'm not having much luck with it. In my case I have 6 network interfaces all set to auto-assign. I do see maas is giving special treatment to the first interface, using it for pxe and when populating the A record, so I assume juju's just picking it up from there. haven't found a way to tell maas which interface is primary, or public, or preferred, or anything. will keep looking
<zeestrat> pmatulis: I'm not sure how the maas part comes into play as that is just the regular vlan, subnet selection. How does the --via work with bundles?
<Cynerva> bdx: thanks, but if i'm reading right, --via applies to relations, whereas i'm asking about machines and units which may be standalone with no relations involved. Specifically the value of "DNS" for the machines, or "Public Address" for the units, as seen in `juju status`
<bdx> it doesnt matter in that case
<bdx> so you are g2g
<bdx> ooh, I see what you are saying I think
<bdx> like unit_get('public-address') ?
<bdx> how does it know what the public address is?
<Cynerva> bdx: yeah, that's the one :)
<bdx> Cynerva: *I think* this is what network_get() resolves
<bdx> Cynerva: give your charm some endpoint bindings in the metadata.yaml, then use network_get()
<bdx> "it doesn't matter in that case" - what I meant by this is that you can grep for the correct ip address in the expanded yaml or json status, and you can't rely on `juju status` for knowing what ip to display if a machine has multiple ip addresses... I think
<bdx> aha
<Cynerva> bdx: hmm, i thought network_get only replaced unit_get('private-address'), not public-address
<bdx> what if you don't have the concept of a "public-address" anymore, but just replace it with a space binding? - I think I know the answer to this
<bdx> because then on providers other than maas
<bdx> you wouldn't get the public
<bdx> ok
<bdx> I see where you are going with this now
<bdx> Cynerva: I'm not able to get a public address using network_get() in any situation
<bdx> tried running it on a bunch endpoints in different charms that are deployed to aws and have public ip addresses
<bdx> Cynerva: I think your concern is quite valid sir
<bdx> I would like to hear someones else take on this though
<Cynerva> thanks for looking into it, bdx
<bdx> Cynerva: I thought I had hit what you were talking about before, this is similar but different
<bdx> Cynerva: from what I understand, unit_get('public-address') will return the public address on public clouds, and on private clouds (like maas where machines can have multiple ip addresses) it will pick the ip address for which the 0.0.0.0 route lives
<bdx> looking into how it figures that out now
<bdx> Cynerva: https://github.com/juju/juju/blob/develop/worker/uniter/runner/jujuc/unit-get.go#L66
<bdx> so it comes from Context
<bdx> now just to find out what puts it in there
<bdx> Cynerva: getting closer https://github.com/juju/juju/blob/master/api/sshclient/facade.go#L30,L33
<bdx> so https://github.com/juju/juju/blob/master/api/sshclient/facade.go#L63,L80
<bdx> Cynerva: from what I can tell, the public ip address comes from the 0th element in the list of ssh ip addresses
<bdx> I could be totally wrong  though
<bdx> Cynerva: so looks like unit_get('public-address') will return the ip with the 0.0.0.0 route for maas nodes, and the NAT/public ip for machines with public ip
<bdx> Cynerva: there
<bdx> :)
<bdx> I wonder if there is a way we can make public address data available in network_get()
<Cynerva> ah okay, thanks bdx
<bdx> np
<bdx> so that people writing charms don't run into this
<bdx> and they just have a single consistent way of getting network info
<bdx> I'm sure this will be a rabbit hole for many
<bdx> and its slightly clunky in my opinion
<bdx> Cynerva: what I think would be cool is if you could do: `ingress_public = network_get('http').get('public-address') `
<bdx> I wonder if we could make an ask for network_get to just include the public-address
<bdx> https://bugs.launchpad.net/juju/+bug/1773432
<mup> Bug #1773432: network-get should always include PublicAddress() <juju:New> <https://launchpad.net/bugs/1773432>
#juju 2018-05-27
<veebers> thumper, babbageclunk: I figured out how to disable go-megacheck from on the fly runs (flycheck) if you need the same. Should only affect you if you have flycheck enabled etc.
<thumper> I don't, but it might be useful
<veebers> thumper: "(setq-default flycheck-disabled-checkers '(go-megacheck))", if you're using spacemacs chuck it in your dotspacemacs/user-init (didn't work for me in user-config)
<babbageclunk> veebers: I use flycheck but I haven't had a problem with go-megacheck yet for some reason
<babbageclunk> I'll use this if I do though, thanks!
<veebers> babbageclunk: are you using spacemacs? I'm not sure why but once I had gometalinter installed it kicked in. I imagine it was setup but disabled due to no checker executable etc.
<babbageclunk> veebers: no, I haven't tried to switch over to spacemacs yet
<veebers> babbageclunk: ah right, yeah I imagine spacemacs had some pre-sets champing at the bit and only reared its head once the binary was available
#juju 2020-05-18
<babbageclunk> tlm: can you review this plz? https://github.com/juju/juju/pull/11591
<tlm> sure
<kelvinliu> wallyworld: free HO?
<wallyworld> kelvinliu: ok
<tlm> lgtm babbageclunk, will try again with that when it merges. Just got it again
<babbageclunk> tlm: thanks!
<babbageclunk> sorry about that
<tlm> not your fault
<thumper> hpidcock: if you are going to update juju to use -N on ppc64el, did you want to look at the etcd-io/bbolt?
<hpidcock> thumper: I can look at that too
<thumper> It looks like a drop in replacement with a mem fix
<thumper> perhaps extra fixes too
<thumper> at least that is the theory
<thumper> hpidcock: how's the python fun going?
<hpidcock> thumper: I think I'm done for pylibjuju for now, just need to cut a release, might have it coincide with rc2
<thumper> tlm: looks like a test failure in your model agent landing
<thumper> hpidcock: ack
<thumper> tlm: FAIL: machine_test.go:640: MachineSuite.TestManageModelRunsCleaner
<thumper> tlm: I'm wondering how useful that test is, looking at the content, it is doing a hell of a lot that isn't what we care about
<thumper> tlm: looking, I'm not sure you touched that one at all, and it is just a time sensitive test
<tlm> thumper: sorry back from lunch
<tlm> argh, this test is driving me nuts
<tlm> thumper: what do you recommend ?
<babbageclunk> tlm: sorry - I've been playing whack-a-mole with those tests. I'm going to bump up the timeouts wholesale
<tlm> ok babbageclunk, no issues at all. Weird that mine is the one suffering. Making me wonder if I have missed something
<babbageclunk> tlm: it's possible I guess but I can't see how it would be something you've done - those tests won't be running your workers
<thumper> tlm: just try to merge again
<thumper> I have a branch that should fix that intermittent timeout
<thumper> just pushing now
<thumper> babbageclunk: https://github.com/juju/juju/pull/11592
<hpidcock> wallyworld: can you fork github.com/hashicorp/raft-boltdb into juju/raft-boltdb please
<wallyworld> ok
<thumper> hpidcock: what level of changes do we need?
<hpidcock> thumper: just a path rewrite on the bolt db
<hpidcock> because etcd renamed the project
<thumper> hpidcock: and go mod doesn't help there?
<thumper> ah...
<thumper> poo
<hpidcock> not when they renamed it
<wallyworld> thumper: could you do the fork, i'm in the middle of some Z^%W%@! unit tests
<thumper> wallyworld: ack
<thumper> hpidcock: here you go: https://github.com/juju/raft-boltdb
<hpidcock> thumper: thanks
<babbageclunk> thumper: approved with gusto!
<hpidcock> thumper: wallyworld: can you both review and merge please https://github.com/juju/raft-boltdb/pull/1
<tlm> thumper: thanks for the PR
<babbageclunk> thumper: your pr hit a different intermittent failure, I kicked it off again
<babbageclunk> duh, sorry, that was a check build not a merge one
 * babbageclunk is a dork
 * tlm offers babbageclunk a run
 * babbageclunk accepts
<thumper> babbageclunk: no worries
<thumper> I've filed a bug for that
<thumper> we get a lot of intermittent failures in that package
<thumper> I feel that they all have the same root cause
<thumper> but I've not looked yet
<thumper> tlm: for the record, I kicked your PR merge again
<tlm> thanks thumper
<hpidcock> thumper: can you both review and merge please https://github.com/juju/raft-boltdb/pull/1
<wallyworld> babbageclunk: it's bigger than it looks due to deleting a lot of code and moving some code. i still have a unit test to fix in worker/uniter but good apart from that https://github.com/juju/juju/pull/11593
<babbageclunk> wallyworld: normally people say the other way?
<wallyworld> hpidcock: looking now
<babbageclunk> wallyworld: ok. looking
<babbageclunk> oops meant a comma there
<babbageclunk> fullstop sounds super terse!
<wallyworld> all good
<babbageclunk> whoa, looks big!
<wallyworld> lots of deleted code
<wallyworld> and moved code
<wallyworld> core changes not too bad
<wallyworld> hpidcock: done
<hpidcock> wallyworld: many thanks
<wallyworld> babbageclunk: there's 4 commits which match the pr description if that helps. the raft and lease worker bits should be familiar hopefully
<babbageclunk> ok
<babbageclunk> yeah, that definitely helps
<wallyworld> it's all a bit of a rush sorry
<wallyworld> otherwise i'd have done separate prs
<wallyworld> just got to fix this %W@$!%$ uniter test
<babbageclunk> no worries!
<babbageclunk> wallyworld: oh, you've done the autoexpire removal work, nice
 * babbageclunk gets rid of that part of his branch
<wallyworld> babbageclunk: yeah, sorry, i had to cause it was all mixed up in the work
<babbageclunk> makes sense
<wallyworld> there's still the dummy provider stuff though
<wallyworld> i think there's a fair bit that can be deleted off that
 * tlm ducking out for a little bit to get some air
<wallyworld> babbageclunk: i added an implementation of RevokeLease() in the dummy store and that fixes the tests
<babbageclunk> nice
<wallyworld> maybe i can delete ExpireLease() now for the dummy store, i think we only use it to claim a lease for leadership testing
<wallyworld> yup, nothing uses it
<babbageclunk> wallyworld: the only extra bit is that there needs to be a background goroutine for the dummy lease store so it can expire leases internally
<wallyworld> babbageclunk: i thought about it but from what i can see, we only ever claim a lease to set up a unit leader
<wallyworld> i am pretty sure the tests will now all pass
<babbageclunk> ok, if you don't think there are any places that need expiry that's easier
<wallyworld> yeah, i'll see if the current tests pass
<babbageclunk> sounds good
<wallyworld> i'll add expiry if needed but i don't think so
<wallyworld> kelvinliu: did moving the uniter struct initialisation help?
<kelvinliu> HO?
<wallyworld> sure
<hpidcock> thumper: the deferreturn issue fix was landed https://go-review.googlesource.com/c/go/+/234105/
<hpidcock> hasn't been picked up for a backport to 1.14 yet. Will need to keep an eye out for it.
<hpidcock> thumper: https://github.com/juju/juju/pull/11594
<wallyworld> kelvinliu: this solves most of it - leadership stable after removing wrench. it keeps logging that it wants to depose leadership so a small issue to solve still https://pastebin.ubuntu.com/p/Fx8Y8XsfSd/
<wallyworld> kelvinliu: just afk for a bit, be back soon
<kelvinliu> wallyworld:  looking now, ty
<wallyworld> kelvinliu: did it work for you too?
<kelvinliu> yes finishing the pr now
<wallyworld> kelvinliu: did you see the repeated messages about running a leader deposed hook?
<wallyworld> seems to be more log noise than anything since show-status-log is ok
<wallyworld> but something needs fixing
<wallyworld> it might be the addition of the logger which now prints messages
<wallyworld> so it's always been there
<kelvinliu> I saw the warning message even before this branch
<wallyworld> lots of them repeated?
<wallyworld> i'll see if i can fix
<kelvinliu> did u see lots of repeat? I only saw once
<wallyworld> i saw lots of repeats
<wallyworld> we expect one but not repeated
<kelvinliu> I can't re-produce the warning message now..
<wallyworld> i'm trying again, we'll see
<wallyworld> kelvinliu: it happens after adding and removing the wrench file
<wallyworld> kelvinliu: and it happens because the unit agent local state struct gets Leader=true for non leaders for some reason
<wallyworld> because looks like leader tracker is setting remotestate leader to true
<kelvinliu> u mean the local leader state is out of sync
<wallyworld> seems like it, need to do more debugging
<wallyworld> kelvinliu: yeah, the new tracker still gives bad results for non-leaders after the wrench file is removed :-(
<kelvinliu> wallyworld:  did u build the latest code?
<kelvinliu> it works fine for me
<wallyworld> kelvinliu: i'll pull your latest code and try again
<wallyworld> i was working with my initial diff
<kelvinliu> wallyworld: I just removed debugging msg and fixed tests. not much change
<wallyworld> ok, i'll pull latest any and try
<kelvinliu> yep
<stickupkid> manadart, https://github.com/juju/python-libjuju/pull/423
<stickupkid> or hpidcock if you're around
<manadart> stickupkid: Approved it.
<stickupkid> ta
<manadart> stickupkid: Landed on develop instead of 2.8. Backport: https://github.com/juju/juju/pull/11597
<manadart> achilleasa, hml: Can you tick that patch? ^
<hml> manadart: looking
<manadart> hml, petevg: Test the shutdown service. It does indeed just fail on Bionic, and is pointless.
<hml> manadart: iâm getting a compare changes screen, not a pr
<manadart> hml: https://github.com/juju/juju/pull/11597
<stickupkid> manadart, done
<petevg> manadart: hah! I guess that's a strong argument for just queuing it up for now. And also for making a bug to actually fix it ...
<stickupkid> manadart, achilleasa do we still need this acceptance test, or does the new CI test cover this? https://github.com/juju/juju/blob/develop/acceptancetests/assess_network_spaces.py
<manadart> stickupkid: Which new one do you mean?
<stickupkid> https://github.com/juju/juju/tree/develop/tests/suites/spaces_ec2
<manadart> stickupkid: Thought so. It doesn't really test the same things. That one tests bindings, including the upgrade-charm path.
<stickupkid> shame, wanted to get rid of another test tbh
<manadart> stickupkid: Python one tests space constraints, including container-in-machine.
<stickupkid> manadart, fiiiiiiiiiiiiiiiiiiine, will add it to the list of things to move rather than delete
<manadart> stickupkid: I can re-write in shell style when I do that bindings card in the "Doing" lane.
<stickupkid> manadart, let's do that because the python one doesn't do a good job at cleaning up
<manadart> stickupkid: Ack.
<stickupkid> also I'm pretty sure we can reuse the VPC... in the python test, but let's ignore that for now
<stickupkid> hml, you could fix that as a temporary measure I guess for running out of VPCs in eu-west-1
<manadart> hml, petevg. Did a quick smoke test on MAAS. Bionic containers appear to release IPs upon both remove-machine and kill-controller, so we don't need a network shutdown service.
 * manadart heads home.
<thumper> petevg: https://github.com/juju/juju/pull/11598 plz
<petevg> thumper: taking a look
<thumper> petevg: it is just forward porting the fix I did on friday
<thumper> as wallyworld mentioned, should get it into the 2.8 branch
<thumper> I should have done it friday, or yesterday, but it slipped my mind
<petevg> thumper: Got it. I marked it as approved.
<thumper> petevg: ta
<petevg> np
<thumper> petevg: bug 1876849
<mup> Bug #1876849: [bionic-stein] openvswitch kernel module was not loaded prior to a container startup which lead to an error <cdo-qa> <OpenStack neutron-openvswitch charm:Incomplete> <juju:New> <https://launchpad.net/bugs/1876849>
<tlm> wallyworld: https://github.com/juju/juju/pull/11599
<wallyworld> looking
<wallyworld> tlm: ta, lgtm
#juju 2020-05-19
<wallyworld> hpidcock: yeah, should be a const, i can abort the merge if tlm wants to fix
<hpidcock> nah all good
<wallyworld> can be a drive-by next time
<hpidcock> yep
<wallyworld> hpidcock: FYI pr landed, release doc started, https://docs.google.com/document/d/13OZtXkx_0lfa3a6W6xRYqlY9p3MQT0KLtFRU49acXhM/
<hpidcock> wallyworld: awesome!
<wallyworld> hpidcock: or thumper: here's 2.8-rc into 2.8 https://github.com/juju/juju/pull/11600 clean merge
<hpidcock> wallyworld: any merges?
<hpidcock> merge conflicts*
<hpidcock> oh
<wallyworld> np :-)
<hpidcock> you already said clean merge
<wallyworld> yeah
<hpidcock> approved
<wallyworld> ta
<hpidcock> wallyworld: I'm going to start the release
<wallyworld> ok, sgtm
<wallyworld> hpidcock: maybe wait for buildjuju to finish to reduce load on slaves?
<hpidcock> wallyworld: yeah, just killed the old ci-runs
<hpidcock> waiting for this one to at least pass build
<wallyworld> +1
<tlm> tiny pr if anyone has a few minutes https://github.com/juju/juju/pull/11601
<wallyworld> tlm: there's also uniter.go
<tlm> ?
<wallyworld> same pattern
<wallyworld> if we fix in caasoperator should also fix in uniter.go as well
<wallyworld> original code was from 2015
<tlm> ah ok makes sense
<wallyworld> i wonder if we needed the mutex back then
<tlm> thought I had missed a use case
<tlm> I was thinking the func was called multiple times but not the case
<wallyworld> no
<wallyworld> tlm: here's the commit message
<wallyworld> I believe that [this recent ppc64 failure](http://reports.vapour.ws/releases/3059/job/run-unit-tests-trusty-ppc64el/attempt/3804#highlight), is caused by a race between the restartWatcher and watcher cleanup closures.
<wallyworld> we should look to be sure we're not reintroducing a test issue
<tlm> that for me? Also link doesn't work
<wallyworld> no link won't work, it's an old CI server
<tlm> ah cool
<wallyworld> i just wanted to point out the commit message and why the mutex was added
<tlm> ah ok, mutex isn't doing anything now besides adding some noops
<wallyworld> we should think about if removing it would reintroduce any test failures
<wallyworld> yeah, it does appear to be redundant
<tlm> if we get test failures out of that then It would be a miracle that it has been working for so long.
<wallyworld> tlm: ah, the addCleanup() had the mutexes
<wallyworld> and they have been removed
<wallyworld> for some reason
<tlm> ah k
<wallyworld> this line u.addCleanup(func() error
<wallyworld> but it's not there anymore
<tlm> updated pr
<tlm> hey my first uniter change wallyworld. I have battle scars now :)
<wallyworld> lol
<wallyworld> tlm: i think we changed how stuff is shut down since 2015 so the mutex is now not needed
<tlm> yep
<wallyworld> lgtm
<tlm> made juju 0.000005% faster
<wallyworld> progress
<timClicks> "what's new in juju 2.8" should now be feature complete https://discourse.juju.is/t/whats-new-in-juju-2-8/2859
<wallyworld> awesome ty
<wallyworld> timClicks: the vsphere section should mention the new credential folder attribute?
<timClicks> actually just noticed that the new/changed hooks section is also quite sparse
<wallyworld> timClicks: min-juju-version isn't new, it's been there since 1.24 i think. What is new is that a charm says "2.8.0" to signal it doesn't need a k8s PV created for it since it (the charm) has been updated to use the new controller storage facility
<thumper> why do we have a package level variable for the watcher in the first place?
<hpidcock> thumper?
<thumper> hpidcock: from this https://github.com/juju/juju/pull/11601
<thumper> the lock was removed, but next to it was a package level watcher
<thumper> which seems wrong to me
<hpidcock> they are function variables
 * thumper looks again
 * thumper sighs
<thumper> I should have expanded the diff
<thumper> sorry for the noise
<thumper> https://github.com/juju/juju/pull/11602 for a boring review
<tlm> i can do it
<babbageclunk> tlm: I'm looking
<tlm> ah cheers babbageclunk, my only feedback thus far (and it may be there already) is that logger should be an interface type like the pattern in new workers
<babbageclunk> (I should have said I was)
<babbageclunk> it is one isn't it?
<tlm> maybe, I hadn't got that far yet
<tlm> as in type Logger interface { DebugF() }
<babbageclunk> yup yup
<babbageclunk> I can't remember why we started doing that though - was it just for ease of testing?
<tlm> Yeah make duck typing in testing easier
<tlm> makes*
<babbageclunk> I have a vague memory of there being something else as well but I can't think what it is.
<tlm> maybe for regex testing log messages in tests ?
<babbageclunk> thumper: approved
<wallyworld> thumper: last one for today, also clean merge https://github.com/juju/juju/pull/11604
<wallyworld> hpidcock: since thumper is ignoring me, perhaps you can +1 2.8->develop, similar to 2.8-rc->2.8 https://github.com/juju/juju/pull/11604; clean merge, no accidental version updates
<hpidcock> wallyworld: ok looking
<hpidcock> wallyworld: approved
<wallyworld> tyvm
 * thumper sighs
<thumper> I need to get these alerts sorted
<thumper> wallyworld: sorry
<wallyworld> thumper: no worries, i'm not really mad, just trolling you :-)
<hpidcock> wallyworld: is there a fast way to mark all bugs in a release as released?
<hpidcock> in a milestone*
<wallyworld> only using the python clp client
<wallyworld> IIANM
<wallyworld> *lp, not clp
 * thumper knocks off a few more todos for the merging of agents
 * thumper EODs
<thumper> see ya tomorrow folks
<thumper> wallyworld: could I get you to do a pass over the latest stop the line tests?
<thumper> just to see why they are failing, and email crew?
<thumper> especially given RC2
<wallyworld> ok
<thumper> ta
<stickupkid> manadart, ping when you get a chance...
<manadart> stickupkid: Pong.
<stickupkid> daily?
<manadart> stickupkid: https://github.com/juju/juju/pull/11606
<stickupkid> manadart, if MapWithSpaceNames passes in a lookup, but lookup is nil, should we return nil or the underlying bindings?
<stickupkid> manadart, I suspect we should return nil, but that gives you nothing
<manadart> stickupkid: If the lookup is length zero, we do what it does now and retrieve the spaces from state.
<stickupkid> manadart, why?
<stickupkid> manadart, why not push the logic out as a dependency?
<stickupkid> manadart, just make it implicit that we now require the network.SpaceInfos to get you MapWithSpaceNames...
<stickupkid> it's less magicial
<manadart> stickupkid: I guess it's doable. Not many call sites.
<stickupkid> manadart, that's my thought atm
<stickupkid> manadart, anything magical I'm tending to run away from atm
<manadart> stickupkid: In that case, a length zero lookup returns a `errors.NotValidf("empty space lookup")`.
<stickupkid> wicked, like it
<manadart> stickupkid: https://github.com/juju/juju/pull/11607
<rick_h> stickupkid:  if I deploy a bundle with pylibjuju with a CMR am I asking for trouble?
<rick_h> stickupkid:  on the consuming side
<stickupkid> rick_h, probably
<stickupkid> rick_h, let me check if we implemented all the lookups
<rick_h> booooooooo
<stickupkid> rick_h, it depends, we handle consuming of offers, but we might not handle everything.
<rick_h> stickupkid:  ok, guess I'll see how far I get ty
<stickupkid> rick_h, indeed
<achilleasa> what can possibly go wrong here? https://github.com/juju/juju/blob/develop/network/network.go#L233-L252
<achilleasa> (hint: look at the map access patterns)
<stickupkid> achilleasa, I don't like the fact it modifies the original map, creeps me out
<achilleasa> stickupkid: if you run out of space in the map, a new backing store will be allocated and the _reference_ passed to the func gets modified. However, the caller still has the old map ;-)
<stickupkid> achilleasa, also why doesn't gatherBridgeAddresses return []net.Addr
<stickupkid> achilleasa, hence the creeps me out
<stickupkid> achilleasa, addressesToRemove[DefaultLXDBridge] = gatherBridgeAddresses(DefaultLXDBridge)
<stickupkid> done
 * stickupkid goes to check that he didn't write it
<stickupkid> haha
<achilleasa> nah, I will just throw all into a list and pass it to the filter func
<stickupkid> fair
<achilleasa> there might be some dups but who cares
<stickupkid> solve that later :)
<achilleasa> stickupkid: you need the map after all because the map gets flattened into a []ipNetAndName slice
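For the record, Go maps differ from slices on this point: a map value is effectively a pointer to shared internals, so growth inside a callee stays visible to the caller; what actually loses data is reassigning the map parameter itself (or the appended-slice pattern). A quick self-contained check:

```go
package main

import "fmt"

// addEntries mutates the map it receives. Because a Go map value points to
// shared internals, growth inside the callee is visible to the caller.
func addEntries(m map[int]int) {
	for i := 0; i < 1000; i++ { // forces several growths of the backing store
		m[i] = i
	}
}

// reassign replaces its local copy of the map header; the caller's map is
// untouched. This is the pattern that actually loses data.
func reassign(m map[int]int) {
	m = map[int]int{}
	m[5000] = 1 // written to the callee's new map only
}

func main() {
	m := map[int]int{}
	addEntries(m)
	fmt.Println(len(m)) // 1000: the caller sees everything, despite growth

	reassign(m)
	fmt.Println(m[5000]) // 0: the reassignment was invisible to the caller
}
```

Either way, not mutating the caller's map in place, as the channel discussion concludes, avoids the question entirely.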
<rick_h> guild I updated guimaas with guidance to the snap and 2.7 FYI
<rick_h> if anyone hits issues let me know and I might ask to take over guimaas for a demo in the near future just a heads up
<stickupkid> rick_h, nice :)
<stickupkid> gr8, when we moved to go 1.14 we didn't update all the mocks - haha
#juju 2020-05-20
<wallyworld> thumper: https://github.com/juju/juju/pull/11609
<tlm> wallyworld: doing the rc1 upgrade I am seeing this in the controller after upgrade
<tlm> ERROR juju.worker.dependency engine.go:671 "api-caller" manifold worker returned unexpected error: [bf8dc1] "controller-0" cannot open api: unable to connect to API: dial tcp 127.0.0.1:17070: connect: connection refused
<tlm> any thoughts ?
<kelvinliu> wallyworld: tlm ho?
<wallyworld> sure
<tlm> k
<thumper> wallyworld: lint failure
<wallyworld> thumper: should be fixed
<thumper> wallyworld: going through it now
<thumper> wallyworld: bug 1878329
<mup> Bug #1878329: stuck k8s workload unit following upgrade-charm with new image <k8s> <juju:New for kelvin.liu> <https://launchpad.net/bugs/1878329>
<thumper> tlm: did you want to chat?
<tlm> thumper: in standup with wallyworld and kelvinliu discussing bugs atm
<thumper> tlm: ok
<thumper> tlm: I may leave you to work with them on the rc2 issues
<tlm> no worries. Enjoy your arvo
<thumper> I may give a brief try to see if I can reproduce the caas operator intermittent test failures locally
<thumper> I have about 30 minutes
<tlm> k, I am banging my head against the wall at the moment as I can't do an rc1-rc2 upgrade. Juju doesn't bump the k8s image locally
<tlm> seems to work for everyone else
<thumper> tlm: running caas operator tests in a loop couldn't reproduce in 400 iterations
<thumper> but ran with -race and hit it in 10
<thumper> tlm: this line appears in failing test, but not passing test: [LOG] 0:00.087 WARNING test unit running chan["gitlab/0"] is blocked
<tlm> hmmmm have you found where that log message comes from thumper ?
<thumper> line above says:
<thumper> 						// This should never happen unless there is a bug in the uniter.
<thumper> caasoperator.go line 527
<thumper> can't get it to fail again
<thumper> got it to fail eventually, but running stress-race and make check in another window
<thumper> stress-ng doesn't seem to tickle it
<thumper> I think I'd need to understand all the channels to debug further
 * thumper reads code for 10 minutes
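The "chan is blocked" warning thumper quotes is the classic non-blocking-send pattern: try the send, and if no receiver is ready, log instead of hanging. A minimal sketch of that shape (the real caasoperator logic is more involved):

```go
package main

import "fmt"

// notify attempts a non-blocking send. If no receiver is ready, it logs
// that the channel is blocked instead of hanging. This mirrors the shape
// of a "chan ... is blocked" warning; names here are illustrative.
func notify(ch chan string, unit string) bool {
	select {
	case ch <- unit:
		return true
	default:
		fmt.Printf("WARNING unit running chan[%q] is blocked\n", unit)
		return false
	}
}

func main() {
	ch := make(chan string) // unbuffered: blocked unless someone is receiving
	sent := notify(ch, "gitlab/0")
	fmt.Println("sent:", sent)
}
```

The comment in caasoperator.go ("This should never happen unless there is a bug in the uniter") suggests the warning fires exactly when the receiver side has stopped selecting, which is why it correlates with the failing runs.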
<tlm> two ticks thumper, just writing a response in launchpad
<tlm> free now thumper
<thumper> tlm: which hangout?
<tlm> standup ?
<thumper> ack
<wallyworld> kelvinliu: did you see paul has pasted links to the charm on the bug?
<kelvinliu> wallyworld: yep
<wallyworld> i'm just landing some other stuff, will test soon
<tlm> wallyworld, kelvinliu
<tlm> whoops
<tlm> PR https://github.com/juju/juju/pull/11610 no rush it's more of a look and see if you agree type situation
<wallyworld> tlm: i'm always a little wary about creating a buffered channel to solve a race; in this case, I can't see from the code how we can get events from the containerStartChan before we start selecting from the channel, so as yet i can't see how this solves the race... is there a causal link between the unbuffered channel and the race?
<kelvinliu> wallyworld: https://bugs.launchpad.net/juju/+bug/1879598  HO to discuss this issue?
<mup> Bug #1879598: terminating pod confuses k8s charm operator <k8s> <juju:Triaged by kelvin.liu> <https://launchpad.net/bugs/1879598>
<wallyworld> sure one sec
<wallyworld> kelvinliu: free now
<kelvinliu> stdup?
<wallyworld> yup
<tlm> wallyworld: I can try and explain it better over HO if you want
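The race wallyworld is asking about has a simple shape when the sender uses a non-blocking send: with no buffer, an event emitted before the worker starts selecting is dropped; a buffer of one parks it until the select loop comes up. A sketch with illustrative names, not juju's actual code:

```go
package main

import "fmt"

// emit performs a non-blocking send, the pattern under discussion. With an
// unbuffered channel and no receiver selecting yet, the event is dropped;
// a one-slot buffer holds the early event until the worker catches up.
func emit(ch chan string, event string) bool {
	select {
	case ch <- event:
		return true
	default:
		return false // event lost: nobody selecting yet and no buffer
	}
}

func main() {
	unbuffered := make(chan string)
	fmt.Println("unbuffered delivered:", emit(unbuffered, "container-start"))

	buffered := make(chan string, 1)
	fmt.Println("buffered delivered:", emit(buffered, "container-start"))
	fmt.Println("drained:", <-buffered)
}
```

This is also why wallyworld's question about a causal link is the right one: a buffer only helps if the sender really can fire before the receiver is selecting, and only if the sender drops (rather than blocks on) a full channel.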
<stickupkid_> manadart, CR https://github.com/juju/juju/pull/11608
<manadart> stickupkid_: Swap you. https://github.com/juju/juju/pull/11607
<manadart> stickupkid_: Got a sec to HO?
<stickupkid_> sure
<stub> cory_fu: Know what is happening here? I think layer:status is broken with focal (py3.8), but I haven't traced why: https://pastebin.ubuntu.com/p/6bXFzkxB6b/
<stickupkid_> manadart, when you get a second, want to talk through the next stages of work on bindings thing
<stub> cory_fu: Weirdly, hooks work fine, but actions failing (which use the charm-env shebang)
<manadart> stickupkid_: Want to HO now before standup?
<stickupkid_> sure
<stub> cory_fu: nm, found charms.layer.basic.activate_venv where the docstring explains it
<cory_fu> stub: I'm not sure about activate_venv, but I think actions need to explicitly call import_layer_libs: https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/__init__.py#L6
<stub> cory_fu: activate_venv does that, and some other twiddling.  'This is handled automatically for normal hooks, but actions might need to invoke this manually,'
<cory_fu> Ah, right, I see
<cory_fu> charm-env does part of it by ensuring the venv is active, but not the extra twiddling
<stub> activate_venv works in any case \o/ . I have no idea when this broke; I haven't rebuilt the charm for ages.
<stub> Was thinking it was py3.8 problems
<cory_fu> I think it's mostly due to the `from charms import layer` pattern vs `from charms.layer import status`
<stickupkid_> manadart, if you're still around https://github.com/juju/juju/pull/11611
<stickupkid_> manadart, this lifts up the endpoints binding calls along with AllSpaces call...
<stickupkid_> manadart, I'm going to tackle the merging in another PR
<manadart> stickupkid_: I will have a proper look in the AM. Have to head off now.
<stickupkid_> manadart, wicked, nearly finished myself
<hatch> What api facade/method can I use to get a controller's cloud/region ?
<thumper> hatch: not sure about that, but show-controller has the details you need
<thumper> check what facade it hits
<hatch> thanks, I'll check there
<hatch> all these side effects.....
<hatch> thumper I'm not really sure where `details` is being populated https://github.com/juju/juju/blob/d59b8637e150943cb987a8194addbdc116746b63/cmd/juju/controller/showcontroller.go#L139
#juju 2020-05-21
<tlm> wallyworld, kelvinliu: have updated that PR based on discussion this morning https://github.com/juju/juju/pull/11610
<kelvinliu> tlm: lgtm
<timClicks> babbageclunk, kelvinliu: have we fixed this vsphere bug? "In 2.7.0, networking rules must allow direct access to the ESX host for the Juju client and the controller VM. The Juju client's access is required to upload disk images and the controller requires access to finalise the bootstrap process. If this access is not permitted by your site administrator, remain with Juju 2.6.9. This was an inadvertent regression and will be fixed in a future release"
<kelvinliu> timClicks: probably not yet
<kelvinliu> seems we had a workaround to fix it, but we didn't have a chance to fix it
<timClicks> where are clouds.yaml and credentials.yaml stored on macOS and Windows?
<babbageclunk> timClicks: no, I don't think we know any way to fix it (without the workaround of allowing access to the host).
<thumper> wallyworld: got a few minutes?
<wallyworld> sure
<thumper> 1:1?
<thumper> wallyworld: it looks like the stuck offer also stops the model from being destroyed
<thumper> in the same repro
<wallyworld> hmmm, ok, even with --force?
<thumper> I was using kill-controller
<thumper> I would assume that would --force things
 * thumper has to EOD
<thumper> I think the same underlying issue is causing the destroy failure
 * thumper out
<stickupkid> this little bad boy keeps failing on me atm MachineWithCharmsSuite.TestManageModelRunsCharmRevisionUpdater
<stickupkid> anybody know anything about it?
<stickupkid> manadart, fix for the above https://github.com/juju/juju/pull/11613
<manadart> stickupkid: Approved.
<stickupkid> ta
<stickupkid> manadart, that didn't work, I believe that the long wait just isn't long enough, as I can't replicate it locally
<stickupkid> yeah, it's hard to get the replicaset sorted, which is causing this I believe
<stickupkid> manadart, If you get a second as well --> https://github.com/juju/juju/pull/11611
<stickupkid> precursor to more work around endpoint binding changes
<stickupkid> manadart, got a sec?
<manadart> stickupkid: Yep.
<stickupkid> in daily
<rick_h> petevg:  ping, do you have a few I can steal?
<petevg> rick_h: a few minutes? Not yet. But I do after the daily sync.
<rick_h> petevg:  rgr ty
<petevg> np
<stickupkid> manadart, I'm guessing for endpoint bindings that have an alpha space, we should be ignoring that for the machine topology?
<manadart> stickupkid:
<manadart> Hmm.
<manadart> I don't think we are ready for always spaces yet, so yes.
<stickupkid> manadart, thought as much, just trying to hammer out the tests around this
<manadart> stickupkid: Actually, this won't work for O7k.
<stickupkid> manadart, what do you mean?
<manadart> stickupkid: If we have a charm with bindings to space alpha and beta, we'd only get a NIC in beta and the charm wouldn't work.
<stickupkid> ah, right right
<stickupkid> so you'd end up with space topology all the time then I believe
<stickupkid> manadart,
<manadart> stickupkid: The case where you could ignore it and omit generation is when there is only one space, because that will be alpha and the operator doesn't care about spaces.
<stickupkid> right right
<manadart> I mean omit generation of the topology.
<manadart> Yeah, I think that's the rule.
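manadart's rule can be stated as a tiny predicate: omit topology generation only when every binding targets the default (alpha) space, since then the operator doesn't care about spaces; anything else needs per-space NIC placement. A sketch (space names as bare strings for illustration; juju's real logic works on binding structures):

```go
package main

import "fmt"

// needSpaceTopology sketches the rule above: topology generation can be
// omitted only when every binding targets the single default (alpha)
// space. Any other space in play means the machine needs NICs per space.
func needSpaceTopology(bindingSpaces []string) bool {
	for _, space := range bindingSpaces {
		if space != "alpha" {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(needSpaceTopology([]string{"alpha", "alpha"})) // false
	fmt.Println(needSpaceTopology([]string{"alpha", "beta"}))  // true
}
```

This also captures the OpenStack case manadart raises: a charm bound to both alpha and beta must still get a NIC in each, so alpha cannot simply be ignored.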
<stickupkid> manadart, PR is ready for re-review
<josephillips> hi
<josephillips> hi/
<josephillips> i already have an ubuntu openstack deployed with juju and kvm. is it possible to assign some nodes to run on lxd and others with kvm?
<rick_h> josephillips:  for the nova compute nodes?
<josephillips> yep
<josephillips> i want to designate some nodes with kvm and other nodes with lxd
<rick_h> josephillips:  hmm, have to check with the openstack folks. There was a lxd charm at one point but I'm not sure how that's managed today.
<rick_h> beisner:  do you know someone that can point josephillips in the right direction? ^
<josephillips> is possible perform a config per unit and not per application?
<beisner> o/
<rick_h> josephillips:  no, the idea of applications is the units are consistent so you'd definitely need to split into two sets of units
<josephillips> https://github.com/openstack-archive/charm-lxd
<josephillips> because in this documentation the usage is changing the virt-type on nova-compute
<josephillips> if i do that it will change kvm to lxd on all nodes
<beisner> josephillips rick_h - the nova-lxd charm and hypervisor work is deprecated.  KVM is the hypervisor that we support.
<beisner> with the nova charms, that is.
<rick_h> beisner:  ok, I wasn't sure if there was a new path for the lxd as a hypervisor. Thanks for the guidance
<josephillips> any reason for that?
<beisner> the main reason was:  no actual adoption of it vs. the effort to maintain and develop it.
<beisner> s/no/very little/
<josephillips> but openstack itself keeps supporting it right?
<beisner> we are openstack itself in this context.
<josephillips> oh
<beisner> josephillips:  virt-type lxc technically is all vanilla and might just work, but we do not validate it, nor do we put it forth for users to use.
<beisner> via charms
<josephillips> oh got it but still have the same problem that lxd
<josephillips> if i do that, will it switch all compute nodes from kvm to lxc?
<beisner> josephillips: yes.  charm config is application-wide.
<josephillips> another question
<beisner> josephillips: you can deploy two nova-computes as two differently-named applications and that works
<josephillips> oh
<josephillips> how can i do that?
<beisner> "works" in the sense of you can have two differently-configured personalities of nova-compute units
<beisner> not "works" in the sense that you will succeed with virt-type lxc.
<beisner> :)
<beisner> josephillips: all the same relations, just deploy another nova-compute with a name like nova-compute-foo.
<josephillips> i have to download the charm locally to do that?
<beisner> ie. nova-compute-bar and nova-compute-foo would exist in the deployment.
<beisner> you can do it in a bundle
<beisner> also, you could download a local copy
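A bundle with two differently-named nova-compute applications, as beisner describes, could look roughly like this (the options shown are illustrative, and per the discussion virt-type lxc is unvalidated and unsupported):

```yaml
applications:
  nova-compute-kvm:
    charm: cs:nova-compute
    num_units: 2
    options:
      virt-type: kvm
  nova-compute-lxc:        # same charm, differently named and configured
    charm: cs:nova-compute
    num_units: 1
    options:
      virt-type: lxc       # unvalidated; shown only to illustrate per-app config
```

Each named application gets its own application-wide config, which is how you work around config being per-application rather than per-unit.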
<josephillips> and is there no plan to add another container support in the future?
<beisner> josephillips:  lots of container support around here for sure :-)
<beisner> josephillips:  nova driving lxd is the specific use case that isn't in development.  lxd/lxc are in full-gear adoption and use, just not with the nova shim.
<beisner> josephillips: have you checked out lxd clustering?
<josephillips> nope
<josephillips> i was looking for a solution so users can create lxc containers for database nodes
<beisner> josephillips:  gotcha.  so you can do that with juju on openstack in kvm instances;  also, circling back to the lxd clustering (which is sans openstack):  https://linuxcontainers.org/lxd/docs/master/clustering
<josephillips> i will check that
<jim31> hey, i was hoping someone might be able to help me with an issue i'm experiencing trying to enable kubeflow on microk8s. the issue arises with the mongodb pod complaining about what i assume is ipv6 binding. i tried disabling ipv6 on lxd, but it doesn't seem to have fixed anything.
<jim31> 2020-05-21T20:07:28.442+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true, ipv6: true, port: 37017, ssl: { PEMKeyFile: "/var/lib/juju/server.pem", PEMKeyPassword: "<password>", mode: "requireSSL" } }, replication: { oplogSizeMB: 1024, replSet: "juju" }, security: { authorization: "enabled", keyFile: "/var/lib/juju/shared-secret" },
<jim31> storage: { dbPath: "/var/lib/juju/db", engine: "wiredTiger", journal: { enabled: true } }, systemLog: { quiet: true } }2020-05-21T20:07:28.443+0000 I STORAGE  [initandlisten] exception in initAndListen std::exception: open: Address family not supported by protocol, terminating
<jim31> this happens when calling the command microk8s.juju --debug bootstrap microk8s --config juju-no-proxy=10.0.0.1
<jim31> i need to pass ipv6: false i think to the mongo pod but am lost on where to do that
<wallyworld> thumper: i commented on the bug - their log level was <root>=WARNING, so yeah, not much got included :-)
<wallyworld> thumper: one small step https://github.com/juju/juju/pull/11615
<timClicks> kelvinliu: will that bootstrap to EKS PR land in Juju 2.8.0
<kelvinliu> timClicks:  sry, it should target develop branch
<timClicks> kelvinliu: no need to apologise :)
<wallyworld> we want it for 2.8
<wallyworld> since so far it's a small CLI tweak only
<kelvinliu> wallyworld: let's discuss the target branch for eks?
<wallyworld> ok
<wallyworld> thumper: sigh, the cmr thing all worked for me on 2.8, will have to test with 2.7. you tested with 2.8 though right?
#juju 2020-05-22
<hpidcock> thumper: tlm: you both have interest in this https://github.com/juju/juju/pull/11616
<tlm> roger
<tlm> looks good hpidcock, can you update pki/testing for the test authority to use it ?
<hpidcock> tlm: it should already
<hpidcock> only the pki package tests with secure keys
<tlm> a test would have to bring in testing for it to apply?
<hpidcock> tlm: true, but that is pretty much all test packages
<tlm> can we introduce it into pki/testing to guarantee it ?
<tlm> that would be my only feedback
<hpidcock> sure
<tlm> any idea what the performance is like with rsa 512 V ecdsa 224 ?
<hpidcock> probably negligible difference
<hpidcock> I can have a look
<tlm> all good was just wondering if you knew
<tlm> sort out mongo in k8s and we can do the swap for 2.9 maybe
<tlm> just did a check and it looks like most dns root zones have swapped over to ecdsa now
<tlm> smallish PR if anyone has time https://github.com/juju/juju/pull/11612 (not urgent)
<wallyworld> thumper: progress of sorts, i left a comment on the bug. the mechanics of the issue i can see but not yet the root cause
<babbageclunk> ugh, is there any way I can handle the case where I try to restore the snapshot on the controller nodes (ie, move the snapshot dir back) but only some of them succeed?
<wallyworld> what's the cause of one of them failing?
<tlm> is lxd hanging for anyone else when bootstrapping from 2.8-rc ?
<wallyworld> not last time i tried
<babbageclunk> wallyworld: oops, missed your response - not sure, I'm trying to handle the situation where we need to restore the database snapshots (because the juju-restore process has failed for some reason) but then restoring the snapshots has failed on some number of nodes. I guess at that point we ask the operator to restore them.
<wallyworld> i think in some cases it's ok (to start with) to inform what went wrong and suggest a manual fix, possibly followed by running restore again after the user intervention
<thumper> https://github.com/juju/juju/pull/11617 if someone is feeling bored
<thumper> wallyworld: re bug above, I imagine that is a little frustrating
<wallyworld> thumper: there's weird stuff happening, success vs failure depends on if the state watcher picks up all changes in one event or two (in the latter case, the worker doesn't seem to see what it needs and the event gets lost), and we seem to be incorrectly watching relation unit changes multiple times
<thumper> ugh...
 * thumper EODs
<thumper> time for cleaning the kitchen and a glass of wine I think
<thumper> night all
<stickupkid> manadart, updated my PR https://github.com/juju/juju/pull/11611
<manadart> stickupkid: Yep. Will look.
<manadart> stickupkid: Can further simplify like this: https://pastebin.canonical.com/p/5ZfmWmK9kn/
<stickupkid> ooo, like it
<stickupkid> let me try it
<stickupkid> manadart, done, just did some renaming of stuff and then applied the changes.
<manadart> stickupkid: I just approved it, but you can do a QA step on OpenStack to check that a charm with multiple bindings causes a machine to be provisioned with multiple NICs.
<manadart> Also need the LP bug in the description.
<stickupkid> manadart, fun fun fun
<stickupkid> manadart, yeah, this PR grew, I'll sort that out now
<stickupkid> manadart, I'll test that we didn't regress AWS as well
<stickupkid> manadart, you got microstack installing recently?
<stickupkid> mine is just hanging installing rabbitMQ
<manadart> stickupkid: Using `--devmode --edge`?
<stickupkid> manadart, let me try again
<stickupkid> devmode
<stickupkid> manadart, thank-you again
<stickupkid> manadart, mind if I edit your post for now
<manadart> stickupkid: Sure.
<stickupkid> achilleasa, https://github.com/juju/juju/pull/11618#pullrequestreview-416899148
<achilleasa> stickupkid: it's all spaces now ^^
<stickupkid> FIIIIIIIIIIIIIIIGHT
<stickupkid> achilleasa, approved
<achilleasa> (and 0-width UTF8 chars :-) )
<stickupkid> ha, yes!
<achilleasa> stickupkid: manadart: unless you remove the systemd service entries when you clean up you will still get the "machine is already provisioned error". I guess I should change my PR to clean them
<manadart> achilleasa: Yeah, that's probably from when we removed them from /lib/systemd...
<manadart> We should delete them.
<achilleasa> stickupkid: updated PR to delete the systemd services. ok to merge?
<stickupkid> sure
<stickupkid> manadart, https://github.com/juju/juju/pull/11621
<stickupkid> manadart, I could update the base branch, so made a new PR
<stickupkid> manadart, just realised this needs to land first https://github.com/juju/juju/pull/11622
<achilleasa> manadart: my changes broke the manual add-machine. I accidentally filtered out the ovs bridge (the only active NIC in my setup) from the list of usable addresses...
<achilleasa> interestingly, the agent did connect briefly to the controller and set its password; then it exploded :D
<achilleasa> (the call to network.FilterBridgeAddresses happens later)
<manadart> achilleasa: I see.
<manadart> stickupkid: Just doing a quick check on that patch. Had to update Go on my notebook.
<stickupkid> sure sure
<stickupkid> no rush
<manadart> stickupkid: Approved 11622.
