#juju 2012-07-02
<hokka> i agree current approach of wrapping cli lxc commands is a bit ugly. but I thought lxc can take care of all these issues already
<hokka> i mean autorestart, firewall, network rules -- libvirt offers all this stuff for lxc containers, no?
<imbrandon> lxc is a hack, it should die in a fire
<hokka> lxc technology appears to mature slowly, but it is not that bad. also, right now if one wants to install openstack with juju there are only two options I'm aware of: maas, which requires 10(!) physical hosts because it will deploy each service on a separate physical host, which seems ridiculous to me. the second option is lxc. it lets you deploy several openstack services on one host, which is nice I think.
<imbrandon> it is that bad, if you want to separate them use real virtualization that already does all this for you, esx, vbox, to name a few, not a bsd jail on crack that's hacked to do some of the same things
<imbrandon> and there is dev stack
<imbrandon> as well as many real virtualization products
<hokka> note that I was talking about using juju to bootstrap openstack
<hokka> do you suggest to deploy openstack components in vbox?
<imbrandon> i don't suggest anything other than lxc is a hack that needs to die in a fire, it may have its uses but for our usecase there are already mature things that don't require re-coding to do something they were never intended to do
<hokka> :)
<lifeless> how do you interrogate a specific relation from a charm hook?
<lifeless> (I want to see if the zk relation is setup in opentsdb in the start hook)
<imbrandon> relation-get ?
<lifeless> so, in the zookeeper-relation-changed hook, I do zk_host=`relation-get private-address`
<imbrandon> yup
<lifeless> but that is missing the context in the start hook.
<hokka> lifeless, imbrandon: thanks for the feedback
<lifeless> isn't it? Or am I misunderstanding the whole thing
<imbrandon> right, you only have access to it in the relation* hooks
<lifeless> so that's my question
<lifeless> imbrandon: btw you were a bit harsh on hokka :(
<imbrandon> ahh as far as i know you dont
<imbrandon> or cant
<imbrandon> sorry :(
<lifeless> SpamapS: ^ your review, how can I do this thing?
<imbrandon> lifeless: yea it just pains me every time someone tries to use lxc, we pitch it as something it's not and get their expectations up
<imbrandon> but yes i probably was, we just need a real local provider like you said
<lifeless> imbrandon: sure, problem I saw was that hokka was trying to solve a problem, and while you're accurate, it didn't help them.
<lifeless> sorry, overlapped with you there. enough said.
<imbrandon> :)
<imbrandon> but yea as far as the relation stuff , i dont think there is a way iirc, other than some questionable measures
<imbrandon> like using the juju charm from the cli of a hook
<imbrandon> or similar
<imbrandon> SpamapS lifeless hazmat : mmmm i wish there was an interactive shell that simulated a charm env where i could fire hooks in the correct context at will
<imbrandon> that would make creating new charms so much easier
<imbrandon> know if there is a way i could simulate that now  that i'm overlooking ?
 * imbrandon is finishing nginx today
<lifeless> is software ever finished ?
<imbrandon> heh true, i feel its not, just like websites, you're never "done"
<imbrandon> lifeless: ok let me rephrase, putting nginx in a state that others can make use of, hopefully :)
<imbrandon> :)
<lifeless> \o/
<lifeless> of course, have to ask why that matters, nginx after all (/wink)
<imbrandon> heh
<lifeless> I wish they would choose less disruptive defaults
<lifeless> I keep having to support folk who get bitten by their Vary/compression defaults.
<imbrandon> i'm trying out some intresting common lib approach , take a peek
<imbrandon> yea
<imbrandon> i reset those right off
<imbrandon> http://bazaar.launchpad.net/~imbrandon/charms/precise/nginx/trunk/files
<imbrandon> about to dump the default configs in /templates and use preg_replace on ##place-holder-values##
<imbrandon> is my next step
<imbrandon> lifeless: http://bazaar.launchpad.net/~imbrandon/charms/precise/nginx/trunk/view/head:/templates/nginx.conf
<imbrandon> thats what i use as the base, seems to work well
<lifeless> do you export the log data as relations ?
<imbrandon> not yet, but i had considered it
<imbrandon> for like loggy or something to be used
<imbrandon> i'm just now adding in the shared mount stuff
<imbrandon> for nfs, before it did not have that, actually this was all one large charm before, trying to find the break points into smaller charms is the key
<imbrandon> as in juju deploy website , and the one charm did EVERYTHING, thus breaking it out into nginx , website, database , logs , mount etc etc
<imbrandon> actually, i just realized something ... /me goes to shuffle a little code ...
<lifeless> imbrandon: right
<lifeless> I'm poking at logstash atm
<hazmat> lifeless, nice, think there is an extant attempt at it, but its not very good.. http://jujucharms.com/search?search_text=logstash
<lifeless> yup
<lifeless> (thanks - I did already know but I appreciate the hint anyhow)
<imbrandon> http://15.185.225.6/
<imbrandon> heh working state now
<imbrandon> few more cleanups and it will be ready for review i think
<imbrandon> actually might be now /me looks
<imbrandon> hrm nope, one more thing
<lifeless> hazmat: do you have any idea on the relation-get thing ?
 * hazmat scrolls back
<lifeless> hazmat: in my start hook, I need to check that a specific relation exists.
<hazmat> lifeless, relation-ids
<hazmat> lifeless, allows you to check for instances of a relation given a relation name
<lifeless> (and then pull out the data from it)
<hazmat> which can then be passed to relation-get/relation-set/list
<hazmat> it was mainly meant for upgrade contexts, but its useable in any non relation hook context
<hazmat> imbrandon, lxc is not a hack
<lifeless> hazmat: so, if [ -n "`relation-ids zookeeper`" ] ... ?
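Put together, the pattern hazmat describes might look like this in a start hook. This is only a sketch: the hook tools are the pyjuju-era `relation-ids`/`relation-list`/`relation-get` commands, the exact flags are from memory, and the opentsdb/zookeeper names are taken from the conversation.

```shell
#!/bin/sh
# start hook sketch: do nothing until a zookeeper relation is established.
# relation-ids prints one id per established relation of the given name.

zk_address() {
    rid=$(relation-ids zookeeper | head -n1)
    [ -n "$rid" ] || return 1
    # outside a *-relation-* hook, the relation id and the remote unit
    # have to be passed explicitly (flags assumed from the discussion)
    unit=$(relation-list -r "$rid" | head -n1)
    relation-get -r "$rid" private-address "$unit"
}

if zk_host=$(zk_address); then
    echo "zookeeper at $zk_host, starting opentsdb"
    # service opentsdb start
else
    echo "zookeeper relation not ready yet"
fi
```

If no relation exists yet, `relation-ids` prints nothing and the hook exits cleanly without starting the service.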
<hazmat> although the juju implementation of local provider could perhaps use that moniker
<imbrandon> for what we're using it for it is, it's not a true virtualization container
<lifeless> hazmat: lxc is -very- hairy at the moment. Even with all the work we're putting into it.
 * hazmat remembers the beatles
<hazmat> its getting better all the time ;-)
<imbrandon> hehehe
<hazmat> lifeless, very hairy? its not root secure, what in particular is hairy?
 * hazmat checks rel-id syntax
<imbrandon> i still think that a true xen/vbox/umode container would be better
<lifeless> hazmat: it depends on the entire kernel being properly namespaced, we've had a raft of bugs where that isn't the case.
<lifeless> hazmat: e.g. powering off the machine from within lxc
<lifeless> hazmat: attempting to insmod 32-bit modules into a 64-bit kernel from within a container.
<hazmat> lifeless, indeed, but some of those are mitigated with app armor
<lifeless> hazmat: yes, but think structurally.
<hazmat> lifeless, yes.. there's a lot of surface area there
<hazmat> and there are many things not properly namespaced
<lifeless> hazmat: the dependency stack for lxc to be secure in and of itself, is huge. And *known* (not speculated) to have un-upgraded non-namespace-aware code.
<hazmat> but it has been done correctly
<hazmat> with openvz
<lifeless> has anyone written a json api presenting ec2-like semantics (just enough for juju in particular) for either libvirt or lxc itself ?
<hazmat> lifeless, sure.. they call it openstack
<hazmat> lifeless, i think that's code looking for a problem to solve..
<hazmat> it may be conceptually nice
<hazmat> but its overkill imo
<hazmat> we do need to refactor the lxc provider
<hazmat> but thats mostly to just drop libvirt
<hazmat> since lxc in precise does network
<hazmat> and to switch to lxc ubuntu cloud img
<lifeless> hazmat: it would provide an avenue to have smaller code base.
<hazmat> so we can init with cloud-init the same as other setups
<hazmat> lifeless, perhaps.. i'm not so sure
<lifeless> which is a good thing; would let the local provider just be an api consumer, the hairy stuff could then reuse bits of openstack, or standalone, as appropriate.
<hazmat> lifeless, getting to cloud-init and the code is already minimal
<hazmat> lifeless, and much less software into the upstream stack
<lifeless> hazmat: I suspect there are a large raft of optimisations you're not well placed to make at the moment.
<hazmat> lifeless, wrt to lxc?
<lifeless> hazmat: such as layered fs's on top of the base image
 * hazmat nods
<lifeless> which a local provider daemon could take care of more easily.
<hazmat> btrfs snapshots would be butter :-)
<lifeless> (which btw would shrink the footprint substantially for whatever is in the 'image')
<lifeless> hazmat: uhhg, inappropriate ;)
<hazmat> well not for code, but operationally it would be much nicer
<hazmat> that's one of my concerns about moving to lxc everywhere
<hazmat> if we have to wait for machine bootstrap and then download an lxc image all over again
<hazmat> it would be nicer if we could use the root fs as a base in a stable secure fashion
<lifeless> right, which you can, if you get intimate with lxc
<lifeless> which makes juju less portable
<lifeless> -> break the dependency, provide a crisp clear boundary, and let folk like imbrandon write local ones for their OS.
<hazmat> lifeless, at the moment i don't really regard local provider as portable anyways..
<lifeless> and the stuff on the other side of the boundary can get as awful as needed.
<hazmat> much less juju
<hazmat> definitely we want to target the latter towards some notion of portability, but how that stretches is still up in the air
<lifeless> sure; my main point is to separate the concerns.
<lifeless> When you describe juju, adding 'and it knows how to xyz local containers' in doesn't fit with the main thrust.
<hazmat> lifeless, if its using cloud init for containers, then the api wrapper around lxc is going to be about the same as an api wrapper around some other provider that's facilitating the same, at least till it goes deep on features, in which case yeah.. it would be nicer to have it external
<lifeless> exactly.
<hazmat> lifeless, my concern on the latter is how close it may be getting to something like openstack, ie what's the scope limitation on that
<lifeless> the difference between 'this is how you call lxc command line' and 'this is an API you can use', is that an API you can use can be used from within a container.
<lifeless> which solves your zookeeper-outside issue
<lifeless> its similar to openstack in the same way MAAS is similar to openstack.
<hazmat> lifeless, that's not really an issue re outside zk
<hazmat> lifeless, that was by gustavo's choice..
<hazmat> originally lxc for juju was modeled as machines instead of units
<hazmat> er. implemented as a contributed patch by SpamapS
<lifeless> yes
<lifeless> I spent some time tweaking it
<lifeless> anyhow
<lifeless> we could go around this indefinitely.
<lifeless> I appreciate there are some choices in here; some of them make less sense to me than others, and there isn't sufficient explanation for me to be able to agree or disagree with the /why/.
<hazmat> lifeless, if the api is around i'd be game for incorporating, but its not something we can do ourselves given priorities atm
<lifeless> Sure, never suggested you should :)
<lifeless> I figured the channel might know if someone somewhere had done one.
<lifeless> hazmat: what makes me sad right now is the ip using patch was rejected.
<lifeless> So, I'm running a fork, and probably will be forever.
<hazmat> lifeless, well...
<hazmat> lifeless, openstack native provider would work just as well
<lifeless> hazmat: it uses ip addresses
<lifeless> exactly as I proposed.
<hazmat> lifeless, exactly..
<lifeless> just a different provider.
<lifeless> So I don't understand why its acceptable in one provider and not another.
<hazmat> lifeless, because in the ec2  case,  the most common use is public, and public addresses there are...
<hazmat> i dunno.. you'd already convinced me ;-)
<lifeless> yah
<hazmat> bedtime for me, one more merge to do
<imbrandon> gnight hazmat ( when ya head out )
<imbrandon> ok headed to sleep
<imbrandon> SpamapS: https://bugs.launchpad.net/charms/+bug/994699
<_mup_> Bug #994699: Charm Needed: Nginx <Juju Charms Collection:Fix Committed> < https://launchpad.net/bugs/994699 >
<imbrandon> its in a functioning state, would LOVE some input / patches / code snippets / review / etc etc, i think its ready for more than me to work on now, basics are laid
 * imbrandon heads to sleep
<imbrandon> ( note: I want to make setting the values for template::replace more elegant but i need fresh eyes in the morning maybe )
<SpamapS> imbrandon: I'll take a gander
<SpamapS> lifeless: btw, imbrandon told you wrong. You can access any relation-* command from any other hook. You want the relation-ids command.
<SpamapS> lifeless: if you are not in a *-relation-* hook, you need to be more explicit is all
<lifeless> SpamapS: thanks, hazmat set me in the right direction, though I haven't dug around yet to figure it all out
<SpamapS> ok good
<SpamapS> I skimmed the backscroll but missed that bit I guess
<lifeless> seems a shame I can't just say relation-get relation=zookeeper
<lifeless> 14:31 < lifeless> hazmat: in my start hook, I need to check that a specific relation exists.
<lifeless> 14:32 < hazmat> lifeless, relation-ids
<SpamapS> the problem is zookeeper might have more than one thing related to it
<lifeless> etc
<lifeless> SpamapS: from within the context of a hook on opentsdb ?
<lifeless> SpamapS: how is that different to being within the context of the zookeeper-changed hook of opentsdb ?
<SpamapS> yeah, you might say 'add-relation opentsdb zk1' and 'add-relation opentsdb zk2' ..
<SpamapS> lifeless: whether thats valid or a good idea is not for juju to say. but in many cases (mysql db relation) its totally valid
<SpamapS> lifeless: when you're in a *-relation-* hook, the relation ID is implied, you have a $JUJU_RELATION_ID even.
<lifeless> mmm
<SpamapS> lifeless: but if you're in any other context, you need to figure out what relation id you want to inform/inspect
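SpamapS's point that one relation name can sit in front of several relations (the zk1/zk2 or multiple-mysql case) means hook code outside a `*-relation-*` hook generally loops over every id. A sketch, again against the pyjuju-era hook tools with flags assumed from this conversation:

```shell
#!/bin/sh
# print the private-address of every remote unit related under one
# relation name, e.g. `all_related_hosts db` when several mysql
# services are related to this charm

all_related_hosts() {
    for rid in $(relation-ids "$1"); do
        for unit in $(relation-list -r "$rid"); do
            relation-get -r "$rid" private-address "$unit"
        done
    done
}
```

Inside a `*-relation-*` hook none of this is needed: the id is implied (exported as `$JUJU_RELATION_ID`), so a bare `relation-get private-address` works.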
<lifeless> so this needs to be written up somewhere
<lifeless> its really hard to discover let alone work with
<SpamapS> yeah it only landed in early April
<SpamapS> its used in several charms already but needs to be an explicit chapter in the docs IMO
<lifeless> what would be most awesome would be for someone to figure out the tasks charm writers need to accomplish, and make that really easy.
<SpamapS> like "Advanced Relations"
<lifeless> well, pretty much every charm I can think of needs a start that starts if the relation is already there, apparently.
<SpamapS> hm good point
<SpamapS> lifeless: I suspect we'll find declarative ways to do all of this before too long
<SpamapS> lifeless: Since juju is an event engine, we should be able to write a pretty simple state machine to drive charms
<lifeless> some simple charms might start when there are no relations, but I suspect they are all done: DB's, proxies and web servers.
<lifeless> anything *interesting* needs data to act on :)
<SpamapS> lifeless: indeed, tho thus far I know of no charms which actually do make sure they have their relations before start
<lifeless> SpamapS: well, opentsdb *can't* start until it has it :)
<lifeless> SpamapS: ditto hbase
<lifeless> hbase has to have zk and hdfs to do anything; pretty sure it only starts when all the configs are there, and start does $nothing
<SpamapS> lifeless:  [ -f /etc/opentsdb/required.thing ] && service opentsdb start || echo not ready yet
<lifeless> SpamapS: there is nothing written to disk though :)
<lifeless> its all in zk
<SpamapS> lifeless: the charm would keep that state
<SpamapS> lifeless: the relation-ids+relation-get approach is just a way to do it w/o a file on disk
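The file-on-disk variant SpamapS alludes to is tiny: the relation hook records that its data has arrived, and the start hook keys off that. A sketch (the flag path is just his example name, and the function names are made up here):

```shell
#!/bin/sh
# relation hook writes a flag once the zookeeper data has arrived;
# the start hook refuses to start the service until the flag exists
READY_FLAG=${READY_FLAG:-/etc/opentsdb/required.thing}

mark_ready() {             # call at the end of zookeeper-relation-changed
    mkdir -p "$(dirname "$READY_FLAG")"
    : > "$READY_FLAG"
}

start_if_ready() {         # the body of the start hook
    if [ -f "$READY_FLAG" ]; then
        echo "starting"
        # service opentsdb start
    else
        echo "not ready yet"
    fi
}
```

The relation-ids approach carries the same information without leaving state on disk, which is why SpamapS frames the file as just one option.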
 * SpamapS pokes at a nice generic monitoring interface
<SpamapS> EvilMog: btw, I couldn't get john+mpich2 to work.. just seems to error out all over the place. :-/
<EvilMog> yeah
<EvilMog> its a pain in the ass to get going
<EvilMog> only code I ever get to work is the older zeroshell patch
<EvilMog> but the jtr authors claim it works
<EvilMog> may have to go john openmpi
<EvilMog> I may get them to join this channel and chat with you
<SpamapS> EvilMog: charm "works" so you can try it too.. it just fails with assertions when you actually try to do anything
<EvilMog> yeah
<EvilMog> I get the same issue with the recent code
<EvilMog> which is why I may try it with openmpi
<EvilMog> instead of mpich2
<SpamapS> makes sense
<EvilMog> http://openwall.info/wiki/john/parallelization
<SpamapS> yeah I used that
<SpamapS> and the 10.04 guide you posted
<EvilMog> one common problem with jtr + mpich is not having clock synch'd, and not having ssh keys to the whole cluster
<EvilMog> yeah
<EvilMog> and the hosts files
<SpamapS> I have ssh to whole cluster, thats easy w/ juju
<SpamapS> clock skew might have been an issue, I did not check
<EvilMog> I know code that works, but its older base
<EvilMog> http://www.bindshell.net/tools/johntheripper.html
<EvilMog> http://www.bindshell.net/tools/johntheripper/john-1.7.3.1-mpi8.tar.gz
<EvilMog> specifically
<EvilMog> and that one I know works with mpich2
<EvilMog> bert@ev6.net is the guy you want to talk to
<EvilMog> he wrote the original mpi code
<EvilMog> btw I really appreciate it
<EvilMog> ftp://ftp.openwall.com/pub/projects/john/contrib/parallel/mpi/MPIandPasswordCracking.pdf
<EvilMog> again thats for the older bindshell implementation though
<SpamapS> EvilMog: cool.. I'll poke at it another time when I'm not super tired
<EvilMog> no worries
 * SpamapS passes out
<EvilMog> and no rush
<EvilMog> my new cluster won't be online for another month
<EvilMog> the other option is https://github.com/ccdes/clortho/blob/master/README
<_mup_> juju/trunk r552 committed by kapil@canonical.com
<_mup_> [trivial] remove old docs tree, docs are now @ lp:juju/docs
<melmoth> hola! I'm playing with maas. juju bootstrap fires up a node, but my maas machine cannot resolve node-000077770001.local
<melmoth> it can resolve node-000077770001 though.
<melmoth> but when i do a juju status, it tries to connect to the .local name, and as it cannot resolve it, i'm stuck
<melmoth> anyone got an idea what i might have been doing wrong ?
<hazmat> melmoth, maas is returning the .local name
<melmoth> yeah. I try to remove it manually with the web page that let you edit nodes names.
<hazmat> and it's not resolvable to your client.. it really shouldn't be returning an mdns name
<melmoth> seems to work
<hazmat> melmoth, cool
<koolhead11> hazmat: that doc request was pending in the queue for ages :)
<hazmat> koolhead11, yeah.. the charm reviewers queue worked out so well, i put one together for core, and spent a good chunk of yesterday clearing it out
<hazmat> cleared out like 12 branches yesterday
<hazmat> down to 6, mostly mine though, http://jujucharms.com/tools/core-review-queue
<hazmat> now to work through the openstack branch
<hazmat> koolhead11, the new doc as a separate branch should help make doc changes go much, much faster (based on evidence to date)
<koolhead11> hazmat: thanks.
<SpamapS> hazmat: go man go! Nice job on the merges the last 24 hours :)
<hazmat> SpamapS, thanks
<SpamapS> james_w: Hey, I'm working on enhancing nagios, nrpe, and monitoring in general. Did you ever go much further than lp:~james-w/charms/precise/nagios-nrpe-server/trunk ?
<james_w> SpamapS, not outside my head
<tedg> I think something weird is going on, but I'm really not sure.
<SpamapS> james_w: ok, I have some solutions for your nrpe.cfg issues
<tedg> It seems like the juju agent never starts on a machine that is numbered 5
<SpamapS> tedg: Are you saying "There's something happenin here, and what it is aint exactly clear" ?
<tedg> It always gets to the state where the instance is running but the agent is not.
<tedg> And it always seems to be machine #5
<tedg> Hmm, maybe because I've continually terminated four?
<james_w> tedg, lxc?
<tedg> SpamapS, Not sure what I'm saying... :-)
<tedg> james_w, EC2
<tedg> Do other folks use "terminate-machine" or am I alone there?  :-)
<tedg> I mean, and expect to create nodes again, not just as a final clean up.
<hazmat> SpamapS, it helped hugely to put up a queue page for the core
<_mup_> juju/gozk r20 committed by gustavo@niemeyer.net
<_mup_> Mentioned that the package has moved.
<imbrandon> hazmat: any idea why my nginx isn't showing in the charm queue ( i'm positive it's something i forgot to do, and not a queue problem, just not sure what )
<imbrandon> ahh
<imbrandon> and i take that back
<imbrandon> it is, i was just too fast
<hazmat> imbrandon, 10m ;-)
<imbrandon> :)
<imbrandon> $config = template::read('nginx.conf');
<imbrandon> template::write('/etc/nginx/nginx.conf',$config);
<imbrandon> that is just too sexy, now if i can get the rest of the charm so :)
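imbrandon's helper is PHP (template::read/write plus preg_replace on ##place-holder-values##); the same idea can be sketched in plain shell with sed. The function name and the nginx paths in the usage note are illustrative, not from his charm:

```shell
#!/bin/sh
# render_template src dest key=value...: replace each ##key## in src
# with its value and write the result to dest

render_template() {
    src=$1; dest=$2; shift 2
    out=$(cat "$src")
    for kv in "$@"; do
        key=${kv%%=*}
        val=${kv#*=}
        # | as the sed delimiter, so values may contain / (but not |)
        out=$(printf '%s\n' "$out" | sed "s|##${key}##|${val}|g")
    done
    printf '%s\n' "$out" > "$dest"
}
```

Usage might look like `render_template templates/nginx.conf /etc/nginx/nginx.conf worker_processes=4`.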
<_mup_> Bug #1020245 was filed: "terminate-machine" drops two machine numbers <juju:New> < https://launchpad.net/bugs/1020245 >
<jcastro> imbrandon: can you check your mail and see if HP sent you anything wrt. the HP cloud accounts?
<imbrandon> jcastro: sure one sec
<imbrandon> nothing that search is turning up
<jcastro> imbrandon: yeah I think I'll need to mail all of you
<imbrandon> jcastro: why whats up ?
<jcastro> I think they enabled them
<jcastro> but I need you to check
<jcastro> the free 3 months thing
<imbrandon> oh ,i hope so, /me has had instances spun up for about 10 days
<imbrandon> heh
<imbrandon> jcastro: my bill is still -0-'d out so i'm assuming its on
<jcastro> I wonder when they turned it on
<imbrandon> not sure, yea i kinda assumed it was when u told us about it
<imbrandon> oopsie :)
<jcastro> hazmat: what's the scoop on the openstack native provider?
<hazmat> jcastro, i was going to review today
<hazmat> but got derailed
<jcastro> hazmat: cool
<hazmat> jcastro, its the last one in the queue
<hazmat> jimbaker, ping
<jimbaker> hazmat, hi
<jcastro> hazmat: that works out, they turned on the free accounts today, so this should give us a nice pool to test from
<hazmat> jcastro, nice
<hazmat> jimbaker, you've got an approved branch ready to land fwiw
<jimbaker> hazmat, sounds good
<jimbaker> hazmat, still catching up after being sick for 3 days
<hazmat> jimbaker, i'd hold off for an hr though, i've got a trunk issue that i need to fix
<jimbaker> hazmat, ok, just tell me when you're done w/ that
<SpamapS> hazmat: have we figured out the natty/oneiric build issues yet?
<hazmat> SpamapS, re format2.. no
<hazmat> i asked bcsaller to look at it, but not sure if there's any progress
<SpamapS> imbrandon: is OMG on precise or oneiric?
 * negronjl is out to lunch
<pindonga> hi, trying to get juju running on openstack, and I get this: ERROR Invalid host for SSH forwarding: ssh: Could not resolve hostname server-13056: Name or service not known... any ideas?
<hazmat> pindonga, this should be a faq
<hazmat> pindonga, depending on your maas config it may not hand out addresses routable from the client
<hazmat> pindonga, afaicr you can set the name in maas directly
<pindonga> hazmat, pm
<hazmat> its really a maas setup question
<hokka> is it possible to have relationships between services in different environments?
<SpamapS> hokka: not yet no, but thats definitely something we'd like to do
<SpamapS> hokka: you can "fake it" with subordinate charms
<SpamapS> hokka: but you have to manually bring the data from one env to another unless you get really clever :)
<m_3> jcastro: yo
<SpamapS> m_3: good morning sunshine
<m_3> SpamapS: g'day mate
<m_3> SpamapS: and now I can say that and actually know _which_ day too :)
<SpamapS> Friesmurday ?
<SpamapS> Or Sunthednesday
<SpamapS> m_3: trying to tackle the tricky art of a generic monitoring interface
<SpamapS> nagios/icinga are almost *too* powerful for this :-P
<m_3> nice
<m_3> take a peek at sensu
<hazmat> m_3, i don't understand all the hype on sensu
 * m_3 likes the possible integration with an underlying openstack install
<hazmat> its rabbitmq..
<m_3> right
<hazmat> and lacks a decent frontend afaik
<SpamapS> Nagios has never had a decent frontend
<hazmat> SpamapS, its like saying cassandra.. its the new monitoring hotness and toss in some adapters
<SpamapS> somehow dominated everybody else with s***ty 1993 style HTML tables
<hazmat> SpamapS, icinga FTW
<hazmat> jk
<SpamapS> I wonder how much of what I'm doing for nagios will translate to icinga
<hazmat> sensu basically tosses some adapters onto rabbitmq.. and now people treat it like the perfect monitoring solution..
<m_3> thin, scales, adaptable... what's not to like?
<hazmat> m_3, but what's it do?
<hazmat> its a log transport
<m_3> what do you really need a monitoring soln to do?  I want custom metrics
<SpamapS> hazmat: sounds pretty good to me
<SpamapS> this polling stuff is for the birds
<m_3> that get where I want them to go... the rest I can handle with other stuff
<hazmat> sigh.. i could write an amqp adapter for collectd and be equiv
<hazmat> SpamapS, its still polling
<m_3> simple composable tools
<hazmat> er.. not polling, pushing
<SpamapS> hazmat: you could, but you didn't, and they did.. right? ;)
<m_3> hazmat: yeah, true
<m_3> community of plugins/adapters
<SpamapS> collectd scares me
<m_3> well, the _start_ of one :)
<SpamapS> 49 C libs many of which are really crappy
<SpamapS> anyway, what you really want is not a way for your service to say "poll this" but "record this"
<SpamapS> *how* you record that is up to the monitoring system
<m_3> hazmat: I prefer "publishing"... it's lighter weight :)
<SpamapS> hazmat: http://collectd.org/wiki/index.php/Plugin:AMQP
<m_3> hot-n-sour soup style... just dump it in... anybody interested can pick it up
<m_3> sorry for the lag... my irssi client's stateside
<m_3> dave cheney and I had a hilarious interchange... he's down the street so we had high latency... possibly even round-the-world routes :)
<hazmat> SpamapS, exactly.. and avoid the overhead of a ruby processes ;-)
<SpamapS> hazmat: +1
 * m_3 ducks
<hazmat> jimbaker, can you have a look at this trivial.. fixes trunk http://paste.ubuntu.com/1072207/
<hazmat> the new format v1/v2 tests were pretty exact on output (good thing), but the validate branch allows at least some of them to be set
<hazmat> re bools and floats
<hazmat> i'm tempted to back out the whole validate branch though..
<hazmat> but considering they couldn't be set previously err, still seems like a win
<hazmat> er. set from the cli params
<hazmat> bcsaller, ^
<jimbaker> hazmat, ahh, that's not good to have failing tests in trunk
<jimbaker> hazmat, even if they were very picky about what the behavior was at the time. in any event, the trivial looks fine to me
<jimbaker> +1
<bcsaller> that looks fine to me as well
<hazmat> jimbaker, yeah.. the other branch, the last one merged in the stack yesterday, predates the format v2 tests
<hazmat> thanks guys
<_mup_> juju/trunk r553 committed by kapil@canonical.com
<_mup_> [trivial] cli config validation compatibility with format v2 [r=bcsaller, jimbaker]
<hazmat> jimbaker, trunk is green if you want to go ahead with the status-expose
<jimbaker> hazmat, thanks
<hazmat> jimbaker, bcsaller incidentally i also put together one of those review queue pages for pyjuju.. http://jujucharms.com/tools/core-review-queue
<bcsaller> nice
<hazmat> bcsaller, if you can merge trunk.. and repropose your branch, i can have a look later this evening
<bcsaller> yeah, cleaning up the others too, there should be more than one
<jimbaker> hazmat, did we ever find out about the build problems on oneiric/natty?
<hazmat> bcsaller, ^?
<hazmat> dog walk bbiab
<bcsaller> there are other branches going into review
<jimbaker> my one small attempt to replicate this (by launching a small instance for oneiric) simply suggested that this seemed to be a general problem. but not for the format stuff
#juju 2012-07-03
<jimbaker> which ran fine
<jimbaker> SpamapS, are we still marking bugs "fix released" when merged into trunk?
<jimbaker> or should that be "fix committed"?
<m_3> jimbaker: I think merged => fix is released
<m_3> but I don't know if we have anything special for the juju release process :)
<jimbaker> m_3, sounds good, i'll just wait on SpamapS for the final say here :)
<hazmat> jimbaker, fix released is what we've been doing
<hazmat> jimbaker, a separate distro task tracks the other
<hazmat> bcsaller, others for?
<bcsaller> the lxc stuff and the --test option support
<hazmat> bcsaller, i'll probably leave the others for review another day, but i can do the subordinates now
<m_3> lunch
<imbrandon>  ohhh whatcha fixin m_3 ?
 * imbrandon wants extra bacon :)
<imbrandon> I like to buy the world a home , and furnish it with love ... grow apple trees and honey bees, and snow white turtle doves. ./~
 * imbrandon has no idea why that song , from a commercial that air'd before he was born , is stuck in his head.
<misto> is juju comparable to salt stack ?
<imbrandon> misto: kinda, salt stack is more infrastructure management, juju orchestrates services and how they interact
<m_3> imbrandon: had noodles :)
<misto> so you run juju from your dev box and orchestrate your ec2 ?
<imbrandon> juju is an event engine that lets you do different things for events, including managing infrastructure but also a lot more
<imbrandon> m_3: :)
<imbrandon> misto: thats part of it yes
<imbrandon> but not really ec2
<imbrandon> more about the services your running ON ec2 or RAX etc
<imbrandon> the services are king , not the infrastructure, you dont have to care about that
<misto> I am trying to understand which solution is best to manage entire services stack on amazon web services
<misto> saltstack, juju, or bare cloud formation init scripts
<imbrandon> misto: think about this example "juju deploy myweb", i don't have to care that it set up a new user on the db with correct permissions, made sure the db server was tuned for high load, that the webserver was configured correctly, or that it can scale to 1000 req a second with one command
<imbrandon> misto: well the answer is ... yes
<imbrandon> misto: because all of those tools do different things that may somewhat overlap
<imbrandon> misto: but imho to do EVERYTHING as you state juju will be what you may be looking for
<imbrandon> as salt won't do service orchestration and cloud init is too bare
<imbrandon> but like i said it's kind of an apples-to-oranges comparison
<imbrandon> misto: in reality you may end up with something like salt stack or puppet manifests inside juju charms
<imbrandon> :)
<misto> and that is the part that confuses me
<misto> does juju monitor the health of the instances, kinda like cloudwatch? and then spawn new instances?
<misto> or does it have a recipe it follows to spawn an instance, and then puppet/saltstack goes from there?
<imbrandon> it can, it doesn't by default tho
<imbrandon> think of it like init.d for the cloud
<imbrandon> its an event engine you can make do anything really
<misto> I have to see a charm
<imbrandon> check out a few jujucharms.com has links to all of them
<misto> tnx
<imbrandon> np, yea its a lot to wrap your head around but there really isn't much out there that compares
<imbrandon> so its hard to explain sometimes :)
<misto> the part that is appealing is that is company-backed
<imbrandon> :)
<imbrandon> yup juju is backed by canonical and the community both ( in fact I'm community and m_3 is company as far as active on IRC the last hour or so heheh ) although that distinction is rarely needed we all work to the same end for the most part
<misto> I came across juju from go
<imbrandon> ahh cool
<imbrandon> yea there was a presentation by gustavo iirc at google io yesterday or day before
<imbrandon> iirc
<misto> yep, friday
<imbrandon> m_3: i think thats a good comparison , you ? juju is a bit like init.d/upstart for the cloud , heh
<misto> is ensemble part of juju ? or is another thing ?
<m_3> imbrandon: dunno
<imbrandon> ensemble became juju
<m_3> misto: ensemble is the old name of the project.. been renamed to juju
<imbrandon> it was renamed
<misto> gotcha
<imbrandon> m_3: the thinking ( in my head ) is when different events happen like network coming online then upstart fires script <blah> or hook <blah>
<imbrandon> heh
<imbrandon> but i guess its more than that as there is the analog to dbus talking for relations
<imbrandon> hrm
<imbrandon> heh
<m_3> imbrandon: yeah, the key is the interdependency... which I guess upstart has with particular events... never made that connection though
<imbrandon> yea its a bit of a stretch, but kinda
<m_3> little more of a notion of handshaking and conversation with relations... not just waiting on them
<imbrandon> yea
<m_3> i.e., juju has a little more...
<imbrandon> right, kinda what i meant about the dbus backtalk
<imbrandon> but yea then its another piece
<imbrandon> m_3: i dont think bacon and noodles would mesh well :) /me had McDonalds for lunch , gonna regret that one later
<misto> from the faqs:  It is not yet ready to be used in production.
<imbrandon> misto: depends on your value of production, Mark prob said it best in his last email ( that I should add to the faq ) but like www.omgubuntu.co.uk is run by juju on EC2 that enjoys 7 million+ pageviews a month
<imbrandon> let me grab a link to how he put it ...
<imbrandon> misto: https://lists.ubuntu.com/archives/juju/2012-June/001722.html
<SpamapS> misto: basically, before using it in production, read all of these bugs: https://bugs.launchpad.net/juju/+bugs?field.tag=security and https://bugs.launchpad.net/juju/+bugs?field.tag=production
<SpamapS> misto: you need to consider them, and workaround all of them before using it
<imbrandon> and omg is a case where the sysops are a team of 1
<imbrandon> :)
<misto> the bootstrap node need high availability :D
<misto> can you have more than one bootstrap node in different regions ?
<SpamapS> misto: well for EC2, you should have two bootstrap nodes anyway (one in each region)
<SpamapS> misto: its actually not that hard to get two regions talking to one another. Just that there's nothing built in, so you'll have to write your own custom charm to do it.
<imbrandon> SpamapS: i think charm getall should dump the charms into a series subdir, my mode of deployment recently has been "charm getall /var/lib/charms && mkdir /var/lib/charms/precise && mv /var/lib/charms/* /var/lib/charms/precise && export JUJU_REPOSITORY=/var/lib/charms && juju deploy local:nginx"
<imbrandon> that would remove all that moving crap
<SpamapS> imbrandon: err, charm getall will put them wherever you tell it to.. so...
<SpamapS> mkdir -p /var/lib/charms/precise && charm getall /var/lib/charms/precise ?
<imbrandon> right but i am thinking when there is more than one
<imbrandon> like precise and quantal
<imbrandon> then i can still say get all and it actually gets all
<imbrandon> not just the current series :)
<imbrandon> that and if the .mrconfig is in the precise dir
<imbrandon> then juju complains that it cant make sense of it
<SpamapS> oh thats
<SpamapS> crazy
<SpamapS> get a series at a time please :)
<imbrandon> heh
<SpamapS> imbrandon: the .mrconfig will be ignored by trunk
<imbrandon> k
<SpamapS> that was merged today IIRC
<imbrandon> rockin
<SpamapS> tho frankly mr is crap
<imbrandon> yea i tried to make it get my git repos too
<SpamapS> before mbp left Canonical he offered to hack up a bzrlib thing that used 1 SSH connection to do them all
<imbrandon> it barfed
<imbrandon> i added [something else] git clone https://sdfsdfsdf.git at the end
<imbrandon> it did not like :)
 * imbrandon didn't read the mr docs tho for full disclosure, just "tried" it
<imbrandon> i was like ohhh cool .... damn :(
<imbrandon> SpamapS: http://paste.ubuntu.com/1072617/
<imbrandon> note machine 0
<imbrandon> that do that to you as well ?
<SpamapS> imbrandon: yes
<SpamapS> same problem unfortunately
<imbrandon> kk
<SpamapS> imbrandon: hp or rax?
<SpamapS> I really want to try rax
<imbrandon> hp
<SpamapS> since theirs is essex
<imbrandon> yea i need to dig out my rax credentials
<imbrandon> and make sure they are still good
<imbrandon> i havent used them in weeks
<imbrandon> SpamapS: is the os provider in trunk now ?
<misto> let's see if I am able to setup tomcat on my localhost with juju
<SpamapS> imbrandon: no I believe it has some rough edges in testing to get right
<imbrandon> k
 * SpamapS wonders what level of hell the demon that designed nagios's config structure came from
<imbrandon> SpamapS: http://paste.ubuntu.com/1072692/
<imbrandon> catches 503 , 404, etc etc etc
<SpamapS> nice
<imbrandon> you can even specify specific ones like
<imbrandon> error_page 404 = @404fallback
<imbrandon> etc
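(For context, the error_page directives imbrandon is quoting slot into an nginx server block roughly like this; the fallback names and response bodies here are illustrative, not taken from his charm:)

```nginx
server {
    # route specific error codes to named fallback locations
    error_page 404 = @404fallback;
    error_page 500 502 503 504 = @fivexx;

    location @404fallback {
        return 404 "not found\n";
    }
    location @fivexx {
        return 503 "backend unavailable\n";
    }
}
```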
<SpamapS> alright.. hmm.. beginnings of a generic monitoring interface taking shape
<SpamapS> just need a second monitoring implementation to see if its feasible.. hm
<SpamapS> later.. time for sleep
<imbrandon> SpamapS: would the newrelic ones work ? or too much centered on external svc
<imbrandon> ttyl
<hazmat> SpamapS, just got activated by rax ostack beta
<hazmat> took about a week
<sanderj_> Does anyone know about stackops for deploying openstack, compared with ubuntu juju charms?
<imbrandon> sanderj_: without putting words in his mouth I think adam_g may have the most insight into that from what i've noticed just hanging round
<sanderj_> imbrandon, seems like he is away.
<imbrandon> could be so, i havent seen him active today, just know he works with juju and openstack both pretty closely
<imbrandon> not that he is the only one, just first that came to mind
<imbrandon> i may help with juju questions but know next to nothing about stackops
<imbrandon> sooooo :)
<sanderj_> Ah, ok.
<sanderj_> I'm just wondering if there is any downside in choosing stackops..
<imbrandon> ahh now that i could not tell ya :)
<sanderj_> ok
<imbrandon> there are others here that could though if ya idle long enough
<imbrandon> i'm sure some will pop iin
<sanderj_> Ok, i'll wait for someone
<imbrandon> some days you cant get a word in edgewise, some days its a bit slow :)
<imbrandon> but yea there are a few around that should have at least a little insight
<imbrandon> cjohnston: heya
<cjohnston> yo
<imbrandon> wanna try the openstack provider on RAX ? got a shiny new nginx charm ( that should match what we set up the other day manually )
<imbrandon> just pop in some creds and bootstrap , deploy nginx with juju take a few copy and pastes for our notes and then use it if you want if not kill the env
<imbrandon> :)
<cjohnston> maybe sometime later?  I'm in the middle of a few things right now
<imbrandon> sure sure
<imbrandon> when you got time hit me up
<hazmat> sanderj_, so there are many vendors with their own ostack distribution, doing that to me at least means getting away from upstream and becoming dependent on the vendor. the juju charms track upstream closely, we perform per-upstream-commit testing on multi-node bare metal with openstack. i haven't used stackops so i couldn't really say much about them, outside of it looking like they have their own distribution. it also doesn't look like they
<hazmat> document their product offering or pricing, so rather hard to say. if you want a commercial install setup, canonical sells a fixed-price jumpstart for a 20 node installation... really depends on what you're looking for: free and easy to install, commercial support, commercial features, automated mass installer, custom consulting, etc.
<sanderj_> hazmat, I read somewhere that someone doubted stackops will be able to release security upgrades just as ubuntu will.. every 6 months.
<sanderj_> But that's a wild guess I belive.
<imbrandon> well ubuntu releases security updates as needed, not just every 6 months
<imbrandon> we have a new stable release every 6 months tho
<hazmat> and newer releases of openstack specifically will be available/supported on precise/12.04 LTS
<imbrandon> ( that means 5 years of security support minimal )
<imbrandon> time for breakfast, bbiab
<imbrandon> btw moins hazmat
<jcastro> sanderj_: adam_g is on west coast time, he should be around in a few hours
<sanderj_> Seems like stackops is based on ubuntu.
<SpamapS> sanderj_: stackops looks like a wizard in front of some other technology like Cobbler or MaaS..
<SpamapS> sanderj_: one advantage you get w/ juju+maas+ubuntu is that the entire thing is open source and developed in concert with the community.. I don't know if stackops shares that
<sanderj_> SpamapS, there is one guy in #stackops so it can't be that huge community.
<SpamapS> looks like its a django app
<SpamapS> with some tight integration into horizon somehow
<sanderj_> Hmm... intresting.
<SpamapS> Its fascinating actually
<SpamapS> So you just boot up all these boxes..
<SpamapS> hit them in your browser..
<SpamapS> and they redirect you to stackops.org to configure them
<SpamapS> sanderj_: Juju+MaaS is still undergoing a lot of development and growing pains.. right now they both have issues, but the juju approach at least seeks to *try* to let you work in a self contained manner.
<SpamapS> sanderj_: and by "they both" I mean juju and stackops
<sidnei> uhm, i have juju machine agent pegged at 100% cpu, anyone seen this?
<SpamapS> sidnei: yes
<SpamapS> sidnei: kill it
<SpamapS> sidnei: bug fix is coming soon.. basically just destroy that env
<SpamapS> sidnei: bug #1006553
 * sidnei un-green-ly destroys the environment
<_mup_> Bug #1006553: local provider machine agent uses 100% CPU after host reboot <juju:Triaged by bcsaller> < https://launchpad.net/bugs/1006553 >
<sanderj_> SpamapS, I'm not sure.. but after some reading.. it seems like stackops still is running ubuntu 10.04
<SpamapS> sanderj_: that seems like a wise choice for the next couple of months. 12.04.1 will have quite a few bug fixes. :)
<sanderj_> AH, ok.
<SpamapS> sanderj_: though I'd hope their beta product would move forward to 12.04
<SpamapS> sanderj_: thats one thing they're going to have a hard time with, if they only ever track the LTS's.. they won't get the incremental bump every 6 months.
<SpamapS> still its a really interesting product
<SpamapS> hm, I think I may have found a bug
<SpamapS> you can't store yaml in relation settings
<SpamapS> http://paste.ubuntu.com/1073239/
<jcastro> SpamapS: hey can you explain the workflow for putting the openstack provider in 12.04? When hazmat lands it, will it be SRU'ed or will it come with the next milestone of juju for 12.04?
<SpamapS> jcastro: there is no next milestone of juju for 12.04
<SpamapS> jcastro: SRU's are for serious bugs
<SpamapS> jcastro: it will land in the PPA, and I think we will land a "stable PPA" in the next few weeks.
<jcastro> ugh, really
<SpamapS> jcastro: we can also go with precise-backports
<jcastro> hmm, in hindsight we should have figured out a way to add providers in the stable release
<jcastro> juju-openstack or something
<robbiew> jcastro: so with pyju destined for retirement, the best approach is to push folks towards a PPA....the more stuff bolted on to pyju the messier things get
<jcastro> yeah it just sucks that we're only like 3 months past release and what's in the archive is basicallly grrrr ....
<SpamapS> m_3 and I argued for a plugin architecture from the beginning
<SpamapS> jcastro: the thing in precise-proposed is great
<SpamapS> jcastro: we need to finish verification of it actually
<_mup_> Bug #1020635 was filed: cannot store yaml in relation settings <juju:New> < https://launchpad.net/bugs/1020635 >
<jimbaker> SpamapS, did you try charm format 2 re bug 1020635?
<_mup_> Bug #1020635: cannot store yaml in relation settings <juju:New> < https://launchpad.net/bugs/1020635 >
<SpamapS> jimbaker: no, let me do that, but I doubt it will matter
<jimbaker> SpamapS, there is extensive testing of yaml for relation settings, so i would expect it should work
<SpamapS> jimbaker: so thats new only for format 2?
<jimbaker> SpamapS, correct
<jimbaker> SpamapS, you need to specify it in the charm itself, using format: 2
<SpamapS> jimbaker: I'm using precise-proposed so it doesn't exist yet in that one. Moving to PPA
<jimbaker> SpamapS, ok
<SpamapS> jimbaker: btw is there any reason bcsaller is trying to fix the natty/oneiric failures when it was your commit that broke them?
<SpamapS> oneiric/natty are still on r543
<jimbaker> SpamapS, i think it was purely the fact that i was sick last week
<SpamapS> ahh
<SpamapS> tho it looks like precise/quantal are stuck on 546
<SpamapS> PPA is in bad shape
<hazmat> jimbaker, do you want to take over from bcsaller on that one? bcsaller did you make any head way on that one?
<bcsaller> hazmat: I didn't, the delta looked fine to me and the code seemed to run in isolation
<jimbaker> hazmat, i can do that, although it's still not clear to me how to reproduce
<SpamapS> I can't reproduce it even in an oneiric/natty chroot...
<jimbaker> like bcsaller, the code seemed to run just fine in isolation
<hazmat> hmm
<SpamapS> but I suspect it is a timing bug
<SpamapS> something in natty/oneiric goes slow, or doesn't handle a race properly (older twisted maybe?)
<jimbaker> even when i ran it on a small oneiric instance, in which case lots of other stuff did fail in the tests
<bcsaller> this however was just json marshalling
<jimbaker> just not the format stuff
<hazmat> SpamapS, not likley
<hazmat> this is simple string matching
<jimbaker> correct, there is nothing async here
<SpamapS> jimbaker: other things failed?
<SpamapS> is it possible those failures were async and manifest by screwing up state in a way that bleeds into this test code?
<jimbaker> SpamapS, i didn't make a note, but i did see various failures, apparently based on resource constraint
<SpamapS> small should be able to handle the tests.. thats odd
<jimbaker> SpamapS, when tests fail, they can certainly bleed into other ones
<hazmat> jimbaker, only if the test is broken
<hazmat> failures should not cascade
<hazmat> if the test isn't properly yielding or cleaning up.. then its broken
<hazmat> even if it doesn't fail
<jimbaker> hazmat, these are good points. again, i didn't attempt to diagnose this particular case, i just noticed the failing tests when i did this yesterday. worth repeating
<hazmat> yeah.. and capturing
<negronjl> 'morning all
<SpamapS> jimbaker: format 2 does not help
<jimbaker> SpamapS, hmm, well at least that's a useful data point
<SpamapS> jimbaker: I'm digging into the zk tree now
<SpamapS> pretty sure its just a case of needing to escape input when building the topology node
<SpamapS> workaround is to just base64 encode the yaml
<jimbaker> SpamapS, if it's a string that you want to interpret as a binary string, it should be b64 encoded (per yaml)
<SpamapS> no its a string
<jimbaker> SpamapS, this is supported for format: 2, and tested
<SpamapS> relation-set should take *ANYTHING*
<SpamapS> except perhaps nulls since we're passing in via cmdline args so null termination is necessary
<jimbaker> SpamapS, i'm not certain what that means in practice, because of encoding issues
<SpamapS> well the docs need updating then, they're vague
<jimbaker> format: 2 does change that interpretation that it actually works w/ yaml
<jimbaker> so you can specify any yaml input and it will be faithfully preserved as such upon a later relation-get
<SpamapS> yeah I suspect this is something else
<jimbaker> SpamapS, definitely appears to be the case
 * SpamapS *curses* the useless backtrace
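A minimal sketch of the base64 workaround SpamapS mentions, runnable outside a hook. The relation-set/relation-get calls are shown only as comments, since they exist solely inside hook contexts; the point is that the encoded payload passes through untouched:

```shell
# Encode the YAML so nothing between relation-set and relation-get
# re-parses it; decode on the other side. Pure shell, no juju needed.
monitors_yaml='monitors:
  remote:
    nrpe:
      check_load: {}'
encoded=$(printf '%s' "$monitors_yaml" | base64 | tr -d '\n')
# in the sending hook:    relation-set monitors="$encoded"
# in the receiving hook:  relation-get monitors | base64 -d
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$decoded" = "$monitors_yaml" ] && echo "roundtrip ok"
# prints: roundtrip ok
```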
<xnox> rumour has it, there is openstack provider I can do heavy testing of
<xnox> on a cloud.
<jcastro> mgz: hazmat: ^^^
 * xnox has a lot of cloud to run juju on ;-)
<jcastro> so xnox wants to try rebuilding the archive in HP Cloud, I figure it's a good time to bang on the provider while we have him here?
<hazmat> xnox,  hpcloud has some issues, i believe the worst case is though is you have to shut off machines by hand. its at lp:~gz/juju/openstack_provider
<mgz> xnox: provided you're happy using the tools to cleanup manually if neede... what hazmat said
<xnox> hazmat: ok. I will run it and i'll fiddle with it ;-)
<mgz> I need to integrate a couple of fixes for HP, but I'll do that now and poke you to pull
<hazmat> mgz just sent out review round2 fwiw
<mgz> thanks hazmat
<mgz> xnox: pushed the changes you'll need
<xnox> mgz: cheers
<mgz> for config, you need to set in environments.yaml - {type: openstack, default-image-id: (an image as returned by `nova image-list`), default-instance-type: (1-5 per `nova flavor-list`), juju-origin: lp:~gz/juju/openstack_provider}
<xnox> ok
<mgz> and have in environment OS_USERNAME etc
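Pulling mgz's settings together, the environments.yaml stanza looks roughly like this (the environment name, image id, and flavor number are placeholders; check `nova image-list` and `nova flavor-list` for real values):

```yaml
environments:
  hpcloud:
    type: openstack
    default-image-id: 12345      # an id from `nova image-list` (placeholder)
    default-instance-type: 1     # flavor 1-5 per `nova flavor-list`
    juju-origin: lp:~gz/juju/openstack_provider
```

mgz's "OS_USERNAME etc" refers to the credentials exported in the shell environment, presumably the standard novaclient variables.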
<mgz> I'm off out for a bit but will be around later.
<xnox> mgz: same here. Off to go home ;-)
<jcastro> jamespage: ping
<jamespage> jcastro, pong
<jcastro> hey so since there's only one thing in the queue for tomorrow
<jcastro> I was wondering if you could investigate bug #1020691
<_mup_> Bug #1020691: Charm doesn't work at all <ubuntu (Juju Charms Collection):New> < https://launchpad.net/bugs/1020691 >
<jamespage> jcastro, sure
<jamespage> jcastro, where is the branch?
 * jamespage goes to look
<jcastro> lp:~charmers/charms/precise/ubuntu/trunk
<jamespage> jcastro, ta
<jamespage> weird - I'll talk to folk tomorrow....
<jamespage> jcastro, BTW europython juju presentation went well
<jcastro> oh man I totally forgot about that
<jcastro> good to hear!
<jamespage> standing room only and **loads** of questions at the end
<jcastro> that's really excellent
<jcastro> any pics by any chance?
<jcastro> jamespage: also, we found a typo in the hadoop thing in the flyer
<jamespage> jcastro, did you?
<jcastro> so either the charm changed or we messed up
<jamespage> what was it?
<jcastro> but we checked it like 4 times so I dunno what happened
<jcastro> hazmat: the last line right?
<jcastro> juju add-unit -n20 hadoop hadoop-slavecluster
<jamespage> ah
<jamespage> I see
<jcastro> it's ok we're due for reprinting anyway
 * jamespage phew
<jamespage> anyway - have to go get my flight - until tomorrow
<jamespage> I was quite looking forward to reviewing the nginx charm
<jamespage> ho-hum
<hazmat> jamespage, awesome!
<hazmat> jamespage, i'm curious to talk post flight to get the question highlights
<jcastro> m_3: i sent the title rename to michelle, we should be good there
<SpamapS> jimbaker: so I haven't nailed down the exact problem, but I think the real issue is that the yaml is kept *as yaml* rather than embedded as a string of bytes
<xnox> I have a silly question: is there an ssh charm which i can reuse to make 50 slaves be accessible via ssh from the master node, with the master node's ssh key generated / distributed by juju to the slaves?
<SpamapS> jimbaker: http://paste.ubuntu.com/1073747/
<SpamapS> jimbaker: IMO, the yaml underneath 'monitors' there should be escaped
<SpamapS> xnox: actually
<SpamapS> xnox: I made an attempt at an MPI charm for john the ripper..
<SpamapS> xnox: in which the master generates a key and installs it on all of the slaves
<SpamapS> xnox: lp:~clint-fewbar/charms/precise/john/trunk
<jimbaker> SpamapS, instead of being stored as a map
<xnox> SpamapS: let me see. I'm guessing it's not a charm but hooks
<jimbaker> SpamapS, i still don't see how that issue then becomes a problem with the topology node
<xnox> SpamapS: so for example cephs charm has useful ssh interfaces and hooks
<xnox> can I reuse those in my package "for free" without copying it's hooks
<SpamapS> xnox: no, we don't have inheritance, but this is about the 10th time in the last month that I've seen a need for it
<SpamapS> xnox: of course, we could package those hooks into a library
<xnox> SpamapS: yeah, cause there are plenty of things that talk to each other over http or ssh and it would be nice for $charm-master require to be $ssh-master and it's `units` to be $ssh-slave of their master
<xnox> that would be nice. or as suborinate service
<xnox> in some-cases you would want for all units to be able to talk to each other - e.g. shared memory computations
<xnox> but in most one2many, aka 1 master and Many slaves should be sufficient
<SpamapS> subordinates are a bit clunky for this
<SpamapS> it works
<SpamapS> but its really not an awesome experience
<xnox> SpamapS: source /usr/share/xnox-juju-hooks/ssh.hook
<xnox> SpamapS: unless my master node, should be the one I am running juju from....
<xnox> in that case I can ssh into all of them
<SpamapS> xnox: I don't really know what you're trying to say. :P
<xnox> SpamapS: create a beefy server
<xnox> SpamapS: login
<xnox> SpamapS: juju deploy 100 slaves
<xnox> now using juju describe/status whatever simply start executing parallel workflows from a screen sessions
<xnox> SpamapS: or is that actually the typical way to use juju, e.g. not from local machine but from a public cloud instance to begin with
<SpamapS> xnox: sure, just have the master charm set you up a parllel-ssh or capistrano or fabric config
<SpamapS> xnox: thats what the john charm does
<SpamapS> only with .mpd.hosts
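A rough shape of the pattern SpamapS describes in the john charm: the master generates a keypair once and publishes the public half over the relation. Hook names and the relation key are hypothetical, and the relation commands appear as comments since they only work inside hooks; the key generation itself is runnable:

```shell
# Master side: generate a keypair and capture the public half.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$keydir/id_rsa"
pubkey=$(cat "$keydir/id_rsa.pub")
# in the master's relation-joined hook:   relation-set ssh-pubkey="$pubkey"
# in each slave's relation-changed hook:  relation-get ssh-pubkey >> ~/.ssh/authorized_keys
case "$pubkey" in ssh-rsa*) echo "pubkey ready" ;; esac
rm -rf "$keydir"
```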
<m_3> jcastro: cool deal
<hazmat> .. saltstack
#juju 2012-07-04
 * SpamapS is getting even closer to generic monitoring now.. :-D
<SpamapS> I wonder if there is an open source real user monitoring package
<imbrandon> yes
<imbrandon> the one newrelic uses is open source too
<imbrandon> they are all based on the same lib from a google employee
<imbrandon> SpamapS: there is also jiffy and mod_pagespeed, probably the most used though is the one google analytics and newrelic use called epsiodes http://stevesouders.com/episodes2/
<SC-RM> I have deployed maas on a server and trying to deploy new servers from this one, but all my new instances have instance-state: unknown
<SC-RM> any ideas of where and what to look at?
<imbrandon> all i wanted was a pepsi, just one pepsi, and she wouldent give it to me ...
<jamespage> bbcmicrocomputer, morning
<jamespage> moring all!
<bbcmicrocomputer> jamespage: morning
<jamespage> bbcmicrocomputer, fancy having a stab at the solrcloud charm?  Apache just alpha'ed Lucene and Solr 4
<jamespage> and we already have a nice zookeeper charm....
<bbcmicrocomputer> jamespage: sure, am already working on the existing Solr charm to add dynamic core/db hookups
<bbcmicrocomputer> jamespage: assign the ticket to me :)
<jamespage> bbcmicrocomputer, bug assigned - have fun!
<jamespage> looks interesting
<bbcmicrocomputer> jamespage: cool, thanks
<bbcmicrocomputer> jamespage: yeah it's pretty cool being able to distribute Solr.. although they still don't do distributed idf, so you have to be careful how you distribute your documents amongst servers
<jamespage> bbcmicrocomputer, have you looked at solandra at all?
<bbcmicrocomputer> jamespage: only briefly
<jamespage> bbcmicrocomputer, I did a while ago
<jamespage> but not recently
<jamespage> it looking interesting
<bbcmicrocomputer> jamespage: yeah that is quite cool using the Cassandra backend
<jamespage> bbcmicrocomputer, the last solr deployment I did had a multi-master configuration across data centres
<bbcmicrocomputer> jamespage: that is pretty impressive!
<jamespage> we ended up using reliable message queues to ensure consistency between active site application and both masters
<jamespage> but it was complex
<jamespage> I think solrcloud and solandra both solve that issue more effectively..
<bbcmicrocomputer> jamespage: yeah, sounds like it would
<freeflying> does Juju support use of a specific instance when deploying a charm?
<jamespage> freeflying, specific instance or specific instance type?
<freeflying> jamespage: instance type I meant, sorry
<jamespage> freeflying, in which case the answer is yes
 * jamespage digs 'constraints' documentation out
<jamespage> freeflying, take a look here - https://juju.ubuntu.com/docs/constraints.html
<freeflying> jamespage: say I want to deploy hadoop, can I have the master deployed with one instance type, and the slaves with another?
<jamespage> freeflying, yep - take a look at http://markmims.com/cloud/2012/06/04/juju-at-scale.html
<jamespage> this is some testing we did with hadoop + juju at scale
<jamespage> the master is deployed on m1.large and the slaves on m1.mediums
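(Spelled out as a command sketch, that master/slave split looks something like the following; service names and unit count are illustrative, and the exact relation endpoints are per the charm's README:)

```shell
juju deploy --constraints "instance-type=m1.large" hadoop hadoop-master
juju deploy --constraints "instance-type=m1.medium" hadoop hadoop-slavecluster
juju add-unit -n20 hadoop-slavecluster
```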
<freeflying> jamespage: yup, just get there, very nice, thanks
<jamespage> freeflying, note that we have a few 'hadoop' charms in the charm store ATM
<jamespage> use 'hadoop' rather than any of the others - there is a bug outstanding to get that resolved.
<jamespage> freeflying, and pester me if you have any issue - I wrote it :-)
<freeflying> jamespage: very neat :)
<jamespage> freeflying, I need to write an in-depth blog post on using the hadoop charm - the README has quite a bit of info
<jamespage> http://jujucharms.com/charms/precise/hadoop
<jamespage> it has lots of deployment topologies...
<freeflying> jamespage: at first glance, its a complex charm :)
<jamespage> freeflying, its certainly more complex than some
<jamespage> it uses a late binding technique to decide what role a service is playing as its related to other deployments of the hadoop service
<freeflying> jamespage: but for many devops, they don't need to dig into the charm itself, just use it, get their workload fulfilled
<jamespage> freeflying, +1 to that
<freeflying> jamespage: thats the magic of juju :)
<jamespage> absolutely
<imbrandon> freeflying: wow, long time since I've seen your face round , how ya been :)
<imbrandon> heya jamespage
<jamespage> imbrandon, hey - just looking at you nginx charm
<freeflying> imbrandon: doing pretty well, how about you
<imbrandon> jamespage: ahh its ummmm, functional , still many rough edges but i wanted to get others eyes on it
<imbrandon> :)
<imbrandon> freeflying: good good, more juju than kde these days :)
<freeflying> imbrandon: yep, some of the irc logs tells that obviously
 * imbrandon is actually writing a proof-of-concept jitsu replacement/arch rewrite , heheh dunno how its gonna fly tho
<freeflying> imbrandon: it sounds cool
<imbrandon> gonna run it by SpamapS and maybe jamespage this morning and see what they think
<imbrandon> :)
<imbrandon> mmmm lp:~imbrandon/+junk/pepsi
<imbrandon> fooooood time , afk
<jamespage> imbrandon, what is the container scoped website relation for in the nginx charm?
<jml> Is there a way I can get a postgresql service to give me a db & creds without a charm that relates to it?
<jml> Just wondering if I can spin one up to poke at while I'm figuring some things out
<hazmat> jamespage, there's  a juju-jitsu deploy-to
<hazmat> oh..
<hazmat> jamespage, solandra is dead, its folded into datastax enteprise along with brisk
<jamespage> hazmat, thought it might be
<SpamapS> jml: you want to run the relation hook without an actual relation?
<jml> SpamapS: pretty much, yes.
<SpamapS> jml: you could deploy a subordinate to the pgsql box .. pgclient or something like that
<jml> Hmm.
<SpamapS> jml: seems a bit silly tho.. why not just deploy a dummy charm and simulate w/ debug-hooks?
<jml> SpamapS: "just"
<SpamapS> charm create dummy
<SpamapS> add relation
<SpamapS> deploy, debug-hooks
<jml> SpamapS: I ended up installing postgresql server on my laptop
<jml> SpamapS: there's a bunch of stuff on my laptop that would be tedious to move over to a new juju instance
<SpamapS> jml: bind mount in the LXC might help.. right?
<jml> SpamapS: yeah.
<jml> SpamapS: although my original motivation was convenience (hey, juju makes it easy for me to set up a throw-away postgres db, right?). as is, it's turned out to be less convenient.
<jml> SpamapS: which isn't such a big deal. it's probably just not the right tool for the job.
<SpamapS> why did it turn out to be less convenient?
<SpamapS> jml: was it just less convenient because you couldn't type 'sudo su postgres' to become the admin?
<jml> 'sagi -y postgresql; sudo -u postgres createuser -s -d jml; sudo -u postgres createdb -O jml udd'
<SpamapS> jml: because 'juju ssh postgres/0 -t sudo su - postgres' is about the same :)
<SpamapS> jml: indeed, that is easier if you don't mind having it native and such
<jml> SpamapS: right. I don't mind having it native. I'd prefer not to, but I can live with it. I certainly prefer it to having the *client* machine being another instance, which involves all sorts of tedious copying of source and data files.
<jml> or worse, remote editing
<SpamapS> right forget the client being a juju box..
<SpamapS> jml: I think just deploying, and running the same sudo stuff via juju ssh is fine
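(The throwaway-database route they settle on, as a sketch; user and db names are jml's own examples from above, and the service name assumes the default from the postgresql charm:)

```shell
juju deploy postgresql
juju ssh postgresql/0 sudo -u postgres createuser -s -d jml
juju ssh postgresql/0 sudo -u postgres createdb -O jml udd
```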
<hazmat> jml, you can invoke a relation hook in a non relation hook context, but outside of any hook context?
<hazmat> oh.. you want the data
<SpamapS> hrm.. config-get seems to have no way to print out an empty string properly
<SpamapS> if I specify a single value that is an empty string it should just print the string of bytes, not ''
#juju 2012-07-05
<m_3> mornin
<sep332> hello, I was just wondering if there is a way to deploy a Diaspora server using Juju?
<hazmat> SpamapS, even with format2?
<hazmat> sep332, there doesn't appear to be a charm for it, only owncloud
<hazmat> sep332, contributions welcome.. https://juju.ubuntu.com/docs/charm-store.html
<sep332> of course :) I was wondering if there was a project underway, thank you
<imbrandon> SpamapS: its the novel way in how it handles the sub commands, i did it POC in ruby a) cuz thats what i knew well enough to pump it out in less than an hour and b) it wasnt php so there was a chance on it to be taken serious and c) python argparse can lick my ...
<imbrandon> but language aside its the idea that i'm trying to convey more than anything , and the way i explain things i figured this time it was best said with code :)
<imbrandon> it could be redone in python , c++, vb6 , i dont care :)
<SpamapS> imbrandon: heh true :)
 * SpamapS gets back to fighting with nrpe, nagios, and *
<SpamapS> hazmat: yes even w/ format 2, config-get on a defaulted "" gives back ''
<SpamapS> hazmat: which I suspect is just format_json doing its job as directed
<SpamapS> hazmat: what I really want is *RAW*
<SpamapS> not smart, and not json
<SpamapS> give me the raw bytes
<SpamapS> hrm.. seems like there is a bug in relation-ids when used w/ subordinates
<lifeless> SpamapS: oh noes
<matsubara> Hello there, I'm writing a charm that needs to install a new kernel so I can load the kvm module. if I install the new kernel and reboot, how can I resume the install hook from where it left off?
<hazmat> SpamapS, format2 is yaml, and an empty string is an empty string
<hazmat> SpamapS, re relation-ids w/ subordinates, could you describe the issue, i don't see a  bug re.
<jcastro> jamespage: heya
<jcastro> jamespage: did you have a chance to look at the ubuntu charm?
 * jcastro has a plan to demo it
<jcastro> xnox: hazmat: hey so any luck testing the openstack provider?
<xnox> jcastro: quantal mirrors on hpcloud were up today. will test tonight.
<hazmat> jcastro, on vacation today.. doing meetings though
<jcastro> oh nice, is that what imbrandon set up?
<jcastro> hazmat: ok I'll catch you on the flip side
<xnox> yes
<SpamapS> hazmat: I'm still diagnosing the subordinate issue.
<SpamapS> matsubara-afk: you can't resume the install hook, but you can set everything up to resume the process on reboot with an upstart job
<SpamapS> matsubara-afk: juju and the agents will also keep track of what has completed, so its possible if you do a 'reboot' without exiting the install hook that on reboot the agent will re-run install.
<SpamapS> matsubara-afk: one guarantee we have is that config-changed will run whenever the agent is started up, so you can also put your resume logic in there
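The pattern SpamapS describes can be sketched as a tiny two-phase state machine. This is an illustration, not the real hook: the kernel install and the reboot are stand-in echoes so the control flow itself runs, and the flag path is an example:

```shell
# config-changed runs every time the unit agent starts, so a flag file
# lets the run after the reboot resume where install left off.
statefile=$(mktemp -u)
resume_step() {
    if [ ! -f "$statefile" ]; then
        echo "phase 1: install kernel, schedule reboot"  # stand-in for apt-get + reboot
        touch "$statefile"
    else
        echo "phase 2: modprobe kvm"                     # stand-in for the post-reboot work
    fi
}
resume_step   # simulates the install hook's first run
resume_step   # simulates config-changed after the reboot
rm -f "$statefile"
```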
<negronjl> 'morning all
<SpamapS> negronjl: buenos dias
<jimbaker> negronjl, morning
<negronjl> SpamapS: hola
<negronjl> jimbaker: 'morning
<matsubara> thanks SpamapS
<SpamapS> hazmat: # config-get monitors  ················
<SpamapS> ''  err
<SpamapS> lets try that again
<SpamapS> # config-get monitors
<SpamapS> ''
<SpamapS> jimbaker: ^^ so even individual values are json encoded I assume?
<SpamapS> actually no looks like smart still
<matsubara> SpamapS, do you know of a charm that does the resume with an upstart job as an example?
<SpamapS> matsubara: no, reboots are a recent addition
<matsubara> SpamapS, do I need to run a special juju command to reboot? or just reboot the machine and juju will do the right thing?
<SpamapS> well, 4 or 5 months ago, but still, recent enough that its a new thing to play with
<SpamapS> no
<SpamapS> just reboot
<SpamapS> matsubara: juju installs upstart jobs for the machine and unit agents
<matsubara> SpamapS, what's the difference between juju-machine-agent.conf and juju-maas-nodes-0.conf in /etc/init/? (maas-nodes-0 is the name of this charm btw)
<SpamapS> matsubara: one runs the machine agent, the other runs a unit agent
<jimbaker> SpamapS, in charm format 1, json encoding is used at some point. in a roundabout way, this accounts for why smart formatting renders strings as unicode
<matsubara> SpamapS, what the machine agent does and what the unit agent does? (sorry, if this is explained in the docs, I can read there)
<jamespage> jcastro, yep - but it appears to be a store issue
<jcastro> ah ok
<SpamapS> matsubara: machine agent spawns/removes unit agents and units (units are instantiations of charms)
<jamespage> jcastro: james_w and niemeyer discussed it after my initial query yesterday
<SpamapS> jimbaker: I don't know what that means..
<jamespage> I think a change is proposed which should fix it
<jamespage> but I've lost my backscroll....
<matsubara> SpamapS, I see. thanks!
<SpamapS> jimbaker: what I see is '' where I would expect nothing
<jcastro> jamespage: oh ok, so it's on the right radars then
<SpamapS> jimbaker: its particularly problematic in shell scripts
<jimbaker> SpamapS, understood. yaml can properly account for an empty value (= None in Python), but the question is whether this is readily produced by code using relation-set/relation-get
<jimbaker> vs an empty string which can also be kept as a distinct value
<jimbaker> fwiw, the output was intentionally made as friendly to shell scripts as possible, using the variety of acceptable yaml formats
<jimbaker> SpamapS, i guess one possible problem here is that relation-set foo= implies that foo should be deleted; so if you want store an empty string, you would say something like relation-set foo='', which then keeps this as an empty string, reporting back ''. i guess we could tweak it such that it returns nothing in this case, at the cost of losing roundtripping on this value
<SpamapS> jimbaker: yaml is totally unacceptable to shell, so I'm not sure how that computes..
<jimbaker> SpamapS, i think it sounds like format: 2 has a big problem
<SpamapS> jimbaker: for shell, I want a string of bytes every time with strings. The only time yaml/json/fooo matters is boolean.
<jimbaker> as i have implemented it, since it does use yaml, pervasively
<jimbaker> SpamapS, do you want relation settings to interpret booleans?
<SpamapS> jimbaker: sometimes it will be ok. Many times it will not. :-/
<SpamapS> jimbaker: no, but this is config settings
<jimbaker> SpamapS, also, when you say a string of bytes, i assume you mean to keep it as such. so no unicode interpretation. feed bytes in any format (say utf-8, high bytes, whatever), get back as the exact string of bytes
<SpamapS> jimbaker: just think of how hard it will be to use in shell..  echo "Title='$(config-get title)'" is pretty common. Now that will be ''thetitle''
<SpamapS> jimbaker: *yes*
<SpamapS> why the heck would juju try to interpret the bytes?!
<SpamapS> don't confuse "string" with "str"
<jimbaker> SpamapS, my apologies
<SpamapS> no apology necessary
<SpamapS> I'm just surprised this never came up during any review
<SpamapS> like, did anybody actually try format 2?
<jimbaker> SpamapS, i did, and i did bear in mind the formatting aspects of shell (hence the format tweaking). in my usage, it seemed fine. but that does not change the fact that it does not work for you
<jimbaker> SpamapS, fwiw, there is an extensive discussion of how it is supposed to work in the merge proposal, https://code.launchpad.net/~jimbaker/juju/charm-format-2/+merge/108831
<jimbaker> SpamapS, in any event, it should be straightforward to change the format, given the refactoring that was done, to mean use bytes when working with relation settings. this can also be tweaked to provide for special interpretation of true/false if that's desired
<SpamapS> jimbaker: you guys said way too much for me to find where you all thought it was a good idea to output all strings wrapped in ''
<SpamapS> + self.assert_smart_output("", "''\n")
<SpamapS> I see that it is intentional
<jimbaker> SpamapS, yes, it does ensure roundtripping of empty strings
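The behaviour being argued about here is easy to reproduce without juju: yaml's flow-scalar form for an empty string is two single quotes, so a shell test for "unset" stops working. The `fake_config_get` below is a stand-in for the real tool, not its implementation.

```shell
#!/bin/sh
# What format:2 "smart" output does to an empty string, simulated.
fake_config_get() {
    printf "''\n"     # yaml renders an empty string as '' to keep round-tripping
}

v=$(fake_config_get monitors)
if [ -z "$v" ]; then
    result="unset"
else
    result="set: $v"  # this branch fires: the shell sees two literal quote characters
fi
echo "$result"
```

This is exactly the `config-get monitors` exchange earlier in the log: the charm author expected nothing, the tool printed `''`.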
<SpamapS> is it only empty?
<SpamapS> I haven't actually tried any full strings
<jimbaker> SpamapS, so a couple of options i see:  1) we could specifically work around that, so empty strings are always ouput as nothing at all; 2) change it to bytestrings
<SpamapS> can dial my surprise down a lot if its only empties
<jimbaker> SpamapS, yaml tries to avoid quotes on strings if possible
<SpamapS> jimbaker: I think I'll just argue for format 3 to have raw byte strings, which is what it should have always been
<SpamapS> or at least, we need an explicit way to say "don't touch this"
<jimbaker> SpamapS, i don't think we have to support format 3 if it makes more sense to just use bytestrings
<SpamapS> I'd also like for relation-set and 'juju set' to be able to take input from stdin
<SpamapS> jimbaker: no this is bigger than a single issue
<SpamapS> I've wanted to insert a binary file a few times now
<jimbaker> juju set is different, because it does interpret its input
<SpamapS> x="$(cat foo)" is not the way to do that.
<jimbaker> SpamapS, well if you do use !!binary, you can feed in a b64 encoded string
<SpamapS> jimbaker: which I already do. ;)
<SpamapS> jimbaker: but then on the -get side I have to use yaml again to decode it
<SpamapS> and then we run into shell fail again
<SpamapS> jimbaker: I think it makes a ton of sense to use yaml. It just doesn't make any sense to leave the shell without any help parsing it
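The base64 detour SpamapS and jimbaker are describing looks roughly like this. A plain file stands in for the relation store, since relation-set/relation-get only exist inside a hook, and the file names are invented for the sketch.

```shell
#!/bin/sh
# Shipping binary data through a string-only channel via base64.
printf '\000\001\002hello' > payload.bin

b64=$(base64 payload.bin | tr -d '\n')
echo "$b64" > fake-relation               # relation-set blob="$b64" in a real hook

base64 -d fake-relation > roundtrip.bin   # relation-get blob | base64 -d
cmp -s payload.bin roundtrip.bin && echo "round trip ok"
```

The cost SpamapS objects to is visible here: the consumer side has to know the value is encoded and decode it by hand.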
<jimbaker> SpamapS, do try it with nonempty strings, to see if that really works or not for your case. if not, we can change to bytestrings and forget yaml for relation-set/relation-get
<SpamapS> jimbaker: Ok so non empties works. BUT the bigger issue is that it is very non-obvious how juju is interpreting things when I would expect that strings would always just be raw strings. This is something to think about long term.
<jimbaker> SpamapS, so it sounds like we should at least change the output of empty strings to nothing, as we discussed
<SpamapS> no
<SpamapS> special casing makes it workse
<SpamapS> worse
<SpamapS> jimbaker: this is a deeper issue
<SpamapS> jimbaker: the fact that relation-set monitors="$(cat monitors.yaml)" does not result in   monitors: "...the content of the yaml file" is a deep and troubling problem
<jimbaker> SpamapS, then i suggest changing to bytestrings in format: 2 for relation-get/set. config settings should be kept in yaml however
<jimbaker> since they do have a type interpretation
<SpamapS> jimbaker: but
<SpamapS> jimbaker: the type is *string*
<SpamapS> so, can I expect a string with all the yaml in it?
 * SpamapS has not tried it yet
<jimbaker> of course there is relation-get - to consider
<jimbaker> but that could imply yaml output
<SpamapS> it does imply yaml
<SpamapS> but the values must be the strings that were set
<jimbaker> SpamapS, since we know the types for config values, we could take this into account
<jimbaker> so use yaml to parse except for string types, which implies bytestrings
<SpamapS> no except
<SpamapS> meh
<SpamapS> jimbaker: forget I said anything. I'll deal. I don't have the mental capacity to debate these subtleties
<jimbaker> SpamapS, let's just put it on hold then. your points, especially with respect to reasonable usage like relation-set monitors="$(cat monitors.yaml)" are valid. however, i certainly agree about the subtlety
<jcastro> jamespage: m_3 hey check this out: http://whirr.apache.org/docs/0.7.1/quick-start-guide.html
<jcastro> it's like basically juju for hadoop
<jimbaker> SpamapS, i have just updated this merge proposal to be more robust, but it remains quite useful: it adds help to jitsu. https://code.launchpad.net/~jimbaker/juju-jitsu/subcommand-help/+merge/111634
 * avoine just got juju to bootstrap on HP cloud with the openstack branch
<SpamapS> avoine: woot!
<avoine> really cool indeed
<avoine> using the latest userpass auth-mode
<SpamapS> avoine: when I tried it destroy-environment failed
<SpamapS> avoine: and machine 0 didn't have metadata
<avoine> you're right, destroy-environment is not working
<avoine> I think I have the metadata
<avoine> well there is something in /var/lib/cloud/instances/i-00010533/user-data.txt
<SpamapS> no i mean in juju status
<avoine> ok
<avoine> yeah same here
<jcastro> https://juju.ubuntu.com/docs/getting-started.html
<jcastro> hey should we recommend installing cgroups-lite for LXC?
<jcastro> it was my understanding that yes, we should
<SpamapS> doesn't lxc recommend it already?
<jcastro> ah yes, correct, that answers my question, ta
<jcastro> <---- reshuffling getting-started.rst
<SpamapS> cool
<SpamapS> bcsaller: hey, do subordinates not watch their relation settings like regular unit agents? I do relation-sets in later hooks using -r and they are never propagated to subordinates
<bcsaller> SpamapS: they should
<SpamapS> http://paste.ubuntu.com/1076855/
<SpamapS> Thats on the mysql side.. setting it in upgrade-charm
<SpamapS> http://paste.ubuntu.com/1076860/
<SpamapS> bcsaller: and thats the unit agent that is connected on the other side. sees the change, but does not run changed hook
<bcsaller> SpamapS: I can look into it but I was pretty sure those code paths were tested
<bcsaller> thanks for letting me know
<SpamapS> bcsaller: Its a corner case, don't spend much time on it
<RichardRaseley> So, I am interested in setting up an OpenStack environment using JuJu and MaaS (as outlined here https://help.ubuntu.com/community/UbuntuCloudInfrastructure), but I only have 5 nodes to work with. Is it possible for me to co-locate some of the services but still use juju to do the deployment? Like if I wanted 1x for mass / juju 1x mysql, rabbitmq, keystone, horizon, and 3x nova nodes...
<jamespage> jcastro, kinda
<jcastro> jamespage: hey so I think there might be a problem with the etherpad-lite charm
<jcastro> and the best I can figure out is "some kind of npm error"
<jamespage> jcastro, marvellous
<jamespage> jcastro, I'll take a look tomorrow
<jcastro> http://pastebin.ubuntu.com/1076998/
<jamespage> jcastro, the nodejs PPA has upgraded from 0.6.x to 0.8.x and etherpad does not like it
<jamespage> jcastro, whats in the archive will probably work - I'll test tomorrow
<SpamapS> jamespage: the distro caught up with nodejs last cycle btw
<jamespage> SpamapS, yeah - I know - well at least for about 1month anyway
<jcastro> heh
<jcastro> SpamapS: I was waiting for your "archive rocks, told you this would happen."
 * SpamapS bows
<SpamapS> one is glad to be of service
<jamespage> SpamapS, npm landed pretty late didn't it
<jamespage> I think thats why the charm still uses the PPA TBH
<jcastro> I'll test with the archive version and if it works submit a branch
<jcastro> I need to get better at workflow with charms anyway
<jamespage> jcastro, thanks - that would be really great
<jamespage> I'll review it tomorrow
<jamespage> assuming its working OK :-)
<jcastro> heh
<jcastro> same errors switching to node in the distro
<jamespage> jcastro, odd
<imbrandon> mmm ferrari fxx
 * imbrandon want
<jimbaker> hazmat, i just proposed a branch that reuses security groups (and does not attempt to delete the machine security groups)
<jimbaker> hazmat, seems to work well in actual usage against ec2 too
<hazmat> SpamapS, i think some of your output concerns might be addressed by https://code.launchpad.net/~bcsaller/juju/sane_output_test_option/+merge/106067
<hazmat> jimbaker, sweet!
<jimbaker> hazmat, it certainly makes for a better experience
<SpamapS> hazmat: hopefully. I think the issues are subtle.. but will bite us
<m_3> mornin gang
<imbrandon> heya m_3
<SpamapS> alright, juju precise SRU going to precise-updates!
<SpamapS> *finally*
<m_3> whoohoo!
<sigmonsays_> Hello
<imbrandon> Hello
<sigmonsays_> This is quite the long shot but I am curious about seeing gozk (go zookeeper) implementation. From my understanding, some pieces of juju were rewritten in Go to take advantage of talking to zookeeper using go. I was wondering what such implementation might look like
<imbrandon> sigmonsays_: niemeyer wrote that ( gozk ) iirc, might be the best to ask
<sigmonsays_> thanks imbrandon
<m_3> sigmonsays_: https://launchpad.net/gozk it's all in the open
#juju 2012-07-06
<imbrandon> wget https://launchpad.net/juju/trunk/galapagos/+download/juju-0.5.1.tar.gz
<imbrandon> bah
<m_3> pushing a new release of jitsu after a bit... lemme know if you want anything else in beforehand
<imbrandon> m_3: pink pony
<imbrandon> heh
<imbrandon> working on new 0.5.1 osx package and rpm now as well
<m_3> imbrandon: cool
<m_3> no pink ponies for you
<m_3> ;)
<m_3> well ok, i guess you can have pps
<imbrandon> http://fridge.ubuntu.com/files/no-pony-for-you.jpg
<imbrandon> doh it 404's now, IS must have found it , lol
<imbrandon> hahaha found a copy http://ircimages.com/ircimages/7/5/75dfb725844220ba4d824268c72ea0c3.jpg
<imbrandon> :)
<imbrandon> ( in my defense, crimsun, not I, uploaded the original to the fridge, pity too as google has links to it all over heh )
<imbrandon> :)
<m_3> nice
<imbrandon> heh yea it was on the fridge since breezy
<imbrandon> :)
<imbrandon> no idea when they took it off heh
<imbrandon> Google, Amazon, HP and Microsoft all agree on one thing. Ubuntu
<imbrandon> heh
<m_3> imbrandon: yup, it's pretty cool
<imbrandon> SpamapS / m_3: got an OSX install handy ? http://cl.ly/Htjg
<imbrandon> :)
<m_3> imbrandon: nope... ubuntu on my mac
<imbrandon> check out the screenshot anyhow :)
<imbrandon> heh
<imbrandon> bout to update the docs :)
<imbrandon> no more brew required, only xcode from the App store and the juju-installer :) /me wonders if i could get that into the App store ...
<m_3> imbrandon: nice screenshot
<m_3> lunch
<jamespage> jcastro: works for me if I drop use of the PPA
<jamespage> going to propose that now
<jcastro> m_3: around?
<negronjl> 'morning all
<jcastro> morning!
<jcastro> https://bugs.launchpad.net/charms/+source/redis-master/+bug/1021157
<_mup_> Bug #1021157: Charm fails to start redis-server on EC2 <redis-master (Juju Charms Collection):New> < https://launchpad.net/bugs/1021157 >
<jcastro> great response to a bug!
<negronjl> jcastro: morning
<jcastro> jamespage: hey so dropping the PPA worked for you?
<_mup_> Bug #1021861 was filed: Transient error /w MAAS provider: Unknown operation: 'list_allocated'. <juju:New> < https://launchpad.net/bugs/1021861 >
<jimbaker> one thing that jitsu now supports in the new version in the ppa is --help, both on the main command and its subcommands. it's not complete for all subcommands (that requires some additional refactoring), but still useful
<jcastro> hazmat: you working today?
<hazmat> jcastro, no, but if you need something i'm around
<hazmat> jcastro, what's up?
<jcastro> oh, the no would answer my question about the openstack provider then I guess, heh
<robbiew> lol
<hazmat> bcsaller, can you lbox propose -cr the test option branch for an updated reitveld
<bcsaller> hazmat: yeah, I think lbox is working again
<hazmat> bcsaller, it is
<hazmat> jimbaker, re the secgrp branch, looks good, just trying to work through the implications for existing groups, afaics its fine, but its also not clear if there's a test explicitly for that case (existing group with open ports)
<hazmat> bcsaller, thanks
<jimbaker> hazmat, yeah, i'm thinking about that testing
<jimbaker> hazmat, there is certainly a lot of testing in the firewall piece re open/close ports to ensure that it conforms
<jimbaker> in practice, when a new unit is started, the watch for that machine does the firewall changes in a short period of time (<1s is what i observed)
<hazmat> jimbaker, so the firewall is processed as part of process_machine in the provisioning agent which allocates the new machine
<hazmat> ?
<jimbaker> hazmat, yes
<hazmat> jimbaker, with an additional test around the pre-existing group, it feels like it should be good to merge
<jimbaker> hazmat, sounds good to me
#juju 2012-07-07
<ejat> bug 891868
<_mup_> Bug #891868: juju cli api should be invokable outside of units  <juju:Confirmed> < https://launchpad.net/bugs/891868 >
<ejat> anyone can help me how to get config-get ?
<ejat> getting the same error as in the bugs
<hazmat> ejat, you use it in a hook ?
<hazmat> ejat, or if your on the client, you can use juju get service_name to see it
<ejat> ok thanks
#juju 2012-07-08
<tshauck>  Hi, how can I set the PYTHONPATH during runtime and keep it set, every os.whatever I try, the path always reverts
<imbrandon> wow, parse is awesome , like really
<imbrandon> heya SpamapS jcastro hazmat : I'm about to write a blog post about it but those of you that use juju on AWS or Azure can now have free PREMIUM newrelic accounts ( the equiv of the $24 a month acct ) http://newrelic.com/aws and http://newrelic.com/azure
<imbrandon> gogo no reason not to monitor with new relic now :)
<imbrandon> [ btw the newrelic charms are in the charm store now too ]
<hazmat> imbrandon, cool
<hazmat> imbrandon, what's the catch?
<imbrandon> none, i've seen them do it with other partners too
<imbrandon> they just want critical mass i think
<hazmat> that's pretty awesome
<imbrandon> the more ppl using new relic the harder for others to break in
<imbrandon> etc
<hazmat> its one of the nicer drop in application level monitoring tools
<hazmat> everything else is pretty much generic or manual app
<hazmat> or stack specific
<imbrandon> yea this is a good mix of both
<imbrandon> they have the generic or stack api's
<imbrandon> i hear they are releasing a node one soon
<hazmat> at least for python they tie into all of the common db adapters and web frameworks to give pretty useful generic app stuff.. most of the other monitoring stuff is generic at the machine level
<hazmat> imbrandon, thanks
<imbrandon> np :)
<imbrandon> btw finally migrated my blog from wordpress to octopress yesterday , #1 omg it was easy #2 omg i'm never going back to a mysql backed blog
<imbrandon> heh
<imbrandon> it took all of about 5 minutes, and i had it migrated, permalinks the same ( no nasty rewrites to worry about ) and comments working
<imbrandon> heh
<imbrandon> just need to find a way to encapsulate it all into a "migrate from wordpress" charm in some sane way ... that would be awesome :)
<hazmat> imbrandon, or even just a relation on an octopress charm.. although octopress is static so its perhaps a bit strange
<imbrandon> ahh that could be awesome
<imbrandon> yea its static but its almost identical if Disqus comments are used
<imbrandon> because at that point really wordpress is static
<imbrandon> or should be
<imbrandon> hazmat: https://parse.com/ is pretty slick too if you hadn't seen them yet, just started playing with it a few days ago, but it's got great potential for select apps ( wouldn't use it for everything )
#juju 2013-07-01
<marcoceppi> JoseeAntonioR: I am now
<JoseeAntonioR> marcoceppi: hey, wanted to know if you can give me a hand with a couple things on my postfix charm
<marcoceppi> JoseeAntonioR: Yeah. I'm about to head off to bed, but tomorrow I'll be around to help out
<JoseeAntonioR> sure, thanks!
<jcastro> hey evilnickveitch
<evilnickveitch> jcastro, hey
<jcastro> heya
<jcastro> check out the juju list, apparently large swaths of the writecharms stuff is missing
<evilnickveitch> jcastro, yeah, I saw that, I am working on a fix. thanks!
<jcastro> <3
<evilnickveitch> jcastro - the issue was, the "writing a charm" was covered in 2 different places in the old docs, and I only converted one (because it was on my list once, not twice), but it is a reasonably straightforward fix. I will have it done today, and can make it beautiful later
<jcastro> nod
<arosales> For folks looking on the Juju list they may have noticed we have launched a Charm Championship competition
<arosales> https://github.com/juju/charm-championship/
<henninge> Hi!
<henninge> How can I work around the missing debug-hooks command?
<henninge> I ssh into the unit and add '/var/lib/juju/tools/1.11.2-precise-amd64' to PATH, so I can run relation-get but it complains about JUJU_CONTEXT_ID missing.
<henninge> What environment do I need to build to be able to run relation-get?
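For context on henninge's error: the hook tools are thin clients that talk back to the unit agent, and they find their context through environment variables the agent exports (JUJU_CONTEXT_ID among others), which is why they fail from a bare ssh session. The stand-in below only mimics that guard; it is not the real tool and the context-id value is made up.

```shell
#!/bin/sh
# Simulated hook tool refusing to run without the agent's context.
unset JUJU_CONTEXT_ID
first=""

fake_relation_get() {
    if [ -z "$JUJU_CONTEXT_ID" ]; then
        echo "error: JUJU_CONTEXT_ID not set" >&2
        return 1
    fi
    echo "10.0.0.1"     # pretend private-address
}

fake_relation_get private-address || first="refused"
addr=$(JUJU_CONTEXT_ID="unit-foo-0-ctx" fake_relation_get private-address)
echo "$first / $addr"
```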
<arosales> henninge, hello :-)
<arosales> marcoceppi, any suggestions on debug hooks
<arosales> henninge, are you using juju 1.11.1?
<arosales> henninge, sorry I see the version in your path now
<henninge> arosales: 1.11.2, actually
<henninge> arosales:  and Hi ;-)
<marcoceppi> henninge: I've not tried to mock debug-hooks yet in 1.11.2, typically when I'm writing charms and a hook errors out, I'll just ssh in to the machine, add a few juju-log lines to spit out the relation-get data then run juju resolved --retry on the unit/relation
<marcoceppi> henninge: Give me a second to spin up an evironment and check what envs you need
<henninge> marcoceppi: Thanks
<henninge> yeah true, --retry is an option.
<marcoceppi> henninge: I used to know back in pyjuju, but I'm sure there are extra environment variables now. As such I typically patch something live then run --retry until I get it working. After that I just pop the fixes in the local charm and plug away
<henninge> marcoceppi: Ok, I'll go with that then. Thanks! ;-)
<marcoceppi> henninge: IIRC they're adding debug-hooks or something similar in to core sooner or later, so that's merely a stop gap until then
<henninge> Cool. I was hoping for that ...
<henninge> Thanks again guys, gotta go.
<wedgwood> jcastro: sorry if this is covered elsewhere, but in the contest rules I see "All deployment bundles and charms must be the participant's original work"
<wedgwood> does that mean that a participant would have to write their own postgres charm, for example?
<marcoceppi> wedgwood: The wording might be misleading as the contest strongly encourages using charms from the current ecosystem
<wedgwood> misleading and possibly confusing/frustrating.
<marcoceppi> wedgwood: jcastro is out for the rest of the day, you might want to mail the list for a clarification/to get it fixed
<wedgwood> marcoceppi: will do thanks
<arosales> wedgwood, I can also follow up on the wording
<wedgwood> arosales: ok. I'm emailing the list now too, for good measure.
<arosales> wedgwood, ok thanks
<arosales> wedgwood, thanks for bringing it up
<Jarco> Hello, Where can I find info on Juju pricing?
<sarnold> Jarco: juju itself is free. hosting on AWS or HP Cloud or Rackspace or whatever will not be free, of course. If you're using your own OpenStack environment or MAAS bare-metal "clouds", juju is free.
<Jarco> I will need to look into openstack then
<Jarco> Or perhaps aws. Seems to be very good and not to expencive
<marcoceppi> Jarco: if you've got hardware already, OpenStack and MAAS are good options, if you don't own hardware and can accept cloud prices HP Cloud is built on OpenStack or you can use AWS. The provider boils down to preference as the provider is interchangeable in Juju. So you can use the same charm and deploy a service on to AWS as you do OpenStack, MAAS, etc
<thumper> jcastro, m_3, marcoceppi: I'm looking for some juju presentations that I can pilliage for a talk this afternoon, got any handy?
<m_3> thumper: one sec... looking for link
<thumper> m_3: ta
<thumper> dropbox?
<m_3> thumper: invited you... it's a bit frail as it's a dropbox share of a U1 share... etc etc
<thumper> m_3: what happened to U1?
<thumper> m_3: can't the u1 share just work?
<m_3> don't have the U1 link on this laptop
<m_3> jcastro might though
<m_3> negronjl oe ^
<m_3> or negronjl ^^
<negronjl> thumper: u1 link for what ?
<thumper> m_3: you can share from the web ui
<thumper> m_3: the share seems to work ok though :)
<m_3> thumper: it's not shared with me over U1 anymore
<thumper> m_3: the cloudcamp one is quite good
<thumper> needs a little tweaking under the hood
<thumper> but the background is almost perfect for my audience
<m_3> thumper: great
<m_3> have a good talk!  break a leg and such
<thumper> thanks
#juju 2013-07-02
<JoseeAntonioR> hey marcoceppi, around and free?
<JoseeAntonioR> hey guys, question, how do I know which directory is the charm going to be into? let's say I want to use a.txt on my charm and I want to locate it, where would it be?
<JoseeAntonioR> or I should just specify the directory?
<marcoceppi> JoseeAntonioR: All hooks start in the root of the charm. There is also a $CHARM_DIR environment variable that is set, so if you ever cd out of the charm directory you can get back to it with that variable
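marcoceppi's answer in shell form. CHARM_DIR is exported by juju in real hooks; the manual assignment below exists only to make the sketch runnable on its own, and the file name is invented.

```shell
#!/bin/sh
# Hooks start in the charm root, and $CHARM_DIR always points back to it.
CHARM_DIR=$(pwd)              # set by juju in a real hook environment
export CHARM_DIR

mkdir -p files
echo "hello" > files/a.txt    # a.txt shipped inside the charm

cd /tmp                       # a hook may cd anywhere...
cd "$CHARM_DIR"               # ...and return via the variable
cat files/a.txt
```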
<sidnei> gary_poster: ping
<gary_poster> sidnei, pong
<sidnei> hi gary_poster, had a question about juju-gui: does it display services deployed via command line or only those dployed via the gui?
<gary_poster> sidnei, all (both command line and gui)
<sidnei> gary_poster: so if i have an existing environment and deploy juju-gui to it afterwards the existing services will show up just fine?
<gary_poster> sidnei, yes
<sidnei> gary_poster: mighty cool. thanks!
<gary_poster> welcome sidnei :-)
<sidnei> holy connected graph batman, it works.
<beuno> sidnei, pic or it didn't happen
<sidnei> beuno: http://ubuntuone.com/4q8wr0gtdb7Tyu5XDKsYX1
<sidnei> gary_poster: btw, i was confused for several seconds while it loaded the environment, there was no visible indication that anything was happening
<gary_poster> sidnei, for pic, majority of those are subordinates, right?
<gary_poster> sidnei, I ask because we plan to make those pictures (with subordinate graphs in particular) a lot prettier for this cycle
<sidnei> gary_poster: yes, and i think i might be misusing those subordinates, i believe only one would be ok for all services instead of one per service
<gary_poster> cool sidnei, yeah.  IS likes separate ones in order to upgrade individually, so they have similar issues
<gary_poster> even though yes, typically they could be collapsed
<sidnei> gary_poster: ah, i knew there was a reason
<gary_poster> sidnei, no visible indication anything was happening: we have a spinner which is supposed to give some visual reassurance.  was that early, late, absent, or insufficient?
<gary_poster> "that" == the spinner
<sidnei> gary_poster: i didn't see any spinner nope
<gary_poster> weird sidnei, ok.  If you go to [GUI IP]/juju-ui/version.js in the browser what does it tell you?
<sidnei> gary_poster: var jujuGuiVersionInfo=['0.6.1', '753'];
<sidnei> gary_poster: ah, sorry. so reloading, there was indeed a spinner
<gary_poster> sidnei, oh ok.  so it was insufficient
<sidnei> gary_poster: so *after* the spinner, i got the open charm browser drawer and a blank canvas for like 60+ seconds
<gary_poster> sidnei, ah!  ok, haven't seen.  I'll file a bug.  Thanks!
<sidnei> gary_poster: and then i was clicking around and suddenly the services showed up magically
<gary_poster> heh
<gary_poster> ack, yeah, not cool.  I'll subscribe you to the bug, for the heck of it :-)
<gary_poster> sidnei, probably doesn't matter, but just in case, what browser and what juju environment?
<gary_poster> (ec2, openstack, etc. and pyjuju vs juju core)
<sidnei> gary_poster: chrome Version 28.0.1500.63 beta, canonistack, pyjuju
<gary_poster> perfect thanks again sidnei
<sidnei> gary_poster: is there a planned feature to hide a specific service? nagios is like the eye of sauron and makes everything else impossible to see
<gary_poster> sidnei, lol, yes, generally.  last two months of cycle is "environment power tools" for stuff like that.
<sidnei> cool, thanks muchly
<sidnei> http://ubuntuone.com/2VrziUdvYEXdOHo2Os117T with nagios removed and manually aligned things
<gary_poster> much better :-)
<sidnei> gary_poster: (or someone else) is there an svg of the juju logo?
<gary_poster> sidnei, yeah...looking
<robbiew> sidnei: still need that SVG file?
<sidnei> robbiew: nope, got it
<robbiew> I've got one, if so
<robbiew> cool
<sidnei> doing some slides for my presentation tomorrow, redoing niemeyer's diagrams
<robbiew> nice!
<Akira1> anyone have the link handy for the recorded version of friday's juju charm school?
<robbiew> Akira1: https://juju.ubuntu.com/resources/videos/
<robbiew> specifically -> http://www.youtube.com/watch?v=08dOs3eO04M
<Akira1> thanks :)
<robbiew> Akira1: damn...but it looks like it wasn't recorded properly
<Akira1> er, :~~~(
 * robbiew checking
<sidnei> gary_poster: going to http://uistage.jujucharms.com:8080/ gives me some unstyled html atm, as if the css isn't loading
<robbiew> nevermind...user error
<Akira1> yeah looks like it is ok to me
<gary_poster> sidnei, don't see it.  more details?  Maybe dupe steps?
<sidnei> gary_poster: i wonder if i got a broken cached version
<sidnei> gary_poster: it loads instantly as if cached locally, but no css
<sidnei> oh, not sure how i got to 8080
<sidnei> :80 works fine
<gary_poster> ok cool
<gary_poster> though 8080 does for me too...
<gary_poster> mm, one idea
<gary_poster> sidnei, do you see uistage 8080 when you go to chrome://appcache-internals/
<gary_poster> we don't use the appcache anymore
<gary_poster> because of issues like this
<sidnei> gary_poster: yup
<gary_poster> sidnei, ah ok.  kill that and then 8080 will work fine too.
<sidnei> Size: 29.8 kB
<sidnei> Creation Time: Tuesday, October 16, 2012 4:25:00 PM
<sidnei> Last Update Time: Tuesday, October 16, 2012 4:25:00 PM
<sidnei> Last Access Time: Tuesday, July 2, 2013 5:00:12 PM
<gary_poster> sidnei, yeah, pretty old :-)
<balafi> will juju work with AWS free tier?
<balafi> I try to follow the steps at juju documentation page and getting  'error: use of closed network connection' on 'juju bootstrap'
<balafi> any help would be appreciated
<balafi> ooops - it worked
<balafi> I see the bootstrap instance in the EC2 management console but can't do 'juju status'
<balafi> error: no instances found
<henninge> balafi: give it some time to start up
<henninge> balafi: Amazon free tier is for micro instances AFAIK but juju will spin up m1.small instances by default.
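As henninge notes, the free tier covers micro instances while juju defaulted to m1.small. With the older pyjuju EC2 provider the default could be overridden in environments.yaml; the keys below are from that era and may not carry over to juju-core, so treat this fragment as illustrative only, with placeholder values throughout:

```yaml
# environments.yaml (pyjuju EC2 provider; illustrative values)
environments:
  sample:
    type: ec2
    access-key: YOUR-ACCESS-KEY
    secret-key: YOUR-SECRET-KEY
    default-instance-type: t1.micro   # free-tier-eligible size
    control-bucket: some-unique-s3-bucket
```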
<thumper> balafi: which zone?
<balafi> us-west-2
<thumper> balafi: I've had issues with ap-southeast-2
<thumper> balafi: where the api lies
<thumper> however us-* ones have been good generally
<thumper> balafi: can you see it in the AWS web console (the instance)?
<balafi> yes
<balafi> its m1.small
<balafi> its 'running', 2/2 check passed
<henninge> I have occasional problems in eu-west-1, too.
<henninge> try juju status again.
<balafi> I had hard time to bootstrap also
<balafi> I was getting error and then it worked with me doing any changes
<balafi> without changes
<thumper> balafi: it seems to be a problem with the api
<thumper> which makes me sad
<thumper> as there is nothing I can help with immediately
<balafi> no problem
<henninge> thumper: I have seen occasional glitches in the api in the past, outside of juju. I think it wants very defensive programming. :-(
<thumper> henninge: yeah... kinda sucks
<thumper> I find it frustrating because ap-southeast-2 is closest to me
<thumper> and it doesn't work very well
<thumper> or at least, we don't work well with it
<thumper> it seems weird that different regions have different API results
<thumper> or interactions
<thumper> versioning maybe?
<thumper> I'm not sure, but a frustrated user
<henninge> Hm, I have not yet used different regions, let alone compared results.
#juju 2013-07-03
<ehg> anyone else come across go juju's upgrade-charm failing after 9 upgrades of a charm?
<ehg> maybe due to: /worker/uniter/charm/deployer.go:prefix = prefix + time.Now().Format("-%Y%m%d-%H%M%S")
<ehg> not formatting the date properly, and coming out as a string
<ehg> literally
<ehg> it causes a pretty nasty infinite loop that runs config-changed :(
<ehg> colleague has filed a bug here: https://bugs.launchpad.net/juju-core/+bug/1197369
<_mup_> Bug #1197369: more than 10 upgrades to a charm causes horrible infinite loop <juju-core:New> <https://launchpad.net/bugs/1197369>
<ubot5`> Launchpad bug 1197369 in juju-core "more than 10 upgrades to a charm causes horrible infinite loop" [Undecided,New]
<ehg> snap
<stub> ehg: I suspect it is fixed in the dev version. I think I've seen it, but not since switching to the PPA version
<ehg> stub: it seems to be in head still, http://bazaar.launchpad.net/~go-bot/juju-core/trunk/view/head:/worker/uniter/charm/deployer.go#L191
<ehg> i'm not too familiar with bzr, however
<arosales> juju charm meeting starting in a few minues
<arosales> *minutes
<arosales> on air URL: http://youtu.be/ZQ47UzK65-Q
<arosales> etherpad: http://youtu.be/ZQ47UzK65-Q
<arosales> sorry
<arosales> etherpad: https://plus.google.com/hangouts/_/cb8d75f1182f29cddb2b46a565cb9ff361af9e09?authuser=0&hl=en
<arosales> that is actually the hangout URL if folks want to join
<arosales> real etherpad link: http://pad.ubuntu.com/7mf2jvKXNa
<arosales> :-)
<arosales> marcoceppi, will be checking to see if he can update the onair
<arosales> http://ubuntuonair.com/ also updated
<smartboyhw> arosales, um, since jcastro is on holiday, I need to ask you: The JuJu championships, 13-18-year-olds are supposed to contact you guys.
<smartboyhw> So, how do I contact them? jcastro told me to send an email to him but he's on holiday, probably not the best idea...
<smartboyhw> Or is he back!!?!??!
<smartboyhw> jcastro, ping ping ping
<arosales> smartboyhw, hello
<smartboyhw> arosales, hey:)
<jcastro> hey guys
<jcastro> yeah, smartboyhw I have your info
<smartboyhw> jcastro, yay:)
<arosales> ah jcastro !
<jcastro> I've sent you an email
<smartboyhw> Wait, my parents submit MY entry!?!?!?!?
<smartboyhw> What kind of rule is that?
 * smartboyhw :O
<jcastro> you're underage, there's a bunch of legal stuff we have to do
<smartboyhw> I mean, it's OK to let them have the prize money... But I think I should be the one who submits the entry myself, jcastro.
<jcastro> I don't really make these rules
 * smartboyhw decides not to take part due to this strange rule.
<smartboyhw> If I don't have control over what I enter, what's the point?
<CyberJacob> Hi
<CyberJacob> Anybody here?
<marcoceppi> CyberJacob: Hi!
 * CyberJacob is now reinstalling to try and fix the problem
<CyberJacob> ok, so now I have my first problem again
<CyberJacob> and I don't know how I fixed it
<arosales> CyberJacob, what is the issue you are seeing?
<arosales> CyberJacob, http://pastebin.ubuntu.com
<CyberJacob> arosales: http://pastebin.ubuntu.com/5841424/
<CyberJacob> arosales: everything gets stuck on pending
 * arosales taking a look
<arosales> CyberJacob, ah so this is a maas provider
<CyberJacob> yup
<arosales> CyberJacob, what version of juju are you running?
<CyberJacob> 0.7
<CyberJacob> fresh from apt-get
<arosales> cool
<CyberJacob> on a brand new MAAS install
<arosales> CyberJacob, and to confirm your environment did you grab maas from the ppa or ubuntu archive?
<CyberJacob> I grabbed a server install ISO and selected "multiple server install with MAAS" from its GRUB menu
<arosales> 12.04 server install?
<CyberJacob> 13.04
<arosales> ok
<arosales> CyberJacob, I take it all follow-on deploys after the bootstrap go into pending
<CyberJacob> yup
<hazmat> arosales, could you pastebin a tail -n 200 of  /var/log/maas/maas.log
<CyberJacob> and juju bootstrap went fine
<hazmat> CyberJacob, on the maas server
<arosales> CyberJacob, your set of enlistments looks good too from the maas side?
<CyberJacob> I think so
<CyberJacob> how do I check?
<hazmat> CyberJacob, a pastebin of /var/log/maas/maas.log would help figure out the issue
<hazmat> CyberJacob, another useful log here would be /var/log/juju/provisioning-agent.log on the juju bootstrap server
<CyberJacob> hazmat: http://pastebin.ubuntu.com/5841462
<hazmat> hmm
<CyberJacob> from what I can tell, Node2 is the juju master
<CyberJacob> (it booted up when I ran bootstrap)
<CyberJacob> hazmat: http://pastebin.ubuntu.com/5841464/
<hazmat> CyberJacob, thanks
<hazmat> CyberJacob, this should already be fixed in 0.7.. but it looks a lot like a port specification bug in the maas provider.. in your juju provider config for maas can you add :80 to the maas node address portion of the url
<hazmat> ie maas-server: my-maas-node:80
<hazmat> err i mean... maas-server: http://my-maas-node:80
<roaksoax> hazmat: here!
<hazmat> CyberJacob, ^
<hazmat> roaksoax, what do you make of this maas.log output.. http://pastebin.ubuntu.com/5841462
<CyberJacob> would that be on the server I ran `juju bootstrap` from?
<hazmat> CyberJacob, yes it goes into the juju client's environments.yaml
<CyberJacob> ok, done
<roaksoax> in which step is this happening?
<CyberJacob> do I need to restart the service?
<hazmat> CyberJacob, you have to execute a command that will modify the environment for it to sync to the bootstrap node
<hazmat> CyberJacob, ie.. add-unit / deploy/destroy-service etc
<roaksoax> first seems that an invalid MAC address is being sent to maas, then no mac address being sent, then mac address duplication
<CyberJacob> hazmat: ok, just tried to deploy another charm
<CyberJacob> now I have two machines with instance-id pending
<arosales> roaksoax, note this is a 13.04 server media install of maas
<roaksoax> arosales: the issues doesn't really seem to be MAAS related, but rather whomever is interacting with MAAS.
<roaksoax> that's why I need to know when is this happening
<arosales> roaksoax, gotcha
<hazmat> CyberJacob, could you pastebin any new output from the provisioning agent log
<roaksoax> arosales: btw.. do we have a juju-core documentation to use with lxc?
<arosales> roaksoax, so aiui the issue at hand is deploying services after the initial juju bootstrap go into pending
<CyberJacob> hazmat: http://pastebin.ubuntu.com/5841486/
<arosales> roaksoax, juju-core hasn't landed local provider yet
<arosales> roaksoax, also for my maas debugging education there is "check-commissioning" helpful to verify nodes are "ready" for deployment onto, or is there a better method?
<hazmat> hmm.. looks like maas isn't listening on the port from that provisioning agent log
<CyberJacob> hazmat: is there a way to force it to sync to the bootstrap node without deploying something?
<roaksoax> arosales: the issue above seems to be related to enlistment. As for commissioning, the only way to see what's going on with the commissioning process would be to get into the image while it is happening (ephemeral image debugging)
<roaksoax> https://lists.launchpad.net/maas-devel/msg00808.html
<hazmat> CyberJacob, juju get-constraints will do it
<arosales> roaksoax, thanks for the doc
<CyberJacob> roaksoax: you want me to run that one?
<hazmat> CyberJacob, does the juju bootstrap node have connectivity to the maas server?
<roaksoax> CyberJacob: I'd first like to know after what process is that you are seeing that error. (I'm guessing is enlistment, before the node is registered in maas)
<roaksoax> CyberJacob: you are having several different errors that lead me to believe it is enlistment failing for some reason, then trying to enlist again, and so on
<hazmat> CyberJacob, at the moment juju is complaining that it can't connect to the maas server.. hence the provisioning agent log error -> Error interacting with provider: Connection was refused by other side: 111: Connection refused
<CyberJacob> hazmat: found the issue
<CyberJacob> hazmat: I was following the instructions in http://maas.ubuntu.com/docs/juju-quick-start.html
<CyberJacob> which has "maas-server: 'http://localhost:5240'" in environments.yaml
<CyberJacob> changed that to the ip of the MAAS server
<CyberJacob> (with the port)
<CyberJacob> and everything jumped into life after I pushed the settings
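Pieced together from the exchange above, the working environments.yaml section looked roughly like this; the IP address and OAuth key are placeholders, not values from the session:

```yaml
environments:
  maas:
    type: maas
    # Use the MAAS server's reachable IP (or resolvable hostname) with
    # the port, not the 'http://localhost:5240' shown in the quick-start
    # doc. The bootstrap node must be able to reach this URL too.
    maas-server: 'http://192.168.1.10:80'
    maas-oauth: '<maas-api-key>'
```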
<hazmat> CyberJacob, cool
<arosales> nice
<arosales> roaksoax, hazmat thanks for the debugging session help
<CyberJacob> thanks guys!
<CyberJacob> (or gals)
<arosales> roaksoax, could you update http://maas.ubuntu.com/docs/juju-quick-start.html to add a note for the maas server URL
<arosales> CyberJacob, good luck with maas/juju. Glad to see you are using it :-)
<CyberJacob> arosales: I have no actual use for it, just messing around
<CyberJacob> arosales: next up I'm going to see what use I can find for OpenStack
<arosales> CyberJacob, that in itself is a use
<arosales> :-)
<CyberJacob> I still don't see the advantage of OpenStack over Xen or vmWare
<CyberJacob> but I definitely see the uses of MAAS
<roaksoax> arosales: sure thing
<arosales> roaksoax, specifically to note the maas-server URL can be the IP of the MAAS server (as found in this case).
<arosales> roaksoax, thanks :-)
<arosales> CyberJacob, thanks for helping improve the docs
<roaksoax> arosales: yeah though, those docs seem to be older ones and not from the latest branch
<roaksoax> arosales: but the docs on the website *do* say this "You may need to modify the maas-server setting too; if you're running from the maas package it should be something like http://hostname.example.com/MAAS."
<CyberJacob> arosales: docs fixed, juju working and Egypt without a President, all in one night!
<arosales> and 4th of July tomorrow for the US
<arosales> roaksoax, ah ok. That should be sufficent
<arosales> *sufficient
<arosales> roaksoax, thanks again
<roaksoax> arosales: yeah, i guess people miss it sometimes and don't fully read that section :)
<roaksoax> (including myself, I had to read couple times before I realized that was there :) )
<arosales> ya I would probably fall into that group, lol
<roaksoax> lol :)
<arosales> roaksoax, thanks!
<roaksoax> arosales: np!
<arosales> roaksoax, while I am bugging you
<arosales> :-)
<roaksoax> shoot:)
<arosales> what are your thoughts on adding  https://lists.launchpad.net/maas-devel/msg00808.html to a debug section on the maas docs
<roaksoax> arosales: It has been added actually. They are in the 'docs/troubleshooting.rst' in the maas source
<arosales> roaksoax, but not on the live docs at maas.u.c?
<roaksoax> but not yet published. The latest doc is not published
<arosales> ah ok
<roaksoax> yeah
<arosales> but in the queue which is good :-)
<roaksoax> :)
<CyberJacob> ok, so I'm still having issues
<CyberJacob> ping arosales & hazmat :)
<CyberJacob> think I found the error
<CyberJacob> http://pastebin.ubuntu.com/5841664/
<arosales> CyberJacob, sorry missed your ping
<arosales> CyberJacob, still not deploying services?
<CyberJacob> nope
<CyberJacob> arosales: The agent-state for everything I deploy is still pending
<arosales> CyberJacob, were you trying to expose the service or just deploying?
<CyberJacob> just deploying
<CyberJacob> though I did try an expose as well
<arosales> the traceback ~looks~ like it's coming from a juju expose.
<arosales> Although the issue seems to be stemming from communication with the provider (maas)
<arosales> CyberJacob, could you pastebin a tail -n 200 of /var/log/maas/maas.log and /var/log/juju/provisioning-agent.log on the juju bootstrap server again
<CyberJacob> arosales: Just turned my server off for the night :)
<CyberJacob> arosales: I'll grab it tomorrow for you
<arosales> CyberJacob, ok just ping back in here when you are ready to resume debug
<arosales> note US holiday the next couple of days
<arosales> so it may be quieter in here than usual.
<hazmat> CyberJacob|Away, there are some folks on eu timezones that  will be around though (also in #maas)
#juju 2013-07-04
<UnderControl> Hi, I'm wondering what would be the correct way to update the ownCloud charm? Would sending a MP be the right way?
<UnderControl> (It currently downloads an insecure version in the install/upgrade hook)
<sarnold> UnderControl: that, or file a bug report
<UnderControl> sarnold, Alrighty, thanks.
<jcastro> MP would also be fine
<CyberJacob> hazmat: You still around to help look at my issue?
<CyberJacob> or anybody for that matter
<ahasenack> does anyone know if more than one hook can be running at the same time on the same unit?
<ahasenack> for example, I run some juju add-relation command, and the respective relation-joined hook starts to run on a unit
<ahasenack> but let's say it takes a long time (hours)
<ahasenack> before it finishes, I run another juju add-relation command, for another relation
<ahasenack> what will happen? Will both -joined hooks be running? Will the last one be queued? Will the last juju add-relation command block?
<ahasenack> ok, short answer, the second joined hook will be queued up
<adam_g> ahasenack, yea, AAIK they are serialized but perhaps unpredictably
<adam_g> *AFAIK
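The queueing behaviour ahasenack confirms can be illustrated with a toy model (this is not juju's actual implementation): hook executions submitted while another hook is running wait in a FIFO queue and run strictly one at a time.

```go
package main

import "fmt"

func main() {
	// Pending hook queue: a single worker drains it, so at most one
	// hook runs at any moment and later submissions wait their turn.
	pending := make(chan string, 8)
	pending <- "zookeeper-relation-joined" // long-running hook
	pending <- "db-relation-joined"        // queued behind it
	close(pending)

	for name := range pending {
		// Each hook runs to completion before the next starts.
		fmt.Println("running", name)
	}
}
```

In this model the second `juju add-relation` returns immediately; only the hook execution on the unit is deferred until the first hook finishes.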
<ahasenack> are the old docs archived somewhere?
<ahasenack> I'm missing some info
<ahasenack> ah, in jujucharms.com/docs it seems
#juju 2013-07-05
<marcoceppi> ahasenack: what part of the documentation were you missing?
<ahasenack> marcoceppi: relation-* commands, env vars that are available
<ahasenack> searching for relation-set turned up a blog post, to give you an idea
<ahasenack> in the search box from the juju documentation page
<marcoceppi> ahasenack: okay. we'll need to make a charm author reference doc soon
<ahasenack> yep
<ahasenack> and it still talks about debug-hooks as if they exist
<ahasenack> I filed a bug about that yesterday, but for some reason it got filed in juju-core, not juju-core/docs, so I marked it as invalid and didn't file a new one
<marcoceppi> ahasenack: thanks. you'll need to target the docs series on juju-core for doc bugs
<marcoceppi> probably need to clarify that. tagging docs should work too
<ahasenack> marcoceppi: I really miss manpages for the tools
<ahasenack> marcoceppi: I find myself trying to read the Go source to get the command line options of the tools
<ahasenack> marcoceppi: so this is the url to file a bug report about the docs? https://bugs.launchpad.net/juju-core/docs/ the report a bug link?
<marcoceppi> ahasenack: that should be it
<ahasenack> marcoceppi: so that link is https://bugs.launchpad.net/juju-core/docs/+filebug
<ahasenack> marcoceppi: but when I click on it, the page that loads is https://bugs.launchpad.net/juju-core/+filebug
<marcoceppi> ahasenack: there's a bug for manpages too against Juju-core
<marcoceppi> ahasenack: thanks. I'll update the docs
<ahasenack> marcoceppi: yeah, that one I think I filed
<ahasenack> marcoceppi: so I don't see how doc bugs are different from juju-core bugs, given that url redirect
<marcoceppi> I think it auto fills the targeted series when you report. need to verify though
<marcoceppi> as soon as I figure out the sandbox URL for lp
<ahasenack> ok, lemme try to file the bug about debug-hooks, let's see what happens
<ahasenack> marcoceppi: https://bugs.launchpad.net/juju-core/+bug/1198169 that's what it became
<ahasenack> I don't know where to set it to "docs"
<_mup_> Bug #1198169: debug-hooks should be removed from juju-core documentation <juju-core:New> <https://launchpad.net/bugs/1198169>
<Guest28536> I have an upgrade-charm question
<Guest28536> hm
<Guest28536> freenode in trouble
<marcoceppi> What's  your upgrade question?
<stub> wedgwood_: Do you have an opinion on https://bugs.launchpad.net/charm-helpers/+bug/1195634? The alternative is separate functions for magic & non-magic & different-magic.
<_mup_> Bug #1195634: Better magic for write_file <Charm Helpers:New> <https://launchpad.net/bugs/1195634>
 * wedgwood_ looks
<wedgwood_> stub: that's old stuff from spads' original library. I would personally prefer to separate that out to core.template (or contrib.template) and give people an option of template formats.
<wedgwood_> stub: and I don't really think the magic is better than non-magic
<stub> wedgwood_: That sort of sounds like a +1 for proceeding the way I suggest
<wedgwood> stub: yes, I suppose it is :)
<stub> Just didn't want to waste any time if another approach was preferred ;)
<mariusko_> Hmm: error: unknown constraint "instance-type"
<mariusko_> It is supposed to be supported according to the documentation, but it fails after replacing juju with juju-core
<mgz> mariusko_: you can't use instance-type with juju-core at present
<mgz> just give the cpu/mem characteristics of the sort of machine you want
<mariusko_> Ah ok
<mariusko_> Same problem with the amazon region. How would that be worked around?
<mariusko_> invalid value "ec2-zone=a" for flag --constraints: unknown constraint "ec2-zone"
<mgz> region isn't a constraint, you just put "region" in your config
<mgz> ah, not sure for availability zone
<mgz> can you file a bug on that one?
<mariusko_> Hmm, it is critical to support HA
<mariusko_> Hmm: https://bugs.launchpad.net/juju-core/+bug/1183831
<_mup_> Bug #1183831: ec2 constraints missing <juju-core:Won't Fix> <https://launchpad.net/bugs/1183831>
<mgz> it's also a little wonky as a constraint, because you'd really want different units of the same service across several availability zones, no?
<mariusko_> Yep, that is true
<mgz> you can leave a comment on that bug if you've anything to add
<mgz> I think at the least we should recognize the constraint and print an error saying it's gone on purpose rather than just professing ignorance
<mariusko_> There is actually a workaround: adding haproxy-a, haproxy-b etc, relating them accordingly and placing them in different zones
<mariusko_> For me it is a blocker for moving from juju to juju-core...
<mgz> right, which I think is an argument for just adding it, and getting a better solution later
<mariusko_> Yep
<mariusko_> mgz: https://bugs.launchpad.net/juju-core/+bug/1183831/comments/4
<_mup_> Bug #1183831: ec2 constraints missing <juju-core:Won't Fix> <https://launchpad.net/bugs/1183831>
<mariusko_> Should it be reopened?
<nxvl> hi, i just installed ubuntu server with Virtual Machine Host and want to deploy some virtual machine with juju
<nxvl> is there any documentation on that
<nxvl> ?
<sarnold> nxvl: try these? https://juju.ubuntu.com/docs/
<nxvl> hmm, there is no easy-to-find link from the home page
<nxvl> thanks
<nxvl> will take a look
<nxvl> oh, wait, yes have seen that, it talks about AWS, HP OpenStack and MAAS, but no ubuntu server virtual machine host
<JoseeAntonioR> nxvl: basically, do a 'local' deploy?
<nxvl> JoseeAntonioR: that uses lxc, not kvm
<nxvl> AFAICT
<JoseeAntonioR> erm, yes
<nxvl> jcastro: ^^
<JoseeAntonioR> nxvl: he's on holidays
<nxvl> oh
<ahasenack> nxvl: lxc is not supported in juju-core yet
<nxvl> ahasenack: yeah i don't like lxc anyways
<marcoceppi> nxvl: So juju does orchestration, it doesn't do provisioning
<marcoceppi> You'll need a provisioner (in this case AWS, HP, etc are provisioners) for "bare metal" you can use MAAS (or virtual MAAS which is just MAAS on virtual machines)
<marcoceppi> The only real "provisioner" Juju has (had, coming back soon to juju-core) was the local provider meant for development and testing purposes
<marcoceppi> There's an "ssh provider" coming soon which will allow you to provision any hardware/machine via SSH, which might be of interest to you if MAAS doesn't fit what you're looking for
<sarnold> nxvl: this may also be useful: http://jujucharms.com/~virtual-maasers/precise/virtual-maas
<marcoceppi> Until then MAAS is essentially your only "bare metal" option
<sarnold> you'll be running actual programs several layers of emulation deep, so it won't be quick..
<nxvl> marcoceppi: right, so the thing is: i will be using AWS for final deployment of the app, but i first want to test and write my charms "locally"; for that i have 1 physical server where i want to deploy my test environment using juju so i can develop my internal charms
<marcoceppi> nxvl: Right, so unfortunately at this time the local provider is the only way to do so (outside of MAAS, but that requires more than one piece of hardware). When the SSH provider lands you'd be able to just "deploy" to that machine without needing any prerequisites
<nxvl> so basically i need to test/develop on one machine that will emulate AWS
<marcoceppi> nxvl: You need to test/develop on a machine that utilizes a "provider" level that Juju knows how to talk with - if you're going to be testing the charm
<nxvl> so, what if i hear about this juju charm awesomeness and i want to test it before i mess with my AWS cloud?
<nxvl> i basically can't?
<nxvl> we sysadmins are quite grumpy when it comes to playing with live infrastructure
<marcoceppi> nxvl: You'll just need to use a provider juju knows how to talk to. AWS, HP Cloud, OpenStack, LXC (local provider), or MAAS
<marcoceppi> We plan to add the "here's a machine deploy this charm" provider, but that's not ready yet
<marcoceppi> aka, the SSH Provider
<jpds> nxvl: Set up a few KVM machines, and PXE boot them off a MAAS VM.
<nxvl> but ahasenack just told me that juju-core does not support lxc
<marcoceppi> nxvl: what jpds suggests isn't too difficult, if you've got the time. I was able to cobble together VirtualBox machines with MAAS so straight KVM would be easier
<jpds> nxvl: MAAS VM != LXC, I use KVM.
<nxvl> yeah i prefer kvm
<marcoceppi> nxvl: so, clarification, the local provider will actually create the "VMs" for you. Whereas with MAAS you enlist machines for MAAS to use, then juju will turn them on and deploy to them when it needs to. Two different providers
<nxvl> jpds: so, what you are saying is: install a KVM machine with MAAS main server and then PXE boot a couple of other machine as MAAS nodes and be happy?
<jpds> nxvl: Yes.
<jpds> nxvl: Then point juju at the MAAS main server.
<nxvl> jpds: that's why i like you so much! How is London?
<jpds> nxvl: All good.
<nxvl> marcoceppi: i don't mind creating the MAAS VMs for testing purposes and then leverage on AWS to do that, that's fine
<marcoceppi> Then that would be a fine route to go. The docs on MAAS+Juju are a little light so if you get stuck feel free to pop back in here, I'm sure we can find someone to help you out
<jpds> I do MAAS+juju in VMs all the time.
<adam_g> jamespage, i threw up 3 MPs to charm-helpers, all of which are required to support the new openstack templating stuff
<jamespage> adam_g, ack - I'll look on monday
<adam_g> cool, thanks
<JoseeAntonioR> marcoceppi: hey, have a sec so check my code?
<JoseeAntonioR> s/so/to
#juju 2013-07-06
<melmoth> anyone used to charm-helpers ? (lp:charm-helpers) i dont really understand how to use this...
#juju 2014-06-30
<AskUbuntu_> Problem installing MAAS nodes on Intel NUC | http://askubuntu.com/q/489786
<gnuoy`> jamespage, I have a few mps attached to a couple of bugs if you get a moment. Bug#1335760 & Bug#1335762
<_mup_> Bug #1335760: neutron metadata agent service fails if https-service-endpoints is enabled on the keystone charm <Juju Charms Collection:New for gnuoy> <https://launchpad.net/bugs/1335760>
<_mup_> Bug #1335762: neutron-api charm does not support https <Juju Charms Collection:New> <https://launchpad.net/bugs/1335762>
<jamespage> gnuoy`, ack
<jamespage> that first one points me at something I need to fix in the neutron-gateway for the network-splits work
<jamespage> gnuoy`, reviewed - 2 x +1 and 1 needs some more work
<gnuoy`> jamespage, ack, thanks
<gnuoy`> jamespage, I've updated those branches with fixes for the issues you mentioned
<jamespage> gnuoy`, all looks good to me
<gnuoy`> jamespage, thanks I'll merge them into next
<jamespage> gnuoy`, +1
<jamespage> gnuoy`, https://code.launchpad.net/~james-page/charms/trusty/rabbitmq-server/resync-plus-makefile-helpers/+merge/224973
<jamespage> if you have 2 mins
<gnuoy`> jamespage, sure
<jamespage> gnuoy`, also this MP (which is not yet approved/review/landed) impacts on the neutron-api charm
<jamespage> https://code.launchpad.net/~openstack-charmers/charms/precise/nova-cloud-controller/worker-configuration/+merge/221683
<gnuoy`> jamespage, Shouldn't we block on lint even if your mp doesn't introduce that lint ?
<jamespage> gnuoy`, yeah - lemme sort that out
<gnuoy`> ta
<jamespage> gnuoy`, done
<AskUbuntu_> ubuntu juju local configure failed | http://askubuntu.com/q/489917
<gnuoy`> jamespage, rabbit mp approved
<jamespage> gnuoy`, ta
<schegi> jamespage, got something done for the osd journal devices but still got some questions.
<jamespage> schegi, ok
<jamespage> marcoceppi, if you are around today pls can we talk mysql and networking stuff :-)
<schegi> jamespage, the thing with the osd devices was quite straightforward, but i ran into problems when writing the config back to ceph.conf like it is done in emit_cephconf. emit_cephconf is called before the osds are osdized. The problem is that once an osd is osdized, getting all the information that should be present in ceph.conf seems very complicated. on the one hand i got the devices list as defined in the config (journal and
<schegi> but at this point i have no clue which osd.X is actually using which device. getting all osd_ids is trivial but getting the used journal and data device given an osd_id might only be possible by using ceph-disk list, which can't be called clusterwise but only per node.
<lazypower> sparkiegeek: awesome! I'll check it out when i'm done with my morning email run
<sparkiegeek> lazypower: thank ye kindly
<schegi> jamespage, back at the desk so if you got some time notify me
<jamespage> schegi, give me a few
<jamespage> just wrapping something else up
<bloodearnest> bcsaller: heya - when you playing with using docker in a charm, did you try with that charm on an lxc unit?
<bloodearnest> am encountering some issues
<schegi> jamespage, kk
<jamespage> schegi, ok - back
 * jamespage reads
<schegi> jamespage, ok
<jamespage> schegi, Q - why do you need to store the journal information in ceph.conf?
<schegi> as far as i can tell from the comments in the code, the ceph.conf is written as a kind of backup
<schegi> and i thought it would be nice to also have [osd.XX] sections in the ceph.conf. to get it working this is not necessary
<jamespage> schegi, I've tried to keep it minimal in the charms and use ceph-disk and the OS integration it uses as much as possible
<jamespage> schegi, I think maintaining a consistent ceph.conf in a large cluster will be hard to implement, and not provide a huge amount of value.
<schegi> This is true
<jamespage> schegi, ceph-disk can initialize blocks on a journal device, and it links to the device by UUID in /var/lib/ceph/osd/ceph-N/journal where N is the OSD number
<jamespage> schegi, I guess the trick is distributing journals over available SSDs automatically during provisioning right?
<schegi> i just added an osd-journal-devices parameter to the config and build device/journal-device tuples.
<schegi> which i then pass to ceph.osdize
<schegi> seems to work. now working on a modulo functionality to distribute multiple journals on one given device if more devices (osds) are defined than journal devices. But i am not sure how ceph behaves if a given journal device has not enough space (too many journals and/or too big a journal size), have to experiment a bit
<jamespage> schegi, well we have the journal size specified in configuration as well
<jamespage> schegi, so we can calculate the number of OSDs that a specific journal device should be able to accommodate
<jamespage> schegi, as osd-devices is a whitelist, we need to determine which devices are present on the current unit, and then distribute their journals evenly across the available journal devices
<jamespage> schegi, I would suggest that if there is insufficient capacity on the journal devices, we error out the hook so an admin can see this
<jamespage> rather than reverting to on-OSD-disk journal
<schegi> ok
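The scheme jamespage and schegi converge on (round-robin journals over the journal devices, error out on insufficient capacity) can be sketched as below. The function name and the capacity model are illustrative assumptions, not the charm's actual code, which is written in Python:

```go
package main

import "fmt"

// assignJournals spreads OSD data devices across the available journal
// devices round-robin, failing if the journal devices cannot hold one
// journal per OSD (so an admin sees the error instead of the charm
// silently falling back to on-OSD-disk journals).
func assignJournals(osdDevices, journalDevices []string,
	journalSizeMB, journalCapacityMB int) (map[string]string, error) {

	if len(journalDevices) > 0 {
		perDevice := journalCapacityMB / journalSizeMB
		if perDevice*len(journalDevices) < len(osdDevices) {
			return nil, fmt.Errorf("insufficient journal capacity for %d OSDs",
				len(osdDevices))
		}
	}
	assignment := make(map[string]string, len(osdDevices))
	for i, osd := range osdDevices {
		if len(journalDevices) == 0 {
			assignment[osd] = "" // journal co-located on the OSD disk
			continue
		}
		assignment[osd] = journalDevices[i%len(journalDevices)]
	}
	return assignment, nil
}

func main() {
	m, err := assignJournals(
		[]string{"/dev/sdb", "/dev/sdc", "/dev/sdd"},
		[]string{"/dev/ssd0", "/dev/ssd1"},
		1024, 4096)
	fmt.Println(m, err)
}
```

With three OSD devices and two journal SSDs, the third journal wraps back onto the first SSD; with too many OSDs for the configured journal size, the hook would fail loudly as jamespage suggests.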
<rbasak> sinzui: around? I have a 1.18.4 SRU prepared for upload to trusty-proposed. I wonder if you can advise what testing I can do on it, please?
<rbasak> I wondered how I might align SRU verification testing so that it's essentially the same as the testing you already do on your (upstream) stable releases.
<lazypower> sparkiegeek: merged and pushed
<sparkiegeek> lazypower: cheers!
<lazypower> Thanks for the update :) That'll be a lot friendlier in CI / new users
<d4rkn3t> hello, I need help with juju, is there someone who can help me? thanks
<d4rkn3t> I've run the command "juju bootstrap --upload-tools -e maas --debug". during the debug juju tries to connect to the node "Node02Cluster01Svr:22". the node changes its status from ready to allocated, with the OS running. after 10 minutes the error is "ERROR juju.cmd supercommand.go:305 waited for 10m0s without being able to connect: ssh: Could not resolve hostname node02cluster01svr: Name or service not known". In the Region Controller I've set DNS and DHCP
<sinzui> rbasak, I am not sure what to advise, CI tested 1.18.4 with http://juju-ci.vapour.ws:8080/view/Architectures%20and%20Series/  , http://juju-ci.vapour.ws:8080/view/Functions/  , http://juju-ci.vapour.ws:8080/view/Providers/ .
<mbruzek> Does anyone know if the pprint plugin works for hp-cloud?
<sinzui> rbasak, Those tests verified the package CI created with the streams that could be published...but since streams are now published, and 1.18.4 was tested, I think you want a set of verification tests that demonstrate that ubuntu's 1.18.4 is still compatible
<sinzui> oh
<AskUbuntu_> juju bootstrap using maas unable to ssh into nodes | http://askubuntu.com/q/490000
<mbruzek> Am I using the plugin incorrectly?  http://pastebin.ubuntu.com/7726582/
<rbasak> sinzui: I've been looking there, but they seem to all relate to 1.20? Or is there a particular build in the past that is the 1.18 branch?
<mbruzek> It seems to work when I am using Juju local.
<rbasak> sinzui: I'd like to be able to somehow run all the tests that you consider required for a stable release, except against what I'm about to upload.
<rbasak> sinzui: (or, if needed, I could do it after it appears in trusty-proposed)
<sinzui> rbasak, This is the log of the 1.18.4 release. I think the 1.18.1 upgrade verification and 1.18.4 deploy are relevant. That demonstrates that version of juju works with each cloud, each stream, and the cloud images: https://docs.google.com/a/canonical.com/document/d/1YtE-V83H20RVW8Gd8byPQyULU5nP0KOMUbESk9_UUNY/edit#heading=h.dp1wyrj1wujg
<sinzui> rbasak, well then the CI set of tests are more relevant. They did use the package
 * rbasak looks
<marcoceppi> jamespage: I am around!
<jamespage> marcoceppi, w00t!
<jamespage> marcoceppi, so two things
<jamespage> 1) hows the mysql charm redux going?
<jamespage> 2) I need to hack it
<jamespage> marcoceppi, re 2) specifically I want to be able to make 'data' traffic run over a specific network
<jamespage> I have a few ideas on how but wanted to discuss with you first.
<marcoceppi> 1) Rewrite has slowed because of other work, unfortunately
<marcoceppi> 2) It's pretty hackable, I'm putting a lot of the new code in to charm-helpers under contrib/mysql
<lazypower> mbruzek: juju pprint should work on *any* provider
<marcoceppi> I'll push up an updated branch of both today
<jamespage> marcoceppi, excellent
<mbruzek> lazypower, According to my pastebin was I using it incorrectly?
<jamespage> marcoceppi, I was hoping to take a similar approach to the one I took for rabbitmq
<marcoceppi> mbruzek: it's just `juju pprint` not `juju status pprint`
<jamespage> which is to override the private-address setting on the amqp relation with a different one
<marcoceppi> jamespage: that sounds sane enough
<jamespage> marcoceppi, but its more complex due to the grants that get created for each user/accessing server
<mbruzek> thanks marcoceppi
<marcoceppi> jamespage: ah, for the user/db perms
<jamespage> yes
<marcoceppi> should still be doable with a little work. Are you going to make this a configuration option?
<jamespage> marcoceppi, I can do it within the scope of the existing relation data on the shared-db relation type
<jamespage> marcoceppi, yes
<jamespage> to "configuration option"
<sinzui> rbasak, There is no easy way to make CI test a package it didn't create, but I can imagine a new job that starts with a set of debs. The provider and function tests just pick up debs from the publish-revision job. The arch and series tests for local also pick up the deb. The unittests also accept a tarball though
<marcoceppi> jamespage: cool, yeah I'll push up what I have after I get the tests passing again so you want to take a look
<jamespage> marcoceppi,  so default is as now; turn this on and it will switch to using a new network, on the assumption that the service unit is actually connected to the configured network
<sparkiegeek> dpb1: this really is a terrible idea
<rbasak> sinzui: that sounds promising. How difficult / how much work would it be for you to add corresponding jobs that enables -proposed and uses apt-get install instead of installing debs directly?
<rbasak> sinzui: then we could have a process where I upload to trusty-proposed, and then we just need to run those jobs to get to verification-done state.
<sinzui> rbasak, I think the solution is to revise the build-revision and publish-revision jobs to accept an alternate start param. Build rev would pickup the packages, publish moves them to a location to the tests.
<sinzui> rbasak, I hesitate to properly install the proposed package because we'd need to add a way to restore the previous packages, since the machines are shared with other procs. We manually control when a package is stable.
<dpb1> marcoceppi, jcastro: sorry 'bout this, if someone could look at this i would appreciate it.  selfsigned cert is broken (by my last commit) for apache2 (missing dependency on precise): https://code.launchpad.net/~davidpbritton/charms/precise/apache2/1335473-pyasn1-libs
<marcoceppi> dpb1: taking a look now
<sinzui> rbasak, I can certainly add support to test your packages starting this week. I am dedicated to sorting out this kind of problem for Ubuntu this month
<dpb1> marcoceppi: thx
<jcastro> mbruzek, you're back from holiday I take it?
<rbasak> sinzui: thanks. Sorry, I assumed that the tests could run destructively.
<mbruzek> jcastro, yes
<rbasak> sinzui: so you're OK with installing debs directly, but don't want to add -proposed itself?
<rbasak> sinzui: in any case, I think we're agreeing on some kind of vague verification plan here, that we need to iron out and implement?
<rbasak> sinzui: if that's right, then shall I go ahead and upload and try and get this update landed in trusty-proposed, on the basis that we'll sort out a process and get it verified between us?
<sinzui> rbasak, The tests extract the package, set the paths to the package, then execute the test. We don't install to the system location
<rbasak> sinzui: ah, I see. Well I suppose the existing archive dep8 tests will make sure that package is hooked up to the user OK, so I guess that's OK for now.
<rbasak> (ie. the user can call the client)
<rbasak> Then if these tests verify that juju itself is functional, given that it's essentially a static binary I don't see a problem not testing it from the normally installed location.
<sinzui> rbasak, I definitely want to allow you or me to pass in built debs/archives and reverify that things work. When clouds change. I do this manually to verify the cloud broke stable juju
<sinzui> we have simple automated tests for cloud health; comprehensive testing is done by hand
<rbasak> sinzui: sounds like a plan. I'll get this upload done today then - thanks.
<jcastro> hey hazmat
<jcastro> I see you pushed up personal branches of logstash and kibana for trusty
<hazmat> jcastro, g'morning
<jcastro> How are those working out for you? I'd like the get them up in trusty for real alongside ES
<hazmat> jcastro, i'd say the logstash one is still a work in progress. i need to sync up with charles, he was looking at them.. atm just trying to clear out post-vacation tasks.
<jcastro> ack
<jcastro> welcome back
<jcastro> hazmat, I have a demo with ES people at OSCON, think we can sort it by then?
<lazypower> hazmat: i haven't leveraged any brain power against them yet. I saw that you were going for monitoring - i need to get the basic stuff in there like the forwarder and it should be g2g for basic deployment no?
<lazypower> well, that and some tests
<hazmat> jcastro, definitely
<jcastro> hazmat, if you have demo ideas or useful bundles with all the ELK stuff lmk.
<jamespage> marcoceppi, ping me a branch for mysql once you have one up - not going to hack into the current version - its like crawling through mud.
<jamespage> gnuoy`, how do you feel about a context mutating data on the relation its associated with?
<gnuoy`> jamespage, I don't follow
<jamespage> gnuoy`, ok - heres the context
<jamespage> we want access to mysql databases to flow over a specific network
<jamespage> I just want to configure that in the mysql charm
<jamespage> however grants are made based on remote host addresses; so the related charms need to know which IP to provide
<jamespage> http://paste.ubuntu.com/7726892/
<jamespage> this works via a deferred hook execution on a -changed hook
<jamespage> the mysql charm would provide the required 'access-network' relation data; the remote charms would then provide their IP on said network back
<jamespage> gnuoy`, I could write this into the SharedDBContext
<jamespage> but that would mean invoking a relation_set on that relation
<gnuoy`> jamespage, I have no strong ideological objection to that being in the hook context
<jamespage> gnuoy`, ack
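The exchange jamespage sketches (mysql advertises an `access-network` over the relation; the remote charm answers with its address on that network instead of its private-address) can be mocked as a small shell sketch. `relation-get`/`relation-set` only exist inside a real hook context, so they are stubbed here, and the network/address values are illustrative:

```shell
# shared-db-relation-changed, sketched with mocked hook tools.
relation_get() {   # mock: a real hook runs `relation-get <key>`
    [ "$1" = "access-network" ] && echo "10.20.0.0/16"
}
relation_set() {   # mock: a real hook runs `relation-set key=value`
    echo "relation-set $*"
}
ip_on_network() {  # mock: a real hook might parse `ip -o addr` for a
    echo "10.20.3.7"   # local address inside the given CIDR
}

# If mysql advertised an access-network, answer with our address on
# that network; otherwise juju's default private-address is used.
net=$(relation_get access-network)
if [ -n "$net" ]; then
    relation_set hostname="$(ip_on_network "$net")"
fi
```

The deferred part of the pattern is that mysql only creates the grants on a later `-changed` invocation, once this answer arrives.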
<jcastro> jose, and niedbalski
<jcastro> hey, antonio told me you guys wanted to get in on the review schedule for charms?
<niedbalski> jcastro, i have been doing it unofficially, count me in.
<jcastro> ok
<jcastro> I'll add you to the calendar
<niedbalski> jcastro, great!
<jcastro> how's next week look like to you?
<jcastro> you'll be with mbruzek
<niedbalski> jcastro, i prefer this one, next week i will be on Cts sprint
<lazypower> jcastro: not a bad idea. asanjar is out for spark conf.
<lazypower> niedbalski: that pairs you with bcsaller.
<jcastro> ok
<jcastro> I'll take next week then with bruzer
 * mbruzek waves
* lazypower changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: bcsaller & niedbalski || News and stuff: http://reddit.com/r/juju
<jcastro> niedbalski, basically we usually do top to bottom in the queue (link in the topic)
<lazypower> niedbalski: if you run into anything you need help with, feel free to ping me if bcsaller isn't available.
<niedbalski> lazypower, jcastro no worries.
<niedbalski> thanks !
<jose> jcastro: yeah, sure, I'd love to help with that
<jcastro> marcoceppi, oh I forgot to remind you
<jcastro> when you're working on the mysql charm
<jcastro> keep in mind the open bugs on the charm (duh)
<jose> jcastro: you can schedule me for next week with mbruzek if you want
<jcastro> I'll put you after, I go to oscon the week after that and I wanted to pull my weight. :)
<lazypower> jose: you're with me dude
<jose> oh cool
<lazypower> we're gonna rock that queue like it's never been rocked before
<jcastro> https://bugs.launchpad.net/charms/+source/mediawiki/+bug/1298674
<_mup_> Bug #1298674: Mediawiki defaults to PPA  use <audit> <mediawiki (Juju Charms Collection):Triaged> <https://launchpad.net/bugs/1298674>
<jcastro> if you guys run out of things to do
<lazypower> Pretty sure that's been patched
<jcastro> this bug is so embarrassing!
<lazypower> i think its a stale bug
<jcastro> I just looked in the store
<jcastro> PPA is there. :-/
<lazypower> booo
<lazypower> oh thats right, its got some clint fixes in there
<jose> I tried to fix it but pushed to precise instead :P
<jose> I'll take a look after I take this exam
<jose> laters!
<jcastro> ahasenack, can you resolve this bug as appropriate? https://bugs.launchpad.net/charms/+bug/1124471
<_mup_> Bug #1124471: swift-proxy fails to install when source is a ppa <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1124471>
<jcastro> trying to clean up our bugs
<jcastro> jose, yeah if it's all set to go we should push to trusty
<jose> cool, I'll check in a couple hors
<mattyw> has anyone got juju working with devstack?
<mattyw> I'm stuck on what to configure as the region on the env.yaml
<sherl0ck_> Hello.. We have deployed an OpenStack environment using MAAS and Juju. I had a question - is it possible to shut down the compute blade and bring it back up safely?
<ahasenack> jcastro: done
<marcoceppi> sherl0ck_: if you restart the machine, it should re-register in Juju properly
<marcoceppi> sherl0ck_: or are you referring more to a safe reboot within OpenStack context?
<sherl0ck_> Hey thanks marcoceppi
<sherl0ck_> I meant to shut down (power down) the physical node completely and then bring it back up. Do you think Juju will still identify it as a compute node?
<marcoceppi> sherl0ck_: yes, there's an upstart job that will start on boot which will re-register it in the juju environment
<marcoceppi> sherl0ck_: until then, it will show up as "DOWN" in juju status
<jose> jcastro: hey, about bug 1170034, I have a duplicate (bug 1309980); https://code.launchpad.net/~jose/charms/precise/wordpress/fix-1309980/+merge/216568 is in the works
<_mup_> Bug #1170034: integration with memcached broke <wordpress (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1170034>
<_mup_> Bug #1309980: Relationship to memcache seems incomplete <wordpress (Juju Charms Collection):In Progress by jose> <https://launchpad.net/bugs/1309980>
<avoine> marcoceppi: Do you know where I could find a generic jenkins job for testing charm?
<marcoceppi> avoine: not off the top of my head, tvansteenburgh did you get access to the old jenkins setup?
<avoine> marcoceppi: you're not using jenkins for automated testing anymore?
<marcoceppi> avoine: we are, we're just re-vamping things at the moment
<avoine> ok ok
<jose> tvansteenburgh: hey, did you get to take a look at that test I linked last week?
<tvansteenburgh> marcoceppi: i got access to what sinzui set up for us
<tvansteenburgh> jose: no, i never got back to that, is it still failing?
<jose> tvansteenburgh: it is, the relation hooks are not being run even though the relation is there
<jose> which is kinda concerning - may happen for other tests too
<tvansteenburgh> jose: i've never seen it happen.
<tvansteenburgh> jose: but, i'll pull the branch and take a closer look
<jose> tvansteenburgh: if there's a chance you could check whether the hooks are running on your end, that'd be awesome
<jose> is anyone around having troubles with AWS? I'm getting 502s and 503s a lot
<tvansteenburgh> jose: i've tried to run the tests a couple times and the install hook keeps failing with this in the log: http://paste.ubuntu.com/7728539/
<tvansteenburgh> any ideas?
<jose> tvansteenburgh: apt-get update
<tvansteenburgh> tried that
<jose> hmm, lemme check
<AskUbuntu_> deploying charms using juju fails with tcp connection timed out | http://askubuntu.com/q/490141
<tvansteenburgh> jose: i guess it could be a transient problem with the archives?
<jose> tvansteenburgh: not sure. last week the issue was resolved with apt-get update and retrying the hook
<jose> let me guess - you're in the local provider?
<tvansteenburgh> yeah
<jose> then that may be it
<tvansteenburgh> there's an apt-get update in the install hook itself even
<jose> yeah, but I'm not sure if it's getting the charm from the charm store or locally
<tvansteenburgh> ok, i'll try some other stuff
<jose> cool, thanks
<jose> I wouldn't suggest AWS as I've been seeing some issues lately
<mbruzek> tvansteenburgh, were you able to resolve this issue?
<mbruzek> tvansteenburgh, I am hitting something similar too on local provider on Power.. http://pastebin.ubuntu.com/7728552/
<mwhudson> i don't have context, but that means you need to run apt-get update maybe?
<mwhudson> trusty-updates has 2:4.1.6+dfsg-1ubuntu2.14.04.2 now
<mwhudson> that paste is trying to install 2:4.1.6+dfsg-1ubuntu2.14.04.1
<mwhudson> and yeah tvansteenburgh's issue looks similar
<tvansteenburgh> mbruzek: no, haven't resolved
<mbruzek> tvansteenburgh, I thought my problem was related to setting proxies with juju but you have the same problem as I do.  Is it fair to assume you did not set proxies in Juju?
<tvansteenburgh> mbruzek: correct, no proxy
<tvansteenburgh> mbruzek: i've got to EOD, will pick this back up in the morning
<mbruzek> tvansteenburgh, Thanks for the clarification, marco thinks we need to open a defect.
<mbruzek> tvansteenburgh, is your scenario reproducible?
<tvansteenburgh> so far, every time
<mbruzek> tvansteenburgh, I am also going to EOD, we should talk tomorrow morning
<tvansteenburgh> mbruzek: sounds good, ttyl
#juju 2014-07-01
<jose> bcsaller: hey, if you're still around it'd be awesome if you could take a look at https://code.launchpad.net/~jose/charms/precise/wordpress/fix-1309980
<AskUbuntu_> Juju Icehouse-Openstack LXC | http://askubuntu.com/q/490164
<yaell> Hi, I am writing a basic charm that should just take a given tar file, open it, and install a program. 2 questions: first, what is the best place to put the tar file? Second: this charm does not provide or require anything. Can a charm not provide and not require? Thanks!
<lazypower> yaell: every charm should provide something. And the placement of the tar is up to you as the charm author.
<yaell> Thanks! So if the charm just installs a program what should it provide?
<lazypower> That depends on what it's installing.
<lazypower> but ideally it should provide an interface to interact with the application. eg: if you're installing a minecraft server - it should expose an interface that sends the service ip, and port for connections.
<yaell> I'm installing a driver and toolkit.
<lazypower> hmm. is for like an SDK type installation?
<lazypower> or is this more for ' i have a service that depends on this driver and toolkit, and i need this everywhere in my network'?
<yaell> It's a driver that supports a nic so yes I depend on it and need it in my network
<lazypower> That sounds like a prime candidate for a subordinate. https://juju.ubuntu.com/docs/authors-subordinate-services.html
<lazypower> the idea behind a subordinate is it doesn't actually create a machine to occupy on the service map. it is deployed within scope:container of other services. So consider the scenario you deploy a Redis server - and attach this subordinate to the redis instance. The subordinate is deployed to any machine of the redis service that gets spun up, as its a subordinate attached to the redis cluster.
<lazypower> That removes the requirement of having a provides; you consume the implicit juju-info relationship when defining your subordinate. Now it adheres to the proper interface guidelines, and your charm is portable to any service (and therefore any machine)
<yaell> Ok, I will take a look at it, sounds like it is the solution to my problem :) Thanks a lot!
<lazypower> No problem
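As a concrete sketch of what lazypower describes, a subordinate charm's metadata declares `subordinate: true` plus a container-scoped requires on the implicit `juju-info` interface. The charm name and summary below are hypothetical; the shell wrapper just writes the fragment so it can be inspected:

```shell
# Write a minimal metadata.yaml for a hypothetical "nic-driver"
# subordinate charm; field values besides the two flagged comments
# are illustrative.
cat > /tmp/metadata.yaml <<'EOF'
name: nic-driver
summary: installs a NIC driver and toolkit on whatever host it joins
subordinate: true
requires:
  host:
    interface: juju-info   # implicit interface every charm provides
    scope: container       # deploy inside the principal's machine
EOF
```

With that in place, `juju add-relation redis nic-driver` would place a unit of the subordinate on every machine running a redis unit.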
<schegi> someone here with knowledge about network settings in a bootstrapped juju node?
<schegi> trying to change network setting and not able to ifdown primary interface
<jamespage> schegi, is it a maas environment?
<jamespage> if so then Juju will have automatically created a bridge (eth0 -> br0) for use with LXC instances
<schegi> got the following problem: first, my interfaces use the new naming scheme (em1, em2, p1p2, etc.). 2nd problem: once I have bootstrapped my juju node, I'd like to change the networks to use bonding on all 1g interfaces. I added a new interfaces file and brought up the interface manually, which works fine, but on reboot the node hangs somewhere in cloud-init net something
<schegi> jamespage and yes its a maas environment
<schegi> do i have to change it somewhere in MAAS?
<jamespage> negronjl, the last commit you landed for charm-helpers breaks the unit tests btw
<jamespage> negronjl, I've reverted it - please ask the author to try again :-)
<jamespage> schegi, no - maas and juju don't support manipulation of network devices in this way (yet)
<schegi> jamespage, yes i thought so, but as i said, rebooting the bootstrapped juju node with modified interfaces and bonding loaded as a module doesn't bring up the network interfaces and bonds, and later on it just hangs in cloud-init-nonet. seems that cloud-init somehow changes /e/n/i on reboot. is that right??
<schegi> at least it changes the first interface. because if i leave this interface as it is all bonds come up and everything works fine
<jamespage> schegi, I'd hope not after first boot
<jamespage> schegi, its possible juju is mangling something
<schegi> no, after juju bootstrap the node is reachable, but if i then change the first networking interface in /e/n/i and reboot, the machine is not reachable any more
<jamespage> schegi, yeah - juju might be doing something on the reboot
<schegi> jamespage, is this configurable? somehow? where to look up what juju is doing any idea?
<jamespage> schegi, also it adds something into /e/n/interfaces.d for br0 - that might be causing this
<jamespage> schegi, its not configurable
<jamespage> at least I don't think it is
 * jamespage may be wrong
<Caguax_> Hi all, anyone want to help with a juju error while bootstrapping juju on maas ?
<Caguax_> http://paste.openstack.org/show/85227/
<lazypower> Caguax_: that was a known bug, stable has moved to 1.18.4 - can you upgrade and retry the bootstrap?
<thresh> hello everyone, any plans for nginx charm?
<Caguax_> Let me try that, Just upgrade juju, right ?
<lazypower> sudo apt-get update && sudo apt-get upgrade should catch it
<lazypower> thats only necessary if you've got a currently running/deployed juju thats behind the current stable tip
<Caguax_> sudo apt-get update && sudo apt-get upgrade, did not catch it.
<Caguax_> root@luflores-maas:~# juju version
<Caguax_> 1.18.1-trusty-amd64
<Caguax_> root@luflores-maas:~#
<lazypower> hmmm... do you have an apt-proxy?
<lazypower> it may be behind in the packages.
<lazypower> or perhaps its pinned at 1.18.1? there's quite a few reasons why it wouldn't be updating - and without having insight i dont know what to recommend.
<Caguax_> This is just a new install. Trusty was installed, then maas and juju
<lazypower> well thats the shipping version, there's definately an update though.
<mattyw> stokachu, ping?
<stokachu> mattyb, pong
<stokachu> mattyw, pong
<Caguax_> Let me figureout the how to upgrade and I will try again
<mattyw> stokachu, hey there, I read your blog post earlier about the cloud-installer (I'm the mattyw that posted that bug on github this morning)
<lazypower> Sounds good Caguax_
<stokachu> mattyw, hey man, i got your bug report and was curious if `juju bootstrap` works outside of the installer?
<mattyw> stokachu, quick question I had, I have two machines that I'd like to try the multi install on. One of them has 8GB ram but the other one only has 2.5GB I wondered if there's any order I should do the install to get the best results?
<mattyw> stokachu, juju hadn't actually been installed, it was a fresh install of trusty
<stokachu> mattyw, so the machine with 8GB should run all your services except for nova-compute
<jrwren> sudo apt-get update && sudo apt-get upgrade should have upgraded it. Maybe you don't have the trusty-updates repository enabled?
<stokachu> mattyw, actually flip that
<stokachu> the 2.5G ram should run everything but nova-compute
<mattyw> stokachu, and I need to install maas first right?
<stokachu> mattyw, the installer will install maas for you and set it up
<stokachu> you'll just need to manually commission a machine
<stokachu> so basically run the installer and it'll go to the status screen and wait for a machine to be allocated via maas
<stokachu> at that point commission your 2.5G machine first and the installer will deploy all the services on there
<stokachu> then commission your 8G for nova-compute
<stokachu> you could commission both machines at the same time as the installer has constraints set for what it needs
<stokachu> so nova-compute should automatically use the 8G machine
<mattyw> stokachu, ok great
<mattyw> stokachu, I'll check it out and see how it goes
<stokachu> mattyw, cool man, hit me if you need help im EST and around all week
<stokachu> hit me up i should say
<mattyw> stokachu, ok great, thanks very much for your help
<stokachu> anytime :)
<urulama> hi all
<urulama> question: charms deployed in "manual" env are not deployed using LXC, right?
<schegi> jamespage, seems that the bootstrapped node only brings up the bonds after reboot but not the included interfaces
<urulama> question2: is it possible to setup manual env, with one machine in this environment deploying charms as LXCs?
<schegi> urulama, you can always use the --to parameter when deploying charms to define which machine a charm should be deployed to, and there is also a syntax to define the lxc host and container together with --to
<jcastro_> stokachu, may I have permission to reuse parts of your blog post on the cloud installer to answer some askubuntu questions?
<stokachu> jcastro, of course!
<lazypower> urulama: right, charms deployed in a manual env are not necessarily deployed using LXC.
<lazypower> and as schegi pointed out, using the --to flag with lxc:0 (for example) will deploy to an lxc container on machine 0.
<lazypower> you also have the opportunity of using --to kvm:0 if you need kvm containers if the host supports it.
<urulama> lazypower: so with juju-local installed, and with --to lxc:N, the charm is automatically deployed using LXC?
<urulama> lazypower: (installed on that machine)
<lazypower> are you wanting to use the local provider?
<urulama> lazypower: no, no, just curious how to mix environments. using LXC on say machine 1, but KVM on machine 2
<lazypower> in a manual setup, to deploy to a single machine in the manual environment you dont need to have juju-local installed. but yes, if you specify on the CLI --to lxc:# it will deploy to an lxc container on that machine.
<urulama> lazypower: tnx
<cory_fu> What terminal do you all use?  Gnome terminal is ok, but I prefer an underline cursor but I'm getting tired of it being the same size as an underscore and thus not being able to see the cursor whenever it's on an underscore.
<lazypower> cory_fu: no help here, i use gnome_terminal
<mbruzek> cory_fu, you can change the shape of the cursor
<mbruzek> in gnome terminal
<mbruzek> Block, I-beam, Underscore
<cory_fu> mbruzek: The only options are a full-character-height block, vertical i-beam, and an underline that is indistinguishable from a typed underscore
<mbruzek> cory_fu, what is wrong with the block?
<cory_fu> I want an underline-style cursor, but to actually be able to see the damned thing when the cursor is on an underscore
<cory_fu> Block is big and ugly
<cory_fu> :)
<mbruzek> can you change the font to have a slightly different underscore?  or does the terminal use the underscore character?
<luflores> Hi all... I updated to 1.18.4 and now I am getting this... http://paste.openstack.org/show/85240/  NTP issue?
<cory_fu> Hrm.  That's a good idea that I hadn't tried, mbruzek.  Thanks
<mbruzek> cory_fu, either that or just accept the ugly block cursor, or the slender i-beam
<cory_fu> Just seems strange that the options are so limited
<lazypower> luflores: is this with a bootstrap?
<luflores> lazypower: Yes, I ran juju bootstrap --upload-tools
<lazypower> hmm, and you're getting a time mismatch - yeah sounds like one of the clocks are skewed
<lazypower> who's operating in the future? :)
<luflores> lazypower: Let me check my UCS Manager and set ntp. I have NTP running on the MAAS/JUJU server already
<lazypower> luflores: i would think thats coming from whichever work station you have running the bootstrap command. since you told it to --upload-tools - its generating that package and pushing the tools @ your bootstrap node
<jcastro> stokachu, for the cloud installer, what's the IP/url of the dashboard? and is there a login and password?
<jcastro> (adding in some detail to your post)
<luflores> hmm if I don't run the --upload-tools I get  curl: (6) Could not resolve host: streams.canonical.com, but the machine can resolve the name
<stokachu> jcastro, so the horizon dashboard url will be displayed at the bottom of the status screen
<stokachu> and you are prompted to enter a password in the initial install dialog
<stokachu> so that would be used to login to horizon as the admin
<stokachu> there is also an ubuntu user with the same password that can be used to deploy instances under that account
<jcastro> ack
<stokachu> the only thing im not able to change is the juju gui password
<jcastro> isn't it just a config option?
<stokachu> thats what i thought but it apparently isnt the same for the login part
<stokachu> i think that only reads the jenv file
<jcastro> you should file a bug, that seems wrong/weird
<stokachu> i filed a bug on it awhile back and was told it wouldnt change
<jcastro> oh, :(
<stokachu> lemme see if i can find it
<jcastro> well, I think UX wise, if someone is doing a cloud installation and wants to set all the passwords in one go, that seems the best way to do it
<stokachu> yea same here
<stokachu> jcastro, https://bugs.launchpad.net/juju-gui/+bug/1317109
<_mup_> Bug #1317109: unable to override login password <add-user-story> <cloud-installer> <juju-core:Triaged> <juju-gui:Won't Fix> <https://launchpad.net/bugs/1317109>
<stokachu> ah looks like thumper updated it recently
<jcastro> I wouldn't call May recent, heh
<stokachu> but i dont know how that will affect the gui login
<jcastro> we should add this to the cross call
<jcastro> I'm on it.
<stokachu> jcastro, cool, that will get discussed this thursday?
<lazypower> luflores: whicih machine? the one being bootstrapped or your workstation?
<jcastro> stokachu, I added it to the agenda
<stokachu> jcastro, thanks!
<lazypower> luflores: that would be the onus of the bootstrapped machine, and iirc - that's all being routed through the MAAS dns configured services. which *should* be forwarding those requests
<lazypower> but i'm not positive it is, its been a bit since i've looked at the default maas dns setup
<jcastro> stokachu, huge major flaws always get fixed, it's the little stuff like this that papercuts people to death
<jcastro> so when I see them I try to stomp them with fury
<stokachu> haha i agree
<jcastro> stokachu, the answer could very well be "wait for juju user and auth to land", but we can at least track it
<stokachu> sounds like a plan
<jcastro> out of curiosity, when it's finished where will the cloud installer land? Backports? cloud archive?
<stokachu> jcastro, its in the archive now and each dev release i upload the latest, i think for trusty though we're going to push newer updates via backports
<stokachu> since this is a 'soft release' im keeping all the latest bits in the ppa until we're ready to advertise it in the official ubuntu archives
<stokachu> jcastro, i'd also like to get a more permanent home for the documentation, right now its on readthedocs.org
<stokachu> no ubuntu branding or anything
<jcastro> as soon as they find out where the cloud docs will go they will tell us, I figure this can just be part of that.
<stokachu> ok cool
<mbruzek> lazypower, I said it is that jose we have to worry about!
<jose> what?!
<jose> what happened?
<lazypower> jose: mbruzek said it. for the record ;)
<jose> :P
<jose> well
<mbruzek> jose we are having problems with chamilo
<mbruzek> and I blamed you
<luflores> lazypower: The error is on the machine where I run the 'juju bootstrap command'
<jose> mbruzek: may I know what those are? I'm free today
<mbruzek> jose, tvansteenburgh was having problems with the charm.  The test failed differently for me than Tim. Apache2 does not seem to be running on 10.0.3.173
<jose> mbruzek: let me double check the code I pushed to the branch
<mbruzek> jose, Tim was getting an apt-get install error, I did not see the apt-get problem, just the message and failed test.
<mbruzek> jose also 00-setup needs -y flags on add-apt-repository and apt-get install
<jose> mbruzek: yeah, apt-get was showing in the local provider, but not on the cloud
<jose> and see bug #1335340
<_mup_> Bug #1335340: 00-setup's apt does not have -y flag <Juju Charm Tools:New> <https://launchpad.net/bugs/1335340>
<mbruzek> jose, good call on filing that bug!
<mbruzek> jose, hopefully we can fix that soon.
<jose> if you point me to where those files are generated, I could surely fix them
<lazypower> its in a template directory
<lazypower> grep -ri through the source tree, its only in 1 spot
<lazypower> lp:charm-tools
<jose> cool, thanks
<jose> I'm running my tests now too
<luflores> lazypower: I modified the ntp server on the maas server and now all is good...now to the next issue :) Thanks
<lazypower> luflores: brilliant news! Glad its sorted :) still having an issue with resolution?
<luflores> lazypower: I am going to test that next, but I trying to use maas/juju for openstack deployment
<khuss> when I deploy a new nova-compute using juju, it is using ubuntu 12.04. How do I make it install 13.10 instead? I tried changing the version by editing the node, which didn't help.
<sebas5384> khuss: maybe this can help you http://juju-docs.readthedocs.org/en/latest/provider-configuration-openstack.html
<lazypower> sebas5384: o/
<sebas5384> hey! lazypower o/
<sebas5384> :)
<lazypower> whats up skillet?
<sebas5384> lazypower: here I am struggling with the drupal charm hehe (finally I have time to continue)
<khuss> sebas5384: thanks. let me check
<lazypower> sebas5384: which version of the drupal charm? you had a branch, and there was another using drush
<sebas5384> lazypower: https://github.com/sebas5384/charm-drupal
<sebas5384> master branch
<lazypower> and i think there were some crusty copies of drupal floating around somewhere
<sebas5384> its in ansible :)
<sebas5384> yeah, but none of them are like production and development ready
<sebas5384> using best practices known by the community
<sebas5384> i was showing the charm to a guy from acquia and he loved the idea
<sebas5384> another thing i make is the icon hehe
<sebas5384> *made
<lazypower> right on
<sebas5384> I'm going to finish the first milestone and then i'm gonna comeback to the issue in launchpad where are happening some review :)
<lazypower> good plan. ping me when you're ready for a review, i'd love to look it over
<sebas5384> awesome!! thanks! :)
<lazypower> i'm a bit buried atm in a charm rewrite, but i always enjoy a fresh look at new cuts on an old charm.
<sebas5384> hehe
<sebas5384> i would love to rewrite all charms in ansible hehe
<sebas5384> extending some roles
<lazypower> all? thats ambitious
<sebas5384> hahaha yeah i was kidding
<lazypower> nope, i have it on record that sebas5384 is going to rewrite all the charms in ansible. pack it up everybody, sebas's got this one.
<lazypower> ;)
<sebas5384> hehehhee
<sebas5384> oh!! let me ask you something about subordinated charms
<lazypower> go for it
<sebas5384> when i make a relation between drupal<>varnish for example
<sebas5384> i must pass some default.vcl file for make the proper configurations
<sebas5384> and thats its like the same case with apache solr relation
<sebas5384> so, what should be the best way?
<sebas5384> i saw some solutions around
<lazypower> whats the purpose of this default.vcl file?
<lazypower> is it configuring drupal.. or...
<sebas5384> like sending a hash of the file, or making a subordinated charm to make these customizations
<lazypower> well, you've got a few options. you can base64 encode it so it retains its integrity when being xferred over the wire
<sebas5384> it's a custom configuration to avoid some aggressively cached things
<lazypower> or you can use the subordinate and relate it to any service that needs it and rely on relationships to do the configuring
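The base64 option lazypower mentions can be sketched end to end; `relation-set` is mocked with an echo since it only exists inside a hook, and the VCL content and paths are stand-ins:

```shell
# Ship a default.vcl over relation data without newlines or quotes
# getting mangled in transit.
vcl='backend default { .host = "127.0.0.1"; .port = "8080"; }'

# Sending side: encode before setting it on the relation.
encoded=$(printf '%s' "$vcl" | base64 | tr -d '\n')
echo "relation-set vcl=$encoded"    # mock of the real relation-set call

# Receiving side (in its -changed hook): decode and install the file.
printf '%s' "$encoded" | base64 -d > /tmp/default.vcl.demo
```

The `tr -d '\n'` strips the line wrapping base64 adds, so the value stays a single relation-data string.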
<sebas5384> ok, theres goes #1
<jose> uh oh, this looks like a problem: tests give a different exit code after the first run
<lazypower> but be careful - you'll want to make sure you account for any config-changed hooks not over-writing the settings if they are present. its sometimes prudent to make portions of a config immutable and only under the governance of that specific relationship hook - so in that case its difficult for me to tell you what to do aside from: create a sentinel file, load it into an ansible variable, and don't act on it if that predicate value is true. (if that makes
<lazypower> *any* sense at all)
<sebas5384> lazypower: ok, but when i make a relation drupal<->varnish the subordinated charm related to varnish knows about it?
<lazypower> i'm not super familiar with the varnish charm, i think varnish ootb is kinda dumb and aggressively caches everything, just like it would if you specify no config to varnish post deployment. it says "k i got it all boss, serving all the things"
<lazypower> it may require some tweaking of the varnish charm itself to support this.
<sebas5384> yeah exactly
<sebas5384> so the varnish and the apache solr
<schegi> is there someone who could help with some cloud-init related problems during system boot or can point me to an irc channel or something like this?
<sebas5384> are not ready for an automated topology setup
<sebas5384> probably i will have to give these charms a look after the drupal charm is ready
<sebas5384> thanks for the tips lazypower ;)
<lazypower> sebas5384: well, again i'm not super familiar with it. it might expose a configurable interface.
<lazypower> if it does that, you should be able to ship over the config OTW
<lazypower> that would be ideal, considering there are special varnish caching strategies for different apps.
<sebas5384> yeah, but it doesn't hehe
<sebas5384> and in the case of apache solr
<lazypower> i know in my use cases for drupal/joomla/wp - i wanted to cache only assets and let scripts regenerate since we pushed updates frequently.
<sebas5384> there are a lot of other files that i would like to override like stopwords.txt file
<sebas5384> yes!
<sebas5384> lazypower: in drupal you have a number of modules to integrate with varnish for example
<sebas5384> so using the headers
<sebas5384> you can dynamically change the rules of caching
<lazypower> well, in that case, its probably going to need modifications. Just keep in mind if you're going for having it in the charm store ~charmer recommended charm, it'll have to be compat with drupal and everything else.
<lazypower> which is going to be tough, because i can see scope creeping quickly on that...
<lazypower> all of that is domain specific
<sebas5384> yeah, but others like wordpress have that problem too
<sebas5384> if we just add some more relation variables to configure each instance of varnish or the solr cores
<sebas5384> should be ready for others that might need that level of customization too
<lazypower> cool. :)
<sebas5384> thats why i was thinking about the responsibilities: a subordinated charm, or a nice relation-set variable to pass the files
<sebas5384> hehe
<lazypower> sounds like you've got some ideas, i'll let ya to it and we can reconvene during review
<sebas5384> great! :D
<sebas5384> thanks for sharing your thoughts :)
<jcastro> hey sinzui, https://bugs.launchpad.net/juju-core/+bug/1242468
<_mup_> Bug #1242468: Boilerplate for HP Cloud missing several keys <config> <hp-cloud> <juju-core:Triaged> <https://launchpad.net/bugs/1242468>
<jcastro> what's the recommended method for hp cloud?
<jcastro> like, shouldn't we be telling people to use keys?
<lazypower> jcastro: yep, we should be. Thats more secure than putting L/P in there.
<jcastro> yeah I just need a snippet from someone using it with keypair to fix the docs
<lazypower> i thought the boilerplate had the key sections in there just commented out
<jcastro> I have a card to fix up the hp page today
<lazypower>     # auth-mode: keypair
<lazypower>     # access-key: <secret>
<lazypower>     # secret-key: <secret>
<lazypower> thats what was generated by juju back in the 1.16 series
<jcastro> hah I see what happened
<sinzui> jcastro, looks like that bug is fixed in 1.19.4 and I should close it
<lazypower> not sure whats changed in the 1.18+ since i havent regenerated my env.yaml in quite a while
<jcastro> I used quickstart and it doesn't include the commented out sections
<lazypower> ah, nice
<lazypower> TO THE BUG TRACKER! *points for great justice*
<jcastro> actually, no, if we just make it correct in the template ....
<sinzui> lazypower, 1.17/1.18 wrongly removed interesting config advice for HP. When HP closed their old regions, the devs were encouraged to fix the config
<lazypower> sinzui: ah, i should probably be regenerating my template between point releases and migrate data instead of leaving all this cruft in here.
<lazypower> i mean, my hp cloud config right now isn't even useable thanks to them closing down the old api and moving to horizon. i haven't put in the effort required to get the new config setup.
<sinzui> lazypower, be careful if you use 1.18. It doesn't know the new region and use-floating-ip: true
<lazypower> good looking out.
<lazypower> jcastro: thats noteworthy  ^
<lazypower> i doubt we'll be having a lot of people on 1.19 reading the docs since thats -devel
<jcastro> yeah I am trying to get it working
<jcastro> 2014-07-01 16:43:31 ERROR juju.cmd supercommand.go:305 cannot start bootstrap instance: index file has no data for cloud {az-1.region-a.geo-1 https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/} not found
<sinzui> lazypower, jcastro the charm-bundle-slave has cloud-city. Its environments.yaml for charm testing is configured for modern HP, and specifically US East where you have 40 instances all to yourself
<jcastro> stuck on this
<lazypower> sinzui: over half of that was greek to me. I know what cloud-city is, but what is this charm-bundle-slave you speak of? is that the testing instance we got?
<jcastro> sinzui, where can I see a sanitized copy of that config? I'm just trying to update the docs for modern HP
<sinzui> lazypower, jcastro . This doc goes out to all charmers. https://docs.google.com/a/canonical.com/document/d/1mqO2geOuQwTMNpja2MG0B2JW4By6xy3GBc2o-dEGrNQ/edit
<jcastro> oh is this for the automated testing?
<lazypower> oh i've seen this!
<lazypower> i had no idea thats what you were talking about. ok. cool - i'm more up to date than i feared i was.
<sinzui> jcastro, the slave has a copy of the cloud-city dir. The charm-testing-hp env is what charm bundle testing will use. It is a valid config.
<jcastro> ack
<jcastro> I am getting access denied with my key
<sinzui> yuck
<lazypower> jcastro: i'm in... let me get you the bits you need
<sinzui> jcastro, oops, wrong users
<sinzui> oh i did doc the right user
<sinzui> jcastro, are your keys on Lp correct?
<jcastro> yessir
<jcastro> I get to the power machines just fine
<jcastro> sinzui, we can sort it later; as long as lazypower can get me a working config to fix the docs
<lazypower> jcastro: sent to you over canonical irc pm.
<lazypower> its using user/pass though... combined with the paste i gave you earlier you should be g2g. make sure we verify before its up in the docs though - i'd hate to be wrong on that...
<jcastro> my notify-osd is crying right now
<jcastro> yeah, we want to use keypair, not username/pw
<lazypower> sorry :) i should have dumped that in canonical pastebin now that i think about it
<lazypower> next time man, i'll do it right next time.
<jcastro> it's all good
<lazypower> i always forget we have a nice password mfa protected pastebin
<lazypower> pastebinit has spoiled me
<jcastro> 2014-07-01 16:43:31 ERROR juju.cmd supercommand.go:305 cannot start bootstrap instance: index file has no data for cloud {az-1.region-a.geo-1 https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/} not found
<jcastro> any idea on this?
<jcastro> I think the keypair auth is working
<jcastro> I am just having another error
<lazypower> hmm thats the same thing i get
<lazypower> and mine is with the old config. the AZ data looks correct though
<sinzui> lazypower, There is no az data in modern config. HP is Havana now, the AZs aren't hard coded
<jcastro> http://pastebin.ubuntu.com/7732336/
<jcastro> this is what I have
<sinzui> bogus region
<sinzui> bad ip
<sinzui> bad use-floating-ip
<sinzui> http://curtis.hovey.name/2014/06/12/migrating-juju-to-hp-clouds-horizon/
<jcastro> did we not update the docs for that?
<jcastro> no matter, I'll fix it now
<sinzui> jcastro, the release candidate shows this http://pastebin.ubuntu.com/7732342/ which has the right use-floating-ip and advises a sensible region
<jcastro> ok so it's fixed in the template, so I don't need to file a bug there
<jcastro> ok, working on the docs now, I'll have a PR with working HP in a few minutes
<jcastro> it's working now
<sinzui> jcastro, great. You're the first person to confirm my recommendations. I have had this nagging fear I missed something
<schegi> can someone tell me what cloud-init is doing on boot on a bootstraped node??
<jcastro> 2014-07-01 17:02:59 ERROR juju.cmd supercommand.go:305 cannot start bootstrap instance: cannot assign public address 15.125.121.196 to instance "7eb9f960-45d6-4e30-8b0a-2cca4f925b67": failed to add floating ip 15.125.121.196 to server with id: 7eb9f960-45d6-4e30-8b0a-2cca4f925b67
<jcastro> is this our fault or theirs?
<lazypower> schegi: its updating sources, fetching additional dependencies like rsyslog-forwarder, etc.
<jcastro> sinzui, I made a mistake, the HP Docs are up to date other than the keypair part, which I'm fixing now.
<schegi> i got some problems with it. When changing my /e/n/interfaces it sleeps for a couple of minutes in cloud-init-nonet. here's my boot.log http://pastebin.com/xYFynAWS
<jcastro> If someone has a minute
<jcastro> https://github.com/juju/docs/pull/132
<jcastro> sinzui, since you're on the dev release, can you tell me if `juju help hpcloud` mentions username/password or keypair?
<themonk> hi all
<jcastro> http://askubuntu.com/questions/490141/deploying-charms-using-juju-fails-with-tcp-connection-timed-out
<jcastro> anyone see this before?
<jcastro> looks like it's trying to hit the public store but can't
<lazypower> jcastro: probably related to the maas proxying. all your subs proxy through the cluster controller as i understand it.
<stokachu> mattyw, what version of juju do you have?
<jcastro> hazmat, remember my bug that said m1.smalls shouldn't be the default?
<jcastro> http://aws.amazon.com/blogs/aws/low-cost-burstable-ec2-instances/
<jcastro> I wish we had storage sorted
<mattyw> stokachu, I'd have to double check
<hazmat> jcastro, yeah.. saw those looks  interesting
<hazmat> jcastro, we need a new bug to update the instance types in core with those as well
<stokachu> mattyw, http://paste.ubuntu.com/7732683/
<stokachu> mattyw, make sure you're running 1.18.3 or higher
<jcastro> hazmat, doing it now
<stokachu> oh wtf
<stokachu> mattyw, i just saw the comment of the deprecation
<jcastro> hazmat, https://bugs.launchpad.net/juju-core/+bug/1336473
<_mup_> Bug #1336473: Support new t2 instance types on AWS <juju-core:New> <https://launchpad.net/bugs/1336473>
<mattyw> stokachu, the machine I ran the script on was a clean version of trusty so it got whatever version came out of the packages. my local machine is 1.19.3 and that warns me about deprecation
<mattyw> stokachu, I hadn't actually realised use-clone was ever valid - that's my bad, sorry
<stokachu> mattyw, nah man you're good and i appreciate the bug report
<stokachu> mattyw, there seem to be some issues with things being renamed and deprecated at an alarming rate
<mattyw> stokachu, for the record the version I was running was 1.18.1
<mattyw> stokachu, which is what it got from the packages
<stokachu> mattyw, yea so lxc-use-clone was introduced in 1.18.3 and now is apparently deprecated in 1.19.x and newer
<jcastro> hazmat, they've basically removed m1's totally from the instance pages and the pricing pages
<mattyw> stokachu, ok - that pull request isn't going to cut it at the moment then
<mattyw> stokachu, we probably just need to have a prerequisite for a particular version of juju to be installed
<stokachu> mattyw, yea, also i need to talk to the juju guys to figure out what their plan is
<stokachu> 1.18.3 will be lxc-use-clone but 1.22 will be lxc-clone
<stokachu> as far as those comments state
<stokachu> mattyw, maybe thinking of just setting a hard version requirement in the debian package
<mattyw> stokachu, that's probably a good option
<mattyw> stokachu, I'll close that pull request for now then
<stokachu> mattyw, cool man, if you run into anything else like that with deprecation warnings being printed let us know
<mattyw> stokachu, I'm going to try out the multi instance install tomorrow, I'll let you know how it goes!
<stokachu> mattyw, awesome man, thanks!
<jose> urgh, I got scared thinking that t1.micros were not going to be free anymore after this t2 announcement, but looks like they will!
 * jose continues testing
<whit> is there any way to simply point juju at the directory for a charm (locally) for deployment without having to nest it in a series folder?
<lazypower> whit: you have to nest a series folder
 * whit flips table and leaves
<whit> jk
 * lazypower replaces the table and shows whit a seat
<lazypower> have a seat
<lazypower> ;)
<whit> lazypower, nobody has written a plugin for "hey just deploy my thing"?
<lazypower> whit: you can do that if you specify filepath with a deployer file - but from the cli, not that i'm aware of
<lazypower> its still very insistent that you have a logical local charm repository.
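The series-folder nesting being discussed could look like the sketch below. The repo path and charm name (`mycharm`) are made-up placeholders, and the `juju deploy` line is shown as a comment since it needs a bootstrapped environment.

```shell
#!/bin/sh
# Minimal sketch of the local-repository layout juju 1.x expects:
# <repo>/<series>/<charm>/, not the bare charm directory.

REPO=$(mktemp -d)
mkdir -p "$REPO/trusty/mycharm/hooks"
cat > "$REPO/trusty/mycharm/metadata.yaml" <<'EOF'
name: mycharm
summary: placeholder
description: placeholder
EOF

# With that nesting in place you deploy by naming the repo root,
# not the charm directory itself:
#   juju deploy --repository="$REPO" local:trusty/mycharm

ls "$REPO/trusty"
```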
#juju 2014-07-02
<themonk> hi all
<themonk> can anyone tell me when i should use peer?
<jose> themonk: afaik it's when the service will relate to itself
<lazypower> themonk: whenever you have scale-out situations. as in you go from a single node to 2+
<lazypower> those nodes are by definition peers of one another. this is handy for, say, exchanging cookie secrets in webapps
<themonk> another question: if 2 charms require each other, do i use a different name for the same relation or the same name?
<themonk> hello lazypower :) how are you?
<themonk> hello jose :)
<jose> hey themonk!
<jose> it's the same relation I think
 * jose hasn't looked too much into peer relations
<themonk> jose, forget peer, talking about normal relations now :)
<jose> oh
<jose> themonk: what do you specifically mean by relation name?
<lazypower> themonk: its all dependent on the interface you use. its dictated by the provides: and requires: directive in your metadata.yaml
<lazypower> and i'm doing well, thanks :)
<themonk> jose, lazypower, charm1->provides->relname1->interface1 and charm2->requires->relname1->interface1  then charm2->provides->relname2->interface2 and charm1->requires->relname2->interface2
<themonk> this is the case
<themonk> or will i do it like this: charm1->provides->relname1->interface1 and charm2->requires->_relname1->interface1  then charm2->provides->relname2->interface2 and charm1->requires->_relname2->interface2
<jose> yeah, I think that's fine
<themonk> jose, which one
<jose> both are the same, I don't see any difference
<lazypower> you've got a circular reference in relationships
<lazypower> unless they need 2 specific defined interfaces
<lazypower> you only need the provides/requires one way
<lazypower> its a bidirectional communication interface
<lazypower> as in you can set on both sides, and receive on both sides. so you're good to go to exchange whatever data you need to send/receive across the wire with just the singular interface
<lazypower> these aren't governed by RFC's they are loosely coupled/typed relationships. arbitrary variables and values
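That bidirectional point can be sketched as below. The juju hook tools relation-set/relation-get are stood in for by underscored, file-backed functions so this runs outside a hook, and the key/value pairs are made up.

```shell
#!/bin/sh
# Sketch: one provides/requires pair gives a two-way channel. Each side
# publishes settings the other side then reads; no second relation needed.
# relation_set/relation_get are stand-ins for the juju hook tools.

CHARM1_BAG=$(mktemp)   # settings charm1 publishes on the relation
CHARM2_BAG=$(mktemp)   # settings charm2 publishes on the same relation

relation_set() { echo "$2" >> "$1"; }           # stand-in for relation-set
relation_get() { sed -n "s/^$2=//p" "$1"; }     # stand-in for relation-get

# charm1 (provides side) publishes; charm2 reads it:
relation_set "$CHARM1_BAG" "dbname=wordpress"
relation_get "$CHARM1_BAG" dbname

# charm2 (requires side) publishes on the very same relation; charm1 reads:
relation_set "$CHARM2_BAG" "private-address=10.0.0.2"
relation_get "$CHARM2_BAG" private-address
```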
<jose> should we do a charm school about relations?
<themonk> lazypower, hmm
<lazypower> jose: there are several already
<themonk> jose, yes that will be great
<lazypower> juju.ubuntu.com/videos
<jose> themonk: see ^
<themonk> ok thanks man :)
<lazypower> ugh
<lazypower> its not clearly labeled as "relationships"
<jose> who could've done that? *cough* jcas *cough* tro *cough*
<lazypower> its in one of the getting started videos
<lazypower> so it encompasses more than just relationships
<lazypower> i'm sorry i dont have a direct video link for you
<lazypower> i'll raise the issue at tomorrow's standup that we need to do a video over relationships
<lazypower> themonk: there's also an open call to 'what should we do next' on the list - which we'd love to have your feedback if you've got grey areas that a charm school would help with
<lazypower> we're always upping our game, one week at a time, on getting new videos out there for our user base to really grok what we're doing with juju
<lazypower> so make sure you're subbed to juju@lists.ubuntu.com to get in on all the community voting action
<themonk> hmm ok i will
<themonk> AFK 15 min
<l1fe> quick question: i've been creating and destroying some juju environments
<l1fe> however after destroying an environment, juju-mongodb is leftover
<l1fe> when i try to purge/auto-remove it
<l1fe> an upstart script somewhere keeps on redownloading the package and reinstalling mongodb
<l1fe> how the heck do i stop this?
<l1fe> i'm currently on juju 1.19.4 running on maas with trusty nodes all over
<l1fe> n/m...had to remove the upstart scripts in /etc/init and then purge
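l1fe's cleanup can be sketched roughly as below. It is demonstrated against a scratch directory with made-up job names; on a real host the directory would be /etc/init and the commented sudo lines would do the actual work.

```shell
#!/bin/sh
# Rough sketch of the fix: remove the upstart jobs that keep
# reinstalling juju-mongodb, then purge the package.

INIT_DIR=$(mktemp -d)                     # /etc/init on a real host
touch "$INIT_DIR/juju-agent-foo.conf" "$INIT_DIR/juju-db.conf"  # stand-ins

for conf in "$INIT_DIR"/juju-*.conf; do
    job=$(basename "$conf" .conf)
    # sudo stop "$job"                    # stop the running job first
    rm -f "$conf"                         # remove the upstart script
    echo "removed $job"
done
# sudo apt-get purge juju-mongodb         # now the purge sticks
```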
<lazypower> l1fe: sounds bugworthy. can you file a bug against juju-local about that?
<mwhudson> 237.872  + juju deploy --to 0 nova-compute
<mwhudson> 237.873  ERROR cannot upload charm to provider storage: Head http://10.254.30.125:8040/: dial tcp 10.254.30.125:8040: connection refused
<mwhudson> would this be a race condition?
<mwhudson> it's in a script that does more or less "juju bootstrap; juju deploy --to 0 nova-compute"
<mwhudson> (manual provider)
<schegi> anyone here who could help with a cloud-init-nonet issue during boot??
<voidspace> https://github.com/juju/testing/pull/17
<l1fe> sorry to be a bother, but i'm trying to deploy juju-gui on my juju environment, and every time i try, my node stays in pending status
<l1fe> i'm deploying on a MAAS controlled, 4 node environment
<l1fe> bootstrap worked, relatively well, although i had to manually go in and create a /var/lib/juju/nonce.txt in order for it to not timeout
<l1fe> when i do a "juju deploy juju-gui" it kicks off the process fine, and in maas it shows the node as going from ready to "allocated to root"
<l1fe> but ultimately nothing is ever done on the node
<l1fe> also, nothing really shows up in the all-machines.log in /var/log/juju
<cory_fu> So, jose commented on my choice of using a juju run script for setting admin passwords in the Apache Allura charm (https://bugs.launchpad.net/charms/+bug/1314699), but I'm still not convinced that having a config option is the right approach for admin passwords.
<_mup_> Bug #1314699: New charm - Apache Allura <Juju Charms Collection:Incomplete> <https://launchpad.net/bugs/1314699>
<jose> cory_fu: hey, I recommended that because juju set is the default for any options you can configure, I would personally not have it as a run script
<lazypower> cory_fu: didn't you email the list about it?
<cory_fu> Yes.
<lazypower> was there any traction from your list mailing that was pro/against?
<jose> hmm, i didn't see that one
<lazypower> i think its a unique approach - and it's definitely worth further investigation / discussion.
<cory_fu> It kind of went off on a tangent about password-typed config options
<cory_fu> But I'm not convinced that passwords make sense as options at all, regardless of whether there's a password type
<lazypower> if they're a salted hash in the database - whats the argument against that?
<cory_fu> For one thing, the passwords can be changed outside of Juju, leading to a mismatch between Juju's idea of the password value and the actual password value
<lazypower> sarnold: ping
<cory_fu> I think Juju options make sense for things that can't be configured from within the application, but trying to manage application settings from Juju seems incorrect to me.
<cory_fu> Honestly, it would be better if the application in question handled the initial admin password itself, on first run.
<lazypower> cory_fu: whys that? we have several charms that support application configuration.
<cory_fu> lazypower: So how do you handle the possibility of Juju resetting options that were changed from within the application due to Juju having an outdated idea of what the value should be?
<lazypower> cory_fu: within the domain of what we're talking about, if juju has an idea of the application configuration, it makes sense to me to change those within juju and not the application.
<lazypower> what's being described is very much a similar problem that exists with all CM frameworks - if you deploy mysql and tweak your my.cnf by hand - its going to be lost on the next convergence.
<cory_fu> And if Juju doesn't cover all of the settings, you end up in a weird place where you should use Juju to set some of the options, and the application others, even if those options are on the same page within the application.
<cory_fu> In Allura's case, there are three default users set up.  So you would presumably set the "root" user's password via Juju, but none of the others?
<jose> you can have three config options
<cory_fu> Though that's not as bad as being in a situation where you have to remember "the 5th option down on the settings page shouldn't be touched, since it's handled by Juju"
<lazypower> cory_fu: so a lot of this sounds like it would be well served by juju-actions when they land. Such as setting up username/passwords
<cory_fu> jose: That seems like config.yaml bloat.  :)  What about the other users that will later be added?  And what if the admin removes the default users and creates a new admin user?
<lazypower> that prevents it from being a configuration option and exposed
<cory_fu> lazypower: +1 and that's what I was using the run script as a makeshift version of.  :)
<lazypower> and exposes an option for you to add/remove users with juju through an action.
<lazypower> cory_fu: oh i'm pretty much for your implementation
<lazypower> cory_fu: i'm playing devils advocate here so its being fleshed out in everybody's face so when we start doing more of this, i can point to the irc log and say "we talked about it on the list and here - i like this better +1 for participation"
<cory_fu> jose: Just to be clear, I'm not entirely averse to adding a "root-password" option to the Allura charm.  Just raising it for discussion.  :)
<cory_fu> lazypower: Good
<cory_fu> That's also why I raised it here.  I'm currently adding the option to the charm, anyway, though I'm going to leave the run scripts in (since I basically need them to set the password anyway)
<lazypower> cory_fu: nah leave the run script
<lazypower> what you've got makes all the sense in terms of domain specific configuration
<lazypower> the service doesn't expose a default password that can be hacked
<lazypower> i wont nack it for a missing password config value - especially since as you pointed out, there are 3 users with only 1 configuration option.
<lazypower> bcsaller: will you have a chance to review allura this week? Cory's got a really interesting implementation that could set the model for charm security moving forward.
<cory_fu> It also uses the services framework (though it's probably not the best poster-child)
<lazypower> well - i know that actions are going to land in the not so distant future
<lazypower> so this is a really good stop gap / prototype implementation until we get 'juju actions'
<lazypower> its the first charm that i'm aware of that will have a juju-run based config step too, which i like. its exposing more usefulness to juju-run
<cory_fu> jose: What was that charm I gave you a +1 on recently?
<lazypower> HAHAHA cory_fu no retroactive -1's!
<lazypower> j/k, nack away ;)
<jose> cory_fu: tracks I think?
<cory_fu> No, I think it started with a C?
<lazypower> chamilo
<cory_fu> I wish there was a way to see my past activity on Launchpad
<cory_fu> lazypower: That's the one.
<jose> I've been working on many charms lately
<cory_fu> That would be awesome to convert to the services framework, since it would make all of the effort of having to maintain the dot-files go away
<lazypower> cory_fu: services framework?
<cory_fu> jose: You're a charming monster.  :)
<lazypower> he's Charmbot 5000
<jose> BTW, I do like the approach you have and believe it's worth a Discussion
<jose> yeah, see my /nickserv info
<cory_fu> lazypower: The thing that we have proposed for merging into charmhelpers, and that I use in the Allura charm, that we developed while working on Cloud Foundry
<lazypower> Ah, i saw the inclusion of custom helpers
<lazypower> i didnt dive too deep into it, when i reviewed allura it was a pretty fresh cut
<lazypower> you've since remixed the charm haven't you?
<cory_fu> lazypower: Yeah, quite a bit
<lazypower> that must be a post-review inclusion
<lazypower> if bcsaller doesn't ping back about being able to review allura, ping me tomorrow and i'll dive into it
<bcsaller> I see the branch, but I don't see it in the review queue
<cory_fu> lazypower: The services framework, in a nutshell, is a system to automatically manage the complexity of dealing with figuring out at what point you have all the info you need to actually set up the software, which could come from disparate sources such as config, relation data, etc, and could end up being completed in any of a number of hook invocations
<lazypower> bcsaller: the bug is incomplete
<lazypower> https://bugs.launchpad.net/charms/+bug/1314699
<_mup_> Bug #1314699: New charm - Apache Allura <Juju Charms Collection:Incomplete> <https://launchpad.net/bugs/1314699>
<cory_fu> bcsaller: I'm about to update said bug, as I've made some changes
<lazypower> cory_fu: whoa. thats awesome
<lazypower> so you define the required params, and once everything in this dictionary is complete, we say "Ok return true and configure now"
<bcsaller> lazypower: we did try to sell you all on this in Vegas :)
<lazypower> bcsaller: i probably said shut up and take my money, then forgot i voted for it.
<cory_fu> However, jose, I wasn't able to reproduce that issue with the mongodb hook error, nor even see how it could have occurred, since it could only happen if the pymongo library was broken
<bcsaller> ha
<cory_fu> As in, not properly installed
<lazypower> bcsaller: you guys are like kickstarter - too many good ideas, me with too little money
<bcsaller> just waiting for fb to dump 19M on us
<lazypower> cory_fu: moving forward, when you want reviews - make sure the bug is "Fix Comitted" or "new"
<lazypower> otherwise it gets scrubbed from the queue
<cory_fu> lazypower: Yeah, I know.  I was still in the process of making fixes
 * lazypower flips tables
<cory_fu> I literally *just* pushed up my last fix.  :)
 * lazypower puts it back
<lazypower> k
<cory_fu> lazypower: Where are the IRC logs posted at?
<jose> cory_fu: I'll have to check once I'm back home
<jose> irclogs.ubuntu.com
<cory_fu> lazypower: http://imgur.com/gallery/5VDqD3L
<lazypower> cory_fu: you sir, have taken it to the next level.
<sarnold> lazypower: pong :)
<lazypower> sarnold: we had a bit of a discussion up above between cory, jose and myself. From a Sec Audit perspective, how do you feel about charms using isolated scripts to perform user/password management in lieu of a configuration option on a charm?
<sarnold> lazypower: heh, nice can of worms we've got here :)
<lazypower> isn't it though?
<lazypower> if you've got any input on the matter, i'd love to hear your opinions.
<sarnold> ideally we wouldn't even have passwords -- they have to be stored somewhere, whether it is in mongo or a plain text file in ~/.juju/ somewhere. the danger of storing them in ~/.juju is that someone might check them into git as an easy way to 'distribute' juju control amongst a team..
<sarnold> so it'd be wonderful if services that support key-based authentication could use keys instead, but it's unrealistic to hope for that; some services just require passwords.
<lazypower> sarnold: this is about a service thats being deployed with juju - in the case of apache allura - it comes with a root, a git, and a reviewer user (paraphrasing) - and we need to set them up in the application. Key based auth is not an option unfortunately - its a webapp.
<sarnold> I would much prefer a centralized place to store them all, and using the juju set functionality is easily flexible enough to handle them all, but I further hope admins wouldn't actually need to set (or see) passwords as a regular part of business
<sarnold> lazypower: ah; I had expected something more like pgsql accounts for applications, these are users "within" an application.
<sarnold> lazypower: you just added some nuts to this can of worms :)
<lazypower> correct. the 3 users are required to finalize a core installation.
<lazypower> haha, well - i was thinking this was going a bit askew based on your answer
<lazypower> and i'm also known as the master of no-context
<sarnold> .. and so firing up e.g. juju set just to change the reviewer password seems heavy-handed :)
<cory_fu> sarnold: Like I said above, ideally the application itself would manage the admin user / password and juju wouldn't have to get involved, but in this case (and others), the application needs a "bootstrap" user / password that can't be set up properly from within the application
<sarnold> lazypower: well, it's certainly my fault for not reading -all- of scrollback, just enough to jump to the wrong conclusion :)
<cory_fu> Really, once the admin password is set, it should be managed from within Allura, which is another argument against having it as a config option, IMO
<sarnold> yeah, that makes sense
<cory_fu> But I can also see having Juju actions for managing admin users as a convenience option.
<sarnold> though, if we think about e.g. apache httpd; it has a pluggable authentication and authorization mechanism, different kinds of authentication can be required, and user/passwords can be handled via e.g. htpasswd or ldap or gdbm or any other number of tools; I might expect a "protected filesharing with apache" charm to provide configuration switches for the different mechanisms, and perhaps even a pre-seeding mechanism for the htpasswd method..
<cory_fu> Exactly.  And Actions would seem like an ideal candidate for managing that.  Using a config option would seem wrong to me.
<sarnold> so I might also expect that an allura charm could provide a way to preseed the values with a plain text file -- when they eventually grow a pluggable auth system to use e.g. github users or something, it might not be necessary..
<cory_fu> Yeah.  Allura has some support for LDAP, so that's probably the right direction to go, long-term
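The htpasswd pre-seeding sarnold mentions can be done even without apache2-utils, since openssl can emit the Apache MD5 (`$apr1$`) format httpd understands. The user, salt, and password below are made-up placeholders.

```shell
#!/bin/sh
# Sketch: pre-seed an htpasswd file using openssl's apr1 hash
# (the same scheme the htpasswd tool uses by default).

HTFILE=$(mktemp)
printf 'admin:%s\n' "$(openssl passwd -apr1 -salt abcdefgh s3cret)" >> "$HTFILE"

cut -d'$' -f1-3 "$HTFILE"   # show user + scheme + salt (hash omitted)
```

A charm could write such a file at install time and hand the path to the httpd auth config, keeping the plaintext password out of the charm's config options.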
<lazypower> sarnold: that all feeds into actions being the path forward.
<lazypower> do i take that as a vote pro script, nack on config?
<marcoceppi> cory_fu: lazypower this sounds like it's best used with juju actions
<marcoceppi> the password stuff
<marcoceppi> I'd support a juju run to set password
 * marcoceppi looks at charm
<marcoceppi> It's definitely not a blocker
<lazypower> agreed
<lazypower> i want to see more traction around this. i'm pretty sad that the conversation got derailed on the mailing list wrt a password field type. I think this warrants a follow up post to the thread
<cory_fu> I can make a follow-up post.
<cory_fu> And link to this discussion
<lazypower> cory_fu: sounds good
<cory_fu> Regarding the transient AttributeError, I was completely unable to reproduce it, nor come up with a reason why it might have happened or a way to prevent it in the future.  Would that be a blocker, or can we chalk it up to one of those things?
<lazypower> if its missing, it looks like maybe pypi had a hiccup?
<cory_fu> But how did the step that actually installed it not fail?  And how did it fix itself magically?
<cory_fu> It makes no sense to me
<lazypower> if it had a hiccup and pip install didn't exit 1 - it makes sense that retrying it worked.
<lazypower> i dont know if pip exits > 1 if an install fails, do you?
<cory_fu> Re-running the relation-changed hook would not have installed pymongo.  That only happens in the install hook
<lazypower> hmm
<lazypower> well thats weird
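The retry idea lazypower raises only works if the installer actually reports failure through its exit status. A minimal sketch of that pattern; `install_with_retry` is a hypothetical helper, not part of any charm here, and pip is generally expected to exit non-zero on failure (a silent partial failure like the one discussed would slip past this check):

```shell
# Hypothetical helper: re-run a flaky install step, trusting the command's
# exit status to decide whether a retry is needed.
install_with_retry() {
    attempts=${RETRIES:-3}
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0
        echo "attempt $i of $attempts failed (exit $?)" >&2
        i=$((i + 1))
    done
    return 1
}
```

Usage would look like `install_with_retry pip install pymongo` in an install hook.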
<cory_fu> Also, pymongo itself imported ok, it was only when accessing the MongoClient attribute that it barfed
<cory_fu> I even tried looking at the pymongo source to come up with an idea, and see nothing obvious
<lazypower> i'm going to have to blame this one on khai
<cory_fu> khai?
<lazypower> marco wrote an RFC about khai being the root of all problems
<cory_fu> What does he have against some random village in Pakistan?
<marcoceppi> Khai is a person not a village
<marcoceppi> though I feel bad for everyone in that village now
<lazypower> marcoceppi: at some point you need to send me that RFC so i can mirror it and point people at it when i make that reference.
<marcoceppi> lazypower: https://github.com/marcoceppi/547-khai
<lazypower> YES!
<marcoceppi> I need to update it, it's expired
<tvansteenburgh1> HTTP 547, "Use in an abundantly conservative manor" https://github.com/marcoceppi/547-khai/blob/master/draft-547-http-547.nroff#L66
 * tvansteenburgh1 imagines an abundantly conservative large country house, with much surrounding land.
<alesage> trying to bootstrap a local env and finding that juju is trying to connect to mongodb at 37017 , my apt-get installed mongodb-server listening on 27017, something I'm doing wrong?
<sarnold> alesage: juju supplies its own mongo, no?
<alesage> sarnold, so I'm told, but juju bootstrap seems to want me to install one: http://pastebin.ubuntu.com/7738856
<sarnold> alesage: ah, then I'll return to the background :) good luck
<alesage> sarnold, thanks you gave me hope :P
<sarnold> alesage: hooray! :)
<lazypower> alesage: did you have the mongodb package installed prior to installing juju-local?
<lazypower> sinzui: are we still providing support for running mongo in parallel with an existing mongodb install? i know this was a pain point in the past - not sure if it ever got resolved.
<alesage> lazypower, don't think so
<lazypower> alesage: initctl list | grep juju - can you pastebin the output of that command for me?
<alesage> lazypower ok in process on another bootstrap attempt, one min
<alesage> lazypower coming up blank fwiw
<lazypower> dpkg -l | grep juju-local
<alesage> lazypower, blank again, sensing a theme
<lazypower> alesage: sudo apt-get install juju-local, then re-try the bootstrap with local provider
<alesage> lazypower, attempting thx
<alesage> lazypower, no errors thanks
<alesage> "Starting Juju machine agent"
<lazypower> alesage: np. Glad we got it sorted :)
<lazypower> alesage: if you dont mind me asking, were you looking through the juju docs or did you just dive right into using juju?
<alesage> lazypower, forgive the noobishness but I also have a 'local' env set up in .juju/environments, is this config 'parallel' to the local provider I've just installed??
<alesage> lazypower, dove in with a howto from a colleague
<lazypower> alesage: yep. local is an environment that deploys locally using LXC on your workstation. its intended for rapid deployment prototyping / charm development
<lazypower> the idea is that you can export a bundle from your local environment, and move that to any public cloud that we support, such as hp-cloud, aws, joyent, etc.
<alesage> lazypower, makes sense--just trying to clarify if a 'local' env named in my config would require that juju-local package to be installed?
<lazypower> Correct, unless you changed the environment type
<alesage> lazypower, ok thanks again
<lazypower> as it were, if you feel like being obscure between names that are meaningful to you - you could change it to type: ec2
<lazypower> and plug in some AWS credentials
<lazypower> and your 'local' environment is now an AWS provider based environment.
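Under the juju 1.x config scheme being discussed, that swap is just an edit to `~/.juju/environments.yaml`. A sketch (credentials are placeholders, and the exact keys should be checked against the template `juju init` generates):

```yaml
# ~/.juju/environments.yaml (sketch): the environment *name* is arbitrary;
# the "type" field picks the provider, so an env named "local" can be EC2.
environments:
  local:
    type: ec2              # changed from "local"
    access-key: <AWS access key>
    secret-key: <AWS secret key>
    default-series: trusty
```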
<lazypower> which is a good point, i haven't asked about that in the past....
<alesage> I see
#juju 2014-07-03
<thomi> Has anyone heard about an issue deploying the postgres charm on trusty where it fails to read /etc/ssl/private/ssl-cert-snakeoil.key (as in http://paste.ubuntu.com/7739744/ ) ?
<thomi> it doesn't happen for me, but alesage is hitting it, and I know others have had it happen as well...
<l1fe> quick question - if you bootstrap juju on maas, it will enlist a single node. shouldn't deploying a new service automatically enlist a new node from what's available from maas?
<jose> l1fe: it should mark another node as used, it's going to use one node per service unless you specify other stuff
<l1fe> jose: gotcha...and if you do a juju status with no services deployed
<l1fe> it would still always just show the juju core node
<l1fe> and nothing else from maas
<l1fe> i'm having a problem where it tries to deploy a new service...gets the node id from maas, and then gets stuck in pending
<l1fe> nothing in the machine-0 logs
<jose> correct
<jose> hmm
<l1fe> there isn't even a /var/log/juju
<l1fe> on the machine-1 or whatever node it's trying to deploy on
<jose> can you please try destroying the service and re-deploying?
<l1fe> sure, i've destroyed the environment a few times and tried to redeploy
<l1fe> (have to manually go in and remove juju-mongodb)
<l1fe> i'll try again
<jose> weird thing
<l1fe> if i manually provision the boxes
<l1fe> everything works great
<jose> maybe someone around who has previous experience with maas will be able to help, but I'm no maas expert unfortunately :(
<l1fe> i'm on juju 1.19.4
<l1fe> :(
<l1fe> not sure if it matters
<l1fe> but the last line i get in machine-0.log
<l1fe> is juju.provider.maas environ.go:304 picked arbitrary tools &{1.19.4-trusty-amd64 https://streams.canonical.com/juju/tools/releases/juju-1.19.4-trusty-amd64.tgz 181fac6e269696eb68f1ff1ff819815af5da1beafd2d963bd5165fb7befdee84 8052214}
<l1fe> and then nothing
<jose> have you tried using 1.18? maybe it's a bug in 1.19 and we haven't noticed yet
<jose> I have no maas environment to test in, otherwise I would be already checking
<l1fe> i'll try 1.18 :)
<l1fe> same thing with 1.18
<jose> then it's not a bug on juju
<jose> or, well, it is, but I don't know how to troubleshoot it
<l1fe> haha, i'm not sure either...unless there are some logs that i don't know about
<jose> there *is* a log called all-machines.log in /var/log/juju for the bootstrap node
<jose> maybe check there?
<l1fe> yeah, nothing
<noodles775> jamespage: Hey there. Are you able to fix up a ghost revision on the rabbitmq-server branch? If you try `bzr revno lp:~charmers/charms/trusty/rabbitmq-server/trunk` it'll error (at least does for me), the same error which also stops me from pulling that branch in a deploy.
<noodles775> or jam1 might have the bzr foo to fix that too?
<jamespage> noodles775, I'll look
<noodles775> Txs
<jamespage> noodles775, wtf - I don't even know what that means
<noodles775> jamespage: afaik, it means that the branch knows about a revision, but doesn't have the details. I don't know much more than that, other than to fix it in the past on our own branches, we've run: `bzr reconfigure --unstacked` on the branch, but I can't say whether that's OK here (I don't know if the charm branches are stacked for a reason etc.)
<jamespage> noodles775, I'll have to poke someone with more bzr knowledge than I have - mgz?
<noodles775> jamespage: k, thanks. fwiw, I verified that I can reproduce and fix the error on my own branch like this: http://paste.ubuntu.com/7740886/
<jamespage> noodles775, it might be a side-effect of the fact that I pushed the same branch to both precise and trusty charm branches
<jamespage> but not 100% sure
<jamespage> gnuoy, can you take a look at - http://bazaar.launchpad.net/~james-page/charm-helpers/network-splits/view/head:/charmhelpers/contrib/openstack/ip.py
<gnuoy> jamespage, sure
<jamespage> gnuoy, context is for the various combinations of https, clustered-ness and network configuration a charm might be in
<jamespage> gnuoy, usage example - http://bazaar.launchpad.net/~james-page/charms/trusty/cinder/network-splits/view/head:/hooks/cinder_hooks.py#L181
<gnuoy> jamespage, I thought the charms only supported one vip
<jamespage> gnuoy, right now that is the case; this adds support for 'vip' to be a list
<jamespage> supporting multiple VIP's
<gnuoy> jamespage, this makes the last one in the list the preferred one, is that what you want ?
<jamespage> gnuoy, the last one within the subnet is fine
<jamespage> gnuoy, I'm making the assumption that people won't provide multiple VIPs on the same subnet
<jamespage> which I think is OK
<gnuoy> jamespage, lgtm
<urulama-away> i've got a service (rabbitmq-server) with a status "life: dying" due to 'hook failed: "install"' and this service just cant be removed. how can i shoot it in the head?
<urulama-away> btw, it's been dying for an hour now :D
<lazyPower> urulama: a failed hook will trap all events until you resolve it
<lazyPower> you can force deletion of the machine, then remove the service
<lazyPower> juju destroy-machine # --force
<lazyPower> juju destroy-service rabbitmq-server
<urulama> lazyPower: tnx, did that in the end
<urulama> lazyPower: what about removing containers ... i've got machine 1 (remote) with two containers ... is it possible to remove containers one by one? or is the same "remove-machine --force" the only option?
<rick_h__> urulama: you should be able to remove single containers. kadams was working on doing that in the gui and you might be able to test it if you've got the gui there
<rick_h__> urulama: though that requires a couple of extra steps
<l1fe> not sure if anyone with maas+juju background is on now, but anyone have experience where juju bootstrap works, but when trying to deploy a service (juju-gui), it enlists a node, and gets stuck in pending state?
<l1fe> if i manually provision the node, however, it works fine
<l1fe> (which kind of defeats the purpose of having maas)
<urulama> rick_h__: will try it from gui. was playing to much with "oh, let's destroy the service before the container is brought up and redeploy/destroy a few more times"
<bac> l1fe: sorry i have no maas experience.  will try to find someone who might help.
<urulama> rick_h__: ended up with no services and a lot of containers in pending state
<rick_h__> urulama: heh
<l1fe> bac: thanks
<rick_h__> urulama: ok, sounds like you're having fun :)
<urulama> rick_h__: indeed :)
<urulama> rick_h__: is quickstart the same as "create manual and then juju deploy juju-gui --to lxc:0"?
<bac> rick_h__: aren't you on vacation?
<urulama> ^^^
<bac> urulama: it does both of those things
<bac> urulama: and it deploys a bundle if you give it one
<bac> urulama: for LXC it does not --to the gui to 0 as it is not allowed
<urulama> bac: does the last statement mean that it needs lxc for machine 0 because otherwise it is not allowed (as experience shows)?
<l1fe> quick question with regards to where to install juju-core to - if i'm in a maas environment, if I install juju-core on a node in the maas cluster, will that node theoretically still be able to be enlisted by juju?
<l1fe> (that's if I ever get juju and maas to work together in the first place...)
<rick_h__> bac: urulama yes, heading out now
<rick_h__> bac: urulama was finishing hooking things up :)
<Tribaal> sinzui: Hi, I've hit https://bugs.launchpad.net/juju-core/+bug/1337340 this morning, and it smells like a race condition to me ("nothing" changed between it happening and a successful bootstrap) :(
<_mup_> Bug #1337340: Juju bootstrap fails because mongodb is unreachable <landscape> <juju-core:New> <https://launchpad.net/bugs/1337340>
<sinzui> Tribaal, I saw that same error in a test yesterday. I will look into your bug
<Tribaal> sinzui: ack, thanks!
<l1fe> anyone on with maas+juju exp? having a hard time debugging why nodes are stuck in pending state
<lazyPower> l1fe: i've run into this before
<l1fe> lazyPower: were you ever able to resolve this?
<lazyPower> l1fe: do you have any other enlistments running during the pxe boot phase
<l1fe> ah, this isn't during the maas pxe boot phase
<l1fe> this is while deploying charms in juju
<l1fe> http://askubuntu.com/questions/491306/jujumaas-when-deploying-charms-juju-gui-new-machines-stuck-in-pending-sta
<lazyPower> l1fe: what you're showing me is the node itself did not finish its enlistment phase when juju requests the machine
<lazyPower> l1fe: its helpful to know where that's bailing out. These are NUCs right? are you going straight hardware with the NUCs or are you using VMs on the NUCs?
<l1fe> straight hardware
<l1fe> the nodes are properly "ready" in maas, and after running juju deploy, they go, correctly, to allocated to root
<l1fe> after that, though, not sure what the heck is going on since no logs are created for juju on the new node
<l1fe> and nothing is logged on the already bootstrapped machine
<l1fe> if i manually provision the machine, though, everything is fine
<l1fe> but, that kind of defeats the purpose :)
<lazyPower> are you assigning the nodes before you run juju deploy?
<l1fe> no, only thing i did was bootstrap the one machine
<l1fe> based on the documentation, i kind of assumed juju+maas would handle everything else
<lazyPower> it should.
<lazyPower> so, do you see where the machine states pending in your log output in the askubuntu question?
<l1fe> yup
<lazyPower> that tells me something is happening during the enlistment / configuration phase thats gone awry. there is a -v flag you can pass to juju. can you run a deployment with the -v flag and pastebin the output?
<l1fe> not sure how to get it out of "pending"
<l1fe> ok, will do
<lazyPower> i want to isolate if its a juju issue, or a maas issue
<lazyPower> in the instances i've run into this problem - had multiple deployments running and the pxe boot never finishes loading the image
<lazyPower> destroying the unit and re-requesting usually solves the problem.
<lazyPower> it appears to be something related to my TFTP server on my host - i havent dove real deep into it - so i'm crossing fingers this isn't the same problem :) because its uncharted territory for me if it is.
<l1fe> http://pastebin.com/BM802wd5
<l1fe> it didn't look like the -v changed much in terms of output
<l1fe> whoops
<l1fe> http://pastebin.com/YD6i1m3d
<l1fe> sorry about that
<lazyPower> when you ssh into dynuc001.maas - is there anything in /var/log/juju?
<l1fe> nope
<lazyPower> ok so we aren't connecting then - one of the first things it should do is remote in and set up the scaffolding
<l1fe> if i go into auth.log on dynuc001.maas i don't even see that they try to login
<l1fe> i wonder if it has anything to do with the bug where during the bootstrap process, everything bugs out unless i first go onto the node to manually create a /var/lib/juju/nonce.txt
<lazyPower> hmm
<lazyPower> that i don't know. my exposure with maas is limited to my VMAAS setup.
<lazyPower> i dont have enough hardware to do a proper maas
<l1fe> not even sure what other logs i can look at
<l1fe> since at this point, it's like all output just drops into the void
<jrwren> why does the mongodb for localdb use so much space? 1.2G on one host, 6.5G on another
<lazyPower> jrwren: depends on how many blocks it allocates to perform the storage.
<lazyPower> jrwren: typically, mongodb will allocate ~ 2gb of storage space to get started, after that its exponential on growth - since its BSON its creating a blob on disk.
<lazyPower> and it doesnt take what it needs, it takes more than it needs for rapid growth so it stays snappy.
<lazyPower> l1fe: it makes me wonder if juju can access the node.. have you tried ssh'ing into the node's ubuntu user using the ~/.juju/ssh/id_rsa key?
<jrwren> how much does a tiny juju local need?
<l1fe> yup, juju can ssh in
<lazyPower> jrwren: that i don't know. I'd ping in #juju-dev as they have the document collection specifics
<l1fe> lazyPower: i think maybe i have the answer - quick question re: lifecycle
<l1fe> when i do a juju deploy over maas, does it signal for the node to restart and then go into pxe boot to do its stuff
<lazyPower> i'm going to start from the top and try to split apart concerns by starting the line with who's doing the work
<lazyPower> juju => Signals to maas that it needs a machine
<lazyPower> maas => Powers on the machine and runs enlistment. The node reboots and completes enlistment by loading all of the prefs and ssh keys from the MAAS server and returns a READY signal to the juju bootstrap node (or client in terms of a bootstrap)
<lazyPower> juju => Connects to the freshly provisioned server and loads scaffolding, packages, and the juju-client services
<lazyPower> deployment begins.
<lazyPower> i may have missed something in there, but thats my understanding of the complete lifecycle during a requested deployment using juju/maas
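A small watch loop makes that hand-off visible: if a machine never leaves pending, the MAAS enlistment step above is the likely culprit. `wait_for_started` is a hypothetical helper, and `STATUS_CMD` stubs out `juju status` so the sketch runs anywhere:

```shell
# Hypothetical helper: poll status until a machine reports "started",
# bounded by a timeout so a stalled enlistment surfaces as an error
# instead of sitting in "pending" forever.
wait_for_started() {
    deadline=${1:-60}    # seconds to wait before giving up
    elapsed=0
    while [ "$elapsed" -lt "$deadline" ]; do
        if ${STATUS_CMD:-juju status} | grep -q 'agent-state: started'; then
            return 0
        fi
        sleep 2
        elapsed=$((elapsed + 2))
    done
    echo "still pending after ${deadline}s" >&2
    return 1
}
```

On a real deployment this would be `wait_for_started 600` right after `juju deploy`.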
<l1fe> ok, so what happens if in maas, all the nodes are already enlisted and provisioned with something
<lazyPower> i haven't actually over-allocated - so i would *think* it would return an error since maas is aware of how many nodes it has
<lazyPower> but it might sit in pending
<lazyPower> let me try to spin up 20 mongodb nodes hang on
<l1fe> i guess i'm not sure how the lifecycle works if juju signals to maas to run enlistment, wouldn't the nodes have to already be enlisted for it to signal that it will deploy a service to it?
<l1fe> so when i do juju status, it comes back and says, i have a machine, it is called dynuc001.maas
<l1fe> and that it allocated that machine for the service juju-gui
<lazyPower> l1fe: well its not looking good for juju erroring out on not having enough nodes in its pool
<lazyPower> its still pending though - there's time for it to fail out
<l1fe> HA
<l1fe> got it to work
<l1fe> son of a...
<lazyPower> l1fe: thats great news - what was the fix/workaround? and wrt what happens when maas is over-allocated -
<lazyPower> they go forever pending.
<l1fe> alright, so basically, when you kept on saying that juju will go into pxe boot
<l1fe> something finally clicked
<l1fe> in order for me to do juju bootstrap properly on my boxes
<l1fe> i had to follow: https://bugs.launchpad.net/juju-core/+bug/1314682
<_mup_> Bug #1314682: juju bootstrap fails <bootstrap> <juju> <maas-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1314682>
<l1fe> and create my own nonce.txt
<l1fe> i think this is because the power controls for the NUCs in maas doesn't work as expected
<l1fe> and it never restarts the node properly, to allow it to pxe boot and provision everything properly
<l1fe> so it never really sunk in, that that was the lifecycle
<l1fe> i just figured it would ssh into the machine, and run the appropriate commands
<l1fe> so when i did juju deploy, i figured it would do the same thing
<l1fe> instead, based on what you said
<l1fe> it's expecting to go into pxe to have everything configured there
<l1fe> through maas (which makes sense, since why else would you have maas)
<l1fe> so i went onto dynuc001.maas and forced a reboot
<l1fe> let pxe do its thing
<l1fe> and the status finally came back as started
<l1fe> hope that made sense...on the bright side, because of these problems, i've come to understand the whole process since i had to manually intervene where normal power management would have worked haha
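For reference, the workaround from bug #1314682 that l1fe applied boils down to pre-creating the file the machine agent expects when MAAS power control fails to reboot the node. A sketch; `JUJU_DIR` is a stand-in for the real `/var/lib/juju` so this runs without root, and the file's expected content is described in the bug report rather than reproduced here:

```shell
# Sketch of the bug #1314682 workaround: hand-create /var/lib/juju/nonce.txt
# on the node. JUJU_DIR stands in for /var/lib/juju so the sketch is runnable;
# populate the file per the bug report before retrying the bootstrap.
JUJU_DIR=${JUJU_DIR:-/tmp/juju-nonce-demo}
mkdir -p "$JUJU_DIR"
: > "$JUJU_DIR/nonce.txt"    # content per the bug report
ls -l "$JUJU_DIR/nonce.txt"
```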
<jrwren> it would be cool if local provider had an option to run machine 0 in an lxc
<lazyPower> l1fe: ah, ok :)
<lazyPower> jrwren: thats on the books somewhere. to treat the bootstrap node as its own LXC container instead of hulk-smashing it on the HOST
<jrwren> lazyPower: excellent!
<niedbalski> thumper, you there?
<thumper> niedbalski: yep
<niedbalski> thumper, ok, i'll add a mock for that. not sure if this specific package has any mock library imported
<thumper> niedbalski: I'm just looking in the juju testing library
<thumper> that may help
<thumper> niedbalski: if you use "github.com/juju/testing" there is a PatchExecutable there
<thumper> in cmd.go
<niedbalski> thumper, cool, i'll use that then. thanks.
<thumper> niedbalski: thanks for the patch
<themonk> lazyPower, hi
<themonk> how to use juju-gui charm  locally?
<themonk> like if i deploy juju-gui in my lxc container then how to load local charms using juju-gui?
#juju 2014-07-04
<jose> hey cory_fu, still around?
<jrwren> rackspace uses a different openstack auth provider. Is this supported?
<Caguax> I am deploying juju on my maas server. I am having an issue with DNS. When I bootstrap the server that is getting bootstrap is getting the ip of the maas server as DNS. I will like to specify The DNS. Any one knows how to do this ?
<thumper> jrwren: I think the openstack itself is fine, the problem being that it doesn't offer cloud storage
<thumper> jcsackett: we are in the process of removing our dependency on cloud storage
<thumper> but it isn't there yet
<thumper> jcsackett: sorry, wrong nick
<Caguax> Sorry I mean to put this in the #juju not here :)
<Caguax> ahhh...I guess it too late and I am work too much :)
<thumper> Caguax: or perhaps you need to ask in #maas
<Caguax> thumper: Yeah..I try there but got crickets :(
<sarnold> it is the day before a long holiday weekend in the .us ..
<sarnold> I'm not sure why I'm still working, for example :)
<thumper> ah yeah...
<thumper> forgot about that
<niedbalski> thumper, I added a couple of tests to MR 225584
<thumper> niedbalski: ok, cheers, I'll take a look shortly
<thumper> niedbalski: replied, can you see the diff comments?
<niedbalski> thumper, ack, fixed :)
<thumper> niedbalski: review done, looking good
<niedbalski> thumper, thanks
<niedbalski> axw: I have re-submitted PR 227, if you have a chance to review/merge.
<niedbalski> thanks
<axw> niedbalski: will do
<axw> niedbalski: have you tested upgrading? I think the new value needs to have a default of "" to support existing environments
<niedbalski> axw, nope just new installs, i'll check.
<stub> I can't bootstrap my local provider today
<stub> Hmm... so foot gun, although it indicates destroy-environment --force isn't destroying enough
<stub> I have an lxc container 'juju', which I setup the other day to keep all my charming isolated and allow me to switch between different juju revs (with other containers)
<stub> My home directory is shared.
<stub> juju bootstrap on the host failed, despite running 'juju destroy-environment --force' on the host. I think with mongo access perms.
<stub> and the 'juju' lxc container was shut down, so it was not holding anything open.
<stub> Guessing lxc crud, .local/share/lxc suspect
<stub> Nope... mongo having trouble on the host
<Tribaal> stub: is that this one by any chance? https://bugs.launchpad.net/juju-core/+bug/1337340
<_mup_> Bug #1337340: Juju bootstrap fails because mongodb is unreachable <bootstrap> <landscape> <mongodb> <juju-core:Triaged> <https://launchpad.net/bugs/1337340>
<Tribaal> stub: hit that the whole day :(
<LiveOne> hi
<d4rkn3t> Hello everyone, please I need help with MaaS and bind9. The problem is that when I try to bootstrap the node via Juju, the ssh connection fails: Bind does not resolve the hostname. If I use ssh with the node's IP it works. Is there someone who can help me, please?
<high_fiver> Hi, I've created  a mysql juju charm (tried both precise & trusty) and mysql will not start (start: Job failed to start). Can anyone provide some insight into the error.log - http://pastebin.com/ZgDJ8AvE or let me know if they have heard of a bug. Thanks in advance
<sebas5384> high_fiver: try "juju set mysql dataset-size=60%"
<sebas5384> and before that "juju resolved mysql/0"
<sebas5384> till the state is ok, and then run the juju set ...
<sebas5384> ;)
<sebas5384> that should do the work
<high_fiver> sebas5384, excellent, works
<high_fiver> sebas5384, thanks man
<sebas5384> high_fiver: glad to help :)
<high_fiver> sebas5384, what does "juju resolved"  do
<high_fiver> sebas5384, ah fixes error state, thanks cool
<high_fiver> sebas5384, thanks=thats
<sebas5384> yeah! and you can use it with --retry
<sebas5384> juju resolved --retry mysql/0
<high_fiver> looking at the charm code the wordpress install is fired off once you make the relation to mysql - is that correct?
<sebas5384> high_fiver: yes!
<sebas5384> high_fiver: i'm doing the same in the drupal charm
<sebas5384> high_fiver: https://github.com/sebas5384/charm-drupal/blob/bash/hooks/db-relation-changed
<themonk> hello all
<themonk> in my charm's start hook i run a bash script from python (i am using python for charm code), and that bash script starts a child process using "command &". at this point everything is ok, but the start hook keeps running because of the process bash started, so juju status shows "agent-state: installed" and never turns to "agent-state: started". how do i fix this??
<themonk> np i solved my problem :)
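For anyone hitting the same symptom: a common cause (an assumption on my part, matching what themonk describes) is that a bare `command &` leaves the hook's stdout/stderr pipes held open by the child, so the agent keeps waiting on the hook. Detaching the child's descriptors lets the hook exit; `sleep` stands in for the real daemon so the sketch is runnable:

```shell
# Start-hook sketch: fully detach the daemon so the hook process can exit
# and the unit can move from "installed" to "started". A bare "cmd &"
# keeps the hook's stdout/stderr pipes open via the child.
DAEMON=${DAEMON:-sleep 60}           # stand-in for the real daemon
LOG=${LOG:-/tmp/daemon-demo.log}
nohup $DAEMON </dev/null >"$LOG" 2>&1 &
echo $! > /tmp/daemon-demo.pid
disown 2>/dev/null || true           # bash-only; harmless under plain sh
```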
#juju 2014-07-06
<nooky> Hello, one question, how can I set the ami id via constraints?.
<mwhudson> argh
<mwhudson> i am trying to deploy a local charm and i'm becoming very confused
<mwhudson> ubuntu@linaro-test:~$ juju deploy --to 0 --repository=charms local:trusty/newcassandra
<mwhudson> ERROR charm not found in "/home/ubuntu/charms": local:trusty/newcassandra
<mwhudson> ubuntu@linaro-test:~$ ls charms/trusty/newcassandra/
<mwhudson> config.yaml  copyright  exec.d  files  hooks  icon.svg  lib  Makefile  metadata.yaml  notes  README.md  revision  scripts  templates  tests  TODO  TODO.storage
<mwhudson> what am i missing?
<michelp> mwhudson, i'm trying that step myself, but i have a trailing slash on my repository: juju deploy --repository=/home/$USER/dev/charms/ local:trusty/openresty
<michelp> also i'm specifying an absolute path
<mwhudson> michelp: that doesn't make a difference
<mwhudson> (i'm almost relieved about that!)
<michelp> mwhudson, that's all i have to offer though, i'm just starting this myself :)
<mwhudson> and the error message has the absolute path in...
<mwhudson> heh thanks
 * mwhudson regrets his timezone
<michelp> fwiw that command works for me, my charm deploys
<mwhudson> michelp: ok
<mwhudson> thumper: hello in here :)
<thumper> mwhudson: yeah... that is a bit weird
<mwhudson> thumper: i can't even find where that error message is coming from
<mwhudson> (in the juju source i mean)
 * thumper pokes around in the code
<thumper> I wish we didn't use "curl" for "charm url" as curl means something else to me
<thumper> mwhudson: I have found the error at least
<mwhudson> thumper: oh>
<mwhudson> ?
<thumper> mwhudson: can you check the permissions on the directories?
<mwhudson> drwxr-xr-x all the way down
<thumper> bugger
<thumper> mwhudson: code is github.com/juju/charm/repo.go:473
<mwhudson> oh
<mwhudson> i was only looking in juju/juju
<mwhudson> not yet adjusted to brave new world i guess
<thumper> hmm...
 * thumper scratches his head
<thumper> mwhudson: ok, found an option
<thumper> mwhudson: the charm name inside the metadata needs to match the path, so newcassandra
<mwhudson> oooh
<mwhudson> yes
<thumper> mwhudson: if you just copied a different cassandra charm, it'll fail
<mwhudson> thumper: thanks
<thumper> mwhudson: it looks like the dirname could be anything
<mwhudson> oh
<thumper> and it is the charm metadata that it actually looks at
<mwhudson> oh ok
<thumper> that surprises me too
<mwhudson> that was the problem fwiw
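To make the gotcha concrete: the local deploy URL is matched against the `name:` field inside metadata.yaml, not the directory name. A runnable sketch (paths and charm names here are illustrative):

```shell
# "local:trusty/newcassandra" is resolved against metadata.yaml's "name:"
# field; a copied charm whose metadata still says "cassandra" produces
# "charm not found" even though the directory name matches the URL.
REPO=${REPO:-/tmp/charm-repo-demo}
mkdir -p "$REPO/trusty/newcassandra"
printf 'name: cassandra\nsummary: demo\n' > "$REPO/trusty/newcassandra/metadata.yaml"
grep '^name:' "$REPO/trusty/newcassandra/metadata.yaml"   # mismatch -> deploy fails
# fix: make the name field match the URL you deploy with
sed -i 's/^name: .*/name: newcassandra/' "$REPO/trusty/newcassandra/metadata.yaml"
grep '^name:' "$REPO/trusty/newcassandra/metadata.yaml"
```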
<mwhudson> (now my other problem is bash, sign)
<mwhudson> *sigh
#juju 2015-06-29
<veebers> If I deploy a charm and 'juju expose' it, I should be able to access that from my host, right? (local environment, charm is elastic search)
<veebers> The test curl query works if I'm on the machine itself, but not from the host
<thumper> veebers: if you are using the local provider, it is all exposed anyway
<thumper> veebers: how are you doing the query?
<veebers> thumper: it's all good, I've worked around it (using 'juju run') Its not important as im just tinkering at this point and it's not essential for the end resulting product
<frenda> - HI there, Is there any working service currently using juju gui?
<frenda> - Is juju-gui translatable?
<ddellav> morning arosales
<apuimedo> lazyPower: Hi
<apuimedo> jamespage: ping
<arosales> ddellav, morning
<jamespage> hey arosales, ddellav
<ddellav> hey guys
<beisner> jamespage, can you land this?  coreycb has reviewed.  https://code.launchpad.net/~1chb1n/charm-helpers/amulet-ceph-cinder-updates/+merge/262013
<beisner> coreycb, added functional test coverage to cinder-ceph, cinder-next, ceph-radosgw, ceph-osd and ceph;  can you review these?  http://paste.ubuntu.com/11794072/
<coreycb> beisner, yep, will do
<ennoble> Is there a way to kill a juju action that is in the running state, but isn't running any longer? juju debug-log is also repeating: ERROR juju.rpc server.go:573 error writing response: EOF & ERROR juju.worker.uniter.filter filter.go:137 tomb: dying
<thumper> ennoble: interesting
<thumper> ennoble: can I get you to file a bug and attach the relevant log parts that show the start of the uniter errors?
#juju 2015-06-30
<dweaver> Trying to deploy an openstack bundle and deploying all management services to a controller node with LXC containers.  I have multiple NICs on the physical node, but these are not exposed from the LXC containers, how do I use the charm options for multiple networks when deploying services to LXC?  Anyone got any ideas?
<stub> tvansteenburgh1: https://code.launchpad.net/~stub/charms/trusty/cassandra/spike/+merge/262608
<tvansteenburgh> stub: excellent, thanks
<stub> tvansteenburgh: There are no lxc results as yesterday's lxc run seems stuck - http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/77/
<stub> I didn't add my own timeout to the Cassandra charms, but am surprised Amulet's hasn't kicked in (at this point, I think it will be hanging on add-unit)
<tvansteenburgh> stub: ok, i'm just gonna kill it
<stub> I need another word for service-framework actions, since that term is overloaded in Juju.
<stub> Are the high level steps still called actions in services-framework-ng?
<tvansteenburgh> cory_fu ^
<cory_fu> The reactive pattern is significantly different than the services framework, and is more akin to an event (technically state) driven model.  So you will instead simply have @when decorated blocks (handlers, perhaps), in much the same way that the Hook class provides a @hook decorator currently
<stub> The timings on those Cassandra tests are all over the place. Single nodes tests take over an hour to provision a node, and then the 3 node tests go and complete in 30 mins.
<stub> Hmm.... handlers.py...
<stub> cory_fu: That sounds very similar to the @requires decorator put in charmhelpers/coordinator.py
<cory_fu> stub: There does seem to be some overlap, but the reactive pattern is intended to be more general, in a way.  It's intended to model charm behavior as responding to the evolving combined state of the charm and its various conversations with other services and the user.  It's the implementation of the things we discussed in Malta.
<cory_fu> The main idea is to extend the notion of hook events with the idea of semantically meaningful states that can be responded to in a similar way
<stub> cory_fu: Yes, just thinking that it fits in well with what you are proposing. The locks granted by the leader would be events that trigger the @when decorated block.
<cory_fu> Yeah.  It does seem like we'll definitely want to converge them, though implementation-wise it's not coming to mind right away how best to do that.  We weren't aware that this idea of locks was being worked on until just now, so we went our own direction with the states.
<cory_fu> stub: Here's what docs I have so far for the reactive pattern, if you would mind taking a look:
<cory_fu> Example charm usage: http://juju-relation-pgsql.readthedocs.org/en/latest/
<cory_fu> API docs: http://reactive-charm-helpers.readthedocs.org/en/latest/api/charmhelpers.core.reactive.html
<cory_fu> I'd like to know what you think, and how easy / difficult you think it would be to integrate.
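The handler model cory_fu describes can be boiled down to a few lines of plain Python. This is only an illustration of the dispatch idea, with assumed names (`when`, state strings), not the real charms.reactive API:

```python
# Minimal sketch of "reactive" dispatch: handlers declare the states they
# need via a @when decorator, and a tiny reactor runs whichever handlers'
# states are all currently set. Illustration only, not charms.reactive.
_handlers = []

def when(*states):
    def register(fn):
        _handlers.append((set(states), fn))
        return fn
    return register

def dispatch(active_states):
    """Run every registered handler whose required states are all set."""
    fired = []
    for needed, fn in _handlers:
        if needed <= active_states:
            fn()
            fired.append(fn.__name__)
    return fired

@when('apt.installed', 'db.connected')
def configure():
    pass

@when('apt.installed')
def start():
    pass
```

The key difference from plain hooks is that a handler fires on the combined state ("installed AND db connected"), not on a single event.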
<hazmat> interesting
<stub> charmhelpers.core.hookenv.atstart and atexit might be useful for booting up the reactor, or something similar.
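stub's atstart/atexit suggestion amounts to callback queues that bracket the hook body. A simplified stand-in (names mirror charmhelpers.core.hookenv; this sketch assumes atexit callbacks run in reverse registration order, as charmhelpers does):

```python
# Simplified stand-in for hookenv.atstart/atexit: callbacks queued to run
# before the hook body and on successful completion.
_at_start, _at_exit = [], []

def atstart(cb, *args):
    _at_start.append((cb, args))

def atexit(cb, *args):
    _at_exit.append((cb, args))

def run_hook(body):
    """Run start callbacks, the hook body, then exit callbacks (LIFO)."""
    out = []
    for cb, args in _at_start:
        out.append(cb(*args))
    out.append(body())
    for cb, args in reversed(_at_exit):
        out.append(cb(*args))
    return out
```

Booting a reactor from atstart would just mean registering its setup there once, instead of calling it at the top of every hook.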
<hazmat> stub: also curious what you thought of https://github.com/compose/governor
<stub> hazmat: I haven't gone over it, but want HA as part of my big rework of the PostgreSQL charm.
<hazmat> stub: i'm currently rewriting it to work with consul, but i've poked around it seems pretty reasonable all standard wal stuff with 9.4 replica slots
<stub> hazmat: I believe I could actually do HA in Juju now there is leadership, although I'm not sure using hooks would make it reactive enough
<hazmat> stub: although i'm trying to track the logical decoding work that 2ndQuadrant is pushing (odr/bdr)
<hazmat> stub: nothing wrong with depending on a secondary source of truth as a sidekick dep imo.
<stub> hazmat: I just added logical replication for bottledwater, which ended up working fine.
<hazmat> stub: the issue with notifications through juju is arbitrary delays from hook exec queue
<hazmat> stub: sweet!
<stub> (in review, not in PostgreSQL charmstore yet)
<stub> If we don't use hooks at all for failover, we are stuck with a shared ip or using proxies (which themselves need to be HA)
<stub> So I was thinking of pgpool-ii if the native juju approach doesn't fly, but I'll look at governor now you have pointed me at it.
<hazmat> stub: its more about keeping a secondary data store (consul/etcd) for leadership and notification
<hazmat> pgpool failover has all kinds of gotchas as do the trigger solutions.. pg native replication is the way to go, just need coordination for leader and failover scenarios
<stub> If I can't use leadership to coordinate who is primary and the cascading replicas, I'll need something to coordinate it.
<stub> But I'm thinking of a small process running on the units that does 'if is_leader and master_not_up and quorum_available: failover', with the failover process triggered by juju-run (which I think can run the operations on the other units right now, rather than waiting for hooks)
<stub> But first, rework the horrible mess of code into something less horrible. Next, add features :)
<stub> I thought pgpool-ii does support native replication.
<stub> (It has other features besides synchronous replication)
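stub's failover idea reduces to a small guard evaluated by an out-of-band process on each unit: only the Juju leader acts, and only when the master is down while quorum holds. A hedged sketch; `master_up`/`quorum_available` are hypothetical inputs, and the promotion itself would be driven by something like juju-run on the standby:

```python
# Hedged sketch of the leader-driven failover guard stub describes.
# The inputs are placeholders for real cluster checks, not charmhelpers APIs.
def should_failover(is_leader, master_up, quorum_available):
    """The guard stub quotes: is_leader and master_not_up and quorum_available."""
    return is_leader and not master_up and quorum_available

def next_action(is_leader, master_up, quorum_available):
    """What one polling pass of the watchdog process should decide."""
    if should_failover(is_leader, master_up, quorum_available):
        return 'promote-standby'   # e.g. via `juju run --unit <standby> ...`
    return 'wait'
```

Keeping the guard on the leader avoids split-brain promotions when several standbys notice the master is down at once.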
<cholcombe> juju: is it possible to recreate the run time environment that juju is using for debugging purposes?
<apuimedo> jamespage: ping
<sebas5384> jose: ping
<jamespage> apuimedo, hey - not ignoring you but mid database recovery right now
<apuimedo> jamespage: no problem
<apuimedo> I'll be online a few hours more
<apuimedo> ping me when you have some time ;-)
<hazmat> stub: re pgpool native, its the failure scenarios that it overloads with complexity imo.. also proxy and trigger mean application awareness for ddl changes
<hazmat> stub: do you know if you can setup logical decoding of wal and hot_standby on the same server or is wal mode either or
<hazmat> would be nice to add bottledwater to my current cluster setup
<cholcombe> do juju containers support running fuse in them?
<cholcombe> /dev/fuse seems to be missing
<apuimedo> jamespage: ping
#juju 2015-07-01
<sebas5384> jose: ping
<jose> sebas5384: pong
<jose> sorry, was at university
<Prabakaran> I have created a charm for Java. In my config-changed hook I have written the installation procedure, and in my start hook I run "java -version" to check whether Java installed successfully. The issue is that the deployment was not successful, the start hook failing with "java - Command not found", but when I logged into the juju local machine and ran java -version it worked
<stub> hazmat: I think you can do both. The limitation is apparently logical replication can only happen from the primary, whereas hot standbys can cascade.
<sto> hello, I'm trying to generate my own bundle to install openstack and want to deploy some services to specific machines... is that possible?
<sto> I was trying to do it with tags, but seems that the bundle does not handle them
<sto> juju-quickstart: error: bad API server response: invalid request: invalid bundle bundle: unsupported constraints: tags
<sto> I'll try to add the machines first and deploy to "1", "2", etc
<marcoceppi> sto: tags are supported, that error is very odd. Could you please show us your bundle file?
<sto> marcoceppi: I'll send it later, sure
<rick_h_> marcoceppi: sto that's quickstart/the gui. I think it doesn't support tags as they're not available across systems and doesn't support deploying bundles to 'existing machines'
<rick_h_> marcoceppi: sto using the deployer directly might prove to have better results for something specific like that.
<sto> rick_h_: aha
<sto> ok, then, I'll move to deployer
<sto> thanks for the advice
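For the record, the deployer format rick_h_ points sto at does let a bundle pin services to machines. A hypothetical v4-style fragment; the service names, charm URLs, machine IDs, and tag are all illustrative, and the exact placement syntax is worth checking against the deployer docs:

```yaml
# Illustrative only: place services on pre-declared machines.
machines:
  "1":
    constraints: tags=dashboard
  "2": {}
services:
  openstack-dashboard:
    charm: cs:trusty/openstack-dashboard
    num_units: 1
    to: ["1"]
  mysql:
    charm: cs:trusty/mysql
    num_units: 1
    to: ["2"]
```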
<wwitzel3> whit: ping pong
<whit> hiya wwitzel3
<jcastro> bdx: heya
<jcastro> bdx: did dmitri's suggestion wrt. passing --debug help find the cause?
<bfrank> hi, has there been any progress towards using juju on centos?
<alexisb> bfrank, juju v1.24.0 (currently in stable ppa) is the first juju release with CentOS workload support
<alexisb> bfrank, release notes: https://launchpad.net/juju-core/+milestone/1.24.0
<bfrank> thanks
<redelmann> hi there!
<redelmann> Is there a simple way to get all the hostnames of units in a service?
<redelmann> using charmhelper
<redelmann> something like needmyunits() => ['maas-node-1', 'maas-node-2'] etc etc
<thumper> marcoceppi, lazyPower: ^^
<hazmat> redelmann: if they have a peer relation. you can just do relation-list to get them
<redelmann> hazmat, nice, I don't have a peer relation; maybe it's time to add one to my charm.
<redelmann> hazmat, ThankYou
<hazmat> np
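hazmat's relation-list suggestion can be wrapped in a few lines of Python inside a hook. A sketch; the 'cluster' peer relation name is an assumption about the charm's metadata.yaml, and by default units only publish private-address (a hostname key would have to be set by the charm itself):

```python
# Enumerate sibling units over a peer relation using the standard hook tools
# relation-ids / relation-list / relation-get. The injectable `run` parameter
# exists so the traversal is testable outside a hook context.
import json
import subprocess

def _run(cmd):
    # hook tools support --format=json for machine-readable output
    return json.loads(subprocess.check_output(cmd + ['--format=json']))

def peer_addresses(peer_rel='cluster', run=_run):
    """Collect private-address from every unit on the peer relation."""
    addrs = []
    for rid in run(['relation-ids', peer_rel]):
        for unit in run(['relation-list', '-r', rid]):
            addrs.append(run(['relation-get', '-r', rid, 'private-address', unit]))
    return addrs
```

Note this only sees the *other* units; the local unit's own address comes from `unit-get private-address`.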
#juju 2015-07-02
<tesract> Hi #juju, I've tried out juju-gui and some charms in a local environment, and find it very compelling.  I was wondering if there is one step higher, where you can design/save an architecture and deploy the whole thing as one entity.
<hloeung> tesract: maybe look into http://pythonhosted.org/juju-deployer/?
<hloeung> tesract: or mojo which is a wrapper around juju-deployer among many other things - https://mojo.canonical.com/
<tesract> thanks hloeung
<apuimedo> lazyPower: could you please review https://code.launchpad.net/~celebdor/charms/precise/cassandra/hostname_resolve ?
<apuimedo> Or where can I see the review queue?
<apuimedo> (it was requested 04/22)
<hazmat> lazyPower: do v2 support merged
<hallblazzar> hello
<hallblazzar> has anyone encountered the error WARNING juju.replicaset replicaset.go:98 Initiate: fetching replication status failed: cannot get replica set status: can't get local.system.replset config from self or any seed (EMPTYCONFIG)?
<hallblazzar> it happened when I used juju bootstrap -e maas
<hallblazzar> I can't find how to fix it
<jamespage> wow - I just pushed charm-helpers rev 400
<jamespage> nice work folks
<beisner> jamespage, delayed response.  thanks for pushing #400;  do I win a prize?
#juju 2015-07-03
<gnuoy> jamespage, https://code.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/vpp/+merge/263224 is ready for review if/when you have a moment
<jamespage> gnuoy, will look
<sto> quick question, I'm deploying openstack using MAAS + JUJU and I'm using LXC containers for things like the openstack-dashboard
<sto> Is it possible to assign a DNS name to the LXC containers in MAAS?
<sto> Another question... is there a simple way to make MAAS install hardware nodes using LVM over two disks, or mount the second disk so nova-compute uses it for disk images?
<sto> I have some servers with small SSD discs for the boot image and bigger SATA disks for the compute nodes
<sto> VMs I mean
<sh___> hi
<jamespage> gnuoy, some comments on your merge proposal - I think it makes sense to not try to provide anything other than 'neutron-plugin-config' and use the plugin specific configuration file to add stuff to sections in neutron.conf
<jamespage> neutron-server loads them all as a dict type structure, so the plugin specific config file (ml2_conf.ini) can change and add stuff to existing sections in neutron.conf
<jamespage> it simplifies the integration alot
<jamespage> gnuoy, actually this might work nicely for nova-compute as well
<jamespage> as all openstack daemons can do this stuff, but only some do in the packaging
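jamespage's point, that neutron-server layers the plugin-specific config file over neutron.conf so a later file can add keys into already-defined sections, is standard INI layering. A small illustration in Python (the file contents and section names are invented):

```python
# Illustration of config-file layering: reading a second INI file into the
# same parser adds keys to existing sections and overrides duplicates,
# which is the behaviour that lets ml2_conf.ini extend neutron.conf sections.
from configparser import ConfigParser

def layered(*ini_texts):
    cfg = ConfigParser()
    for text in ini_texts:
        cfg.read_string(text)   # later files layer over earlier ones
    return cfg
```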
<apuimedo> lazyPower: ping
<Guest25201> I installed juju (1.24.0) on my Ubuntu box, and when I tried to add a machine using juju add-machine it has been waiting in the pending state for a long time. The log is not being updated and the status has not changed after more than 1 hr. This issue has been observed on some more Ubuntu machines in the recent past, and we were not able to resolve it after cleaning the environment and reinstalling.
<Guest205> At the same time we had a few other machines which are working fine, and we are able to deploy charms on them. The juju versions on them are lower than 1.24.0: a few have 1.23.2 and a few 1.22.0, 1.22.1 etc. We are able to work with lower versions, i.e. less than 1.24.0, but with this latest version we are getting the issue. Can you help us with this please?
#juju 2015-07-04
<el_tigro> #join /ubuntyu
#juju 2015-07-05
<danwest>  /join #maas
#juju 2016-07-04
<kjackal> Good morning juju world!
<juju_world> Good morning kjackal
<magicaltrout> *facepalm*
<kjackal> lol!
<tzon> hello
<tzon> does anybody know how to fix hook failed "update-status"?
<tzon> I cannot resolve it
<kjackal> hi tzon
<kjackal> you have an update status that is failing, right?
<tzon> kjackal, yeah it says hook failed "update-status" on nova-cloud-controller
<kjackal> ah, nova-cloud-controller is not something I know much about, but I will try to help as much as I can
<tzon> ok
<kjackal> so, lets see, you are not in a state where the unit is in an error state
<tzon> yeah it is in error state
<kjackal> when you do a juju resolved --retry <unit> you are left again with an error
<kjackal> what do the logs say?
<kjackal> juju debug-log
<kjackal> you leave the logs running in a console and you fire a resolved --retry
<kjackal> you could filter errors of only the failing unit with something like this: juju debug-log --include unit-mysql-0
<tzon> its gets me an error with the include
<tzon> I used juju debug-log | grep nova-cloud
<tzon> but I did not get any results
<tzon> :/
<kjackal> hm.... what kind of an include error?
<tzon> sorry I dont get you
<tzon> with the resolved --retry it says that it is already resolved
<kjackal> ok so if it says it is already resolved then your unit should not be in an error state
<kjackal> could you double check that?
<tzon> yeah I checked it again it is in error state
<tzon> maybe its a bug?
<kjackal> doesn't seem right. Could be a bug, but it is surprising...
<kjackal> it is a rather basic "usecase": error state -> resolve --retry
<tzon> yeah, I have resolved similar issues this way in the past, but I have no idea what's going on now
<tzon> also, I have another service that is in executing state and has been running an update for 2 days now
<tzon> this also is not normal
<kjackal> 2 days! Super strange I would expect things to expire after 2 hours or something
<kjackal> So do you know how to trigger a update-status hook?
<kjackal> juju run --unit namenode/1 'hooks/update-status'
<kjackal> this could save you some time
<kjackal> tzon: ^
<tzon> ok I will give it a shot
<tzon> it just got stuck when I ran it :)
<kjackal> Cool that means that the hook runs OR that another hook is now running
<infinityplusb> hi folks, after upgrading to 2.0-beta11 I have issues running any juju commands. Is there a way to "reset/reboot" juju?
<tzon> finally, I got a timed out error :/
<babbageclunk> infinityplusb: do you have a bootstrapped controller?
<infinityplusb> @babbageclunk: I do have a controller present when I do `juju list-controllers` but if I try and get details about the model with `juju models` it hangs
<infinityplusb> juju also hangs when I do `juju status` so I can't see what is happening
<babbageclunk> infinityplusb: What about when you run `juju status --debug`?
<infinityplusb> @babbageclunk: ah, that gives me ... something. It seems there is something "amiss" with lxd. I don't seem to have permissions to the charms that are already deployed ...
<babbageclunk> infinityplusb: Want to put the output into a pastebin?
<infinityplusb> @babbageclunk: http://pastebin.com/f5Gi51J3
<infinityplusb> which is odd, cause if I do a `groups` command, I can see I am in the lxd group
<babbageclunk> infinityplusb: ok, it seems like you can't connect to the container running the controller? Can you ssh to ubuntu@10.31.19.19?
<infinityplusb> @babbageclunk: via `juju ssh ...` no, but I can just via regular ssh
<babbageclunk> infinityplusb: hmm. Inside the container can you see jujud running?
<infinityplusb> @babbageclunk: if I do a `service jujud status` it returns it as "inactive (dead)" ... probably not a good sign
<babbageclunk> infinityplusb: no, doesn't sound great!
<babbageclunk> infinityplusb: What can you see in /var/log/juju?
<infinityplusb> @babbageclunk: many many errors - a lot similar to "juju.rpc server.go:576 error writing response: write tcp 10.31.19.19:17070->10.31.19.14:59690: write: connection reset by peer"
<infinityplusb> @babbageclunk: and lots of "broken pipe" messages
<babbageclunk> infinityplusb: Is that on the controller host?
<infinityplusb> @babbageclunk: yup, in the "machine-0.log"
<babbageclunk> infinityplusb: Maybe put it in a pastebin again? (This kind of stuff ends up using a lot of pastebins. ;)
<babbageclunk> infinityplusb_: Does the controller have anything deployed? Is the ip address it's trying to write to the host's?
<infinityplusb_> @babbageclunk: it's like 300k lines long. I'll pastebin it somewhere :P And I *think* there is stuff deployed, but I can't do a `juju status` to see what is deployed where.
<babbageclunk> infinityplusb_: So was this bootstrapped with a previous beta of juju?
<infinityplusb_> @babbageclunk: ... maybe. I (stupidly) didn't check if anything was up before updating
<babbageclunk> infinityplusb_: That might be part of the problem - I think we've had some backwards incompatible changes in the latest beta, could you try bootstrapping a new controller and see if you get the same issue?
<infinityplusb_> @babbageclunk: if I try a new bootstrap, I get a permission error about being in the lxd group (which I am).
<babbageclunk> infinityplusb_: That's weird. I don't really know much about lxd permissions - might be best to email the juju list? Sorry not to be too much help.
<infinityplusb_> @babbageclunk: nah that's cool. I've learned some new things along the way. Thanks for trying. I'll keep digging. :)
<babbageclunk> infinityplusb_: ok, good luck!
<neiljerram> Morning all!
<neiljerram> If I've written a new layer XYZ, how do I publish it, so that some other charm can say "includes: [ 'layer:XYZ' ]"
<kjackal> hi neiljerram you could/should go and register your layer at http://interfaces.juju.solutions/
<neiljerram> kjackal, I see, thanks.  What about during development?  Is that just a matter of putting the layer code under ${LAYER_PATH} ?
<kjackal> neiljerram: yes putting your layer under {LAYER_PATH} will work
<neiljerram> kjackal, Many thanks.
<kjackal> neiljerram: note that charm build will first look in your {LAYER_PATH} and then try to grab the layer from a remote repo. That means that your local copy will shadow anything else that might be out there
 * D4RKS1D3 Hi
<cargonza> fyi: openstack irc meeting in #ubuntu-meeting this week. check out the details : https://wiki.ubuntu.com/ServerTeam/OpenStackCharmsMeeting
#juju 2016-07-05
<jmartinez916> hello, I was wondering if I could get some advice on a juju bootstrap using lxd?
<RAJITH> Hi, while connecting to mariadb from a remote machine I am using the command: mysql -h hostname -u username -p password -D databasename, and getting the error: ERROR 1130 (HY000): Host '' is not allowed to connect to this MySQL server
<magicaltrout> RAJITH: remove the space after the -h I believe
<magicaltrout> mysql -hhostname -uusername -ppassword -Ddatabase
<admcleod> magicaltrout: im wondering what your irc nemesis would be called. pragmaticgrizzly? or perhaps, scientificbear.
<magicaltrout> or just
<magicaltrout> hungryfisherman
<magicaltrout> good to know you're hard at work admcleod ;)
<magicalt1out> oops byobu kill-server
<magicalt1out> wasn't the command I was looking for
<admcleod> magicaltrout: its what i do. work hard.
<magicalt1out> all things are relative I suppose ;)
<admcleod> :P
<magicalt1out> admcleod: is the review queue still the same? nothings changed yet?
<admcleod> magicalt1out: you mean re the update to the new version? it appears so
<magicalt1out> well the way us minions get stuff into it
<magicaltrout> Saiku will get its GA in the next few days and I want to get Saiku & Drill signed off if I can
<magicaltrout> once I've got Saiku 3.9 released
<magicaltrout> because it will provide connectivity to a bunch charms
<admcleod> magicaltrout: i dont think anything has changed yet, but ill ask later when the rq guys are online
<magicaltrout> cool, its no biggie just checking
<magicaltrout> be nice to finally get Saiku reviewed and signed off
<magicaltrout> jcastro: https://highlyscalable.wordpress.com/2013/08/20/in-stream-big-data-processing/ you should work on a Juju version ;)
<magicaltrout> this one is a bit old but pretty cool for instream processing background and knowledge
<magicaltrout> RAJITH unless its top secret keep it in the channel please
<magicaltrout> thats not top secret
<magicaltrout> does "hostname" return a valid response?
<magicaltrout> and hostname -f for that matter
<magicaltrout> sod it lets make saiku reactive before it goes GA
<magicaltrout> admcleod: also, I'm assuming if its up to scratch I can submit an interface to the platform somehow?
<magicaltrout> it would be good to have a drill layer so that users can hook up to it
<SaMnCo> magicaltrout: hey, I'm trying the DCOS charms
<SaMnCo> but it doesn't connect the GUI
<SaMnCo> I get : channel 3: open failed: connect failed: Connection refused
<magicaltrout> you trying a funky ssh tunnel SaMnCo or just juju expose?
<admcleod> magicaltrout: well, yeah also im not sure - sorry. ill find out
<SaMnCo> (after the SSH port forwarding is setup)
<magicaltrout> SaMnCo: on EC2 recently the straight expose works
<SaMnCo> ah, I thought I needed the SSH stuff
<magicaltrout> yeah that was initially, but it seems to have magically rectified itself
<magicaltrout> don't ask
<magicaltrout> its all in a bit of a state of flux as I was just working on the multi-master
<magicaltrout> the problem with that is DCOS don't like to add masters on the fly
<magicaltrout> and I don't like that plan
<magicaltrout> so I'm messing around trying to figure out which bits of config need prodding to make it realise that masters have been added
<SaMnCo> still no luck, I get connection refused on the public ip
<magicaltrout> on EC?
<magicaltrout> 2
<SaMnCo> yeah. Is there a route for the URL?
<magicaltrout> nope
<magicaltrout> 2 mins just bootstrapping to test
<SaMnCo> btw I'm on Xenial
<admcleod> SaMnCo: is the daemon bound to all ips?
<magicaltrout> ah
<magicaltrout> that'll do you
<magicaltrout> don't use xenial
<magicaltrout> use wily
<SaMnCo> ok, any reason for this behavior?
<SaMnCo> what's wrong with Xenial?
<magicaltrout> there were a bunch of upstart/systemd issues I was fiddling with; it would work on Xenial, but Xenial didn't exist when I first built it
<magicaltrout> so I've not updated the xenial image
<SaMnCo> right ok
<magicaltrout> but wily is updated
<magicaltrout> or just push a new xenial build, it should just work
<SaMnCo> ok, restarting from scratch then, will let you know how it goes
<magicaltrout> yeah i'm just deploying a master
<magicaltrout> it should work though
<magicaltrout> just not >1 currently
<SaMnCo> compared to the native experience with CloudFormation, what diffs should I expect?
<SaMnCo> cloud load balancer addition works OOTB?
<magicaltrout> all the bits should come up, this is basically the DC/OS advanced installation done with Juju
<magicaltrout> not sure about load balancer, didn't check that far down
<magicaltrout> spin it up and file issues on github though and I'll get round to them next week hopefully
<SaMnCo> so if you create a framework in DC/OS, will it open the AWS firewall afayk?
<SaMnCo> will do
<magicaltrout> it wont open ports not defined in the charms, which is a bit of an interesting one
<magicaltrout> so you need to prod the charm to expose other ports
<SaMnCo> hmm
<SaMnCo> ok so it's the same gap as the k8s stuff
<magicaltrout> yeah
<magicaltrout> the lack of mindreading capability
<SaMnCo> well, the CloudFormation template does it for both, so it must be possible
<magicaltrout> yeah but the firewall in EC2 is managed by Juju, so you'd have to either tell DC/OS to use the same VPC stuff or hook some action up from DC/OS to Juju to expose it
<admcleod> get the charm to do aws api calls
<SaMnCo> you wouldn't want that, because once you deploy in DC/OS, you're part of the lifecycle of that app. Since Juju won't do any scheduling in there, nor be aware of it, you want DC/OS to be autonomous and talk directly to AWS
<SaMnCo> which means the charm needs to convey AWS credentials
<SaMnCo> which is not cool since there is no secret management yet
<magicaltrout> okay i might have left it in a broken state as the UI doesn't come up ;)
<magicaltrout> oh it does
<magicaltrout> i'm too quick for it
<admcleod> the charm wouldnt necessarily need to convey credentials if juju can allocate roles to instances
<SaMnCo> that's right
<SaMnCo> interesting, the default CloudFormation for DC/OS is based on CoreOS
<magicaltrout> yup
<magicaltrout> their vagrant install for testing $hit is pretty handy for debugging my juju hacks as well ;)
<SaMnCo> have you had any issue because of the multi AZ default setup of Juju so far?
<magicaltrout> nope
<magicaltrout> how do I replace an "old charm" with a new charm push setup?
<magicaltrout> or don't I?
<mbruzek> magicaltrout: I don't think you do, it is just a new version.
<magicaltrout> yeah i thought so mbruzek
<magicaltrout> i've got some weirdness
<magicaltrout> something to do with multiseries charms
<magicaltrout> i'm clearly being moronic
<mbruzek> Which one?
<magicaltrout> http://pastebin.com/zgQ5mQ9v
<mbruzek> magicaltrout: Try pushing it without the "trusty" in the name
<magicaltrout> ah yeah that works
 * magicaltrout gets confused with these new pushing calls
 * mbruzek does too
<mbruzek> But that is actually a problem I have run into before
<magicaltrout> ah cool thats deploying updated code
<magicaltrout> thanks mbruzek
<mbruzek> You are most welcome magicaltrout
<magicaltrout> http://www.bbc.co.uk/news/technology-36711989
<magicaltrout> where do you find a spare £40k to build a huge tetris?
* lazyPower changed the topic of #juju to: Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 beta-11 release notes: https://jujucharms.com/docs/devel/temp-release-notes
<lazyPower> neiljerram - o/ ping
<shruthima> Hi kevin, Regarding IBM-IM amulet test , we are getting that issue only when we are running amulet test.
<neiljerram> lazyPower, hi
<lazyPower> hey neiljerram, just checking in post holiday.
<lazyPower> howd things go with the new proxy sub / etcd charm?
<neiljerram> lazyPower, thanks. I'm still working on the client charm mods that I need, but so far it looks as though etcd-19 is good.
<lazyPower> awesome, so that branch up for review you commented on is g2g from your perspective?
<neiljerram> lazyPower, yes
<lazyPower> (confirming so i can pilot that to land today)
<neiljerram> lazyPower, that would indeed be good
<lazyPower> perfect. I'll get on that and ping you when its merged. I'll keep my eye on the issue tracker as well.
<lazyPower> i see there was a question about xenial series, that seems to be a byproduct of having been pushed at the /trusty/ series prior. I'll see if we can remove the series from the charm url, as it supports trusty/xenial in the same charm but is listed under a single series
<lazyPower> Thanks for giving it a go and confirming for me :) Much appreciated!
<neiljerram> I think I said before that I was planning etcd-local-proxy as a subordinate charm; but now I'm working on that as a layer, that each of my two client charms will incorporate.
<neiljerram> lazyPower, I think the trusty/xenial thing is just a matter of how you publish.  If the charm metadata says both xenial and trusty (as it does?) then I think you should push to a URL that doesn't include the series.
<lazyPower> neiljerram - i did that for -3, and it still published under /trusty/
<neiljerram> lazyPower, ah OK, must be something else that I don't understand yet, then
<lazyPower> i'm going to ping the store api people for a look and will ping back if there's something specific we need to do
<lazyPower> its probably pebkac :)
<neiljerram> impossible! :-)
<magicaltrout> lazyPower: you need to get onboard with the PICNIC acronym, its far easier to read and confuses people as to why you'd be eating food when making mistakes
<lazyPower> Problem In Chair Networking in Computer?
<magicaltrout> s/Networking/Not
<lazyPower> hah
<lazyPower> i love it
<lazyPower> deal. all references to the old acronym have been scrubbed
 * lazyPower garbage collects
 * lazyPower starts swapping due to an old java module
<lazyPower> ryebot - if you have a moment can i get you to patch the nits on https://github.com/juju/docs/pull/1100?
<ryebot> lazyPower: Yeah, I'll do it asap, thanks for the headsup
<shruthima> Hi Team, we have this configuration http://paste.ubuntu.com/18552641/ for Z machines; will Juju 2.0 work properly with this? We need to raise a request for more environments, so validating before that...
<shruthima> hi kwmonroe, here is the link for the ibm-im reactive file: http://bazaar.launchpad.net/~salmavar/charms/trusty/ibm-im/ibm-im-branch/view/head:/reactive/ibm-im.sh , can you please suggest why it is hanging at fetching an empty zip from the store?
<lazyPower> shruthima - which version of juju?
<shruthima> juju 2.0
<shruthima> lazypower: juju 2.0
<lazyPower> shruthima - which beta?
<shruthima> lazypower:2.0-beta10
<lazyPower> shruthima - have you tried with 2.0-beta11? that was just released last friday
<shruthima> lazypower: no i  have not tried with beta11
<babbageclunk> marcoceppi: I'm in the process of adding a new charm hook tool - application-version-set
<babbageclunk> marcoceppi: oops, meant to ping you first!
<marcoceppi> babbageclunk: cool, do you need something from me?
<babbageclunk> marcoceppi: I'm going to add a corresponding function to charmhelpers in hookenv, but I'm not really sure what to do if the tool isn't available. Do you think I should have some fallback behaviour?
<marcoceppi> babbageclunk: there's examples of this already, it should just raise a NotImplemented error
<babbageclunk> marcoceppi: oh, ok - great!
<babbageclunk> marcoceppi: thanks
<marcoceppi> babbageclunk: I'll get you an example
<marcoceppi> babbageclunk: http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/core/hookenv.py#L848
<marcoceppi> babbageclunk: tbh, we really don't use min-juju-version, we just work around it in the charms
<babbageclunk> marcoceppi: ok, looks straightforward
<marcoceppi> babbageclunk: it should be, there's also an interesting example in status-set, where we just write out to juju-log instead
<marcoceppi> well, we used to
<babbageclunk> marcoceppi: It looks like you still do - or do you mean it used to run juju-log rather than writing to the python log (which I guess gets caught in the hook output?)
<marcoceppi> babbageclunk: well it used to just silently fall back to writing to log instead, so no exception raised at all on OSError
<marcoceppi> babbageclunk: actually, that's basically what it does now
<marcoceppi> it just silently fails to Juju-log
<marcoceppi> instead of raising an exception
<babbageclunk> marcoceppi: right. I think I'll do the same thing.
<marcoceppi> when status-set is not provided,
<marcoceppi> babbageclunk: ack, it's probably the best since it's a set only command
<babbageclunk> marcoceppi: cool - thanks for the tips!
<marcoceppi> babbageclunk: unless...are you going to be providing an application-version-get command?
<babbageclunk> marcoceppi: no, it didn't seem needed (and is a bit subtle)
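The fallback pattern marcoceppi points at, mirroring status_set, comes down to shelling out to the new hook tool and degrading gracefully when it's absent on older Juju. A simplified stand-in for what such a charmhelpers wrapper might look like; the injectable _call/_log parameters are purely for illustration and testing:

```python
# Simplified sketch of a hookenv-style wrapper for a new hook tool: try the
# tool, and fall back to logging (rather than raising) when it isn't present
# on older Juju versions.
import subprocess

def application_version_set(version, _call=subprocess.check_call, _log=print):
    try:
        _call(['application-version-set', str(version)])
    except OSError:
        # Hook tool missing (older Juju): degrade to a log message.
        _log('application-version-set unavailable; version=%s' % version)
```

The alternative discussed, raising NotImplementedError, is better when the caller must know the value was not recorded; silent fallback suits set-only, best-effort calls.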
<mbruzek> rjc ping
<mbruzek> Does anyone know if the cloud image for xenial was updated recently? I am getting a charm failure because python is not installed (at all!)
<Odd_Bloke> mbruzek: When you say Python, do you mean python2?
<mbruzek> Odd_Bloke: I sshed to the machine and python was not in the path.
<Odd_Bloke> mbruzek: Because I don't believe that's installed in xenial in general (although some specific clouds will have it installed because they have agents etc. that pull it in).
<mbruzek> Not python2 nor python3
<mbruzek> Odd_Bloke: The install hook failed because it has a python shebang.
<mbruzek> The error looked like this in the unit log:
<Odd_Bloke> mbruzek: cloud-init uses python3, so it's unlikely the instance doesn't have python3 installed.
<mbruzek> 2016-07-05 16:28:30 ERROR juju.worker.uniter.operation runhook.go:107 hook "install" failed: fork/exec /var/lib/juju/agents/unit-was-lib-0/charm/hooks/install: no such file or directory
<Odd_Bloke> mbruzek: But note that `python` will never run Python 3.
<mbruzek> Odd_Bloke: I am corrected, python3 appears to be there.
<mbruzek> But the install file in this charm has #!/usr/bin/python
<mbruzek> Which is why the install hook failed.
<Odd_Bloke> mbruzek: Python 2 has never been in the xenial cloud image, so I don't think this is a new failure.
<Odd_Bloke> mbruzek: (As I said, there are some clouds that will have Python 2, but that's not normal)
<cafaroo> Hello everyone! I've been trying to tackle maas and juju for deployment of openstack for way over a month now. When I try to bootstrap I get the following error: "2016-07-05 16:38:28 ERROR juju.cmd supercommand.go:429 failed to bootstrap environment: bootstrap instance started but did not change to Deployed state: instance "/MAAS/api/1.0/nodes/node-536ef382-4135-11e6-95ad-000c29f03191/" failed
<cafaroo> to deploy". I don't know where to start looking for faults; has anyone had the same error? The nodes are bare metal and I have confirmed that they can reach the internet through maas nat forwarding.
<cafaroo> Hope I am in the right place for this kind of question, any help would be appreciated.
<mbruzek> Odd_Bloke: OK I will dig into this more. would changing the shebang to #!/usr/bin/env python  give us python3?
<Odd_Bloke> mbruzek: Nope, "python" will always be Python 2.
<mbruzek> then we will have several charms that will not run on xenial
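The failure mode above is easy to check for mechanically. A minimal sketch (a hypothetical helper, not part of any charm tooling): flag hook files whose shebang requires Python 2, which the xenial cloud image does not ship. Note that, as Odd_Bloke points out, `#!/usr/bin/env python` still resolves to Python 2, never Python 3.

```python
# Hypothetical helper: detect charm hooks whose shebang needs Python 2.
# On xenial cloud images /usr/bin/python does not exist, so the kernel's
# fork/exec of the hook fails with the misleading "no such file or
# directory" error seen in the unit log above.
PYTHON2_SHEBANGS = ("#!/usr/bin/python", "#!/usr/bin/env python")

def hook_needs_python2(path):
    """Return True if the file's first line is a Python 2 shebang."""
    with open(path) as f:
        first = f.readline().rstrip("\n")
    return first in PYTHON2_SHEBANGS
```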
<mbruzek> cafaroo I don't know what that error message means, there is also a #maas channel on irc if this is a MAAS problem.
<mbruzek> cafaroo: you can retry the bootstrap with --debug and -v for verbose.
<mbruzek> cafaroo: then pastebin the results
<cafaroo> mbruzek: okay, I'll do that. I just found an error in cloud-init-output.log. Maybe that's what's causing it.
<cafaroo> http://pastebin.com/1bhNfmAP
<cafaroo> 'cciss!c0d0' should be my disk ill try to format it somehow
<cholcombe> rick_h_, what's your thoughts on this: https://github.com/juju/juju/issues/5766
<rick_h_> cholcombe: I'd like to compare that to the way the other openstack services handle the manual process to roll.
<rick_h_> cholcombe: it's come up about juju managing this but the "devil" in the details gets in the way: what indicates a successful upgrade, how many to upgrade at a time, etc.
<rick_h_> cholcombe: with leader election, the new application-version feature going into 2.0, and status I wonder if there's enough bits into place to build a basic layer to help this.
<cholcombe> rick_h_, hmm i'm not sure
<cholcombe> i think it would be nice to have config settings that say: upgrade x at once
<cholcombe> rick_h_, and a hook that can be called to validate that the upgrade was successful
<cholcombe> that should be enough to take care of it
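cholcombe's two knobs (a batch-size config setting plus a validation hook) are enough to express the core loop. A rough sketch, where `upgrade` and `validate` are hypothetical stand-ins for whatever action or hook the charm would expose:

```python
def rolling_upgrade(units, at_once, upgrade, validate):
    """Upgrade `units` in batches of `at_once`, validating each batch
    before moving on; abort on the first failed validation."""
    for i in range(0, len(units), at_once):
        batch = units[i:i + at_once]
        for unit in batch:
            upgrade(unit)
        failed = [u for u in batch if not validate(u)]
        if failed:
            raise RuntimeError("upgrade validation failed: %s" % failed)
```

The "devil in the details" rick_h_ mentions lives entirely in `validate`: for something like ceph it has to confirm cluster health, not just that the package upgraded.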
<marcoceppi> rick_h_: maybe, but stacks is really the way to handle this
<rick_h_> cholcombe: yea, I can see some of that. We can bring it up at next planning sprint.
<rick_h_> marcoceppi: yea, but those tend to be with across services vs a single service like ceph/etc
<rick_h_> marcoceppi: upgrading a single web app rolling/canary style shouldn't need stacks to get involved
<marcoceppi> rick_h_: even with ceph, it's still pretty complex application ceph-mon/osd/etc
<rick_h_> marcoceppi: right, he's using the external thing that happens to be there to track the state for him
<marcoceppi> rick_h_: the new application-version, will that be present in relation data implicitly?
<rick_h_> marcoceppi: it's in status atm, we can look to add it across if it's useful.
<marcoceppi> I could see http interface growing this, so that haproxy and others could spin off a new lb group as part of a blue/green rollout
<rick_h_> marcoceppi: though we're exposing it as an application level entity so guess it's not as helpful here
 * rick_h_ is slow thinking on cold meds today
<marcoceppi> rick_h_: ah, so only leader sets it?
<rick_h_> marcoceppi: it's the last unit that gets to set it atm
<marcoceppi> rick_h_: perhaps it should be scoped at leader? since it knows what's going on in the world?
 * marcoceppi will stop bombarding cold med rick_h_
<petevg> Catching up on conversations ... I left a comment on the rolling upgrade ticket -- I think that the coordinator layer can handle that case.
<bryan_att> gnuoy: ping - when I reached out to the Congress team for info about how to upstream charm-congress, they said they have no idea and to reach out to openstack-charmers, which AFAICT is a group you are involved with. How do I get the charm upstreamed to https://github.com/openstack-charmers?
<marcoceppi> hey bryan_att, gnuoy is probably EOD now
<bryan_att> marcoceppi: thanks, I'll watch for the response tomorrow
<marcoceppi> bryan_att: alternatively, since openstack-charmers is a pretty big group, poking juju@lists.ubuntu.com might help get a better response
<marcoceppi> bryan_att: can't recall for sure, but I think the github is a mirror of what's in gerrit/openstack git
<marcoceppi> and there's some process or another to get that done
<bryan_att> marcoceppi: thanks, I'll start there
<marcoceppi> thedac tinwood ^^
<narindergupta> marcoceppi: thedac tinwood gnuoy jamespage basically bryan_att is looking to push the congress charm into the github projects under openstack.org like we have for the other core charms. He is ready to maintain and upgrade it on an as-needed basis though.
<marcoceppi> narindergupta: makes sense, thedac tinwood gnuoy and jamespage (and by proxy, the mailing list) would be the best place to figure that out.
<narindergupta> marcoceppi: ok thanks
<thedac> bryan_att: Hi, so there are two things we should not confuse. One is charm store inclusion, which we control, and the other is an openstack upstream project, which openstack.org controls. You can have your charm code hosted anywhere and still get included in the charmstore. Whereas to get on openstack.org you need to be your own project with openstack.org.
<thedac> So when you are ready for your charm to be reviewed you can let us know here (which you have) or on one of the mailing lists.
<bryan_att> The OpenStack Congress project is already there - are you saying that each OpenStack service charm has to be its own project? I would expect, like python-congressclient, that this is just another repo managed by the existing OpenStack Congress team (to which I contribute). Or do you really mean there needs to be a distinct project for this?
<thedac> If you let me know where the charm code lives now I can get the ball rolling on the review process
<thedac> bryan_att: if congress is already an upstream project having the charm live there as just another repo is fine.
<bryan_att> https://github.com/gnuoy/charm-congress is the master of which my repo https://github.com/blsaws/charm-congress is a fork
<bryan_att> I work in my fork and sync with gnuoy as needed.
<thedac> bryan_att: great. I'll bring this up to gnuoy then if you already have a process working
<bryan_att> OK, should I also drop a note to juju@lists.ubuntu.com? that was recommended
<bryan_att> note that also at some point I will need this included in the charm store - since OPNFV pulls charms from there for deployment
<thedac> bryan_att: right, when you say it is ready we can do the final review and push it to the charm store
<thedac> bryan_att: re mailing lists. I think we are moving to openstack-dev with [Charms] in the subject. But we would still see juju@lists.ubuntu.com
<bryan_att> Would you suggest I copy both?
<thedac> bryan_att: openstack-dev is the primary now for openstack charms
<bryan_att> ok, thanks. I'm already on that list
<thedac> great
<alai> Hi guys, can someone take a peek to see why I can't file a bug on nuage-vsd and nuage-vsc charm?
<alai> https://bugs.launchpad.net/charms/+source/nuage-vsd
<alai> "nuage-vsd" does not exist in Juju Charms Collection. Please choose a different package. If you're unsure, please select "I don't know"
<alai> it also complains when I select 'I don't know'
<marcoceppi> alai: that's weird, https://bugs.launchpad.net/charms/+source/nuage-vsd/+filebug that link works for me
<alai> marcoceppi, the link works but after hitting the 'submit bug' button it gives the error
<marcoceppi> alai: that's odd, not sure why
<jhobbs> alai: probably need to ask in #launchpad then
<alai> jhobbs, sure i'll ask there
<holocrono> anyone using juju 2.0beta11 and a custom image and tools metadata url? I can get juju to use my custom images url, but getting this now for the tools: ERROR juju.environs.config config.go:1130 unknown config field "tools-metadata-url"
<holocrono> it looks like this option was removed in 2.0?
<mgz> holocrono: it's agent-metadata-url
<mgz> tools- spelling has been deprecated for a while
<holocrono> i wish i could find some solid docs, i should've been able to figure that out
<holocrono> i did see that option though, thanks for pointing it out
<mgz> it's the kind of thing that should be release noted, but it's easy to forget to re-mention when the compat naming is eventually removed
<holocrono> mgz: someone was telling me that juju 1 had support for kvm providers, do you know if this is going to be in 2.0?
<mgz> depends exactly what you mean
<mgz> the old local provider could use kvm instead of lxc
<mgz> the new one is just lxd
<mgz> I don't think that's changing
<mgz> we probably do want to support kvm as a container type for clouds that support it though (which isn't many of the public ones)
<holocrono> i'm talking about connecting to a private qemu/kvm host
<mgz> I think that works if you're using maas?
<mgz> or the manual provider
<holocrono> do you have any links to information on this?
<mgz> bug 1547665 for the local case
<mup> Bug #1547665: juju 2.0 no longer supports KVM for local provider <2.0-count> <juju-core:Triaged> <https://launchpad.net/bugs/1547665>
<mgz> I think maas.ubuntu.com/docs has some stuff on vmaas setup
<holocrono> thanks!
<mgz> jujucharms.com/docs/devel/clouds-manual is what you want for the other
<mgz> which doesn't explicitly mention kvm, but does say what requirements 'machines' need to meet to work with juju
<mgz> holocrono: I'm interested in your requirements/experiences here, post to the juju list with what you're up to
<holocrono> mgz: sure, i'll do that -- thanks again
<mskalka> can anyone help me with something? I have two charms that are related, one with provides:unitid and the other with requires:unitid. The relation is labeled properly and matches both metadata files. When I go into one unit with debug-hooks and do "relation-set unitid=foo" it complies just fine, but nothing shows up when I do relation-get, just what looks like the default private-address field
<holocrono> mgz: got another question for you :D
<holocrono> so it appears that the local agent metadata and file are getting downloaded in the bootstrap, but it's hanging up on trying to download the gui:
<holocrono> http://pastebin.com/3rxn1E6n
<holocrono> i see some options for setting http proxy for apt on the model, but that's not really what i need here
<holocrono> i'd prefer not to use a proxy in any case and provide everything privately anyways
<holocrono> how would i modify this setting: https://github.com/juju/juju/blob/master/environs/bootstrap/bootstrap.go#L124
<mgz> holocrono: hm, interesting. there's a flag to pass to say no gui, which is probably what you want?
<mgz> otherwise need to proxy or mirror streams
<mskalka> does anyone know why a charm would fail to report the correct number of units when using relation-list? It spits out the first unit it's related to but none of the others
<holocrono> mgz: i'd like to see the gui working.. personally don't care but it's probably the one thing that gets people excited about juju at first glance
<holocrono> mgz: i'll make an attempt to mirror it
<holocrono> but there isn't a way to override the config? I see something about specifying an environment variable
<holocrono> https://github.com/juju/juju/blob/master/environs/bootstrap/bootstrap.go#L542
<holocrono> this is the best I can do?
<mgz> holocrono: I'm not sure the private cloud case was fully thought out for gui
<mgz> we can likely improve it
<rick_h_> mgz: holocrono the idea for a private cloud is that you can manually provide the file at any time
<rick_h_> mgz: holocrono so you can bootstrap with no-gui and then juju upgrade-gui with any revision you trust
<holocrono> ah, yes okay - trying this now
<rick_h_> you can get any gui release from the streams links or from the GH page https://github.com/juju/juju-gui/releases
<rick_h_> the .tar.bz2 for each release
#juju 2016-07-06
<kjackal> admcleod: so in order to use the latest packages and distribution of bigtop you will need to change the following configuration variables
<kjackal> "bigtop_version": "bigtop-master"
<kjackal> "bigtop_release_url": "https://github.com/apache/bigtop/archive/master.zip"
<kjackal> "bigtop_repo-x86_64": "https://ci.bigtop.apache.org/job/Bigtop-trunk/BUILD_ENVIRONMENTS%3Dubuntu-14.04,label%3Ddocker-slave/lastSuccessfulBuild/artifact/output/apt/"
<admcleod> kjackal: where are those values?
<admcleod> kjackal: in which file
<kjackal> these are config variables of the bigtop-base layer
<kjackal> so they have to be set at the layer.yaml of your charm
<kjackal> It would be nice if we had the option of setting them at deployment time
<kjackal> right now they have to be set at build time
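As a concrete illustration, the build-time settings kjackal lists would go in the consuming charm's layer.yaml roughly like this. The layer name `apache-bigtop-base` and the exact option nesting are assumptions; check the bigtop-base layer's own layer.yaml for the authoritative names:

```yaml
# hypothetical layer.yaml fragment for a charm built on the bigtop-base layer
includes: ['layer:apache-bigtop-base']
options:
  apache-bigtop-base:
    bigtop_version: bigtop-master
    bigtop_release_url: https://github.com/apache/bigtop/archive/master.zip
    bigtop_repo-x86_64: https://ci.bigtop.apache.org/job/Bigtop-trunk/BUILD_ENVIRONMENTS%3Dubuntu-14.04,label%3Ddocker-slave/lastSuccessfulBuild/artifact/output/apt/
```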
<kjackal> admcleod: there is something else to be aware of. The patches under resources inside the bigtop base have now been grouped per release
<admcleod> kjackal: i see. ok, thanks
<kjackal> for master the directory is bigtop.master
<kjackal> the idea is that as soon as the 1.2.0 release is out we should move all patches to a new directory, rebuild our charms and publish them again
<kjackal> there is a ticket+email that describes the situation
<admcleod> kjackal: yes im aware of that, thanks
<eeemil_> I'm new to Juju. I'm trying to deploy openstack-base. When I'm trying to deploy using the GUI, an error occurs instantly: 'The following errors occurred while retrieving bundle changes: unknown object type "ChangeSet"' and nothing happens.
<eeemil_> When trying with the CLI, stuff seems to be deploying properly when inspected through "juju status" (however, the GUI doesn't show that anything has been deployed). Things seem to get stuck, though, on the Keystone unit not recognizing its database connection.
<eeemil_> Anybody knows where I should start to diagnose stuff? Shouldn't the services deployed through CLI show up in the GUI?
<eeemil_> I'm deploying on AWS by the way.
<babbageclunk> eeemil_: It sounds like there's a version mismatch between the GUI and the version of juju you're running. What beta are you on?
<babbageclunk> eeemil_: Are you on the mailing list? There was an API change recently that would cause this.
<babbageclunk> eeemil_: https://lists.ubuntu.com/archives/juju/2016-July/007505.html
<babbageclunk> eeemil_: That has instructions to upgrade the GUI so it'll work with the new API..
<eeemil_> I'll look into it, I'm not on the mailing list but have subscribed now!
<babbageclunk> eeemil_: As far as the keystone unit not connecting to the database - maybe try looking in the logs? "juju debug-log --replay"
<eeemil_> Ahh, wasnt aware of the --replay-flag. Thanks! I'll look into it further. Do you know of any more helpful commands for debugging charms? I tried SSH:ing to the machines but couldnt find much of use.
<magicaltrout> you can look at the log on the charm itself in /var/log/juju as well
<magicaltrout> but its identical to the replay log, so you won't see much else :)
<magicaltrout> of course ssh'ing when you have an error does allow you to double check configuration settings etc manually
<magicaltrout> also you can do stuff like juju run --unit dcos-master9/0 "hooks/install" to forcibly re-run a hook
<eeemil_> Yeah, the juju run-command seems really helpful, I didn't know you could force-run a hook like that!
<eeemil_> By the way, babbageclunk: the newer juju gui version did the trick, I can now see everything through the gui! :)
<babbageclunk> eeemil_: Great!
<kjackal> SaMnCo: admcleod: Michael replied and he seemed realy eager to work with us. Are you interested in talking with him? Do you have any time constraints?
<admcleod> kjackal: sure
<SaMnCo> Not after 5PM please, got people at home after that
<SaMnCo> Otherwise my calendar is up to date
<SaMnCo> kjackal: ^
<kjackal> Ok SaMnCo, thanks
<jamespage> marcoceppi, https://code.launchpad.net/~james-page/charm-helpers/apache-2.0/+merge/299320
<jamespage> marcoceppi, that was quick
<marcoceppi> jamespage: lgtm
<marcoceppi> jamespage: do you forsee major issues with moving charmhelpers to gh? I know it'd be an undertaking, and we'd have to setup lp to mirror gh for legacy - curious your opinion
<jamespage> marcoceppi, +1
<jcastro> admcleod: heya
<jcastro> are you guys going to submit to big data spain
<jcastro> http://www.bigdataspain.org/
<marcoceppi> jamespage: cool, I'll take a stab at initial migration later this week
<geetha> Hi, when I deploy my charm using juju 2.0 and try to change a default value for a config option using the `juju set-config` command, the config-changed hook is triggered but the config.changed.<option> state is not set. Can anyone please advise?
<cory_fu> kwmonroe, petevg, admcleod, kjackal: I don't know if you guys saw the thread on the juju-dev mailing list about automatic hook retries, but we should get in the habit of using `juju set-model-config automatically-retry-hooks=false` when doing our testing to help us spot any hook failures that go away on the retry (I know that came up at least once in the Bigtop charms).
<cory_fu> (Note: that's a 2.0 command)
<petevg> cory_fu: roger that.
<kjackal> cory_fu: ack!
<mgz> cory_fu: that seems reasonable to me.
<cory_fu> If anyone was interested in the thread, it starts with https://lists.ubuntu.com/archives/juju-dev/2016-June/005714.html
<bdx> openstack-charmers: hey what's up guys? I know, I just missed the openstack meeting, which is where I was hoping to bring these up ... but I have a few usability concerns I wanted to get some input on if you don't mind
<mskalka> marcoceppi, I'm adding some functionality to cholcombe's Juju crate and a question came up: Does the relation-ids function work outside of a relation hook? If so, would you just have to specify the relation identifier?
<rick_h_> cherylj: do you know if anyone's around for your team's sync this week?
<cherylj> rick_h_: for the core leads call?  I seriously doubt it
<rick_h_> cherylj: more for the current "let's chat with rick_h_" call but yea
<cherylj> tim's out, ian's out, alexis is out, menno is out
<cherylj> oh
<rick_h_> ok, will just kill off then
<cherylj> yeah, I'm more certain about that one
<cholcombe> rick_h_, are there any plans for cross controller relations?  Say for example aws east and aws west juju controllers talking to one another?
<magicaltrout> yes cholcombe
<magicaltrout> and cross providers
<cholcombe> magicaltrout, oh man that's awesome!
<magicaltrout> (i'm only saying yes because I asked the same a few times)
<cholcombe> magicaltrout, is there a roadmap for that?
<magicaltrout> dunno, it cropped up on the mailing list a few times
<cholcombe> getting ceph radosgw to federate properly kinda needs it
<magicaltrout> I used the monitoring problem as a good explanation, if I want to monitor my services, I don't really want my monitoring on the same infrastructure
<cholcombe> yeah that's a good example
<magicaltrout> or if I have a bunch of models, I also don't want them all to have their own monitoring
<magicaltrout> I want a central service my models can relate to
<cholcombe> yeah i think that would be really helpful to have
#juju 2016-07-07
<nottrobin> Are interfaces a thing of the past? I found this URL on the NFS charm page - http://jujucharms.com/interfaces/mount - which appears to no longer exist.
<rts-sander> hey I installed juju-core from ppa:juju/stable but there's no juju command
<rts-sander> had to install juju-1-default, now there's a juju command..
<admcleod> jcastro: i think thats probably a good idea, ill bring it up today (big data spain)
<admcleod> jcastro: only problem is it conflicts directly with apachecon
<magicaltrout> aye
<magicaltrout> oh it only conflicts with  ApacheCon Core, not ApacheCon Big Data
<magicaltrout> that would have been really silly
<magicaltrout> i'm submitting talks to both and planning a week in the autumn sun ;)
<eeemil_> What is the best practice for installing (and later running) an external program from a charm? If, for example, I would like to download source from git and compile it during the install-hook, where should I place the executable? Would it be fine to symlink, "ln -s $GIT_ROOT/bin/program /usr/local/bin"?
<magicaltrout> general best practice would dictate you compile it prior to charm building I would have thought, sysadmins would cry if you told them you were compiling at install time ;)
<magicaltrout> other than that, your suggestion seems fine eeemil_
<eeemil_> Hehe. :) Thanks!
<eeemil> I'm still quite new to Juju, when I try to deploy some units, their public address (as reported by `juju status`) is a 10.0.0.*-address. Commands such as `juju ssh unitname/0` don't work. Are some units supposed to behave this way? If not, what could be wrong?
<magicaltrout> I don't believe they are supposed to work that way eeemil
<magicaltrout> did I spot you are an openstack user? If so, I've never touched it so I could be wrong
<magicaltrout> but it sounds unlikely they should be spinning up and unaccessible
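A quick way to confirm what the exchange above suspects, namely that the reported "public" address is actually from a private range, is the stdlib `ipaddress` module (illustrative only):

```python
import ipaddress

def is_private_address(addr):
    """True when addr falls in a private range (10/8, 172.16/12,
    192.168/16, etc.) - such an address won't be reachable for juju ssh
    from outside the cloud's network."""
    return ipaddress.ip_address(addr).is_private
```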
<eeemil> Yeah, I'm working on a charm that is supposed to work with openstack. I myself have quite limited experience of the inner workings of openstack, so right now I'm learning about OpenStack and Juju at the same time... :)
<magicaltrout> yeah I've not touched it before, I prefer a simpler life :)
<magicaltrout> see if jamespage et al, are around
<magicaltrout> they should be able to help
<eeemil> Haha ;) Sure!
<jamespage> morning folks
<jamespage> eeemil, are you deploying to an openstack cloud?
<jamespage> or deploying a openstack cloud?
<jamespage> eeemil, btw we also have an #openstack-charms channel now as well
<eeemil> I'm deploying a openstack cloud on aws!
<magicaltrout> sounds like a cost effective way of getting a cloud.... in a cloud....! ;)
<admcleod> magicaltrout: ah good point re the confs
<magicaltrout> i have many good points
<eeemil> I heard you like clouds so i put a cloud in your cloud, so you can compute when you compute... I'm just testing stuff out. :)
<jamespage> ionutbalutoiu, hey - I just made some comments on https://review.openstack.org/#/c/335276
<jamespage> needs some test case fixes to deal with different mechanism driver configurations between releases.
<jamespage> dosaboy, hey can you join #openstack-charms pls
<jamespage> ionutbalutoiu, hey can you join #openstack-charms as well pls
<jcastro> admcleod: magicaltrout: it might be worth having 2 teams go, one to each show, or an overlap, whatever makes the most sense?
<magicaltrout> jcastro: ApacheCon Core in Vancouver wasn't anything special, so having Big Data folk go to ApacheCon Big Data and then onto Big Data Spain would probably suffice... who knows
<magicaltrout> I have Pentaho Europe Community meetup the week before as well, so I'll be doing Antwerp -> Spain -> Spain
<magicaltrout> I need a bigger suitcase
<admcleod> jcastro: yeah, basically what magicaltrout said
<magicaltrout> see
<magicaltrout> I have many good points
<jcastro> are they both in barcelona or different parts of spain?
<magicaltrout> Seville and BCN
<jcastro> dang, further than I was expecting
<admcleod> seville?
<jcastro> yeah
<jcastro> that's not a train ride is it?
<magicaltrout> just get mark to send his jet
<admcleod> bigdataspain is in madrid
<magicaltrout> ah
<magicaltrout> fair enough, I assumed jcastro knew what he was talking about :P
<admcleod> magicaltrout just wants to go to seville
<magicaltrout> no
<jcastro> magicaltrout: Seville and BCN
<magicaltrout> ApacheCon is in Seville
<magicaltrout> Big Data Spain... it is in Madrid
<magicaltrout> jcastro mentioned barcelona first
<magicaltrout> so I blame him
<admcleod> hmm.. something is also happening in bcn
<admcleod> or i thought it was. anyway, madrid to seville is only 2.5 hr
<jcastro> it sounds like a big data team spanish train tour is in order
<magicaltrout> hehe
<magicaltrout> what are the trains like admcleod ? :)
<jcastro> I took madrid->caceres by train once and they were awesome
<magicaltrout> I've not travelled in spain just been for meetups
<dosaboy> jamespage: sure thing
<ionutbalutoiu> jamespage, thank-you! I will fix them asap. Joined #openstack-charms as well :)
<magicaltrout> jcastro: http://conf.dato.com/2016/us/agenda_day2/ this one just popped up in my feed. Too late this year but worth keeping an eye out for next year
 * jcastro nods
<magicaltrout> also Intel are making some efforts to gain traffic in that sector with TAP
<magicaltrout> http://trustedanalytics.org/what-is-tap/ you guys might want to make contact with them to discuss Juju with them
<jcastro> oooh
<magicaltrout> I can tell you who to make contact with
<magicaltrout> 2 mins
<magicaltrout> https://twitter.com/mchmarny find this guy on linkedin or somewhere and ping him
<jcastro> dang, the installation pages are blank github wiki pages
<magicaltrout> the link is borked jcastro
<magicaltrout> https://github.com/trustedanalytics/platform-wiki-0.7/wiki
<jcastro> got it
<jamespage> ionutbalutoiu, awesome!
<neiljerram> mbruzek, are you around for another question about etcd/TLS?
<mbruzek> neiljerram: Yes I am
<neiljerram> mbruzek, Thanks. I wanted to check my understanding of how client certificates are generated.  IIUC, the etcd leader unit (via layer-tls and tlslib) generates just one client certificate, and hands this out to however many etcd clients connect to it - as opposed to, say, generating a different client certificate for each client that connects.
<neiljerram> mbruzek, Is that correct?
<mbruzek> neiljerram: Yes I think for the client certificate that is correct. Each peer (unit of the same charm) generates a certificate signing request (CSR) and the leader/CA signs a server certificate for them.
<mbruzek> neiljerram: but yeah the client cert/key is only generated once and shared on leader data if I am not mistaken.
<neiljerram> mbruzek, Thanks. I don't see any problem with that, just wanted to check.
<mbruzek> neiljerram: OK
<neiljerram> mbruzek, A detail, though: does this imply that the client cert doesn't need to have a name in it (e.g. hostname) that is specific to each client?  Is that because the etcd/TLS server doesn't check any such field?
 * mbruzek is checking the code
<mbruzek> neiljerram: OK it looks like the client certificate is generated with the CN of the leader server, and the same Subject Alt Names as the leader...
<mbruzek> But I thought for some reason that easy rsa omitted the name in the client cert generation
<neiljerram> Unfortunately I don't have a deployment up right now, so can't confirm that...
<mbruzek> I am bootstrapping... let me check this out
<mbruzek> neiljerram: Are you getting a problem with the client cert?
<neiljerram> No, not at all - just wanted to fully understand how things are working.
<mbruzek> OK
<mbruzek> neiljerram: I am spinning up an etcd cluster now to verify, you can ask all the questions you wish.
<neiljerram> mbruzek, I have a cluster on the way up too...
<mbruzek> http://paste.ubuntu.com/18701906/
<mbruzek> neiljerram: This is the client.crt on an etcd cluster. The issuer is always going to be set, but the Subject name is "client"
<neiljerram> OK.
<mbruzek> It looks like the code does set subject alt name to the three things identifying the server: IP Address:10.84.141.15, IP Address:10.84.141.15, DNS:juju-f9bdca-0
<mbruzek> It probably doesn't need to do that
<neiljerram> Yes, I see that too.
<neiljerram> So presumably, when an etcd client submits its certificate, the server (i.e. etcd master node) is happy with "client" ?
<mbruzek> That is my understanding. the same TLS code is used for kubernetes and I have been successful using the client key in kubernetes as well
<neiljerram> OK, that's great, thanks very much for checking all this.
<mbruzek> neiljerram: If you have any problems or want us to change they way they are generated:  https://github.com/juju-solutions/layer-tls/issues
<neiljerram> Sure - thanks!
<mbruzek> neiljerram: I don't think SANS should be set for the client, as this could be used by any client
<neiljerram> mbruzek, I'm not sure I understand the impact there...
<mbruzek> neiljerram: We worked hard to get the SANS in the server certificates so they could be referred to by the public and private addresses and dns name. I don't think the client certificate needs that. I don't think there is a problem with them being there, just not technically correct.
<neiljerram> mbruzek, Do you mean "not technically correct" because those SANS are names/IPs of the server and not of the client?  If so, I think I understand :-)
<mbruzek> yes. My understanding of the client certificate is it can be used by any client (like your laptop) and if the SANs are baked in then those addresses are not correct.
<mbruzek> I don't think they are checked
<mbruzek> We were working so hard to get SANs in the server cert I must have put them everywhere.
<neiljerram> :-)
<mbruzek> tls is hard
<icey> is there a reasonably direct way of accessing all unit IPs of a service in amulet?
<marcoceppi> icey: loop over the dict?
<marcoceppi> icey: public or private?
<icey> marcoceppi: doesn't really matter, I'm testing something that is outside of Juju's scope but I want to test from amulet
<jrwren> icey: x['public-address'] for x in d.sentry[appname] ?
<icey> jrwren: thanks, looks like it should do it :)
 * marcoceppi recommends doing x.get() to avoid key errors
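Putting jrwren's comprehension and marcoceppi's `.get()` suggestion together (assuming, per the snippet above, that iterating `d.sentry[appname]` yields dict-like unit info):

```python
def unit_addresses(deployment, appname, key="public-address"):
    """Collect the given address field from every unit of a service,
    using .get() so a missing key yields None instead of a KeyError."""
    return [unit.get(key) for unit in deployment.sentry[appname]]
```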
<arosales> marcoceppi: coreycb: urulama|afk just let me know that there can only be one promulgated charm per user across all series
<arosales> so there can't be a promulgated xenial zookeeper under cory and  separate promulgated precise zookeeper under james
<arosales> the issue this really brings up is how we handle upgrades between non-layered and layered charms
<mbruzek> neiljerram: Still here?
<neiljerram> mbruzek, Hi, yes.
<mbruzek> neiljerram: https://github.com/juju-solutions/layer-tls/pull/40
<mbruzek> neiljerram: Please review and let me know if that is OK with you
<neiljerram> mbruzek, Sure, will do.
<mbruzek> I built etcd with that tls layer, everything still worked and there were no SANs in the client certificate.
<lazyPower> mbruzek - i pruned the layer-tls issue tracker. it looks like we hadn't been in there in a bit, and had closed out an additional 3 bugs
<mbruzek> lazyPower: I would like your help on one of them
<lazyPower> sure, whats up?
<mbruzek> lazyPower: Wasn't there a need to put additional SANs in the leader's cert for 10.1.0.1 ? Or was that an issue on a different layer?
<lazyPower> mbruzek - we do need to get the SDN address of the unit added to its CSR
<mbruzek> lazyPower: I need your help on how to do that in a generic way, that sounds like something the tls layer will have to grow support for
<lazyPower> i dont think we have any notion of delaying until the SDN probe has completed. we might have to re-tool the sequence of states so the SSL CSR Generation is dependent on the SDN being available first
<lazyPower> mbruzek - so, i guess there's 2 approaches here. either we delay CSR generation, or we have ot account for and allow charms to update their CSR and re-request a certificate
<mbruzek> lazyPower: hrmm. that is a problem. I thought the toughest problem would be to have a layer above send additional SANs to the tls layer
<lazyPower> we can use the unitdata module to pass info through the layers, i dont think thats as tough of a nut to crack as getting the sequencing right and not taking 30 minutes to deploy k8s
<lazyPower> if we delay on sdn turnup, that'll likely bloat the install time :/
<lazyPower> but there's really no way to know what ip range we've been given until that happens
<mbruzek> Good point batman, this is a tough riddle.
<lazyPower> mbruzek - i'm not sure just yet what is the right way to do this... i guess we can "pick a path and revise"
<mbruzek> lazyPower: we pass in a cidr config parameter for kubernetes. Could we not calculate the SDN address based on the cidr?
<lazyPower> thats a huge range we've given it by default
<lazyPower> thats a big guess :)
<lazyPower> mbruzek - i think we're hard coding that 10.1.0.1 address for the api server. Its that or when we spin up, flannel is always getting the exact same response for which cidr to consume.
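mbruzek's idea of deriving the SDN address from the configured cidr is straightforward with the stdlib `ipaddress` module. This is a sketch only, not the charm's actual code; whether flannel really hands out the first host of the range (the "big guess" lazyPower objects to) is exactly the open question in the exchange above:

```python
import ipaddress

def first_host(cidr):
    """Return the first usable host address of a CIDR block,
    e.g. 10.1.0.0/16 -> 10.1.0.1 (the address hard-coded for the
    api server in the discussion above)."""
    return str(next(ipaddress.ip_network(cidr).hosts()))
```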
<Prabakaran> Hello Team, Getting this error http://pastebin.ubuntu.com/18712357/ while bootstrapping the environment in a Juju 2.0 installation on an Ubuntu Trusty Power machine. Please advise.
<Prabakaran> The Juju 2.0 installation steps I followed are at http://pastebin.ubuntu.com/18712502/
<rick_h_> Prabakaran: Juju 2.0 uses lxd which is in xenial. The devel PPA is not supported with lxd on trusty at this time
<rick_h_> Prabakaran: ic, you grabbed lxd from backports.
<rick_h_> seems there's some combo of issue init'ing lxd there. can you specify what version of lxd you have?
<Prabakaran> <rick_h_> Do u have any command to check lxd version
<rick_h_> Prabakaran: lxd --version
<Prabakaran> it is 2.0.3
<rick_h_> Prabakaran: do you get an error message if you run the lxd command manually that might give a hint? From line 5 in the pastebin with your output from bootstrap
<Prabakaran> I am not getting any error when i am running lxd commands manually.... I am getting this issue while bootstrapping juju
<rick_h_> Prabakaran: right, but Juju tried to run that command and it failed.
<kwmonroe> Prabakaran: can you paste the logs from  /var/log/lxd/lxd.log and /var/log/lxd/juju-*/*.log?
<rick_h_> Prabakaran: so I'm wondering if it works correctly by hand or gives more clear messaging than "exit 1"
<rick_h_> ah, and kwmonroe points out the logs that might be much more useful :)
<jobot> hello, I am trying to push a charm to the store, but "charm login" gives an error ERROR login failed: cannot get user details for .... / When I login to the SSO page, it shows that the charm command has logged in properly though.
<Prabakaran> Logs are here //paste.ubuntu.com/18713495/
<rick_h_> jobot: can you run charm whoami ?
<Prabakaran> Error: whoami is not a valid subcommand  usage: charm subcommand    Available subcommands are:     add     build     compose     create     generate     get     getall     help     info     inspect     layers     list     promulgate     proof     pull-source     refresh     review     review-queue     search     subscribers     test     unpromulgate     update     version
<rick_h_> Prabakaran: sorry, meant jobot there
<jobot> whoami gives an error that says i'm not logged in
<Prabakaran> kevin i am seeing lots of logs are in the sub directories
<urulama|afk> jobot: did you login through jujucharms.com as well?
<Prabakaran> See these logs if it helps
<Prabakaran> http://paste.ubuntu.com/18713849/ http://paste.ubuntu.com/18713851/ http://paste.ubuntu.com/18713852/
<rick_h_> urulama: ah, might be a "I don't have a record in charmstore" issue?
<urulama> rick_h_: yes
<urulama> jobot: so, please login through jujucharms.com first, then use charm login again
<jobot> ok will try thanks
<jobot> @urulama that worked. thank you guys
<rick_h_> jobot: what docs were you following? We should update for that case
<jobot> https://jujucharms.com/docs/devel/authors-charm-store#submitting-a-new-charm
<rick_h_> jobot: ty, will go through that and make sure that's in there
<cory_fu> petevg: https://plus.google.com/events/col2a1aertqj329rjb8tgg314u0
<cory_fu> petevg: (related to https://github.com/juju-solutions/jujubigdata/pull/59)
<urulama> rick_h_: ok, adding an issue for those docs
<kwmonroe> Prabakaran: still looking.. i have trusty/lxd working on intel, so i'm trying to compare your logs with mine.
<petevg> cory_fu: cool. I'll make a note to drop in.
<Prabakaran> Is there a clash with Juju 1.25, which I had installed before on the same machine? I uninstalled Juju 1.25 and then tried Juju 2.0
<kwmonroe> Prabakaran: there shouldn't be a clash.. when you install juju-2.0, it should have made 2.0 the default.  i have both juju-1 and juju-2 installed on my host and they work fine together.
<kwmonroe> Prabakaran: can you verify your user is indeed in the lxd group?  run "id | grep lxd"
<kwmonroe> Prabakaran: it looks like something related to seccomp -- line 132 of http://paste.ubuntu.com/18713851/ says "ERROR    lxc_seccomp - seccomp.c:lxc_seccomp_load:615 - Error loading the seccomp policy"
<kwmonroe> but i'm not sure what that really means.  anyone else have issues with lxd on trusty?
<kwmonroe> specifically with failures to bootstrap
<kwmonroe> Prabakaran: can you paste the output of 'lxc-checkconfig'?
<lazyPower> kwmonroe - i didn't know we supported lxd provider on trusty. i thought it was wily+
<kwmonroe> lazyPower: i don't know that we do either, but i know it's working well for me using 14.04.4 (kernel 3.13.0-91-generic) with lxd-2.0.3 and juju2 beta 11.
<lazyPower> nice
<lazyPower> #TIL
<lazyPower> i suppose it was only a matter of time before the packages got backported
<kwmonroe> yup yup, trusty-backports ftw
<Prabakaran> kwmonroe .. id | grep lxd  ----> Shows nothing
<Prabakaran> lxc-checkconfig ----> output is http://pastebin.ubuntu.com/18718695/
<kwmonroe> Prabakaran: woohoo! i think we have an easy fix.. do you see your user listed with 'grep lxd /etc/group'?
<Prabakaran> lxd:x:121:pll
<Prabakaran> here i have a question..can i install juju 2.0 as a non-root user?
<Prabakaran> i have tried in both the ways
<kwmonroe> ok Prabakaran, try running 'newgrp lxd' as the pll user.. then you should see output from 'id | grep lxd'
<kwmonroe> dunno about juju-2 as non root.. i always install with 'sudo apt install juju-2.0'
<Prabakaran> here i am not able to run this command newgrp lxd.. so i switched to root
<Prabakaran> and got this o/p "uid=0(root) gid=121(lxd) groups=0(root),121(lxd)"
<Makyo> Hi folks.  Trying to bootstrap an LXD controller, but running into a problem on 2.0-beta12: http://paste.ubuntu.com/18719911/ Any thoughts on this?
<Makyo> nvm, got it.  Wish the error mentioned --upload-tools :/
<lazyPower> Makyo - rockin that not-in-devel-ppa beta release i see :)
<kwmonroe> Prabakaran: were you able to bootstrap with the root user?
<Makyo> lazyPower: GUI dev knows no bounds.
<Prabakaran> no i am not able to do bootstrap
<kwmonroe> same error?
<Prabakaran> kwmonroe , yes i am getting same error "ERROR failed to bootstrap model: cannot start bootstrap instance: Error calling 'lxd forkstart juju-f46a2e-0 /var/lib/lxd/containers /var/log/lxd/juju-f46a2e-0/lxc.conf': err='exit status 1'"
<kwmonroe> Prabakaran: anything new in /var/log/lxd/juju-f46a2e-0/*.log?
<Prabakaran> i think the logs look similar.. anyway see the logs http://paste.ubuntu.com/18722657/ http://paste.ubuntu.com/18722658/ http://paste.ubuntu.com/18722660/
<Prabakaran> <kwmonroe>, If you find any fix on this ...please mail me on the same. And its time for me to sleep.. Thanks :)
<kwmonroe> np Prabakaran -- have a good night!
<jobot> Hello again. After doing ,"charm push" and/or "charm publish" does the code eventually end up on launchpad?
<rick_h_> jobot: no, it's disconnected from LP
<rick_h_> jobot: do your charm dev where you wish, and push to the store when it's ready
<lazyPower> \o/  its liberating to say that huh rick_h_
<rick_h_> :)
<jobot> Hah ok thanks. But to connect bug reporting, I would still need to forward to something like launchpad?
<rick_h_> jobot: sure, there's a charm set-bug-url and set-homepage-url I think that lets you set those attributes
<rick_h_> jobot: so supply those to where the charm is managed
<jobot> Ok. Lastly, I recall an ingestion cycle from lp to the store. Is that similar for the new method? The charm is not yet searchable
<lazyPower> jobot https://jujucharms.com/docs/devel/authors-charm-store - we have a document guiding how the new publishing process works
<lazyPower> jobot - any feedback here is appreciated, as it's still relatively new documentation, and we may have forgotten something or need to clarify some steps.
<jobot> Ok thanks for your help
<lazyPower> mbruzek - ping
<mbruzek> pong
<lazyPower> have a sec to take a look at this? https://code.launchpad.net/~lazypower/charms/trusty/kibana/dockerbeat-dashboard/+merge/297804
<mbruzek> 25 files of json added? Are these custom files or did you copy them from somewhere?
<lazyPower> mbruzek - imported from a fork of the beats dashboard repository
<jhobbs> admcleod: hi there, any chance you could have a look at https://bugs.launchpad.net/juju-core/+bug/1600054 and see if I'm doing something obviously wrong or unsupported?
<mup> Bug #1600054: Running generate-image twice with separate virt-types overwrites rather than appends <v-pil> <juju-core:New> <https://launchpad.net/bugs/1600054>
#juju 2016-07-08
<kjackal> micvog: is that you Michael?
<kjackal> Good morning juju world!
<admcleod> jhobbs: hi - i ... can have a look at that but not really sure if you've got the right person
<magicaltrout> fix it admcleod !
<admcleod> magicaltrout: *sends box of white chocolate blue cheese and anchovy balls*
 * magicaltrout hurls them over the fence
<admcleod> haha.. chocolate anchovies are a thing. thanks google.
<magicaltrout> bad times
<kjackal> hey admcleod, we setup a VM for micvog and we are ready to move on with juju
<micvog> indeed thanks kjackal!
<kjackal> micvog: I would also like you to meet magicaltrout. magicaltrout is an awesome community member and he basically gives us a reason to work on Juju
<kjackal> micvog: magicaltrout is also London based (at least for now)
<micvog> hey all !
<kjackal> ok, enough with the introductions
<kjackal> the first thing to do is to set up monitoring
<kjackal> so micvog, to get the status of services/applications/machines you just do a "juju status"
<micvog> so, should we bootstrap first ?
<kjackal> ahh yes you are absolutely right
<kjackal> "juju bootstrap" it is then!
<admcleod> kjackal: how bout you set up a shared tmux/screen session so we can watch? :}
<micvog> bootstraping..
<kjackal> admcleod: that sounds good, never done this but should be easy, right?
<admcleod> kjackal: sure! computers!
<micvog> so, bootstrap has installed some packages on remote host (which host?) and started the agent
<micvog> I assume everything is local since environments.yaml is not configured
<kjackal> yes, everything is local
<kjackal> admcleod: what is your public ssh key?
<admcleod> kjackal is a fan of lxd, so its probably an lxd instance known as 'machine-0' which is the juju controller
<admcleod> kjackal: https://launchpad.net/~admcleod/+sshkeys (top one)
<micvog> so the .juju/environments/local.jenv was created with bootstrap and contains all info about the current state ?
<kjackal> micvog: yes you are correct
<kjackal> but the file with the credentials you should be concerned with is .juju/environments.yaml
<micvog> yup
<kjackal> micvog: .juju/environments.yaml has placeholders for configuring all supported providers
<kjackal> micvog: can you do a "tmux attach-session -t shared"
<micvog>  ~$ Michael? Can you read this?
<micvog> I think we're good
<micvog> brb
<kjackal> micvog: when you do a "juju deploy apache-spark" juju will first provision a vm
<kjackal> in this case juju will provision an lxc container
<micvog> ah ok
<kjackal> first time takes a long time because an ubuntu image has to be downloaded
<kjackal> after that lxc container provisioning is very fast
<kjackal> micvog: after getting the container juju will "bootstrap" the vm/unit and will deploy apache spark on it
<kjackal> micvog: we are now installing the container's software: "apt-get update & upgrade etc"
<micvog> ok
<kjackal> "Fetching resources" means we are downloading apache-spark
<admcleod> kjackal: micvog sorry guys afk for 10 min
<kjackal> ah we can see what is happening by looking at the logs
<kjackal> console 3 shows the logs "juju debug-log"
<micvog> aha
<kjackal> btw micvog we are deploying https://jujucharms.com/apache-spark/10
<micvog> is this the remote machine logs  or the juju "controller" logs
<kjackal> micvog: spark deployment finished
<kjackal> we have spark on our machine 1
<kjackal> it is in cluster mode but has only one worker
<kjackal> should we login to that cluster or should we add some extra nodes?
<kjackal> micvog: ^
<admcleod> kjackal: do both :}
<kjackal> ack lets add some units
<micvog> have you started it in HA mode ?
<micvog> using quickstart ?
<kjackal> for HA we need zookeeper
<kjackal> so here is what we will do
<kjackal> lets add 3 more units: juju add-unit -n 3 apache-spark
<micvog> let me try
<micvog> ah
<micvog> done
<kjackal> in the meantime lets also deploy apache-zookeeper: "juju deploy apache-zookeeper"
<kjackal> micvog: go ahead and try the above
<micvog> how do you know the zookeeper and spark are compatible? or how do you enforce versions?
<kjackal> micvog: spark and zookeeper share an interface
<kjackal> if we are to create versions that do not relate we will have to change the interface
<kjackal> you can see which interface is used in the web pages and readme files of each charm
<kjackal> lets look at the states
<micvog> fetching resources
<micvog> thats to download spark on the 3 new nodes
<Odd_Bloke> !op
<Odd_Bloke> What's that command?
<kjackal> micvog: isn't it nice? 4 node spark cluster on your machine EXACTLY how it would look on AWS
<micvog> awesome
<magicaltrout> lol
<kjackal> micvog: and basically you didn't do much: "juju deploy, juju add-unit"
<magicaltrout> its like slow polite trolling
<admcleod> magicaltrout: i think i see where this is going
<magicaltrout> i like a bit of morning poetry.....
<admcleod> magicaltrout: at some point we're going to encounter the eclesiastical rift concerning whether god is doing white chocolate and blue cheese.
<magicaltrout> hehe
<magicaltrout> Guest_94447: I don't believe in God, last time I checked a good muslim friend of mine told me I was going to burn in hell
<magicaltrout> so we had a good chat about that
<magicaltrout> but I'm okay with it
<magicaltrout> hmm
<magicaltrout> bloody trees
<kjackal> micvog: you see now we have a spark cluster with 4 nodes and 1 master
<micvog> all ready
<kjackal> micvog: we also have a zookeeper
<kjackal> lets relate zookeeper and spark
<kjackal> juju add-relation apache-spark apache-zookeeper
<kjackal> try this ^ micvog
<kjackal> and look at the state
<micvog> where can I see they're now related apart from debug logs?
<micvog> damn he's posting more frequently now
<magicaltrout> hehe
<magicaltrout> it is making me chuckle
<kjackal> who is the moderator here?
 * magicaltrout asks the permission of no one......
<micvog> ah by looking at juju status you can see the relation ?
<kjackal> yes, the full status
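[For readers following the walkthrough: besides eyeballing `juju status`, relations can also be read from its JSON output. A minimal sketch, assuming a `juju status --format=json` document shaped roughly like juju 1.x output; the sample below, including the relation names, is hand-written for illustration and not verbatim juju output.]

```python
import json

# Hand-written, simplified stand-in for `juju status --format=json`;
# the real schema carries many more fields and varies across juju versions.
SAMPLE_STATUS = """
{
  "services": {
    "apache-spark": {
      "relations": {"zookeeper": ["apache-zookeeper"]}
    },
    "apache-zookeeper": {
      "relations": {"zkclient": ["apache-spark"]}
    }
  }
}
"""

def list_relations(status_json):
    """Yield (service, relation-name, partners) tuples from a status doc."""
    status = json.loads(status_json)
    for name, svc in status.get("services", {}).items():
        for relation, partners in svc.get("relations", {}).items():
            yield (name, relation, partners)

for svc, rel, partners in list_relations(SAMPLE_STATUS):
    print(svc, rel, partners)
```

[Parsing the JSON form like this is what scripts and tools built around juju typically do, rather than scraping the human-readable table.]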
<kjackal> but you see micvog that spark is now in HA mode
<kjackal> cool eh?
<micvog> yup
<kjackal> automagicaly :)
<kjackal> lets ssh to one of these machines
<kjackal> console 4
<kjackal> micvog: you do not need to remember the ips or the strange strings each cloud provider will give you
<micvog> yes that's really handy
<kjackal> micvog: you just do "juju ssh <unit>"
<magicaltrout> micvog fyi: /ignore Guest_94447
<magicaltrout> should sort you out
<kjackal> micvog: admcleod: I am being called for lunch
<kjackal> lets see bundles and micvog work when I get back ok?
<micvog> catch up later
<micvog> I'll try to run smth
<admcleod> if someone is in the blufin building, or logged in over the vpn: /oper ${your_ud-ldap_uid} ${irc password without the leading NICK}
<magicaltrout> aww admcleod
<magicaltrout> you're so mean
<Spads> sounds healthy
 * Spads sustains his heart
<magicaltrout> lol
<magicaltrout> bloody black dots
<admcleod> lunch is not doing, All.. oh wait yes, im going to lunch
<magicaltrout> allah made your lunch?
<magicaltrout> I need to have a word
<magicaltrout> mines still in the cupboard
<admcleod> i dunno, i suspect i may have some pork based product, since spain. anyway, bbl
<magicaltrout> allah will cast you down with increased risk of bowel cancer then admcleod
<magicaltrout> so says......
<magicaltrout> the doctors
<kjackal>  /ignore Guest_94447
<magicaltrout> aww kjackal
<magicaltrout> a) remove the space
<magicaltrout> b) where's your sense of adventure, you should join up
<magicaltrout> hmm
<kjackal> magicaltrout: my irc skills are non existent!
<magicaltrout> so it seems kjackal
<magicaltrout> that said its better than some of the ibm'ers who like to address kwmonroe as kevin... or <kwmonre> with some weird copy paste thing
<magicaltrout> tab completion IBMers! tab completion!
<kjackal> magicaltrout: lol
<magicaltrout> Guest_94447: can you impart your wisdom on those in ##saiku they all need your help
<magicaltrout> or ##pentaho
<magicaltrout> they also need some guidance
<kjackal> micvog: admcleod: I am back
<micvog> wb
<micvog> I uploaded the jar to one of the spark nodes, seems that the input files are missing - working on it
<magicaltrout> awww
<magicaltrout> balls
<magicaltrout> where's he gone
<micvog> "Found 57 outliers" - looks like we're onto something
<micvog> :)
<kjackal> micvog: nice!
<kjackal> micvog: give me some time to go over the blog and the program
<micvog> sure
<micvog> fyi channel, we're trying to deploy this using juju - https://micvog.com/2016/05/21/using-spark-for-anomaly-fraud-detection/
<kjackal> micvog: ok I think I got the big picture
<kjackal> I am thinking...
<kjackal> we have a good start
<kjackal> and a relatively good idea what we want to get out of this...
<kjackal> So... we will need to complete the stack of charms
<kjackal> We will need a way to feed realtime data to a kafka queue as shown in the chart
<kjackal> then have spark consume the queue and persist everything (eg to HDFS)
<kjackal> and then visualise the results. it would be great to do that through zeppelin
<micvog> sorry dc
<micvog> last message was that the list of charms need to be completed
<magicaltrout>  12:12  kjackal| We will need a way to feed realtime data to a kafka queue as shown in the chart
<magicaltrout>  12:13  kjackal| then have spark consume the queue and persist everything (eg to HDFS)
<magicaltrout>  12:14  kjackal| and then visualise the results. it would be great to do that through zeppelin
<kjackal> Any ideas magicaltrout?
<magicaltrout> I wasn't paying a great deal of attention whilst mulling over life with the quran
<magicaltrout> what part are you mulling over kjackal ?
<kjackal_> magicaltrout: do you recommend any tool for awesome visualisations on (streaming) data that can interface with spark?
<magicaltrout> zeppelin seems like a sound choice on the Juju platform
<magicaltrout> one consideration I would ponder is how do you present it to the user.
<magicaltrout> If you're streaming data, it would be good to hook up websockets or long polling and have it update the zeppelin charts on the fly without a user refreshing the page etc
<magicaltrout> https://gist.github.com/granturing/a09aed4a302a7367be92 this chap did some streaming spark map example
<magicaltrout> or just write some websocketed thin client with your own choice of vis toolkit
<magicaltrout> and use streaming spark or write it back to kafka and have kafka write it to the sockets
<kjackal_> sounds cool! Thanks magicaltrout
<micvog> this looks interesting reg visualization - https://www.youtube.com/watch?v=BhCXL8AHiB0&index=9&list=PL-x35fyliRwiy50Ud2ltPx8_yA4H34ppJ
<micvog> if you can send data through websockets then D3.js is a very good option (and realtime)
<magicaltrout> yeah its very lightweight as well
<magicaltrout> so your browser shouldn't drown
<magicaltrout> which used to be an issue with streaming data.. when to flush it
<rcj> coming late to the 'multiseries' charm game.  Had a question on updating an existing charm...  Was told to push to somewhere new and launchpad.net/ubuntu-repository-cache was suggested.  Is it really necessary to create an entire LP project for the charm?  What are the requirements?
<rcj>  ^ re: https://code.launchpad.net/~rcj/charms/trusty/ubuntu-repository-cache/multiseries/+merge/299472
<rick_h_> rcj: so you can push using the charm push command from anywhere. I think the key thing is that you're pushing a charm that works on multiple series to a series-i-fied url which seems a bit untrue
<rcj> rick_h_, yeah I wanted a diff that showed the 4 line change against the current code so no one is looking at this as entirely new.
<rick_h_> rcj: understand. The new review queue will show the diff of actual files which will help
<rick_h_> rcj: not sure where that's at atm, but will check on it
<rick_h_> rcj: but appreciate where you're coming from there
<rick_h_> rcj: maybe recommend making the change to the new path first, getting that "just migrate existing code" and then do the patch for the 4 lines on top of that?
<rcj> rick_h_, otherwise, it has already been pushed to https://jujucharms.com/u/rcj/ubuntu-repository-cache and I need to know how to get it queued for review.
<rcj> https://jujucharms.com/u/rcj/ubuntu-repository-cache/122
<rcj> rick_h_, either way, how do I get this on the review queue (or view the queue)?  http://manage.jujucharms.com/tools/review-queue linked from https://jujucharms.com/community/charmers just gives me 'Internal Server Error'
<rick_h_> rcj: ah, bad link: http://review.juju.solutions/ is the current queue.
<rick_h_> rcj: from the docs "If something needs review, subscribe ~charmers."
<rick_h_> rcj: looking at https://jujucharms.com/docs/master/reference-reviewers
<rcj> thx.  Also for bad links from http://review.juju.solutions/ it suggests reading "Charm Review Process" @ https://juju.ubuntu.com/docs/charm-review-process.html -> DNS_PROBE_FINISHED_NXDOMAIN
<rick_h_> rcj: yes, filing a bug on that bad link now
<rick_h_> heh well filing on jujucharms.com/community, guess there's another one from the actual review site
<rick_h_> tvansteenburgh: what's the latest on new queue timing?
<tvansteenburgh> rick_h_: marcoceppi ^
<jhobbs> admcleod: yeah ok probably wrong person haha thanks
<admcleod> jhobbs: no worries :)
<lutostag> stub: hoping you could push the built/latest published charm to the branch "built" like you had been previously for git.launchpad.net/postgresql-charm if possible
<xilet> using 2.0-beta7-xenial-amd64, any idea where to start looking for why after a full system reboot 'juju status' just hangs?
<cory_fu> marcoceppi, petevg, and I are doing a live session discussing testing in charms and layers: https://plus.google.com/u/1/events/col2a1aertqj329rjb8tgg314u0
<cory_fu> We're currently discussing this PR: https://github.com/juju-solutions/layer-apache-bigtop-base/pull/28
<cory_fu> And this PR: https://github.com/juju-solutions/jujubigdata/pull/59
<cory_fu> xilet: We're currently up to beta11, so one option might be to upgrade to that.  What provider are you using, though?  lxd, aws, etc?
<icey> jrwren: unfortunately, ceph_ips = [x['public-address'] for x in self.d.sentry['ceph-mon']] doesn't work: http://pastebin.ubuntu.com/18792425/
<xilet> lxd
<xilet> It had deployed an openstack fine, but after the reboot it has been several hours and still nothing,  strace shows something is timing out but isn't giving me any really good indicators. I see quite a few of the services have launched according to ps.
<xilet> let me try upgrading juju
<magicaltrout> ... said no one on a Friday
<cory_fu> xilet: I think that after a reboot the lxc containers might not be set to start back up automatically.  I'm not familiar enough with lxd to know the command to check for the running state of the containers
<xilet> cory_fu: it has in the past, I ran into a bug with adding a second nova-compute unit, so I was bouncing the whole system to see what might have changed, I had rebooted several times prior, curious if there is some kind of central juju log that I could check
<cory_fu> xilet: There is a log on the bootstrap container, but I'm not sure if it would have the info you're looking for or not
<xilet> so other question, I had (I think, I am new to this) upgraded to juju-2 using the xenial-proposed branch,  what is the official method these days for updating it?
<stub> lutostag: It is already pushed
<lutostag> stub: perfect, thanks
<mbruzek> marcoceppi: Do you know where the tab completion code is for juju?  Re: https://bugs.launchpad.net/juju-core/+bug/1600257
<mup> Bug #1600257: The tab completion on juju yeilds KeyError: 'services' <juju-core:New> <https://launchpad.net/bugs/1600257>
<marcoceppi> mup: it's probably in the packaging branch
<mup> marcoceppi: I apologize, but I'm pretty strict about only responding to known commands.
<mbruzek> marcoceppi: I searched for python in juju/juju and could not find it
<marcoceppi> mbruzek: it's probably in the packaging branch
<marcoceppi> mup: when did you get so saucy
<mup> marcoceppi: In-com-pre-hen-si-ble-ness.
<marcoceppi> omg. who taught you to talk back.
<mbruzek> mup you steal all my best lines
<mbruzek> *the best lines
<marcoceppi> wow, now mup is all quiet
 * marcoceppi feels accomplished.
<mbruzek> Yeah mup must know who is in charge
<lazyPower> marcoceppi - its been doing that for quite some time now
<lazyPower> mup what is the air speed velocity of an unladen swallow?
<lazyPower> welp, ya broke it
<babbageclunk> mbruzek: it's in juju/juju/etc/bash_completion.d (in case you haven't found it)
<mbruzek> Yes I just found it.
<babbageclunk> mup: anything exciting planned for the weekend
<mup> babbageclunk: Roses are red, violets are blue, and I don't understand what you just said.
<babbageclunk> that's fun
<mbruzek> The error looked like a python traceback, I didn't expect the file to be bash, with embedded python.
<mbruzek> thanks babbageclunk
<babbageclunk> mbruzek: yeah, it's a bit of an obscure location
<babbageclunk> mbruzek: Are you seeing that error, or do you just need it for something else?
<mbruzek> babbageclunk: I am seeing the error, and looking to try to fix it
<babbageclunk> mbruzek: It's already been fixed, I think - you can reinstall the completions by running `make install-etc`.
<babbageclunk> mbruzek: (I went through this same series of steps a while back.)
<mbruzek> babbageclunk: OK I see that the bash completion file does not contain "services" in juju (master) but I see "services" in the completion file on my system in /etc/bash_completion.d/juju2
<babbageclunk> mbruzek: Right - that is what I found when I had this error.
<mbruzek> babbageclunk: OK thanks, just trying to help
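[A note on the KeyError: 'services' above: juju 2.0 renamed the top-level key in `juju status --format=json` from `services` to `applications`, so a completion script written against juju 1 output fails on juju 2. A small sketch of a lookup that tolerates both layouts; the two sample documents are hand-written stand-ins, not real juju output.]

```python
import json

def application_names(status_json):
    """Return application/service names from juju 1 or juju 2 status JSON.

    juju 2.0 renamed the top-level 'services' key to 'applications';
    falling back between the two avoids the KeyError seen in the bug."""
    status = json.loads(status_json)
    apps = status.get("applications", status.get("services", {}))
    return sorted(apps)

juju1 = '{"services": {"mysql": {}, "wordpress": {}}}'
juju2 = '{"applications": {"mysql": {}, "wordpress": {}}}'
print(application_names(juju1))  # -> ['mysql', 'wordpress']
print(application_names(juju2))  # -> ['mysql', 'wordpress']
```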
<petevg> marcoceppi: not production ready, but here's an invocation that fixes the import in the docker-layer unit tests we discussed in the hangout: http://paste.ubuntu.com/18798385/
<petevg> (Basically, you make sure that the "charms" module exists, and then you patch a mock object into it.)
<neiljerram> lazyPower, mbruzek - Hi there; just wondering what the plan/schedule is for publishing a new etcd charm with all recent fixes.  Currently I'm using cs:~lazypower/etcd-19 - which is fine - but I guess there should be a new publication to cs:~containers ?
<lazyPower> neiljerram - cs:~containers/etcd-4 exists
<lazyPower> it's currently in the development channel, i want to have an ironclad migration from etcd-2 to etcd-4+ before it goes stable
<neiljerram> lazyPower, Oh, that's easy then! :-)
<lazyPower> so it's fine to do new deployments, but not recommended for upgrades yet
<neiljerram> lazyPower, perfect.  Thanks!
<petevg> marcoceppi: or you can do this, which is much simpler :-)  http://paste.ubuntu.com/18798975/
<petevg> (I always forget about the create=True flag)
<lazyPower> #TIL with mock.patch('charms.layer', create=True):
<lazyPower> petevg - i'm going to have to try and osmose some of that testing knowledge you have been dropping the last couple weeks
<petevg> lazyPower: feel free to ping me with questions, or pull me into a hangout. I'm always happy to spread the testing love :-)
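[The `create=True` trick petevg pasted can be sketched in a few self-contained lines. This is an illustrative sketch, not the actual docker-layer test: the stub `charms` module and the `options` attribute are assumptions standing in for whatever the real test patches.]

```python
import sys
import types
from unittest import mock

# Assumption: in a real charm test environment the `charms` package is
# installed; here we stub it so the sketch runs standalone.
sys.modules.setdefault("charms", types.ModuleType("charms"))

# create=True tells mock.patch to create the `layer` attribute on the
# `charms` module even though the module doesn't define it, instead of
# raising AttributeError.
with mock.patch("charms.layer", create=True) as fake_layer:
    fake_layer.options.return_value = {"port": 80}
    # Code under test that resolves `charms.layer` at call time now
    # sees the mock instead of failing to import.
    from charms import layer
    assert layer.options() == {"port": 80}
```

On exit from the `with` block, mock removes the attribute it created, so the patch leaves no trace on the stub module.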
<lazyPower> do we have an idea on what the upper boundary limits are for filesizes  and resources?
<rick_h_> lazyPower: I think it's a couple hundred MB atm. I was just talking with the team on upping that while we work on some quota-like safety measures
<rick_h_> lazyPower: but I think it's a timing thing vs a size thing, so it's how many bytes you can push in the timeout window
<lazyPower> Thats what i was looking for, ta rick_h_
 * lazyPower decides it's a bad idea to publish over a mifi with 1 bar of service, and a 2gb payload.
<jrwren> lazyPower: yup, we can't currently process 2GB :[ sorry.
<jrwren> lazyPower: Incidently, what is the 2GB payload?
<lazyPower> jrwren - i was actually being silly, i believe the payload is quite large though. This was in response to a versioning issue/question from Nexenta.
<lazyPower> jrwren - its packing in chef and some other build time deps like ruby libs
<jrwren> lazyPower: ah. cool.
<lazyPower> what they do today is they host a bintray-style repo and fetch over the wire. They haven't versioned the bins, just kept a running-tip publication.
<lazyPower> sooo, either they need to version and split things into multiple smaller payloads, or they should be actively participating in planning sessions for the feature, as they would be a big consumer of it.
<lazyPower> at least thats my take on it
<jrwren> lazyPower: to expand further, my current working theory is that the timeout happens because of server side processing time. The upload completes. The charmstore processes the charm and this can take some time. Currently we have a 50s timeout (I'm about to up it), so if the server takes longer than 50s to process a large charm it doesn't send a response in that 50s and it times out.
<lazyPower> ah yeah, i can see that being problematic
<jrwren> lazyPower: We've also, AFAIK, not observed the limit on resources. It's likely we don't process resources as heavily and so they would be done more quickly.
<jrwren> lazyPower: As a general rule, I'd say many resources for each item is better than bundling things into a single resource.
<lazyPower> jrwren - i've only used 2 myself, and they were small golang bin attachments.  I as well would +1 multiple smaller deliveries rather than one large payload.
<marcoceppi> petevg: nice, thanks!
<lazyPower> it seems like the bigger they get, the more trouble they cause
<jrwren> lazyPower: i wonder if we can document this guidance?
<lazyPower> sure can
<lazyPower> https://jujucharms.com/docs/devel/developer-resources
<jrwren> lazyPower: thanks.
<lazyPower> jrwren - ping me if you submit and i'll be happy to review
<cory_fu> geetha: You were asking about http://pastebin.ubuntu.com/18774917/ and saying that you were not seeing that handler being called even after you changed one of the config values.  That snippet looks fine to me.  Is there any chance you can provide me with some of the juju logs from that unit?
<cory_fu> Or perhaps someone else here can spot something in that code that I'm missing.
<lazyPower> cory_fu is @when_not_all an alias for when_not?
<lazyPower> or is when_not more like when_not_any
<geetha> Same handler function being called in juju 1.25 when I change config value
<geetha> using 'juju set' command
<cory_fu> lazyPower: It is not the same as when_none (there is no when_not_any).  when_not == when_none, when_not_all is its own thing (i.e., trigger if one of the given states is not set, vs when_none only triggers if not a single one of the states is set)
<cory_fu> geetha: Hrm.  So it's specific to Juju 2.0?  That's very strange
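[cory_fu's when_none vs when_not_all distinction can be written out as plain predicates. This is a sketch that models charms.reactive's state registry as a simple set of active state names; the real decorators work against the reactive framework's own storage.]

```python
# `active` is a set of state names standing in for the reactive registry.

def when_none(active, *states):
    # Trigger only if NOT A SINGLE one of the given states is set.
    return not any(s in active for s in states)

def when_not_all(active, *states):
    # Trigger if AT LEAST ONE of the given states is not set.
    return not all(s in active for s in states)

active = {"config.changed"}
# With one of two states set: when_none does not fire, when_not_all does.
assert when_none(active, "config.changed", "db.ready") is False
assert when_not_all(active, "config.changed", "db.ready") is True
# With no states set both fire; with all states set neither does.
assert when_none(set(), "a", "b") and when_not_all(set(), "a", "b")
assert not when_none({"a", "b"}, "a", "b")
assert not when_not_all({"a", "b"}, "a", "b")
```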
<rcj> stub, Why do I need to create a new home for ubuntu-repository-cache prior to promulgation?  I don't want to create a new top-level project in LP and my understanding would be that once promulgated it will end up in charms/ubuntu-repository-cache, right?  If it ends up there I don't want a second place to confuse people.
<rcj>  ^ re: https://bugs.launchpad.net/charms/+bug/1600243
<mup> Bug #1600243: review ubuntu-repository-cache charm now with multiseries support <Juju Charms Collection:New> <https://launchpad.net/bugs/1600243>
<stub> rcj: When it is promulgated it ends up in cs:ubuntu-repository-cache (jujucharms.com/ubuntu-repository-cache).
<stub> rcj: Where do you want the main branch to live? Currently it is in bzr at lp:~charmers/charms/trusty/ubuntu-repository-cache/trunk, with the alias lp:charms/trusty/ubuntu-repository-cache.
<stub> rcj: I can merge it in there, but it seems silly having a multi-series charm in a trusty specific namespace. And it still needs to be promulgated by the ecosystem team.
<stub> rcj: It also means you still need ~charmers to land stuff in the future, since it is owned by that team.
<geetha> cory_fu: you can see the log when I change the config option http://pastebin.ubuntu.com/18805681/
<cory_fu> geetha: Is that all of the log?  I don't see the bit where it actually calls reactive_main to see how it's evaluating the tests.  Also, it might be helpful to add the line `charms.reactive -y get_states` to the top of your handler file to see what the states are when it gets called
<rcj> stub, I would like charmers to assist with landing changes in the future.  I would hope that it could end up @ lp:charms/ubuntu-repository-cache
<rcj> And my understanding is that this would replace cs:trusty/ubuntu-repository-cache cleanly.  My goal here is a blessed charm with a single entry in the charm store that makes it 100% clear that it is supported with no confusion as to which charm to pick.
<stub> rcj: Ok. I think you need ecosystem for that branch URL or alias - I'm not sure how to do it. And they can help with the 'charm push', 'charm publish' and promulgation to get that branch promulgated into the charm store.
<geetha> cory_fu: I could not paste the full log, it has more lines. I have added `charms.reactive -y get_states` and again ran the command to change the config option. It's failing with a non-zero exit code. http://pastebin.ubuntu.com/18808692/
<rcj> stub, who to ask?
<stub> marcoceppi: Are you or your team available to get rcj promulgated? lp:~rcj/charms/trusty/ubuntu-repository-cache/multiseries / https://bugs.launchpad.net/charms/+bug/1600243
<mup> Bug #1600243: review ubuntu-repository-cache charm now with multiseries support <Juju Charms Collection:New> <https://launchpad.net/bugs/1600243>
<cory_fu> geetha: From that `eval '{config.changed:'` line, it looks like you might have an errant { in your code.  I didn't see it in the pastebin from earlier, so I'd need to see more of the reactive handler file to pinpoint it
<geetha> http://paste.ubuntu.com/18811635/
<marcoceppi> stub rcj I think we can help
<marcoceppi> rcj: will you be the sole person maintaining this charm?
<cory_fu> geetha: Very strange.  That all looks fine to me
<geetha> cory_fu: I have also tested the same code in juju 1.25, without resources and terms. It worked fine
<cory_fu> geetha: I can't imagine what 2.0 would be doing differently that would cause that.
<cory_fu> geetha: Which beta version of 2.0 are you using?  beta11?
<geetha> cory_fu: No, it's beta7
<lazyPower> magicaltrout - https://kognitif.bandcamp.com/track/walking-on-sunshine  may the jazz flute compel you.
<rcj> marcoceppi, it will be the cpc team <cpc@canonical.com>
<rcj> which includes me
<cory_fu> geetha: Unfortunately, my best recommendation at this point is to try upgrading to a newer beta.  I can't find anything that points to what's causing it
<cory_fu> geetha: Is this charm available in Launchpad, GitHub, or jujucharms.com?  I can try to replicate on my side and do some more digging
<magicaltrout> thanks lazyPower I'll check it out!
<geetha> Thanks cory_fu, I will try to upgrade to a newer beta version. This charm is available on jujucharms.com: https://jujucharms.com/u/ibmcharmers/ibm-was-base/trusty/15. But I'm still working on this charm to fix other issues too.
<cory_fu> ok
<magicaltrout> thats some pretty far out jazz flute lazyPower
<lazyPower> i'm sayin
<magicaltrout> reminds me of woodstock type stuff in some ways
<lazyPower> this entire album was made from samples
<lazyPower> i am impress
<magicaltrout> also very jazz fusion weather report esque https://www.youtube.com/watch?v=pqashW66D7o
<xilet> I just upgraded to 2.0-beta11-xenial-amd64, now when I bootstrap it hangs at Running apt-get update,  the instance is up and the apt-get process finishes inside, but nothing returns back
<magicaltrout> told you not to upgrade on a friday ;) its against the rules
<xilet> Hah indeed you did, well the entire system was hosed due to an lxc issue, I couldn't even get lxc to return commands, so I figured I would restart from scratch.
<lazyPower> groovy, dig this too magicaltrout
<magicaltrout> yeah we used to play a bunch of weather report and similar tracks in my old jazz band lazyPower
<magicaltrout> always preferred them over the traditional stuff
#juju 2016-07-09
<setuid> Can someone give me a hand debugging why keystone -always- fails on 14.04 using openstack-install?:
<setuid> I'm getting:  idle - hook failed: "config-changed"
<setuid> I see this in the juju logs on that keystone host:
<setuid> 2016-07-09 21:25:37 INFO worker.uniter.jujuc server.go:172 running hook tool "juju-log" ["-l" "ERROR" "FATAL ERROR: Could not determine OpenStack codename for version 8.1"]
<setuid> 2016-07-09 21:25:37 ERROR juju-log FATAL ERROR: Could not determine OpenStack codename for version 8.1
<setuid> I wiped the entire host, reinstalled, over and over, it dies here
<setuid> This didn't work:
<setuid> $ juju resolved --retry keystone/0
<setuid> ERROR unit "keystone/0" is not in an error state
<setuid> juju status --format tabular shows:
<setuid> ID         WORKLOAD-STATE AGENT-STATE VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE
<setuid> keystone/0 unknown        allocating          1                            Waiting for agent initialization to finish
<setuid> $ juju debug-hooks keystone/0 config-changed
<setuid> ERROR error fetching address for unit "keystone/0": public no address
<setuid> At this point, I'm out of options
<setuid> juju debug-log shows:
<setuid> machine-0: 2016-07-09 21:44:14 ERROR juju.worker runner.go:223 exited "firewaller": machine 1 not provisioned
 * setuid tries with the /experimental ppa
<jrwren> setuid: juju ssh to keystone/0 and check /var/log/juju/unit-keystone*.log
<jrwren> setuid: err, nevermind. I see you already did.
<setuid> Moving to experimental ppa seems to have fixed the keystone issue, though I don't know if it's going to introduce other instability that may be fixed in stable
<setuid> i'll work with this for now
<setuid> I stand corrected, that failed too
<setuid> Exactly the same error message:
<setuid> 2016-07-09 22:47:16 DEBUG juju.worker.uniter modes.go:31 [AGENT-STATUS] error: hook failed: "config-changed"
<setuid> I'm floored, 14.04 has been out for how long? And openstack doesn't install on it -AT ALL-, for what, a year or more now?
<setuid> Sorry, 2 years
<jrwren> setuid: it does, but maybe not every openstack version.
<jrwren> setuid: 14.04 was released with icehouse release AFAIK. So icehouse should run on it well.
<setuid> This is liberty, the default in stable
<setuid> Ok, let me try that
<jrwren> well, https://wiki.ubuntu.com/OpenStack/CloudArchive  ubuntu supports openstack LTS releases for the same years as the ubuntu distro
<jrwren> liberty is unsupported in about 10 months.
<jrwren> but liberty should be supported and work. I'm sorry. I don't know.
<setuid> crazy, because it keeps pulling down revision 198 of the keystone charm, but 255 is current in stable
<setuid> Right, so basically... NONE of this works
<setuid> I've tried every version, every permutation, every tool.. openstack-install, conjure-up, juju, etc. on 14.04 and 16.04, with every version of openstack
<setuid> been fighting this for 4 weeks now, hundreds of install attempts
<setuid> icehouse blows up right away
<setuid> Unable to create container:  (lxc-create: utils.c: get_template_path: 1340 No such file or directory - bad template: ubuntu-cloud
<setuid> lxc-create: lxccontainer.c: do_lxcapi_create: 1435 bad template: ubuntu-cloud
<setuid> lxc-create: lxc_create.c: main: 318 Error creating container openstack-single-setuid)
#juju 2016-07-10
<stokachu> setuid: do you open bugs on the relevant projects with logging information so we can try and help you?
<setuid> stokachu, Yes, once I define what and where the bugs actually are
<setuid> There's way, Way, WAY too many moving parts and pieces, and they're all changing at a rate that does not support stability at the moment
<stokachu> the charms are pretty stable
<stokachu> you say you've done hundreds of installs, so surely there are error messages you can post bugs on so we can try to figure out what's going on
<setuid> I did (see above), I should be pulling v255 of keystone, but it keeps pulling 198, and I can't figure out why
<stokachu> ok then that's a bug in the installer
<stokachu> it should be pulling the latest
<setuid> Where is it hard-coded?
<setuid> tried to debug it with thedac earlier this week
<setuid> (@internal)
<stokachu> that's the thing, we don't rely on the hard-coded rev any longer
<setuid> So where would it be picking it up from, if it's supposed to grab it from upstream and get 255?
<stokachu> so if you can post your config.yaml
<stokachu> this on liberty?
<setuid> Yes, I'm trying icehouse now.. one sec, I'll dig up the config.
<stokachu> ok
<stokachu> icehouse is pretty old though
<setuid> groan
<setuid> Ok, canceling
<setuid> There's 5 versions, 2 major OS revisions and a dozen different ways to install it. And none of it works.
<stokachu> just doesn't work for you
<setuid> 2016-07-10 01:14:27 INFO worker.uniter.jujuc server.go:172 running hook tool "juju-log" ["-l" "ERROR" "FATAL ERROR: Could not determine OpenStack codename for version 8.1"]
<setuid> 2016-07-10 01:14:27 ERROR juju-log FATAL ERROR: Could not determine OpenStack codename for version 8.1
<setuid> 2016-07-10 01:14:27 ERROR juju.worker.uniter.operation runhook.go:107 hook "config-changed" failed: exit status 1
<setuid> 2016-07-10 01:14:27 INFO juju.worker.uniter modes.go:567 ModeHookError starting
<setuid> 2016-07-10 01:14:27 DEBUG juju.worker.uniter modes.go:31 [AGENT-STATUS] error: hook failed: "config-changed"
<setuid> Latest attempt
<setuid> stokachu, http://paste.ubuntu.com/18935681/
<setuid> config.yaml ^
<setuid> This seems relevant, but doesn't include a fix: https://bugs.launchpad.net/charms/+source/keystone/+bug/1572358
<mup> Bug #1572358: keystone FATAL ERROR: Could not determine OpenStack codename for version 8.1.0 <keystone (Juju Charms Collection):Invalid> <openstack-base (Juju
<mup> Charms Collection):Fix Released by 1chb1n> <openstack-telemetry (Juju Charms Collection):Fix Released by 1chb1n> <https://launchpad.net/bugs/1572358>
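[The "Could not determine OpenStack codename" error in that bug is the classic symptom of a package-version-to-codename table that lags behind a point release. A hypothetical sketch of the lookup follows; the real table lives in charm-helpers, and the version numbers and fallback rule here are illustrative, not the actual fix.]

```python
# Hypothetical keystone-package-version -> OpenStack codename table.
# A new point release (8.1) missing from the table produces exactly
# the FATAL ERROR seen in the logs above.
CODENAMES = {"8.0": "liberty", "9.0": "mitaka"}

def codename_for(version):
    if version in CODENAMES:
        return CODENAMES[version]
    # Fall back to the major-version family before giving up, so a
    # point release like 8.1 still resolves.
    family = version.split(".")[0] + ".0"
    if family in CODENAMES:
        return CODENAMES[family]
    raise ValueError(
        "FATAL ERROR: Could not determine OpenStack codename "
        "for version %s" % version)

assert codename_for("8.0") == "liberty"
assert codename_for("8.1") == "liberty"  # point release resolves via its family
```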
<setuid> stokachu, any thoughts?
<setuid> quiet out here :)
<marcoceppi> setuid: always quiet before the storm ;)
 * magicaltrout checks out the window....
 * lazyPower makes whooshing sounds
 * lazyPower clangs some pots and pans
<setuid> marcoceppi, Indeed!
<setuid> stokachu, ping me when you've got time to beat up the broken 14.04/16.04 openstack-install and conjure-up frameworks
<setuid> Still can't figure out how to get a working keystone, so the entire stack installation and configure is a dead-end
#juju 2017-07-03
<cnf> hmz
<jacekn> hello. How do I submit charm for promulgation review? There does not seem to be any info here: https://jujucharms.com/docs/stable/authors-charm-store#submitting-a-new-charm
<magicaltrout> the review queue is going away jacekn
<magicaltrout> i dont' think you'll get it reviewed any more
<jacekn> magicaltrout: so what's the process for charm promulgation?
<magicaltrout> none, the store is changing to most popular charm wins type model
<jacekn> magicaltrout: oh. Is there any documentation about that? I have a task to get a few charms promulgated, I'd like to close it with a link to the new process
<magicaltrout> er
<magicaltrout> dunno
 * magicaltrout dials up the canonicalers
<magicaltrout> rick_h: kwmonroe kjackal or someone might know
<kjackal> magicaltrout: reading
<kwmonroe> jacekn: magicaltrout:  we were just talking about the state of the revq last week.  tvansteenburgh was gonna put some words together; perhaps he can summarize here?
<kwmonroe> iirc, the gist is that namespaced charms aren't bad, but we sort of got into a situation where everyone wanted a top-level charm.  the rev process didn't scale well and it generally just made people grumpy.
<tvansteenburgh> jacekn: if you need a charm promulgated i can do it for you
<magicaltrout> kwmonroe: thats mostly cause you all tried your best to hide non top level charms! :P
<kwmonroe> lol magicaltrout, we were young back then and didn't know any better
<jacekn> tvansteenburgh: thanks! What I'm after is promulgating all those: https://jujucharms.com/u/prometheus-charmers/
<tvansteenburgh> jacekn: okay
<Budgie^Smore> o/ juju world
<bdx> does anyone know where the bakery work is being done?
<jrwren> yes.
<jrwren> bdx: https://github.com/go-macaroon-bakery/py-macaroon-bakery
<bdx> jrwren: awesome, thx
#juju 2017-07-04
<stub> jacekn: At a minimum we need a README. Some of them were inheriting the base layer's README IIRC
<jacekn> stub: good point
#juju 2017-07-05
<cnf> ugh, debugging juju is a pain
<rick_h> cnf: :( trouble in paradise?
<cnf> not a lot of paradise, tbh
<cnf> once again got a deploy that seems to be waiting for stuff
<rick_h> bummer cnf
<rick_h> cnf: waiting for the machine? or waiting for download of agents onto the machine? or the charm getting setup?
<cnf> i don't know
<cnf> it says "waiting for machine"
<rick_h> cnf: k, and so if you run juju status there's a machine with a # there that's got some info?
<rick_h> cnf: and if you juju show-machine X it should give you some hint on what's up with that machine coming up?
<cnf> nope
<cnf> juju show-machine  1/lxd/0
<cnf> model: openstack
<cnf> machines: {}
<rick_h> try just "juju show-machine 1"
<cnf> nothing that indicates what could be the problem
<rick_h> cnf: hmm, guess the next thing would be the lxd logs on the machine 1 to see that it got the request and why it's not creating the container?
<cnf> i'm looking at them
<cnf> it has created the container
<cnf> it is running
<cnf> hmm, how do i attach to a running container again
<cnf> right, it's lacking ip
<cnf> wtf juju
<cnf> >,<
<cnf> it is downright ignoring the constraints
<cnf> i'm seriously getting fed up with juju :(
<cnf> constraints: "spaces=space-maas" should put that container in that space, right?
<cnf> hmm, MaaS has an IP allocated for it
<cnf> but juju didn't apply it to the container
<cnf> any suggestions?
<rick_h> cnf: you need a bind for the application
<cnf> rick_h: sorry?
<rick_h> cnf: https://jujucharms.com/docs/2.2/charms-deploying#deploying-to-spaces
<cnf> i'm not sure what you are trying to tell me
<rick_h> cnf: sorry, so you're deploying something into a container on machine 1 and want it to use the IP address for the space-maas?
<cnf> well, "want", it NEEDS an IP in that space, or nothing will work
<cnf> so i have a constraint to the service, which used to work in 2.1
<cnf> but now it seems to get ignored
<cnf> for _all_ my containers
<cnf> jamespage: prod?
<cnf> or jam ^^;
<cnf> hmz
<rick_h> Coming up in 30min, The Juju Show #16 where we talk about modeling relations across models. https://www.youtube.com/watch?v=d_vMN_opVlA #jujucharms
<rick_h> marcoceppi: bdx kwmonroe arosales aisrael and anyone else that might be interested in sitting in ^
<arosales> modeling relations across models, is that a thing now ?
<rick_h> arosales: :) come sit in and talk about it :P
<arosales> rick_h: sounds interesting, will do
<wolsen> what's the process to get a new version of charmhelpers uploaded to pypi?
<tvansteenburgh1> wolsen: the process is to ping me
<wolsen> tvansteenburgh: ping
<wolsen> :-)
<tvansteenburgh> yeah, i saw that coming :P
<kwmonroe> rick_h: got a live link for the show?
<rick_h> kwmonroe: yep, getting it now sec
<rick_h> https://hangouts.google.com/hangouts/_/ej2jhdqknnfdzkpildnekcgw2ye for those joining in
<rick_h> https://www.youtube.com/watch?v=d_vMN_opVlA for those that want to watch
<rick_h> lol wolsen
<rick_h> "in that case...consider yourself pinged"
<wolsen> tvansteenburgh: there's a bug fix in the contrib.openstack code which helps a bit with the ordering of dictionaries for rendering config files - it's triggering undesirable restarts of services due to iterating over a dictionary
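[The restart problem wolsen describes is easy to sketch: if a config file is rendered by iterating a dict in unspecified order, identical input can render to different bytes across runs, and hash-based change detection then restarts services needlessly. The function names below are illustrative, not the actual charm-helpers code; the fix pattern is simply sorting keys before rendering.]

```python
import hashlib

def render_config(options):
    # Sorting the keys makes the rendered text a pure function of the
    # option values, so its hash changes only when the config really does.
    lines = ["%s = %s" % (k, options[k]) for k in sorted(options)]
    return "\n".join(lines) + "\n"

def content_hash(text):
    # Stand-in for the change detection that decides whether to restart.
    return hashlib.sha256(text.encode()).hexdigest()

# Two dicts with the same items but different insertion order render
# identically, so no spurious restart would be triggered.
a = render_config({"workers": 4, "debug": False})
b = render_config({"debug": False, "workers": 4})
assert a == b
assert content_hash(a) == content_hash(b)
```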
<tvansteenburgh> wolsen: will release momentarily
<wolsen> thanks tvansteenburgh !
<wolsen> rick_h: to be fair, tvansteenburgh changed his nick just after telling me to ping him
<tvansteenburgh> oh, you caught that
<rick_h> lol
<rick_h> even more awesome
<wolsen> sly and clever
<rick_h> "I'll trick tab complete /nick zrick_h"
<rick_h> bwuhahahaha
<tvansteenburgh> wolsen: new release uploaded
<wolsen> tvansteenburgh: thank you very much sir
<rick_h> aisrael: you up for joining and talking docs sprint?
<tvansteenburgh> wolsen: np
<aisrael> rick_h: head down on pre-travel backlog. Maybe next time!
<kwmonroe> holy crap rick_h -- just skipped through some youtube playback.  my hamsters are clearly dead :/  really sorry bout that!!
<rick_h> kwmonroe: all good, it was close enough
<Budgie^Smore> o/ juju world
<rick_h> Budgie^Smore: what's up
<Budgie^Smore> not much rick_h, enjoy the holiday?
<rick_h> you know it, some off roading, mountain biking, fireworks and beverages. All make for a good time :)
<Budgie^Smore> sounds like it, although living in CA we have to watch fireworks not fire them
<rick_h> I prefer watching. You get better fireworks https://www.flickr.com/photos/7508761@N03/albums/72157682727171142 :)
<Budgie^Smore> yeah also gives you a chance to take photos too
<rick_h> Heh no doubt
<aisrael> I might be over-thinking this, but are spaces the only way to add/manage machines with multiple network interfaces? If so, is that still MAAS-only?
#juju 2017-07-06
<cnf> morning
<cnf> so anyone around that can help me figure out why juju isn't putting ip's in my containers even though i have the correct constraint on the services?
<wpk> What's the setup?
<cnf> i used to do it fine, it stopped doing it when i upgraded to the latest version
<cnf> wpk: running on MaaS
<wgrant> cnf: It's not adding the extra interfaces to the containers?
<cnf> wgrant: right
<wgrant> cnf: Juju 2.2.0? That's fixed in 2.2.1.
<cnf> wgrant: it _is_ getting ip's from maas...
<cnf> oh
<cnf> so i need to upgrade again
<cnf> can you link the bug?
<wgrant> cnf: https://bugs.launchpad.net/juju/+bug/1698443
<mup> Bug #1698443: Juju-2.2 does not create interfaces in an LXD for all spaces <4010> <canonical-bootstack> <containers> <cpe> <cpe-sa> <eda> <maas-provider> <network> <juju:Fix Committed by wpk> <juju 2.2:Fix Released by wpk> <https://launchpad.net/bugs/1698443>
<cnf> thanks
<cnf> hmz
<cnf> right, so juju messed up my networking, again
<cnf> >,<
<cnf> so when MaaS configures the machine, it works
<cnf> juju takes over, and i can't even make arp work anymore on that vlan
<cnf> damn machine stops replying to arp requests o,O
<wpk> cnf: do you have console access to that machine? e/n/i would be useful
<wpk> or could you set log level to debug, try to add the machine then and look what's in the log
<cnf> i do
<wpk> cnf: Could you post /etc/network/interfaces and /etc/network/interfaces.(backup...) somewhere?
<cnf> wpk: https://bpaste.net/show/59224fa597dc
<cnf> i have 2 backups
<cnf> and a .new
<wpk> cnf: paste everything if you can
<cnf> wpk: https://bpaste.net/show/4873574782b0
<wpk> and the .new?
<wpk> also, what's happening when you try to do ifup -a ?
<cnf> https://bpaste.net/show/a13728fa885b
<cnf> sorry if i'm a bit slow, helping a coworker install a VM
<cnf> ifup -a just gives a new prompt
<cnf> no messages
<cnf> exit code 0
<wpk> yet the machine has no network connectivity?
<wpk> I can't see anything wrong with those files, the bridges seem to be configured correctly
<cnf> wpk: only the vlan 4013
<cnf> the rest works fine
<cnf> i can see arp requests coming in
<cnf> but no replies going back out
<wpk> cnf: could you remove post-up/pre-down lines from bond0.4013 definition (leave them for br-bond0.4013) and then do ifdown -a; ifup -a ?
<cnf> ok, i'm back :P
<cnf> and if i do a ifdown -a, won't i lose connectivity? PP
<cnf> wpk: ^^
<cnf> i just rebooted
<wpk> I thought you were using a console of some sort, you probably would :)
<cnf> no, juju ssh
<cnf> it's running on MaaS
<cnf> ok, no difference
<cnf> also, all interfaces have MTU 9100 or 9000
<cnf> except br-bond0.4013
<cnf> wait, i just noticed the routes in /e/n/i are wrong
<cnf> and also not what is actually set on the system
<cnf> wt...
<wpk> that is a wtf
<wpk> where is it getting them from then?
<cnf> i have no idea
<cnf> it should not even be touching that  subnet
<cnf> apart form that, i can't even ping the gateway on the 4013 vlan
<cnf> it works when maas brings it up
<cnf> juju takes over, and it stops working
<cnf> hmm, and it's not outputting icmp packets, either, on that vlan o,O
<cnf> not even when i ping -I
<cnf> i'm flabbergasted o,O
<wpk> The fact that e/n/i is inconsistent with what's really there is weird for me
<cnf> yes
 * cnf sighs
<cnf> i'm getting a serious juju burnout...
<wpk> hm, one more idea - could you disable jujud on this machine 'permanently', reboot it and see if e/n/i is consistent with configured networking?
<wpk> systemctl disable jujud-machine-0
<wpk> cnf: juju might be messing with it, but we have to rule out that something is broken in Ubuntu itself
<julen> Hi there! does someone happen to know the right API endpoint address for MaaS?
<julen> in the docu it says that http://$ip:5240/MAAS
<julen> but my MaaS controller is not listening on port 5240 for IPv4
<julen> I also tried http://$ip/MAAS/api/2.0/  but it's also not working
<joedborg> hey everyone
<joedborg> is there a "status_set()" eqv for old bash charms?
<admcleod> joedborg: i think its just status-set ?
<admcleod> joedborg: https://github.com/juju-solutions/charms.reactive/blob/7322a9bc13ffde1960bcfcb9166e3c09341d27bb/docs/index.rst
<joedborg> cheers @admcleod that looks promising
<admcleod> joedborg: !
<cnf> wpk: rebooting
<cnf> wpk: still the same
<joedborg> admcleod works perfectly!  thank you sir
<admcleod> joedborg: !
<admcleod> joedborg: Success rate is 100 percent (2/2), round-trip min/avg/max = 4/6/8 ms
<wpk> cnf: and jujud is not running?
<cnf> uhm, a lot of jujud things are running
<cnf> https://bpaste.net/show/7f8d48b33757
<cnf> is that normal?
<wpk> cnf: have you disabled jujud-machine-0 before rebooting?
<cnf> i did  sudo systemctl disable jujud-machine-0
<cnf> hmm
<cnf> wpk: was that not enough?
<wpk> cnf: hm, it should be...
<wpk> cnf: systemctl list-unit-files \*juju\*
<cnf> juju-clean-shutdown.service          enabled
<cnf> jujud-unit-neutron-gateway-0.service enabled
<cnf> jujud-unit-ntp-3.service             enabled
<wpk> disable all of those, reboot and see
<wpk> hmm
<wpk> but before that
<wpk> set logging to DEBUG, reboot, and copy logs somewhere (or mail them to me)
<cnf> in syslog?
<wpk> in /var/log/juju/
<cnf> how do i set logging to debug in juju?
<cnf> especially as i just disabled all things juju?
<wpk> reenable jujud-machine-0
<wpk> something really bad is happening there, I'd guess that multiple units in LXCs are trying to start up container networking all at once
<cnf> ok, how do i systemd jujud-machine-0 ?
<wpk> systemctl start jujud-machine-0
<cnf> Failed to start jujud-machine-0.service: Unit jujud-machine-0.service not found.
<wpk> juju model-config logging-config="<root>=DEBUG"  should work
<cnf> ok
<cnf> but now juju is still disabled, right?
<wpk> But the units should receive this logging
<cnf> but if juju isn't running, how can it log things?
<wpk> jujud-unit-* are running and will be logging
<cnf> $ systemctl list-unit-files \*juju\*
<cnf> UNIT FILE                   STATE
<cnf> juju-clean-shutdown.service disabled
<cnf> 1 unit files listed.
<cnf> nothing is running?
<wpk> what about the jujud-unit-ntp-3.service and jujud-unit-neutron-gateway-0?
<cnf> no, i stopped them all
<wpk> Ah, ok. Reboot is a way to go then probably
<wpk> I have a suspicion of what might be happening, but having a debug log from the units would be really helpful
<wpk> to see -why- it's happening
<cnf> i'm rebooting
<cnf> ok wpk, what log do you want?
<cnf> machine-0 ?
<cnf> wpk: https://bpaste.net/show/c60636a36174
<cnf> anything in there?
<cnf> k, wpk it's time to go home here, can I poke you tomorrow?
<cnf> I have meetings till 13:00 CET
<wpk> cnf: Please do.
<wpk> cnf: The only unusual thing in the logs is "2017-07-06 13:27:25 INFO juju.api.common network.go:118 no addresses observed on interface "bond0.4013""
<cnf> hmm
#juju 2017-07-07
<jac_cplane> how do i unset the no-proxy env variable in juju 2.2?  this was not set in 2.1   - juju model-config no-proxy="" gives error: ERROR no-proxy: expected string, got nothing
<tvansteenburgh1> jac_cplane: juju model-config --reset no-proxy
<jac_cplane> no --reset will set it to the default value - which is set to localhost
<jac_cplane> there is no --unset command
<Hetfield> good morning. i have a hook failed: "config-changed" error on a unit. someone did something but i don't know what. is there a sort of log of juju action on a units?
<Bringe> Hey, i do have a question about juju.
<zeestrat> Bringe: Fire away.
<Bringe> What is the best practice if a machine with active services breaks down, how am i supposed to replace that? I'd like to "redeploy" it without having to build a "new" machine and add every service to that new machine.
<Bringe> Is there a way to do that?
<Bringe> In a private cloud with maas.
<rick_h> Bringe: so you'd add the new machine and then run add-unit to the applications using --to to specify where to place it?
<rick_h> Bringe: not sure on how it was setup to start with
<rick_h> Bringe: but that way relations/config/etc are kept and you're just adding a new unit for things
<Bringe> I agree, i can do that. Is there no more automated way of saying "put services that were on machine X onto machine Y"? :(
<digv> juju bootstrap fails Error: ld.so:object 'libeatmydata.so' from LD_PRELOAD cannot be preloaded , Running module ntp failed
<rick_h> Bringe: no, I suppose that's something someone could play with as a plugin using libjuju to script out a search of what's there andrecreate
<digv> any idea?
<rick_h> digv: wow, that's a new one for me. Doing a quick look
<rick_h> digv: what cloud/series is this?
<digv> MAAS
<rick_h> digv: do the machines come up cleanly if you just start them from MAAS?
<rick_h> digv: what OS are they in MAAS? trusty/xenial/?
<Bringe> Well thanks for the information, rick_h!
<rick_h> Bringe: sorry, always wish I could say "yea we can do that" but sometimes, not there yet
<digv> yes.. it is clean setup with ubuntu xenial
<Bringe> digv: We encountered a similar problem, try forcing the kernel to be at least at xenial (hwe-16.04), that fixed it for us.
<Bringe> rick_h: No problem, we will find a solution.
<digv> Bringe: ok.. Let me try it quickly
<julen> Hi there! I'm also having some problem bootstrapping juju with MaaS. It stops at:
<julen> 13:50:14 INFO  juju.juju api.go:67 connecting to API addresses: [10.0.100.47:17070]
<julen> 14:00:12 DEBUG juju.api apiclient.go:837 error dialing websocket: Forbidden
<digv> Bringe: It's working.. thanks a lot :)
<julen> Is it possible to define multiple "no-proxy" addresses as:
<julen> juju bootstrap machine --config http-proxy=http...  --config no-proxy=192.168.1.2,10.0.0.2
<julen> At the node bootstrapped by juju, I can run "nc -vz 10.0.0.51 17070"  and it works, but juju still gets stuck on that step
<julen> If I make curl to that address I get some binary stuff, so the http-proxy stuff seems right, but the API is still not responding
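For reference, juju's proxy model-config keys do take a comma-separated no-proxy list at bootstrap; a sketch (the cloud name, proxy URL and addresses are placeholders, not taken from the log):

```shell
juju bootstrap mymaas mycontroller \
    --config http-proxy=http://proxy.example.com:3128 \
    --config https-proxy=http://proxy.example.com:3128 \
    --config no-proxy=127.0.0.1,localhost,192.168.1.2,10.0.0.2
```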
<cnf> wpk: poke
<wpk> cnf: pokeback
<cnf> \o/
<cnf> i'm done with meetings for today
<cnf> back to fixing the juju thingies
<wpk> yay!
<wpk> Could you start 'from scratch'?
<wpk> kill this machine, then add it, and step by step see what happens?
<cnf> i can juju destroy-model, and juju deploy, if you want
<cnf> it'll take a while, booting HP machines isn't fast
<cnf> wpk: will that work, or did you want something more specific?
<wpk> destroy-model, add-model, model-config logging-config "<root>=DEBUG", deploy
<wpk> that should be enough
<cnf> ok
<cnf> Waiting on model to be removed, 16 machine(s), 16 application(s)...
<cnf> ERROR expected "key=value", got "logging-config" ?
<cnf> juju model-config logging-config="<root>=DEBUG" works
<wpk> Ah, sorry, right
<cnf> hmz, juju is being weird
<cnf> wpk: how do i force remove a model from juju?
<wpk> what do you mean by 'weird'?
<cnf> MaaS shows all machines as ready
<cnf> juju says it can't release the machines
<cnf> i'm stuck at Waiting on model to be removed, 4 machine(s)...
<cnf> oh, seems it timed out, and did it anyway
<cnf> o,O
<cnf> ok, now we wait half an hour
<cnf> wpk: machines are up, juju is doing its thing
<wpk> cnf: cool
<cnf> wpk: it's almost there
<cnf> wpk: what's the next step?
<wpk> cnf: what's the current status?
<wpk> is the machine dead as it was previously?
<wpk> well, network-wise dead
<cnf> yep
<cnf> on that vlan, anyway
<wpk> could you copy /var/log/juju/* from it somewhere?
<cnf> https://bpaste.net/show/21ea96526ca7
<cnf> do you need unit-neutron-gateway-0.log and unit-ntp-3.log as well?
<wpk> cnf: could you also provide output of grep "" /etc/network/interfaces/* and ip a ?
<cnf> you mean /etc/network/interfaces.d/* ?
<wpk> /etc/network/interfaces*
<wpk> all files there, new and backups
<cnf> ok
<cnf> you had a / there that made me wonder :P
<wpk> typo :)
<cnf> wpk: https://bpaste.net/show/c2b08cb540ab
<cnf> i still don't get where it gets 172.20.19.248 from, it should not have _anything_ in that subnet
<wpk> and ip a?
<wpk> and ip r
<cnf> oh, https://bpaste.net/show/55a4b7746004
<cnf> i forgot to link it :P
<wpk> ok, so I don't see bond0.4013 there
<wpk> I see br-bond0.4013 but no bond0.4013
<cnf> wpk: https://bpaste.net/show/4fe735871db0
<cnf> wpk: where?
<cnf> item 30 in ip a
<wpk> ah, sorry, yes, it's there
<wpk> but why master ovs-system?
<wpk> could you also do brctl show ?
<cnf> brctl show br-bond0.4013 ?
<cnf> https://bpaste.net/show/ecb674077e42
<wpk> brctl show ovs-system ?
<wpk> where is this coming from, it's not in the e/n/i anywhere
<cnf> bridge name	bridge id		STP enabled	interfaces
<cnf> ovs-system		can't get info Operation not supported
<cnf> wpk: i'm betting that's a neutron thing
<wpk> hm, it might be. and it's eating bond0.4013
<cnf> hmm
<wpk> cnf: did it work on 2.1?
<cnf> yes, but now that you point to that, some other details changed
<cnf> let me try some shizzle
<cnf> i need to destroy everything though
<cnf> anything you'd like me to check before I do?
<wpk> cnf: copy of syslog and unit-neutron-gateway-0.log
<wpk> cnf: and then do your thing
<cnf> k, deploying again
<cnf> it'll take half an hour :P
<cnf> ugh, now juju isn't asking for all the machines it needs o,O
<cnf> why is it not asking for more machines? o,O
<cnf> wpk: any reason juju isn't asking MaaS for all its machines?
<wpk> whaa?
<cnf> i have 4 machines configured in my model
<cnf> juju is only asking for 1 from maas
<cnf> i'm watching the maas logs, it's just not getting any other requests
<cnf> i can't juju retry-provisioning 0
<cnf> because machine 0 isn't in an error state
<cnf> i see in MaaS
<cnf> Jul  7 16:58:50 MAAS maas.api: [info] Request from user juju to acquire a machine with constraints: [('tags', ['compute', 'openstack']), ('interfaces', ['0:space=5;1:space=1;2:space=2']), ('agent_name', ['a702d342-3c94-4ff5-8548-c296fb5d2dae']), ('zone', ['default'])]
<cnf> which is right
<cnf> but no requests for the other 3 machines
<wpk> So the machine is in the proper state in juju but it's not provisioned?
<wpk> (brb)
<cnf> wpk: https://bpaste.net/show/77d220e2a785
<cnf> 3 machines in "pending"
<cnf> nothing happening
<cnf> now the 1 machine is deployed
<cnf> the other 3 are not moving
<cnf> wpk: so i just ran the same juju deploy openstack.yml again
<cnf> now it's getting the other 3 machines
<cnf> O,o
<julen> Hi there!
<julen> I'm having a weird problem while bootstrapping
<julen> the process stops waiting for the websocket, at:
<julen> INFO  juju.juju api.go:67 connecting to API addresses: [10.0.0.52:17070]
<julen> and then... DEBUG juju.api apiclient.go:837 error dialing websocket: Forbidden
<julen> I am behind a http-proxy, but the no_proxy and no-proxy variables are defined and from the shell I can access that IP:port with curl
<julen> how can it be that curl can connect but juju not?
<cnf> well, curl isn't connecting to the websocket
<julen> really? ... but after the request, it gets some stuff... binary, but something comes out
<julen> the error says "error dialing websocket: Forbidden", but I do see the IP and the port, and the proxy is not blocking it...
<julen> cnf: from my limited knowledge, I got the picture that a websocket is a port listening to http requests, right?
<cnf> it's listening to websocket requests
<julen> so, curl http://10.0.100.52:17070  is equivalent to what juju is trying to do
<cnf> on the same port as your http request, generally
<cnf> websockets do an upgrade on a http request, and then become a websocket
<cnf> your curl is just getting the html/css/js files
<julen> aha!
<julen> so, my error might have to do with some settings within the API
<cnf> it might be you not being authenticated
<cnf> or a proxy not allowing it
<julen> hmm... that sounds possible...
<julen> but, I don't think our http-proxy has such complex settings, to allow traffic but drop authentication
<julen> is there any way I could check that?
<cnf> not authentication
<cnf> just not allowing websockets
<cnf> you send a websocket upgrade through the http proxy
<cnf> it responds with "nope!"
<cnf> that _might_ be one of the reasons
<cnf> not saying that is the reason
<julen> ... and I probably can not check that with some curl request or something, right?
<cnf> well, use your browser, and enable webdev / debug tools
<cnf> you should be able to see all requests, and the responses
<cnf> generally under network you can see the headers sent and received
<julen> well.. the browser seems to work. I get the headers...
<julen> the response  is just a couple of red dots, I don't know what that means
<julen> cnf: I have an openstack with nova with same settings and it works fully, so I guess I can discard the option of the proxy blocking the websockets, right?
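The distinction cnf draws can be made concrete: a websocket connection starts as an ordinary HTTP request that asks to be upgraded, and a proxy can pass normal GETs while refusing the upgrade. A sketch of checking the handshake by hand (the curl invocation is illustrative, using the controller address from the log; the key/accept pair is the sample from RFC 6455 itself):

```shell
# A websocket handshake is an HTTP request with Upgrade headers (RFC 6455).
# Illustration only -- run against the controller endpoint from the log:
#   curl -i http://10.0.0.52:17070 \
#     -H "Connection: Upgrade" -H "Upgrade: websocket" \
#     -H "Sec-WebSocket-Version: 13" \
#     -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ=="
# A compliant server replies "101 Switching Protocols" with Sec-WebSocket-Accept
# derived from the client key; here computed with RFC 6455's sample nonce:
key="dGhlIHNhbXBsZSBub25jZQ=="
guid="258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
accept=$(printf '%s' "${key}${guid}" | openssl dgst -sha1 -binary | openssl base64)
echo "$accept"  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo= (the value given in RFC 6455 section 1.3)
```

If the proxy strips the Upgrade headers or answers with anything other than 101, that points at the proxy rather than the controller.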
<cnf> wpk: prod
<wpk> cnf: ?
<cnf> wpk: shizzle works, i'm going home :P
<wpk> cnf: could you explain what you have done?
<wpk> cnf: for future generations :)
<cnf> wpk: changed the data-port: in my model
<cnf> wpk: so i shuffled some machines around
<cnf> and my data-port was set to the wrong interface
<cnf> i still don't know where it gets the 172.20.19 network
<cnf> or why it wasn't acquiring the machines
<cnf> but it was ovs claiming an interface that was already provisioned (which i think it should not be allowed to, tbh)
<cnf> wpk: so i guess it was me being an idiot
<cnf> anyway, i'm going home
<cnf> wpk: thanks a lot for your help! and have a nice WE!
<Budgie^Smore> o/ juju world'
#juju 2017-07-08
<ak_dev> Hello, a small doubt, is there any way to exchange info via a relation between two units of a subordinate charm, which are under two different principal charms?
<tvansteenburgh> ak_dev: have you tried using a peer relation?
<ak_dev> hey, thanks for the reply, I looked at peer relations and couldn't find any example as to how to implement it, but if thats the way to go, then will keep looking :-)
<tvansteenburgh> ak_dev: here's an example of a peer relation implementation https://github.com/juju-solutions/layer-tls
<tvansteenburgh> nope sorry
<tvansteenburgh> this one https://github.com/juju-solutions/interface-tls
<tvansteenburgh> the first link is a charm layer that uses it
<ak_dev> tvansteenburgh: thanks a lot for that, exactly what I was looking for!
<tvansteenburgh> ak_dev: np, i found those by searching for 'peer' on this page http://interfaces.juju.solutions/
<ak_dev> tvansteenburgh: oh, thats what even I should have done probably, don't know why it didn't strike me
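For the record, a peer relation is declared in the charm's metadata.yaml under a `peers` block; a minimal sketch (relation and interface names are hypothetical, not from the linked repos):

```yaml
peers:
  cluster:
    interface: my-peer-interface
```

Every unit of the application is automatically joined to every other unit over this relation, which is what lets subordinate units under different principals exchange data.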
#juju 2017-07-09
<ybaumy> hi. when will the kubernetes elastic charm be updated to kubernetes 1.7
<tvansteenburgh> ybaumy: you mean the bundle?
<ybaumy> tvansteenburgh: yes
<tvansteenburgh> ybaumy: possibly tomorrow
<ybaumy> tvansteenburgh: or is there an easy upgrade path when i install the old version without breaking anything
<ybaumy> tvansteenburgh: ah ok
<ybaumy> tvansteenburgh: that long i can wait no problem
<tvansteenburgh> ybaumy: sure, if you already have the old one deployed, you can upgrade just the k8s components
<ybaumy> tvansteenburgh: no i havent though i want to start with it next week possibly
<ybaumy> tvansteenburgh: i got a new project but im not in a hurry
#juju 2018-07-02
<babbageclunk> thumper, wallyworld: why do we run a global clock updater on every controller machine?
<wallyworld> babbageclunk: not sure, i would have thought it's only needed on mongo primary?
<babbageclunk> That's what I'd have thought too, but it gets run in each one
<thumper> NFI
<thumper> wallyworld: got a few minutes to chat?
<wallyworld> thumper: sure
<thumper> wallyworld: 1:1
<thumper> ?
<veebers> babbageclunk: will git complain saying "my tip is behind the remote" if I've done some rebasing locally? Or have I somehow backtracked my local and built on top of that
<babbageclunk> veebers: more context?
<veebers> babbageclunk: hah sorry, so I'm pushing updates to a branch in github, git rejects it and says "Updates were rejected because the tip of your current branch is behind its remote counterpart . . ." normally I would think "Oh, push rejected because I squashed commits, I'll --force" but it's saying I'm behind the remote, I should 'git pull'.
<babbageclunk> If you've rebased (or otherwise messed with history) you won't be able to push to a branch (to which you've already pushed) without --force-with-lease. Is that what you mean?
<veebers> babbageclunk: so perhaps I should just do what it says and actually just git pull
<babbageclunk> Oh, if you squashed then you should just do a --force-with-lease.
<babbageclunk> (which is safer than --force as I understand it, although really I just do what magit does.)
<babbageclunk> https://developer.atlassian.com/blog/2015/04/force-with-lease/
<babbageclunk> veebers: ^
<veebers> babbageclunk: ack, cheers :-)
<veebers> babbageclunk: that worked, thanks again
<babbageclunk> :)
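The squash-then-push situation above can be reproduced against a throwaway local bare "remote" (all paths and the branch name here are hypothetical):

```shell
# Squash a pushed commit, watch a plain push get rejected, then recover
# with --force-with-lease (safer than a blind --force).
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email you@example.com
git config user.name you
echo one > f && git add f && git commit -qm one
echo two >> f && git commit -qam two
git push -q origin HEAD:main
# squash the two commits, rewriting history that is already on the remote
git reset -q --soft HEAD~1
git commit -q --amend -m squashed
# a plain push is now rejected (non-fast-forward)...
git push -q origin HEAD:main 2>/dev/null && echo "plain push ok" || echo "plain push rejected"
# ...but --force-with-lease succeeds, because the remote branch still matches
# our last-known remote-tracking ref, so nobody else's work gets clobbered
git push -q --force-with-lease origin HEAD:main && echo "lease push ok"
```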
<veebers> wallyworld: FYI pushed up those changes, waiting for the unit test run to finish (already fixed the one issue that's popped up)
<wallyworld> ok, will look
<wallyworld> veebers: a couple of questions around validation/naming. lgtm to land once you have looked at the comments; the validation one probably needs at least a todo
<veebers> wallyworld: ack, renaming Resource -> Metadata atm. I'll add a card for the validation todo
<kelvin_> wallyworld, would u mind taking a look at this PR when u get time? thanks https://github.com/juju/juju/pull/8876
<babbageclunk> thumper: have a moment? want to talk something through.
<thumper> babbageclunk: in a call with wallyworld and jam just now, but maybe after
<babbageclunk> ok cool
<thumper> babbageclunk: wanna chat now?
<wallyworld> kelvin_: sorry, was in call, reviewed
<kelvin_> thanks, wallyworld
<babbageclunk> thumper: oops, yup! In 1:1?
<thumper> babbageclunk: ack, btw, moved to a meet
<babbageclunk> sweet, I always go via the calendar to be safe anyway
<veebers> wallyworld: I've pushed the changes, I haven't squashed the latest commit as I wanted you to eyeball it quickly as there where a handful of changes since your approval comments
<wallyworld> sure
<wallyworld> veebers: looks like a couple of unneeded aliases in deploy.go?
<wallyworld> import aliases
<veebers> wallyworld: ah, I needed them but might not now, let me dbl check
<wallyworld> veebers: lgtm though, with the aliases removed if they ar eno longer needed
<veebers> wallyworld: ah, because validateResourceDetails(resources map...), I could rename that map in the function ^_^
<wallyworld> you can have a param with the same name as an import
<wallyworld> but renaming the param is better here
<wallyworld> "res" or something
<veebers> wallyworld: done. In that last push I also removed a commentary comment and just simplified the err return a bit
<wallyworld> veebers: great, land it, no need for me to look again
<veebers> wallyworld: sweet, will do
<kelvin_> wallyworld, changed the name to bitcoin miner. :->
<wallyworld> good :-)
<kelvin_> lol land it now
<vino> wallyworld: sorry i forgot to ping u.. i pushed a PR for review a few hrs ago.
<vino> when u get time plz take a look.
<vino> https://github.com/juju/juju/pull/8881
<vino> i am going to start working on CLI part.
<wallyworld> vino: my internet dropped for a bit there. i've left some comments, see if they make sense
<wallyworld> vino: having internet problems here at the moment, not sure if you saw my reply - review done, let me know if you have questions
<vino> wallyworld: just checking.
<w0jtas> anyone happy to help with fresh new juju openstack setup ? i cannot launch first instance, "No valid host available"
<stickupkid> manadart: CR for this one https://github.com/juju/juju/pull/8879
<manadart> stickupkid: Looked at that one this morning, but I'll review properly now.
<stickupkid> manadart: yeah, I was manual testing it, as I want to make sure that it worked correctly
<stickupkid> manadart: so this won't get a cert from the API, it assumes you have access to everything...
<stickupkid> manadart: there is a flakey test in the worker suite - `ProvisionerSuite.TestStopInstancesIgnoresMachinesWithKeep` - i'll try and rebuild it and see if that goes away
<w0jtas> anyone could help? default localhost openstack setup is not working , neutron.log have error "The resource could not be found.", also in keystone i have openssl error
<manadart> stickupkid: Got time for a HO?
<manadart> stickupkid: Successfully bootstrapped to a remote, but had some questions about the interactive add.
<rick_h_> morning party people
<manadart> rick_h_: Morning.
<w0jtas> anyone ?? really need help https://pastebin.com/0Dz7gUY3
<rick_h_> w0jtas: sorry, you'll have to check with the openstack folks. I'm not sure how that is set up. If you can get into the python file and debug what the command it's trying to run it maybe you can run it from the cli on the host and see why the openssl command is returning non-0
<rick_h_> w0jtas: check out https://github.com/openstack-charmers/openstack-community
<w0jtas> rick_h_: ok will try on openstack chan ;) thanks for answer anyway
<w0jtas> tinwood: any chance to help?
<magicaltrout> kwmonroe: we tested a manual Hue deployment over the Bigtop stuff the other day and it worked pretty well, so we're going to continue on that path and figure out whether to shove it up to bigdata-charmers at a later date
<magicaltrout> the lovely rmcd is also starting work on the Druid charms
<magicaltrout> which will eventually back on to HDFS
<magicaltrout> I was messing around with Apache Ignite over the Yarn stack over the weekend
<magicaltrout> that worked pretty well
<rathore_> all: how to find out why juju is rejecting my bundle.yaml ?
<rathore_> ERROR invalid charm or bundle provided at "./bundle.yaml"
<rick_h_> rathore_: try using charm proof against it
<rick_h_> rathore_: oic, is this from juju deploy ./bundle.yaml ?
<rathore_> yes it is
<rick_h_> rathore_: there's a charm tool for charm and bundle authors and it has a lint tool "charm proof" to help find any issues in them
<rathore_> FATAL: No bundle.yaml (Bundle) or metadata.yaml (Charm) found, cannot proof
<rick_h_> rathore_: and that bundle in the cwd?
<rathore_> charm proof is complaining it doesnt find
<rathore_> yes
<rathore_> i just modified some bits of openstack-lxd-xenial-queen and juju has started complaining
<rathore_> got it to work, just had to run charm proof instead of charm proof ./bundle.yaml
<rick_h_> rathore_: gotcha
<rick_h_> rathore_: so maybe it's just juju deploy bundle.yaml vs the ./
<rick_h_> ?
<rathore_> naah juju deploy bundle.yaml is not giving out any errors
<rfowler> :~$ juju run-action ceph-osd/0 zap-disk /dev/sdb i-really-mean-it
<rfowler> ERROR argument "/dev/sdb" must be of the form key...=value
<rfowler> how am I supposed to type that
<rick_h_> rfowler: juju run-action ceph-osd/0 zap-disk="/dev/sdb"?
<rfowler> ~$ juju run-action ceph-osd/0 zap-disk="/dev/sdb"
<rfowler> ERROR invalid unit or action name "zap-disk=/dev/sdb"
<rfowler> rick_h_: same
<rick_h_> oh sorry
<rick_h_> the action name is the param it's not a arg to it
<rick_h_> sec, have to look at the action in the charm.
<rick_h_> rfowler: ok, so looks like you need the argument flag first
<rick_h_> rfowler: so: run-action ceph-osd/0 zapdisk device="/dev/sdb" i-really-mean-it=true
<rick_h_> rfowler: or something like that
<rick_h_> rfowler: https://api.jujucharms.com/charmstore/v5/ceph-osd/archive/actions.yaml for the action definition
<rick_h_> sorry, the arg is "devices" with an S
<rfowler> rick_h_: works thanks
<rfowler> rick_h_: except it fails and says the disk is mounted when I know it isn't
<rick_h_> rfowler: doh, well not sure about that. That's going to fall into the work the charm does itself.
<rick_h_> rfowler: but glad we could get it executing
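Putting rick_h_'s corrections together, the invocation that worked (per the ceph-osd actions.yaml linked above: the action name is zap-disk with a hyphen, and the parameter is devices with an s):

```shell
juju run-action ceph-osd/0 zap-disk devices=/dev/sdb i-really-mean-it=true
```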
<stickupkid> manadart: this PR brings in new error messages, removes the old tools/lxdclient from the provider (i've kept the code around for now!).
<stickupkid> manadart: anychance you can have a look?
<stickupkid> manadart: I've also removed the ProviderLXDServer interface, in preference to the Server interface. It made testing a lot easier in the long run
<manadart> stickupkid: Added some comments. I have to attend to kids now; might check back later or failing that, first thing in the morning.
<stickupkid> manadart: sure thing
<rathore_> all: whats the correct way of upgrading a bundle
<rathore_> i have one deployed and i need to make some changes
<rick_h_> rathore_: so there's no method of upgrading a bundle. You just make the changes you need.
<rick_h_> rathore_: since bundles aren't really entities that can be tracked and tell what's changed from one install to a later one
<kwmonroe> magicaltrout: i'm curious when you say you tested a manual Hue over bigtop... did you add puppet/packaging stuff back to bigtop for hue, or did you do a standalone hue that interacted with other bigtop components?
<kwmonroe> either way, good to hear that hue worked pretty well!
<stickupkid> manadart jam - seems autoload-credentials is throwing an error concerning oci ERROR could not detect credentials for provider "oci": `stat /home/simon/.oraclebmc/config: no such file or directory`
<rathore_> Hey all, anyone knows ab example of neutron gateway ha with juju?
<thumper> morning
<thumper> rick_h_: seen any official go ahead?
<thumper> rick_h_: did you poke solutions qa?
<thumper> rick_h_: also, bug 1776995 is very important for upgrade series work
<mup> Bug #1776995: subordinate can't relate to applications with different series <upgradeseries> <juju:Triaged> <https://launchpad.net/bugs/1776995>
<thumper> rick_h_: we should look to get either hml or externalreality to weave it in with current work
<cory_fu> wallyworld: Ping me when you get in?
<rick_h_> thumper no word from qa I saw today. Ty for heads up on the bug. I'm out ATM with an appointment.
<thumper> rick_h_: ack
<magicaltrout> kwmonroe: sorry missed you earlier, we just grabbed the latest and manually stuck a build on there, i don't plan on trying to backport it into bigtop since its been removed
<rick_h_> jhobbs: any idea on ok to release? Haven't seen Chris reply to emails yet.
<kwmonroe> roger that magicaltrout -- i figured re-importing a puppet manifest atop a bigtop repo clone would be more hassle than it was worth (considering they removed it on purpose).  still good to know integration worked.
<kwmonroe> and you don't have to worry with those pesky debs.  tar to production is the way to go ;)
<magicaltrout> i was considering snapping it up
<magicaltrout> i dunno, you can't always appease the ASF, I think its a good UI for demos etc at the very least because then business managers etc can grok whats going on
<magicaltrout> rather than doing some hdfs dfs -ls command and showing them a terminal prompt :P
<kwmonroe> magicaltrout: +1 on hue being a great Hadoop User Experience for demos (see what i did there?).
<kwmonroe> magicaltrout: that said... and i say this with much fear at your retort, won't the business manager be equally impressed with the namenode UI + whatever dashboard you want (like zeppelin)?
<magicaltrout> surely that depends on whether you're interested in getting data in or out
<kwmonroe> i think all the data has already gone in.  we just care about the output now magicaltrout.  and it doesn't look good (for humanity).
<kwmonroe> still, going for hue against http://mail-archives.apache.org/mod_mbox/bigtop-dev/201804.mbox/%3C5BA7B1B4-B514-4196-ADCB-2D8ECBCCC97F%40oflebbe.de%3E makes me think you secretly want to take over bigtop maint.  you have my +1. not sure how much that pays tho.
<magicaltrout> me and my interns against the world!
<kwmonroe> my hopes are with you
<wallyworld> thumper: release call?
<thumper> coming
<cory_fu> wallyworld: When you're done with that call, can I have a few minutes of your time?
<veebers> wallyworld: thoughts on migration strat for the docker resource collection? I don't imagine critical as it'll just download the resource if not found. Oh, but what about CLI provided resources, will they get lost?
<wallyworld> cory_fu: yeah, saw your ping :-) had just crawled out of bed with a coffee in time to make the meeting. free now
<wallyworld> veebers: migration will just need an update to the model description format to add the new collection
<wallyworld> that can come a bit later
<veebers> wallyworld: I'll add the new resource collection to the 'ignore' in the migration_internal_test for now (and add a card)
<cory_fu> wallyworld: np.  PMed you a Hangout link,
<cory_fu> Cynerva: Are you still around?
<Cynerva> cory_fu: yeah, what's up?
<cory_fu> PM'd you a Hangout link if you have a minute
<wallyworld> veebers: sgtm, that's what we normally do for that
<cory_fu> wallyworld: https://github.com/juju-solutions/layer-caas-base/pull/5 and https://github.com/juju-solutions/charm-kubeflow-jupyterhub
<wallyworld> cory_fu: great ty, will try them out today
<cory_fu> Heading out.  o/
<rick_h_> release the hounds! errr...I mean 2.4. 0
<rick_h_> wheeeee
#juju 2018-07-03
<vino> veebers: i am here
 * thumper goes to make a coffee while tests run
 * thumper headdesks
 * thumper primal screams...
<thumper> FFS
<thumper> our fake authorizer is SO fake, it was never checking anything
 * blahdeblah grabs popcorn
<thumper> it came with a hard coded "admin" default user for the dummy provider
<thumper> and the fact that the admin permission is called "admin"
<thumper> and we have a fake drop through that is a "helper"
<thumper> that allows you to name the user based on the permission you'd like it to have
<thumper> so the fake authorizer always had the default user with admin permissions
<thumper> no matter what you set
 * thumper runs all the tests again after the latest pass of fixing
 * thumper nips out to look at a car
<wallyworld_> kelvin: if you get a chance, a review of this PR would be great - it's adds the ability for k8s to report storage info back to juju for recording in the model https://github.com/juju/juju/pull/8884
<kelvin> wallyworld_, looking now
<wallyworld_> kelvin: a lot of the code will be new to you - let me know if you have questions
<kelvin> wallyworld_, yup, thanks.
<manadart> externalreality: Got 5 for a HO?
<icey> I have a deployed openstack, and I have different types of hypervisors; I can control the hypervisors that guests land on by using flavors; how can I control which flavors juju will use by default?
<rathore_> Hi all, how can I install 2 charms that use ntp as subordinate on same machine? They all try to use the same port which fails some charms. It is to support both nova-compute and neutron-gateway
<blahdeblah> rathore_: You shouldn't do that.
<blahdeblah> You should configure one ntp subordinate on the bare metal and leave it at that.
<rathore_> How can i do that? when i mention ntp to be deployed on specific hosts, neutron-gateway still tries to create subordinate charms
<blahdeblah> rathore_: I'd need to see your configuration to be sure, but likely what you need to do is remove the relation between the neutron-gateway charm and ntp.
<rathore_> :blahdeblah : http://termbin.com/1gam
<rathore_> I am attempting to have 3 servers for control plane. All 3 will have all required services in HA
<blahdeblah> rathore_: right, so you are hulk-smashing both nova-compute and neutron-gateway to nodes 0-2.  So removing the ntp relation on neutron-gateway should be safe in that situation.
<blahdeblah> incidentally, the same would apply to other similar subordinates like nrpe, telegraf, landscape-client, etc.
<blahdeblah> In some of our clouds we get around this by using cs:ubuntu as a bare metal base charm and relating all the machine-specific subordinates to that.
<blahdeblah> rathore_: Also, in a 5-node deployment there's no value in enabling auto-peers on the ntp charm; it will only reduce performance due to the way the peer relation is implemented at the moment.
<blahdeblah> (reduce performance of relation convergence, that is - it shouldn't have any effect on the actual running system once juju has deployed things and set up the relations)
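A minimal sketch of the cs:ubuntu pattern blahdeblah describes (the placement, unit counts, and endpoint names are assumptions for illustration, not taken from rathore_'s bundle):

```yaml
applications:
  ubuntu:
    charm: cs:ubuntu
    num_units: 3
    to: ["0", "1", "2"]
  ntp:
    charm: cs:ntp
relations:
  # machine-wide subordinates relate once to the base charm,
  # not to every principal colocated on the host
  - ["ubuntu:juju-info", "ntp:juju-info"]
```

The same shape works for other machine-scoped subordinates (nrpe, telegraf, landscape-client), each related only to the ubuntu application.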
<rathore_> blahdeblah: thanks. Is there anywhere an example to implement this? I am new to openstack and not sure if having nova-compute and neutron-gateway is a bad idea
<blahdeblah> I'll leave that for others more qualified to comment further, but we certainly have several clouds where both nova-compute and neutron-gateway are deployed together.
<blahdeblah> (on the same bare metal nodes)
<rathore_> blahdeblah: Thanks, is there any example for the cs:ubuntu trick anywhere in wild
<blahdeblah> rathore_: I'm not sure; but it would look very similar to what you have already
<blahdeblah> Probably not necessary if you're not adding lots of subordinates
<rathore_> Is it something like https://github.com/gianfredaa/joid/blob/20af269d65cd053ba29ac8d9701bace4b17520be/ci/config_tpl/bundle_tpl/bundle.yaml
<blahdeblah> rathore_: also, #openstack-charms might be a good place to ask about this
<rathore_> blahdeblah: Thanks
<stickupkid> manadart: I'm about to push my changes for ensuring that the local server is created and cached inside the server factory, but it seems there is an issue with bionic. Is there any chance you can have a look?
<manadart> stickupkid: Sure. Just let me polish off this PR.
<rick_h_> Morning party people
<manadart> jam externalreality: Should you have a moment - https://github.com/juju/juju/pull/8885
<stickupkid> anybody know what's changed in mongo recently, we're getting a lot of failures recently
<stickupkid> s/mongo/mongo code/
<manadart> stickupkid: What issue did you have? Looks like it's spinning OK here.
<stickupkid> manadart: i can look into it more later, but http://ci.jujucharms.com/job/github-check-merge-juju/2054/console
<manadart> stickupkid: I was meaning re the PR from earlier.
<stickupkid> ah, sorry
<stickupkid> manadart: let me check
<stickupkid> manadart: let me just create a new lxc bionic container
<rick_h_> externalreality: if you get a sec can you put https://github.com/juju/juju/pull/8885 on your "to look at" list please?
<rick_h_> jam: with 2.4.0 out we should be able to do that merge 2.3 to 2.4 now if that's cool
<jam> rick_h_: thanks for the reminder.
<rick_h_> 2.4.0 release announcement formatted up on discourse. I probably missed some formatting changes so feel free to suggest/edit.  https://discourse.jujucharms.com/t/juju-2-4-0-has-been-released/53
<rathore_> hi all, anyone has seen ubuntu@juju-04c90f-0-lxd-1:~$ sudo ls sudo: unable to resolve host juju-04c90f-0-lxd-1: Connection timed out
<rathore_> lxd containers and hosts are unable to resolve themselves
<rick_h_> rathore_: is this on bionic?
<rathore_> xenial
<rick_h_> rathore_: is their hostname in /etc/hosts?
<rathore_> no it isn't
<rathore_> rick_h_: Is this a known issue in bionic?
<rick_h_> rathore_: sorry, just checking. I know that systemd does something where it resolves with a local daemon thing vs the older way to do things
<rathore_> rick_h_: Thanks, i am also trying bionic now to see if it has same issue
<rathore_> rick_h_: If it helps, the bare metal hosts have the same issue. I am using Maas for provisioning
<jam> manadart: feedback on your PR
<manadart> jam: Ta.
<rathore_> rick_h_: Cannot deploy bionic. Install fails saying it cannot find image for bionic/amd64. So don't know if it works
<rick_h_> rathore_: gotcha, yea you'll have to pull those images down into your MAAS to try it out
<rick_h_> rathore_: sorry, on the phone a bit this morning. If you can replicate it please file a bug with notes on the MAAS/Juju verison and such.
<rathore_> rick_h_: Maas has bionic and bare metal was provisioned with bionic. Its the lxd which complains. I will check if it happens again on xenial and file a bug.
<rathore_> rick_h_: Just to update, the issue is fixed with Juju 2.4
<hml> externalreality: is pr 8864 ready to move from WIP status to review?
<rathore_> rick_h_: I take my words back. It seems to be random.
<jhobbs> any chamhelpers reviewers around? This one is pretty small... https://github.com/juju/charm-helpers/pull/194
<manadart> jam: Fixed it.
<maaudet> After a clean bootstrap with Juju 2.4.0 on AWS, and then running juju enable-ha I get the following warning (but no error state): WARNING juju.mongo mongodb connection failed, will retry: dial tcp REDACTED-MACHINE-0-EXT-IP:37017: i/o timeout
<maaudet> Is it cause for concern?
<maaudet> Both machine 1 and 2 outputs this error in a loop
<rmescandon> Does anybody know if the influxdb and telegraf charms related as influxdb:query - telegraf:influxdb-api work out of the box, or is additional configuration needed? The documentation says so, but I don't see telegraf adding the influxdb output plugin to /etc/telegraf/telegraf.conf
<rick_h_> rmescandon: sorry, not sure off the top of my head.
<rick_h_> maaudet: hmm, that doesn't seem right. Looks like a timeout trying to get the new controller dbs in sync with the first one? If you can replicate it please file a bug with the debug-log details please.
<maaudet> rick_h_: Will do, I replicated it 3 times already, I'm going to put all the details in the bug report
<rick_h_> maaudet: ty
<kwmonroe> maaudet: just reading the 2.4 release notes, it says this about juju-ha-space bootstrap config: When enabling HA, this value must be set where member machines in the HA set have more than one IP address available for MongoDB use, otherwise an error will be reported.
<kwmonroe> maaudet: could it be you're missing that config? https://docs.jujucharms.com/2.4/en/controllers-config
<kwmonroe> maaudet: here's the actual section that talks about juju-ha-space in more deats: https://docs.jujucharms.com/2.4/en/controllers-config#controller-related-spaces.  looks like it's only a "must" if your controller has multiple ip addrs. not sure if that applies to you.
<maaudet> kwmonroe: I tried it, but when I run juju controller-config juju-ha-space=my-space it says that my machine is not part of that space, but when I check juju show-machine 0 it says that it's in that space
<rick_h_> maaudet: kwmonroe so that space stuff is only on MAAS since AWS doesn't know about spaces tbh
<maaudet> that's what I figured
<kwmonroe> aaaah, thx rick_h_.  i'll go back to my cave now.
<rick_h_> kwmonroe: all good
<rick_h_> kwmonroe: heads up, juju show thurs (since the holiday wed) to celebrate 2.4
<rick_h_> and hopefully some other fun stuff to show
<kwmonroe> w00t
<rick_h_> bdx: zeestrat ^
<bdx> rick_h: good deal, count me in
<bdx> congrats on the 2.4 all!!
<rick_h_> woooo!
<pmatulis> rick_h_, maaudet, kwmonroe: i should add MAAS & AWS only for that stuff on controller spaces
<rick_h_> pmatulis: yea, I was just asking folks about that as it *should* work anywhere we support spaces but seems it's MAAS-only atm
<pmatulis> but the main docs do link to the network-space page
<pmatulis> which do say MAAS/AWS only
<maaudet> I'm on AWS actually
<pmatulis> maaudet, in Juju did you create a space for an available AWS subnet?
<maaudet> pmatulis: Yes, 2 subnets for 1 space, both subnets were in 2 different az
<pmatulis> maaudet, and 'juju spaces' looks fine?
<maaudet> pmatulis: I took down the controller for now, but yes, I could see the right ranges in the right space + 2 FAN network ranges
<cory_fu> rick_h_: I'm trying to help someone debug the controller returning "could not fetch relations: cannot get remote application "kubeapi-load-balancer": read tcp 172.31.5.119:36590->172.31.5.119:37017: i/o timeout" and came across https://bugs.launchpad.net/juju/+bug/1644011  They're also having `juju status` hang, but checking on the controller they still see a /usr/lib/juju/mongo3.2/bin/mongod process.  Any clues on what could cause that or how to track it down?
<mup> Bug #1644011: juju needs improved error messaging when controller has connection issues <usability> <juju:Triaged> <https://launchpad.net/bugs/1644011>
<rick_h_> cory_fu: hmm, so is this a cross model relation?
<cory_fu> rick_h_: Nope.  Just a normal relation in a CDK deployment
<rick_h_> cory_fu: hmm, someone else was getting an i/o timeout today setting up HA.
<rick_h_> cory_fu: I'm trying to see if I can replicate it and see what's up
<cory_fu> rick_h_: This was mid-deployment with conjure-up
<rick_h_> cory_fu: with 2.4?
<cory_fu> Not sure
<rick_h_> cory_fu: please check, I'm wondering if there's an issue in 2.4 around this
<cory_fu> Asked, waiting on a response
<rick_h_> cory_fu: k
<zeestrat> rick_h_: thanks for invite. Cruising around Sweden atm so looking forward to watch when back :)
<rick_h_> zeestrat: oooh have fun!
<cory_fu> rick_h_: Apparently, it's Juju 2.3.8, and the juju-db service seems to still be running.
<rick_h_> cory_fu: ok, and 172.31.5.119 is the controller IP address?
<cory_fu> Yes, the cloud internal IP address.
<cory_fu> rick_h_: Just like in the bug that you filed about reporting db connection errors better, it seems to be the controller timing out talking to the db, but in this case the DB seems to be up and running.
<cory_fu> At least, systemctl is reporting it as running, and there were no obvious errors in the log, with update messages from the controller in there after the time of the error
<rick_h_> cory_fu: did we try to restart it?
<rick_h_> cory_fu: as well as the jujud ?
<cory_fu> rick_h_: Unfortunately no.  The person in question apparently wanted to just tear it down and redeploy, since this happened during deployment.  I asked about trying to get more debugging info, but it seemed like they were trying to get EOD before the holiday
<cory_fu> rick_h_: They said that, if it happens again, they'll follow up on Thursday
<rick_h_> cory_fu: ah ok. Yea sorry. I'd like to get to the bottom of it. Feel free to connect us more directly if we can help
<cory_fu> rick_h_: Will do
<thumper> babbageclunk: ping
<thumper> bug 1779904 did we not test the rc upgrades with the upgrade step?
<thumper> also the upgrade steps should be idempotent
<mup> Bug #1779904: 2.4.0 upgrade step failed: bootstrapping raft cluster <juju:New> <https://launchpad.net/bugs/1779904>
<thumper> rick_h_: re bug 1779897, looks like a potential recent lxd change
<mup> Bug #1779897: container already exists <cdo-qa> <foundations-engine> <juju:New> <https://launchpad.net/bugs/1779897>
<thumper> care to put a card on your board to track?
<rick_h_> thumper: rgr, on it
<babbageclunk> thumper: crap
<babbageclunk> No, I didn't test upgrading from an rc to 2.4.0
<babbageclunk> Ah, and because it wasn't a state upgrade I didn't use the standard check upgraded data test for steps, which would have checked it was idempotent. Bugger.
<babbageclunk> thumper: ok, I'll change the upgrade step now
<thumper> babbageclunk: thanks
<veebers> thumper: does that mean we'll be releasing 2.4.1 sooner than expected?
<babbageclunk> thumper: can you review https://github.com/juju/juju/pull/8886 please?
<babbageclunk> I'm just testing both cases now
<thumper> babbageclunk: reviewed, just wondering if just checking the existence of the directory is sufficient to consider raft bootstrapped?
<babbageclunk> thumper: I think so - that directory is created in the process of creating the logstore or snapshot store, immediately before the bootstrapping.
<thumper> ok
<babbageclunk> I'll put that on the PR for posterity too.
#juju 2018-07-04
<veebers> wallyworld_: you have a couple moments? What's the best way to expose a controller config to a cmd? (i.e. https://github.com/juju/juju/blob/develop/cmd/juju/application/deploy.go#L248)
<veebers> wallyworld_: oh, something along the lines of getting an APIContext which we can then use to communicate with the controller and retrieve the needed info?
<thumper> wallyworld_: https://github.com/juju/juju/pull/8888
<thumper> or anyone really...
<veebers> thumper: lgtm
<thumper> veebers: thanks
<wallyworld_> sorry thumper was distracted with #@%#$$#@ k8s storage
<thumper> wallyworld_: that's fine, veebers has sorted it
<veebers> wallyworld_, thumper: Is this the right direction to head for having a bootstrap config for the charmstore url and being able to access it in a command context? (i.e. the deploy command creates a charmstore client and needs to know that url)
<veebers> oops, and the actual link: http://paste.ubuntu.com/p/nWMGhGff7s/
<thumper> I think we need to be very clear about the behaviour we are expecting
<thumper> in the difference between client and server aspects
<thumper> I don't think we should be looking at bootstrap config to determine this
<thumper> for the client side, I'd much rather just use an environment variable to override the behaviour
<thumper> for Juju, it should be controller config
<thumper> wallyworld_: thoughts?
<thumper> hmm...
<thumper> we have a --reset for model-config but not controller-config
<wallyworld_> um
<wallyworld_> yes controller config server side
<wallyworld_> but at bootstrap, it can be passed in to bootstrap command
<wallyworld_> using the existing controller config mechanism
<wallyworld_> it's not a deploy command thing
<wallyworld_> it's set once only at bootstrap
<wallyworld_> and for now is immutable
<wallyworld_> if you try to mutate it via reset or whatever, you will get an error, like for api port
<thumper> do any client side things need to know or care about this value?
<wallyworld_> not sure, but if so they get it via controller config
<thumper> no...
<thumper> a client wouldn't necessarily have access to it
<wallyworld_> all clients can get controller config
<wallyworld_> via the controller facade
<wallyworld_> we already do that in places
<wallyworld_> can't recall where off hand
<thumper> do we really want to add another api call to all call sites?
<thumper> hmm..
 * thumper thinks
<thumper> we probably don't want a client side environment variable
<wallyworld_> no, i don't think so
<thumper> if we are thinking about  enterprise charm stores
<veebers> wallyworld_: I cribbed some of that code from destroy command (i.e. func (c *destroyCommandBase) getControllerEnvironFromStore() )
<thumper> veebers: we can't rely on the store
<thumper> because most users won't have it
<wallyworld_> the deploy command does go to the store and gets the charm and then uploads it to the controller
<wallyworld_> i think
<wallyworld_> deploy and controller should use the same store
<thumper> where store here refers to the charmstore
<wallyworld_> for enterprise case where store is behind firewall, i think it's reasonable that deploy client has access to that store
<thumper> and not the local config store
<wallyworld_> yes
<thumper> too many things have the same name
<wallyworld_> state
<thumper> api
<thumper> backend
<wallyworld_> client
<thumper> :)
<wallyworld_> veebers: so we don't get store url from bootstrap config since that's only present on that one client used to bootstrap
<thumper> veebers: where is the tip of the 2.4 branch going to appear in snaps?
<wallyworld_> i think it takes about 4 hours
<wallyworld_> edge
<wallyworld_> oh wait
<thumper> wallyworld_: edge is tip of develop
<wallyworld_> 2.5 is current
<wallyworld_> misread
<veebers> thumper: working with vino to get that job deployed
 * thumper nods
<thumper> will it be in candidate?
<thumper> or beta?
 * veebers checks the job
<vino> veebers: see the canonical #juju
<veebers> thumper: just checking the config. vino: sweet, just getting an answer for Tim ^_^
<veebers> thumper: candidate
<veebers> phew that took longer than it should have ^_^
<thumper> veebers: thanks
<veebers> wallyworld_: sorry was distracted. Ok, what's the best way to proceed, i.e. I need the cs url to use in the deploy command (and others) but can't (or won't) get it from controller config
<wallyworld_> why not from controller config?
<wallyworld_> that would be the single source for that info
<veebers> wallyworld_: ah sorry, I misread bootstrapconfig. Yeah controller config is fine, in that diff I pasted I'm using a controllerConfig from the bootstrapConfig. Or is that code not doing what I think it is?
<wallyworld_> the pastebin is getting stuff from local yaml files
<wallyworld_> you need to go to the controller itself to ask for its config
<wallyworld_> there's only 1 place i think we currently do that from the cli, which is the controller config (get) command
<veebers> wallyworld_: ah is that the ClientStore? a local cache of info?
<veebers> wallyworld_: ah right, and that's a ControllerCmd
<wallyworld_> right, ClientStore is a local cache of stuff
<wallyworld_> bootstrapconfig is stored locally as yaml but it really needs to die with fire
<wallyworld_> it's there because reasons. mostly restore -b which we don't even support anymore
<wallyworld_> so we should be able to get rid of it
<veebers> wallyworld_: ack, I'll not use it then lest you come knocking at my door with matches
<veebers> wallyworld_: I'll sort out having a way to hit the controller for that info
<wallyworld_> look at the controller/config.go command
<wallyworld_> there's an api facade which provides the info
<veebers> can do
<veebers> wallyworld_: as in check out cmd/juju/controller/configcommand.go?
<wallyworld_> yeah
<veebers> sweet, did skim that before I'll dig in
<thumper> veebers, vino: how soon do you think we'll see 2.4.1 proposed snaps in candidate?
<veebers> thumper: I think vino is just about to update the job, we can fire it off anytime from then (or if it's needed *now* we can manually edit the job). Then I'm not sure how long it takes for LP to recognise the updated branch, build and publish. I can't imagine too long though
<veebers> (30 min - an hour or so?)
 * thumper nods
<thumper> ok, thanks
<veebers> thumper: vino is still getting grief from JJB, I can manually edit the job now if you want that process in motion
<veebers> thumper: fyi I updated the job and fired it off now, will have to wait for the LP parts to do its magic
<thumper> ack
<axw_> congrats on the 2.4 release! :)
<veebers> thumper: FYI snap candidate has been published, now 2.4.1+2.4-b5eced6
<wallyworld_> axw_: thanks, hopefully it will work well for folks. how's things with you?
<axw_> wallyworld_: pretty good thanks. published the first (beta) release of the APM Go agent recently. nothing very exciting otherwise :)
<veebers> hey axw o/ Congrats on the release ^_^
<axw> thanks :)
<axw> one of these days I'd like to hook it up to Juju
<veebers> wallyworld_: something more along these lines? http://paste.ubuntu.com/p/GhBPvWcFBB/
<manadart> stickupkid: Going to merge #8882 ?
<stickupkid> manadart: i was having issues with CI yesterday
<stickupkid> manadart: seems it's fixed
<manadart> stickupkid: Ja.
<thumper> manadart: https://bugs.launchpad.net/juju/+bug/1779897
 * thumper bugs and leaves
<mup> Bug #1779897: container already exists <cdo-qa> <foundations-engine> <juju:Triaged by rharding> <https://launchpad.net/bugs/1779897>
<manadart> thumper: Saw it; will look presently.
<rathore_> Hi all, one of my charms fails in the install hook and the logs show "DEBUG install FileNotFoundError: [Errno 2] No such file or directory: 'config-get'". I looked into it and config-get seems to be provided by juju to charms. What could be wrong?
<magicaltrout> been a while
<magicaltrout> for standard charms whats the sanest way to add some apt dependencies to install at build time?
<magicaltrout> layer apt or is there some stuff you can stick in layer.yaml i seem to recall
<rick_h_> magicaltrout: I just assumed folks used the apt layer
<rick_h_> magicaltrout: as it handles some cases ok for you like already installed/etc
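For reference, the apt layer rick_h_ points at is normally wired up through `layer.yaml` options rather than charm code; a sketch of what that usually looks like (the package names are placeholders, and the layer-apt README is the authoritative source for the keys):

```yaml
includes: ['layer:basic', 'layer:apt']
options:
  apt:
    packages:
      - 'libfoo-dev'        # placeholder package names
      - 'some-other-pkg'
```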
<maaudet> I created the bug report about yesterday's issues with MongoDB in an HA context: https://bugs.launchpad.net/juju/+bug/1780086
<magicaltrout> fair enough
<mup> Bug #1780086: MongoDB connection errors after juju enable-ha <aws> <enable-ha> <mongodb> <juju:New> <https://launchpad.net/bugs/1780086>
<rick_h_> ty maaudet, I'll replicate it here and check it out.
<manadart> stickupkid: Can you review https://github.com/juju/juju/pull/8891 ? It's a tiny one.
<stickupkid> manadart: sure nps
<stickupkid> manadart: done
<manadart> stickupkid: Thanks. While you are there, this one backports all the same commits with preamble to 2.4: https://github.com/juju/juju/pull/8889
<maaudet> rick_h_: Thanks for looking into it! Do you think this warning has its place, since it's basically never going to work but works through another address?
<rick_h_> maaudet: I've been pondering it. I mean, it's good to know if you're expecting it to work, but of course in this situation I know it won't. However, we've not done things like specify to Juju "your traffic should be on X" so that if it can't/won't do that it knows that's a real error
<rick_h_> maaudet: but on the other hand, warning that all's working ok isn't helpful
<rick_h_> maaudet: so I'm not sure the best way forward. I feel like what it needs is better visibility that HA is up and in a happy state and if that's true...well then ignore these. Maybe they're debug messages vs warning level.
<maaudet> rick_h_: Yeah, it should probably have something like (1/3 connections successful to the target machine) or something like that appended to the warning
<rick_h_> maaudet: yea, the logging is coming deeper in the code so it's that bubbling/aggregating that would need to be done in some fashion
<rick_h_> maaudet: it's basically the mongo driver code trying to connect, failing, and erroring down there without knowing there are other connections going
<maaudet> rick_h_: I see. That explains it.
<rick_h_> maaudet: yea, I mean you're totally right it's just more work :)
<stickupkid> manadart: LGTM - just a question about this really - https://github.com/juju/juju/pull/8889#discussion_r200153837
<stickupkid> manadart: PR review - https://github.com/juju/juju/pull/8893?
<stickupkid> manadart: this one moves the old logging, to the new container package.
<manadart> stickupkid: Looking.
<rick_h_> thanks for landing that stickupkid
<rick_h_> maaudet: so in looking we have this bug on the error notifications: https://bugs.launchpad.net/juju/+bug/1644011
<mup> Bug #1644011: juju needs improved error messaging when controller has connection issues <usability> <juju:Triaged> <https://launchpad.net/bugs/1644011>
<rick_h_> maaudet: so I noted that while we're in there fixing the error UX we should also hit up the warning on regular working since we'll have to be in the same area cleaning that up
<stickupkid> manadart: https://github.com/juju/juju/pull/8894 PR, but I'm not sure this is right - so feedback would be appreciated :D
<magicaltrout> i know marco has told me this once before
<magicaltrout> if i'm writing a charm and want to save state between runs so I can compare last run to this run
<magicaltrout> how do I store it?
<magicaltrout> config works, but i'm wanting internal storage not config.yaml hopefully
<rick_h_> magicaltrout: hmm, I'm not sure. Most folks are out with the big US holiday today
<rick_h_> magicaltrout: https://charmsreactive.readthedocs.io/en/latest/charms.reactive.helpers.html#charms.reactive.helpers.data_changed ?
<magicaltrout> yeah not that i was poking through the docs trying to remember
<magicaltrout> no problem
<magicaltrout> data changed just tells you if something changes
<magicaltrout> not what
<magicaltrout> there is some KV mechanism somewhere IIRC
<rick_h_> magicaltrout: right but you can stick the data you want to track into that can't you?
<rick_h_> magicaltrout: I mean it takes the data to track as an argument?
<rick_h_> magicaltrout: oic, you want the state itself vs arbitrary data
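The KV mechanism magicaltrout is half-remembering is `charmhelpers.core.unitdata.kv()`, a sqlite-backed store that persists between hook invocations (`data_changed` is built on top of it). Below is a self-contained plain-Python stand-in, just to illustrate the compare-last-run-to-this-run pattern; it is not the real charmhelpers API:

```python
import json
import os
import tempfile

class KV:
    """Minimal persistent key-value store (stand-in for unitdata.kv())."""
    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)

    def get(self, key, default=None):
        return self.data.get(key, default)

    def set(self, key, value):
        self.data[key] = value

    def flush(self):  # unitdata.kv() also persists explicitly via flush()
        with open(self.path, "w") as f:
            json.dump(self.data, f)

path = os.path.join(tempfile.mkdtemp(), "unit-state.json")

# First "hook run": nothing stored yet, so record what we saw.
kv = KV(path)
previous = kv.get("zk_hosts", [])   # [] on the very first run
kv.set("zk_hosts", ["10.0.0.1"])
kv.flush()

# Second "hook run" (a fresh process): compare last run to this run.
kv = KV(path)
previous = kv.get("zk_hosts")       # what we recorded last time
changed = previous != ["10.0.0.1", "10.0.0.2"]
```

The real `unitdata.kv()` has the same `get`/`set`/`flush` shape, so charm code written against this pattern translates over directly.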
<heckles1000> hello everyone, I'm experiencing some oddities ... trying to figure some things out with basic relations
<heckles1000> as you can see here https://paste.ubuntu.com/p/dRcCF9ygpz/
<magicaltrout> mostly, I'm trying to reconstruct a config file rick_h_ when a relation changes
<magicaltrout> but the config is made up from data from a bunch of relations
<heckles1000> (preface I am just testing a redis <-> test charm relation here)
<magicaltrout> so it's bringing the values from a range of relations back to a single config
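The rebuild-the-whole-config pattern magicaltrout describes can be sketched as: keep each relation's last-seen values in one mapping, then re-render the entire file whenever any of them changes, rather than patching it piecemeal. The relation names and fields below are invented for illustration:

```python
import io

# Hypothetical data, as a charm might have gathered it from its relations.
relations = {
    "zookeeper": {"host": "10.0.0.1", "port": 2181},
    "influxdb": {"host": "10.0.0.2", "port": 8086},
}

def render_config(relations):
    # Rebuild the entire file from all relation data each time, so a
    # change arriving on any one relation keeps the file consistent.
    buf = io.StringIO()
    for name, data in sorted(relations.items()):
        buf.write(f"[{name}]\n")
        for key, value in sorted(data.items()):
            buf.write(f"{key} = {value}\n")
    return buf.getvalue()

config_text = render_config(relations)
```

In a real charm each `*-relation-changed` hook would update its entry in the stored mapping and then call the single render function.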
<rick_h_> heckles1000: k, nothing looks odd there so far. What's up?
<rick_h_> magicaltrout: oh hmmm...
<magicaltrout> why did you get the short straw rick_h_ ? you secretly wish the british were still in charge? ;)
<heckles1000> https://github.com/chrisheckler/layer-redis-test/blob/master/reactive/redis_test.py#L19,L34
<heckles1000> I'm thinking I can make this handler only execute 1 time by gating with my own custom flag
<rick_h_> magicaltrout: meh, mid-week holiday :( so swapped it out for friday so I can head to the lake earlier
<heckles1000> https://github.com/chrisheckler/layer-redis-test/blob/master/reactive/redis_test.py#L20 - this is the flag
<heckles1000> even though I set it at the end of the handler, it seems the handler still executes 3 times
<rick_h_> heckles1000: right, you can use your own flags and just set it once the method runs and it'll gate future executions of it
<rick_h_> heckles1000: hmm, what's the trigger stuff about? /me looks up the trigger stuff
<heckles1000> oh, I should take that out
<heckles1000> or push my current code
<heckles1000> my b
<heckles1000> I'm trying to eliminate this handler from executing > 1 time
<heckles1000> https://paste.ubuntu.com/p/h8hhP9D2Qz/
<heckles1000> if I login to the machine and cat the file I'm writing to, you can tell the handler has run multiple times
<rick_h_> heckles1000: I'd drop some log lines in to help diagnose why it's rerunning. You should be able to add logs to check the status of the flags, etc. I see you unset it in the broken handler. Maybe that got triggered somehow?
<rick_h_> heckles1000: check out the hookenv.log and see if you can narrow it down
<heckles1000> ok
<rick_h_> heckles1000: the trigger thing I think definitely needs to go as that should be global in the file and not rerun on multiple function runs
<heckles1000> ahh, ok, I was reading the trigger docs and it shows them in the handlers
<rick_h_> yea, I do see that as well. I don't get why you'd let there be an option to create the trigger several times though.
<rick_h_> maybe it makes sure not to I guess...
<heckles1000> yeah, I'm a bit confused there, I was thinking I might use it to ensure my handler only runs once
<heckles1000> let me see if I can slim this down any
<heckles1000> rick_h_: does this look better https://github.com/chrisheckler/layer-redis-test/blob/master/reactive/redis_test.py
<rick_h_> heckles1000: yea, seems reasonable
<heckles1000> rick_h_: seems to be working as intended now, thanks for the insight
<rick_h_> heckles1000: awesome
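The gating pattern that fixed heckles1000's handler comes down to a few lines: in charms.reactive the guard is a `@when_not('some.flag')` decorator plus `set_flag()` at the end of the handler. Here is a plain-Python simulation of why that stops re-runs (no charms.reactive imports; the flag name is illustrative):

```python
# Simulates reactive's @when_not('redis.configured') + set_flag() gating.
flags = set()
runs = []

def configure_redis():
    # The reactive framework would skip this handler via
    # @when_not('redis.configured'); here we check the guard by hand.
    if "redis.configured" in flags:
        return
    runs.append("configured")       # the one-time work (write config, etc.)
    flags.add("redis.configured")   # equivalent of set_flag('redis.configured')

# The dispatcher may invoke a matching handler several times in one hook
# run; the flag makes the second and third invocations no-ops.
for _ in range(3):
    configure_redis()
```

The crucial detail from the conversation: nothing else may clear the flag (as the earlier `broken` handler did), or the gate reopens and the handler runs again.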
<thumper> morning team
<rick_h_> morning thumper
<bdx> are you guys seeing the goal state errors on units with remote relations?
<bdx> https://paste.ubuntu.com/p/Rpb2R93VgM/
<bdx> I filed this https://bugs.launchpad.net/juju/+bug/1780154
<mup> Bug #1780154: goal-state error on remote relation <juju:New> <https://launchpad.net/bugs/1780154>
<rick_h_> bdx: not sure about that. I'm not sure how much you can know about the remote end
<rick_h_> bdx: k, I'll bring it up on afternoon calls
<bdx> cool, thx
<thumper> oh...
<thumper> it certainly shouldn't error
<thumper> bdx: thanks, thrown it at wallyworld
<bdx> np, thank you
<veebers> wallyworld: It's probably bad taste to add to ModelCommandBase something like GetControllerConfig or GetCharmstoreAPIURL, right? Perhaps I should just add the helpers to the commands that need it (deploy and upgradecharm so far)
<wallyworld> veebers: still otp, one sec
#juju 2018-07-05
<magicaltrout> if nothing else kwmonroe at least Hue shows me what's missing and what's completely borked currently in the Big Data stack :P
<veebers> kelvin_: can you remember what the "extra thing" was to use docker installed from snap? i.e. so you didn't need to use sudo
<kelvin_> veebers, the thing was the snap version of docker was a little bit out of date, so the flag we need doesn't work - it hadn't been introduced yet
<veebers> kelvin_: ah, sorry this is me using it locally on my machine. I tire of having to use 'sudo', I'm sure there was something to work around that, but perhaps it required a logout or something
<kelvin_> veebers, it's not related to the user group.
<kelvin_> veebers, if u wanna use docker without sudo, then u just need to add ur user to the docker group
<veebers> kelvin_: ah I see, it's in the readme https://github.com/docker/docker-snap
<veebers> kelvin_: also, that states that docker snaps won't be updated, so you made the right choice with the install type for that job! :-)
<kelvin_> aha :->
<veebers> wallyworld: Hmm, I'm having trouble with the charm command trying to push a charm with a docker resource (I get: unsupported resource type "docker", after adding a filename to the resource, as it complains about that too, which it shouldn't)
<veebers> this is with the edge snap of charm installed
 * veebers tries building from source
<wallyworld> veebers: is that a cmd message or one from the store?
<veebers> wallyworld: good question, I assumed from the command. I do have JUJU_CHARMSTORE exported, let me check I have that setup properly
<veebers> wallyworld: that seems fine, charm whoami states I'm not logged into the staging
<veebers> ah shoot, I need to remember my username/password now ^_^
<wallyworld> lol
<veebers> ah wait, which sso does the staging use? Probably staging sso?
<veebers> hmm, charm login doesn't work anyway
<veebers> wallyworld: yeah, if I set JUJU_CHARMSTORE to the staging env, I can't charm login. I'll fire off an email
<wallyworld> veebers: ah bollocks ok
<thumper> babbageclunk: just a quick check, but this lease refactoring keeps both bits working right?
<babbageclunk> thumper: both bits meaning singular and leadership?
<babbageclunk> yes, I think so - tests all pass.
<thumper> both bits meaning mongo and raft
<babbageclunk> raft isn't there yet
<babbageclunk> I mean, it's there but doesn't know about leases
<thumper> ok, so this is just paving the way?
<babbageclunk> yup
<babbageclunk> But the idea is that they'll both be working, and we can select between them at bootstrap time.
<thumper> um...
<thumper> at controller config time?
<thumper> don't we want to be able to switch in a running system?
<babbageclunk> Well, maybe, if we're alright with all of the leadership changing at that point.
<thumper> I think that'd be fine.
<thumper> I've been thinking a little
<babbageclunk> Then yeah, that should be fine
<thumper> it would be nice if an operator could say "make that unit the leader"
<thumper> just something to think about
<thumper> consider our rolling upgrades
<thumper> upgrade a single non leader unit
<babbageclunk> Yeah, you've mentioned something like that before.
<thumper> then make it the leader
<thumper> then upgrade the others
<thumper> something like that...
<thumper> I imaging that there will be some weird edge cases...
<babbageclunk> Yeah, definitely - what if that machine dies first.
<babbageclunk> But worth a try. I'm keen to get the existing stuff working first though.
<thumper> definitely
<kelvin_> wallyworld, can i have a few minutes to discuss charms for gpu testing?
<thumper> babbageclunk: fyi tests failed, didn't check what
<babbageclunk> thumper: ooh, thanks
<veebers> babbageclunk, thumper: is there a way in a JujuConnSuite (or so) to set a controller config? If so I can alter these deploy tests just a little to cover using the charmstore url from controller config across the board
<babbageclunk> veebers: there is, I'm just trying to remember how you do it.
<babbageclunk> veebers: Try setting s.ControllerConfigAttrs in the SetupTest before you call s.JujuConnSuite.SetupTest
<veebers> babbageclunk: awesome, cheers! I'll give that a bang
<babbageclunk> veebers: you can see an example in apiserver/admin_test.go
<veebers> babbageclunk: is s.Session.DB setup by the ConnSuite?
<babbageclunk> uhhh
<veebers> I ask because I need to get the charmstore url created by new server as a controller config, if I need to do that before conn.SetupTest then I have an issue ^_^
<veebers> I think it does
<babbageclunk> oh, yeah, it's setup in MgoSuite.SetUpTest, which gets called from JujuConnSuite.SetUpTest.
<babbageclunk> Sounds like you need a closed time-like loop.
<babbageclunk> So what generates the charmstore url?
<babbageclunk> veebers: ^
<veebers> babbageclunk: charmstore.NewServer(db, nil, "", params, charmstore.V5) creates a handler which is then passed into httptest.NewServer(handler); the result of that has the URL. (note: db in NewServer is from s.Session.DB("juju-testing"))
<babbageclunk> hmm
<veebers> babbageclunk: hah is MgoSuite.SetUpTest idempotent? ^_^
<veebers> no, it's not
<babbageclunk> worth a try though
<veebers> babbageclunk: hmm, actually being able to do that might not be as useful as I originally thought, so not a biggie that I can't. Thanks for sorting me out on that though
<babbageclunk> You could extend the ControllerConfigAttrs handling so that if there's a callback set it calls it to get the config attrs
<babbageclunk> And by default the callback just returns s.ControllerConfigAttrs
<babbageclunk> That would give you a chance to get the url
<babbageclunk> It's awful but the JujuConnSuite setup is already terrible
<veebers> ^_^
<veebers> babbageclunk: quick query, I'm doing this, is there a better way? https://pastebin.canonical.com/p/WVgsk2pKbf/
<veebers> as the patched function now takes the url to use, but it needs to be that one specifically
<babbageclunk> looking...
<babbageclunk> oh, I see - yeah, I think that's ok. Not sure how else you could do it, without knowing the rest of the code.
<veebers> babbageclunk: ack, cheers. wasn't sure if there was a better way than taking a ref and using that
<babbageclunk> Well, depending on what else it does if you could bypass calling the patched out function that would probably be simpler.
<babbageclunk> but I think that's fine.
<veebers> The patched function pretty much does what was returned (at this point at least), but it felt like it was patching too much; would be nice to not even need the patch
<babbageclunk> yeah, if you can avoid it better not to, but sometimes that's really difficult.
<babbageclunk> ugh, just found a test that passes if it's run with the others in its suite but fails when run by itself (or in a ci build, apparently)
<veebers> ugh :-\ that's going to be fun to nail down
<babbageclunk> well, easier than the other way around
<veebers> true, half-full then :-)
<veebers> wallyworld: FYI https://github.com/juju/juju/pull/8896
<veebers> ah damn, the formatting for the PR comment is borked, I misunderstood what it would look like from the vim buffer 'hub pull-request' brought up
 * veebers will fix that after dinner
<vino> wallyworld: do u have a min to discuss the version increment?
<wallyworld> sure
<vino> HO
<vino> wallyworld: sorry i was a bit away during ur discussion.
<wallyworld> no problem, i'll jump back in
<vino> :p
<vino> k thank u
<stickupkid> I'm guessing that file shouldn't be there - https://github.com/juju/juju/blob/develop/apiserver/dependencies.tsv
<stickupkid> PR of removal https://github.com/juju/juju/pull/8898
<JaniferHe> help
<JaniferHe> juju status
<rathore_> Anyone faced any issues with bionic host and bionic lxc containers?
<rathore_> My bionic containers are started and have IPs (saw them on the host) but Juju still thinks they don't have an IP
<hml> rick_h_: so it turns out there is a MinRootDiskSizeGiB() used by vsphere and gce, but not the others - 8 GiB.
<rick_h_> hml: ah gotcha that makes sense
<rathore_> rick_h_: Would you have any idea? juju starts bionic lxc containers and waits indefinitely
<rick_h_> rathore_: no, there were some issues addressed in 2.4.0 with pulling in maas resolve info and such but nothing I can think of about not getting an IP?
<rathore_> rick_h_: sudo lxc list on host shows me that IPs are there. I have tried 2.4 and 2.4(beta)
<rick_h_> rathore_: is there anything in the debug-log about the machines not coming up? anything more in juju status --format=yaml around the machines status?
<rathore_> I am just deploying again and would paste as soon as I see any logs
<rathore_> rick_h_: https://paste.ubuntu.com/p/HdDw4HDWCR/ is from yaml output
<rathore_> rick_h_: Ok, so the containers are still downloading the tools from controller
<rathore_> somehow the networking in container is messed up
<rathore_> rick_h_: The file downloads are fine on host but toooo slow in container
<rathore_> rick_h_: It seems to be lxdbr0 issue, it is not connected to any network
<rathore_> https://paste.ubuntu.com/p/6YqtgNBpxZ/
<cory_fu> rick_h_: I've got some updates for conjure-up and cloud integrator charms for the Juju Show, FYI
<rick_h_> cory_fu: woot woot
<stokachu> rick_h_, is that discourse board going to be officially staying? i want to direct all non bugs from github conjure-up to that forum
<rick_h_> stokachu: yes, we're just ramping it up
<stokachu> rick_h_, is it ok if i start pointing people there?
<rick_h_> stokachu: by all means
<stokachu> thanks
<rick_h_> stokachu: I'm going to bring it up on the show and we're slowly starting to port docs/notes/etc and will kill the lists in a few
<stokachu> rick_h_, ack, sounds good
<cory_fu> rick_h_: Oh, and if there's time for it, also a change to the interface for layer options.
<rick_h_> cory_fu: k, sounds good
<rick_h_> cory_fu: did you see there's a charms and charming in the new discourse as well?
<rick_h_> cory_fu: it'd be great to kick some discussions/etc that way as we flesh it out
<rick_h_> bdx: have a test bundle I can demo today?
<cory_fu> rick_h_: Yep, I was working on a write up for the layer options bit on there already
<rick_h_> cory_fu: <3
<rick_h_> cory_fu: kwmonroe bdx and anyone else that wants in, 10min to Juju Show
<rick_h_> https://hangouts.google.com/hangouts/_/kstii25wdnd5jorqvle3grpblae for joining the conversation and ...
<rick_h_> https://www.youtube.com/watch?v=R0R5DC7_Dio for watching me make a fool of myself live on the interwebs :)
<cory_fu> https://tutorials.ubuntu.com/tutorial/tutorial-charm-development-part1#0
<kwmonroe> https://discourse.jujucharms.com/
<cory_fu> I didn't realize you could bootstrap an older controller.  That will be very useful
<cory_fu> https://docs.conjure-up.io/devel/en/conjurefile
<cory_fu> https://jujucharms.com/u/containers/aws-integrator/
<cory_fu> https://jujucharms.com/u/containers/gcp-integrator/
<cory_fu> https://jujucharms.com/u/containers/openstack-integrator/
<cory_fu> Full changelog for conjure-up: https://github.com/conjure-up/conjure-up/blob/master/CHANGELOG.md
<rick_h_> cory_fu: kwmonroe feel free to update https://discourse.jujucharms.com/t/juju-show-37-2-4-0-lxd-show-and-tell-and-more/60 if you have additional links/notes to drop in there
<rick_h_> ty for the idea kwmonroe
<thumper> morning team
<dparrish> g'morning maestro!  ;-)
<veebers> Morning all o/
<magicaltrout> hacked together an oozie charm kwmonroe as i don't see one in the store anywhere thats current
<magicaltrout> i'll get someone to clean it up and send it upstream if you want to add it to the bigtop charms
<kwmonroe> +100 magicaltrout!  i saw the open jira about puppet and oozie client/server.  did you sort that out?
<magicaltrout> well
<magicaltrout> it's a bodge
<magicaltrout> but it works
<kwmonroe> i'll google bodge later
<magicaltrout> run puppet -> fix the package -> rerun puppet to finish the install
<kwmonroe> ah, so normal puppet then ;)
<magicaltrout> pretty much
<maaudet> What is supposed to show under the "LABELS" column when running juju metrics ?
<rick_h_> maaudet: it's optional for charms to supply a label to the metric in order to tell things apart in the data coming out
<maaudet> rick_h_: I see. If I'd like to add a label to my metrics on my charms, where would I put it? Is it simply a key "label" to add in my metrics.yaml objects?
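For anyone reading along: as far as I know the label is not a key in metrics.yaml itself — metrics.yaml only declares the metric, and for charm-collected metrics the labels are attached when the collect-metrics hook records a value. The metric and label names below are illustrative, not from the log:

```yaml
# metrics.yaml -- declares the metric; no label field here (to my knowledge)
metrics:
  connection-count:          # illustrative metric name
    type: gauge
    description: Number of active client connections
```

The hook would then record a labelled value with something like `add-metric --labels endpoint=public connection-count=42`; the `--labels` option is my recollection of the hook tool, so it is worth confirming against `add-metric --help` on the target Juju version.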
<rathore_> rick_h_: Nice show today. For life of me I cannot get juju to work with bionic. The lxd containers have some issue with networking. The packet speeds are in bytes/sec. The configuration works well with xenial. https://paste.ubuntu.com/p/6YqtgNBpxZ/
<veebers> wallyworld: I got access to the staging store, I needed to log in via the web ui, I needed to build charm from source and I also needed to RTFM for the image attach :-)
<wallyworld> veebers: great, so unblocked.....
<veebers> wallyworld: aye, onwards and upwards
<wallyworld> babbageclunk: i need a more experienced person to review a change with an upgrade step https://github.com/juju/juju/pull/8900
<wallyworld> no rush
<veebers> wallyworld: just sorting out the test failures on my WIP PR, I think it'll need some more. If you get that chance could you review and assess?(https://github.com/juju/juju/pull/8896)
<wallyworld> sure
<wallyworld> veebers: quick initial comment - the controller config has the default value as "". It should probably be csclient.ServerURL. That avoids checks for != "" in places and also makes it explicit when printing controller config what the url used will be
<veebers> wallyworld: ah, good point. I thought leaving it to the construction of the client and passing "" would default fine, but what you're suggesting is the same result && more explicit/obvious
<wallyworld> veebers: there's a place in the PR outside of the actual client where we check for != ""
<wallyworld> this would avoid that as well as being explicit
<veebers> ack
<wallyworld> veebers: left some initial comments
<veebers> wallyworld: ack.  Oh, you don't think conConfig just rolls off the tongue? Like Con-Air? :-)
<wallyworld> vino: i left some comments - ping me if it's unclear
<vino> sure wallyworld
#juju 2018-07-06
<wallyworld> babbageclunk: a couple of thoughts, see what you think?
<babbageclunk> wallyworld: ok, will take a look
<babbageclunk> wallyworld: I agree with both of those, thanks!
<wallyworld> babbageclunk: so to check, the only place you will change to plug in the raft backend is (st *State) getLeaseClient()
<wallyworld> getLeaseStore() even
<babbageclunk> wallyworld: no, it'll need to be higher up - not a method on state at all
<babbageclunk> wallyworld: because the new manager won't be running as a state worker.
<babbageclunk> It'll be a method on the facade context.
<wallyworld> righto
<babbageclunk> Oh, hang on - that's how an API facade will get the checker or claimer it needs.
<babbageclunk> The store will be passed into a manager worker running in the machine agent manifolds, constructed from the FSM and the hub.
<wallyworld> makes sense
<babbageclunk> cool
<wallyworld> let me know when the pr is not WIP
<babbageclunk> wallyworld: ok, just doing the last set of tests.
<veebers> wallyworld: I've pushed up PR fixes and additions. There is a bit of duplication there (for the config retrieval); where would be the best place to put a common function or 2 so there is only 1 impl (plus will be easier to test that one)
<wallyworld> veebers: just saw the above, was reviewing the commit. it's not too bad have a few lines of code duplicated. can you make sure you fully test against the staging controller, deploying and upgrading charms etc, before landing
<veebers> wallyworld: can do
<vino> wallyworld: could u plz look at PR. I have modified the mock to export state obj.
<wallyworld> ok
<vino> wallyworld: i am adding back the bundles.go file in facades/client/client
<wallyworld> vino: i left a couple of comment s- it's coming along nicely
<wallyworld> a few things to look at. i'll be back in 15 minutes
<vino> sure.
<vino> wallyworld: i have added back the files bundles.go and bundles_test.go and fixed the issues caused by my changes in NewFacade.
<wallyworld> ok
<vino> ... adding validation for the unit test and export method. yes, it had issues because nothing is filled in.
<vino> i thought validation is enough.
<vino> do u want to mock serialize methods as well ?
<wallyworld> vino: not mock serialise methods, just return a model.Description that is then serialised as normal and checked
<wallyworld> description.Model i mean
<vino> ok.
<wallyworld> the mock state would just return description.NewModel()
<wallyworld> with args having an app or unit or something
<wallyworld> just something to result in a really simple result to check
<vino> i am adding a simple app. and get that validated.
<babbageclunk> wallyworld: ok, that PR isn't WIP anymore. I'm doing a smoke test then I'll review your PR
<wallyworld> yay, ty
<wallyworld> babbageclunk: i have to head to hospital in 5, will quickly review, then back in an hour
<babbageclunk> ok - no rush
<wallyworld> babbageclunk: i think it looks ok, lgtm. we should run some manual tests as well with charms etc
<wallyworld> bbiab
<babbageclunk> ok
<veebers> babbageclunk: why can I not do something like: "s.ControllerConfig[controller.CharmStoreURL] = s.srv.URL" at the end of "func (s *charmStoreSuite) SetUpTest(c *gc.C)". It's not an error, but that value isn't sticking
<vino> wallyworld: pushed a commit. can u please verify...
<veebers> ah, probably because I'm just changing the copy in the test suite, whereas the one in the actual code will be a version based on what was passed into the bootstrap
<veebers> thumper: you have a moment?
<veebers> or babbageclunk ?
<veebers> and finally wallyworld ? ^_^ http://paste.ubuntu.com/p/x8G9BN5qcz/ The problem being that the conn suite does the bootstrap, and setups up the DB that's needed to start the charmstore and http server, but we need that url *before* the bootstrap so we can set the charmstore controller config :-\
<veebers> so my fix here was to shoehorn in a patch
<wallyworld> veebers: jujuconnsuite has controller config attrs
<wallyworld> you can put any initial values to use with bootstrap in there in setup
<veebers> wallyworld: ack, you need to set them before calling SetUpTest
<wallyworld> setupsuite
<wallyworld> or, do that and then call jujuconnsuite setup test
<veebers> wallyworld: But you need to call SetUpTest to prepare the DB that's used to spin up the charm store, which is where we get the url that we need to set
<wallyworld> ie set the controller cfg, and then call jujuconnsuite setuptest
<wallyworld> veebers: so SetupTest calls setupConn() which bootstraps
<wallyworld> so before calling SetUpTest(), set the controller cfg
<veebers> wallyworld: but I don't know what the value is until the charmstore is spun up, and I can't spin up the charmstore until after setuptest
<wallyworld> what does the charmstore depend on in setup test?
<veebers> wallyworld: the db
<wallyworld> but not the same one as juju uses
<babbageclunk> veebers: I think you could change JujuConnSuite to allow you to set a callback that it would call to get attrs for controller config.
<wallyworld> charmstore and juju controller state are different dbs
<wallyworld> or should be different dbs
<wallyworld> they are not the same db in the real world
<babbageclunk> And then abuse the fact that the session would be set up by the time the callback runs
<veebers> wallyworld: unless I'm reading this wrong: it does: "db := s.Session.DB("juju-testing")", then ".. =  charmstore.NewServer(db, nil, "", params, charmstore.V5)"
<wallyworld> so that's unfortunate
<wallyworld> whoever wrote the test is abusing things
<veebers> hah ok, so it can be improved.
<wallyworld> but it can still be made to work
<veebers> I think we can all agree though that my attempted fix is pretty shoddy and can be done better
<wallyworld> set up mongo ahead of time
<wallyworld> i'll have a quick look at the code, going from memory here
<wallyworld> the fix is "ok" but we don't want to introduce new Patch functions if we can help it
<veebers> wallyworld: ack thanks. I gotta EOD shortly, visiting family but I'll check back in later on
<wallyworld> no rush, we can land monday
<veebers> agreed, it would be nice to just set the controller config and have it all shake out like in real life
<wallyworld> kelvin_: reviewed. there's an issue with test setup to get the charms. see my comments and can chat if needed
<wallyworld> looking good though
<kelvin_> wallyworld, thanks, looking it now.
<wallyworld> veebers: quick thought, in charmstoreSuite we do JujuConnSuite.SetUpTest(), then s.srv = httptest.NewServer(handler), then charmstore client set up
<wallyworld> we can move s.srv = httptest.NewServer(handler) to the top of the set up if we get a different mongo session
<wallyworld> no strict need to use the same one as for juju
<wallyworld> the sessions will use the same mongod which is running, but will be constructed individually
<wallyworld> using the s.Session for both was just a shortcut
<wallyworld> worth digging a bit to see how it pans out
<wallyworld> kelvin_: the charm testing comment make sense? at a high level, it's a matter of correctly setting up the (global) testcharms.Repo to point to the correct path ("quantal" or "kubernetes")
<kelvin_> wallyworld, yes, i m working on it right now. thx
<wallyworld> great, ty
<veebers> wallyworld: ack, will do
<wallyworld> veebers: ty. i haven't dug too deeply but it should be doable once we see how the s.Session is constructed
<wallyworld> just make one to use for the store storage itself, remembering to add to cleanup
<manadart> Anyone able to do a quick back-port review? https://github.com/juju/juju/pull/8901
<rathore_> all: anyone tried juju 2.4 with bionic?
<rathore_> Trying to get bionic lxd containers to work with it
<rathore_> fails to do proper bridge setup
<manadart> rathore_: What is the particular issue you are seeing?
<rathore_> manadart: The container networking seems to be the issue on bionic. Everything works well on series xenial, but on bionic the lxd containers are not able to complete the initial bootstrap
<rathore_> https://paste.ubuntu.com/p/6YqtgNBpxZ/
<rathore_> manadart: I am trying to install openstack using https://github.com/openstack-charmers/openstack-bundles/blob/master/development/openstack-lxd-bionic-queens/bundle.yaml
<manadart> rathore_: On what provider are you attempting to deploy the bundle?
<rathore_> Maas
<rathore_> manadart: maas
<manadart> rathore_: I'll try the same bundle shortly. In the meantime, if you can locate any specific logged errors, that would be good info for a Launchpad bug.
<rathore_> manadart: I am trying to get a simplest bundle to reproduce it.
<naturalblue> hi everybody
<naturalblue> I am having an issue with juju and maas. juju reload-spaces doesn't reflect the new space changes. it still shows an old space that is no longer there. also juju show-controller shows an old model that is not there and i don't seem to be able to remove it
<rathore_> manadart: Simplest bundle to reproduce: https://paste.ubuntu.com/p/4T75kzCt3B/
<rathore_> manadart: paste has more information on the issue
<rathore_> it seems lxdbr0 is not using the provider network
<rathore_> manadart: it also looks like the containers have got an IP address from the provider network (br-bond0) but are connected to lxdbr0
<manadart> rathore_: This *could* be an issue with cloud-init networking and netplan (Bionic) vs e/n/i (Xenial). I will investigate.
<rathore_> manadart: I have a feeling it could be mtu 9000. Trying with mtu 1500 now
<manadart> rathore_: Oooh, I have definitely seen that one.
<rathore_> manadart: That's a relief. In that case it should work with mtu 1500, i will confirm in a few minutes
<rathore_> manadart: Yes confirmed it is mtu 9000 issue. It doesn't work with bionic.
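The usual workaround described for the jumbo-frame case is to pin the container NIC back to a 1500-byte MTU in the profile LXD uses. The device and parent names below are assumptions for this setup, not something confirmed in the log:

```yaml
# Hypothetical fragment for `lxc profile edit default`: force MTU 1500 on
# the container NIC even when the host bridge runs at MTU 9000.
devices:
  eth0:
    name: eth0
    type: nic
    nictype: bridged
    parent: br-bond0   # assumed host bridge, per the pastes above
    mtu: "1500"
```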
<rathore_> all: any ideas to kill containers that are stuck in error  state
<rathore_> 0/lxd/5   error                   pending              bionic           Creating container
<rathore_> all: What is the way to kick a container stuck in error state while being created
<rathore_> rick_h_: Any suggestions ?
<rathore_> Getting quite a lot of "juju.provisioner provisioner_task.go:1115 failed to start machine" in 2.4 bionic
<kwmonroe> rathore_: try "juju status --format=yaml <machine_number>".  sometimes the yaml output will yield more info about provisioning issues.
<kwmonroe> rathore_: there's also "juju retry-provisioning <machine_number>" if you want juju to retry a failed machine.
<rathore_> kwmonroe: It doesn't allow retry for containers
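One hedged way out when retry-provisioning refuses containers is to remove the failed container by force and let Juju re-create it. The machine number matches the example error above, but the application name is illustrative; double-check both against your own status output:

```shell
# Drop the stuck container; --force skips the normal cleanup steps that a
# machine stuck in error state can never complete.
juju remove-machine 0/lxd/5 --force

# Then place the affected unit on a fresh container on the same host:
juju add-unit mysql --to lxd:0
```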
#juju 2018-07-08
<veebers> Morning
<thumper> morning
<babbageclunk> morning
<thumper> babbageclunk: how goes leases?
<babbageclunk> thumper: I landed that change on Friday - should I work on some bugs now or keep going with the FSM?
<thumper> there are a number of bugs around goal state which would be good to get fixed this week
<thumper> personally I think taking a break and fixing some 2.4.1 bugs would be good
<babbageclunk> ok
<babbageclunk> yeah, I was thinking the same thing
<babbageclunk> cool cool
#juju 2020-06-29
<manadart_> stickupkid: https://github.com/CanonicalLtd/juju-qa-jenkins/pull/474
<ignaziocassano1> Hello, can anyone suggest a bundle for ceph nautilus with ceph-mon, ceph-osd, radosgw and a ceph dashboard?
<flxfoo> hi all
<flxfoo> I was trying to add a second interface to an instance
<flxfoo> then came accross this
<flxfoo>  https://aws.amazon.com/premiumsupport/knowledge-center/ec2-ubuntu-secondary-network-interface/
<flxfoo> so how do you guys deal with that with juju?
<manuvakery> hi guys
<manuvakery> ERROR cannot connect to API: no API addresses , am getting this error for almost all commands
<manuvakery> any idea
<petevg> manuvakery: I'd checkup on your controller machine to make sure that it is still up and running and properly connected to the network. What cloud are you running on? Are you seeing these errors out of the blue, or did they happen after a change or other event?
<manuvakery> petevg:  am trying to deploy openstack  https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/ussuri/install-juju.html , am getting  error after bootstrapping the controller
<manuvakery> https://www.irccloud.com/pastebin/jkZ1z0GN/
<manuvakery> the bootstrap command gets stuck at "Running machine configuration script..."
<petevg> manuvakery: are you in a limited egress environment? The controller might be having trouble fetching packages or other things that it needs.
<petevg> I'd check the logs from the controller machine via the OpenStack tooling.
<petevg> I'd also check your list of instances to see how big of an instance OpenStack allocated for the cluster. Make sure that you have enough disk, ram and such.
#juju 2020-06-30
<timClicks[m]> We have an inconsistency wrt how we describe LXD
<timClicks[m]> Within the CLI, it's the "localhost" cloud
<timClicks[m]> But that term isn't used in the public docs
<jam> wallyworld, kelvinliu Is there an easy way for a Unit agent to determine the names of the K8s pods? (or for one unit to find the names of all pods)
<jam> without hitting the K8s API?
<kelvinliu> u mean to find that unit's workload pod?
<jam> kelvinliu, right. AIUI, there are config options where you need to configure Mongo with the pod DNS names for its replica set
<jam> I'm mostly acting as a proxy for the question from MarkMaglana
<kelvinliu> i assume it's a statefulset
<kelvinliu> and the pod's names are fixed
<jam> kelvinliu, for Mongo I'm sure it would be  (if you need to configure the name of pods before the pods exist, they have to be fixed)
<jam> kelvinliu, I don't know the exact use case MarkMaglana is requesting. Just that he wanted to know the names of the pods, and I'd like to have general answers for them
<kelvinliu> we do have the pod name in `cloudcontainers` for statefulset
<kelvinliu> im thinking he can get it
<kelvinliu> how the uniter can get it.
<jam> kelvinliu, that is in the pod spec or ?
<kelvinliu> nah
<kelvinliu> we get the pod name after the pod has been created
<kelvinliu> so we update it into that doc once k8s tells us the pod name
<kelvinliu> we can easily get the pod name by k8s API because we have proper labels and annotations to look for
<jam> kelvinliu, sure, but we don't currently expose that to the charm, do we?
<kelvinliu> we have k8s python client built into our operator image already
<kelvinliu> so charm can have a in-cluster k8s client to get access of it
<jam> kelvinliu, ok. But that isn't talking to Juju or coordinating with Juju data. That said, it is still stuff that we can point charmers at.
<kelvinliu> https://github.com/kubernetes-client/python
<kelvinliu> I don't think currently charm can get the pod name from juju for now
<kelvinliu> but if it's the action scripts want to know the pod name then it's easy
<kelvinliu> because k8s expose the pod name in ENV inside the pod itself
<kelvinliu> juju run --unit mariadb-k8s/0  'env | grep mariadb-k8s-0'
<kelvinliu> HOSTNAME=mariadb-k8s-0
<jam> kelvinliu, so I'm pretty sure this is more about during eg 'config-changed' than during an action.
<jam> And wanting to know all pod names
<kelvinliu> probably we have to let the leader filter the pod list using application name labels
<kelvinliu> wait u mean he want this pod name list to generate the replica string?
<kelvinliu> then the charm should be operator charm
<kelvinliu> for an operator charm to provision a new instance of mongo, it might be reasonable to do it via an action?
<kelvinliu> im not quite sure what's the current status of operator charm feature yet. might be good to discuss on mattermost further
<jam> kelvinliu, so as mentioned I don't know the exact use case. If it was something like Mongo, I don't think it quite fits the idea of an operator charm, other than you can solve anything in some way eventually :)
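Since StatefulSet pod names are ordinal and deterministic, a charm that knows its application name and scale can compute every peer's pod name (and a replica-set host string) without touching the K8s API at all. A sketch — the headless-service naming is an assumption about the deployment, not something Juju guarantees:

```python
# Sketch: derive StatefulSet pod names and DNS hosts without the K8s API.
# Assumes ordinal pod naming ("<app>-<n>") and a headless service named
# after the application -- both common conventions, neither guaranteed.

def statefulset_pod_names(app, replicas):
    """Pod names a StatefulSet of the given size creates."""
    return ["%s-%d" % (app, i) for i in range(replicas)]


def replset_hosts(app, replicas, namespace, port=27017):
    """Comma-separated host list, e.g. for a Mongo replica-set config."""
    return ",".join(
        "%s.%s.%s.svc.cluster.local:%d" % (pod, app, namespace, port)
        for pod in statefulset_pod_names(app, replicas)
    )


print(statefulset_pod_names("mariadb-k8s", 3))
print(replset_hosts("mongodb", 2, "my-model"))
```

For the "all pods, from the leader" case, filtering the live pod list by application labels still needs the in-cluster client kelvinliu mentions; the sketch above only covers the fixed-name StatefulSet case.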
<kelvinliu> ok
<stickupkid> achilleasa, approved the integration test changes
<achilleasa> stickupkid: cool. thanks! no issues with the changes in the arg parser then?
<stickupkid> achilleasa, nope, I tested the output of all flags
<achilleasa> manadart_: what do you think about moving ProviderInterfaceInfo into core/network? this (https://github.com/juju/juju/blob/develop/environs/networking.go#L98) is the only thing in the Networking interface that forces you to import juju/network
<manadart_> achilleasa: It's already duplicated. See core/network/nic.go
<manadart_> achilleasa: So by all means, get rid of the juju/network one.
<achilleasa> ah nice! will do
<manadart_> achilleasa: And we should be using HardwareAddress over MACAddress, because Infiniband devices do not have a MAC.
<achilleasa> manadart_: ok, I will try to rename it... hopefully it won't break too many tests :D
<manadart_> achilleasa: NBD if it's a hassle.
<manadart_> achilleasa: Can you look at this one? https://github.com/juju/juju/pull/11778
<achilleasa> looking
<achilleasa> manadart_: running QA now; in the meantime can you look at https://github.com/juju/juju/pull/11779?
<manadart_> achilleasa: Yep.
<achilleasa> manadart_: seems to fail during the merge: https://pastebin.canonical.com/p/Cr25N4KymT/
<manadart_> achilleasa: Did you change the MAC address? Let me try it again...
<achilleasa> yes, I changed name, mac (the :fe:fe at the end) and providerid
<manadart_> achilleasa: I think omitting the _id has affected this. I ran it again with success. https://pastebin.canonical.com/p/g9sq3qFxf7/
<achilleasa> manadart_: ah crap... forgot that IDs are generated by us... sorry
<achilleasa> approved
<yan0s> Is it possible to add a new LXC container with an interface on a space that its host didn't originally have?
<yan0s> but the host now has a manually provided interface on the subnet of this space
<yan0s> I want to add the LXC on a new vlan interface without having to redeploy the host
<pmatulis> can i configure a lxd profile on a per-machine basis?
<pmatulis> hml, ?
<hml> pmatulis:  with juju?  or in general?
<hml> pmatulis:  you can add a profile with more info to an existing machine, the machine may get rebooted.
<hml> pmatulis: not sure if juju would pick that up or not
<pmatulis> hml, ideally with juju
<hml> pmatulis:  juju lxd profiles are via the charm and per application.
<hml> pmatulis:  you can add what you want to the default profile and juju uses that for all machines
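Worth noting the charm-side mechanism hml mentions: since Juju 2.5 a charm can ship an lxd-profile.yaml at its root, which Juju applies to the LXD machines that application lands on. A hypothetical fragment (the keys shown are ones LXD accepts; whether Juju permits all of them should be checked against the docs):

```yaml
# Hypothetical lxd-profile.yaml in a charm's root directory.
config:
  security.nesting: "true"
  linux.kernel_modules: ip_tables,openvswitch
devices:
  kvm:
    path: /dev/kvm
    type: unix-char
```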
<pmatulis> right ok
<pmatulis> so i will have to do it manually then
<pmatulis> that too bad. maybe i will open a wishlist bug
#juju 2020-07-01
<timClicks> new introduction to the "localhost" cloud provided by LXD https://juju.is/docs/lxd-cloud
<stickupkid> manadart_, it looked better in the error message https://github.com/juju/charm/pull/310#discussion_r448183798
<stickupkid> manadart_, the problem with juju error messages is they're crap
<manadart_> stickupkid: OK.
<stickupkid> manadart_, we really should slap localisation on them, and then you can have better language support
<stickupkid> manadart_, have a localisation of "dev" and it just prints the raw error message out, everything else is much nicer
<achilleasa> manadart_: reminder for taking a look at 11781 ;-)
<manadart_> achilleasa: Literally just typing that I've been looking at it and am doing QA.
<achilleasa> manadart_: haha ok. thanks!
<stickupkid> anybody know why `juju model-config` isn't isomorphic?
<Chipaca> stickupkid: isomorphic to what?
<stickupkid> well juju model-config --format=yaml outputs a different type to what it consumes if you pass in a yaml file
<stickupkid> so the type isn't isomorphic, which is really weird
<Chipaca> i don't have anything constructive to add, i was just puzzled by that use of 'isomorphic' :)
<Chipaca> (i'm still very much a juju n00b)
<yan0s> I'm going to ask again, because yesterday I got no answer
<yan0s> Is it possible to add a new LXC container with an interface on a space that its host didn't originally have?
<yan0s> I can add the vlan/IP on the host manually
<yan0s> and then try to launch an lxc on this host having the new space
<yan0s> but then I get  "matching subnets to zones: space "new-space-name" not found"
<Chipaca>  /j #lxd
<Chipaca> er, oops
<Chipaca> yan0s: maybe ask in #lxd?
<Chipaca> (i joined first to double-check that was where it was at)
<yan0s> This is a juju question
<Chipaca> ok, i'll shut up then
<yan0s> I need the container to know about the space so I can use it as a binding in juju
<yan0s> thanks for the suggestion though
<achilleasa> manadart_: AFAICT, juju loses track of the parent nic for container link-layer devices. However, while implementing Networking it seems like we can pull this info (via another API call) for lxd. Is that something we want?
<manadart_> achilleasa: Probably, but as per https://github.com/juju/juju/blob/ad9586173d624d35d32bf3fc372297cff7bbae11/apiserver/facades/agent/provisioner/provisioner.go#L1069 we want to change how it is put together - create the container, then get the info for filling in.
<manadart_> achilleasa: This would be great though, because then the provisioner would not need the networking common API and we could nuke it and move SetObserved... to the machiner.
<stickupkid> anybody know how to set resource-tags via model-config?
<stickupkid> ah ignore me
<stickupkid> nice, resource-tags also break the yaml output like cloudinit-userdata
<achilleasa> manadart_: does this look legit? https://pastebin.canonical.com/p/rvZFKRrXBv/
<achilleasa> manadart_: ah wait; got the wrong network id; let me do another run
<achilleasa> manadart_: updated version: https://pastebin.canonical.com/p/tQcS84Sfry/
<manadart_> achilleasa: https://github.com/juju/juju/pull/11785
<stickupkid> manadart_, https://github.com/juju/juju/pull/11784
<stickupkid> CR for this, if possible
<stickupkid> as you know a bit about it
<manadart_> stickupkid: Yep.
<achilleasa> manadart_: looking
<manadart_> stickupkid: Requested changes. I have to run; we can talk about it tomorrow if you don't agree.
<stickupkid> pretty sure I do
<stickupkid> :)
<hml> stickupkid:  achilleasa  quick pr review?  https://github.com/juju/juju/pull/11787
<hml> petevg: ^^
<stickupkid> manadart_, responded
<achilleasa> manadart_: looks good; need to tweak a broken test. Can you take a look at https://github.com/juju/juju/pull/11788 tomorrow?
<stickupkid> CR anybody https://github.com/juju/juju/pull/11786
<timClicks> Have created a (first draft of) a page for adding a Pull Request workflow back into updating our docs https://discourse.juju.is/t/updating-documentation/3300
#juju 2020-07-02
<thumper> https://github.com/juju/juju/pull/11791 for someone
<thumper> wallyworld, tlm, kelvinliu, hpidcock ^^^
<hpidcock> looking
<tlm> cheers
<hpidcock> thumper: lgtm
<thumper> sweet, ta
<thumper> tlm: did the k8s upgrade branch land?
<thumper> wallyworld: how's the centos 8 one going?
<tlm> sorry  thumper just saw this. Waiting on a clean test run now
<wallyworld> thumper: here's the simple jenkins PR https://github.com/CanonicalLtd/juju-qa-jenkins/pull/476
<thumper> wallyworld: done
<wallyworld> tlm: all good with the model operator PR?
<manadart_> achilleasa: Can you look at https://github.com/juju/juju/pull/11794 ?
<achilleasa> manadart_: sure
<gokhani> hi ubuntu folks, I wonder if we can use gpu virtualization on ubuntu kvm compute nodes?  In nvidia docs, I can't see support of ubuntu kvm hypervisor ( https://docs.nvidia.com/grid/7.0/product-support-matrix/index.html ). Is there any suggestions for gpu virtualization in an OpenStack environment which works on ubuntu 18.04?
<manadart_> achilleasa or hml or stickupkid_, Forward merge: https://github.com/juju/juju/pull/11795
<stickupkid_> achilleasa, PR for you https://github.com/juju/juju/pull/11796
<achilleasa> yaml fun :D
<achilleasa> stickupkid_: have some questions on that one
<stickupkid_> achilleasa, i'll do it next week - EOD/EOW
<achilleasa> stickupkid_: have a nice weekend
#juju 2020-07-03
 * stephencheng[m] sent a long message:  < https://matrix.org/_matrix/media/r0/download/matrix.org/UmJRfYpxKKIsZmUmAMaDKbmM >
<wallyworld> kelvinliu: here's a small vsphere fix https://github.com/juju/juju/pull/11800
<kelvinliu> looking now
<kelvinliu> wallyworld: so this fix is to fix the workaround of /MyFolder/MyDC/vm/juju-root
<kelvinliu> but not fix the vm-fold sets to `juju-root`, right?
<wallyworld> kelvinliu: i used the same logic from their on-site patch - it's just for handling a vmfolder that is not a subpath of the DC folder
<kelvinliu> dcfolders.VmFolder.InventoryPath == /MyDC/vm but
<kelvinliu> the real dc folder is /MyFolder/MyDC/
<wallyworld> yeah https://github.com/nobuto-m/juju/commit/26ab58c38fb3bc41905f8085c6939bc3628181b5
<kelvinliu> yeah, I know
<kelvinliu> that's just a workaround
<wallyworld> the vmfolder should be able to point anyhwere though right?
<kelvinliu> with current fix, if the vmfold sets to juju-root, we will still get the error `folder path "/MyDC/vm/juju-root" not found`
<wallyworld> so they need to set the vmfolder correctly
<wallyworld> if an absolute path is needed, they should provide that
<wallyworld> or no?
<kelvinliu> $ govc find | grep juju-root
<kelvinliu> /MyFolder/MyDC/vm/juju-root
<kelvinliu> vsphere API thinks the dc fold is /MyDC/vm/juju-root
<kelvinliu> I think this might be a fix for their particular issue with the workaround, but we will need to find why the API returns the wrong datacentre path
<kelvinliu> wallyworld: so we will just ask them to test ur branch or?
<wallyworld> kelvinliu: yeah, we can ask them to test the edge snap. i thought juju just appended "juju-root" to whatever the folder ended up as?
<kelvinliu> yeah, that's what Juju does currently
<kelvinliu> the problem here is the root DC fold was wrong in the API
<wallyworld> so i think this will be ok - they have the option of a relative or absolute path, their choice
<wallyworld> i don't know why the DC fold was wrong
<kelvinliu> wallyworld: did u see petro's comment?
<kelvinliu> It seems govmomi is not returning the entire absolute path with
<wallyworld> i did see them but don't understand the api enough
<kelvinliu> [/MyFolder/]MyDC/vm/juju-root
<kelvinliu> wallyworld: just approved to allow them to test it.
<wallyworld> sgtm ty
<kelvinliu> ty for the fix
