#juju 2011-10-24
<jefimenko> i just discovered juju
<jefimenko> is it similar at all to opennebula?
<jefimenko> nm, i think i know the answer to my own question
<jefimenko> rather, can juju use something like opennebula to start VMs in a private cloud?
<hazmat> good morning
<rog> hazmat: hiya
<hazmat> hi rog, how's plan9 kicking it these days?
<rog> hazmat: fragmenting...
 * hazmat vaguely remembers that was the topic of the conference
<rog> hazmat: but always interesting
<rog> hazmat: i drank too much beer too
<rog> ... and discovered that Madrid 3am finishes don't mix well with conference 7am starts.
<hazmat> rog, yeah.. rude awakening it is.
<rog> :-)
<hazmat> rog, there are these crazy kids at the python conference who go drinking to the wee hrs, and then go running at like 8am before the conf starts..
 * hazmat tries rebooting/kicking xchat to kill a unity problem
<rog> hazmat: i guess that would wake you up. although it might not get rid of the headache too well...
 * rog can't quite bring himself to upgrade to oneiric
<hazmat> for the most part it's a good upgrade, unity2d is a pretty solid fallback
<hazmat> SpamapS, you at hadoop world?
<hazmat> oh.. pre uds
<SpamapS> hazmat: no.. ?
<hazmat> SpamapS, oh.. just saw your email early in the am, assumed you were east coast
<SpamapS> ah, no, stupid smoke alarms
<tauren> hazmat: to experiment with juju, is it possible to run a complete environment using lxc running within a kvm virtual machine?
<hazmat> tauren, it is possible to use the local/lxc provider in a virtual machine
<hazmat> and create a service deployment that way
<tauren> cool. but will that cause more headaches than I'd have if I just set it all up on a physical box?
<SpamapS> tauren: we've had people doing it in virtualbox vms too
<hazmat> there is a caveat that vm performance can cause some failures atm; it's something we're working on (handling transient disconnects for the internal connection topology)
<tauren> hmm, ok
<hazmat> tauren, it definitely works, just that you need to be aware that the vm can be overloaded if you're going past its capacity
<tauren> ok, good to know.
<SpamapS> yeah I've overloaded my laptop using the local provider a few times..
<tauren> now i assume it isn't possible to set up a full openstack environment without multiple physical systems. is that true?
<hazmat> i end up doing most of my dev testing / charm development with a local provider
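The local provider tauren and hazmat are discussing is configured through environments.yaml. Below is a minimal sketch of what such a file might look like; the field names (data-dir, admin-secret, default-series) are assumptions based on that era's docs, not taken from the chat, so check your version's documentation before relying on them.

```shell
# Write a sample environments.yaml for the local (LXC) provider.
# The option names below are assumptions; verify them against the
# juju docs for your release before use.
mkdir -p ./juju-sketch
cat > ./juju-sketch/environments.yaml <<'EOF'
environments:
  local:
    type: local
    data-dir: /tmp/juju-local
    admin-secret: change-me
    default-series: oneiric
EOF
cat ./juju-sketch/environments.yaml
```

With something like this in place, bootstrapping the local environment starts LXC containers on the host rather than EC2 instances.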
<SpamapS> installing packages can be hard on the disks ;)
<hazmat> ssd ftw ;-)
<hazmat> SpamapS, did you end up ordering an ssd?
<SpamapS> no
<SpamapS> I can't decide on one. Analysis paralysis.
<hazmat> SpamapS, which laptop model are you doing this for?
<hazmat> i can give a single suggestion if that would help
 * hazmat is still waiting for native 7mm ssd
<SpamapS> macbook pro 5,1
<SpamapS> SATA II only
<SpamapS> tho some SATA III drives are known to work fine
<tauren> ooo, i'm interested too. macbook pro 5,5
<SpamapS> hazmat: considering getting the superdrive replacement kit too since my superdrive no longer seems to function
<hazmat> SpamapS, hard to narrow down to one, so i'd probably go with an ocz vertex3 or kingston hyperx (i'm partial to the sandforce controllers), i think the crucial/micron m4 is pretty good as well, the intel 510 or 320 are both pedestrian but solid. here's a nice roundup http://www.anandtech.com/show/4421/the-2011-midrange-ssd-roundup.. i'd probably just go with the crucial/micron after verifying size compatibility. the lack of sata III means you won't really
<hazmat> be pushing any of these to their limits, as you'll saturate the bus. the sandforce controllers have slightly higher power usage (they're doing more work in the controller).
<SpamapS> don't care much about power consumption on this beast
<SpamapS> 2 hours is a triumph
<SpamapS> hazmat: now to see which of those is available via Amazon Prime ;)
<hazmat> if you do go with the crucial m4 you'll need to double-check the firmware to make sure it's on the latest.. http://www.anandtech.com/show/4712/the-crucial-m4-ssd-update-faster-with-fw0009
<hazmat> SpamapS, pretty much all of them are on prime ;-)
<SpamapS> You're the second person to suggest kingston hyperx
<SpamapS> so I'll probably look at that
<SpamapS> I wonder if I can have it shipped to the Caribe Royale. ;)
<SpamapS> Now if my machine could actually resume from suspend, I'd be quite happy. :-P
<hazmat> doh
<SpamapS> It's unbelievable that this is an issue, still.
 * hazmat takes a break from email bbiab
 * hazmat attacks the review queue
<hazmat> rog, will you be able to do a review on niemeyer's go-new-revisions branch?
<rog> hazmat: yes - coming up. there's been lots of discussion to try to absorb!
<hazmat> indeed, lots of good discussion
<rog> hazmat: hmm, when i do bzr qdiff go-trunk go-new-revisions i get an error. how is go-trunk not a parent of go-new-revisions? the log looks like it is. http://paste.ubuntu.com/717939/
<hazmat> cd go-new-revisions && bzr qdiff -r ancestor:../go-trunk
<hazmat> rog, ^
<hazmat> rog, you need to tell bzr that you want an ancestor branch diff
<rog> hazmat: ah, i haven't seen ancestor: before
<rog> and i thought -r just took a revision number.
<rog> cool.
<rog> thanks
<hazmat> bzr help revisionspec has some gory details of the many ways to spell a revision
<rog> i just looked. ultra-simple it is not.
<rog> hazmat: hmm, something's really wrong. the diff should not look like this: http://i.imgur.com/Q2PoM.png
<rog> (note invalid code on right, redundant ifs on left)
<rog> the output from bzr diff looks ok, so maybe qdiff isn't coping with the ancestor: spec very well
<hazmat> rog, it's a context diff; you can switch to complete by clicking the radio button
<hazmat> the output between qdiff and diff should be equivalent, just formatted differently (side by side vs unidiff)
<rog> hazmat: ah, the gray spaces represent omitted lines. i see.
 * rog is heading off.
<rog> hazmat: sorry, haven't quite got to the end of go-new-revisions, got side tracked by thinking about error handling vs identifiable errors.
<hazmat> rog, no worries. if you can, it would be great to have it reviewed by end of day; it's been pending for a while
<hazmat> i'm digging into some of the other items in the queue
<hazmat> hopefully next week we can talk about better team review practices to keep the queue down
<hazmat> jimbaker, thanks for the review
<jimbaker> hazmat, sounds good, i can't wait until the retry work lands, it's going to be very helpful
<hazmat> jimbaker, yeah.. i'm looking forward to local environments surviving suspend
<hazmat> it should close out about 4 related issues
<tauren> hazmat: any chance you could answer this question I asked previously? "i assume it isn't possible to set up a full openstack environment without multiple physical systems. is that true?"
<hazmat> tauren, oh.. sorry missed that earlier.. it is possible to set up openstack without multiple machines.. it depends on how you want to do it though.. if you want to do it with juju or just run openstack
<hazmat> tauren, openstack can run with uml or lxc on a local machine in a developer setup.. it can also be set up that way in ec2.. as for being driven by juju
<hazmat> there was a guy who had virtualbox setup with netboot and orchestra in a vm to manage the other vms, which he pointed juju to
<hazmat> but if you mean like best practice... for prod use, it's physical machines + orchestra + juju deploying openstack
<hazmat> we have some docs on the orchestra wiki
<tauren> my goal is to experiment with openstack, juju, lxc, etc before moving forward with purchasing multiple physical machines. if i could do it all on a single box for testing, that would be awesome.
<tauren> thanks for the answer! i'll look for those docs.
<hazmat> tauren, this blog post has the links for getting all the pieces in place https://wiki.ubuntu.com/ServerTeam/OrchestraJuju
<tauren> great! thanks for the help.
<hazmat> rog ping?
<hazmat> oh not here
<bcsaller> jimbaker, hazmat: thanks for the review on statusd
<hazmat> bcsaller, np
<hazmat> fwereade, what's the base for dynamic-unit-state?
<hazmat> woot only one more review till i've hit the full queue
 * SpamapS would be interested in working on a gamification of lp's merge review UI ;)
<SpamapS> "You earned the Review WARRIOR badge (10 reviews in a day)"
<hazmat> nice
<hazmat> i started working on a touch ui for launchpad, but i'm doubtful now i'll have it ready for a uds lightning talk
<hazmat> sigh.. so little time
 * hazmat saves the last review for tomorrow
<hazmat> tis beer o clock, cheers
<SpamapS> hazmat: nastravi!
#juju 2011-10-25
<_mup_> juju/scp-command r413 committed by jim.baker@canonical.com
<_mup_> Initial commit
<_mup_> juju/scp-command r414 committed by jim.baker@canonical.com
<_mup_> Added missing file from previous commit
<_mup_> juju/scp-command r415 committed by jim.baker@canonical.com
<_mup_> Support relative paths
<_mup_> juju/scp-command r416 committed by jim.baker@canonical.com
<_mup_> Test with absolute path
<_mup_> juju/scp-command r417 committed by jim.baker@canonical.com
<_mup_> Explicit test of -r option in bug report
<jcastro> SpamapS: I need your flight info pls.
<jcastro> anyone else coming in today?
<kaaloo> hazmat: Hi, working on elb charm.  --policy=local  doesn't work with deploy for me and couldn't find the option anywhere on trunk.
<hazmat> hmmm. i might have gotten the option wrong
 * hazmat double checks
<hazmat> jcastro, i'm coming in today
<hazmat> jcastro, airport arrival 3:26pm
<hazmat> kaaloo, oh.. right. we removed the placement policy cli option
 * hazmat sighs
<hazmat> kaaloo, it's only available as an environments.yaml option now
<kaaloo> hazmat, ok I'll see what effect that has in a bit, debugging on ec2 now.  Will that still work with the ec2 provider ?  Everything on machine 0 ?
<hazmat> kaaloo, it will still work with ec2; it's just not truly a deploy-time decision anymore, it's an environment config property
<hazmat> we changed this out at the last minute b4 release because we had some uncertainty about the correct interface, and the placement work wasn't fully fleshed out to cover additional use cases
<kaaloo> hazmat, no problem, I'll check that out.  Thanks for your help
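To make the removed CLI flag concrete, here is a sketch of what the environments.yaml equivalent might look like. The key name and the local value are inferred from the conversation, not quoted from the docs, so verify them against your juju version before use.

```shell
# Sketch: placement moved from a deploy-time CLI option to an
# environments.yaml property. The "placement: local" key/value below
# is an assumption inferred from the chat.
mkdir -p ./juju-placement-sketch
cat > ./juju-placement-sketch/environments.yaml <<'EOF'
environments:
  sample:
    type: ec2
    placement: local   # put all service units on machine 0
EOF
cat ./juju-placement-sketch/environments.yaml
```

As hazmat notes, this still works with the ec2 provider; it just applies to the whole environment rather than per-deploy.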
 * hazmat switches out to packing and racing to the airport
<drt24> so the documentation appears to indicate that juju only works with ec2 but https://help.ubuntu.com/community/Orchestra/Juju indicates it might also work with orchestra. Does that mean that I can build a local cloud and then use juju on it?
<hazmat> drt24, you can use juju with a local cloud, or deploy a local cloud with juju
<SpamapS> jcastro: I am on the plane.. its blue and white. WATCH FOR US
<SpamapS> jcastro: that info?
<SpamapS> jcastro: 4:10pm ua261
 * SpamapS defies the order to shut off electronics while frantically downloading juju and local dev deps before takeoff
 * SpamapS wishes every flight had inflight wifi
 * SpamapS signs off
<robbiew> lol
<hazmat> jcastro, are you doing an airport pickup?
<rog> off for the day. see y'all tomorrow.
<robbiew> bcsaller: around?
<bcsaller> robbiew: yeah
<robbiew> got time for g+ catchup?
<bcsaller> of course
<robbiew> cool deal...will send an invite
<bcsaller> robbiew: just sent one, but I'll look for yours
 * hazmat yawns
<_mup_> juju/expose-refactor r410 committed by jim.baker@canonical.com
<_mup_> Initial commit
#juju 2011-10-26
<shazzner> hey everyone
<shazzner> I've been reading up on juju/ensemble
<shazzner> and I'm eager to try it out
<shazzner> what's a good way to set something up and practice with?
<shazzner> I was thinking Amazon's EC2 free account
<shazzner> also is juju only really good for web-facing servers?
<shazzner> or can I use it to deploy something like OpenERP?
<dosdawg> you could use it for internal servers if you ran openstack or some other ec2-api-compatible platform
<shazzner> dosdawg: do you have a tutorial or something about setting up openstack? Thanks!
<SpamapS> shazzner: it's non-trivial :)
<shazzner> then I'll need the most un-trivial tutorial/guide there is! :)
<SpamapS> shazzner: you can use the orchestra provider to install machines for specific workloads using juju
<SpamapS> shazzner: For OpenStack you need several components.. mysql, rabbitmq, nova (for virtualization and the API), glance (for hosting machine images)..
<SpamapS> shazzner: here's a good starting point http://cloud.ubuntu.com/2011/10/ubuntu-cloud-deployment-with-orchestra-and-juju/
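A hypothetical sketch of what deploying the components SpamapS lists might look like with juju. Every charm name except mysql is an assumption, and the relations shown are guesses at the wiring rather than documented interfaces, so treat this purely as illustration:

```shell
# Sketch only: charm and relation names below are assumptions,
# not confirmed by the chat or the charm store of the era.
juju bootstrap
juju deploy mysql
juju deploy rabbitmq-server        # assumed charm name
juju deploy nova-cloud-controller  # assumed charm name
juju deploy glance                 # assumed charm name
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation glance mysql
```

The linked cloud.ubuntu.com post is the authoritative walkthrough; this just shows the general deploy/add-relation shape.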
<shazzner> thanks! sorry I'm a bit distracted watching the #OWS stuff
<SpamapS> #OWS ?
<shazzner> Occupy Wall Street, twitter hashtag
<SpamapS> ah
<SpamapS> shazzner: if you just want to "play" with juju you can use the "local" provider which will start LXC containers (like a VM but more lightweight) to stand in for EC2 VMs or real servers.
<shazzner> ah ok, thanks
<SpamapS> shazzner: as far as OpenERP, yes, you could deploy it with juju
<shazzner> is there a charm built for it?
<SpamapS> no, but they're pretty easy to write. :)
<SpamapS> and I believe at least some of its backend services have charms
<SpamapS> looks like it has a web component and a server component, and it uses postgresql..
<SpamapS> shazzner: anyway, local provider should be documented at http://j.mp/juju-docs .. heading to sleep, but good luck, and let us know if you have any success w/ an OpenERP charm. :)
<shazzner> Ok, thanks again!
<shazzner> Get some sleep :)
<shang> quick question, I installed orchestra, but the management classes didn't come up. did I miss something?
<fwereade> shang: you mean you install ubuntu-orchestra-server, and cobbler runs, but the juju-orchestra-* mgmt classes are not created?
<fwereade> shang: if so, I don't think I can help, I use orchestra but I'm not familiar with how it's all set up
<fwereade> shang: but ping me if you think it's something juju-related
<shang> fwereade: ok, thanks!
<doitdistributed> hi
<doitdistributed> I have a problem installing juju on lucid
<doitdistributed> any hints?
<doitdistributed>  sudo apt-get install juju
<doitdistributed> Reading package lists... Done
<doitdistributed> Building dependency tree
<doitdistributed> Reading state information... Done
<doitdistributed> The following NEW packages will be installed:
<doitdistributed>   juju
<doitdistributed> 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
<doitdistributed> Need to get 0B/383kB of archives.
<doitdistributed> After this operation, 2,879kB of additional disk space will be used.
<doitdistributed> (Reading database ... 170192 files and directories currently installed.)
<doitdistributed> Unpacking juju (from .../juju_0.5+bzr409-1juju1~lucid1_all.deb) ...
<doitdistributed> dpkg: error processing /var/cache/apt/archives/juju_0.5+bzr409-1juju1~lucid1_all.deb (--unpack):
<doitdistributed>  trying to overwrite '/usr/bin/relation-list', which is also in package ensemble 0:0.5+bzr292-0ensemble1~lucid1
<doitdistributed> dpkg-deb: subprocess paste killed by signal (Broken pipe)
<doitdistributed> Errors were encountered while processing:
<doitdistributed>  /var/cache/apt/archives/juju_0.5+bzr409-1juju1~lucid1_all.deb
<doitdistributed> E: Sub-process /usr/bin/dpkg returned an error code (1)
<drt24> doitdistributed: perhaps this is due to already having ensemble installed?
<doitdistributed> hmpf
<doitdistributed> :-)
<doitdistributed> that was it - stupid me! Thanks a lot!
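For reference, the fix drt24 points at: the old ensemble package ships the same files as juju (e.g. /usr/bin/relation-list), so removing it first clears the dpkg file conflict. A sketch, assuming a standard apt setup:

```shell
# The failed unpack above is a file conflict with the old "ensemble"
# package. Remove it, tidy apt's state, then install juju.
sudo apt-get remove ensemble
sudo apt-get -f install
sudo apt-get install juju
```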
<_mup_> Bug #881991 was filed: Devops page does not exist on juju.ubuntu.com <marketing> <website> <juju:New> < https://launchpad.net/bugs/881991 >
<_mup_> txzookeeper/session-and-conn-fail r59 committed by kapil.foss@gmail.com
<_mup_> try to make the retry wrapper functions generic, privatize some of the helpers, address some doc string comments
<hazmat> g'morning juju folks
<rog> niemeyer: hiya
<niemeyer> rog: Yo
<niemeyer> Greetings from almost-UDS
<rog> niemeyer: having fun?
<niemeyer> Not yet :)
<niemeyer> Just got in the room
<dosdawg> whats juju do that chef or puppet won't ??
<hazmat> dosdawg, orchestration
<hazmat> and reuse
<SpamapS> http://askubuntu.com/questions/52840/differentiator-between-juju-and-front-runners-puppet-and-chef/66287#66287
<_mup_> Bug #66287: fglrx freeze machine <linux-restricted-modules-2.6.17 (Ubuntu):New> < https://launchpad.net/bugs/66287 >
<hazmat> dosdawg, ^ answer from a puppet founder at the askubuntu link
<rog> some config files have a "juju: environments" line at the start. does this have any significance?
<rog> s/config/environments.yaml/
<rog> i've noticed that in at least one place in the testing code it still says "ensemble: environments" so i'm guessing it doesn't signify too much :-)
<rog> any particular reason it's there at all?
<dosdawg> interesting about puppet vs juju
<dosdawg> thanks
<SpamapS> rog: not required, I think originally it was meant to just signify how to interpret the file, but the filename does that already
<rog> SpamapS: ok, thanks. i thought that, but i just thought i'd check before ignoring it entirely. i wonder if parsing it should fail if it's there, in fact. (obviously it would need deprecating first)
<SpamapS> deprecation warnings for cruft is a great idea
<hazmat> rog, it's basically a tag header to ensure it's not some random yaml file
<rog> hazmat: ok. well, it would be if it was actually required :-)
<rog> as it is, it's noise AFAICS
<rog> i think it may as well just be a comment.
<rog> alternatively we could enforce it.
<hazmat> rog, it should be the python impl
<hazmat> enforced that is
<rog> hazmat: not as far as i can see.
<rog> hazmat: as in: many of the tests omit it, or have it wrong (e.g. ensemble: environments). and the parsing code doesn't seem to look for it.
<rog> but i may easily have misinterpreted
<rog> that's why i'm asking...
<hazmat> rog, indeed it is superfluous on environments.yaml
<hazmat> the charm metadata does have a required header
<rog> ah, ok. maybe it should be required in environments too, for consistency
<hazmat> hmm.. that doesn't seem to be the case either
<hazmat> weird.. i  know there was a header check at some point
<rog> hazmat: personally, i think that the structural check is sufficient.
<rog> hazmat: but i'm fine with doing a header check too.
<hazmat> rog, yeah.. the structural check is more relevant
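The structural check rog and hazmat settle on can be sketched in shell: rather than trusting a "juju: environments" tag header, verify the file has the expected top-level shape. The grep here is a hypothetical stand-in for a real schema check on the parsed YAML.

```shell
# Minimal sketch of a structural check: accept the file only if it
# has a top-level "environments:" key, ignoring any tag header.
cat > ./check-sketch.yaml <<'EOF'
environments:
  sample:
    type: ec2
EOF
if grep -qE '^environments:' ./check-sketch.yaml; then
  echo "looks like an environments.yaml"
else
  echo "missing top-level environments key" >&2
fi
```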
<evan__> Does anyone have experience with installing Hbase using juju charms hadoop-master / slave + Hbase
<hazmat> evan_, i don't believe there is an hbase charm yet
#juju 2011-10-27
<_mup_> juju/expose-refactor r411 committed by jim.baker@canonical.com
<_mup_> Refactored tests for firewall mgmt to break dependence on provisioning agent
<_mup_> juju/expose-refactor r412 committed by jim.baker@canonical.com
<_mup_> Added missing files from commit and removed debugging
<_mup_> Bug #882492 was filed: initial juju package. <juju:In Progress by rogpeppe> < https://launchpad.net/bugs/882492 >
<Pau> hi
<niemeyer> rog: Can you please resubmit the merge proposal against the proper target (lp:juju/go)?
<rog> ah, i just went with what lbox propose did
<rog> hmm, i wonder why it chose lp:juju
<niemeyer> rog: Because that's the default project series
<niemeyer> rog: You can use -for lp:juju/go
<niemeyer> rog: With lbox
<niemeyer> rog: You'll also need -bug N
<niemeyer> rog: (to reuse the bug already opened)
<rog> niemeyer: https://code.launchpad.net/~rogpeppe/juju/go-initial-juju/+merge/80580
<niemeyer> rog: Cheers!
<niemeyer> rog: As a minor, the description is also a bit cryptic, even more considering it goes into a bug as well
<niemeyer> rog: Note the lack of context: https://bugs.launchpad.net/juju/+bug/882492
<niemeyer> rog: I'll handle this one
<rog> niemeyer: so what should the bug be? "There is no juju package."
<niemeyer> rog: Go port must handle environments.yaml
<niemeyer> rog: Just changed
<rog> i always find it a bit strange that it's all "bugs" and not "issues"
<niemeyer> rog: You can mentally translate it to your preferred term :)
<rog> yeah, but it's the mind set of "there's a problem"... which there isn't!
<niemeyer> rog: I personally prefer "ticket"
<rog> yeah. that's better. it seems weird that every feature change has to create a "bug".
<niemeyer> rog: We're creating our own process on top of the existing infrastructure.. certain trade offs are inevitable
<rog> niemeyer: ah, i thought it was designed this way
<niemeyer> rog: No.. our kanban view benefits from some additional associations
<niemeyer> rog: So we enforce those
<niemeyer> rog: The kanban is something Jamu wrote for Landscape originally, and extracts data via the API
<rog> niemeyer: i hadn't heard of kanban
<niemeyer> rog: http://j.mp/juju-florence?
<rog> that's a kanban?
<niemeyer> rog: http://en.wikipedia.org/wiki/Kanban
<niemeyer> rog: http://leankitkanban.com/
<niemeyer> rog: It's inspired by those concepts
<rog> ok, i'll have a look
<rog> columns ordered by priority - is there more to it than that?
<niemeyer> rog: These web sites contain more details than I could explain
 * rog has to go and buy a birthday card
<niemeyer> rog: I thought we had talked about Bootstrap and Destroy in Conn already
<niemeyer> rog: Or I guess I misunderstand the terminology used there
<niemeyer> rog: There are three different terms: conns, provider, and environ.. what's what?
<rog> a conn is what juju layers on top of an environ
<niemeyer> rog: +func (dummyProvider) NewEnviron(name string, attributes interface{}) (e juju.Environ, err os.Error) {
<niemeyer> +	cfg := attributes.(schema.MapType)
<niemeyer> +	return &dummyConn{
<niemeyer> rog: let's please compact these terms a bit more as we talked the last time
<rog> hmm, it should be named dummyEnviron there indeed
<rog> what should i call Conn, though. it will have state in the future. all the stuff that juju knows about that the environs do not
<rog> s/\./?/
<niemeyer> rog: Look at the existing code
<niemeyer> rog: Also, please keep up with the existing convention for gocheck:
<niemeyer> +	C "launchpad.net/gocheck"
 * rog hates importing to .
<rog> and i think it's unnecessary
<niemeyer> rog: Sorry about that, but we'll need to listen to each other a bit more if we want to work together on this.
<niemeyer> rog: We have existing code there using a convention
<niemeyer> rog: and that convention is used in every single gocheck package
<niemeyer> rog: Because that's how it has been built
<niemeyer> rog: Just telling me you hate my convention won't make things easier
<rog> the reason seems to be entirely one of convenience.
<rog> i will change it. but i won't like it :-)
<niemeyer> rog: I hope you like consistency.
<rog> the go authors have considered removing import ".", BTW
<rog> they might even do it some time
<niemeyer> rog: Oh yeah, I bet
<rog> yeah, i'd prefer if everything was made consistent to avoid importing . :-) when looking at a gocheck package, i can never remember exactly which identifiers gocheck brings in
<evan_> How would I expose port 59999 with Juju using hadoop-master charm?
<niemeyer> rog: ping
<rog> niemeyer: hi
<niemeyer> rog: Yo
<niemeyer> rog: Just an early note before you leave, the branch looks very good so far
<niemeyer> rog: Some very minor comments, but awesome otherwise
<rog> i'm glad
<niemeyer> rog: Still haven't finished, but running through it as I can
<rog> s/glad/so glad/ :-)
<rog> Conn ok?
<niemeyer> rog: I think Conn is Environ
<rog> niemeyer: then what's Environ? :-)
<niemeyer> rog: I haven't finished the review yet, as I pointed out, but it feels like they're both the same thing so far
<niemeyer> rog: It's creating a new Environ just to embed in a Conn
<niemeyer> rog: and Conn has Bootstrap and Destroy commands, which as we already talked should be in the Environ interface
<rog> niemeyer: the difference, as i envision it, is that Conn gets all the juju methods that the individual providers don't implement, e.g. Deploy, AddRelation, etc etc
<niemeyer> rog: That's not how it works today.. we're reimplementing the existing system, rather than doing a complete new architecture from the ground up
<niemeyer> rog: There is a well established model
<niemeyer> rog: If we're changing it, I want to see a full fledged proposal that covers it all
<niemeyer> rog: Rather than lose bits
<niemeyer> rog: Otherwise we'll face new issues with the new design, and will not meet in the end
<rog> erm, i'm trying quite hard not to change it fundamentally. it's just moving things around as warranted (i think!) by the different Go package model
<niemeyer> rog: Not in that case.. you picked two arbitrary methods out of the Environ interface
<rog> so the Conn gets the stuff that lived in control... (i think - i'll just go and have another look)
<niemeyer> rog: control is the command line stuff
<niemeyer> rog: destroy_environment lives in the MachineProvider interface today, which is Environ
<niemeyer> rog: We're not going to change these interfaces without a good reason to do so, which should cover the whole application and be talked about
<niemeyer> rog: We had already talked about that.
<rog> ok, that's exactly why i'm putting out this merge request
<rog> because it's a concrete thing we can talk about
<rog> rather than me waving my hands a lot!
<niemeyer> rog: That's not how it works.. even more considering we had _already_ talked about this
<rog> i don't think you mentioned this aspect before
<rog> or at least if you did, i didn't understand
<rog> which is entirely possible
<niemeyer> Oct 13 13:56:39 <niemeyer>      We cannot fight over details like this rog.. an Environ needs a Destroy method on its interface. There's zero  doubts about that.
<niemeyer> Oct 13 13:56:54 <rog>   oh, definitely!
<rog> erm, Environ *does* have a Destroy method
<rog> doesn't it?
<rog> checked. yes, it does
<niemeyer> rog: So what is this method about?
<rog> which method?
<niemeyer> *Destroy*
<rog> the Destroy on Conn just calls the Destroy on the underlying Environ
<niemeyer> rog: Ok.. why?
<rog> most users will never need to know about Environ
<niemeyer> rog: Ok.. why?
<rog> because Environ does the low level stuff
<rog> Conn does all the actual juju stuff
<rog> Environ is the foundation on which juju lives
<niemeyer> rog: There's no Conn today.. and the Conn in your proposed branch does nothing.
<niemeyer> rog: Let's please get rid of it until we find a reason for it to exist.
<rog> no, it's there as a placeholder
<rog> thinking about it, it's really Control
<niemeyer> rog: Yeah.. that placeholder doesn't exist in the current code base, despite it working!
<rog> except i think Conn works better as a name in this context than Control
<niemeyer> :-(
<rog> would it look ok to you if Conn was renamed Control ?
<niemeyer> rog: Conn does nothing.. please remove it until we agree it's a needed concept.
<rog> ok
<niemeyer> rog: Or.. make an actual proposal
<niemeyer> rog: Otherwise you're alone trying to introduce a concept in the code that does absolutely nothing, does not exist in the existing implementation, and so makes no sense unfortunately
<rog> well, my actual proposal is that Conn (or Control) is where all the juju logic lives. i.e. all the shared code that we don't want to duplicate in each individual provider
<niemeyer> rog: Ok, but by _actual_ proposal I mean, read the code, and tell me how existing ideas map into your model
<rog> at the moment a lot of it is done with a superclass (which we can't have)
<niemeyer> rog: Because the existing model we have today works and is well understood
<niemeyer> rog: You're talking about Control, for instance, which in the current code base is just command line tools, indicating there's a big disconnect
<niemeyer> rog: I already explained this as well :-(
<rog> oh doh
<rog> i meant State
<rog> i think
<rog> hold on
<rog> yes, state. sorry.
<niemeyer> rog: Ok, try this:
<rog> or ServiceStateManager to be more accurate
<rog> (in one case at least)
<niemeyer> rog: grep destroy juju/state/*.py
<niemeyer> rog: You see.. there's a huge disconnect
<niemeyer> rog: Which is why I can't feel ok about the proposal so far
<niemeyer> rog: I don't want to discourage you from proposing something by any means, but this isn't going in a good direction yet
<rog> ok.
<niemeyer> rog: We may end up with juju.Conn, and I'm fine with it as long as we have an architectural plan that is clear and still feels like a good idea after mapping the ideas we have today into it
<niemeyer> rog: Otherwise, the existing model is well known
<TheMue> niemeyer: Hi Gustavo. welcome back. Could you please give me any hint (URL) about the architectural documentation of the current juju and how far the go implementation has gone?
<niemeyer> TheMue: Hey!
<niemeyer> TheMue: Thanks!
<niemeyer> TheMue: All the docs live at https://juju.ubuntu.com/docs
<rog> niemeyer: i'll send you an email with a brief description of how i see the current design
<niemeyer> TheMue: There's a lot of architectural details there
<niemeyer> TheMue: The Go port is still young
<niemeyer> rog: Sounds good, thanks, and sorry for making things more painful than they may look to you
<TheMue> niemeyer: OK, I'll take a deeper look. I'm already working from my new Ubuntu notebook.
<niemeyer> TheMue: Woohay! :)
<TheMue> niemeyer: It's great, and building a go release is extremely fast. *bigGrin*
<rog> niemeyer: email sent
<rog> niemeyer: does that make some sort of sense?
<niemeyer> rog: It does.. responded, let me know
<rog> niemeyer: sent response. PTAL.
<niemeyer> rog: Sounds good
<niemeyer> rog: I mean, the plan sounds good
<rog> niemeyer: erm, which plan?
<niemeyer> rog: Your email :)
<rog> niemeyer: it's done
<niemeyer> rog: Cool
<rog> niemeyer: hence the "PTAL"
<niemeyer> rog: Oh, ok
<_mup_> juju/expose-refactor r413 committed by jim.baker@canonical.com
<_mup_> PEP8, PyFlakes, docstrings
<niemeyer> rog: Review delivered
#juju 2011-10-28
<perlstein> hello
<perlstein> are there juju charms that automatically deploy openstack for some common uses?
<perlstein> i'm going by the openstack guide, but others on my team are suggesting that there's some charms i can check out to make it easier?
<perlstein> i am having zero luck finding them
<sumanah> hazmat: m_3: got a moment?
<sumanah> I'm writing up https://www.mediawiki.org/wiki/NOLA_Hackathon#Topics.2C_Goals.2C_.26_Outcomes
<sumanah> (nice photo: https://commons.wikimedia.org/wiki/File:NOLA_Hackathon_13.jpg ) :)
 * sumanah emails you both
<SpamapS> perlstein: the charms are at http://code.launchpad.net/charm
<SpamapS> perlstein: there's some info on how to use them spread out on wikis.. we're working on consolidating it
<SpamapS> jamespage: hey, so, charm testing..
<jamespage> SpamapS: hey
<jamespage> any thoughts on my email from last month?
<SpamapS> jamespage: we had a fairly deep conversation about it yesterday.. and we came up with something not at all compatible with what you guys have done. :p
<jamespage> SpamapS, w00t
<jamespage> whats the thinking then?
<SpamapS> jamespage: we can likely use any of the tests you've already written, but the relation to a testing service has problems for automation of testing the entire graph of combinations of charms, which we very much want to do.
<jamespage> SpamapS, I agree
<SpamapS> https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-juju-charm-testing
<SpamapS> jamespage: the whiteboard details the way that we'll do it..
<SpamapS> jamespage: we'll have to blacklist any charms that have no sane defaults (like the recently submitted ELB charm that needs an AWS account #)
<jamespage> TBH this is really what I wanted to see
<jamespage> but I think that this is something a little different to the complex deployment testing objective we have
<SpamapS> The idea behind this one is simply to ensure that testing happens automatically on all charms in the charm store.
<jamespage> which I agree is v. important and fulfills a clear objective for juju and the charm store
<SpamapS> I don't think the test runner will actually be that complex to write.
<jamespage> it would be nice if this work could hook into the complex deployment testing stuff
<SpamapS> The dependency resolution bits are actually already done in an older branch of juju.. so its just wrapping that around the right sequencing.
<jamespage> i.e. we could use the internal charm tests as part of testing an specific stack of charms
<jamespage> as well as testing it from the outside as well
<jamespage> openstack being a case in point
<SpamapS> Indeed
<jamespage> SpamapS: are 'stacks' expected to be delivered soon, i.e. the ability to specify which charms, how many units and what relationships, without scripts?
<SpamapS> jamespage: hazmat sent out a proposal for a near term roadmap that included import/export
<jamespage> so with complex deployment testing the stuff I have today works around not having that
<jamespage> the internal charm testing is broadly equivalent to __install__ tests
<SpamapS> jamespage: right
<jamespage> so I can probably bin that
<jamespage> and just provide a nice way to hook in a set of black box system tests once the environment is fully deployed
<jamespage> SpamapS: so with the charm testing I think we need a way to pull out test results and report on what passed and failed.
<jamespage> an overall status is good - we can use that for acceptance into the charm store
<jamespage> but details of what went wrong are important as well
<jamespage> (sure that you already discussed this)
<SpamapS> glossed over the details, but yes, the idea is to provide help for fixing whatever broke
<jamespage> this is so lining up to become a jenkins plugin :-)
<SpamapS> I'm hoping it's just a script that one runs in jenkins. :)
<jamespage> (have been holding off as I did not want to re-invent the wheel)
<jamespage> yeah - but think about this
<jamespage> a deployed juju environment is just a test fixture
<SpamapS> I was thinking about it and really we can just have a thing that watches all branches for a commit, when it sees one, it merges that into a single bzr branch that is "the charms".. and then runs the test runner for the affected charm.
<jamespage> so having a nice way to manage (import/export) an environment setup would be sweet
<SpamapS> jamespage: right.. I could see it working as a jenkins plugin, but I couldn't see myself writing a jenkins plugin. ;)
 * jamespage is up for that challenge
<jamespage> heck you can even write them in python these days
<jamespage> *experimental only
<SpamapS> jamespage: I'm not sure I fully understand the value in making it a jenkins plugin vs. just running things in response to a bzr commit.
<jamespage> so for the charm testing that is sufficient
<jamespage> I was more thinking about complex deployment testing TBH
<jamespage> i.e. set me up this environment
<jamespage> I'll then test/run/build in it
<jamespage> and then tear it down again
<jamespage> it needs more thinking about
<SpamapS> It feels odd when I think about how a juju env relates to jenkins.
<jamespage> can we make sure that servercloud-p-juju-charm-testing happens before servercloud-p-complex-deployment-testing.
<SpamapS> A general juju plugin would make a lot of sense.. since as you say, juju envs are just test fixtures. But how it would work.. I'm really not sure.
<SpamapS> jamespage: yeah good idea
<SpamapS> jamespage: as of last night it was before
<jamespage> I'll add a deps
<jamespage> done
<SpamapS> ahh very nice
<SpamapS> I forget that deps exist ;)
<rog> hey guys, i'm off now for the afternoon. flight to UDS 6.30am tomorrow. looking fwd to seeing you there.
<SpamapS> heh, I'm in Orlando airport waiting to fly home for the weekend. ;)
<SpamapS> be back on Tuesday morning. :)
<SpamapS> rog: see you here then. ;)
<rog> SpamapS: great, hope all went well this week
<SpamapS> rog: indeed, very productive (stayed almost completely focused on the charm store)
<SpamapS> hrm, why doesn't our bot tell us about trunk commits? :-P
<hazmat> SpamapS, yeah.. it would be nice if the commits weren't client side
<hazmat> er. messages about them
<_mup_> juju/expose-refactor r414 committed by jim.baker@canonical.com
<_mup_> Fix interaction of watch setup and retry test
 * SpamapS awaits flight attendant rebuke and continues deleting... err.. reading email
<_mup_> juju/expose-refactor r415 committed by jim.baker@canonical.com
<_mup_> Docstrings, cleanup
<_mup_> txzookeeper/session-and-conn-fail r60 committed by kapil.foss@gmail.com
<_mup_> 2nd refactor, simplify retry impl, remove magic instance rebinding, just simple delegator methods
<_mup_> txzookeeper/session-and-conn-fail r61 committed by kapil.foss@gmail.com
<_mup_> refactor core tests for less exposed surface.
<_mup_> juju/unlocalize-network r409 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> juju/security-agents-with-identity r315 committed by kapil.thangavelu@canonical.com
<_mup_> exposed grant access api, exists check on acl
<_mup_> juju/trunk-merge r277 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> juju/trunk r411 committed by kapil.thangavelu@canonical.com
<_mup_> merge unlocalize-network [r=bcsaller,jimbaker][f=873335]
<_mup_> Force explicit "C" locale to normalize output parsing of the libvirt cli.
#juju 2011-10-29
<_mup_> juju/expose-refactor r416 committed by jim.baker@canonical.com
<_mup_> Observer tests
<_mup_> juju/expose-refactor r417 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<Marx> helloo
<Marx> helloo guys
#juju 2011-10-30
<_mup_> juju/expose-refactor r418 committed by jim.baker@canonical.com
<_mup_> Verify FirewallManager is started and invoked
<_mup_> juju/expose-refactor r419 committed by jim.baker@canonical.com
<_mup_> PEP8, PyFlakes
#juju 2013-10-21
<synergy___> Hello everyone! I tried uploading a theme in Wordpress and  got: 413 Request Entity Too Large
<synergy___> nginx/1.1.19...
<synergy___> But I can't find any evidence of nginx running in juju.
<synergy___> I believe I created the right nginx.conf file, but can't find the service running to restart it with the new settings.
<synergy___> I don't see it in the juju-gui and when I try to restart it in the command window, it says : nginx: unrecognized service
<AskUbuntu> Creating and getting hold of a server in the cloud? | http://askubuntu.com/q/362990
<Manoj> ERROR juju.state open.go:93 TLS handshake failed: x509: certificate has expired or is not yet valid
<Manoj> trying to juju bootstrap
<Manoj> the bootstrap node is failing
<Manoj> I found above error in /var/log/cloud-init-output.log of bootstrap node
<Manoj> please help
<marcoceppi> Manoj: what version of juju are you using, what cloud
<AskUbuntu> OpenStack Best Resources | http://askubuntu.com/q/363384
<zradmin> is there a documented way to upgrade charms for openstack from grizzly to havana?
<Azendale1> is there some debug collection tool if I want to report a bug in a charm (like ubuntu-bug on ubuntu) or should I just use the Launchpad webUI and paste the traceback that I get running the hook in a debug hooks section?
#juju 2013-10-22
<nemothekid> Do I install juju on a single machine (like a CI server) or do I install juju on every developer's machine?
<hatch> nemothekid, you install the juju client on every developer's machine
<Makyo> nemothekid: The client talks to a central "bootstrap" node that controls all of the units involved in the environment.
<Makyo> (or any number of environments)
<nemothekid> Makyo: okay, can I have the GUI exist on the "bootstrap" node and do everything from there?
<nemothekid> basically all I want to have is the ability to just push to a git repo, and then for the bootstrap node to figure everything out
<Makyo> nemothekid: you can have the GUI on the bootstrap node with `juju deploy cs:precise/juju-gui --to 0`
<Makyo> nemothekid: as for the second part, others may have suggestions for post-receive hook type stuff wrt git repos? What one could do is have the config-changed hook pull HEAD of master, then the post-receive hook in your repo do something like, say, juju set <service> last-update-timestamp=<curr time>
<nemothekid> thats perfect
<nemothekid> really glad I came across this project then
<Makyo> nemothekid: however, the bootstrap node doesn't necessarily listen for changes, that'd be something you'd have to automate outside of juju.  Another option is to, say, up the revision number of your charm and then upgrade the charm for your service.
<Makyo> There are a few options, I guess :)
<Makyo> Cheers
<Makyo> The config-changed idea would also work for CI stuff, too, I guess.
<Makyo> </thinkOutLoud>
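Makyo's git-to-juju trick above can be sketched as a post-receive hook that pokes a config value, so the charm's config-changed hook re-pulls HEAD. This is only an illustration of the idea: "myapp" and the `last-update-timestamp` option are placeholders from the conversation, not a real charm's config.

```shell
# Hypothetical sketch of the post-receive idea discussed above.
# Writes the hook to a temp file for inspection; it does not invoke juju.
hook=$(mktemp)
cat > "$hook" <<'EOF'
#!/bin/sh
# Runs on the git server after every push; "juju set" changes a config
# value, which fires the charm's config-changed hook (which would pull HEAD).
juju set myapp last-update-timestamp="$(date +%s)"
EOF
chmod +x "$hook"
echo "wrote post-receive hook to $hook"
```

In a real repo this file would live at `hooks/post-receive` inside the bare repository on the server.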
<Makyo> hatch: https://www.exratione.com/2013/02/nodejs-and-forever-as-a-service-simple-upstart-and-init-scripts-for-ubuntu/
<nesusvet_> Hello everyone!
<nesusvet_> I tried to deploy the MAAS environment, and ran into an issue where I see the assigned server under MAAS, but juju can't recognize it. I did the "destroy-environment" command and deployed everything again via "juju bootstrap" but the same issue happens again and again
<nesusvet_> I see only one server of two
<Jujufy> hey, how long should a `juju status' take?
<nesusvet_> Jujufy, sorry for the delay, it shows information immediately!
<Jujufy1> thanks for answering, although the webchat stopped responding
<Jujufy> ok finally set up a proper IRC client
<Jujufy> i am updating 13.04 -> 13.10 and i spotted some maas updates
<Jujufy> will try to see if juju feels better afterwards
<Jujufy> upgrade done but nothing changed with juju
<Jujufy> i'm trying to get a time for it
<Jujufy> got the result back: http://paste.ubuntu.com/6283048/
<Jujufy> i already have an environment bootstrapped
<Jujufy> it shows up in `juju env'
<Jujufy> i'm trying `juju status' with `-e' but it looks like it will turn out the same
<Jujufy> meanwhile, the original bootstrapping process did convert one maas node to `Allocated to root'
<Jujufy> so i guess at least that worked
<Jujufy> got the same error , this time it does mention my environment
<Jujufy> the timeout seems to be 10min
<marcoceppi> Jujufy: What version of juju did you bootstrap with?
<danielesalatti> Hi all! I need a little help… How can I tell juju to use a t1.micro when deploying? I tried bootstrapping with --constraints "cpu-count=0 mem=512M", "cpu-power=0 mem=512M" and "instance-type=t1.micro" but none of these works for me…
<marcoceppi> danielesalatti: cpu-power=0 mem=128M should have done it
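For reference, the constraint syntax marcoceppi suggests above, shown as a dry run: the `juju` function here is a stub for illustration only, so nothing is actually bootstrapped.

```shell
# Stubbed juju CLI (illustration only, not the real binary) to show
# the suggested constraint flags for landing on a t1.micro-class instance.
juju() { echo "would run: juju $*"; }

juju bootstrap --constraints "cpu-power=0 mem=128M"
```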
#juju 2013-10-23
<routelastresort> gahhhhhhhhhhhhhhhhhh
<routelastresort> I'm just writing my README.md, then pull request time
<Jujufy> marcoceppi: the bootstrap was created on u13.04 but now i'm on u13.10
<Jujufy> should i destroy and re-bootstrap?
<Jujufy> i think i'll try that
<Jujufy> nothing to lose
<routelastresort> A++++, would destroy again
<Jujufy> ok new bootstrap created
<Jujufy> root node allocated
<Jujufy> juju status shows the same symptoms though; i expect it to say it failed after 10min passed
<Jujufy> all the other nodes are off
<Jujufy> but i expect it should say something regardless
<Jujufy> yep
<Jujufy> `no reachable servers'
<Jujufy> but i also got a `...go:282 Unable to connect to environment "".'
<Jujufy> which is strange, because several lines before it said `....go:32 opening environment "my-maas2".'
<Jujufy> juju version => 1.16.0-saucy-amd64
<Jujufy> hm, if i visit the webserver at /api/1.0/node-... i get the json
<Jujufy> and it has a "status": <number>
<Jujufy> i selected maas to manage DNS as well, maybe something will change
<AskUbuntu> Juju/MAAS in vSphere to test OpenStack | http://askubuntu.com/q/364332
<Jujufy> that looks just like my issue
<Jujufy> although i'm on hardware
<Jujufy> i'll bookmark it; hopefully an answer will appear
<Jujufy> OK, following that AskUbuntu question I logged into the root node
<Jujufy> but my /var/log/cloud-init-output.log is different
<Jujufy> after a couple of ... dial tcp ... refused
<Jujufy> it did manage to connect
<Jujufy> and reached ... juju supercommand.go:286 command finished
<AskUbuntu> juju bootstrap error no default environment found | http://askubuntu.com/q/364477
<g0twig> I want to use JuJu on Debian 7.0 but Juju is not part of debian sid anymore
<g0twig> please give me full instructions on how to get it running; I'd like to eventually develop charms and deploy them
<g0twig> 32bit
<mgz> g0twig: you just want the client tools, right?
<mgz> you're not trying to also run juju machines on debian?
<mgz> in which case, you can either straight up get an ubuntu deb and unpack it, or branch the project and build it yourself, I'm happy to help you with that if you need it
<g0twig> mgz: no, an actual juju server
<mgz> g0twig: you know the charms are all written against precise, right?
<mgz> you're not going to get them running on debian sid without changing every charm you use
<g0twig> mgz: I saw charms for newer releases
<mgz> there are some, and there's no reason you couldn't write a charm that would run on both a debian release and an ubuntu release... but that's more work
<g0twig> mgz: so you're saying I can't use juju on a non-ubuntu server?
<g0twig> oh
<g0twig> because Debian 7.0 seems to be the only option I've got
<mgz> g0twig: ah, and they're also preinstalled servers?
<g0twig> yeah
<g0twig> virtual server
<mgz> the general juju setup is to start from a blank cloud machine and do all the setup,
<g0twig> why is juju so distro-dependent?
<g0twig> I don't like that fact, even if I had an Ubuntu server
<mgz> we've just added a manual provider, which lets you start from a less-clean state,
<mgz> but that's a less juju-y way, you can't just say "give me 10 more units for this service" like that
<g0twig> hm...
<g0twig> what's the better option for running juju on an ubuntu server, version 12.04 or 13.10?
<mgz> g0twig: at essence, because a charm needs to set up your service, and "apt-get install X" is somewhat distro/series dependent
<mgz> g0twig: 12.04, but again, you really want a cloud provider that gives you any number of pristine instances (or you can run everything in lxc containers locally)
<g0twig> mgz: I just want to test stuff...
<mgz> it's not like you're asking your hosting provider for 3 machines with mysql and php then putting juju on it
<mgz> that's just not the model
<g0twig> I only got one machine
<mgz> g0twig: so, for just testing stuff, we go back to the beginning, get the ubuntu deb, unpack it, use the binary from there, I expect it will even work on sid
<g0twig> ok
<mgz> you do need an SSL mongodb, which I hope debian has picked up by now
<mgz> and it will download an ubuntu precise image to run in the lxc containers
<g0twig> but 13.10 has a much newer juju version
<mgz> Some Hacking Required(tm)
<mgz> right, you probably want the one from either saucy or our ppa
<g0twig> mgz: so now I installed juju-core, juju-local, juju
<g0twig> what else?
<mgz> lxc, mongodb (needs ssl enabled), and you don't want 'juju' probably
<g0twig> juju-local installed    debootstrap libboost-filesystem1.49.0 libboost-program-options1.49.0   libboost-system1.49.0 libboost-thread1.49.0 libcap2-bin libpam-cap   libpcrecpp0 lxc mongodb-clients mongodb-server
<mgz> right, that metapackage pulls in the right bit
<mgz> that looks good then. `juju version` should say 1.16 or similar
<g0twig> what do I want here? Configuring for Amazon AWS Configuring for Windows Azure Configuring for HP Cloud Configuring for OpenStack Configuring for MAAS Configuring for LXC local provider (Linux)
<g0twig> LXC? I just want to get my stuff going, I don't know much about the cloud
<g0twig> and different things
<g0twig> I just want to have the options to easily install services, manage them, etc. all on this machine
<mgz> yeah, I'd try the local provider, just to experiment
<g0twig> does JuJu need a Linux kernel > 3.8?
<mgz> but, it sounds like you maybe want some other tool if you just want to manage services on one box
<mgz> ...the latest lxc may well do, surely sid has a modern linux kernel?
<g0twig> I am on debian 7.0, no sid
<g0twig> last time I tried sid, my machine didn't boot up again...
<g0twig> ok, it tells me ERROR error parsing environment "local": no public ssh keys found. What am I missing?
<g0twig> ok, I made a ssh key
<ahasenack> hi, could someone review lp:~ahasenack/charms/precise/apache2/apache2-default-servername ?
<ahasenack> r46 (latest) introduced a regression that broke one of my deployments using apache2
<AskUbuntu> juju WordPress with nginx permalink issues | http://askubuntu.com/q/364526
<dagelf> Hi everyone. Where can I find info on how much space juju needs, and in what paths - and how to change that?
<dagelf> juju-local that is... it seems it eats a lot of space in /usr, regardless of where your root path is set in the config
<dagelf> Okay, I found it... /var/lib/lxc
<dagelf> Urg. So what does ERROR Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused mean? How do I restart whatever it's looking for? What is looking for what?
<dagelf> No, really... dagelf@ubuntu:~$ sudo juju bootstrap ERROR Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused
<dagelf> Pretty useless error message and documentation at this point
<sarnold> does netstat -lnp show anything listening on that port and IP?
<dagelf> nope, nothing running, what should I run?
<dagelf> it's a clean install on 13.04, I've just created some test environments and then destroyed them... 3 times... now this happens. I don't feel like rebooting or re-installing everything.... at the moment I'm playing with jujud to see where it fits in...
<dagelf> (local environment of course)
<sarnold> dagelf: are you using the juju ppa or the distribution-packaged juju? I think the ppa might give better results, if you're not using it
<dagelf> ppa:juju/stable
<dagelf> Did I break it?
<dagelf> Are there logs somewhere?
<dagelf> or verbosity switches?
<sarnold> dagelf: iirc, /var/log/juju/ has some, but I'm afraid it's been a while since I've used it, my knowledge is slowly getting out of sync with what's new :/
<dagelf> Before it's bootstrapped it actually deletes all the directories... nothing. :-( And all the other options and commands says: Sorry, bootstrap it first.
<dagelf> But it won't bootstrap anymore and there's nowhere to figure out why or how to fix it. Sad.
<dagelf> Okay, I just created another user on the system now it magically works.
<dagelf> I'll try wiping the .juju path and re-running init. Obviously something bad was left there...
<sarnold> surprising :/
<dagelf> Well, there are some charm caches,  environments.yaml and local.jenv
<dagelf> I'm guessing something in local.jenv did that
<AskUbuntu> Juju deploy of Charm (Mysql) in MAAS provider failing after successful bootstrap. Juju status stuck in "Pending" state | http://askubuntu.com/q/364714
<Preytell> when deploying to ubuntu MAAS, I have the regional controller all ready to go, and I have nodes ready, but when I try to run 'juju sync-tools' I am getting: ERROR error parsing environment "maas": no public ssh keys found
<Preytell> I can add this but I am unsure which keys it is looking for
<marcoceppi> Preytell: you have to add your SSH keys to MAAS, then make sure it's either the id_rsa in your ~/.ssh directory or you can specify your public key in .juju/environments.yaml as "authorized-keys"
<Preytell> nod. I have done that, and I checked to make sure that the key was correct, and in the ~/.ssh directory. I will try adding to environ file.
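A minimal sketch of the environments.yaml option marcoceppi describes; the environment name, settings, and key material are placeholders, not a working config.

```yaml
# Hypothetical ~/.juju/environments.yaml fragment showing the
# "authorized-keys" option; the key below is a placeholder.
environments:
  maas:
    type: maas
    # ...other maas settings elided...
    authorized-keys: ssh-rsa AAAA... user@host
```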
#juju 2013-10-24
<Preytell> seems to work if I add it to the environ file... hmmm... wonder why it didn't work the other way.
<cjohnston> heya.. I'm trying to setup a local env for using Juju... I'm getting ERROR error parsing environment "local": no public ssh keys found when running juju bootstrap.. the docs don't seem to cover anything about adding public keys somewhere.. any ideas?
<AskUbuntu> Reset MAAS after loosing Juju configuration? | http://askubuntu.com/q/364821
<Roconda> cjohnston: try: ssh-keygen
<cjohnston> Roconda: I have ssh keys
<Roconda> cjohnston: you sure you've got a public one? Cause I had the same error and generating my keys solved it
<cjohnston> Roconda: yup
<Roconda> cjohnston: maybe your key hasn't the right permissions. Could you try: mv ~/.ssh ~/.ssh_backup && ssh-keygen
<Roconda> just to be sure
<cjohnston> Roconda: it works for ssh elsewhere
<cjohnston> It's 644 which is what a pub key is supposed to be IIRC
<Roconda> cjohnston: well thats weird. If generating new ones doesn't do the job then I would not know what else to do. I'm not a JuJu dev or something
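The permission modes cjohnston mentions are the conventional ones (600 for the private key, 644 for the public key); a throwaway demonstration with empty placeholder files, not real keys:

```shell
# Create placeholder key files and show their modes; GNU stat assumed.
tmp=$(mktemp -d)
touch "$tmp/id_rsa" "$tmp/id_rsa.pub"
chmod 600 "$tmp/id_rsa"       # private key: owner read/write only
chmod 644 "$tmp/id_rsa.pub"   # public key: world-readable is fine
stat -c '%a %n' "$tmp/id_rsa" "$tmp/id_rsa.pub"
```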
<TSK> Greetings.  Been trying to follow the Juju "Getting Started" doc, but upon reaching the "sudo juju bootstrap" I get "ERROR Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused" and hours of web search and poring over documentation has turned up a whole lotta nothin'.  Anyone else have a similar issue, or better yet, anyone know where I ought to be reading to solve this issue?
<marcoceppi> TSK: What version of Ubuntu are you on?
<TSK> Currently on 13.04 tho I have been considering upgrading soon.
<marcoceppi> TSK: What version of Juju do you have?
<TSK> 1.16.0-raring-i386
<marcoceppi> TSK: do you have the "juju-local" package installed?
<TSK> Aye, I do indeed.
<marcoceppi> TSK: Are you on an encrypted home directory?
<TSK> I am not.
<marcoceppi> does ps -aef | grep mongo show a mongod process running?
<TSK> Sure does.
<TSK> Yes indeed.
<marcoceppi> TSK: So, we encountered something like this yesterday, where it took mongod longer to start up than bootstrap expected, resulting in this false positive
<TSK> How long did it take?  I've been tryin' for a few hours now to get this runnin'.
<marcoceppi> TSK: can you run `sudo juju destroy-environment` then rebootstrap one more time with --debug --show-log flags?
<marcoceppi> TSK: bootstrap will only wait for X seconds before giving up, even if mongod/juju-db starts shortly after that time
<TSK> By golly, that seems to have fixed 'er right up.
<TSK> Thank you very much.  Seems to be workin' as expected now.
<marcoceppi> TSK: the next time you bootstrap it might fail, let me know if it does, we may increase the timeout for bootstraps waiting on mongodb
<TSK> I surely shall.  Thank you.  It's good to know where the source of the issue is.  I appreciate the help greatly.
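The recovery sequence marcoceppi suggests above, as a dry run: the `juju` function is a stub for illustration, not the real client (which, for the local provider in this era, also needed sudo in front).

```shell
# Stubbed juju CLI (illustration only) showing the destroy-and-rebootstrap
# sequence with debug logging enabled.
juju() { echo "would run: juju $*"; }

juju destroy-environment
juju bootstrap --debug --show-log
```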
<TSK> The web was not much help, sadly.  Is this a fairly new issue?
<marcoceppi> TSK: I first heard of it yesterday. to my knowledge it hasn't been an issue
<TSK> Right on.  That's good.  Explains why there's not much info on the web about it.
<TSK> If not many folk have seen the issue, then not many folk would have posted about it yet.  :)
<TSK> Juju is pretty neat thus far.  Might have to use this to provision all my virtual machines from now on.
<[TSK]> Howdy.  So, now I'm gettin' that same error as before, except when trying to access the same already bootstrapped environment after a reboot.
<[TSK]> Is there somewhere in the code I should look/test changing some variables to see if it helps anything?  I'm familiar with Python if that helps any.
<marcoceppi> [TSK]: Well, this is golang, so it's compiled
<marcoceppi> [TSK]: So when you run juju status, you're getting an error?
<[TSK]> Aye.  Same error as when trying to bootstrap.
<marcoceppi> [TSK]: But you said it's already bootstrapped?
<marcoceppi> [TSK]: You only bootstrap once, unless you've torn down the environment since last we talked
<[TSK]> Aye.  Already installed some things and tested them, then restarted the system to see how it'd go.
<[TSK]> Aye.  It was already bootstrapped and working before the reboot.
<marcoceppi> [TSK]: So, the local environment should survive reboot, what happens when you run `juju status --show-log --debug`
<[TSK]> 2013-10-24 16:50:25 INFO juju.provider.local environprovider.go:32 opening environment "local"
<[TSK]> 2013-10-24 16:50:26 ERROR juju supercommand.go:282 Unable to connect to environment "local".
<[TSK]> Please check your credentials or use 'juju bootstrap' to create a new environment.
<[TSK]> and then
<[TSK]> Error details:
<[TSK]> Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused
<[TSK]> (And of course before that all the key junk and stuff)
<[TSK]> Not seeing anything in there that really looks useful to ME, bein' that I'm entirely new to juju as yet.
<[TSK]> Looks like such a potentially interesting tool, too.  I'd be bummed out if I couldn't actually use it in practice.
<marcoceppi> [TSK]: Well the Local provider is a very special provider and relatively new
<marcoceppi> [TSK]: try `sudo juju destroy-environment` then `sudo juju bootstrap` again. While it's designed to survive reboots, it may not have in this case
<[TSK]> destroy and re-bootstrap does sorta fix the problem but of course that leaves me with a shiny new empty environment again.
<[TSK]> LOVE the use of YAML, BTW.  SO much better than say XML for example.
<marcoceppi> [TSK]: yes, I'll look into reboot survival
<[TSK]> Looks like the bootstrap issue seems to happen on all three of my machines (including the more powerful gaming rig I just tested it out on for curiosity.)  I'm on a 50 megabit down/20 megabit up DSL line, and 100 megabit LAN, so that should not affect things, should it?
<[TSK]> Of course your original suggestion to destroy and re-bootstrap works equally well on all three machines, too.
<marcoceppi> [TSK]: the bootstrap problem being the timeout?
<[TSK]> Aye
<marcoceppi> Or the reboot survival?
<[TSK]> Have not yet tested reboot survival on the other two machines yet.
<marcoceppi> [TSK]: interesting. Let me ask the core developers what logs to collect to help troubleshoot this problem
<[TSK]> One is a netbook, the other is the gaming rig.
<[TSK]> Figured I'd test on a wide variety of hardware to see if the issue was specific to any one machine, or if it was on all machines.
<[TSK]> (The first test was on my development server.)
<marcoceppi> [TSK]: thanks for taking the time to test this out
<[TSK]> Hey, no problem.  I'm always happy to help when I'm able.  :)
<TSK> This is actually the first real project I've ever played with that was written in go.  (I'ma Python guy myself)
 * TSK goes to search GitHub for Juju source.
<marcoceppi> TSK: http://launchpad.net/juju-core
<TSK> Looks like someone mirrored that at GitHub about 9 months ago.  https://github.com/prabhakhar/juju-core
<marcoceppi> TSK: yeah, that's largely out of date. We're looking to mirror it at https://github.com/juju/juju-core
<TSK> Aye.  Doesn't look like they got ANY of the actual code history, either.  Just a straight mirror of the state of the code 9 months ago.
<marcoceppi> TSK: right
<TSK> Ah, looks like your original source repo is in bzr
 * TSK installs bazaar
<TSK> OOoo neat!  There's a plugin for my filemanager for Bazaar now.
<TSK> sudo aptitude show dolphin-plugins-bazaar
<TSK> Only of interest to KDE users, but still...  Nifty.  :)
<Azendale> Could someone explain to me what the 'vip' and 'vip_cidr' options for charms do/are? I get the idea that they are a shared/load balanced IP to have a HA API. But should it be an address in the same range as the addresses the machines already have assigned by MaaS (but one that the DHCP server won't hand out)? Or should it be on a different NIC on the server and on its own subnet?
<marcoceppi> Azendale: to my understanding it can be either
<marcoceppi> Azendale: I believe the times I've used it, it's been on the same NIC but outside the range of IPs that the DHCP server provides
<Azendale> marcoceppi: Ok, thanks. I think I'll try that as that'll probably be easiest to set up for now. (I imagine there is some more configuration if you want to specify that the IP be on a specific interface)
<zradmin> Is there any way to force a service to delete itself? I destroyed it yesterday but juju is still reporting the service exists
<mgz> zradmin: using `juju destroy-environment` takes everything down
<zradmin> mgz: yeah i don't want to do that for just the one service though
<mgz> zradmin: running it again wouldn't hurt, and you need terminate-machine after to actually kill resource usage
<zradmin> mgz: my process to down a service looks like this: remove the units from the service, destroy the service, terminate machines... running a juju stat service just brings up a blank result
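The teardown sequence zradmin describes, as a dry run: the `juju` function is a stub for illustration, and "myservice" plus the unit and machine numbers are placeholders.

```shell
# Stubbed juju CLI (illustration only) for the unit -> service -> machine
# teardown order described above.
juju() { echo "would run: juju $*"; }

juju remove-unit myservice/0      # drop the units first
juju destroy-service myservice    # then the service itself
juju terminate-machine 2          # finally reclaim the machine
```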
<AskUbuntu> What's the lastest version of Openstack that I can deploy with Juju? Where can find this information? | http://askubuntu.com/q/365216
* Topic unset by jcastro_ on #juju
* jcastro_ changed the topic of #juju to: Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP
* jcastro_ changed the topic of #juju to: Welcome!! Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP
#juju 2013-10-25
<routelastresort> I know that Ubuntu GNOME is like 99% the same, but maybe I'm the only person in the world using juju + Saucy GNOME
<routelastresort> far fewer issues on my new stock 13.10 system
<routelastresort> I'm searching the bugs, but is there a bug for "dying" services that can never be deleted on local provider?
<routelastresort> I've just been destroying my environment because it's easier
<sarnold> routelastresort: I think zradmin was fighting that earlier -- zradmin, did you ever find an answer for that?
<routelastresort> lp:charms/pbuilder
<routelastresort> written for quantal
<routelastresort> fails because precise doesn't have pbuilder-scripts w/o backports
<routelastresort> easy fix, but a) dying services should still be able to be killed
<routelastresort> and b) should it be letting me install Charms that aren't for precise in the first place??
<zradmin> sarnold: i found a subrelation that didn't destroy itself... it wouldn't show under juju status, but if i wrote it to a file it showed up
<sarnold> zradmin: crazy :) thanks
<zradmin> sarnold: no problem, i kind of wish there was a juju "view tasklist" so you could see what was holding it up sometimes
<sarnold> marcoceppi: ^^ I like zradmin's idea of a "view tasklist"  :)
<marcoceppi> sarnold: zradmin there's talk of exposing this information from the juju-core team this cycle. Not sure where it lands on the roadmap
<sarnold> marcoceppi: *nod* you guys are ambitious :)
<nesusvet_> Hello everyone. I have the following question: I tried to deploy hosts via MAAS and everything went well after the juju bootstrap command, but after deploying the whole environment, I see only one machine after the "juju status" command
<AskUbuntu> Not able to find juju charm mysql root password? | http://askubuntu.com/q/365557
<marcoceppi> nesusvet_: How many nodes do you have in MAAS?
<Azendale> I have machines running on MaaS using Juju. Some of them failed to deploy because  a hook didn't run because of an (invalid) setting I set in the config. In the UI, I tried marking them as resolved, and then trying to remove them (and repeated through a few cycles of them going green and and then red). (I believe doing resolve + remove will make juju not get stuck on the fact that the hook didn't work, and let juju just get rid of the machine)
<Azendale> Now I have units that seem stuck and say "agent-state: error, life: dying" in juju status. I've tried destroying the units and the machines they are on. Is there any way to just give up on those units and recycle the machines the are on for another try?
<Azendale> If I just stop those machines in MaaS (I'm pretty sure that unassigns them) will juju notice, or will it just break juju more?
<kurt__> you have to mark them as "resolved"
<kurt__> "juju resolved"
<kurt__> then they will destroy
<Azendale> That's what I was trying though the juju gui, just tried it on the command line and I'm getting conflicting messages. "ERROR cannot set resolved mode for unit "ceph-osd/1": already resolved" when I tried to mark it resolved, but status says "agent-state: error, agent-state-info: 'hook failed: "install"',  life: dying"
<kurt__> you may need to destroy env and start again
<kurt__> if I cannot destroy services after resolving, that is tropically the path I take
<kurt__> s/tropically/typically
<Azendale> kurt__: not my first choice (other things that took a bit are working in the environment) but it is a test environment, so if I have to I can do that
<kurt__> Azendale: understood and definitely not mine either
<Azendale> kurt__: I've typically done the same thing the other times I've run into this
<Azendale> kurt__: I've just been trying to learn a bit more about how to fix things instead of always just starting over
<kurt__> Azendale: your cause is noble.  If you have the time, create an AskUbuntu and the experts will get to it, just perhaps not in the timeframe you need.
<kurt__> I know they are all on a plane today
<mgz> Azendale: it never hurts to file a bug against juju-core with all the logs attached for cases like this
<kurt__> that too :)
<mgz> you want all-machines.log from machine 0 if you're not sure which particular machine was at issue
<Azendale> kurt__: ok, I probably will ask on askubuntu. I started with IRC because of the faster iteration, but I'm trying to get into the habit of documenting what I've learned on AskUbuntu because it seems like there's a lot of semi-confused people trying this stuff
 * Azendale realizes I confused kurt__ and mgz
<kurt__> confusion?
<kurt__> lol
<Azendale> thanks kurt__ and mgz, I will have to get back to this in about an hour or two, but I will do what you suggested
<kurt__> Azendale: good luck
<AskUbuntu> Juju remove units stuck in dying state so I can start over? | http://askubuntu.com/q/365724
<AskUbuntu> JUJU and ERROR environment has no access-key or secret-key | http://askubuntu.com/q/365734
<AskUbuntu> What's the correct way to share a Juju environment? | http://askubuntu.com/q/365807
<zradmin_> is there anyone on dealing with the precise/havana charm updates?
<marcoceppi> zradmin_: care to elaborate?
<zradmin_> marcoceppi: sure thing, I've been setting up a havana lab with juju and everything seems to stand up just fine, except quantum/neutron doesn't seem to work at all, which is preventing me from launching anything in nova or logging into horizon. trying to run a neutron net list gives me a 503 server unavailable error
<marcoceppi> zradmin_: jamespage and adam_g should be working on that
<zradmin_> marcoceppi: ok hopefully they're on today :)
#juju 2013-10-26
<Azendale> zradmin_: could you elaborate (or just point me to something that tells me) what is up with quantum/neutron in havana? I'm trying to set up my first HA setup (in a test lab) and I have been trying to get Quantum and Havana
<AskUbuntu> If a Juju HA Cluster charm fails to deploy, will removing the HA cluster charm service mess up the service that was behind it? | http://askubuntu.com/q/366059
<Azendale> I'm trying to debug why the havana rabbitmq-server's ha-relation-changed hook is failing for me. Looking at it with the python debugger, it looks like it is trying to get the IP address of an interface that is part of a bridge. I have posted the log of the debugging session at http://paste.ubuntu.com/6308674/ and the section of my config for the two charms at http://paste.ubuntu.com/6308674/. Could someone tell me if it's something I'm doing/setting wrong? Or is
<Azendale> specifically, is setting "vip_iface: 'eth0'" for the rabbitmq-server breaking it?
 * Azendale decides he'll try re-deploying rabbitmq without that option and find out
<stryderjzw> How does one debug a install hook? Just destroy-service and deploy? Can I just run the hook?
<Azendale> stryderjzw: first, run 'juju status' so that you know exactly which unit of the service failed
<Azendale> stryderjzw: (unit(s) are grouped together to make a service, which is what you deploy)
<stryderjzw> I see
<Azendale> stryderjzw: a unit name should have something like 'mysql/2' (unit names end in a slash and some number)
<Azendale> stryderjzw: then run 'juju debug-hooks <unitname>' from the juju client
<stryderjzw> My node-app/0 install hook is failing
<stryderjzw> so i'd have to run juju resolved --retry after that?
<Azendale> stryderjzw: yep, you got it
<stryderjzw> Azendale: Thanks!
<Azendale> stryderjzw: it will ssh you to the box. Then when you run retry, it will start another prompt for you with the name of the hook
<Azendale> stryderjzw: then you just want to run/debug the script 'hooks/<hook_name>', which in your case would probably be 'hooks/install'
<Azendale> stryderjzw: See (if you haven't already) http://askubuntu.com/questions/362687/juju-debug-hooks-how-to-run-hook-in-debug-terminal-or-get-more-information
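The sequence Azendale walks through above can be sketched as a shell transcript; a minimal sketch, assuming the unit name (node-app/0) and hook name (install) stand in for whatever `juju status` actually reports, and that this is run against a bootstrapped juju 1.x environment:

```shell
# 1. Find the failed unit: look for agent-state: error in the output.
juju status

# 2. Attach a tmux session on the failed unit's machine
#    (replace node-app/0 with your unit from juju status).
juju debug-hooks node-app/0

# 3. In a second terminal, re-queue the failed hook.
juju resolved --retry node-app/0

# 4. Back in the debug-hooks session, a new window opens named after
#    the hook; run the script by hand to see exactly where it fails:
./hooks/install
```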
<stryderjzw> Azendale: Thanks. So, when I make changes to the hooks/install... what's the best way to update the charm?
<Azendale> stryderjzw: no problem, you're welcome
<Azendale> stryderjzw: unfortunately, I really don't know the best way to update. I'm learning it from the user side towards learning how to debug/fix, so I haven't tried to develop a charm before -- I've only tried to get it to work when it breaks
<stryderjzw> Azendale:  Ah. This node-app charm doesn't take a PRIVATE repo from github. So I'm trying to modify it.
#juju 2013-10-27
<Azendale> So, I tried with a fresh rabbitmq install without the "vip_iface: 'eth0'" setting and the HA charm still had trouble. So, I went looking for where it was getting the interface value, and the hacluster charm is getting it from relation-get. So I went and looked at the rabbitmq charm side, and it is getting it from config-get. So, I guess I'm trying to figure out where the default settings are set?
<melmoth> Azendale, you can change the default setting in a yaml file to be used with --config when you deploy rabbitmq
<melmoth> the default value is set in the config.yaml file that comes with the charm itself.
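In other words, the charm's config.yaml supplies the defaults, and a user-supplied YAML file passed to `juju deploy --config` overrides them per service. A minimal sketch of that merge; the option names (vip_iface, ha-bindiface) mirror the ones discussed here, and the dict shapes are illustrative, not the real charm metadata format:

```python
# Sketch: how deploy-time config overrides a charm's config.yaml defaults.
# Option names below are just the ones from this conversation.

def effective_config(charm_defaults, user_overrides):
    """Start from the charm's config.yaml defaults, then apply overrides."""
    config = {name: spec["default"] for name, spec in charm_defaults.items()}
    config.update(user_overrides)
    return config

# Stand-in for the charm's config.yaml "options" section.
charm_defaults = {
    "vip_iface": {"type": "string", "default": "eth0"},
    "ha-bindiface": {"type": "string", "default": "eth0"},
}

# Stand-in for a file passed via `juju deploy --config myconfig.yaml`.
user_overrides = {"vip_iface": "br0"}

print(effective_config(charm_defaults, user_overrides))
# {'vip_iface': 'br0', 'ha-bindiface': 'eth0'}
```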
<Azendale> melmoth: ok, so that file ISN'T just documentation
<melmoth> that's my understanding.
<Azendale> melmoth: thanks, I was hoping to see if I could submit a patch/branch
<Azendale> melmoth: since I just used the default settings and it broke
<melmoth> hmm, last time i tried hacluster/rabbitmq i did set both vip-interface and ha-bindiface (even though i set them to the default value)
<melmoth> and it 'just worked', well, sort of :)
<Azendale> melmoth: hm, what version (I'm using Havana)? I didn't set it so it defaulted to "eth0". (I'm assuming that you are saying that you set these on the rabbitmq charm?)
<Azendale> melmoth: Here's the config I used http://paste.ubuntu.com/6308688/
<melmoth> i was deploying stuff on an openstack cloud, i m not sure of the release of.
<melmoth> and yep, those were set in the rabbitmq config file
<melmoth> Azendale, this was my test case http://pastebin.com/cKmd8j0V
<melmoth> (i did set stuff for fsid and monitor secret, i just put * in the pastebin)
<Azendale> melmoth: do you still have that setup running? It would be interesting if you could ssh to one of the rabbitmq servers and do an ifconfig
<melmoth> let me check.
<melmoth> Azendale, http://pastebin.com/g6dr8qQ7
<Azendale> melmoth: ok, I see why it worked. The newer version (or just something quirky about how it set mine up) puts the ethernet interface into a bridge, and then puts all the addresses on the bridge interface. The hook in the newer version breaks when it tries to get the IP assigned to the ethernet interface, because the ethernet interface has no IP; the bridge it is part of has the address
<Azendale> melmoth: thanks, that probably narrows it down to being in just the Havana version (you're on Grizzly, which is the version just before the one I'm running. They go in alphabetical order just like Ubuntu)
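The failure mode Azendale describes can be sketched in a few lines: the hook looks up the IP of the configured interface (eth0), but once eth0 is enslaved to a bridge the address lives on the bridge, so the naive lookup finds nothing. The `interfaces` dict is a hand-built stand-in for `ip addr` output, not anything the real charm code uses:

```python
# Hand-built stand-in for `ip addr` output: eth0 is enslaved to br0,
# so its address list is empty; the address moved to the bridge.
interfaces = {
    "eth0": {"addresses": [], "master": "br0"},
    "br0": {"addresses": ["10.0.0.5"], "master": None},
}

def get_iface_addr(iface):
    """Naive lookup, like a hook that trusts vip_iface blindly."""
    addrs = interfaces[iface]["addresses"]
    if not addrs:
        raise ValueError("%s has no address" % iface)
    return addrs[0]

def get_iface_addr_bridge_aware(iface):
    """Fall back to the bridge the interface is enslaved to."""
    info = interfaces[iface]
    if not info["addresses"] and info["master"]:
        return get_iface_addr(info["master"])
    return get_iface_addr(iface)

# get_iface_addr("eth0") raises ValueError -- the hook's failure.
print(get_iface_addr_bridge_aware("eth0"))  # 10.0.0.5
```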
<melmoth> i deployed charms from the grizzly cloud archive, but i have no idea what the openstack version of the cloud i was deploying stuff into is
<Azendale> melmoth: ah, so you were testing the charm on top of an openstack cloud then?
<melmoth> yes.
<melmoth> i was not really interested in using rabbit (i bet it would not work because i will not have multicast in this environment), just trying to reproduce a pacemaker issue
<Azendale> melmoth: ah, ok. Well, thanks for the help anyways! :)
<Azendale> Where are the default settings for a charm stored? I tried changing config.yaml, but for some reason, when I tested it it didn't make any change. I assume it's set somewhere else?
#juju 2014-10-20
<lmpeiris> Hi guys. I have deployed a juju bundle in AWS. When using command 'juju status' it says (some) of the services are running
<lmpeiris> but i don't see any evidence that they are
<lmpeiris> and cannot find where the files have been copied and logs are printed
<lmpeiris> is there a default location where juju deploy / print logs
<gnuoy> jamespage, Small bug fix to neutron-api charm https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/fix-db-migration-bug/+merge/238847
<gnuoy> jamespage, Charmhelper 0mq mp https://code.launchpad.net/~openstack-charmers/charm-helpers/0mq/+merge/238855
<jamespage> gnuoy, +1
<gnuoy> ta
 * jamespage twiddles his thumbs some more
<jamespage> gnuoy, grep 'Unknown hook' all-machines.log  | wc -l
<jamespage> gnuoy, guess how many
<gnuoy> 300?
<jamespage> 62620
<gnuoy> ah
<gnuoy> I think you are right, there may be some scope for optimisation
<jamespage> gnuoy, just removing those symlinks is an easy and low risk optimization
<jamespage> gnuoy, I'll figure out which ones...
<jamespage> gnuoy, http://paste.ubuntu.com/8600096/
<gnuoy> ok
<ahasenack> hi guys,
<ahasenack> can a relation be established, but no units take part in it?
<ahasenack> basically, relation_ids(relation) returned something
<ahasenack> but relation_list() for that relation id is empty
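Yes, that can happen, e.g. while the remote side is still coming up or going away, so hooks usually guard against it. A sketch of the defensive pattern, using stand-ins for charm-helpers' relation_ids/related_units (the real ones live in charmhelpers.core.hookenv; the "db:0" topology here is fabricated for illustration):

```python
# Stand-in topology: the "db" relation is established (it has an id),
# but no remote units have joined it yet.
_topology = {"db:0": []}

def relation_ids(relation_name):
    """Stand-in for charmhelpers.core.hookenv.relation_ids."""
    return [rid for rid in _topology if rid.startswith(relation_name + ":")]

def related_units(relation_id):
    """Stand-in for charmhelpers.core.hookenv.related_units."""
    return _topology[relation_id]

def ready_db_units():
    """Guard against relations that exist but have no units yet."""
    units = []
    for rid in relation_ids("db"):
        members = related_units(rid)
        if not members:          # relation established, nobody home yet
            continue
        units.extend(members)
    return units

print(ready_db_units())  # []
```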
<gnuoy> jamespage, fix for charm helper bug https://code.launchpad.net/~gnuoy/charm-helpers/restore-distro-src/+merge/238908
<gnuoy> jamespage, sorry to nag but I need that ^ mp for the percona fix
<jamespage> gnuoy, +1
<jamespage> sorry
<gnuoy> ta
<gnuoy> jamespage, fwiw zmq mps http://paste.ubuntu.com/8603467/
<Tug> Hi, is there any plan to have juju support google compute engine?
<lazyPower> tvansteenburgh: mattyw has a branch in review that fixes the issue with config we saw in brussels: https://code.launchpad.net/~mattyw/charm-helpers/config-keys/+merge/237968
<lazyPower> mattyw: thanks for the MP. :)
<mattyw> lazyPower, always a pleasure seldom a chore
<tvansteenburgh> lazyPower, mattyw: thanks, will review today
<mattyw> tvansteenburgh, appreciated thanks
<jrwren> charmers: why doesn't my PR here: https://code.launchpad.net/~evarlast/charms/trusty/elasticsearch/add-version-config/+merge/237916  show up here: http://review.juju.solutions ?
<tvansteenburgh> marcoceppi|unavail: ^
<lazyPower> jrwren: i have a sneaky suspicion that ingest has gone away.
<lazyPower> I'm not seeing anything new in here, and i know work has been done
<jrwren> lazyPower: what does ingest mean in this context?
<lazyPower> jrwren: meaning the review queue trolls launchpad looking for items to list in the queue. it's a poll model, every x minutes.
<lazyPower> so i suspect the launchpad crawler has tanked - but i have no evidence to back up that suspicion.
<jrwren> lazyPower: got it, thanks. I wanted to make sure it wasn't at all related to charmstore ingestion.
<lazyPower> jrwren: nah, not at this time anyway
<lazyPower> jrwren: remind me on wednesday when i return to take a look at that MP and i'll add it to my list. I'll be doing a full day in the queue as we're kind of backed up and I have an opportunity to boogie through the open MP's
<lazyPower> stub: speaking of which, i never got your updates to postgres to deploy. I'll be re-visiting that on wednesday. Have you had a chance to run it locally on that rollup branch?
<stub> lazyPower: Yes, the integration tests mostly run apart from the usual flakiness, and it is better than trunk.
<lazyPower> stub: well I tried deploying the charm standalone without running the tests
<lazyPower> as the tests were giving me false positives due to general wifi flakiness in brussels
<lazyPower> it may have been purely due to the networking problems I had... sorry that's taken me so long to get back to. Running the conf. circuit has been nuts
<lazyPower> i haven't forgotten about you though, and will get that out sooner rather than later.
<stub> lazyPower: The charm deploys standalone here too... just double checked ;)
<lazyPower> stub: great news. I'll get to that when i'm not on the hook for demoware, and 1:1 with you about concerns you had re: landing postgres modifications.
<lazyPower> i know there was a bit of back and forth in brussels and I want to capture that feedback so i can bring it up at our weekly.
<stub> ta
<lazyPower> tvansteenburgh: you around?
<tvansteenburgh> lazyPower: yup
<lazyPower> tvansteenburgh: mwenning had an issue with amulet and filed a bug in LP, just to confirm are we dual-tracking the bugs or have you shifted to only monitoring github issues?
<lazyPower> here's the issue in question: bind 127.0.0.1
<lazyPower> well thats not the issue >.>
<lazyPower> https://bugs.launchpad.net/amulet/+bug/1375344
<mup> Bug #1375344: Amulet fails to bring up a machine to run relation-sentry running openmanage charm <Amulet:New> <https://launchpad.net/bugs/1375344>
<tvansteenburgh> lazyPower: still tracking bugs in LP
<lazyPower> ah ok. Just curious as I think the last time i asked it was referenced that issues are being triaged in gh. perhaps I misunderstood.
<tvansteenburgh> lazyPower: either place is fine, i'm watching both
<lazyPower> ty for clarification.
<tvansteenburgh> lazyPower: re that particular bug, i'll take a look soon. i'm really close to landing the new sentry implementation
<lazyPower> sounds good to me. I just wanted it on your radar since its blocking mwenning
<tvansteenburgh> roger, thanks
<mwak> hi there, someone test online labs cloud?
<mwak> http://labs.online.net
<mwak> based on physical arm hardware
<rsynnest> hi all, is there a place to share/find "unapproved" or WIP juju charms other than github?
<rsynnest> or, if anyone here has heard of a gene BLAST charm someone is working on that would be cool too
<hatch> rsynnest: if the author put the charm on launchpad it may be in the charmstore under their username
<hatch> rsynnest: https://jujucharms.com/?text=blast doesn't look like it
<marcoceppi_> jrwren: I don't know why our review queue hates you so much
<jrwren> marcoceppi_: I can imagine a few reasons. It is probably REALLY smart.
<jrwren> marcoceppi_: and a good judge of character.
<marcoceppi_> I'll look in to it later
#juju 2014-10-21
<gnuoy> dosaboy, jamespage https://code.launchpad.net/~gnuoy/charms/trusty/percona-cluster/respect-the-vip/+merge/239010
<dosaboy> gnuoy: tx will take a look
<jamespage> gnuoy, dosaboy: that proposed percona change alters the ping-pong nature of the relation
<jamespage> gnuoy, the idea was that db_host is only set to the access_network IP address if the remote client provides a hostname that resides within it
<jamespage> i.e. that the connectivity is good
<gnuoy> jamespage, ok, I'll take a look
<jamespage> gnuoy, just thinking about it
<gnuoy> coreycb, I've got a few branches up for review if you have a moment http://paste.ubuntu.com/8612273/
<coreycb> gnuoy, will do
<gnuoy> thanks
<gnuoy> jamespage, dosaboy, I've updated https://code.launchpad.net/~gnuoy/charms/trusty/percona-cluster/respect-the-vip/+merge/239010
<jamespage> gnuoy, can you +1 me on https://code.launchpad.net/~james-page/charms/trusty/nova-cloud-controller/lp1375631/+merge/238882
<jamespage> that's causing unbalanced haproxy configs in nova-cc right now
<jamespage> gnuoy, actually I landed it on the basis of xianghui's +!
<jamespage> 1 rather
<jcastro> jamespage, can one of you guys look at this? http://askubuntu.com/questions/539327/avoiding-situation-when-nova-conf-file-overwritten-by-juju
<jamespage> jcastro, answered
<jcastro> nice!
<lazyPower> upboated
<rick_h_> jcastro: joining marketing call today?
<lazyPower> nice answer jamespage
<jcastro> rick_h_, I could, what's up?
<rick_h_> jcastro: just asking on behalf of sally
<jcastro> oh, on my way
<jcastro> rick_h_, can't join the hangout
<jcastro> is antonio there?
<jcastro> I'm trying to catch up on reviews
<rick_h_> jcastro: boo, no not in atm
<jcastro> it tells me "the party is over"
<jcastro> jamespage, last one, I promise: http://askubuntu.com/questions/538240/nova-get-password-returns-empty-string
<tvansteenburgh> mwenning: i took a look at https://bugs.launchpad.net/amulet/+bug/1375344
<mup> Bug #1375344: Amulet fails to bring up a machine to run relation-sentry running openmanage charm <Amulet:New> <https://launchpad.net/bugs/1375344>
<mwenning> tvansteenburgh, cool
<tvansteenburgh> mwenning: i wonder if you would be willing to try it with this branch of amulet: https://github.com/marcoceppi/amulet/pull/46
<tvansteenburgh> i'm curious to know the results
<mwenning> tvansteenburgh, ok, I'll give it a shot. Probably won't be today. Is there a ppa I can point to?
<tvansteenburgh> mwenning: sorry no, it's just a pending pull request at this point. you'll need to clone the repo and `python setup.py install` it
<natefinch> rick_h_: do you know if quickstart is supposed to include the bundle plugin?  The docs say it does, but someone on another IRC channel just said it's not getting recognized as a valid command
<natefinch> i.e. > juju bundle proof
<natefinch> ERROR unrecognized command: juju bundle
<rick_h_> natefinch: no, bundle is part of charmtools?
<natefinch> rick_h_: yeah, it seems to be installed with quickstart (at least, it was when I just tried it)
<natefinch> or hmm.. maybe it was already installed from charmtools
<rick_h_> natefinch: it's in the same juju stable ppa, but I don't recall us dep'ing on that.
 * rick_h_ looks
<rick_h_> natefinch: Depends: python (>= 2.7), python (<< 2.8), python:any (>= 2.7.1-0ubuntu2), python-jujuclient, python-yaml, python-urwid
<rick_h_> natefinch: which docs say it includes it?
<natefinch> rick_h_: https://juju.ubuntu.com/docs/charms-bundles.html
<rick_h_> natefinch: ah, yea that's doc failure. charmtools is not required to use quickstart and is a diff package entirely. The docs assume you have that already.
<natefinch> rick_h_: yep
#juju 2014-10-22
<gnuoy> jamespage, I've updated https://code.launchpad.net/~gnuoy/charms/trusty/percona-cluster/respect-the-vip/+merge/239010
<gnuoy> and here are the neutron db migration mps:
<gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/make-neutron-db-authority/+merge/239172
<gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/remove-db-migrations/+merge/239171
<jamespage> gnuoy, +1
<jamespage> looks good
<gnuoy> jamespage, thanks
<jamespage> gnuoy, so for those 2nd two, nova-cc will always handle the migration right?
<gnuoy> jamespage, right
 * jamespage thinks
<ayr-ton> marcoceppi, That wordpress-mu charm does exist? http://askubuntu.com/questions/256759/deploy-multiple-wordpress-sites-with-juju
<dpb1> If I'm on trusty, how do I launch utopic local-provider containers?  I get tools errors by default.
<natefinch> dpb1: I'm not sure we have utopic tools deployed in streams yet.  sinzui ?
<sinzui> what? We test and build the ubuntu devel series from the week it was created. we even say in the announcements that we created juju for utopic, and backported to trusty and precise.
<natefinch> sinzui: sorry, I don't always read the announcements that carefully
<dpb1> sinzui, natefinch: an extra parameter to bootstrap perhaps?
<dpb1> ah, maybe --series or upload-series?
<dpb1> seems weird though
<dpb1> let me try again to confirm
<dpb1> interesting
<dpb1> dpb@helo:slaves$ juju bootstrap -v
<dpb1> uploading tools for series [precise trusty]
<natefinch> dpb1: you can add default-series to your environments.yaml
<sinzui> dpb1, juju does not fully support non-lts series. juju lts clients will only provide tools for lts series. non-lts clients are designed to provide tools for itself and the lts
<dpb1> so, when bootstrapping the local env, it always uses upload tools?
<sinzui> dpb1, juju is a punk https://bugs.launchpad.net/juju-core/+bug/1302119
<mup> Bug #1302119: sync-tools ignores tools-metadata-url <ci> <sync-tools> <juju-core:Triaged> <https://launchpad.net/bugs/1302119>
<dpb1> interesting.
<sinzui> dpb1, I think you can bootstrap, then run juju sync-tools which will get all the combinations for the version of the state server
<dpb1> sinzui: doing 'juju --upload-tools --upload-series utopic,trusty,precise' worked it seems
<sinzui> excellent
<dpb1> sinzui: I'd still like to understand if the local provider uses upload-tools by default.  I guess I've never noticed that?
<dpb1> sinzui: I guess it's this question: http://askubuntu.com/questions/486542/upload-all-release-versions-of-tools-with-jujus-local-lxc-provider
<sinzui> dpb1, yep, except we changed the name of the option to --upload-series
<mattyw> dimitern, this is how I ended up wording it: https://bugs.launchpad.net/juju-core/+bug/1384336
<mup> Bug #1384336: juju resolved reports already resolved on a unit in error state if a hook is still running <juju-core:New> <https://launchpad.net/bugs/1384336>
<dimitern> mattyw, +1 sgtm
<jcastro> evilnickveitch, thanks for that doc merge
<natefinch> rick_h_, jcastro, anyone else.... do we have a canonical bundle that the tosca guys can try out?  The guy working on the tosca->juju translator is having trouble with bundle proof not liking anything he throws at it.
<rick_h_> natefinch: https://jujucharms.com/bundle/elasticsearch/15/cluster/ https://jujucharms.com/bundle/mongodb/5/cluster/ are two that we know are good/on the front page
<natefinch> rick_h_: what's with the random number in the url?
<jcastro> hah!
<rick_h_> revision
<rick_h_> :)
<natefinch> disapprove
<rick_h_> natefinch: it's changing :)
<natefinch> :)
<natefinch> how the hell do I get help on a plugin subcommand?  i.e. help on juju bundle proof?  juju help bundle proof  doesn't work, juju bundle help proof  doesn't work, juju-bundle help proof  doesn't work
<natefinch> all of them just return "usage: charm-help subcommand" and then a list of subcommands
<natefinch> charm-help bundle  and charm-help proof also don't work
<natefinch> jcastro: ^?
<jcastro> no clue
<jcastro> `juju-bundle proof .` is the only way I use it
<jcastro> `juju-bundle proof <target>` rather
<rick_h_> and charm prooof works
<rick_h_> it auto detects a bundle
<jcastro> do we use bundle for anything else?
<rick_h_> not that I can recall
<jcastro> seems like we could just sunset that command if we're not?
<natefinch> heh
<jrwren> natefinch: depends on the subcommand, but  juju charm add -h   just worked for me.
<natefinch> yeah, I figured that out eventually.... it just is infuriating that "help" is a valid subcommand but doesn't actually do anything useful except print out the subcommands
<natefinch> this happened the last time I tried to get help on charm-proof (which is evidently what juju-bundle is, just renamed... which means the help says "usage: charm-proof" even though I'm invoking it using "juju bundle")
<jrwren> natefinch: juju is amazing. :)
<natefinch> jrwren: well, to be fair, it's the plugin's fault, not juju's :)
<natefinch> pretty sure I already filed a bug about this on charm-proof...
<natefinch> found it - https://bugs.launchpad.net/charm-tools/+bug/1327179
<mup> Bug #1327179: charm help should show how to get help about a specific command <Juju Charm Tools:New> <https://launchpad.net/bugs/1327179>
<marcoceppi> ayr-ton: no, not yet
<bloodearnest> hazmat: heya, another small deployer MP for you: https://code.launchpad.net/~bloodearnest/juju-deployer/annotate-branches/+merge/239270
<bloodearnest> going to start work on improving diff capabilities next
<lazyPower> marcoceppi: setting the default instance type for AWS in ~/.juju/environments.yaml is "default-instance-type: m1.medium" - for example, right?
<lazyPower> i'm going to plug this into the docs as well - since it appears to be missing from the amazon provider docs.
<marcoceppi> that's a super deprecated field that probably doesn't work anymore
<marcoceppi> juju bootstrap --constraints is the way to set this stuff
<lazyPower> oh! i didn't know it was deprecated. ta!
<marcoceppi> where did you find that?
<lazyPower> well, it was from memory
<marcoceppi> I'm like 90% sure it's deprecated from pyjuju days
<lazyPower> it's now been deprecated from my memory
<marcoceppi> you might want to double check
<marcoceppi> maybe thumper knows
<lazyPower> i'd rather tell people to use --constraints myself....
<lazyPower> it keeps it consistent across providers
<marcoceppi> right
<marcoceppi> that, I believe, was the idea
<lazyPower> marcoceppi: yeah, that combined with set-constraints seems to be what i was looking for
<lazyPower> why bother knowing what instance types per provider.
<lazyPower> ta, again. ya kept me from spreading FUD
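The provider-agnostic replacement discussed above, as a minimal sketch; the instance type and constraint values are examples, assuming a juju 1.x client:

```shell
# Set constraints when bootstrapping the environment...
juju bootstrap --constraints "instance-type=m1.medium"

# ...or adjust environment-wide constraints later
juju set-constraints "mem=4G cpu-cores=2"

# ...or constrain a single service (here, a hypothetical mysql service)
juju set-constraints --service mysql "mem=8G"
```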
#juju 2014-10-23
<ktosiek> can juju work with mixed environment? Multiple networks, different hypervisors etc., or would I have to show it one API? (for example, setup OpenStack under Juju)
<ktosiek> and another question: what's the story for doing orchestrated, rolling upgrades with Juju? At the moment I have an Ansible setup that will take a batch of instances, remove them from loadbalancing and silence monitoring, upgrade each of them, wait for OK from monitoring, and only then put  them back into LB. Can I do something like that with Juju?
<gnuoy> jamespage, a couple of charmsync'y reviews if you have a moment:
<gnuoy> https://code.launchpad.net/~gnuoy/charms/precise/rabbitmq-server/sync-charmhelpers/+merge/239343
<gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/mysql/fix-source/+merge/239344
<jamespage> gnuoy, +1 to both - marked as such
<jamespage> gnuoy, how are we across the OS charms with regards to release to stable today?
<jamespage> gnuoy, I had a walk through some proposals last night
<gnuoy> jamespage, ta
<gnuoy> jamespage, I don't know of any that need to go in before the big switch.
<jamespage> gnuoy, https://docs.google.com/a/canonical.com/spreadsheets/d/1jMjgmlH0gcKRecsLVIZTK47X8Su8ASR4loVDmmCROOw/edit#gid=0
<gnuoy> jamespage, I'm not sure what 'done' indicates
<jamespage> gnuoy, mp's reviewed
<jamespage> anything merged if required
<gnuoy> jamespage, is the initial in the done column indicating who has reviewed the list or who had done the individual mps reviews?
<jamespage> gnuoy, its who has reviewed and said nothing needs to be landed
<jamespage> gnuoy, https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/next-multi-console-fix/+merge/233612
<gnuoy> jamespage, I don't get your comment there. It just runs on the leader
<jamespage> gnuoy, your proposed change relies on hook execution to move console-auth around
<jamespage> if the leader dies, nothing else will take over right?
<gnuoy> true
<ktosiek> hmm, is askubuntu.com a better place for questions about overall Juju usage (orchestration, assumptions about environment etc.)?
<thedellster> Hey hoping for a bit of help on an openstack install…
<thedellster> Anyone here able to provide help with that?
<thedellster> Hi all wondering if anyone knows how to make a dedicated network for nova cinder iscsi traffic on juju
<ktosiek> at what times is this channel more lively?
<sarnold_> seems busier to me in west coast afternoon
<ktosiek> and what timezone would that be? GMT+7?
<sarnold_> gmt-8
<ktosiek> thanks, I'll try nagging people then :-)
<sarnold_> :)
<jcw4> fwiw; I think folks from UTC-5 ish monitor this channel too
 * thedellster slaps head, should have asked ktosiek's question
<sarnold_> thedellster: I knew how to answer his :P
<thedellster> ;) will revert back at a more friendly west coast hour.
<sarnold_> tell me about it, this east-coast stuff is too early for me
<thedellster> Iâve always thought that I should live on the east coast and say I lived on the west coast. That way I could start working at 12pm and tell everyone itâs 9am.
<sarnold_> hahaha
<jamespage> coreycb, I'd like to take https://code.launchpad.net/~corey.bryant/charms/trusty/nova-cloud-controller/fix-step/+merge/238476 for release
<jamespage> could you update inline with my comments pls
<coreycb> jamespage, yes
<coreycb> jamespage, so, my main goal of adding that was so that I could test upgrades using ppa:ubuntu-cloud-archive/juno-staging prior to it landing in the UCA
<coreycb> jamespage, it should probably throw an exception, but then it would have to get hacked to let the code continue each release for early migration testing.. not a big deal I guess
<jamespage> coreycb, my intent is that we have update populated by b1 every cycle for the CA
<coreycb> jamespage, I like that.  alright I'll fix this up.
<mattyw> tvansteenburgh, have you seen this before from charmhelpers? https://bugs.launchpad.net/charm-helpers/+bug/1384723
<mup> Bug #1384723: charmhelpers.fetch.SourceConfigError: Unknown source: u'None' <Charm Helpers:New> <https://launchpad.net/bugs/1384723>
<tvansteenburgh> mattyw: introduced by a commit on 8/21
<tvansteenburgh> unknown sources used to just no-op silently, now they raise. jamespage fixed a particular case ('distro') on Monday, but i think we need to restore the original behaviour. too much stuff breaking on this now
<jamespage> tvansteenburgh, +1
<jamespage> tvansteenburgh, gnuoy fixed it actually
<tvansteenburgh> stub: are you interested in looking into this? iirc you're using the SourceConfigError?
<jamespage> gnuoy, 2014-10-23 14:10:31 INFO unit.nova-cloud-controller/0.shared-db-relation-changed context.go:473 oslo.config.cfg.ConfigFilesNotFoundError: Failed to read some config files: /etc/neutron/neutron.conf
<jamespage> from nova-cc
<jamespage> gnuoy, neutron.conf_unused
<jamespage> explosion
<gnuoy> ?????
<jamespage> gnuoy, the nova-cc charm always does the neutron migration right?
<gnuoy> argh
<gnuoy> I see
<jamespage> but if the neutron-api charm is related, it disables its neutron.conf and stops generating it
<gnuoy> yep
<jamespage> gnuoy, hence the explosion
<tvansteenburgh> mattyw: it appears to be an easy fix since SourceConfigError is only caught by tests, i'll make time for it today if you or stub can't
<mattyw> tvansteenburgh, did you have in mind anything more complicated than: if source is None or source == "None": ... ?
<gnuoy> jamespage, I could keep generating the file but remove the neutron-server service from the resource map
<tvansteenburgh> mattyw: replace the `raise SourceConfigError` with `pass`
<jamespage> gnuoy, yes
<jamespage> gnuoy, and the plugin as well
<tvansteenburgh> mattyw: so that anything undefined is a no-op
<jamespage> gnuoy, but that feels ugly
<jamespage> gnuoy, also we have a divergent charm issue in mysql - https://code.launchpad.net/~charmers/charms/trusty/mysql/trunk
<gnuoy> jamespage, it is fugly but I don't see another way
<mattyw> tvansteenburgh, If you're happy with that I'll submit something now for it
<jamespage> gnuoy, actually its ok as neutron now syncs the db for all plugins, not just the configured one
<tvansteenburgh> mattyw: it's not awesome but that's the way it was before 8/21, so +1
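The fix being agreed on here, sketched as a simplified stand-in for charmhelpers.fetch.add_source: an unrecognized source (including None or the string 'None') becomes a silent no-op instead of raising SourceConfigError, restoring the pre-8/21 behaviour. The recognized prefixes below are illustrative, not the real function's full matching logic:

```python
class SourceConfigError(Exception):
    """Raised when a source string can't be interpreted (kept for callers)."""

def add_source(source, sources=None):
    """Simplified stand-in for charmhelpers.fetch.add_source.

    Unknown sources (including None / 'None') are silently ignored
    rather than raising, per the discussion above.
    """
    if sources is None:
        sources = []
    if source is None or source == "None":
        return sources                      # no-op, not an error
    if source.startswith(("ppa:", "deb ", "cloud:")):
        sources.append(source)              # recognized: record it
        return sources
    return sources                          # unknown: no-op instead of raise

print(add_source("ppa:ubuntu-cloud-archive/juno-staging"))
print(add_source("None"))  # []
```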
<jamespage> gnuoy, gah - the precise charm does not have the allowed hosts fix either
<jamespage> gnuoy, you focus on nova-cc
<gnuoy> ack
<coreycb> jamespage, updated https://code.launchpad.net/~corey.bryant/charms/trusty/nova-cloud-controller/fix-step/+merge/238476
<mattyw> tvansteenburgh, https://code.launchpad.net/~mattyw/charm-helpers/unknown-source-noop/+merge/239374
<tvansteenburgh> mattyw: grep for SourceConfigError, I believe there are a few tests that will need adjusting
<mattyw> tvansteenburgh, ah dammit - I thought I'd added those
<jamespage> coreycb, the intent of that change is to enforce upgrade paths via the CA only right?
<coreycb> jamespage, correct
<coreycb> jamespage, which, it's the only charm that has that afaik
<coreycb> jamespage, it started as a workaround for upgrading to the juno staging ppa because the open() in step_upgrade was exploding
<mattyw> tvansteenburgh, sorry about that, forgot to commit the test again - I'll get this right one of these days https://code.launchpad.net/~mattyw/charm-helpers/unknown-source-noop/+merge/239374
<coreycb> jamespage, eh, not good.  upgrading nova-cc to openstack-origin=cloud:trusty-juno/proposed also blows up on open(/etc/apt/sources.list.d/cloud-archive.list) -- the file doesn't exist
<jamespage> coreycb, so this is a first upgrade from trusty to juno CA right?
<coreycb> jamespage, yeah, I'd only tested using the ppa before
<tvansteenburgh> mattyw: cool, thanks! will review shortly
<mattyw> tvansteenburgh, no hurry, and feel free to throw it away and fix it another way, just thought I'd do something quick while I was in that part of the code
<gnuoy> jamespage, lp:~gnuoy/charms/trusty/nova-cloud-controller/fix-db-migrations should fix it, testing now
<jcastro> tvansteenburgh, I think I found a bug in charm tools
<jcastro> W: README.md includes line 6 of boilerplate README.ex
<jcastro> W: README.md includes line 8 of boilerplate README.ex
<jcastro> when I run it on my newly made README.md
<jcastro> but lines 6 and 8 are newlines.
<marcoceppi> jcastro: it's just a confusing message
<marcoceppi> it's matching line 6 and 8 of the boilerplate of matchable lines
<marcoceppi> so not new lines and not headers
<marcoceppi> any line over X characters
<jcastro> oh, do you think it's the headers?
<jcastro> like #Usage  and so on?
<lazyPower> jcastro: more than likely. that happens to me constantly.
<marcoceppi> I didn't realize this was still happening to people
<cory_fu> Does anyone else find the screen keybindings for tmux in `juju debug-hooks` annoying?  If you prefer the default tmux keybindings, 'juju ssh $unit touch .tmux.conf && juju debug-hooks $unit' will make it use the default key bindings.
<marcoceppi> we should fix the consistent matches, or at the very least the messaging
<cory_fu> (You can also upload a custom .tmux.conf file that way, using juju scp.)
<jcastro> I would just ignore anything in # and ##
<lazyPower> cory_fu: good fodder for a quick post on the solutions blog.
<jcastro> those are supposed to be templated titles anyway
<cory_fu> Hey, good idea.  Thanks
<marcoceppi> jcastro: yeah, I'll patch that with everything else tvansteenburgh has unless he beats me to it
<jcastro> ok so I'll push it anyway?
<tvansteenburgh> marcoceppi: it's all you
<marcoceppi> we'll roll a release after this charmschool thing
<jcastro> marcoceppi, are you planning on doing your shift today or bumping it due to other stuff?
<marcoceppi> jcastro: I'm probably going to bump to later today
<jcastro> because if you are, the top three new charms on the top of the queue have good sign off from corey, I think you can just do a final check and promulgate
<marcoceppi> jcastro: sweet, thanks for the heads up
<jcastro> They Look Easy(tm)
<jcastro> lol, probably hard.
<coreycb> jamespage, I pushed a fix to https://code.launchpad.net/~corey.bryant/charms/trusty/nova-cloud-controller/fix-step/+merge/238476 that fixes the open() issue
<coreycb> jamespage, I'll run a full upgrade test now to juju set nova-cloud-controller openstack-origin=cloud:trusty-juno/proposed
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/fix-db-migrations/+merge/239389
<coreycb> jamespage, I mean,  I'll run a full upgrade test now to cloud:trusty-juno/proposed for all the charms
<jcastro> mbruzek, fyi, hpcc has a green box in the review queue, I didn't notice that before
<jcastro> marcoceppi, got a sec or you charm schooling?
<marcoceppi> jcastro: schooling
<mattyw> cory_fu, ping?
<cory_fu> mattyw: Hey, how's it going?
<mattyw> cory_fu, not bad thanks, hope you're well?
<mattyw> cory_fu, thanks for your review of my mongodb auth branch. I started looking at it but I had a question that I put on the review: https://code.launchpad.net/~mattyw/charms/precise/mongodb/auth_experiment/+merge/162887
<jcastro> tvansteenburgh, ok I've already fixed like 5 or 6, incoming into the queue
<jcastro> tvansteenburgh, this is much better than before. <3
<lazyPower> mattyw: I think your security concerns are valid
<tvansteenburgh> jcastro: \o/
<lazyPower> mattyw: the only issue with this model is that it has the potential to cause a temporary intermittent outage should the password change while the webapp is running and delivering.
<cory_fu> mattyw: I agree, it's less secure than not storing it, but the password is also available on the relation, so it's possible to retrieve it that way (though ever so slightly more difficult).  whit and I were just discussing this morning the need for a better credentials management system for charms
<lazyPower> mattyw: suggest to store a hash of the password to disk and validate against that hash. eg: if sha1sum(password) != cached_sha1_of_password
<jrwren> cory_fu: please update the docs with that information: https://juju.ubuntu.com/docs/authors-hook-debug.html  https://github.com/juju/docs/blob/master/src/en/authors-hook-debug.md
<cory_fu> mattyw: I'm not averse to recreating the password, and I gave the merge my +1.  You could also just touch a flag file to indicate that you'd already set the password for that relation.
<lazyPower> jrwren: did you ever get an answer re: your merge not showing up in the queue?
<jcastro> my merges are also not showing up in the queue
<cory_fu> lazyPower: You can't retrieve the password to test against the hash from the providing side of the relation
<jrwren> lazyPower: nope.
<lazyPower> cory_fu: ah good point.
<cory_fu> I think changing it each time is fine.  It was more of a passing thought, really.
<lazyPower> cory_fu: my only concern is that it will cause an outage
<cory_fu> I just thought it might be surprising for the admin if it changed out from under them.
<lazyPower> if your app cannot connect to mongo, your users will see a 500 error, or your daemon may panic
<mattyw> lazyPower, cory_fu I don't think there will be an outage if the relation changes at the moment, It just adds a new user, the old one is still valid
<lazyPower> ahhh, ok.
<cory_fu> True
<lazyPower> mattyw: good insight. ty for clarification
<lazyPower> tbh i had not looked @ the branch as of yet
<coreycb> jamespage, gnuoy: unit tests added to https://code.launchpad.net/~corey.bryant/charms/trusty/nova-cloud-controller/fix-step/+merge/238476
<mattyw> lazyPower, cory_fu It's still not great - but in a different way ;)
<lazyPower> was going based on the meta discussion in here
 * lazyPower returns to trolling in silence
<mattyw> lazyPower, I prefer that approach :)
<cory_fu> So that leads to a potential proliferation of users.  So, the option of storing a flag (either in config, StoredContext, or a flag file) saying that you'd created a user and set the password for a given relation would solve that without needing to store the password on disk
<lazyPower> jrwren: link me to your branch again if you dont mind
<lazyPower> i'll take a look - i'm going to be context switching to the queue as soon as ingest lands this last bundle
<mattyw> cory_fu, that's a good idea, sounds sensible
<cory_fu> :)
<mattyw> cory_fu, I'll make that change - thanks very much
<jamespage> coreycb, that looks good - did it test OK?
<coreycb> jamespage, yep tested ok
<mattyw> cory_fu, I'll also take a look at the amulet tests - I don't have much experience with those, but I need to get some
<jrwren> lazyPower: https://code.launchpad.net/~evarlast/charms/trusty/elasticsearch/add-version-config
<coreycb> jamespage, but ci caught me red handed with unit test issues!  those are fixed in that mp
<lazyPower> mattyw: i'm available to help with them as i wrote them :)
<lazyPower> jrwren: ta!
<jamespage> coreycb, indeed!
<jamespage> merged - thanks!
<mattyw> lazyPower, that would be perfect, Can I go as far as putting something in the diary?
<thedellster> Hi hoping someone can give me some guidance on setting up multiple interfaces on juju openstack. Particularly an ISCSI network so that storage traffic goes out the right nic.
<lazyPower> mattyw: not parsing re: diary? Do you mean on the calendar?
<mattyw> lazyPower, yeah - that's what I meant
<mattyw> lazyPower, can you not read minds yet?
<cory_fu> mattyw: It shouldn't be too difficult.  Just adding another line in validate_relationships()
<mattyw> :)
<lazyPower> mattyw: i'm close... my psychic helmet is in the shop this week for upgrades.
<lazyPower> but yeah that sounds good.
<mattyw> lazyPower, awesome
<lazyPower> cory_fu: i've been lending myself out to people getting their hands dirty with amulet as we've been changing the story around amulet quite a bit lately
<cory_fu> mattyw: And also not a big issue.  I don't think the tests currently check the actual relation data, just the relation existence
<lazyPower> older info still works but there's enough knowledge now for patterns to be recommended, such as your unit testing template
<mattyw> cory_fu, that would be nice, last time I was unable to even run the tests, so I'd like to at least get that far
<lazyPower> and how to approach the tests from a bundle standpoint vs a charm standpoint
<cory_fu> mattyw: This might be of help: http://blog.juju.solutions/cloud/juju/2014/10/02/charm-testing.html
<cory_fu> I was able to run up to and including the 200_deploy test without issue with bundletester
<cory_fu> Though 200_relate_ceilometer.test had an issue
<cory_fu> (Unrelated to your change)
<mattyw> cory_fu, awesome, thanks very much, I'll take a look
<cory_fu> np, thanks for working on this.  :)
<thedellster> Hey all quick question. Trying to define more than two networks on juju and openstack outside of public and private. Is this possible currently, or do I need to configure the hosts using another method once juju deploys them. Online for ceph I saw the following answer posted: “Currently Juju has a very simple networking model, which assumes only a "private" (inside the cloud environment) and "public" (externally accessible)
<thedellster> networks.” Is this true for all openstack services?
<thedellster> Is there a better forum or place I should go for information relating to this? I’ve looked online and haven’t really been able to find anything.
<jamespage> gnuoy, how's your nova-cc testing coming along?
<gnuoy> jamespage, I did a deploy with it and added the comment to the mp
<jamespage> gnuoy, link? sorry - that will be folderized somewhere which takes time :-)
<gnuoy> <gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/fix-db-migrations/+merge/239389
<lazyPower> cory_fu: wait, the ceilometer test had an issue?
<lazyPower> cory_fu: do you recall what that issue was?
<jamespage> gnuoy, +1
<jamespage> gnuoy, pls land :-)
<cory_fu> lazyPower: I thought it seemed like an environment issue or I would have noted it on the review, but I don't recall exactly
<lazyPower> ok. phwew
<jamespage> marcoceppi, re mysql - was it intentional that precise and trusty branches have diverged
<jamespage> ?
<lazyPower> cory_fu: checking myself now - thanks for the tip. last thing i want is jamespage pinging me that we broke openstack dependencies
<marcoceppi> jamespage: no, I'm trying to get the fixes out the door today but it's not looking good
 * lazyPower hides
<cory_fu> :)
<cory_fu> lazyPower: I was just running bundletester, so it should be easy enough to replicate
<lazyPower> ack, thats what i'm doing on hPCloud
<marcoceppi> jamespage: going to simply submit some small fixes for the charm today, ones that move to charm helpers completely and implement better configuration and has support for oracle/5.6/5.7
<jamespage> marcoceppi, awesome
<gnuoy> jamespage, merged
<jamespage> gnuoy, awesome thankyou!
<jamespage> gnuoy, OK working through each charm for release now
<jamespage> gnuoy, for reference i'm saving the existing trunk charm to /old-stable under ~openstack-charmers
<gnuoy> jamespage, ack
<jamespage> gnuoy, all done
<jamespage> gnuoy, must remember that precise hacluster != trusty hacluster
<gnuoy> \o/
 * jamespage had to do some reverting
<adjohn> Anyone have good resources on using Juju with existing puppet manifests?
<ktosiek> it looks more lively, so I'll try now - can I use Juju to setup a deploy like this: for each batch of app hosts, silence (disable?) their monitoring, remove them from load balancer, upgrade the app, re-enable monitoring, wait for monitoring to turn green, put back in load balancer
<ktosiek> I'd like the whole process to stop as early as possible if something goes wrong, too. I can script juju calls if that cannot be done inside Juju itself, but I'm not sure how disabling only some units would work
<ktosiek> hmm, or am I thinking with wrong categories... I can split app into few groups, and then for each unlink them from LB, do the upgrade (which should make Juju unlink/link with monitoring by itself, right?), check if it's OK with external tool, and put them back into LB
<lazyPower> adjohn: We dont have a good example for that, but i'm happy to work with you to develop that story
<lazyPower> adjohn: the most helpful tips I can give you, are to split your manifest resources out into the contexts that the hooks themselves handle, and execute the manifests as standalone 'recipes' in each of the hook contexts. How familiar with chef are you? i've got a few examples leveraging chef that should translate fairly well.
<adjohn> lazyPower: pretty familiar with Chef, that would be helpful!
<lazyPower> adjohn: ok, i have some patterns here that should help then - its leveraging chef-solo, and each hook is outlined as a resource in the cookbook -   https://code.launchpad.net/~charmers/charms/precise/rails/trunk
<lazyPower> if you come up with any specific questions feel free to ping me
<adjohn> Thanks, will do!
<lazyPower> jrwren: will review your branch after i wrap up stub's pending review of postgres. so you're up next
<lazyPower> but a cursory review looks good - i just want to do some deploy testing
<jrwren> lazyPower: any idea why it missed the queue?
<lazyPower> jrwren: the ingest is turning into chunky salsa on a bug - which is preventing items that come after it from ingesting.
<lazyPower> jrwren: i'll probably put in some OT on helping get a fix for this. I have an idea that may help prevent this from happening in the future - creating a "problematic items" report so we can skip items if they fail ingest for whatever reason - but continue on the ingestion path. There's probably always going to be something that winds up being a twit and doesn't ingest for w/e reason. Character encoding, weird formatting of a date, something.
<marcoceppi> jcastro: jrwren review queue is fixed
<marcoceppi> there's a ton of stuff in the review queue now
<marcoceppi> <3 you all
<jcastro> marcoceppi, huh, I had like 10 that should be in there now
<marcoceppi> jcastro: the ingest is still running
<jcastro> oh, is it still ingesting?
<jcastro> ack
<marcoceppi> we're moving away from celery and on to a better delayed task tool in the next week
<marcoceppi> so errors will get caught quicker
<jcastro> marcoceppi, my metadata spam compells you
<lazyPower> stub: are you around?
<marcoceppi> lazyPower: some interesting stuff in the testing page: http://reports.vapour.ws/charm-tests/charm-bundle-test-1256-results
<lazyPower> stub: landed https://code.launchpad.net/~stub/charms/precise/postgresql/integration/+merge/233666
<lazyPower> filing a bug re: tests for follow up work.
<marcoceppi> tvansteenburgh output is looking better ^
<lazyPower> stub: https://bugs.launchpad.net/charms/+source/postgresql/+bug/1384894
<mup> Bug #1384894: Tests Fail Consistently <postgresql (Juju Charms Collection):New> <https://launchpad.net/bugs/1384894>
<ktosiek> let's say I have a simple app on top of PostgreSQL with hot standby, what's the chance of split brain? Would putting 1 app instance and 1 postgres instance on a separate partition cause it?
<ktosiek> oh, nevermind, the postgresql node won't upgrade to master without confirmation from juju main node
<ktosiek> hmm, which makes the state server a single point of failure... how would it handle being brought up after the environment changed?
<marcoceppi> ktosiek: you can scale the state server out to avoid it being a single point of failure
<ktosiek> right, but my second question still stands: what would happen if something changed (like nodes failing) while state server was failing over?
<marcoceppi> Since all state servers are in sync, if one state server is failing the others would know that the postgresql server was failing, as there's a call home every X period of time. So the state would be updated accordingly regardless of the state server's state, as long as there was still a state server running
<marcoceppi> I need to look in to the clustering stuff a bit more, but that's my understanding as not a juju dev
<ktosiek> yeah, so the state server is used as a source of truth for charms with HA. Sounds sensible
<ktosiek> thanks, I'm starting to feel I've got the basic architecture :-)
<lazyPower> marcoceppi: latent response - yeah i'm aware. I put this on amir's radar and will be revisiting this again. most of it is cosmetic - but there are some legitimate things bothering me re: namenode relationship-changed failed.
<thedellster> Seems like you guys are in the middle of some serious dev/testing. I have an openstack maas juju environment up, and need to do some tweaking with the network settings on cinder and nova, and integrate with a third party cinder driver. I realize you guys are probably less inclined to answer admin type questions, but I’ve really tried engaging multiple sources for more info including the openstack team at HP, online documentation
<thedellster> etc, and am coming up short on answers. Is there a good forum to get help on? Or should I come back at a later time? Thanks for any response.
<ktosiek> thedellster: I'd try askubuntu.com or even stackoverflow as a last resort
<thedellster> Thanks ktosiek…
<lazyPower> thedellster: the mailing list is also a great way to get latent responses to questions - juju@lists.ubuntu.com
<lazyPower> not everyone monitors the irc channel, but there are a fair number of eyes monitoring the mailing list
<thedellster> Ah great!
<marcoceppi> thedellster: and most of the OpenStack charmers are on UK time :)
<marcoceppi> the mailing list is a great place to start, if you send in your question I'll make sure to poke them tomorrow to take a look. Ask Ubuntu is another great place as well (a bit more SEO than a mailing list)
<thedellster> Writing the email now…
<thedellster> Might stick around till UK time
<thedellster> thanks again everyone!
<marcoceppi> o/
 * thedellster sends email crosses fingers. Preparing to drink pot of coffee for 4am UK time handoff. 
<lazyPower> thedellster: maybe take a nap before hand so you're fresh and ready to go when they arrive.
<thedellster> Lol a very diplomatic way to get me out of here…. Thanks again I’ll wait on the mailing list reply, and maybe come back at 4am…
<thedellster> replacing tequila with coffee
<thedellster> thanks for the help
#juju 2014-10-24
<stub> lazyPower: Ta muchly ;)
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/nova-compute/cell-support/+merge/239357 if you get a moment. I'd also like to port it to stable as it addresses a bug which affects cells
<jamespage> gnuoy, I took a libbo and pushed two trivial fixes to stable and next for glance
<gnuoy> kk
<jamespage> the api configuration was defining workers twice - once with {{ workers }} and a second time with 1
<jamespage> not good
<jamespage> gnuoy, one thing I have noticed is that I have todo this:
<jamespage> -mechanism_drivers = openvswitch,hyperv,l2population
<jamespage> +mechanism_drivers = openvswitch,hyperv
<jamespage> to really fully disable l2pop
<jamespage> otherwise neutron-api continues to send fdb add's, even though the edges ignore them :-)
<gnuoy> oh, interesting. that's not right
<jamespage> gnuoy, going to raise a bug for that
<Mmike> Hola, lads. When one does 'juju status', I get a 'charm:' line that says, for instance: "charm: cs:trusty/mongodb-3". What does -3 stand for?
<Mmike> (I did plain: juju deploy trusty/mongodb)
<Odd_Bloke> Mmike: I believe it's the version of the charm that's deployed.
<gnuoy> jamespage, do you object to the keystone admin password being stored in the peer db?
<jamespage> gnuoy, context?
<jamespage> why I guess
<gnuoy> jamespage, Bug#1385105
<jamespage> bug 1385105
<mup> Bug #1385105: keystone identity-admin relation does not support updates to admin-password <keystone (Juju Charms Collection):In Progress by gnuoy> <https://launchpad.net/bugs/1385105>
<gnuoy> jamespage, I don't have to store it in peer db. If the leader dies I could update it to whatever the new leader thinks it should be
<gnuoy> jamespage, mea culpa for merging the identity-admin branch in this state tbh
<jamespage> gnuoy, still think I'm missing something
<gnuoy> jamespage, well, problem 1) If admin-password is set via config, it is not set on the identity-admin relation
<gnuoy> 2) If there is more than one keystone unit then they each set a different password down the relation
<gnuoy> and point the client to their own individual ip
<jamespage> gnuoy, oh because its always retrieved from configuration right?
<jamespage> in 1)
<gnuoy> jamespage, yeah
<jamespage> so its ok so long as a) you don't cluster and b) you don't provide an admin password
<gnuoy> spot on
<gnuoy> I have a branch to set the password regardless of how it was generated and to use resolve_address(ADMIN) to set the endpoint
<gnuoy> s/endpoint/service_hostname/
<gnuoy> but I have two options for the admin password. 1) Have the leader set it and share it via peer_db
<gnuoy> 2) Change it every time the leader changes, which I think will be fragile
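Option 1 (the leader generates the password once and publishes it on the peer relation) could be sketched like this; all the callables are hypothetical stand-ins for the charm's peer-storage helpers, not keystone charm API:

```python
def get_admin_password(is_leader, peer_get, peer_set, generate):
    """Return the shared admin password, generating it on the leader.

    Hedged sketch of option 1 above: the leader generates the password
    and shares it via the peer db; non-leaders read it from there, or
    wait (None) until the leader has published it.
    """
    password = peer_get("admin-password")
    if password:
        return password
    if is_leader:
        password = generate()
        peer_set("admin-password", password)
        return password
    return None  # not leader and nothing published yet: wait
```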
<thedellster> Hi any openstack charmers online and willing to help with an openstack deployment?
<jamespage> thedellster, wassup?
<thedellster> Hi james,
<thedellster> Trying to figure out how to deploy a cinder driver from a storage vendor. And how to use the multiple interfaces on my servers. Particularly want to segregate iscsi traffic out from the rest of it. Each of the servers in the cluster has 4 nics.
<jcastro> heya noodles775
<jcastro> can one of you guys check this out? https://code.launchpad.net/~s-matyukevich/charms/trusty/elasticsearch/elasticsearch-dns-bug-fix/+merge/239547
<thedellster> I sent a email to the list on the advice of someone here last night. But it says waiting approval.
<jcastro> niemeyer, can you check the mailing list queue for approvals?
<jamespage> thedellster, ok - so for a cinder driver - you probably need to write a new backend charm - see cinder-ceph or cinder-vmware for examples
<jamespage> thedellster, in terms of using a specific network interface for iscsi traffic - the charms did just grow features for splitting out internal/admin/public and data traffic onto different nets - however that does not include iscsi from cinder nodes
<niemeyer> jcastro: I have been doing that every day.. which one did I miss?
<jamespage> it probably just needs plumbing in
<jcastro> thedellster, which email address did you send from?
<niemeyer> jcastro: Which mailing list, that is
<thedellster> ndell@nddit.com
<niemeyer> Which mailing list?
<thedellster> juju@lists.ubuntu.com
<niemeyer> thedellster: Sorry, you'll need to resend your mail
<niemeyer> The juju mailing list for some reason (its name, maybe?) is a hot spot for spam
<niemeyer> If it did not reach the list, it means I discarded it in a batch
<noodles775> jcastro: sure
<thedellster> should I resend now?
<niemeyer> I'm considering changing the rule for that one list to auto-discard
<niemeyer> Or rather, auto-reject
<niemeyer> thedellster: Yes please
<thedellster> Resent
<niemeyer> thedellster: If you want to make sure you never get a message lost, subscribe to the list with the respective email first
<jcastro> noodles775, does your team have charm powers? or do they go in the queue like everyone else's?
<niemeyer> thedellster: But if you send right now I can make sure it's properly filtered
<niemeyer> and it already has more spam
<niemeyer> thedellster: Done.. I have also whitelisted your email
<niemeyer> thedellster: So it won't get held, even if you're not subscribred
<niemeyer> subscribed
<thedellster> Thank niemeyer!
<niemeyer> thedellster: No problem
<thedellster> jamespage it sounds like I might need to look at a different deployment… The problem is that the storage is on a different switching infrastructure.
<jamespage> thedellster, which driver?
<noodles775> jcastro: we've been able to land changes to charm-helpers, I've not tried landing changes to a charmstore charm.
<jcastro> ok
<thedellster> Pure storage, also have  clients with various hp products like lefthand, 3par
<thedellster> and a few with nexenta
<thedellster> and netapp.
<thedellster> But the one I need to action on right away is pure .
<jamespage> thedellster, so do the pure storage arrays present iscsi directly?
<jamespage> or via a cinder head?
 * jamespage reads a bit
<jamespage> thedellster, so this is all in Juno by the looks of things - and all the cinder server needs is a few bits of config
<jamespage> thedellster, this is pretty much how the cinder-vmware charm works
<jamespage> thedellster, so I'd recommend basing a cinder backend charm off of that for your purposes
<jamespage> thedellster, in terms of access over a different network - I guess so long as your compute nodes are cabled up and configured correctly, you should be good
<thedellster> james I believe the cinder head directs traffic to the array. They require the multipath driver on the nova compute nodes
<thedellster> The Pure Storage Volume Driver selects a FlashArray iSCSI port that it can reach and always uses that port when first establishing a connection with the array. If the OpenStack compute nodes have access to the same SAN subnet as the node where the driver is running, then the compute nodes are able to connect to that port as well.
<thedellster> Confirm the following:
<thedellster> • From each OpenStack node where the volume driver will run, confirm that you can ping the array's management port and the array's iSCSI ports.
<thedellster> • From each of the OpenStack compute nodes that will connect to the FlashArray, confirm that you can ping the array's iSCSI ports.
<thedellster> Sorry for the SPAM…. Pasted that in
<sarnold> pastebins are your friend :)
<thedellster> Sarnold yeah I think they are the channel's friend too. Been a while since I used irc… Feel like I'm in high school again
<sarnold> hehe :)
<thedellster> jamespage in terms of the different network. Would I just log on to the hosts in question and carve out that nic. Or should I describe them in a juju charm?
<rbasak> marcoceppi: about amulet, I'm pushing ahead with some hacky shell scripts in the meantime, which are working OK for now. So no rush on that issue for me.
<marcoceppi> rbasak: cool, there's a new release landing today and we should have another in a week or so. tvansteenburgh has been doing a lot of the development on it as of late so features are once again landing
<noodles775> jcastro: I just approved another elasticsearch MP, which show me as community, so I guess I don't have charm powers :) https://code.launchpad.net/~evarlast/charms/trusty/elasticsearch/add-version-config/+merge/237916
<jcastro> ack
<jcastro> lazyPower, if you've got time to check out those elasticsearch fixes that would be <3
<jcastro> noodles775, what other charms do you guys maintain?
<jcastro> maybe we can give you ownership of the ones that you do
<noodles775> jcastro: hrm, bloodearnest is helping with the maintenance of the gunicorn charm. I don't think there are any others that our team maintains.
<bloodearnest> jcastro: if Patrick (avoine) is ok with it, I'll happily take it on. I have a bunch of changes planned anyway.
<jcastro> yeah I think it would just be easier if you guys didn't have to gate fixes on us
<jcastro> like how we do for the openstack guys
<jcastro> I'll bring it up during our next team meeting
<jcastro> bloodearnest, do you guys have a team?
<jcastro> or we can do ~elasticsearch-charmers or something
<noodles775> jcastro: might be better if we create ~onlineservices-charmers or similar
<jcastro> ok
<jcastro> if you create the team and put the right people in it
<jcastro> I'll propose ES and gunicorn on the juju list to move over to you?
<marcoceppi> noodles775: whatever group you create just make sure ~charmers is a member and admin as well
<marcoceppi> bloodearnest: ^ (as well)
<bloodearnest> kk
<noodles775> jcastro, marcoceppi: https://launchpad.net/~onlineservices-charmers
<jcastro> noodles775, bloodearnest: proposal out and I've made it verbally on our team list and it seems to be universally agreed, after our call we'll do the approval bits.
<noodles775> Great, thanks.
<bloodearnest> awesome
<marcoceppi> tvansteenburgh: it's in pip, charm-tools, backportpackage for ppa uploading is complaining that distro data is out of date but the package isn't in trusty updates yet
<marcoceppi> so I can't publish to ppa
<tvansteenburgh> pypi is enough for me
<tvansteenburgh> i'll update testing as soon as amulet hits pypi
<jrwren> how does one join onlineservices-charmers? :]
<marcoceppi> jrwren: you must first walk over the pit of broken glass
<jcastro> hazmat, what's your holiday schedule? I have "charm testing" as a charm school on 12/19 but we did it already, I would like to do "Juju on Digital Ocean" with you if you're interested
<jrwren> marcoceppi: no problem, already did that 3 times this morning.
<marcoceppi> tvansteenburgh: any issue with moving amulet from my namespace to juju-solutions?
<tvansteenburgh> marcoceppi: not at all, great idea
<hazmat> jcastro, i'm game.. but might want to push to jan
<marcoceppi> tvansteenburgh: 1.8.0 is on pypi
<tvansteenburgh> marcoceppi: excellent, thanks!
<marcoceppi> Will email the list after ppa uploading starts working again
<marcoceppi> I'm out for the afternoon o/
<tvansteenburgh> later man
<jcastro> hazmat, ok, I'm kind of looking for this calendar year, I'll put you in the next-year pile.
<jcastro> jose, I want to make sure I join the right hangout thing this time
<jcastro> do I sign in as the UoA user?
<noodles775> marcoceppi: Automated tests on charm branches is *awesome* - thanks. I've not done any changes in a while, and was just thinking I had to create HP/ec2 accounts.
<aisrael> There isn't a charmhelpers (python) helper to run a command as a specific user, is there?
<jose> jcastro: I'll handle it, don't worry
<jcastro> ok
<jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYfGTmFkqf0B3qhcmaf_yL1vnc0vvhpiuuCbM530FY0z4Zfjlg
<jcastro> here's the hangout for anyone who wants to hang out
<jcastro> we'll be doing "Relationships" as a charm workshop here in a few minutes.
<Erazm> Hi I have a question regarding juju-quickstart. Will juju-quickstart attempt to recreate the local environment's .jenv file in case it is missing or corrupted? e.g. I have valid AWS credentials and a juju environment running on AWS but the .jenv file is missing, so I cannot execute command line juju commands..
<sinzui> WTF look at the signed windows downloads https://launchpad.net/juju-core/+milestone/1.20.10
<sinzui> We had 225  downloads for 1.20.9
<jcastro> sinzui, that can't be right
<sinzui> jcastro, The mac downloads are double so I expect some 550 win downloads
<jcastro> yeah but windows says 22,715
<jcastro> that can't be right can it?
<jcastro> I mean, if so, then awesome!
<sinzui> jcastro, 1.20.5 was stable for about 1 month https://launchpad.net/juju-core/+milestone/1.20.5
<sinzui> and 1.18.4 was stable for months https://launchpad.net/juju-core/+milestone/1.18.4
<sinzui> That 22,715 implies we are getting 1000s of downloads a day
<marcoceppi> noodles775: you can thank tvansteenburgh he's been spearheading that for quite some time
#juju 2014-10-25
<rrocsal> having a hard time with juju
<rrocsal> i have a couple of machines in a manual environment ( no provisioning )
<rrocsal> everything ok. then i installed the doocean plugin and used it to provision another machine
<rrocsal> now i cannot connect to the state server
<rrocsal> and if i try to get the status, i get : Error initialising SSH storage failed: failed to create storage dir
<marcoceppi> rrocsal: okay, so lets start with the manual environment
<rrocsal> hi @marcoceppi, how do we start ?
<rrocsal> i have also made a backup of the environment, but it seems there's no such option to restore now.
<rrocsal> i have copied SSH keys too, my server's one and juju's generated ones manually, but had no success.
<rrocsal> @marcoceppi
<rrocsal> 	 i have also made a backup of the environment
<rrocsal> is juju restore a deprecated option ?
<ktosiek> is there a list of all interfaces in charm store somewhere?
<ktosiek> and BTW, is jujucharms.com running on a charmstore charm?
<tvansteenburgh> ktosiek: there's this https://manage.jujucharms.com/interfaces
<ktosiek> thanks, that's it!
<ktosiek> hmm, is there any link from https://jujucharms.com/ to https://manage.jujucharms.com/?
<ktosiek> hmm, I see there's a "python-django" charm for setting up a Django app, that can talk to subordinate charm for additional config/scripts. Is this the preferred way for charms reuse in composition of bigger ones?
<ktosiek> Or is there some way to have shared parts between charms? (If I'd try to use Juju in production, I'd need a way to make ~10 similar charms for different internal apps)
<rick_h_> ktosiek: manage.jujucharms.com is the back end api that jujucharms.com uses to display data
<rick_h_> ktosiek: next week we'll have an update to help make things simpler and nicer
<rick_h_> ktosiek: and yes, right now jujucharms.com is running the juju-gui charm but in a sandbox only mode
#juju 2014-10-26
<ktosiek> ok, so I'm starting my first charm - are there any good examples of charms written with Ansible?
<ktosiek> and BTW do I have to use bzr?
<ktosiek> should charm's service start with the system (especially if it's basicaly stateless), or should it be started in config-changed?
<ktosiek> oh, I see it should (at least after "start")
<ktosiek> now, an architectural question: I want to make a Sentry charm that actually uses relation (the one in store now is pretty simple). Sentry is a server that aggregates errors from other services - the errors are provided by a dedicated (HTTP-based AFAIK) protocol, assigned to an app, and then grouped etc.
<ktosiek> how should I provide the application name for a given relation?
<ktosiek> should I make it one of relation's options, and make the consumer set it?
<jrwren_> plz consider: https://github.com/juju/juju/pull/966
<ktosiek> hmm, what should I do about internal app's secrets that users might or might not care about? Like Django's secret key?
<ktosiek> Put them on a peer relation?
<ktosiek> I have re-added a service after removing it, and now the machine is stuck in "agent-state: pending". What can I do about it? What should I look at?
<ktosiek> hmm, and it looks like destroying that machine outside juju didn't help at all... I'll just redo that environment
<lazyPower> ktosiek: Good questions - which service was stuck in agent state pending? As that should only happen during the provisioning of a machine
<lazyPower> typically it means the machine never fully came online
<lazyPower> ktosiek: looks like you've had a day full of juju
<ktosiek> yeah, it looks pretty promising. But it would be much easier with more active people here ;-)
<ktosiek> it was a service I'm trying to write, and I didn't know about "resolved" or looking at error in "status" when I was asking about that
<ktosiek> turned out to be a combination of low RAM and slow HDD (and watching movies from said HDD)
<ktosiek> lazyPower: do you use Juju in production?
<lazyPower> ktosiek: i sure do
<lazyPower> ktosiek: we're usually active mon-fri from ~ 4am EDT to ~ 8pm EDT
<ktosiek> how do you store secrets? I mean things like Django's secret key, or SSL keys
<lazyPower> we've got a mix of people from Europe and the US working on the project.
<lazyPower> ktosiek: using the RAILS charm as an example, there is an ENVIRONMENT export config option
<lazyPower> now, that relies on foreman
<lazyPower> not sure how its used in the django charm - i dont have extensive hands on experience with it.
<ktosiek> oh, so you have foreman next to juju
<lazyPower> negative
<lazyPower> foreman, the ruby gem, puts the exports in the upstart job it builds.
<lazyPower> thats how heroku does it
<lazyPower> which i know i know, so many projects with the name foreman its difficult to keep them straight.
<lazyPower> however, you can use whatever config management framework you wish with juju - so long as you write the underlying scripts in alignment with juju's event driven system, and they are idempotent.
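The idempotency requirement lazyPower mentions can be sketched like this (a hypothetical helper, not taken from any charm): a config-changed hook should only rewrite its file, and only signal a restart, when the rendered content actually changed, so re-running the hook is always harmless.

```python
import os


def write_config(path, content):
    """Idempotently write a config file.

    Returns True when the file changed (caller should restart the
    service) and False when there was nothing to do, so juju can
    re-fire config-changed without side effects.
    """
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return False  # already up to date, nothing to restart
    with open(path, "w") as f:
        f.write(content)
    return True
```

The same check-before-act pattern applies to package installs, user creation, and anything else a hook touches.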
<lazyPower> ktosiek: re-using rails as an example, it wraps chef-solo to do the heavy lifting.
<ktosiek> hmm, but how do you provide the secrets to new units?
<lazyPower> ktosiek: and you asked about a solid ansible charm example - i suggest taking a look at our ElasticSearch charm - its written 100% in ansible
<lazyPower> ktosiek: once you set it on the service, and you juju add-unit 'service' - they are distributed amongst the units.
<lazyPower> eg: i set the SECRET_KEY_BASE environment variable for my rails app, and every new unit i spin up in that service cluster, automatically receives that SECRET_KEY_BASE
<ktosiek> ok, so you set it in service's configuration. Then it's stored in juju state server?
<lazyPower> basically
<lazyPower> http://i.imgur.com/9dc2jpr.png  -- also you asked about me using juju in production - there's my prodstack
<ktosiek> is there a way to impersonate a unit (to peek at some other unit's secrets)? (state server looks like a pretty lucrative attack target...)
<lazyPower> just juju debug-hooks on that unit
<lazyPower> then you can inspect the data being sent over the wire with relation-get
<lazyPower> or config-get
<lazyPower> ktosiek: https://juju.ubuntu.com/docs/authors-hook-debug.html
<ktosiek> oh, I mean as another unit (like when someone breaks into one of the world-facing servers)
<ktosiek> not as an admin
<lazyPower> nope
<lazyPower> you have to be within the context of juju to query juju information
<ktosiek> cool ^_^
<lazyPower> i mean its probably possible - if you work hard enough at it. I dont know what would be involved. thats a question better suited for #juju-dev when the core devs are around
<lazyPower> much like trying to interrogate a chef-controlled unit, it can be done remotely but takes an unholy amount of effort
<ktosiek> and about that screenshot... what are those heartbeat icons?
<lazyPower> they signify a subordinate service
<lazyPower> subordinates are deployed into an existing service machine - they occupy scope: container - so if it's in an lxc container on a node, it lives in that lxc container
<lazyPower> if its on the host, it lives on the host
<ktosiek> oh, ok. Haven't played with those yet, but I've read that part of manual
<lazyPower> hey so you've read docs all day
<lazyPower> how do you feel about our documentation - as you've viewed it today. was it helpful?
<ktosiek> well...
<lazyPower> honesty points count ;)
<ktosiek> first thing - navigation is awful. I have to find current site in the menu before I can go to the next one
 * lazyPower nods
<lazyPower> good feedback - keep it comin
<ktosiek> having next/prev links with titles at the bottom would be great :-)
<lazyPower> Did anything leap out at you as overly complex in explanation? or anything you had to re-read to understand?
<ktosiek> not really, but I had some expectations about the overall workflow already (I've seen a talk about Juju on pycon pl)
<lazyPower> Awesome. Thanks for the feedback ktosiek
<ktosiek> but there's a lot of info I either missed or hadn't found yet - like how do I destroy things, what's the difference between {remove,destroy}-* commands, any info on resolved --retry
<lazyPower> ah, well - lets break it down
<lazyPower> remove-relation simply un-relates services. It doesn't do anything destructive (unless the charm is explicit about how it handles that removal of relation - which depends on the context of the relationship - most subordinates will remove application binaries on relation-removed)
<ktosiek> and I'm still not sure what "Added charm "local:trusty/sentry-7" to the environment." means (I mean, "added"? not "replaced ...-6 with ...-7"?)
<lazyPower> destroy-service will remove the service from the machine, but the machine will be left in your environment
<lazyPower> destroy-machine will terminate the machine at the cloud host (or lxc supervisor)
<lazyPower> juju resolved is a YOLO brand of "who cares what happened, just go green" - juju resolved --retry will attempt to re-run the failed hook - and will continue to error if the hook exits with a code greater than 0
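The exit-code convention lazyPower describes can be sketched as follows (a hypothetical hook body; `installer` is a stand-in for real package installation): juju treats any non-zero exit as a hook failure, and `juju resolved --retry` simply re-runs the hook, which keeps erroring until it exits 0.

```python
import sys


def install_hook(packages, installer):
    """Return a hook exit code: 0 on success, non-zero on failure.

    While the hook keeps returning non-zero, the unit stays in an
    error state and `juju resolved --retry` re-runs it; plain
    `juju resolved` just marks it green without retrying.
    """
    try:
        installer(packages)
    except Exception as err:
        print("install failed: %s" % err, file=sys.stderr)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(install_hook(["nginx"], lambda pkgs: None))
```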
<lazyPower> when it says added, that just means that it was submitted to the state server - as it handles pushing the new blob to the agent(s)
<lazyPower> at one point in time we used git to do this delivery, but that was problematic when users would edit something on the machine and basically trip git up, so we went to all or nothing blob delivery
<ktosiek> oh, ok
<lazyPower> but would it be more useful to you to know that instead of 'added' - if we had an existing charm, to say "updated" "upgraded" or "replaced"?
<ktosiek> actually, "submitted for redistribution" would be pretty nice (as it would also tell me something about what really happens :-))
<lazyPower> i can see how that would be confusing to other users though since the charm store is the primary delivery mechanism for charms
<lazyPower> they might think they just inadvertently published a charm
<ktosiek> hmm, that's a good point too
<lazyPower> but its good feedback to have regardless (i'm capturing this input to bring up at the next standup i attend)
<lazyPower> I just happened by IRC before I sat down to do some more charming of my own network of services :) I'm off on Monday, hallelujah
<ktosiek> haha
<ktosiek> thanks for all the info, I'd like to pick your brain a little more but I've got to go to sleep now (it's already past midnight for me)
<lazyPower> i'm usually around, feel free to ping me direct if i'm listed as present.
<lazyPower> good to meet you ktosiek
<ktosiek> see you tomorrow then ;-)
#juju 2015-10-19
<gnuoy`> jamespage, if you get a moment would you mind taking a look at https://code.launchpad.net/~gnuoy/charms/trusty/keystone/lp1506397/+merge/274856 ?
<jamespage> gnuoy`, I think that looks ok - marked ready for review to get osci turning over
<gnuoy`> ta
<jamespage> gnuoy`, are you happy this does not impact the action managed upgrade path?
<gnuoy`> jamespage, no, I will check on that and update the bug
<jamespage> gnuoy`, hmm no that will be a problem - the action helper needs to return to set outcomes
<gnuoy`> jamespage, got a sec for https://code.launchpad.net/~gnuoy/charms/trusty/neutron-gateway/lp1506046/+merge/274417 and https://code.launchpad.net/~gnuoy/charms/trusty/nova-compute/lp1506046/+merge/274416 ?
<jamespage> gnuoy`, +1 +1
<gnuoy`> jamespage, thanks
<gnuoy`> thanks
<jamespage> gnuoy`, https://code.launchpad.net/~james-page/charms/trusty/ceilometer/status-fixes-actions/+merge/274699
<jamespage> if you have time
<jamespage> ;)
<gnuoy`> jamespage, +1
<beisner> hi gnuoy` - can we do a c-h sync across all next charms?  i think most have been sync'd recently, but it'd be good to square 'em up and re-test.
<gnuoy`> beisner, yep
<beisner> gnuoy`, ps thx for the 2 merges
<gnuoy`> np
<beisner> ddellav, re: ssl spec fails, here's what automated tests are seeing.  is this what you're seeing?  http://paste.ubuntu.com/12861113/
<ddellav> beisner, yep, that's exactly it
<beisner> ok raised bug @ https://bugs.launchpad.net/charms/+source/keystone/+bug/1507619
<mup> Bug #1507619: With SSL, install hook fails: KeyError: 'getpwnam(): name not found: juju_keystone' <openstack> <uosci> <keystone (Juju Charms Collection):New> <https://launchpad.net/bugs/1507619>
<beisner> ddellav gnuoy` jamespage coreycb fyi^
<coreycb> beisner, thanks I'll look shortly
<ddellav> beisner, thanks for checking that out, i wasn't quite sure if it was me or an actual issue. It seems to be present through all of the different openstack versions
<beisner> ddellav, autobot sees the same
<beisner> thedac, gnuoy` - fyi p-i stable metal spec deploys are failing after switching to the lp:charms/precise/x namespace.
<beisner> but precise using trusty/next charms on metal is fine
<beisner> updated sheet, linked a job.  also re-running just to confirm.
<beisner> so, whatever is broken will apparently be fixed when we push next out to trusty & precise spaces.
<gnuoy`> beisner, kk, as we discussed last week I'm happy to leave those as is and limit testing of  lp:charms/precise/x namespace to the upgrade stable to next testing thedac is doing
<beisner> gnuoy`, no i think you have a valid point in that the stable precise deploy should use the charms from the precise namespace.
<gnuoy`> beisner, but as you say we're testing something which will be fixed in 2-3 days when the sync happens
<beisner> gnuoy`, it really just shows that something isn't in sync with stable T -> P
<gnuoy`> agreed
<Icey> how does a reactive charm call for a hook? I'm trying to debug-hooks on a new reactive charm but have no idea how the hook should be called
<Icey> wait, methinks I wasn't finishing the directions
<stokachu> Icey: the hooks are exposed through charms.reactive.hook
<Icey> how long do charms take to be ingested from LP to jujucharms.com?
<stokachu> Icey: i think it runs every 30 minutes?
<stokachu> or maybe on the hour
<Icey> cool, thanks stokachu
<stokachu> Icey: np, lemme know if thats not the case ill get a more concrete answer for you
<Icey> that's good enough, I can keep deploying from local: for a while :)
<stokachu> cool sounds good :)
<rick_h_> Icey: it's about 2hr total atm.
<Icey> ok
<rick_h_> Icey: to go all the way through the system (both old/new systems)
<beisner> jamespage, dosaboy, cholcombe - with uuidgen as fsid, V install hook stops failing.  added comment @ https://bugs.launchpad.net/charms/+source/ceph/+bug/1506287
<mup> Bug #1506287: ceph-disk: Error: Device is mounted: /dev/sdb1 (Unable to initialize device: /dev/sdb) <openstack> <uosci> <ceph (Juju Charms Collection):New for james-page> <https://launchpad.net/bugs/1506287>
<cholcombe> beisner, this sounds familiar
<cholcombe> beisner, sdb is automatically mounted correct?
<beisner> cholcombe, indeed familiar, but slightly different:
<beisner> with a static fsid uuid, i consistently get fails on Vivid and Wily (systemd), but not on upstart (Precise, Trusty)
<beisner> with a unique-ish fsid uuid on every run, Vivid and Wily begin to pass
<cholcombe> beisner, i'll peek at the charm and see if i can track down what is going on
<beisner> cholcombe, so we appear to be unblocked (still validating with a few more cycles) -- as long as the fsid value changes.
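A sketch of the workaround beisner describes (the config-snippet shape here is illustrative; the assumption is only that the ceph charm takes `fsid` as a config option): generate a fresh fsid for each deployment run so systemd series stop tripping over a previously-seen cluster id.

```python
import uuid


def ceph_deploy_config():
    """Render a deploy-config snippet with a unique fsid per run.

    A static fsid reproduced the install-hook failure on systemd
    series (vivid/wily) above; regenerating it each run did not.
    """
    return "ceph:\n  fsid: %s\n" % uuid.uuid4()
```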
<beisner> cholcombe, thx appreciate it
<cholcombe> beisner, the charm is prob doing something silly if it sees an existing fsid.  can you quick wipe the disks between tests?
<beisner> cholcombe, no, it takes many hours for that wipe to happen.
<cholcombe> beisner, ok
<beisner> cholcombe, if maas had a quick-wipe feature (maybe i'll raise a feature request bug on that) ... like write 0s to the first X blocks then bail.
<cholcombe> beisner, that would be nice ;)
<jamespage> cholcombe, systemd will be slurping them in some way
<cholcombe> jamespage, i figured but i didn't know for sure
<jamespage> well that's my guess
<beisner> cholcombe, raised that bug that i've thought about raising on a half-dozen or so previous occasions.  https://bugs.launchpad.net/maas/+bug/1507745
<mup> Bug #1507745: feature request:  quick disk erase <ceph> <openstack> <uosci> <MAAS:New> <https://launchpad.net/bugs/1507745>
<beisner> nice-to-have
<cholcombe> beisner, +1 :)
<arosales> man summit.juju.solutions looks really solid
 * arosales looking forward to that event
 * arosales thinks marcoceppi had something to do with that
<beisner> dang, yah.  that looks nice!
<Smace> https://jujucharms.com/ => 503 Service Unavailable
<Smace> No server is available to handle this request.
<Smace> http://juju.ubuntu.com/docs => 503 Service Unavailable
<Smace> So it seems it is down. At least for me it is.
<Smace> Even without `docs`. http://juju.ubuntu.com/ => 503 Service Unavailable
<blahdeblah> Smace: We're working on that right now
<Smace> Allright. Sorry for disturbing.
<blahdeblah> np
<Smace> whois blahdeblah
<hloeung> heh
<blahdeblah> Smace: EMISSINGSLASH :-)
 * Smace hides. :)
#juju 2015-10-20
<bdx0> Hey what's up everyone? I'm trying to configure ha for some services by putting them behind haproxy
<bdx0> First I set a vip for the service, then make the relation to the hacluster charm
<bdx0> After the status settles and becomes idle I add more service units
<bdx0> This seems to be a pretty straightforward process .....but I am bleeding caveats, bugs and errors around every corner.....is clustering services behind haproxy known to be problematic to some extent?
<bdx> core, dev, charmers, openstack-charmers: charm store is down!
<blahdeblah> bdx: Known issue - working on it
* blahdeblah changed the topic of #juju to: Welcome to Juju! || Charm store down at present || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
* blahdeblah changed the topic of #juju to: Welcome to Juju! || jujucharms.com down at present || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
<bdx> http://paste.ubuntu.com/12871528/
* blahdeblah changed the topic of #juju to: Welcome to Juju! || jujucharms.com back up || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
<gnuoy`> jamespage, could you take a look at https://code.launchpad.net/~gnuoy/charm-helpers/max_map_count/+merge/274979 if you get a mo?
<jamespage> gnuoy`, +1
<gnuoy`> thanks
<admcleod-> is there a way to flush the action queue for an agent?
<admcleod->  i moved /var/lib/juju/agents/unit-plugin-0/state/uniter but i suspect that's not the best way since now it's running the install hook again
<Icey> any idea why https://code.launchpad.net/~chris.macnaughton/charms/trusty/telegraf/trunk (last pushed to LP 17 hours ago) isn't yet available in the charm store at cs:~chris.macnaughton/trusty/telegraf ?
<rick_h_> Icey: does charm proof pass on it?
<rick_h_> Icey: w/o error?
<Icey> eh, 3 W, 2 I; would be nice if passing charm proof was listed as a requirement on the actual documentation for submitting a charm to the charm store in a user namespace :)
<Icey> will get it passing
<rick_h_> Icey: just errors have to be clear. You're correct that it should be documented on that. It's step 4 on https://jujucharms.com/docs/stable/authors-charm-store
<Icey> which is not in the section on "Name Space Charms" (https://jujucharms.com/docs/stable/authors-charm-store#name-space-charms)
<Icey> :(
<Icey> oh well :)
<rick_h_> Icey: ok, will file a bug to get that updated. Thanks for pointing me where you were looking
<Icey> blech: W: Maintainer format should be "Name <Email>", not "Chris MacNaughton <chris.macnaughton@canonical.com>"
<Icey> parsing fail?
<rick_h_> Icey: https://github.com/juju/docs/issues/708 filed if you'd like to track
<Icey> heh cool
<Icey> thanks :)
<rick_h_> Icey: hmm, no errors though
<Icey> apparently charms with warning aren't passed on to the charm store
<Icey> which is kind of annoying because apparently it really doesn't like my email?
<rick_h_> Icey: looking, I think this might be due to the legacy (older store) running an older proof. I'm installing the older version now to check if it throws errors
<rick_h_> Icey: I know the store only skips charms for error conditions
<rick_h_> Icey: if it's not that, then some other bug and we'll get the team to look into it
<Icey> https://jujucharms.com/docs/devel/tools-charm-tools
<Icey> warning level blocks charm store upload as well
<Icey> "Any charm with a Warning or Error will not pass charm store review policy."
<rick_h_> Icey: that's for promulgation
<rick_h_> Icey: for being recommended, not personal namespace
<Icey> that's in the proof docs
<Icey> -_-
<Icey> I'm cleaning up the proof issues anyways
<rick_h_> if you want to be the official telegraf charm in the store and able to juju deploy telegraf it can't have warnings
<Icey> but annoying that it doesn't like my name + email format -_-
<Icey> that's the goal soon, but not just yet ;-)
<rick_h_> Icey: one step at a time :)
<Icey> yeah, first official charm is the goal with this, and it's an auxiliary charm for something we're working on :-P
<rick_h_> Icey: filed https://github.com/CanonicalLtd/jujucharms.com/issues/168 and ping'd the team to peek at it. We'll see what they find
<Icey> apparently tags is bad?
<Icey> well, we'll see :-P
<rick_h_> no, I know other charms have tags
<rick_h_> https://api.jujucharms.com/charmstore/v4/trusty/rabbitmq-server-38/archive/metadata.yaml for instance
<rick_h_> so I think I'm pulling at straws
<Icey> different format?
<Icey> I'm using the yaml array format, rabbitmq is using [] ?
<rick_h_> no, i tried to match that format locally and it still errored
<Icey> gah
<Icey> well, I'm going to push a change that clears most of the rest
<Icey> maintainer had a typo, missing the closing > -_-
<rick_h_> Icey: cool thanks, the folks will investigate
<Icey> and who knows, maybe it'll go now with less charm proof complaints ;-)
<rick_h_> Icey: there's a chance
<Icey> rick_h_ I bet if you tried `charm proof` on the version that threw that error before that it'd pass now :)
<Odd_Bloke> A review/merge of https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/nrpe-servicegroups/+merge/275009 would be much appreciated.
<mbruzek> ping rick_h_
<rick_h_> mbruzek: pong
<mbruzek> I have found my answer in a bug you wrote against docs.  But who (besides you) knows about the ingest process/rules ?
<rick_h_> mbruzek: urulama and bac in particular
<mbruzek> rick_h_: Thank you.
<Odd_Bloke> marcoceppi: Thanks for the merge!
<bdx> Whats up charmers, openstack-charmers, core, dev?
<bdx> core, dev, charmers, openstack-charmers: I am experimenting with deploying openstack stateless services behind hacluster, and am experiencing a plethora of issues....do you guys know that ha deploys are entirely borked???
<bdx> core, dev, charmers, openstack-charmers: I can exploit and reproduce each problem, and create bugs for each, unless this is being dealt with on a higher level somehow/someway.....
<lp|lunch> bdx, o/
<lp|lunch> bdx, thats interesting, I'm fairly certain we test the OpenStack bundles with an HA deployment
<bdx> lazyPower: how sure?
<lazyPower> lets take the pepsi challenge
<lazyPower> beisner, o/
<lazyPower> beisner, we test openstack with hacluster + test HA deployments in osci don't we?
<beisner> o/ lazyPower
<beisner> lazyPower, yep, uosci exercises HA deploys, but even better ... the whole test infra runs on a juju-deployed trusty-kilo HA openstack cloud ;-)
<bdx> lazyPower, beisner: even though my juju env looks great: http://paste.ubuntu.com/12878545/
<bdx> the functionality of the openstack services is so broken
<bdx> using percona, things are 10x more broken
<bdx> hooks error for shared-db for glance, cinder, nova-cloud-controller, and neutron-api if percona-cluster is used
<lazyPower> bdx, filing bugs for these issues that you're uncovering would be helpful. This may or may not be a gap in our testing coverage.
<lazyPower> but having a bug to track them as they get investigated would be helpful so we have a reference to begin looking at the issue(s) afoot here and we can get this prioritized
<bdx> lazyPower: entirely....I will set aside my upcoming weekend to do so
<beisner> bdx, there are perhaps infinite topology and ha decisions to make.  here is what we exercise, with percona cluster, essentially 3 units of every service...
<beisner> have been running something very similar to that in prod for 1.5+ yrs, and have taken that prod cloud from trusty-icehouse to trusty-juno to trusty-kilo
<beisner> these links are dev/testing, but may provide more insight
<beisner> but you'll have to self-parse branches into the bundle
<beisner> "trusty-kilo-ha":  http://bazaar.launchpad.net/~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs/view/head:/helper/bundles/ha.yaml
<beisner> +
<beisner> branches:  http://bazaar.launchpad.net/~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs/view/head:/helper/collect/collect-next-ha
<beisner> ie.  juju-deployer -B -v -c ha.yaml -d trusty-kilo-ha
<beisner> our automation parses the two and puts branches where they belong
<beisner> bdx, bear in mind, that the 15.10 openstack charm release is upon us this week.  that means all of these "next" branches will become charm store charm versions very shortly.
<bdx> beisner: what I'm getting from this is that a lot of the problems I'm having have most likely been dealt with in the 15.10 release
<beisner> bdx, some bug fixes, sure.  but i'm sitting on a cloud that has existed for > 1.5 yrs, originally deployed with old charms, and upgraded along the way.
<beisner> bdx, the big gain with 15.07 and 15.10 are improvements around service status (better feedback in juju stat) + leadership election (juju 1.24.x really helps facilitate clustering activities).
<bdx> beisner: sweet. thanks!
<beisner> bdx, you're sure welcome!   i'd say have a look at the topology, configurations and usage in that example link and ignore the fact that it references dev charms.
<bdx> beisner: entirely....
<beisner> bdx, lazyPower - perhaps one day we will have a proper example bundle in the charmstore for openstack HA.  i do think that is on our list-o-stuff to do.
<beisner> it's difficult though, since there are so many variables with HA, and one has to pre-determine VIP addresses, etc etc
<bdx> awesome....totally
<beisner> bdx, oh so one thing to beware of - that example will use a zillion machines.  you would have to co-locate in a more sensible hardware allocation.  such as:  http://paste.ubuntu.com/12878773/
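The co-location beisner recommends is done with placement directives in a deployer bundle; a minimal hedged sketch follows (service names, machine numbers, and charm URLs are illustrative, not taken from the linked spec or paste):

```yaml
trusty-kilo-ha:
  series: trusty
  services:
    keystone:
      charm: cs:trusty/keystone
      num_units: 3
      # Pack the three units into containers on existing machines
      # instead of allocating one cloud machine per unit.
      to: ["lxc:1", "lxc:2", "lxc:3"]
```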
<Icey> rick_h_ https://code.launchpad.net/~chris.macnaughton/charms/trusty/telegraf/trunk is still not in jujucharms.com -_-
<lazyPower> Icey, does it pass proof?
<lazyPower> nothing above an I: will get ingested
<lazyPower> as the return code is 1, so it bails out
<Icey> % charm proof
<Icey> I: File config.yaml not found.
<Icey> I is the only issue
<lazyPower> can you open an issue here? https://github.com/CanonicalLtd/jujucharms.com/issues
<lazyPower> link @ your lp branch, and ping in #juju-gui. this went through rick_h_ before but i'm pretty sure this now is under the maintenance of urulama
<Icey> https://github.com/CanonicalLtd/jujucharms.com/issues/168
<lazyPower> oh
<lazyPower> well then
<Icey> :)
<lazyPower> i just pinged two ppl, lets hope they notice :D
<lazyPower> interesting on that bug though
<urulama> hey
<Icey> yeah, I made some changes that clean up most of the other charm proof issues
<lazyPower> mbruzek, did you see this?
 * urulama reads back
<lazyPower> mbruzek, issue #168
<lazyPower> mbruzek, sorry link is here: https://github.com/CanonicalLtd/jujucharms.com/issues/168
<mbruzek> looking
<mbruzek> lazyPower: I talked with rick_h_ today about this.  Charms will still get ingested if they have proof warnings, but bundles will fail ingestion with proof problems.
<Icey> there are no proof problems though...
<urulama> Icey: we'll look into it. the charm seems ok. I'll dig into the logs.
<Icey> thanks :)
<urulama> Icey: your charm is ok and will get published in charm store
<Icey> thanks :)
<Icey> what was the issue?
<urulama> Icey: there are some technical difficulties with it at the moment, thanks for your patience
<urulama> Icey: no charm got published today
#juju 2015-10-21
<pmatulis> morning
<Icey> howdy
<Icey> jamespage do you  know if I can get juju + aws to automatically attach ebs drives for ceph to use as a block device?
<jamespage> Icey, probably
<jamespage> Icey, andrew is the right person to ask
<marcoceppi> Icey: you'll want to use the experimental storage stuff axw has been working on
<lathiat> Hi Folks.. can anyone help me with what's happening here? Hit this twice today in a setup that was previously working fine.. destroyed and re-created my environment (MAAS), deploying first service juju-gui to the same machine i bootstrapped onto (which is libvirt) .. stuck on "filter.go:137 tomb: dying -> leadership failure: leadership manager stopped" - there are some earlier errors about i/o errors/losing communication - full log; http://lathi.at/fil
<lathiat> not entirely sure which of the errors is more related to the actual issue and what that is
<marcoceppi> lathiat: didn't get teh full log link, could you supply it again?
<lathiat> full log; http://lathi.at/files/juju-leadership.txt
<lathiat> i am guessing the real issue is the EOF stuff much earlier but i'm a bit lost with it from there
<lathiat> maybe related https://bugs.launchpad.net/juju-core/+bug/1493123
<mup> Bug #1493123: Upgrade in progress reported, but panic happening behind scenes <landscape> <landscape-release-29> <upgrade-juju> <juju-core:Fix Released by ericsnowcurrently>
<mup> <juju-core 1.24:Fix Released by ericsnowcurrently> <juju-core 1.25:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1493123>
<marcoceppi> lathiat: that's odd, I've not seen that
<lathiat> thing is i ran into this issue just before, but i was also getting an error about machine 7 not existing, that i had previously force destroyed, assumed i had corrupted something.. decided to try and start over and hitting the same thing now.
<bdx> beisner, lazyPower: deploying trusty-kilo-ha with next charms eliminated 99% percent of the problems I was experiencing :-)
<beisner> hi bdx good to hear - we're feverishly working on the release process this wk.
<bdx> beisner, lazyPower: Good to hear! There is one remaining issue I can't seem to get around that still exists out of the issues I was experiencing when deploying kilo-trusty-ha with trunk charms
<bdx> that is, I can't query the keystone vip api endpoint....
<beisner> bdx  grain of salt:  next charms not recommended for production as they are generally in active dev
<beisner> bdx well that would be problematic ;-)
<bdx> beisner: totally.
<beisner> bdx, enabling keystone ssl by chance?
<bdx> beisner: I'm totally down, what reasoning have you, if any?
<bdx> besides security
<bdx> ha
<beisner> bdx, just wondering as the client connections get trickier.  if not specifically needed, it's less hassle (and less secure of course) to use the default non-ssl.
<beisner> so nvm me
<bdx> totally, ok
<bdx> you have never experienced this?
<beisner> bdx, gotta run.  if you'd like input, i'd start with inspecting your sanitized novarc / openstackrc file or env vars + keystone --debug catalog  + keystone --debug token-get.
<beisner> that info might give insight
<beisner> + juju stat --format tabular   :)
<beisner> also might do a sanity check that the services are all running, and that the ips are in place in each unit's nic.
<beisner> o/
<bdx> beisner: charmconf.yaml <- http://paste.ubuntu.com/12886818/
<bdx> juju status --format tabular <- http://paste.ubuntu.com/12886819/
<bdx> im not worried about the dashboard relation, as I am currently troubleshooting it as it depends on the keystone api as well
<bdx> which is why I think its giving me grief and status shows "Incomplete relations: identity "
<bdx> beisner: oooh just saw ^^
<bdx> nice, thanks
<bdx> later
<beisner> bdx yeah, that's the new workload status.  gives a lot better feedback as to what is going on throughout the deployment steps, and through managing the thing longer term.
<jog> mgz, I made updates to https://code.launchpad.net/~jog/juju-ci-tools/centos_deploy_stack/+merge/275135
<mgz> jog: landit
<jog> mgz, thanks
<bdx> jamespage, marcoceppi, gnuoy, beisner, lazyPower: After toying with failing openha deploys, most of the issues I was experiencing have been resolved. I have found the primary issue(s) that still exist in next charms concerning ha deploys....the issue is that service charms do not get keystone vip in their .conf files, also keystone endpoints get created for non vip service endpoints
<bdx> ^^ resolved in next branches*
<bdx> After manually making the needed modifications to the endpoints in the keystone.endpoints table and correcting each of the charms configs to include the keystone vip endpoint, I have a working ha stack
<bdx> !
<bdx> yea!
<thedac> bdx: none of the service charms get the keystone vip? Or is there a specific charm? Also do you have the bundle you are deploying from so I can see?
<bdx> I'll file bugs on these issues. It would be nice to see these things fixed in the 15.10 release so that those of us looking for HA stacks have a somewhat stable answer, instead of heading into the 15.10 release with HA still borked
<thedac> bdx: fantastic. Let me know when those bugs are filed
<bdx> thedac: juju status --format tabular <- http://paste.ubuntu.com/12887639/
<bdx> thedac: deployer.yaml <- http://paste.ubuntu.com/12887642/
<thedac> thanks
<bdx> thedac: thats correct, none of the service charms get the keystone vip
<bdx> also keystone.endpoints has all non vip entries
<thedac> ok, I'll take a look today
<bdx> thedac: awesome! thanks!
<bdx> thedac, openstack-charmers, core: https://bugs.launchpad.net/charms/+source/keystone/+bug/1508575
<mup> Bug #1508575: Keystone DB gets all non vip endpoints + openstack service conf files get keystone non vip <ha> <keystone> <openstack> <server> <keystone (Juju Charms Collection):New> <https://launchpad.net/bugs/1508575>
<bdx> boom
<thedac> bdx: thanks
<bdx> thedac: NP, thanks for looking into this!
<jamespage> hello bdx
<jamespage> bdx, we're a bit late for the 15.10 release for any more bugs today (as it's tomorrow)
<jamespage> bdx, I am keen to understand the problems you're having - as thedac and others have noted, we've run our own internal QA cloud for 1.5 years in a HA deployment through three openstack series upgrades
<jamespage> bdx, I added a comment to that bug - really need to see the output of "sudo crm status" on any of the haclustered services
<bdx> jamespage: heres the output of "sudo crm status" on a keystone node -> http://paste.ubuntu.com/12888152/
<jamespage> bdx, ok - that looks fine
 * jamespage thinks
<jamespage> bdx, could you do the following and pastebin the output
<bdx> of course :-)
<jamespage> bdx, juju run --unit keystone/0 "relation-ids ha"
<jamespage> and then
<bdx> jamespage, ha:81
<jamespage> juju run --unit keystone/0 "relation-get -r <id from previous command> - keystone-hacluster/0"
<jamespage> bdx, for that next one you'll need to use ha:81 and the unit name for the paired hacluster unit
<bdx> clustered: "yes"
<bdx> private-address: 10.16.100.72
<jamespage> bdx, well again that looks ok - next link...
<jamespage> clustered = yes is the good bit there
<bdx> jamespage: let me note, a) this is a repeated issue across 15x deploys of trunk and next, and b) on every deploy, the service clusters form without error for each service
<jamespage> bdx, yeah - just puzzled as to why thats not propagating out correctly
<jamespage> bdx, the clustered=true triggers a re-run of relation hooks where things need to be changed, and the code that determines endpoint resolution should detect the same thing and start using the VIP's
<bdx> jamespage: ok, good, how does this happen "the code that determines endpoint resolution should detect the same thing and start using the VIP" --> the vip isnt the same as private-address: 10.16.100.72
<jamespage> bdx, lets see what keystone is propagating
<jamespage> bdx, there is an endpoint resolver in charmhelpers that figures that out consistently taking into account cluster status and configuration
<bdx> I have modified all of my endpoints and .conf files to resolve the issue, and also as a proof of concept
<jamespage> suffice to say with split networks it gets quite hairy, but your deployment is not doing that
<jamespage> bdx, you should categorically not need to do that
<bdx> jamespage: where is the "endpoint resolver in charmhelpers"? if you don't mind?
<jamespage> bdx, http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/openstack/ip.py#L106
<jamespage> bdx, can you also do: juju run --unit keystone/1 'relation-ids identity-service'
<jamespage> bdx, the log data from /var/log/juju/unit-keystone-1.log would also be useful
<bdx> identity-service:41
<bdx> identity-service:45
<bdx> identity-service:57
<bdx> identity-service:59
<bdx> identity-service:70
<bdx> identity-service:89
<jamespage> bdx, ok and now
<jamespage>  juju run --unit keystone/1 'relation-get -r identity-service:89 - keystone/1'
<bdx> jamespage: /var/log/juju/unit-keystone-4.log <- http://paste.ubuntu.com/12888244/
<bdx> private-address: 10.16.100.72
<jamespage> bdx, that's very light
<jamespage> bdx, ok can I see this (need to know which unit is leader)
<bdx> jamespage: I don't see how resolve_address could return the vip .....
<jamespage> bdx, L132 should be in path
<jamespage> bdx, juju run --service keystone 'is-leader'
<bdx> jamespage: and also, keep in mind that I have destroyed two keystone units and re-added units ....in case you are wondering why the log is light, and also the extra ids for past units...also I don't have debug or verbose logging on....grrr ...my bad
<bdx> jamespage: if not net_addr: will never execute ....
<jamespage> bdx, oh the relation data was light, not the log
<bdx> oh
<jamespage> bdx, it will - if config is unset, None gets returned
<jamespage> so not net_addr will equal True in that case
<bdx> jamespage: what config must be unset?
<jamespage> bdx, os-XXX-network
<bdx> omg
<bdx> omg
<jamespage> it has no default
<bdx> jesus
<jamespage> where XXX in public, internal, admin
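The behaviour jamespage is describing can be sketched in Python. This is a hedged illustration only, not the actual charmhelpers `resolve_address` code: the plain `config` dict and the key names `vip` / `unit-address` are stand-ins for charm config and unit data.

```python
# Sketch of the endpoint-resolution logic discussed above: when the
# os-XXX-network option is unset, config lookup returns None, `not net_addr`
# is True, and a clustered unit falls back to the VIP.

def resolve_address(config, endpoint_type="public"):
    """config is a plain dict standing in for charm configuration."""
    net_addr = config.get("os-{}-network".format(endpoint_type))  # None if unset
    if not net_addr:
        # Option unset: advertise the VIP when clustered, else the unit address.
        if config.get("vip"):
            return config["vip"]
        return config["unit-address"]
    # os-XXX-network set: pick the address on that network (elided here).
    return "address-on-{}".format(net_addr)
```

With the VIP set and os-public-network unset, the VIP is returned; with no VIP, the unit's own address is advertised, which matches the non-vip endpoints bdx was seeing.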
<bdx> this is totally my bad... I should have read into that weeks ago
<bdx> jamespage: MAJOR revision to any and all docs concerning HA deploy to include ^^
<bdx> I should have gotten to the bottom of this earlier on my own, by investigating, but ...thank god
<bdx> jamespage: thanks for your help getting to the bottom of this
<jamespage> bdx, have we?
<jamespage> bdx, got to the bottom of this?
<jamespage> just eating dinner as well biab
<bdx> jamespage: yes! You must leave the os-xxx-network unset for vip endpoints to get set anywhere!!!!
<bdx> jamespage, openstack-charmers: that is the missing piece! you all have been keeping secretssssss!
<bdx> not really though....I could have found it :-/
<bdx> :-)
<beisner> woot!
<bdx> jamespage, beisner: thanks for your help concerning this
<beisner> actually thedac just uses the aliases jamespage and beisner
<beisner> jusssst kidding.
<thedac> heh
<bdx> thedac: ^^
<thedac> bdx: fwiw, I just ran a test over lunch that proves this point. Services do get keystone's vip
<jamespage> bdx, erm that's not quite true
<bdx> jamespage: which?
<jamespage> bdx, you can use vips with configurations that also use os-XXX-network
<jamespage> configuration options
<jamespage> bdx, vip can be a single VIP or a space delimited list, if you are splitting endpoints across networks
<bdx> jamespage: concerning the resolve_address function, I don't see how that could happen....?
<jamespage> bdx, L134
<jamespage> for vip in vips:
<jamespage>     check if in network for endpoint type
<jamespage>     if it is, use this one
<jamespage> basically
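jamespage's pseudocode above could look roughly like this in runnable Python. A sketch only: it uses the stdlib `ipaddress` module rather than charmhelpers' own network helpers, and `select_vip` is a name invented for the example.

```python
# "for vip in vips: check if in network for endpoint type; if it is, use it" --
# selecting which VIP from a space-delimited list belongs to a given network.
import ipaddress

def select_vip(vips, network_cidr):
    """Return the first VIP that falls inside the endpoint's network, else None."""
    net = ipaddress.ip_network(network_cidr)
    for vip in vips.split():  # the vip config option may be space-delimited
        if ipaddress.ip_address(vip) in net:
            return vip
    return None
```

So with `vip` set to two addresses and os-public-network set to a CIDR, only the VIP inside that CIDR is used for the public endpoint.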
<bdx> oooh I see
<jamespage> bdx, looking at http://paste.ubuntu.com/12887642/
<jamespage> I can't see that you are setting is os-XXX-network config options for the keystone charm
<bdx> jamespage: shoot...you're right....
<bdx> jamespage: I must disclose ....initially 1 of the keystone endpoints was set to the vip in the database on my last deploy  .... the keystone admin endpoint of http://10.16.100.34:35357/v2.0
<bdx> jamespage: every other endpoint was not set to the vip including the other keystone endpoints
<jamespage> bdx, the charm would have blindly configured that anyway
<jamespage> bdx, did you figure out which is the lead keystone unit? I really want to see the juju log file from that one
<bdx> jamespage: here is the keystone log from the leader: http://paste.ubuntu.com/12888588/
<bdx> unit-keystone-1.log*
<bdx> jamespage, beisner, thedac: I propose I redeploy, and this time I will give the services 30mins to settle and ensure clusters form before I add any relations.... this could rule out any possibility of timing issues with clusters not being fully formed when relations are made.
<bdx> thedac: Did you use juju-deployer in a once through to deploy all services and relations sequentially in your test?
<thedac> bdx: yes
<thedac> and I am testing one of our ha oneshot bundles right now. I'll let you know
<bdx> thedac: sweeet!
<bdx> thedac: are you deploying ha services on containers?
<bdx> to containers*
<thedac> let me confirm.
<thedac> ah, no actually, so that may not be a valid test
<thedac> I'll test with your bundle.
<bdx> thedac: nice
<bdx> thedac, jamespage, beisner: So...I haven't set the param 'vip_iface', hence I am assuming the default of 'eth0'. Seeing as I am deploying these services to containers, the primary interface is not 'eth0', but 'juju-br0'. This is a red flag to me, and it looks like the ha-relation-joined hook could be affected.
<bdx> http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/keystone/next/view/head:/hooks/keystone_hooks.py#L515
<thedac>   vip_iface:
<thedac>     type: string
<thedac>     default: eth0
<thedac>     description: |
<thedac>       Default network interface to use for HA vip when it cannot be
<thedac>       automatically determined.
<thedac> so you may be on to something there
<bdx> thedac, jamespage, beisner: WAAALAAA
<bdx> thedac, jamespage, beisner: >>> import netifaces
<bdx> >>> netifaces.interfaces()
<bdx> ['lo', 'eth0', 'lxcbr0']
<bdx> no juju-br0!!!!!
<bdx> http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/network/ip.py#L156
<thedac> bdx: so you might test with vip_iface set to juju-br0
<bdx> thedac: entirely....what I'm pointing out....is that netifaces.interfaces() does not recognize the juju-br0!
<bdx> which would implicate the call to netifaces.interfaces() in network/ip.py as the culprit
<thedac> http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/keystone/next/view/head:/hooks/keystone_hooks.py#L515  get_iface_for_address *OR* vip_iface. I think setting vip_iface and vip_cidr will fix this.
<bdx> thedac: as example if "get_one()" returns "one" and "get_two()" returns "two"
<bdx> thedac: and you have "one_or_two = (get_one() or get_two())"
<bdx> thedac: one_or_two == "one"
<bdx> thedac: so even if the 'vip_iface' is set it would still not return the correct iface
<thedac> sorry, I am trying desperately to get something up and running to actually validate this. But if get_iface_for_address does not return an address because juju-br0 is not in netifaces.interfaces() then the or would work. If it does return, you are right.
<bdx> thedac: entirely
<thedac> Looking at lines 145-183 looks like it will return None http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/network/ip.py#L156
<bdx> thedac: totally
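The `or` short-circuit being debated above is easy to demonstrate. A tiny illustration (`pick_iface` is an invented name, standing in for the charm's `get_iface_for_address(...) or config('vip_iface')` expression):

```python
# `a or b` returns a when a is truthy, otherwise b. So the configured
# vip_iface is only used when interface detection returns a falsy value
# such as None -- which is exactly the case when juju-br0 is not in
# netifaces.interfaces().
def pick_iface(detected, configured):
    return detected or configured
```

So `pick_iface("eth0", "juju-br0")` gives "eth0" (detection wins), while `pick_iface(None, "juju-br0")` falls through to "juju-br0", which is why thedac expects setting vip_iface to help here.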
<bdx> thedac: I'm redeploying with 'vip_iface' configured to juju-br0
<thedac> great, fyi, you may also need vip_cidr set
#juju 2015-10-22
<bdx> thedac, beisner, jamespage: here is my keystone.endpoint table post deploy: http://paste.ubuntu.com/12890703/
<bdx> the endpoints ending in .3x are vips, and are noticably absent
<bdx> thedac: how did your deploy go? same?
<thedac> turns out I can't do lxc deploys in a nested cloud. I'll have to find another way to test
<bdx> thedac: ha-bindiface also sets to 'eth0' by default
<bdx> I'm going to redeploy with these set appropriately....I'm hoping for a win here
<thedac> ack. let us know
<lathiat> marcoceppi: interesting, blew away my environment, installed again.. got the same EOF errors about the containers etc but it otherwise worked fine.. weird.
<bdx> thedac: notice my latest comment -> https://bugs.launchpad.net/charms/+source/keystone/+bug/1508575
<mup> Bug #1508575: Keystone DB gets all non vip endpoints + openstack service conf files get keystone non vip <ha> <keystone> <openstack> <server> <keystone (Juju Charms Collection):New> <https://launchpad.net/bugs/1508575>
<bdx> thedac: second to latest
<jamespage> marcoceppi, hey - could you add a 16.01 and 16.04 milestones to the charms distro please?
<jamespage> I need to bump outstanding bugs off 15.10 and can't do that yet as 16.01 does not exist :-)
<joec> hi folks, any juju azure users or experts in the house?
<joec> .. can anyone tell me if juju supports Azure Resource Groups or can anyone lead me to related documentation?
<joec> https://azure.microsoft.com/en-gb/documentation/articles/resource-group-overview/
<joec> AFAIK juju still uses classic deployment but was wondering if anyone had info regarding support for Resource Groups in Azure https://azure.microsoft.com/en-gb/documentation/articles/resource-manager-deployment-model/
<lazypower> joec, afaik it does not.
<lazypower> joec, however if this is something you need, it would be good to file a bug on to track the conversation
<joec> thanks @lazypower, for now I'll stick with classic deployment, I'm not sure I actually need Resource Groups functionality yet, still planning -  but will raise a bug on launchpad
<marcoceppi> joec: 16.01 >
<marcoceppi> err, jamespage 16.01 >
<marcoceppi> ugh ?*
<jamespage> milestones on the charms distro
<jamespage> I use them for targetting bugs
<marcoceppi> yeah, but what's 16.01 ?
<jamespage> january next year
<marcoceppi> okay, I get that, but it's not an ubuntu release
<jamespage> I know - its a charm release
<jamespage> we do them every 3 months
<marcoceppi> the 16.01 isn't a milestone in the distro, it's a series
<marcoceppi> oh wait, I see
<jamespage> marcoceppi, you create them here - https://launchpad.net/charms/+milestones
<jamespage> they are tied to arbitrary dates of our own choosing
<marcoceppi> jamespage: yes, I see that now. I've created the xenial series in the distro, so it's open for charms to be uploaded to
<marcoceppi> I'll add the milestone now
<jamespage> ta
<marcoceppi> jamespage: good to go
<jamespage> marcoceppi, thanks!
<jcastro> mthaddon: http://blog.dasroot.net/2015-charming-2-point-oh.html
<jcastro> mthaddon: https://jujucharms.com/docs/devel/authors-charm-composing
<mattrae> hi i'm making a charm layer.. should a file in a higher layer override a file in a lower layer?
<mattrae> i'd like to override site.conf in the apache-php layer
<marcoceppi> mattrae: we have a vendoring method for that
<mattrae> marcoceppi: ahh cool where would i look to see how to use that?
<marcoceppi> mattrae: https://github.com/johnsca/apache-php#usage
<mattrae> marcoceppi: looks like each site specified in apache.yaml will use the apache config template site.conf. i don't see a way to use a file other than site.conf, or override site.conf in a different layer
<marcoceppi> mattrae: site.conf is put in templates. If your layer has a site.conf file in a templates directory it'll overwrite it
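The override rule marcoceppi describes (a file at the same path in a higher layer replaces the lower layer's copy) can be sketched as a merge. This is an illustration of the behaviour only, not the real `charm build` code; `merge_layers` is an invented helper.

```python
# Each layer is modelled as {relative_path: content}. Layers are applied
# base-first, so a later (higher) layer's file at the same path wins.
def merge_layers(*layers):
    merged = {}
    for layer in layers:  # base layer first, top layer last
        merged.update(layer)
    return merged

base_layer = {"templates/site.conf": "base apache-php site.conf"}
my_layer = {"templates/site.conf": "my overriding site.conf"}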
<mattrae> marcoceppi: great, thats what i expect.. i have a site.conf in my layer but it when i compose, the composed charm still has the site.conf from the apache-php layer
<marcoceppi> mattrae: interesting
<marcoceppi> bcsaller cory_fu ^ ?
<cory_fu> marcoceppi, mattrae: Just tested on the vanilla layer and I'm able to override site.conf with no problem.
<marcoceppi> mattrae: do you have your layer somewhere?
<mattrae> cool thanks guys for confirming. i don't have my layer uploaded anywhere yet. i'll put it up on github
<cory_fu> marcoceppi: Has the newest charm-tools been released?
<marcoceppi> cory_fu: no, I'm OTP, but it's going out the door tomorrow
<marcoceppi> err
<marcoceppi> today
<cory_fu> Ok
<marcoceppi> TODAY!
<cory_fu> mattrae: Can you confirm which version of charm-tools you're using?
<mattrae> cory_fu: i'm using version 1.7.1-0ubuntu4~ubuntu14.04.1~ppa1
<cory_fu> Ok, that's the one I tested with
<marcoceppi> cory_fu: I've got a feature request
<marcoceppi> cory_fu: for reactive
<cory_fu> What's that?
<cory_fu> btw, I also just tested with master from charm-tools on GH and overriding site.conf works as expected
<marcoceppi> cory_fu: it would be cool if all I needed to do was have an actions.yaml file, and then on creation it'll create action stubs like it does for hooks, and then I can do @hook('action-name') and have it all in my reactive thing
<cory_fu> mattrae: http://pastebin.ubuntu.com/12896191/ is the expected behavior (with the name change from "compose" to "build")
<marcoceppi> cory_fu: Although, I guess that's actually part of the build process, huh?
<cory_fu> marcoceppi: Yeah, I had planned to add support for actions, but I hadn't gotten around to it yet.  I'm also not 100% sure if we need to consider anything else for that, though.  When an action is called, you don't necessarily want any other reactive handlers triggering, do you?
<mattrae> cory_fu: great thanks :)
<marcoceppi> cory_fu: potentially
<marcoceppi> though i guess not
<marcoceppi> I just don't want to have to mess with paths in my action file
<marcoceppi> to get at deps in lib
<cory_fu> marcoceppi: Right.  Hook handlers obviously won't trigger, and any other handlers that do trigger ought to be safe, so it's probably the right thing to do.
<marcoceppi> cory_fu: so I guess it's like @action as a handler?
<cory_fu> marcoceppi: Because you also ought to be able to guard an action based on state (e.g., only run an action handler if the DB is available)
<marcoceppi> right
<cory_fu> marcoceppi: The only thing I'm not 100% confident in is how surprising it will be for other, non-@hook handlers to be triggered.  I guess we just document it and teach it
<marcoceppi> yeah
<cory_fu> marcoceppi: So, there are a couple of parts to that.  We'll need an @action handler, analogous to @hook, in charms.reactive.  We'll also want `charm build` to parse actions.yaml and generate the boilerplate actions/ files
<marcoceppi> cory_fu: yeah
<cory_fu> I think with that, it will also become more important to fix the issue that @hook (and now @action) can't be combined with @when, et al.
<cory_fu> marcoceppi: https://github.com/juju/charm-tools/issues/30 and https://github.com/juju-solutions/charms.reactive/issues/11
<bcsaller> I feel like the semantics around action handling here are quite different, we should talk about this more
<cory_fu> bcsaller: Ok, I am on board with discussing it
<marcoceppi> ditto
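The `@action` decorator being proposed did not exist in charms.reactive at the time of this conversation; the following is purely an illustrative sketch of the idea (registry and `dispatch` are invented for the example), analogous to how `@hook` registers hook handlers.

```python
# Hypothetical action registry: @action("name") registers a handler,
# and dispatch("name") runs it -- mirroring the @hook pattern discussed.
_ACTIONS = {}

def action(name):
    """Register a handler function for a named action."""
    def wrapper(fn):
        _ACTIONS[name] = fn
        return fn
    return wrapper

@action("backup")
def do_backup():
    # In a real charm this would use hookenv.action_get()/action_set().
    return "backup complete"

def dispatch(name):
    handler = _ACTIONS.get(name)
    return handler() if handler else None
```

Per cory_fu's point, a real implementation would also let action handlers be guarded by state (e.g. only run when the DB is available), which is where the semantics diverge from plain hooks.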
<dweaver> Anyone know how long it should take for the new openstack charms to appear in the charm store, nova-compute charm is still version 29 and should be updating to version 30?
<marcoceppi> dweaver: it can take a few hours
<marcoceppi> upwards of 2-3
<dweaver> hi marcoceppi, reason I ask is all the others are there except nova-compute, and it looks like it was committed 5 hours ago.
<marcoceppi> dweaver: then you'll need to ping rick_h_ to get some insights into ingestion
<dweaver> marcoceppi, no problem, not urgent from me, just thought it looked a bit odd and there might be a problem, I was just trying a liberty deployment and nova-compute barfed.  I'll wait until tomorrow.
<dweaver> marcoceppi, thanks anyway and hopefully see you in Belgium in Feb.
<marcoceppi> dweaver: definitely see you in belgium!
<marcoceppi> Office hours starting in a few mins! https://www.youtube.com/watch?v=hQ4nq_6zKJ0
<lazypower>  Greetings office hours lurkers o/
<bdx> hey whats up guys
<cory_fu> Greetings
<bdx> I can't make office hours, but wanted to get a collective take on my findings in comment #5 here: https://bugs.launchpad.net/charms/+source/keystone/+bug/1508575
<mup> Bug #1508575: Keystone DB gets all non vip endpoints + openstack service conf files get keystone non vip <ha> <keystone> <openstack> <server> <keystone (Juju Charms Collection):New> <https://launchpad.net/bugs/1508575>
<bdx> would you guys be kind enough to address it? thanks
<cory_fu> bdx: Sure thing, we can touch on it
<bdx> cory_fu: thanks
<marcoceppi> http://blog.dasroot.net/2015-charming-2-point-oh.html
<marcoceppi> http://interfaces.juju.solutions/
<bdx> beisner: thanks
<thedac> bdx: ok, I just stood up a maas deployed (partial) openstack with keystone and other api services as lxcs. Everything is getting keystone's vip address. I also notice the lxc's have eth0 and not juju-br0 which is only on physical hosts.
<thedac> bdx: so I am still trying to re-create your env. Is there anything you are doing to preseeds (curtin or debian-installer)? Can you confirm you have juju-br0 *inside* the lxcs?
<thedac> So let's try 'juju run --service keystone -- "ifconfig"'
<bdx> thedac: omp...give me 5
<thedac> thanks
<thedac> bdx: I need to head out shortly. To answer your questions in the bug: we have not found any conceptual problem, but we have not yet identified what is causing the problem in your environment. I guess the next step is to post keystone juju logs for us to analyze.
<bdx> thedac: I'll update you with these things tonight at some point
<bdx> shoot....after an upgrade to 1.24.7 I cant get lxc units to get past allocating
<bdx> i'll revert to 1.24.6 and report back
<bdx> ok...for some odd reason when I upgraded only juju was upgraded, not juju-core...things are moving along quite nicely now
#juju 2015-10-23
<gennadiy> hi all, i'm going to publish my bundle to juju charm store to user namespace. i pushed it to launchad repo. waited 1 hr but it haven't appeared in charm store yet
<gennadiy> my repo - https://code.launchpad.net/~tads2015dataart/charms/bundles/tads2015-demo/bundle
<gennadiy> do i need to wait for any review?
<rick_h__> gennadiy: give it one more hour please
<rick_h__> gennadiy: oh looks like you did
<rick_h__> gennadiy: looking
<gennadiy> i made some changes in bundle. i have added quotes for gui position attributes
<rick_h__> gennadiy: ah, you're caught in our corner case atm. sec I'll get you a quick instruction to work around it
<rick_h__> gennadiy: ok, so you've got a file named 'bundle.yaml' but your bundle is in the older format. so please rename that file 'bundles.yaml' with an (s) and you should be good
<gennadiy> @rick_h__ also i have a question about the juju-gui service in the bundle. my charm depends on this service and i need to add a relation between them. but juju bundle proof returns an error if the bundle contains a juju-gui service
<rick_h__> gennadiy: yes, bundles are deployed through the gui, so it has to exist first to send the bundle to be deployed.
<rick_h__> gennadiy: so if there's a conflict it'll error there
<rick_h__> gennadiy: you can try renaming it to something else, juju-gui-2 or the like, but not sure if that works.
<rick_h__> gennadiy: there's WIP to remove that restriction and deploy bundles straight from the juju cli, but it's not ready yet.
<gennadiy> not sure that it's correct way to rename service. in this case we will have 2 different juju-gui :)
<rick_h__> gennadiy: yes, I'm trying to think of a way around it. Because the user finds the bundle in the store and hits deploy, they've already got a GUI
<gennadiy> it will not be a problem
<rick_h__> gennadiy: unfortunately I don't see any way around it at the moment.
<rick_h__> no one's related to the GUI before so it's just not come up heh
<rick_h__> gennadiy: what does it do with the gui that it's related to it? /me is curious
<gennadiy> another question: if i push a bundle with a `juju bundle proof` error - will it be published to the store?
<gennadiy> we are trying to implement autoscale. and we use the juju api to increase the count of units
<gennadiy> so i need to know ip address of juju api
<rick_h__> gennadiy: no, it will fail to ingest
<gennadiy> one more question: do we have some page with review errors? i see http://review.juju.solutions/ in the head of page. but i can't find my charms/bundles there
<rick_h__> gennadiy: for the juju api endpoint can you send an email to the juju mailing list about wanting to do that?
<rick_h__> gennadiy: long ago another wanted to do this and there were lots of ideas as to how it might be done in the charm's hook context.
<rick_h__> gennadiy: but I'm having trouble finding it in gmail atm.
<rick_h__> gennadiy: so that's the review queue to be reviewed and a recommended charm. You have to follow manual steps to go through that and meet stricter requirements.
<rick_h__> gennadiy: see https://jujucharms.com/docs/stable/authors-charm-store#recommended-charms
<gennadiy> clear
<gennadiy> it would be good to have some page with automatic review results too.
<gennadiy> because today i have some issues with charm updates
<rick_h__> gennadiy: yes, we're moving to a new model for the charms/bundles that removes the pulling from launchpad and allows users to directly upload to the store
<rick_h__> gennadiy: so you'd get immediate feedback on issues and not have to wait for automated machinery to come around and pick things up
<gennadiy> cool
<rick_h__> gennadiy: we've currently got an 'older' deprecated system that's going to EOL in Dec that's causing most of the current delays
<rick_h__> as the two systems (old and new) have to try to keep in sync so they wait for each other and such
<gennadiy> do you have the possibility to check the state of the latest commit of this charm - https://code.launchpad.net/~tads2015dataart/charms/trusty/telscale-restcomm/trunk?
<rick_h__> so come Dec we'll kill the old system and hopefully have new tools for direct uploads
<rick_h__> gennadiy: sure thing, looking
<gennadiy> store shows 2 revisions only - https://jujucharms.com/u/tads2015dataart/telscale-restcomm/trusty
<gennadiy> also do i need to change revision number manually before publishing?
<rick_h__> gennadiy: it takes about 2hrs for both systems to get into sync. Looks like that was updated 1hr ago. Give it a bit more and watch the file here to see your new config and you'll know it's pulled in. https://api.jujucharms.com/charmstore/v4/~tads2015dataart/trusty/telscale-restcomm-1/archive/config.yaml
<rick_h__> gennadiy: the charm should come in fine though, no issues I can see.
<gennadiy> clear, i will wait 2 hrs in the future
<gennadiy> thanks a lot for your help
<rick_h__> gennadiy: very sorry, it's painful
<rick_h__> gennadiy: the folks are working on the new stuff as fast as they can so when we can kill the old system it'll be a ton better.
<rick_h__> lazypower: <3 the video and blog post. you're my hero man. That's awesome stuff. great work katco wwitzel3 natefinch ericsnow http://blog.dasroot.net/2015-charming-2-point-oh.html
<lazypower> :) Happy to help
<lazypower> all the credit goes to the people who did the work, i just blog about it
<rick_h__> lazypower: pulls together nicely
<Icey> is there a way to get juju to deploy SSD EBS backed AMIs instead of traditional EBS backed?
<Icey> on AWS
<Icey> in essence, can I specify the AMI ID to deploy?
<Icey> oh god, apparently the AMI we're deploying should be an SSD EBS backed instance but we're deploying it with a standard type EBS volume
<urulama> hey, any pointers on how to set private simplestream, i'm getting:
<urulama> read metadata index at "http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson"
<urulama> 2015-10-23 12:45:26 DEBUG juju.environs.simplestreams simplestreams.go:433 skipping index "http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson" because of missing information: index file has no data for cloud {RegionOne http://10.5.0.195:5000/v2.0} not found
<rick_h__> Icey: I think the 1.25 release with storage support lets you do that. see https://jujucharms.com/docs/master/storage and search for ebs-ssd
<rick_h__> Icey: the 1.25 pending release I should say
<Icey> thanks
<Icey> I'm so looking forward to the storage stuff but it isn't really what I'm talking about, I mean the root disk stuff, not added storage
<Icey> like: '--constraints "root-disk=250G cpu-cores=2 root-disk-type=ssd"'
<Icey> I know root-disk-type doesn't exist
<Icey> but it illustrates what I'm wanting
<rick_h__> Icey: is there an instance type that's ssd backed? maybe have to go that route. I'm not sure if there's another work around. Might be worth an email to the juju list to see if anyone's got a trick up their sleeve
<Icey> the AMI we're using _should_ be using an SSD backed EBS for the root drive
<Icey> somehow, we're initializing it with a standard EBS
<Icey> rick_h__,
<rick_h__> Icey: oh, really not sure then :/
<Icey> how hard is it to setup an aws environment that will let you colocate charms? For example I may want a cache host that runs redis and memcached on the same host instead of two separate hosts
<marcoceppi_> Icey: juju deploy --to lxc:<machine-#>
<marcoceppi_> well,
<Icey> and if I wanted to do something like that with a bundle :) ?
<marcoceppi_> Icey: you can
<Icey> spread a bundle around on x machines with LXC?
<marcoceppi_> yup
 * marcoceppi_ gets example
<rick_h__> Icey: marcoceppi_ https://jujucharms.com/docs/1.24/charms-bundles#service-constraints-in-a-bundle shows the lxc bit
<rick_h__> sorry this one https://jujucharms.com/docs/1.24/charms-bundles#bundle-placement-directives
<marcoceppi_> Icey: https://gist.github.com/marcoceppi/58a5598f038fda5fc8cd
<Icey> more broadly, say I wanted to deploy an existing bundle with colocated services: https://jujucharms.com/rails-example-scalable/6
<marcoceppi_> Icey: so, that's something you can do in the GUI, which allows you to modify placement before execution
<marcoceppi_> Icey: so, bundle -> gui, modify placement, commit, then you can export the new bundle
<Icey> gotcha
<marcoceppi_> rick_h__: correct me if I'm wrong, placment and pre-commit bundles exist in gui now, right?
<rick_h__> marcoceppi_: yes
<marcoceppi_> \o/
<narindergupta> arosales: jamespage: Nuage marked their charms as fix committed. But i think they need to create a new charm for neutron-api. I have sent an email, copied to Nuage and me, in case that's not the case.
<gennadiy_> @rick_h__ are you still here? my bundle is still not visible in charm store - https://code.launchpad.net/~tads2015dataart/charms/bundles/tads2015-demo/bundle , my page in charmstore - https://jujucharms.com/u/tads2015dataart/
<rick_h__> urulama_: can someone help gennadiy_ please? ^
<urulama> gennadiy_: hey
<gennadiy_> hi
<gennadiy_> i have published my bundle 3 hrs ago
<gennadiy_> but it's still not visible in charm store. seems something wrong with it.
<urulama> gennadiy_: bundles are published quicker, if they pass validation
<gennadiy_> but i checked it with juju bundle proof. everything was ok
<urulama> gennadiy_: ok, let me check
<urulama> gennadiy_: yes, only {"Id":"cs:~tads2015dataart/trusty/telscale-restcomm-2","PublishTime":"2015-10-23T12:16:21.185Z"},{"Id":"cs:~tads2015dataart/trusty/monitor-server-1","PublishTime":"2015-10-23T10:46:30.363Z"} were published
<urulama> gennadiy_: i'll have to check the logs
<jrwren> gennadiy_: bundle verification failed: ["service \"calls-consumer\" is subordinate but has non-zero num_units","service \"conference-call\" is subordinate but has non-zero num_units","service \"monitor-agent-mesos\" is subordinate but has non-zero num_units","service \"monitor-agent-mesos-master\" is subordinate but has non-zero num_units","service \"sms-feedback\" is subordinate but has non-zero num_units"]
<jrwren> gennadiy_: subordinates should have num_units: 0, or not that property at all.
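A minimal bundle fragment illustrating jrwren's point. "calls-consumer" is one of the subordinates named in the validation error; its charm URL here is hypothetical, and the layout assumes the bundle format of that era (a top-level `services` map):

```yaml
services:
  telscale-restcomm:
    charm: cs:~tads2015dataart/trusty/telscale-restcomm-2
    num_units: 1        # principal services declare a unit count
  calls-consumer:
    charm: cs:~tads2015dataart/trusty/calls-consumer   # hypothetical URL
    # subordinate: omit num_units entirely, or set it to 0
```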
<arosales> narindergupta: so should we wait for the neutron-api charm before we review the Nuage charm?
<narindergupta> arosales: James mentioned in PM that it's not needed. So you can go ahead and review.
<narindergupta> arosales: he mentioned there's no need to follow the new design currently, as they already integrated a few SDNs into neutron-api as well, like Calico
<arosales> narindergupta: thanks for the ping
<Icey> can somebody help me out with an interface? I'm writing a new simple interface but my charm doesn't seem to want to use it?
<jrwren> Icey: have it listed in metadata.yaml ?
<Icey> aye (I think?)
<Icey> https://gist.github.com/ChrisMacNaughton/12fe21abab4ddc3a880d
<jrwren> Icey: i assume you mean relation interface?
<Icey> I'm writing the reactive interface for influxdb
<Icey> or at least, I'm trying to
<jrwren> Icey: and some influxdb charm provides influxdb interface?
<Icey> https://jujucharms.com/u/chris.macnaughton/influxdb
<Icey> need to change my name in the charm
<Icey> but I'm trying to build an interface like: http://interfaces.juju.solutions/
<jrwren> oh that kind of interface. i don't know about those. sorry
<Icey> yeah, trying to avoid any actual hooks in this charm, since it's literally just dropping some html under apache
<Icey> but it needs the config for that database
<marcoceppi_> Icey: I can try to help you out
<Icey> want to do a hangout for a min?
<marcoceppi_> Icey: sure
<Icey> https://plus.google.com/hangouts/_/canonical.com/can-i-make-a-name-for-this-hangout-that-is-longer-than-the-one-marcoceppi-made-earlier-to-talk-about-ceph-benchmarking-things-so-we-can-figure-out-interfaces
<Icey> hyphens do make that name easier to read :)
<Icey> cory_fu: what if I want to embed a tgz in a charm using the apache-php layer instead of having it install from a remote source?
<bdx> hey whats going on everyone? Can I get a review or two on the MR's associated with https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1509267
<mup> Bug #1509267: Ceph-relation-joined hook error <ceph> <nova-compute> <openstack> <storage> <nova-compute (Juju Charms Collection):New> <https://launchpad.net/bugs/1509267>
<bdx> thanks
<marcoceppi_> cory_fu bcsaller any ideas on this? http://paste.ubuntu.com/12905838/
<marcoceppi_> ugh, nvm. Apparently, 'layer: basic' is not the same as 'layer:basic'
<marcoceppi_> is that a bug or a feature?
<bcsaller> same as  cs:foo vs cs: foo
<marcoceppi_> bcsaller: makes sense, thanks
 * marcoceppi_ can imagine adding lint rules to charm proof for layers
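For anyone hitting the same thing: in a layer's `layer.yaml`, `layer:basic` is a single `scheme:name` token, while an unquoted `layer: basic` is YAML mapping syntax and parses to something entirely different. A sketch of a correct includes list, assuming the layer.yaml format charm-tools consumes:

```yaml
includes:
  - 'layer:basic'       # one string token: scheme "layer", name "basic"
  # - layer: basic      # wrong: YAML parses this as a {layer: basic} mapping
```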
<ejat> marcoceppi_: http://paste.ubuntu.com/12906168/
<ejat> btw .. how r ya ?
<ejat> i've tried using juju resolved
<ejat> retry .. but nothing ... :(
<marcoceppi_> ejat: I'm not sure about that, but try cory_fu kwmonroe and admcleod-
<marcoceppi_> they may be able to help
<ejat> i've tried: juju resolved yarn-master/0 --retry
<ejat> but it's not retrying ...
<kwmonroe> ejat: can you paste the output from 'juju debug-log -i yarn-master/0 -n 100'
<kwmonroe> *pastebin
<ejat> http://paste.ubuntu.com/12906269/
<kwmonroe> hm.. a problem importing jujuresources.. gimme a sec
<ejat> kwmonroe: ok
<kwmonroe> ejat: let's go back farther.. pastebin the output from 'juju debug-log -i yarn-master/0 -n 1000 | grep jujuresources' to see if we can see jujuresources getting installed
<ejat> http://paste.ubuntu.com/12906441/
<cory_fu> kwmonroe: Probably should have used --replay instead of -n 1000
<cory_fu> ejat: I take it you ran `juju resolved` several times?
<ejat> i tried clicking in juju-gui
<ejat> :(
<ejat> my bad
<ejat> cli 2 times
<cory_fu> ejat: No worries.  But we do need the first error from the log.  Can you look through `juju debug-log --replay -i yarn-master/0 | less` and get us the very first traceback error?
<cory_fu> marcoceppi_: Whatever happened to that "juju share" or "juju mate" project, or whatever it was called?
<cory_fu> Would be very useful here.  :)
<cory_fu> ejat: I don't suppose you've used the dhx plugin before, have you?
<ejat> http://paste.ubuntu.com/12906522/
<ejat> cory_fu: haven't used dhx before
<ejat> u want me to install the plugin ?
<cory_fu> ejat: That's definitely the right error.  There ought to be a few lines above that that give info about what caused the apt-get install to fail.
<ejat> u want the lines before the traceback?
<cory_fu> Maybe 5 lines before, yeah
<ejat> http://paste.ubuntu.com/12906609/
<cory_fu> I've never seen that "packages cannot be authenticated" before.
<kwmonroe> marcoceppi_: are we back in your wheelhouse yet?
<kwmonroe> ejat: can you try "juju ssh yarn-master/0 'sudo apt-get -y install python-pip'"
<kwmonroe> and see if you get the same unauthenticated pkgs warning
<ejat> E: There are problems and -y was used without --force-yes
<kwmonroe> ejat: are you using any http or apt proxies in your environment?
<ejat> nope
<ejat> my environment at Azure
<cory_fu> From this http://ubuntuforums.org/showthread.php?p=7001019#7001019 it seems like it might be an issue with the debian-keyring.
<cory_fu> No idea what might have caused it, though
<kwmonroe> ejat: paste output from this: juju ssh yarn-master/0 'apt-key list'
<kwmonroe> and we'll compare keys
<ejat> http://paste.ubuntu.com/12906786/
<Icey> can I check the current state (eg: 'database.available') from within debug-hooks for a charm?
<cory_fu> Icey: Yes.  You can use the charms.reactive CLI tool and call get_states
<cory_fu> Icey: $CHARM_DIR/bin/charms.reactive get_states
<cory_fu> I think that's the right syntax
<Icey> $CHARM_DIR is the location of my current charm, which has no charms.reactive (the charm itself is in python), do I need to grab charms.reactive to look at state in bash?
<cory_fu> Icey: If you're using the basic layer, it should include charms.reactive by default
<ejat> is the key differ?
<kwmonroe> ejat: those are the same keys i have.. i was hoping they'd be different (though i'm not sure how they would be)
<cory_fu> Icey: Oh, there was a bug in charm-tools that it didn't include the CLI tools
<ejat> :(
<kwmonroe> ejat: long shot, but try this:  juju ssh yarn-master/0 'sudo apt-key update && sudo apt-get update && sudo apt-get install -y python-pip'
<Icey> -_- cory_fu
<kwmonroe> ejat: and see if you still get the "E: There are problems" message
<cory_fu> Icey: If you do a manual `pip install charm.reactive` then you'll get the CLI tool
<cory_fu> Sorry, charms.reactive
<kwmonroe> it's a long shot because i don't think "apt-key update" is gonna do anything, but worth a try i think
<ejat> its installing \0/
<Icey> thanks cory_fu
<cory_fu> ejat, kwmonroe: My guess is some sort of transient network error causing the packages to be corrupted.  And once the apt-get failed, it was just trying to use pip which wasn't installed
<ejat> should i rerun resolved?
<kwmonroe> hmph.. weird.  yeah ejat:  juju resolved -r yarn-master/0
<cory_fu> ejat: Actually, you still need to get jujuresources installed
<cory_fu> ejat: I'd recommend doing this: juju run --unit yarn-master/0 'hooks/install'
<marcoceppi_> kwmonroe cory_fu unauthenticated packages means somethings up with apt. Should just be able to run sudo apt-get update on the node to sort it out
<cory_fu> I think that will redo the important bit and get jujuresources installed
<cory_fu> marcoceppi_: If only you'd said that 20 minutes ago
<ejat> http://paste.ubuntu.com/12907066/
<ejat> ?
<ejat> error installing the hook
<cory_fu> ejat: I think that's a difference in the PATH or something between `juju run` and a proper hook context.  It looks like it got past the pre_setup bit that I was concerned about.  You should be able now to use `juju resolved --retry` on the unit, I think
<ejat> http://paste.ubuntu.com/12907214/
<ejat> \0/
<ejat> https://jujucharms.com/apache-flume-twitter/trusty/3#revisions
<ejat> i cant test
<cory_fu> What do you mean, you can't test?
<ejat> To verify this charm is working as intended, SSH to the flume-hdfs unit, locate an event, and cat it:
<cory_fu> Are you saying that you cannot ssh into flume-hdfs?  Or that no files are showing up in HDFS?
<ejat> can ssh
<ejat> no file/directory
<cory_fu> ejat: I assume you set your twitter access credentials on the flume-twitter service?
<ejat> secret.yaml
<ejat> yes
<kwmonroe> ejat: assuming your creds are set, you can "juju ssh flume-twitter/0" and have a look through /var/log/flume-twitter logs
<kwmonroe> don't pastebin those because i'm not sure if they expose your twitter creds or not
<cory_fu> ejat: And if you do `juju get flume-twitter` you see the correct credentials?
<ejat> hdfs dfs -ls /user/flume/events  # <-- find a date
<ejat> ?
<kwmonroe> ejat: how about "hdfs dfs -ls /user/flume"
<kwmonroe> do you see a 'flume-twitter' subdir in there?
<ejat> http://paste.ubuntu.com/12907539/
<kwmonroe> ok ejat, "hdfs dfs -ls -R /user/flume/flume-twitter"
<kwmonroe> looks like the readme incorrectly points to an 'events' subdir for the test, vs 'flume-twitter'
<ejat> :)
<kwmonroe> ejat: i'm about to EOW, but wanted to make sure you have tweets in hdfs.. things looking ok?
#juju 2015-10-24
<ejat> kwmonroe: yes .. thanks a lot
<apuimedo> lazypower: ping
<lazypower> apuimedo, pong
#juju 2016-10-24
<kjackal_> Good morning Juju world!
<icey-travel> I have a user attempting to run juju on MacOS Sierra
<icey-travel> the juju client is crashing and cannot run anything
<kjackal> icey-travel: I do not know how to help you. But do we have a ticket to track this issue? Have others seen this problem or are you the first one?
<icey-travel> kjackal: it's this bug: https://bugs.launchpad.net/juju/+bug/1633495 and it's marked Fix Committed; marcoceppi says that it will release with 2.0.1
<mup> Bug #1633495: Panic MacOS Sierra <osx> <juju:Fix Committed by cox-katherine-e> <https://launchpad.net/bugs/1633495>
<kjackal> icey-travel: thank you
<voidspace> can anyone help SimonKLB in #juju-dev with a question on charmhelpers?
<SimonKLB> we can take it here or in pm, whatever works
<SimonKLB> i've created a merge request for a possible fix to the issue, please see https://code.launchpad.net/~simonklb/charm-helpers/include-empty-config-options/+merge/309105
<deanman> Hi, is it possible to make bootstrap command more verbose?
<anrah_> deanman: Have you tried --debug flag?
<deanman> anrah_: Thanks, that works. i remembered there was such a flag pre-2.0 but just couldn't find it in the newer documentation.
<rick_h_> lazyPower: ping, can you give SimonKLB any tips on testing across a relation  broken/back again setup please?
<rick_h_> lazyPower: how do you all test that in a way that it's triggered, wait the right amount of time, etc.
<lazyPower> rick_h_ : so to ensure i'm understanding the scenario -  We're looking for how to test removing a relationship, and making amulet wait until we're certain the relationship has been fully broken?
<SimonKLB> lazyPower: want to take it in pm or do you want me to spam here? :)
<lazyPower> SimonKLB spam away my friend
<rick_h_> SimonKLB: feel free to spam here
<rick_h_> lazyPower: <3 ty
<lazyPower> this will be useful to others i'm sure
<SimonKLB> so, in amulet there is a relation function that i've been using to check whether or not the relation exists
<SimonKLB> it throws an exception when the relation is gone
<lazyPower> you know SimonKLB, i've tested the daylights out of the inverse... adding relations. I dont think i've removed a single relation with amulet code
<lazyPower> but, i suspect we can do this like so
 * lazyPower double checks the amulet api docs before banging out a response
<SimonKLB> yea, the problem seem to be adding a relation too quickly after removal
<lazyPower> for reference, thats over at: https://pythonhosted.org/amulet/
<lazyPower> well, this seems implementation specific, but you can wait_for_messages()
<lazyPower> assuming your charm will emit a status message that it's either waiting on said relationship (required) or that it's handled some operation on that relationship (optional)
<SimonKLB> yea that is actually what i did first, but in that case the message showed up too quickly and the relation was still dangling; adding the relation again would simply fail with "this relation already exists"
<SimonKLB> thats when i started using the relation function in amulet
<SimonKLB> https://github.com/juju/amulet/blob/master/amulet/sentry.py#L282
<SimonKLB> using this exception: https://github.com/juju/amulet/blob/master/amulet/sentry.py#L316
<SimonKLB> this worked until recently
<lazyPower> Ah and its a nice and generic exception too
<SimonKLB> yea i checked the message haha
<lazyPower> so it requires a fancy diaper
<SimonKLB> so im not sure what changed, but the problem in this case is not that its not removed yet, but rather that the *-joined hook is not firing again after readding the relation
<lazyPower> hmm, question
<lazyPower> are you absolutely certain the states are being removed that execute in that context?
<lazyPower> and its not a case of state pollution keeping your code from re-executing?
<lazyPower> i recently encountered that; i wasn't removing the .connected and .available states i set during -joined in the -departed phase of the relationship.
<SimonKLB> the interface hasnt changed as far as i know, the available state is being removed on broken/departed
<SimonKLB> the problem is that the joined hook is not being fired and because of that the available state is never activated again
<SimonKLB> doing stuff slowly by hand works fine
<lazyPower> SimonKLB : I think at this point its prudent we file a bug
<lazyPower> we dont have the plumbing in amulet to properly introspect this that i can see
<SimonKLB> yea i talked to rick_h_ quickly and as he said this is a huge async operation
<SimonKLB> so things could be still left dangling on the remote end etc
<lazyPower> it really is, it looks like we've only really accounted for adding relations and then probing that relationship dictionary of values
<lazyPower> we haven't added support for truly knowing if the relationship is gone, and how to identify when that operation is complete
<lazyPower> and then we have the added complexity of why your code isn't re-executing
<SimonKLB> if there was a way to truly know when the relation is gone and cleaned up in juju i think the problems would go away
<lazyPower> well, you have that nice and generic exception
<SimonKLB> it would be neat if the add/remove calls would queue up
<lazyPower> but that seems like a hack
<SimonKLB> yea i meant in juju itself
<SimonKLB> not in amulet
<lazyPower> yeah, similar to the d.setup() routine
<lazyPower> oh
<lazyPower> i was going the opposite direction and saying that lies in the test tooling :P
<SimonKLB> haha, it might be possible for amulet to check it, but im not sure there is any way to probe juju itself for the relation status
<lazyPower> man i need more coffee, i'm typoing at increased rates
<lazyPower> sure there is
<lazyPower> juju has this fancy relationship data structure these days in status output
<lazyPower> we could simply parse that to determine if the relationship has been removed before/after a .relation() operation
<SimonKLB> how well does it reflect the "real" status?
<lazyPower> at least i think we could, thats my over simplified analysis
<SimonKLB> for example executing relation-list could give you a false image of what is really the case
<SimonKLB> or at least that seems to be the case right now
<lazyPower> SimonKLB - lets start by filing a bug against amulet with what you're wanting to do, so we have a captured feature request
<SimonKLB> alright, ill do that
<SimonKLB> but hmm..
<lazyPower> SimonKLB - from there, we'll investigate and file bugs against the other projects
<SimonKLB> its not really appropriate to say that amulet isn't providing real information about the relation, since it's actually correct in that the relation is gone from the client's perspective
<SimonKLB> there is not a problem adding the relation again after the amulet function says that the relation is gone
<SimonKLB> its the fact that the joined hook is not being fired again if you add the relation too quickly that is the issue
<SimonKLB> and that's not really amulet's fault, imo
<lazyPower> yeah but if we dont have a bug to start somewhere, we're not likely to fix the problem.
<SimonKLB> yea but could it be better for me to fill the bug in some other project?
<lazyPower> I won't be upset if the bug doesn't get filed against amulet, if that's what you're asking ;)
<SimonKLB> haha, alright, ill write something to start off with there
<lazyPower> and good shout on testing this SimonKLB
<lazyPower> i have a few cases i can think of we should be testing removals, that we arent. so we'll hit this pretty soon as well i'm sure.
<SimonKLB> that, and this https://github.com/juju-solutions/layer-basic/issues/79 are basically the only things left before i send our charm for review :)
<SimonKLB> so im trying to sort it out as well as i can
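A sketch of the polling approach discussed above: a generic helper that waits until a relation check starts raising, with the amulet call stubbed out. The `sentry.relation(...)` call in the comment is the amulet function SimonKLB linked; everything else here is hypothetical.

```python
import time

def wait_until_relation_gone(check_relation, timeout=60, interval=1):
    """Poll check_relation() until it raises, which (per amulet's
    sentry.relation behaviour) signals the relation has been removed."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            check_relation()   # e.g. lambda: sentry.relation('db', 'mysql:db')
        except Exception:
            return             # the relation is gone from the client's view
        time.sleep(interval)
    raise TimeoutError('relation still present after %ss' % timeout)

# Stubbed usage: the "relation" disappears on the third poll.
calls = []
def fake_check():
    calls.append(1)
    if len(calls) >= 3:
        raise RuntimeError('relation not found')

wait_until_relation_gone(fake_check, timeout=5, interval=0)
print(len(calls))  # 3
```

As the discussion notes, this only confirms the client-side view of the relation; it does not guarantee Juju has finished tearing down the remote end, which is exactly the gap the filed bug is about.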
<jcastro> balloons: good morning!
<balloons> good morning
<jcastro> balloons: I've been on leave for a month, and I'm back and ... want to totally do pure snapped juju with snapped lxd, anyone doing this yet?
<lazyPower> SimonKLB we'll do what we can to help :)
<Baqar> Hey guys. Is there an alternative to PPAs for hosting the .deb packages that are deployed by the charms? One that I can use as a default
<balloons> jcastro, welcome back. Did you try the snap on fedora?
<SimonKLB> lazyPower: created the issue here https://github.com/juju/amulet/issues/154
<jcastro> which one, lxd or juju?
<balloons> jcastro, we need a LXD interface for the juju snap
<balloons> jcastro, juju
<jcastro> balloons: oh ok, so it won't work right now
<balloons> jcastro, it will (should) in devmode
<jcastro> is a lxd interface milestoned?
<balloons> jcastro, no
<balloons> jcastro, but others would also like one
<magicaltrout> make it so \o/
<balloons> the onus is on the consumer to create it
<magicaltrout> jcastro.... build the interface! ;)
 * jcastro tries to find a place to hide
<jcastro> I have a k8s video to make today
<jcastro> balloons: is us-east-2 looking good? I'm debating doing it someplace new and shiny.
<balloons> jcastro, I was playing it it in the other day
<balloons> OH > VA right? :-)
<jcastro> that totally depends on who is peering closest to my ISP.
<jcastro> heh.
<lazyPower> us-central DC's ftw
<lazyPower> hey jcastro
<lazyPower> i have a ton of assets i created for use with my slides that match the branding of jujucharms.com - where would you like me to offload these for others to use? in the talks repo?
<jcastro> yeah, I'm not sure we have an assets directory there though
<jcastro> might need to make one?
<lazyPower> let me push these to dropbox and you tell me if they're worth having in there
<jcastro> either someone has been handling PRs while I was gone, or no one has been submitting things, heh
<lazyPower> a little of column a, little of column b
<jcastro> Baqar: I wonder if resources would help you here
<jcastro> Baqar: is it like workload debs of the service or is it something like security updates for the OS or something?
<deanman> I'm running into some issue with a local VM bootstrapped with "localhost". The controller gets bootstrapped just fine and i can see the LXD instance running, but when trying to deploy a service the new machine never gets to start. I'm suspicious of proxy settings; without a proxy the same workflow works just fine. Any hints how to see what's happening on subsequent LXDs not booting?
<deanman> For some reason also juju debug-log won't show any output
<Baqar> jcastro thanks. Looking at that.
<Baqar> deanman on the hypervisor search for relevant logs in /var/log/juju/
<lazyPower> jcastro - late on this one, but here's the dropbox link i promised earlier  https://www.dropbox.com/sh/bc7ihzr6hfztfwy/AAAB9Vfqsql9VesKpJqEi46xa?dl=0
<lazyPower> cory_fu - when you get a chance, it would be good to get your opinions on https://code.launchpad.net/%7Esimonklb/charm-helpers/include-empty-config-options/+merge/309105
<lazyPower> marcoceppi cc'd ^
<lazyPower> i'm about to land a hack-in-place work around to this issue, but here's the root that simon has identified
<SimonKLB> lazyPower cory_fu this bug is basically asking for the opposite, that unset config values should be removed from hookenv.config() https://bugs.launchpad.net/charm-helpers/+bug/1630706
<mup> Bug #1630706: Config options remain in hookenv.config() even when unset <Charm Helpers:New> <https://launchpad.net/bugs/1630706>
<SimonKLB> before you merge you should make sure that it won't be a problem
<SimonKLB> cmars should probably get in on the discussion :)
<lazyPower> yeah, it's a persistence cache issue
<lazyPower> caching, oh you gloriously misunderstood thing
 * lazyPower hugs it
<cmars> SimonKLB, LGTM. i'd typically use `if config.get('key'):` anyway, so this works fine. thanks!
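To make cmars' compatibility point concrete, here is a sketch with a plain dict standing in for charmhelpers' hookenv.config() (the option names are made up): with the proposed change an unset option shows up with a value of None, and `config.get(...)` treats that the same as a missing key.

```python
# Plain dicts standing in for hookenv.config() before and after the change.
cfg_without_empties = {'port': 8080}               # unset options omitted
cfg_with_empties = {'port': 8080, 'debug': None}   # unset options -> None

def enabled(cfg, key):
    # Truthiness collapses "missing" and "unset" into the same answer,
    # so charm code written with .get() keeps working either way.
    return bool(cfg.get(key))

print(enabled(cfg_without_empties, 'debug'), enabled(cfg_with_empties, 'debug'))
# False False
```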
<cory_fu> PR on juju-deployer https://code.launchpad.net/~johnsca/juju-deployer/series-fix/+merge/309143 to address failures like http://8.19.32.215:8000/test/ca4e3cae9a7a40b3ba04fc368cae1664 where it tries to deploy with the wrong series
<cory_fu> kwmonroe, kjackal, petevg, marcoceppi: ^
<petevg> cory_fu: +1 for working code that does what it's supposed to do. I think that chunk-o-if-statements is a great candidate for moving into a utils module in python-libjuju at some point -- it would be nice to have one and only one place that implements charm store url parsing in Python ...
<cory_fu> petevg: Very much agreed.  It would be great to have deployer phased out by libjuju
<petevg> +1 :-)
<deanman> Baqar: You mean to ssh into the controller and check the logs there?
<hackedbellini> hi guys! I'm using juju 2 here. I deployed the landscape-client charm and added a relation between it and some services that are running on xenial machines. But it is failing to add a relation on services running on trusty machines: ERROR cannot add relation "landscape-client:container stoqwiki:juju-info": principal and subordinate applications' series must match
<hackedbellini> what can I do to workaround that? I really wanted to add the landscape-client to those trusty machines
<rick_h_> hackedbellini: you'll need to add it a second time with the trusty series
<rick_h_> hackedbellini: and relate that on it's own
<hackedbellini> rick_h_: you mean I have to duplicate the charm?
<lazyPower> hackedbellini - correct, you'll need to deploy it with --series=trusty
<lazyPower> hackedbellini - on the gui under the charm config, you can select the series the subordinate should be if its a multi-series subordinate
<lazyPower> a subordinate charm deployed into a model can only occupy a single series boundary, if you wish to have subordinates related to both, you will have to duplicate the charm one for each series you wish to deploy.
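As a sketch, a bundle following the advice above would carry one copy of the subordinate per series. The application names are hypothetical, and this assumes a bundle format with a per-service `series` field:

```yaml
services:
  landscape-client-xenial:
    charm: cs:landscape-client
    series: xenial
  landscape-client-trusty:
    charm: cs:landscape-client
    series: trusty
```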
<hackedbellini> lazyPower: hrm ok. That is really sad since I have to duplicate every change I do to the subordinate charms, but I'll accept the solution for now
<hackedbellini> also, why can't I remove a subordinate relation?
<lazyPower> hackedbellini - you can, the GUI angrily denies you though.
<hackedbellini> oh, nevermind. I couldn't from the gui but I could make it work on a terminal
<lazyPower> hackedbellini - if you remove the relation on the CLI it works as expected.
<deanman> Anyone using juju 2 behind proxy? It seems that I'm able to bootstrap just fine but deployment of services fail as well as several other juju commands like juju debug-log and ssh
<lazyPower> deanman - yep, there's model proxy configurations you need to set
<lazyPower> deanman juju model-config and juju model-defaults should help you
<lazyPower> deanman - also be aware, that there are some charms that haven't been properly tested behind a proxy, and some of those charms may require bugs to be filed to ensure the charm author is aware of network limited environments and that they are using commands that may/may-not require proxy flags if defined.
<deanman> lazyPower: I'm passing proxy configuration during bootstrap command with --config and i'm setting all three options, http/https/noproxy
<lazyPower> deanman - sure, did that land in your model-defaults configurations?
<deanman> Well it must have, because the controller was able to update/upgrade and download any required software
<kwmonroe> cory_fu: MP is +1 from me too, though i'm curious your thoughts on machine series constraints. if i define charm: cs:ubuntu; to: 1 and define machine: 1: series: trusty, should there be some glue that adds appropriate series data if the "to" machine defined it? i'm not sure if you'll have machine data at Charm.from_service() -- just curious if you think that's what *should* happen.
<lazyPower> deanman but we're interested in the units, not the model-controller at this point. Can you humor me and check model-config / model-defaults to see if your http/https/noproxy is configured there?
<deanman> lazyPower: Sure, let me check this
<cory_fu> kwmonroe: My inclination is to let it break.  If you did that on the CLI, I'm fairly certain that Juju would reject the deploy command because you didn't give the series (even though it could infer it from the machine, as you say)
<kwmonroe> fwiw deanman, i do this on my juju2 stuffs behind a proxy.  as lazyPower mentioned, bootstrap --config sets the controller config, --model-default sets model config:
<kwmonroe> juju bootstrap --config http-proxy=$http_proxy --config https-proxy=$https_proxy --config no-proxy=$no_proxy --model-default http-proxy=$http_proxy --model-default https-proxy=$https_proxy --model-default no-proxy=$no_proxy
<kwmonroe> ^^ in that case, i have my proxies defined as envvars
<deanman> kwmonroe: well it must be that then, it wasn't clear to me that --config would pass these to the controller configuration only
<kwmonroe> ack cory_fu.  works for me.  i much prefer the explicit cs:series/charm syntax in bundles anyway.
<aisrael> lazyPower: Are the charmbox docs up to date wrt juju 2 and lxd provider? it's linking ~/.juju instead of .local/share/juju
<lazyPower> aisrael - MASTER is known to be out of date as its still targeting the 1.25 release
<lazyPower> aisrael  if you swap to devel it should be more current
<lazyPower> aisrael mbruzek started cleaning that up last week, we should have some new docker image tags this week/next week, as 2.0 went GA and we're lagging behind on getting that work complete
<aisrael> lazyPower: gotcha, thanks
<deanman> kwmonroe: Out of curiosity and while i'm trying to test your suggestions, with only --config for the controller shouldn't it respond to juju debug-log afterwards?
<kwmonroe> deanman: i'm not 100%, but i think all models need to send their log data to the controller model for 'debug-log' to pick it up.  so if you had a model that wasn't capable of reaching the controller, that might cause debug-log problems.  you could verify that debug-log is at least working on the controller model with "juju debug-log -m controller"
<deanman> kwmonroe: I think my confusion came from https://jujucharms.com/docs/2.0/models-config. Is the following sentence right? "These values can also be passed to a new controller for use with the default model it creates. To do this, use the --config argument with bootstrap:"
<lazyPower> deanman - i think that was the behavior at one time, but it looks to be no longer the case.
<lazyPower> rick_h_ - can you confirm/deny the alleged?
 * rick_h_ reads backlog
<kwmonroe> i'm checking on that too.. bootstrap is taking its sweet time..
<deanman> lazyPower: no worries, as long as we correct it for future newbies like me ;-)
<rick_h_> deanman: hmm, so I would expect the proxy config set on bootstrap to carry through and be inherited to all models
<rick_h_> deanman: you can validate that with the model-config command I believe.
<rick_h_> deanman: it shows all values and if they're inherited or not
<rick_h_> deanman: what would be good would be the output of those hanging commands with the --debug flag
<rick_h_> deanman: to see where it's getting and hanging
<rick_h_> deanman: we have a bunch of users behind proxies and some tests, so I'm fairly confident that it should work, but as you know, the config can get tricky because of the various ways it can get set up
<deanman> Ok, so I'm running again my juju2 setup with proxy, again using only --config to pass proxy, bootstrap completes just fine
<deanman> model-config indeed reports that these proxy settings are inherited in model
<rick_h_> deanman: k, and can you run "juju model-config" from the default model?
<rick_h_> deanman: k
<rick_h_> deanman: and do some commands like 'juju status' work?
 * lazyPower scratches head
<deanman> juju status works fine, juju debug-log just shows nothing
<rick_h_> deanman: k, can you run debug-log with the --debug
<rick_h_> ?
<deanman> it is stuck right after API hostnames unchanged - not resolving
<deanman> juju ssh 0 does not work either, it gives an "ERROR machine 0 not found"
<rick_h_> deanman: k, juju switch controller
<rick_h_> deanman: and then try it
<rick_h_> deanman: in juju2 each model is independent and so the machine number needs to exist in the currently active model
<deanman> after switching debug-log works fine
<deanman> and ssh
<rick_h_> deanman: k, so this is a case that to do controller-level operations you need to be on the controller
<rick_h_> deanman: the default model doesn't have anything worth talking about or to
<deanman> ooooohhhhh......
<deanman> aha! moment
<rick_h_> deanman: <3 let me know if you run into any other issues. Thanks for picking up and learning the new stuff in juju2. We hope it works well for you.
<rick_h_> lazyPower: kwmonroe ^ fyi
<hackedbellini> another question. My lxd deployment is using the .localdomain domain on the network. How can I change it to something else (say .foobar)?
<hackedbellini> I couldn't find the configuration for that
<lazyPower> rick_h_ sorry about that, i clearly misunderstood what was happening.
<deanman> rick_h_: ok switching back to default model, model-config shows proxy settings just fine, juju deploy wordpress executes, but new lxd never gets created. How can i debug that?
<deanman> rick_h_: basically the machine status from pending goes to down and then everything hangs, charm is never deployed
<rick_h_> deanman: check juju status --format=yaml
<rick_h_> deanman: guessing something isn't right with juju talking to the lxd endpoint?
<rick_h_> deanman: might have to check debug-log and the lxd logs to see if there's something up there.
<rick_h_> hackedbellini: sorry, there's not a custom domain flag in juju for that.
<kwmonroe> fwiw deanman rick_h_, using --config during bootstrap alone will not carry forward to new models.. so the docs are correct in the --config will take care of the 'default' model, but just be aware you'll need --model-default if you want those proxy settings on other models.  http://paste.ubuntu.com/23375527/
<rick_h_> kwmonroe: :( that sucks
<deanman> rick_h_: "agent is not communicating with the server" Could it be that proxy is not forwarded after all to model as kwmonroe mentions ?
<rick_h_> deanman: what are you bootstrapped against?
<rick_h_> deanman: is this maas, or some cloud or?
<deanman> rick_h_: localhost
<kwmonroe> deanman: what's your no_proxy setting?  does it include the lxd network?
<rick_h_> deanman: so you need a proxy on localhost? So the thing here would be that lxd is running on localhost or some address.
<rick_h_> deanman: so the question is, does this need to be not proxied?
<deanman> kwmonroe: 127.0.0.1,localhost,10.0.3.1. Last one is the address i configured for lxdbr0
<deanman> rick_h_: I fail to understand your question. my setup is networking behind a corporate network, using a mac as host and a xenial guest which has proxy configuration. To make the juju localhost setup work, don't i also have to pass the proxy to the model so that every lxd created (host->guest->lxd) is able to retrieve stuff from the net?
<deanman> kwmonroe: Is that no_proxy config of mine right? I have a pre-configured lxdbr0 file which i provision before bootstrapping so the range is chosen by me.
<rick_h_> deanman: yea, I don't run the proxy much so I'm not sure tbh. I'm guessing there's something in there that's got juju trying to proxy vs looking for the local lxd api endpoint, but I'm probably guessing wrong there
<rick_h_> deanman: best thing is to hit the logs and see
<rick_h_> deanman: ssh to the host and check the main host lxd logs and then the debug-log or go look at the /var/log/juju/xxxx.log on the machine 0
<deanman> rick_h_: that's the source of my problem, debug-log does not give any output on default model
<rick_h_> deanman: right, but don't look there.
<rick_h_> deanman: the controller handles everything so the logs on there are the important ones
<deanman> rick_h_: you mean switch to controller ?
<deanman> ok let me have a look, maybe i could find something more useful for this debugging conversation
<deanman> rick_h_: should i pastebinit ?
<rick_h_> deanman: sure
<kwmonroe> deanman: fwiw, i "no_proxy" the *entire* lxdbr0 subnet (see my noproxy config in the earlier pastebin: http://paste.ubuntu.com/23375527/).  just guessing here, but what you have probably allows containers to talk to the controller (10.0.3.1 is the controller), but perhaps it's not enough for the controller to talk back to the container (which would be a 10.0.3.x)
<kwmonroe> deanman: i do this in my env with:  export no_proxy=`echo localhost 127.0.0.1 10.245.65.108 10.0.3.{1..255} | sed 's/ /,/g'` <-- note the 10.245.65.108 is my own machines eth0 address, as i don't want communication to/from myself and the controller/containers proxied either.
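kwmonroe's one-liner above can be sketched as a standalone snippet (bash, since it relies on brace expansion); the host address 10.245.65.108 and the 10.0.3.0/24 lxdbr0 subnet are the values from this conversation and should be swapped for your own:

```shell
#!/usr/bin/env bash
# Build a no_proxy list covering the entire lxdbr0 subnet, per kwmonroe's
# suggestion: 3 fixed entries plus all 255 host addresses in 10.0.3.0/24.
# 10.245.65.108 stands in for the host machine's own eth0 address.
no_proxy=$(echo localhost 127.0.0.1 10.245.65.108 10.0.3.{1..255} | sed 's/ /,/g')
export no_proxy
# The result has 258 comma-separated entries, ending in 10.0.3.255:
echo "$no_proxy" | tr ',' '\n' | wc -l
```

This covers traffic from the controller back to any container address, not just the bridge address itself.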
<deanman> kwmonroe: that could be it, sorry for some reason pastebinit fails with some locale problem. I will try your configuration
<kwmonroe> deanman: wouldn't it be great if we could just trust the internet, get onto ipv6, and get rid of proxies all together?
<kwmonroe> (hint:  yes)
<deanman> kwmonroe: if only then i would be able to focus on how to present juju2 on colleagues instead of fighting this :-)
<kwmonroe> :)
<vmorris> I'm looking for some docs on setting the architecture when bootstrapping into a maas cluster
<vmorris> I have juju cli on s390 and would like to bootstrap into an x86 maas.. but hitting "ERROR failed to bootstrap model: cannot start bootstrap instance: no matching tools available"
<lazyPower> vmorris - i've hit that trying to bootstrap cross-architecture
<lazyPower> specifically i was trying to bootstrap ARM from AMD64
<vmorris> lazyPower  yea, that sounds the same.. were you able to work around it?
<lazyPower> vmorris - to make things less obvious, i wasn't able to build tools for that arch locally and use --build-tools.
<lazyPower> not at present, i discovered we don't support 32-bit ARM architecture, which is what i have available to me
<lazyPower> so i promptly abandoned the effort, but i see that might have been premature
<lazyPower> vmorris - i would encourage you to file a bug, and then post on the public mailing list so we can raise awareness of the situation. There might be a clean workable fix that i'm not aware of.
<vmorris> lazyPower: thanks, i'll consider it. If I don't make any progress in the next hour or two I will likely do so
<lazyPower> vmorris sorry i didn't have better news, but i can validate your findings for you
<lazyPower> as i too have encountered that
<vmorris> yes, thanks for that confirmation.. i figured it must be my architecture mismatch, but I expected there would be a way to work around it -- as you mentioned there likely is
<deanman> kwmonroe: i tried setting a full range of ips for no_proxy as per your setup. I could confirm with model-config but still i get the "agent is not communicating with the server". For some reason piping into pastebinit does not give a full URL with my logs so i can share more data.
<vmorris> lazyPower: wouldn't I just run --constraints arch=amd64 during bootstrap? (i'm waiting for a failed bootstrap to exit to try)
<lazyPower> thats worth a try, but i was unable to get that to work
<vmorris> ok
<deanman> kwmonroe: http://pastebin.com/g2M4ZZN3. A lot of "connection refused". This is my model-config http://pastebin.com/2GatAEb3
<deanman> Also "machine-0: 20:00:11 INFO juju.tools.lxdclient no image for ubuntu-trusty found in https://streams.canonical.com/juju/images/releases/" and "machine-0: 20:01:21 ERROR juju.provisioner cannot start instance for machine "1": image not imported!" seem related to my problem.
<vmorris> lazyPower: yeah this works 'like a charm': $ juju bootstrap --constraints arch=amd64 maascloud maascloud-1
<lazyPower> thats *awesome* vmorris
<lazyPower> now i feel the need to go back and beat on my arch problem with a larger hammer
<vmorris> good luck :D
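The workaround vmorris landed on can be collected in one place. This sketch only composes and prints the command rather than running it; juju and the maas cloud are assumed to be configured already, and "maascloud"/"maascloud-1" are the cloud and controller names from the log:

```shell
# Compose the cross-architecture bootstrap vmorris reported working:
# pinning arch=amd64 makes juju select amd64 agent binaries even though
# the client itself runs on a different architecture (s390x here).
cloud=maascloud
controller=maascloud-1
cmd="juju bootstrap --constraints arch=amd64 $cloud $controller"
echo "$cmd"   # juju bootstrap --constraints arch=amd64 maascloud maascloud-1
```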
<deanman> juju status
<lazyPower> deanman : Failed to connect to controller. are you sure its bootstrapped?  (this is a pun ;) and a bad one at that)
<deanman> lazyPower: :-D
#juju 2016-10-25
<junaidali>  /msg NickServ identify 561107
<blahdeblah> junaidali: time to change your password :-)
<junaidali> blahdeblah, lol yeah
<junaidali> a whitespace, ruined me
<junaidali> :D
<blahdeblah> You might want to consider making it 3 or 4 times longer, while you're at it. :-)
<Baqar> junaidali haha
<hloeung> and add a few random characters instead of just all numbers heh
<kjackal> Good morning Juju world
<icey-travel> I'm having an issue bootstrapping on lxd with juju 2, the end of the log is at http://pastebin.ubuntu.com/23377823/
<icey-travel> nevermind, looks like https://bugs.launchpad.net/juju/+bug/1547268 is it, and referenced iptables change fixed it
<mup> Bug #1547268: Can't bootstrap environment after latest lxd upgrade   <2.0-count> <juju:Triaged by rharding> <https://launchpad.net/bugs/1547268>
<deanman> Is there a chat history of this channel persisted somewhere?
<magicaltrout> deanman: https://irclogs.ubuntu.com/2016/10/24/
<kjackal> Hey everyone, I have this strange behavior. When i apt-get install amulet it also brings in the python-jujuclient package which is not "bundletester friendly". This fails: Command (juju api-endpoints -e localhost-localhost:admin/default)
<kjackal> Have you seen this before?
<magicaltrout> bugg@tom-laptop2:~$ sudo apt-get install amulet
<magicaltrout> Reading package lists... Done
<magicaltrout> Building dependency tree
<magicaltrout> Reading state information... Done
<magicaltrout> E: Unable to locate package amulet
<magicaltrout> nope :)
<kjackal> magicaltrout: problem solved!
<magicaltrout> okay... well if I add the ppa lets see what dependencies it brings in
<magicaltrout> yeah i can vouch for you
<kjackal> magicaltrout: I probably have some left over library from juju 1.25
<magicaltrout> i'm sure i used to pip install amulet
<deanman> magicaltrout: Got it, thanks!
<kjackal> magicaltrout: so it seems the apt repo has slightly older versions of jujuclient than pip
<kjackal> I had to apt-get install the jujuclient and then pip upgrade it
<deanman> Any lightweight xenial charm to suggest for quickly debugging environmental issues?
<Baqar> Which juju provider do you use for amulet testing? LXD, AWS, or OpenStack
<Baqar> ?
<vmorris> i've got a maas deployment here, and i was able to bootstrap a juju controller, but when i go to deploy charms it hangs
<vmorris> agent status stays on 'allocating'
<vmorris> and message is 'waiting for machine'
<vmorris> i'm not seeing much in the juju --debug output
<vmorris> where else can i look for what is hanging up?
<mgz> machine-0.log on the controller
<mgz> may be something like a constraint on the machine that means nothing in the maas matches
<mgz> eg constraint on maas name N, and the controller already took that machine
<vmorris> mgz: hmm, okay i'm poring through it now
<rick_h_> vmorris: try juju status --format=tabular to get some additional machine provisioning feedback
<vmorris> okay
<vmorris> things look okay in the machine-0.log until ERROR juju.api.watcher watcher.go:86 error trying to stop watcher: connection is shut down
<mgz> that's also actually fine
<mgz> and it seems the maas provider logging at debug isn't great, I only have 'juju.provider.maas environ.go:1035 started instance "..."'
<mgz> and not any more details than that before
<vmorris> tabular format didn't help with additional info
<vmorris> yeah nothing's really popping out here
<rick_h_> vmorris: sorry, meant format=yaml
<vmorris> rick_h_ mgz: I'm going to pastebin the machine-0 log ERROR messages, if that's interesting
<vmorris> ah okay, let me try this
 * rick_h_ drinks more coffee
<vmorris> nope, nothing more interesting there, machine-status: pending
<vmorris> mgz rick_h_: please see https://gist.github.com/anonymous/a69b42d950075f63876f443701bc37d9
<mgz> vmorris: if you can repro easily, it's probably worth setting provider.maas logging to trace
<vmorris> it runs for a bit, then gets in an error loop that ran overnight, so the latter 80% of that doc isn't useful
<vmorris> mgz ah okay, i can try this
<mgz> as in, `export JUJU_LOGGING_CONFIG="<root>=DEBUG; juju.provider.maas=TRACE"`
<mgz> then bootstrap, then deploy
<vmorris> thanks, okay - - i'll try this morning
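mgz's logging suggestion, gathered as one snippet; the variable is read by the juju client, and the bootstrap/deploy steps are left as comments since they depend on the environment:

```shell
# Raise the maas provider's logging to TRACE for the next bootstrap/deploy
# cycle, as mgz suggests; everything else stays at DEBUG.
export JUJU_LOGGING_CONFIG="<root>=DEBUG; juju.provider.maas=TRACE"
echo "$JUJU_LOGGING_CONFIG"
# then: juju bootstrap ...  followed by  juju deploy ...
```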
<mgz> though... that's a lot of worker restarting
<mgz> more than normal
<mgz> implies the local api server is just very unhappy for some reason, probably worth looking for mongo issues in syslog as well
<jcastro> rick_h_: does juju not try to look in ~/.aws/credentials?
<rick_h_> jcastro: hmm, looking at docs it looks in env vars
<deanman> vmorris: You could also try $juju debug-log -m controller -l INFO --reply . At least that helped me dig something from the logs...
<rick_h_> jcastro: not sure about which file. I'm looking
<jcastro> rick_h_: I'm going to file a wishlist if not
<deanman> --replay*
<rick_h_> jcastro: +1
<jcastro> rick_h_: that's the default config from the AWS tool, I figure we could reuse it
<rick_h_> jcastro:     1. On Linux, $HOME/.aws/credentials and $HOME/.aws/config
<natefinch> jcastro: juju autoload-credentials will slurp it up
<rick_h_> jcastro: is in the doc at least
<jcastro> ah, of course, the snapped juju doesn't have access to that right?
<jcastro> that's probably why
<rick_h_> jcastro: ah, might be
<vmorris> rick_h_ mgz: re-bootstrapped with debugging enabled, trying to deploy again and still not finding anything jumping out at me
<vmorris> first failure that i think might be interesting follows DEBUG juju.apiserver request_notifier.go:140 -> [6] machine-0 95.606348ms {"request-id":39,"response":"'body redacted'"} Singular[""].Claim
<vmorris> ERROR juju.api monitor.go:59 health ping timed out after 30s
<vmorris> then ERROR juju.rpc server.go:510 error writing response: write tcp 127.0.0.1:17070->127.0.0.1:45560: write: broken pipe
<vmorris> would an http/https/ftp_proxy setup in the config.yaml during bootstrap of the juju controller cause this?
<icey-travel> can juju storage be defined in a bundle?
<magicaltrout> you can't define config stuff in a bundle, so i'd be amazed if you could define storage in a bundle
<icey-travel> magicaltrout: of course you can :
<icey-travel> :)
<icey-travel> magicaltrout: https://jujucharms.com/u/canonical-storage/ceph-with-dash defines config for the ceph-osd charm
<rick_h_> magicaltrout: what config stuff can you not define?
<magicaltrout> clearly i've been drinking
 * magicaltrout checks the links
<rick_h_> magicaltrout: anything good? :P
<icey-travel> rick_h_: can we define storage to attach in the bundle?
<jcastro> rick_h_: debug-log just hangs, ideas?
<rick_h_> icey-travel: so I thought that we could, but I think the bundle can only use already created/defined pools
<magicaltrout> ah you can set-config, i was under the impression you couldn't store config options within a bundle definition
<rick_h_> jcastro: switch to the controller
<magicaltrout> that would override the underlying charm config defaults
<icey-travel> rick_h_: generally speaking, the bundle is used for demos on AWS / GCE / OpenStack
<rick_h_> icey-travel: so I think you'll need to script the creation of the pools, but the bundle can then use those pools. Let me check the code/docs quick
<rick_h_> icey-travel: magicaltrout looks like I lied on the storage end. Not seeing anything to support it :(
<icey-travel> rick_h_: no worries, it seemed like a risky thing to add anyways
<vmorris> question: with maas, once the juju controller is bootstrapped, does the controller perform power up and down of machines directly or does it still use maas to perform this function?
<icey-travel> vmorris: juju uses maas to handle that
<hackedbellini> Hey guys! If I want to upgrade a machine from trusty to xenial, will juju/lxd realize that the machine has changed series? And after that, can I force the charm deployed in it to change series to xenial?
<vmorris> okay, that's what i thought.. can't understand why juju would be happy to power on the controller machine and get it up, but trying to add a machine to juju then doesn't seem to work
<lazyPower> hackedbellini negative
<lazyPower> hackedbellini - series boundaries are charm-boundaries as well unless the charm is multi-series, and in that case, i dont know but i doubt that will work...
<hackedbellini> lazyPower: negative for both questions
<hackedbellini> ?
<lazyPower> i'm positive the juju agent will get confused
<lazyPower> i'm less clear about how lxd will handle it
<hackedbellini> lazyPower: hrm, I see. So what happens if I want to upgrade the container from trusty to xenial? I should avoid that and instead create a new machine and migrate stuff to it?
<lazyPower> correct, the recommended way is a re-deploy and data migration
<lazyPower> less snowflakes, more cattle
<hackedbellini> lazyPower: ok, thanks for the info! :)
<jcastro> rick_h_: what do you mean "switch to the controller"
<jcastro> like, the admin model?
<rick_h_> jcastro: juju switch controller
<jcastro> oh
<jcastro> that machine is running fine
<jcastro> oh I see what you mean
<jcastro> now debug-log works, <3
<jcastro> http://paste.ubuntu.com/23379383/
<rick_h_> jcastro: ? what are you doing?
<jcastro> trying launch canonical-kubernetes in us-east-2
<magicaltrout> ...hacking
<rick_h_> jcastro: hmm, do you have a VPC aws account?
<rick_h_> jcastro: oh hmmm....sec
<jcastro> yes, at least I think I do
<jcastro> I wonder if there's anything extra I'm supposed to do to use a new region?
<bdx> jcastro: oh lordy
<rick_h_> jcastro: yea, looking, there was a branch for the instance types stuff...I'm trying to see if that's the issue
<bdx> rick_h_, jcastro: https://bugs.launchpad.net/juju/+bug/1636551
<mup> Bug #1636551: instance-type not being recognized <juju:New> <https://launchpad.net/bugs/1636551>
<rick_h_> bdx: is that the new instance type?
<rick_h_> bdx: e.g. last weekish they added some new ones around that haven't they?
<rick_h_> jcastro: bdx I wonder if this is going to effect it: https://github.com/juju/juju/commit/ffc98ca2a4d12e46c969efc2b03aca6c8568243b
<magicaltrout> there was a call to do an update-clouds for AWS stuff, but I thought that was region not instance type
<bdx> rick_h_: I'm just trying to deploy a m4.2xlarge in us-east-1
<rick_h_> bdx: right, but didn't aws create new instances there lately?
<bdx> rick_h_: oh shoot, this might be user error -> https://bugs.launchpad.net/juju/+bug/1636307/comments/10
<mup> Bug #1636307: cannot deploy to network space <juju:Incomplete> <https://launchpad.net/bugs/1636307>
<bdx> I was previously having an issue with juju not recognizing multiple constraints passed under a single constraints flag, so I've been using another '--constraint' flag for each constraint
<bdx> rick_h_: dimitern's fix resolved the bugs I've been experiencing
<rick_h_> bdx: <3
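The workaround bdx describes (a separate flag per constraint) shouldn't be needed with the fix in place: juju takes multiple constraints as a single quoted, space-separated string. This sketch only prints the resulting command; the application name "myapp" and the values are made up for illustration:

```shell
# Several constraints under one --constraints flag: a quoted,
# space-separated key=value list. App name and values are illustrative.
constraints="arch=amd64 mem=8G cores=4"
echo juju deploy myapp --constraints "$constraints"
```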
<vmorris> yeah this doesn't look good...
<vmorris> machine-0: 11:45:19 ERROR juju.rpc error writing response: write tcp 127.0.0.1:17070->127.0.0.1:42472: write: broken pipe
<vmorris> http://paste.ubuntu.com/23379436/
<vmorris> this is following a clean bootstrap, no other actions performed
<vmorris> ah i'm onto something i think
<vmorris> my juju controller has the first interface address for the maas-controller in its DNS resolution
<vmorris> this is not a reachable address... i can control this in the maas configuration right?
<jcastro> rick_h_: works fine on us-east-1 btw
<rick_h_> jcastro: yea, will have to talk to axw about that. Kind of defeats the purpose of update-clouds and the instance info
<Prabakaran> Hello Team, Getting Bootstrap error Please advise me on this error link http://pastebin.ubuntu.com/23379728/
<jcastro> rick_h_: does anything launch for you in us-east-2?
<jcastro> or anyone for that matter?
<jcastro> Prabakaran: can your system launch lxd containers already or is this a brand new setup?
<Prabakaran> jcastro: after beta release of juju i am installing it againg
<Prabakaran> juju beta was working .. i was able to create lxds
<jcastro> I would just confirm that your system can launch lxd containers, just to ensure that that works
<Prabakaran> jcastro: it works
<rick_h_> jcastro: otp, will test in a bit.
<deanman> Hi kwmonroe, still trying to sort my proxy issue and why i can't deploy a charm. Does the following give any hint to you? It seems that it cannot download an image but during bootstrapping it did manage to download the same image and boot an LXD.
<deanman> http://paste.ubuntu.com/23379874/
<kwmonroe> deanman: does 'sudo lxc list' show running container(s)?
<deanman> kwmonroe: It does, the controller LXD in running state
<deanman> kwmonroe: http://pastebin.com/Un0vXqni
<deanman> Do i have to explicitly also configure LXC proxy ?
<kwmonroe> i don't think so deanman... at least, i didn't.  i checked my machine-0 logs and i don't see anything like "finding agent binaries...".  i'll re-bootstrap and see if i can see anything related to image fetching
<deanman> kwmonroe: This small excerpt is from bootstraping operation. http://paste.ubuntu.com/23379981/ As you can see it can find the image just fine. Only when deploying charm it complains about not finding it.
<deanman> kwmonroe: Thank you for your time and support
<kwmonroe> np deanman -- it's really bizarre that you clearly found 'ubuntu-xenial' for the controller.  doesn't make sense why it wouldn't find it again for a subsequent unit.
<kwmonroe> oh hey.. deanman, my re-bootstrap says this:
<kwmonroe> 17:50:46 INFO  juju.environs.tools tools.go:101 finding agent binaries in stream "released"
<deanman> kwmonroe: judging from the log format, most probably it isn't the same code?
<kwmonroe> why does yours say 'devel'?
<deanman> kwmonroe: noticed that as well, but i'm simply using an upgraded 16.04 and followed instructions on docs.
<deanman> kwmonroe: How can i change to devel branch ?
<kwmonroe> rick_h_: do you happen to know where 'deve'
<kwmonroe> shoot... rick_h_, do you happen to know where 'devel' comes from in line 2 of deanman's paste?  http://paste.ubuntu.com/23379874/
<rick_h_> kwmonroe: looking
<rick_h_> kwmonroe: bootstrapping with a dev release?
<rick_h_> kwmonroe: like the dev PPA or maybe from source
<kwmonroe> deanman: what does 'juju version' say?
<deanman> 2.0.0-xenial-amd64
<deanman> I did use sudo add-apt-repository ppa:juju/stable, but used ansible to provision that. It shouldn't make a difference unless ansible screwed up??
<deanman> if it is of any help I'm using vagrant/ansible with the bento/ubuntu-16.04 image and then i perform an apt upgrade.
<kwmonroe> deanman: 2.0.0-x-y should be the GA version.. i'm pretty sure my juju-2.0 comes from ppa:juju/stable too.  what does 'grep stream ~/.local/share/juju/bootstrap-config.yaml' show?
<kwmonroe> i see this, i bet you see 'devel':
<kwmonroe> $ grep stream ~/.local/share/juju/bootstrap-config.yaml
<kwmonroe>       agent-stream: released
<kwmonroe>       image-stream: released
<kwmonroe> actually deanman, this is probably a better way to get that info:
<kwmonroe> $ juju model-config | grep stream
<kwmonroe> agent-stream                default  released
<kwmonroe> image-stream                default  released
<deanman> http://pastebin.com/VGsZvMcp
<deanman> your bet was lost :-)
<kwmonroe> shoot
<deanman> i see the same output as yours when using the latter command
<Prabakaran> still i am getting the same bootstrap issue. Could somebody help me on this pastebin.ubuntu.com/23379728/
<deanman> I do have a ppa_juju_stable_xenial.list with "deb http://ppa.launchpad.net/juju/stable/ubuntu xenial main" on my sources.list.d
<kwmonroe> yeah deanman, i think you've got the right juju.  otherwise, 'version' would have said 2.x-beta or something not '2.0.0'
<kwmonroe> deanman: and you're sure 'juju model-config' shows your proxy stuff set, right?
<deanman> kwmonroe: http://pastebin.com/YM9GHtrC
<deanman> hmm
<deanman> even output of $juju model-config -m default has controller in front of the proxy settings compared to yours where it has model http://pastebin.com/9eczDGTu
<deanman> kwmonroe: noticed that ? mine has controller in front and your pastebin has model instead.
<kwmonroe> deanman: i hadn't noticed that, but more concerning is that your agent-version is 2.0.0.1.  now i'm back to wondering where your juju came from :)
<deanman> apt-cache show juju -> http://pastebin.com/T1dd3f71
<deanman> answers your question!
<kwmonroe> deanman: what's the output from apt-cache show juju-2.0?
<deanman> http://pastebin.com/y7DJcM7m
<kwmonroe> welp deanman, i'm stumped.  i dunno why our agent versions don't match, nor why this shows a devel stream: http://paste.ubuntu.com/23379874/.  i don't even know if those are the cause of the un-deployable charms anyway :/
<deanman> at least i could try revert to stable and start from there again
<kwmonroe> deanman: you could try 'sudo apt remove juju-2.0 juju-core --purge' and start over... or perhaps someone in the #juju-dev room will have other ideas.. and there's always the file-a-bug option ;)  -- https://bugs.launchpad.net/juju/+filebug
<deanman> well i started already with a fresh vm and added stable ppa and doing apt install of juju zfsutils-linux
<kwmonroe> i bet that works.  i'm good at betting.
<deanman> kwmonroe: haha, ok let's see, if apt-cache reports again devel then i have to talk to the maintainer i guess ?
<deanman> yours say stable on apt-cache right?
<kwmonroe> alright deanman, so i think you'll be back in business if you bootstrap with --config <proxies> and *NOT* development=true
<kwmonroe> and remember, if you want the proxies to propogate to future models, use --config <proxies> and --model-default <proxies> for http_ https_ and no_proxy
<deanman> kwmonroe: Double checking here: juju bootstrap localhost lxd-test --config transmit-vendor-metrics=false --config http-proxy=$http_proxy --config https-proxy=$http_proxy --config no-proxy=$no_proxy --model-default http-proxy=$http_proxy --model-default https-proxy=$http_proxy --model-default no-proxy=$no_proxy --debug
<kwmonroe> bingo deanman
<kwmonroe> i bet that works
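deanman's long bootstrap line above, composed programmatically for readability. The proxy values here are placeholders; the pattern is the one kwmonroe recommends (--config for the controller and default model, --model-default for models created later):

```shell
# Build deanman's bootstrap command: each proxy key is passed both as
# --config (controller + default model) and --model-default (future models).
# The proxy URL and no_proxy list below are placeholder values.
http_proxy="http://squid.internal:3128"
no_proxy="localhost,127.0.0.1,10.0.3.1"
cmd="juju bootstrap localhost lxd-test"
for key in http-proxy https-proxy; do
  cmd="$cmd --config $key=$http_proxy --model-default $key=$http_proxy"
done
cmd="$cmd --config no-proxy=$no_proxy --model-default no-proxy=$no_proxy --debug"
echo "$cmd"
```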
<deanman> well not betting on it, there are still a couple issues which are not related to --development ? the "controller" in front of proxy entries on model-config -m default
<magicaltrout> don't bet on any advice from kwmonroe
<magicaltrout> i have first hand experience of that!
<kwmonroe> :)
<kwmonroe> deanman: i wouldn't worry about the controller vs model bit.  my model-config output was from a test where i did *not* bootstrap with --model-default values
<kwmonroe> when you include those, the proxy prefix switches to controller (unless overridden explicitly with juju add-model --config <proxies>)
<kwmonroe> deanman: the bit that i'm not clear on is why there are no devel images.  tbh, i don't know the glue between streams and agent versions, so i dunno if that's a bug or not.
<deanman> ok i'm testing, let's hope that this is it :-)
<kwmonroe> if nothing else, at least you got to run a lot of apt commands today.  that's always neat.
<magicaltrout> lol
<deanman> learned how to pastebinit too, i can safely put it on my resume i guess
<deanman> ;-)
<magicaltrout> thats a quality tool everyone should know
<deanman> magicaltrout: mine was broken though, adding even more to my overall agony
<magicaltrout> lol
<magicaltrout> not good
<deanman> hehe
<magicaltrout> on a completely random topic..... I never realised Global Entry was only $100... i'm sorely tempted to sign up
<kwmonroe> haha magicaltrout.  as if you'd pass the background check.  we know where #brexit started.
<magicaltrout> this is true
<magicaltrout> never said i'd get accepted :P
<deanman> Last trip i was pulled over when i answered that i did have a croissant in my bag, could Global Entry have saved me the embarrassment?
<magicaltrout> not a croissant!
<bdx> cmars, mbruzek, lazyPower: concerning an all-encompassing ssl/tls layer -> https://gist.github.com/jamesbeedy/c94cd6e8c7cb4246818aeff7b9adf5ad
<mbruzek> bdx: that looks pretty incomplete. I left some comments with code of what I was thinking.
<bdx> mbruzek: yea ...just trying to get a solid idea/base direction
<bdx> thx
<mbruzek> bdx yeah I am willing to change the current tls relation to accommodate the other types
<mbruzek> I am not familiar with lets encrypt.
<bdx> mbruzek: I've been trying to make use of LE more .... the primary drawback is that LE runs a verification on your fqdn<->public IP
<mbruzek> oh
<bdx> mbruzek: making its use in private network space .... ehh
<mbruzek> So you can't just "tell" LE what your ip-address, and fqdn is? It verifies that?
<bdx> mbruzek: yea
<lazyPower> bdx - welcome to what i've been saying ;) LE is great for freely encrypting public facing sites that wouldn't otherwise have been tls wrapped
<lazyPower> however its not a silver bullet, so the direction you're taking in that sketch is the right path i feel
<mbruzek> bdx: interesting. so what if LE could not contact the system? say it was behind a firewall ?
<lazyPower> but as mbruzek said its incomplete, and there's other concerns that aren't represented there.
<bdx> mbruzek: it fails hard and loud
<mbruzek> bdx OK
<lazyPower> such as this is tied to easyrsa/le layers, its not an abstracted front end to swap out any backend.
<mbruzek> bdx I have my work plate full at the moment but I could take a look at this with you some time
<lazyPower> bdx - for example i just learned that gandi (the dns registrar) has a tls key cli tool
<lazyPower> which allows for on-site requesting of a tls certificate
<bdx> mbruzek: awesome
<bdx> wow, thats awesome ... even more reason to make the tls/ssl portion lean towards pluggability/extensiblity
<lazyPower> sorry :) i feel like i kind of inserted myself in this convo
<lazyPower> but i've been having shower thoughts about our tls story and how to make that more robust
<bdx> mbruzek, lazyPower: I'm trying to accomodate the use case here
<mbruzek> bdx: some people have told me to check out cfssl
<lazyPower> we're missing 2 critical components. 1) rekey the infrastructure, 2) revoke certificates.
<bdx> we want to deploy ssl/tls all the time
<bdx> so I feel the layer has to have some kind of standardization
<bdx> like, include this layer and you get the best ssl/tls depending on what your environment allows right
<lazyPower> it seems like it should be an optional relation, and simply stated, if it has the relation, it should use whatever the tls abstraction provides.
<bdx> exactly
<lazyPower> and thats dependent on what its related to
<bdx> yes, I see
<lazyPower> so consider the following
<lazyPower> instead of having a one-size-fits-all CA charm
<lazyPower> we have flavors of CA: easyrsa, letsencrypt, and we can colocate these in lxd. they dont need to run APIs or anything fancy. The fact juju proxies those requests and executes locally is kind of nice compared to, say, running CFSSL internally as an API that you then have to secure and harden
<lazyPower> bdx in the case of lets encrypt, there's still the routability issue when its colocated in lxd...
<lazyPower> theres a few strategies to combat that, such as using a frontend like traefik which knows LE and can conscript certificates as a reverse proxy for domains that dont have TLS but declare they want it.
<lazyPower> or kube-lego
<lazyPower> or any of the other LE wrappers, but i'm not certain how well they perform under load and so forth. so i still have a ton of discovery to do there. if you've got any experience with any of them it would be great to riff on that
<bdx> lazyPower: if I'm picking up what you are laying down, "just have these different key/crt providers laying around in my environment, and write my charms to take advantage of whichever" ?
<bdx> if so, done
<lazyPower> essentially, yeah
<bdx> lazyPower: I think the layer abstraction you mentioned is important though, and is what I'm trying to get at here
<lazyPower> bdx - yeah, i agree, it needs to be a standard set of relations that every CA provider implements
<lazyPower> and its the way we funnel behavior to an end result that we care about
<lazyPower> eg: Just give me keys and get the foo out of the way
<lazyPower> at the end of the day, thats all we really care about... that we have certificates that came from the intended source. If thats an internal EasyRSA provider, or an external CA like LE, or the new thing that hasn't been popularized yet.
<bdx> totally, because otherwise it turns into piles of jimcrackery in every charm ..... if the layer negated this it would be great, bc then you could just include the layer and get tls the best way no matter what platform/provider/routing
<lazyPower> yep
<mbruzek> agree
<lazyPower> i think mbruzek has some solid starts there with what he did in the new easyrsa charm
<lazyPower> and we could crib from that to set that standard, and then start making our way through the other layers and folding in some kind of "conformance test" to use the kubernetes term
<bdx> agreed
<lazyPower> ok, end of lazy showerthought, i hope this was as helpful to you as it has been to me while trying to demystify TLS
<bdx> extremely
<bdx> mbruzek, lazyPower: thx
<lazyPower> <3
<bdx> my outline captures a view of the target use case though, right?
<bdx> if user specifies it then obviously use it
<lazyPower> bdx yeah, its showing an approach
<lazyPower> i think whats more valuable is the intent, that it shouldn't care where the cert data is coming from
<lazyPower> it should just return cert data
<lazyPower> or block until it has some
<bdx> else, if they want tls option A-Z (LE), then opt for that secondly
<bdx> as a last resort, get an internally signed cert or you can't continue
<bdx> right
#juju 2016-10-26
<anrah> hello! Is there a way to deploy bundle using local repository on Juju 2.0
<anrah> apparently juju-deployer does not work with Juju 2.0
<zeestrat> anrah: Do you mean local bundle, local charms in the bundle or both?
<zeestrat> Juju 2.0 just uses "juju deploy" now for everything
<zeestrat> Both bundles and charms
<anrah> both
<anrah> i have set JUJU_REPOSITORY env-variable and trying to deploy my bundle
<anrah> ERROR cannot deploy bundle: cannot resolve URL "local:xenial/my-charm-0": unknown schema for charm URL "local:xenial/my-charm-0"
<zeestrat> I haven't used the env-variables a lot, however when I point to local charms in the bundle I use "charm: /dir/to/charm"
<zeestrat> Then when deploying a local bundle, "juju deploy ./name-of-bundle.yaml"
<anrah> right, I'll try that
<anrah> As in 1.25 juju-deployer was way to go after setting JUJU_REPOSITORY env-variable to tell juju where the charms can be found
<zeestrat> Gotcha.
<kjackal_> Good morning Juju world
<anrah> zeestrat: thanks! that works
<zeestrat> anrah: Glad to hear. I see that there's no example of a bundle using local charms in https://jujucharms.com/docs/stable/charms-bundles so I'll put in a bug.
<anrah> yep, I don't know whether that local:xenial would be better approach than setting the path directly
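zeestrat's approach can be sketched end to end: reference the local charm by filesystem path inside the bundle, then deploy the bundle file directly. The charm name and path are illustrative, and the top-level "services:" key is the bundle syntax of the juju 2.0 era:

```shell
# Write a minimal bundle that points at a local charm by path, then
# deploy the bundle file itself (no charm store, no JUJU_REPOSITORY).
cat > ./my-bundle.yaml <<'EOF'
services:
  my-charm:
    charm: /home/ubuntu/charms/xenial/my-charm
    num_units: 1
EOF
grep -q 'charm: /home/ubuntu/charms/xenial/my-charm' ./my-bundle.yaml && echo "bundle written"
# then: juju deploy ./my-bundle.yaml
```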
<vmorris> good day all
<vmorris> i've got a bunch of maas machines here, and am currently trying to deploy openstack charms to them, but i'm running into a DNS problem (LXD containers will not start also, with similar symptoms)
<vmorris> here's a glance error, for example: ERROR glance DBError: (pymysql.err.InternalError) (1130, u"Host 'u33-maas-machine-2.maas' is not allowed to connect to this MySQL server")
<vmorris> cinder is seeing similar issues... any idea what's going on?
<vibol> hello ?
<vmorris> hi
<vibol> i just came to ask: what happens if the juju controller node fails? ... can we install a new controller and connect it to an existing bootstrap node?
<vibol> Ohh i'm sorry..
<vibol> Is it rude to discuss a Juju problem here?
<rick_h_> vibol: so we suggest you run the controller in HA mode so that you have more than one
<rick_h_> vibol: there's a new feature in development that's partially implemented in 2.0 and coming in 2.1 that allows migration of a controller
<rick_h_> vibol: but you still need a controller active to perform the migration
<rick_h_> vibol: our next step is to allow you to dump your controller in the migration format, so that you could, in theory, import into a fresh controller somewhere like I think you're looking for, but it's not currently available.
<rick_h_> vibol: there is a backup/restore function that uses a db dump method right now
<vibol> correct me if i'm wrong: does "juju ensure-ha" increase availability on the bootstrap node only, not the controller itself?
<rick_h_> vibol: so the bootstrap node is just the first node of the controller
<rick_h_> the API server/database store
<rick_h_> vibol: enable-ha will add additional api endpoints and replicate the db across the controller nodes
<vmorris> rick_h_ I pretty much sorted that problem out from yesterday.. got a new one today :/
<rick_h_> vmorris: doh!
<vmorris> https://paste.ubuntu.com/23383695/
<rick_h_> vmorris: heading into the team standup but will look in a sec
<vibol> thank you @rick_h
<vmorris> thanks
<vibol> Does the juju HA function work well in a manual environment?
<vmorris> rick_h_: for when you get back .. here's a longer trace
<vmorris> https://paste.ubuntu.com/23383729/
<vibol> vmorris: are you using juju in production?
<vmorris> vibol: it's a large test organization
<vibol> are you deploying with maas?
<vmorris> yes
<vibol> do you run openstack as well in your testing?
<vmorris> yes, that's my team's primary focus
<vibol> ohh.. ! :)
<vibol> I'm interested in the Ubuntu platform, including maas, juju, and ubuntu openstack
<vmorris> rick_h_: just some more information: https://paste.ubuntu.com/23383751/
<vibol> but it seems i'm low on budget to test an environment
<vibol> and a virtual environment doesn't work really well for that
<vmorris> i'm using kvm guests in maas.. it's a chore to setup
<vibol> I'm sorry vmorris, but did you run both juju and maas in an HA environment?
<vmorris> vibol: no, it's not ha
<rick_h_> vmorris: hmm, so this is the charms/relations not playing nice?
<vmorris> rick_h_: yes, it seems the charms are picking up the hostname from the machine they're running on, but aren't able to resolve it against the DNS server in MAAS
<rick_h_> vmorris: oic are these in containers or just the root maas nodes?
<vmorris> also, i'm unable to start any LXD containers, for a similar reason i think
<vmorris> these are in the root node, i'd put them in containers if i could
<rick_h_> vmorris: yea, there's a bug in that which is fixed in the upcoming 2.0.1.
<rick_h_> vmorris: we're looking to get that released this week and I'd be curious if that helps you or not
<vmorris> this is juju 2.0.1?
<rick_h_> vmorris: yes
<rick_h_> vmorris: we had a container dns issue with 2.0.0 that we quickly updated and will have in the release this week for 2.0.1 that makes me wonder if this is part of that
<vmorris> okay rick_h_, i'll watch for it
<vmorris> thanks :)
<rick_h_> vmorris: I'm trying to find the issue/fix to see if there's a work around to unblock you here sec
<spunge> Hi guys, is there somebody here that could help me out for a sec? I'm running into a issue with juju bootstrap..
<magicaltrout> go spunge go!
<spunge> im trying to bootstrap juju locally in a lxd container
<vmorris> rick_h_: symptoms here https://bugs.launchpad.net/juju-core/+bug/1632909
<mup> Bug #1632909: ceilometer-api fails to start on xenial-newton (maas + lxd) <maas-provider> <uosci> <OpenStack AODH Charm:New> <juju:Triaged> <juju-core:Triaged> <ceilometer (Juju Charms Collection):New> <https://launchpad.net/bugs/1632909>
<vmorris> but don't see any work around
<spunge> i'm not using the default lxd bridge
<vmorris> ah, but that's juju 1.25, oops
<spunge> bridge ip is 192.168.122.230
<spunge> ERROR cmd supercommand.go:458 new environ: creating LXD client: Get https://192.168.122.1:8443/1.0: Unable to connect to: 192.168.122.1:8443
<spunge> but juju is setting up a client for 192.168.122.1
<spunge> how do i point it at the correct ip?
<spunge> juju 2.0.0-xenial-amd64
<rick_h_> spunge: https://bugs.launchpad.net/juju/+bug/1634744
<mup> Bug #1634744:  bootstrap fails with LXD provider when not using lxdbr0 <bootstrap> <lxd-provider> <juju:Confirmed> <https://launchpad.net/bugs/1634744>
<magicaltrout> they called him rick_h_ they keeper of the launchpad links.... rick hhhhhhh!
<rick_h_> vmorris: I think it might be https://bugs.launchpad.net/juju/+bug/1616098
<mup> Bug #1616098: Juju 2.0 uses random IP for 'PUBLIC-ADDRESS' with MAAS 2.0 <4010> <cpec> <juju:Fix Committed by dimitern> <https://launchpad.net/bugs/1616098>
<spunge> Damn, how could i have missed that page :/
<spunge> thanks so much!
<rick_h_> spunge: np, sorry you're hitting the issue. It's something we need to get working better.
 * rick_h_ is an expert in the ways of breaking juju unfortunately :P
<rick_h_> well, at least in how users create new ways of using things that don't quite work out yet. :)
<magicaltrout> it's the laws of programming isn't it. You write a piece of code that you expect to get used in way X, and the user, regardless of knowledge, will always use way Y
<magicaltrout> it's just the rules
<rick_h_> we're good at following rules!
<vibol> I thought everybody came to maas and juju because of openstack
<vmorris> rick_h_: i'm not sure that's the same.. I'm getting the IP addresses in the MAAS DNS set properly, but the openstack charms are passing their /etc/hosts (eg. 127.0.0.1) to the mysql relation
<vmorris> guessing here, but mysql then sets the permit for 127.0.0.1, and then denies connection from hostname
<deanman> rick_h_: Is this container DNS bug also affecting proxy in some way ?
<rick_h_> deanman: not that I'm aware of.
<deanman> kwmonroe: Unfortunately skipping --development didn't solve my proxy problem. I did see the proper agent version though in the logs compared to previous runs. :-(
<deanman> I simply cannot understand why it would not be able to check/download image and boot a new LXD http://paste.ubuntu.com/23382970/
<kjackal_> kwmonroe: cory_fu I am almost done for today. Let me know what is there to turn green for tomorrow
<kwmonroe> kjackal_: i think we'll be in good shape after the ganglia/rsyslog bits
<kwmonroe> but i'll let you know at my eod what's left
<kwmonroe> deanman: bummer!!  is there anything in /var/log/lxd/lxd.log that's related to image fetching?  also, can you curl this from you host machine?  https://streams.canonical.com/juju/images/releases/streams/v1/index.json
<deanman> kwmonroe: guest machine running bootstrap command or inside the controller running on LXD ?
<deanman> kwmonroe: because it seems the first time the guest is able to download the image, but subsequent image retrieval operations are initiated inside the LXD controller.
<deanman> this is from guest VM ->http://pastebin.com/vRJ0kX1s and this is from inside LXD controller http://pastebin.com/yuhH2wdr
<deanman> kwmonroe: this is lxd.log from guest VM -> http://pastebin.com/F1BeksJY and this is the one from inside LXD controller http://pastebin.com/nRF08QXn
<kwmonroe> ok deanman, so we're back to super weird.  i have no idea why there would be any difference between a controller container and a charm container :/
<kwmonroe> deanman: try this:  juju deploy -m controller cs:xenial/ubuntu
<kwmonroe> maybe ^^ that'll tell us if it's a model issue, or a generic unit-spawning issue
<deanman> sorry please disregard last logs, it was from a fresh environment where i was trying a different proxy but haven't issued yet a deploy
<deanman> kwmonroe: this is lxd.log from LXD controller http://pastebin.com/Jx9jfU8f
<deanman> let me remove unit deploy and re-run it with your command
<deanman> same behavior: it will try to download images at least 3 times, then decides the state is down and prints this: "machine-0: 15:44:04 ERROR juju.provisioner cannot start instance for machine "2": image not imported!"
<deanman> brb
<bdx> https://www.elastic.co/v5
<bdx> its all GA now
<rick_h_> bdx: <3 congrats!
<bdx> haha
<bdx> lazyPower: ^
<bdx> they were chasing you guys
<rick_h_> bdx: hah, sorry, thought this was your thing you were working on
<bdx> it's about to be ... devs over here are ambitious to have the new v5 elasticsearch
<marcoceppi> bdx: we've been re-writing the elasticsearch charm, I can share our initial stuff later this week if you want to help get some testing in
<bdx> marcoceppi: that would be awesome
<vmorris> bdx: it's about time ... been waiting on that release for awhile :D
<lazyPower> bdx ;) yuuuuuup
<kwmonroe> hey cory_fu, i'm almost done pushing temp xenial bits, but have a make test error on rsyslog:  http://paste.ubuntu.com/23384895/
<kwmonroe> cory_fu: can i mock open differently to get this passing?  here's where the trouble starts:  http://bazaar.launchpad.net/~bigdata-dev/charms/xenial/rsyslog/trunk/view/head:/unit_tests/test_hooks.py#L143
<cory_fu> kwmonroe: Hrm.  So, in Python3, f.read() should return a byte string (b'').  I don't see where the test is defining what is returned by the mock open's result's read function, though
<cory_fu> kwmonroe: Try changing the mock_open line to mock_open(read_data=b'foo')
<kwmonroe> cory_fu: you are the wind beneath my wings.  that worked.. rev'd bundle is on the way.
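For anyone hitting the same test failure: cory_fu's fix works because in Python 3 a file opened in binary mode returns bytes from read(), so the mock must supply bytes too. A minimal, self-contained sketch (the `read_config` helper and path are hypothetical stand-ins for the rsyslog hook code):

```python
# Demonstrates mock_open(read_data=b'foo'): binary-mode reads must be
# mocked with a bytes payload in Python 3, not a str.
from unittest.mock import mock_open, patch

def read_config(path):
    # hypothetical helper standing in for the charm's file-reading code
    with open(path, 'rb') as f:
        return f.read()

m = mock_open(read_data=b'foo')       # bytes, matching binary mode
with patch('builtins.open', m):
    data = read_config('/etc/rsyslog.conf')

assert data == b'foo'                 # read() yields the bytes we supplied
assert isinstance(data, bytes)
```

Had `read_data='foo'` (a str) been used, code that compares or concatenates bytes would fail under Python 3, which is the class of error in the pasted traceback.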
<lucacome> hi, is there a function that watches for new file attached to a charm?
<lazyPower> lucacome - in terms of juju resources?
<lucacome> yes
<lazyPower> lucacome - if so, when you `juju attach` a new resource, it runs upgrade-charm and config-changed
<lazyPower> just like you issued `juju upgrade-charm`
<vmorris> trying to deploy a charm into lxd where the model has apt-*-proxy and *-proxy set to a custom squid host (ipv4), and seeing something odd...
<lazyPower> lucacome if you're looking for examples, we've got some extensive work in the kubernetes charms to handle resource management.
<lucacome> uh...didn't know that
<lazyPower> lucacome - np, let me fish up a link 1 sec
<lucacome> sure
<lucacome> thanks
<lazyPower> lucacome - https://github.com/juju-solutions/kubernetes/blob/master-node-split/cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py#L41
<vmorris> juju.downloader is trying to connect to the api server over ipv6 (this would be okay) but it's going out and hitting my proxy server (not okay)
<lazyPower> lucacome - and we handle new resources here: https://github.com/juju-solutions/kubernetes/blob/master-node-split/cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py#L20
<lazyPower> vmorris - you'll likely need to set the NO_PROXY model config
<vmorris> oh hmm
<lucacome> thanks lazyPower
<lazyPower> lucacome - no problem :) Happy to help. Let me know how you get along with resources. My team was an early adopter of the feature so we've got your back if you hit a snag. find either myself or mbruzek  or ping the list if we aren't around.
<mbruzek> lucacome: hello.
<lucacome> hello
<mbruzek> lucacome: The TL;DR on resources is we always want the charm to work. So if for some reason the resource is not available the charm should install the apt package (if available) or use status-set to inform the Juju operator what resource is needed.
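mbruzek's fallback policy can be sketched as plain Python. This is a conceptual model, not real charm code: the three callables are hypothetical stand-ins for `resource_get`, an apt install, and `status_set` from charmhelpers, injected so the policy itself is testable.

```python
# Sketch of the "charm should always work" resource fallback:
# prefer an attached resource, fall back to an apt package,
# otherwise surface a blocked status telling the operator what to attach.
def acquire_software(fetch_resource, install_apt_package, set_status):
    """Return (source, detail) describing how the software was obtained."""
    path = fetch_resource()            # e.g. hookenv.resource_get('binary')
    if path:
        return ('resource', path)
    if install_apt_package():          # e.g. apt install of a packaged version
        return ('apt', None)
    set_status('blocked', 'resource required: attach it with `juju attach`')
    return ('blocked', None)

# Simulated hook environment: no resource attached, but apt succeeds.
outcome = acquire_software(
    fetch_resource=lambda: None,
    install_apt_package=lambda: True,
    set_status=lambda state, msg: None,
)
assert outcome == ('apt', None)
```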
<vmorris> lazyPower: i'm afraid that didn't work, would i need to redeploy the machine to pick up the model config change?
<lazyPower> vmorris i think thats the case, but i'm not certain.
<lazyPower> kwmonroe - do you know if proxy model-config options are changed post-deployment if they are then registered on the machine?
<mbruzek> vmorris: lazyPower: I do think the model-config options are set at deploy time.
<lazyPower> mbruzek - i thought that was the case. Thanks for confirming.
<vmorris> okay, i added a machine following the change to the model config, having set no-proxy for jujucharms.com, but this doesn't seem to help
<vmorris> https://paste.ubuntu.com/23385323/
<vmorris> the ipv6 address that it's trying to connect to is the lxd host machine, but i'm seeing the http request hit my external proxy server :/
<vmorris> i do have the apt-*-proxy and http|https|ftp-proxy options set to that external squid
<vmorris> actually, the ipv6 address that the LXD container is trying to connect to get to the API server is the v6 address on one of the juju-controller interfaces, so i guess that's fine... and here's the real strange thing - that address was already automatically in the no-proxy environment variable on the container, so why's it hitting my external squid proxy?
<lucacome> I don't understand why my @when_not('packet.installed') handler is triggered 5 times
<lucacome> I think I'm missing something...
<vmorris> so maybe i'm seeing a regression of this issue https://bugs.launchpad.net/juju/+bug/1556207
<mup> Bug #1556207: 1.25.4: Units attempt to go through the proxy to download charm from state server <landscape> <juju:Fix Released by dooferlad> <juju-core:Fix Released by dooferlad> <juju-core 1.25:Fix Released by dooferlad> <https://launchpad.net/bugs/1556207>
<lazyPower> lucacome - a pastebin or link to your code would be helpful
<lazyPower> lucacome - that + the output of a "charms.reactive get_states" being run on that unit
<vmorris> could it be that juju-downloader worker isn't honoring the no_proxy environment variable when it's ipv6?
<cholcombe> seems my wheelhouse has some prereqs that need to get installed before pip3 will work. Can I throw those in the apt dependencies and be assured they'll install before the wheelhouse?
<cholcombe> i'm going to skip wheelhouse and just use the apt layer instead
<lutostag> cholcombe: yes the apt dependencies are force installed before the wheelhouse is installed
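For cholcombe's case, the usual place for those build prereqs is the basic layer's apt package list in layer.yaml; those are installed before pip processes the wheelhouse. A sketch, assuming the standard layer-basic option names (the package names here are hypothetical examples):

```yaml
# layer.yaml (hypothetical charm layer)
includes: ['layer:basic', 'layer:apt']
options:
  basic:
    packages:            # apt packages installed before the wheelhouse
      - python3-dev
      - libffi-dev
```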
<cholcombe> lutostag, nice
#juju 2016-10-27
<vmorris> is there something in particular that i need to do to pass http proxy settings into LXD containers?
<junaidali> hey guys, can a peer relation only share private-address?
<junaidali> I'm trying to share some data across peers, but the relation data that i get is always the private address
<kjackal> Good morning Juju world!
<kjackal> junaidali: yes peer relations are just like the rest of the relations, you can share any kind of data
<kjackal> junaidali: You need to create an interface for that, let me find an example
<kjackal> junaidali: https://github.com/juju-solutions/interface-spark-quorum
<junaidali> Good morning kjackal, thanks.
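Conceptually, a peer relation is a per-unit key/value bag that every peer can read; private-address is just the key juju publishes by default. A toy model in plain Python (not the charms.reactive API, and the unit/key names are made up) of how arbitrary data flows between peers:

```python
# Toy model of peer relation data bags: each unit publishes its own
# settings dict; peers read everyone else's. Units can publish any keys,
# not just 'private-address'.
class PeerRelation:
    def __init__(self):
        self.bags = {}                       # unit name -> settings dict

    def set(self, unit, settings):           # like relation_set in a hook
        self.bags.setdefault(unit, {}).update(settings)

    def peers_of(self, unit):                # like iterating related units
        return {u: dict(bag) for u, bag in self.bags.items() if u != unit}

rel = PeerRelation()
rel.set('app/0', {'private-address': '10.0.0.1', 'zk-port': '2181'})
rel.set('app/1', {'private-address': '10.0.0.2', 'zk-port': '2181'})

seen = rel.peers_of('app/1')
assert seen == {'app/0': {'private-address': '10.0.0.1', 'zk-port': '2181'}}
```

In a real charm the interface layer (like the spark-quorum example kjackal linked) wraps exactly this pattern: setters publish keys on the peer relation, and handlers collect them from each remote unit.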
<vibol> Hello
<vibol> when we run juju enable-ha -n 3, will we be able to run juju commands against the second and third nodes like the first controller?
<vibol> i mean, will we be able to run juju status against all 3 state servers in ha mode?
<SimonKLB> how do you run the same charm with different configurations, since units share the same config?
<SimonKLB> for example two mysql databases with different ports?
<mgz> you name the services/applications differently
<mgz> `juju deploy cs:mysql dba && juju deploy cs:mysql dbb`
<SimonKLB> mgz: thanks
<SimonKLB> mgz: can you get some kind of unique id from an application?
<mgz> the service name is unique
<SimonKLB> mgz: not if you remove it and then start a new one with the same name right?
<mgz> well, how is that distinguishable from a newly deployed service? config isn't kept if you destroy a service
<SimonKLB> mgz: right, ill create the id myself, probably best not to rely on something the user can specify if you want to ensure uniqueness anyway :)
<vibol> in juju 2.0 do we need to bootstrap at least 3 machines to enable ha with juju enable-ha -n 3?
<mgz> you don't bootstrap three times. juju will take three machines if you want ha though.
<vibol> i'm in a manual environment; does ha work in a manual environment?
<mgz> yes. you'll need to add-machine enough suitable things before calling enable-ha
<vibol> i added 2 machines with juju add-machine ... and then ran juju enable-ha -n 3 --to 0,1 and got "machine 3 already a controller"
<vibol> when i run juju status i see my new machines added as 0, 1
<vibol> and 0, 1 are the newly added machines
<mgz> did you add-machine in the right model?
<vibol> "machine 0 already a controller", sorry*
<vibol> i followed the guideline from the docs for manual provisioning (juju add-machine ssh:xx.xx) and the machines show correctly in juju status
<vibol> i followed the doc for the 2.0 version
<mgz> and also, it's 'ensure-availability' in 2.0 so I hope that's the command you're using
<vibol> this command is in 1.25 ? because in 2.0 i don't see this command instead of enable-ha
<mgz> my bad, I got it backwards
<vibol> in 2.0 there is a separation between the agent list and the controller list, right?
<mgz> yeah, there are two models by default
<mgz> one for the controllers, and one empty and ready to use for workloads
<mgz> anyway, feel free to file a bug with your steps and what error message you get
<vibol> i'm not good at bug reports.. but will try
<SimonKLB> mgz: is it possible to have some kind of global configuration for every application deployment of a charm in the same config.yaml and then the specific options under the service namespace?
<mgz> SimonKLB: I am somewhat confused about what you're trying to accomplish
<SimonKLB> something like
<SimonKLB> ssl-enabled: True
<SimonKLB> service_1:
<SimonKLB>   port: 8080
<SimonKLB> service_2:
<SimonKLB>   port: 9090
<mgz> sure, but you split that
<mgz> the config that generally applies goes in the charm
<mgz> and the fact there is a port config option
<mgz> the ports for specific services goes in the bundle
<SimonKLB> hmm, this is two services of the same charm
<mgz> sure
<SimonKLB> im not using a bundle right now
<mgz> you have a bundle that specifies all your services, and can include specific config for them
<SimonKLB> so it's not possible to have global configurations without a bundle ?
<mgz> you can do everything via command line as well
<mgz> so, write a script that says deploy this service, set this config, and so on
<mgz> skin the cat however you want
<SimonKLB> yea thats what im looking to do right now, so in this example i deploy two services of the same charm, but i was thinking if it was possible to keep everything in the same config.yaml still
<SimonKLB> but to avoid repeating stuff in the config.yaml file, i was wondering if you could keep the general config options at the top?
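mgz's split (the option is defined once in the charm's config.yaml, the per-application values live in the bundle) might look roughly like this; the application and option names below are hypothetical:

```yaml
# bundle.yaml: two applications of the same charm with different config
applications:
  service-1:
    charm: cs:mysql
    options:
      port: 8080
  service-2:
    charm: cs:mysql
    options:
      port: 9090
```

Anything shared by all deployments (like a default for ssl-enabled) goes in the charm's config.yaml as a default value, so the bundle only needs to state what differs.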
<vibol> when enabling ha the other machines show as down and say lost connection
<vibol> but running juju ssh still works fine
<mgz> vibol: down means the agent isn't talk back, so check /var/log/juju and such
<vibol> the juju mongo connection fails @@
<vibol> it's a strange problem
<vibol> let's see...
<vibol> there are 2 nics currently available; the primary one is private and has a static ip.. but juju mongo tries to connect via the second nic instead, which i set to dhcp to get internet from my router
<vibol> juju mongo should connect through the primary nic... but i'm not sure
<vibol> it seems like a bug ... because i can reproduce this problem anytime
<mgz> vibol: please file one, I'm not sure how multiple nics and the manual provider are meant to play together
<vibol> thank you.. i'm sorry for making you confused
<mgz> but there may be additional prerequisites on network setup before you can add-machine
<vibol> juju seems to not work very well with the manual provider
<vibol> i can deploy a service as normal with my network setup for juju
<vibol> the only problem is deploying ha
<mgz> yeah, it's likely that the state server requirements are stricter than for a normal service
<mgz> as they need to do much more
<vibol> i'm sorry; before i file a bug...
<mgz> for instance, need to be on all networks that any other machine is on, and from your experience sounds like the local network setup also matters
<vibol> i want to deploy both maas and juju in ha
<vibol> and i want to deploy juju, and deploy maas using juju
<vibol> because the docs say it is possible to deploy maas using juju
<mgz> er... you probably don't want to try that
<vibol> -_- it really makes me confused
<vibol> but why? any problem?
<mgz> I'm aware of some very experimental maas charms
<vibol> ohh
<mgz> but the normal way is you just deploy maas, then can use juju for workloads on top
<vibol> good
<vibol> ...
<mgz> juju drives maas quite happily
<vibol> i thought deploying maas in ha by hand was very hard work
<vibol> but it's the same with juju manual provisioning -_-
<mgz> yeah, that is also true
<mgz> well, it's easy with homogeneous envs
<vibol> i guess i should start maas in deploy juju on top of maas and see how it work first
<vibol> and also, can we install landscape with ha without using juju, @mgz?
<mgz> vibol: I believe so, but you'll need to poke the landscape folks for that
<vmorris> i have a question about charms running in a private maas deployment. i have configured an http proxy in the model at bootstrap and things tend to come up fine so long as they are running inside lxd containers; however, whenever I try to deploy something directly to a maas machine, it tries to talk to the api server over ipv6. this normally wouldn't be a problem, but the requests are hitting my proxy server and are being rejected
<vmorris> trying to formulate a question.. lol. How about: is there a way to force the charm deployment to only communicate over ipv4?
<rick_h_> vmorris: juju allows using a constraint for deployment that might help there. You'd have to set up different spaces in maas for ipv4/6 subnets perhaps? and then get juju bootstrapped onto the ipv4 space and I would hope we'd use that to communicate vs the v6
<rick_h_> vmorris: if that doesn't work I'd suggest mailing the list and I'll get some of our networking folks to formulate a more intelligent response vs a guess
<lazyPower> rick_h_ - we used to have a prefer_ipv4  thing in the 1.24 era
<rick_h_> lazyPower: right, that went away
<lazyPower> rick_h_ - do you know if that got scrubbed? i'm not finding it in the docs anymore
<lazyPower> ok, thats what i suspected.
<lazyPower> thanks for confirming
<rick_h_> lazyPower: yea, caused some issues as much as it solved and we need a better way to do that right
 * lazyPower nods
<vmorris> rick_h_: hmm, okay. i've been able to keep ipv6 subnets from showing up in MAAS by disabling the v6 stack on the maas controller
<vmorris> but I keep ending up with messages like this in the juju debug log: ERROR juju.worker.dependency "uniter" manifold worker returned unexpected error: preparing operation "install cs:neutron-gateway-232": failed to download charm "cs:neutron-gateway-232" from API server: GET https://[fd55:faaf:e1ab:a4f:5054:ff:fea9:25aa]:17070/model/3a3a6a0e-e580-437c-8b25-211b188de393/charms?file=%2A&url=cs%3Aneutron-gateway-232: Get https://[fd55:faaf:e1ab:a4f:5054:ff:fea9:25aa]:17070/model/3a3a6a0e-e580-437c-8b25-211b188de393/charms?file=%2A&url=cs%3Aneutron-gateway-232: Forbidden
<vmorris> ew sorry
<vmorris> these requests are hitting my http proxy server and are getting denied there
<rick_h_> vmorris: hmm, but is something in the proxy path getting routed via ipv6? I mean if the ipv6 stack isn't on in maas not sure how juju would be initiating ipv6 traffic
<vmorris> rick_h_: that's another thing i don't understand, this is the sort of message I see in the squid access log:
<vmorris> 10.20.95.61 TCP_DENIED/403 3936 CONNECT [fd55:faaf:e1ab:a23:5054:ff:fe14:7dc2]:17070 - HIER_NONE/- text/htm
<vmorris> wouldn't this indicate that the request is coming in on ipv4?
<rick_h_> vmorris: hmm, 17070 is the controller port. I'm not sure on the ipv4 part. Would have to try to match those ip addresses with your infrastructure and see what machines are doing what there
<vmorris> rick_h_: these are all maas machines and lxd containers that came up as a result of a juju deploy
<rick_h_> vmorris: right, but you're saying that the nodes don't have ipv6 enabled
<rick_h_> vmorris: and so the ipv6 addresses must be coming from some dhcp/etc on the network?
<vmorris> ah no.. the maas controller doesn't have ipv6 enabled
<vmorris> i don't know if i can control ipv6 on the nodes
<rick_h_> vmorris: so I'm curious how the deployed nodes/containers have ipv6 addresses
<vmorris> rick_h_: ooh i see what you're saying
<rick_h_> vmorris: right, the nodes and containers get their addresses either as hard coded or via the dhcp server maas runs.
<rick_h_> vmorris: either way, the data comes from maas unless there's another thing in the mix there
<vmorris> rick_h_: there's definitely not another dhcp server.. and i only enabled dhcp on the 10.20.95.0/24 subnet.. so where are they getting these addresses?
<vmorris> here i'll get on the juju controller node and dump out some information
<vmorris> rick_h_: https://paste.ubuntu.com/23389877/
<vmorris> rick_h_: v6 addressing looks like it's coming in through MAAS metadata, looking for confirmation
<vmorris> rick_h_: lazyPower: perhaps i should've been more clear.. i've disabled ipv6 on the maas controller itself, but is there a way to disable it in the region/rack settings as well?
<vmorris> rick_h_: let me back up and ask a more pointed question.. if i have 3 interfaces on a maas machine, but only want to use 2 of them in my juju deployments, i can do this with spaces?
<vmorris> or, even more to the point - if i have internet connectivity out of one interface, but want to run my juju applications on another, how is this configured?
<vmorris> also, according to the docs, spaces are only supported on EC2
<vmorris> question: when running juju deploy with --debug, i'm seeing some messages from httpbakery client.go GET to api.jujucharms.com -- are these requests coming from the juju controller machine?
#juju 2016-10-28
<vmorris> shouldn't no_proxy contain localhost?
<gaurangt> we're facing an issue with MAAS while deploying a charm. The storage disk attached to the VM (sdb) is not getting picked up by the charm. The juju storage list always shows the disk status as pending. Any clues on what could be going wrong here?
<gaurangt> the MAAS version we're using is 1.9 and that of juju is 2.0.
<aluria> hi o/ -- I was looking for logstash plugins to parse juju logs -- I can't find any at https://www.elastic.co/guide/en/logstash/current/filter-plugins.html
<aluria> do you know if such plugin exists somewhere else?
<BlackDex> Hello there, i'm looking for cholcombe or icey about ceph-dash, to get access to the private repo
<autonomouse> hi folks, got a question about actions for anyone who wouldn't mind taking a shot: http://askubuntu.com/questions/842795/can-i-make-a-juju-action-pass-a-file-back-to-where-it-was-called-from
<cholcombe> BlackDex, sure what's your launchpad id?
<stokachu> can you not run 'config-set' in an action?
<lazyPowe_> stokachu - config-set isn't a thing. charms cannot set their own config
<stokachu> if im running an action that upgrades a piece of software and i want to update the charm's config to reflect that what is the alternative?
<lazyPowe_> stokachu - unfortunately, charms cannot set their own config, that has to come from the operator
<lazyPowe_> i dont know that we have any examples for that pattern
<stokachu> lazyPowe_: so is preferred to update the actual software via actions?
<stokachu> is it*
<lazyPower> No, we tend to do that via config, or via resources.
<stokachu> ok
<lazyPower> stokachu - i would probably crib from the openstack charms for workflow there. they have a pretty solid upgrade story
<stokachu> thanks
<stokachu> i think using resources is the way to go
<lazyPower> +1
<lazyPower> if you come up with an interesting strategy to manage resources in terms of traceability we're all ears over here on the kubernetes team
<lazyPower> with many minor version releases and major releases every 3 months, we're up to our ears in resources
<stokachu> lazyPower: mind elaborating on what you are looking for?
 * stokachu just trying to understand
<lazyPower> stokachu - we're currently working on a binary pipeline to give us s'more traceability into whats out there that gets deployed via resource.  We want things like "Build ID, git hash, fingerprint"
<lazyPower> so when someone opens an SOS report, we have all that data handy
<lazyPower> we can pull that bin, redeploy and verify the bug
<lazyPower> not everyone will juju upgrade-charm when there is a new resource available (it just says charm upgrade available)
<stokachu> ahh
<lazyPower> so we have to have some level of accountability for what bins are out in the wild, and how we triage/treat support rquests for deployments that are not on the latest release.
<stokachu> i remember you used to be able to run an elf tool on the binary
<stokachu> to pull the buildid if it had one
<lazyPower> well we're thinking about going even lower tech
<lazyPower> just including a .release file with all that info
<lazyPower> if you dont have that you're unsupported
<stokachu> lazyPower: yea if you have that ability i would do that instead
<stokachu> i had to try and pull the build-id out of kernel
<lazyPower> ewww
<stokachu> so it was embedded at the beginning
<stokachu> but attaching a metadata file is easier
<lazyPower> oh thats fair
<lazyPower> thats less eww
<stokachu> im working on my dokuwiki charm right now ill see what kind of info i can pull from that wrt traceability
<lazyPower> stokachu the longer lived a charm is, the larger the resource trail will be too
<lazyPower> thats another area we're interested in knowing.
<lazyPower> i've considered putting together a simple little database so we have more detailed information than what the store-api is giving us, which is just hash and date uploaded.
<stokachu> ah so, can you delete a resource from the charm store?
<lazyPower> you cannot
<lazyPower> once its in there, its in there
<stokachu> wow ok
<lazyPower> you can release a new resource attached to a charm id
<lazyPower> but in terms of resource streams, its a forward progression (in unpublished) and you get little aliases per channel that the resource is released (when associated with a charm release)
<lazyPower> so to be clear, you can shuffle a resource out, but it never goes away
<stokachu> gotcha, yea querying the api may be the way to go
<lazyPower> you're still only getting a limited subset, which is why i think it necessitates an additional meta source
<lazyPower> not only in the deliverable, but we're going to want a catalogue of what was published with what, when.
<lazyPower> and the more automated we can make that process, the happier i'll be
<lazyPower> release management sucks. point blank.
<lazyPower> maybe someone out there likes it, i feel like its a time suck
<stokachu> haha
<stokachu> those guys get their own job title for a reason :)
<lazyPower> +1 to that sentiment
<stokachu> but yea, more automation would make everyone's life easier
<lazyPower> we have a pretty good start to standing up jenkins + plugins + build workspace snapshot/restore
<lazyPower> so we're getting there, its just a matter of time until we get the whole pipeline captured as code
<jrwren> the problem with continuous deployment is that now you are deploying continuously.
 * jrwren ducks
<lazyPower> jrwren fact
<lazyPower> i don't think anyone said it's easy ;) and if they did, their pants.. they are on fire.
<jrwren> oh, it can be easy... when dealing with easy software on easy infrastructure. I've done that. it was easy. :]
<lazyPower> jrwren - that smokey musty smell... i do believe your pants
<lazyPower> they are on fire
<jrwren> lazyPower: lol.
<lazyPower> stokachu - when you were working with conjure to get resources working properly
<lazyPower> stokachu did you file a bug for that?
<urulama> stokachu: and did you manage to get it working in the end?
<stokachu> yea we have resources working in conjureup
<stokachu> we pull them down via the api if they exist
<stokachu> lazyPower: urulama https://github.com/conjure-up/conjure-up/blob/master/conjureup/juju.py#L385-L403
<stokachu> using our macumba juju 2 api client
<urulama> stokachu: ty!
<stokachu> np
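For anyone following along: if you just want to poke at what the store knows about a charm's resources from the command line, the charmstore v5 API exposes a meta endpoint for it. A sketch only — the path is my best recollection of the v5 API, and the charm id is purely illustrative:

```shell
# List resource metadata the store holds for a charm
# (charm id illustrative; endpoint path per the charmstore v5 API as I recall it)
curl -s https://api.jujucharms.com/charmstore/v5/wordpress/meta/resources
```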
<lazyPower> nice drop stokachu, thanks for the pickup :)
<lazyPower> we just tag teamed a gui bug
<stokachu> :D
<stokachu> urulama: http://paste.ubuntu.com/23395090/
<stokachu> am i doing something wrong here?
<stokachu> or anyone else having issues uploading resources to charmstore?
#juju 2016-10-29
<PCdude> hi all!
<PCdude> I have here a PC, with an openstack install from conjure-up
<PCdude> it works, but it cannot take a restart
<PCdude> when I do so, LXC list and commands like that do not work anymore
<PCdude> when issuing them and hitting enter the cursor goes to the next line but stays there. no output no error nothing...
<PCdude> the services seem to work, but that is all. they show up in "top"
<PCdude> any idea?
<stokachu> PCdude: yea im hitting this issue too, trying to figure out why restarts aren't bringing up the containers automatically
<PCdude> stokachu: well if u have the issue too then I know why I was not able to fix it :)
<PCdude> way over my head then haha
#juju 2017-10-23
<hallyn> axw: hey, if i have (for licensing reasons) multiple vcenters which each have a few esxi6 hosts, can juju treat them all as one cloud?
<axw> hallyn: heya. you could treat them as regions. normally a region would be a datacenter in the same vcenter, but you could set the endpoint for each region. you'll need to use the same user/pass for each vcenter though
<axw> hallyn: though that'll only work if the DC in each vcenter has a different name
<axw> and if you're wanting Juju to deploy across all of them, in the same model - that's not going to work
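What axw describes would look roughly like this in clouds.yaml — a sketch only, with hypothetical hostnames and datacenter names, and the same credential used against every endpoint:

```yaml
# ~/.local/share/juju/clouds.yaml (names and endpoints are hypothetical)
clouds:
  vsphere-multi:
    type: vsphere
    endpoint: vcenter1.example.com
    regions:
      dc-one:                        # datacenter name in vcenter1
        endpoint: vcenter1.example.com
      dc-two:                        # datacenter name in vcenter2 (must differ)
        endpoint: vcenter2.example.com
```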
<mark-dickie> Hello all, looking at the Github repo for juju and the vsphere provider in particular and I see that there doesn't appear to be support for deploying Windows machines.
<mark-dickie> This is something I'm interested in doing and would be prepared to commit some code, can anyone with familiarity point me in the right direction?
<mark-dickie> I'm wondering if it would be simpler to use MaaS as a layer between juju and vSphere.
<mark-dickie> Seeing as how MaaS can already deploy Windows machines.
<Mmike> Hi, lads - is there a (simple-ish) way to recreate juju environment once I 'juju ssh' into a unit? I can't use 'juju run' as relations are in error state and I need to verify some relation variables across the environment. So before I resort to peeking into the database I wonder can I run 'relation-ids, relation-list, ...' from the unit itself somehow?
<zeestrat> Mmike: Not sure if it helps, but have you checked out the "juju debug-hooks ..." command? https://jujucharms.com/docs/stable/developer-debugging
<Mmike> zeestrat, yup, but I can't really use that :(
<Mmike> I have several dozen units that I need to go through
<Mmike> juju run would be awesome, but can't run that as most of my units are in error state
<zeestrat> Mmike: Ah, right. This prod or dev? How about, "juju resolved" or "juju resolved --no-retry" to tell Juju to recover from the error state?
<Mmike> zeestrat, yup, no dice. It's production, and I need to rerun hooks to fix the env, but before that I need to gather info from the env
<Mmike> I'm going through the database, so I'll have it for later too :) (unless database changes)
<Mmike> if I just 'juju resolved --no-retry', then I can't re-fire those hooks at later time
<zeestrat> Gotcha. Yeah, I can see if it's prod then the --no-retry might get you a bit stuck if you can't easily restart the jujud services
<ryebot> what recourse do I have if I'm trying to add a space with a (known existing) subnet that juju says is not found?
<ryebot> ^ https://gist.github.com/wwwtyro/2954c1efd9b755d7d1d7c543a72a3dca
<bdx> ryebot: new model
<bdx> I think I saw some work going into that somewhere recently
<bdx> autodetecting/autoupdating the subnets
<ryebot> bdx: ah, got it. gotta make the model with the vpc-id config. +1
<bdx> totally
<bdx> how do I access network bindings from charmhelpers?
<bdx> Via
<bdx> Iâm thinking I use network_get() then filter for the address in the network from the space specified in the bindings directive?
<rick_h> With networkbget you pass the endpoint in and tlget the binding info out right?
<bdx> Ohhh I havenât used it with bindings specifies yet
<bdx> It lists the bindings in the output Iâm guessing
<bdx> If you have bindings specified
<bdx> Ok
<bdx> Cool
<bdx> thx thx
#juju 2017-10-24
<junaidali> Guys, is it possible to deploy an application on a machine with either a tag test1 or test2?
<junaidali> cloud provider is MAAS
<mark-dickie> junaidali: You can use the --constraints tags=<tag> parameter.
<junaidali> mark-dickie: can we specify a constraints like `--constraints tags=<test1 or test2>`?
<junaidali> to deploy on a machine with a tag of either test1 or test2*
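For reference, the single-tag form mark-dickie mentions looks like this (application name is illustrative; whether an either-or tag expression is accepted isn't answered here — comma-separated tags are, as far as I know, treated as "all of these" rather than "any of these"):

```shell
# Deploy to a MAAS machine carrying the tag 'test1'
juju deploy postgresql --constraints tags=test1
```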
<bdx> did something come across about being able to deploy bundles to pre-existing machines recently?
<rick_h> bdx: it's something being hacked on in the side time.
<bdx> rick_h: thx
<andrew-ii> A best practice question of MAAS and Juju: does my Juju controller need to run on its own MAAS node, or would it be reasonable to run on the MAAS controller?
<rick_h> andrew-ii: so for testing, definitely do what makes sense
<rick_h> andrew-ii: but for production, with an HA controller setup we recommend own nodes so that you can migrate and have full safety of those controllers
<andrew-ii> rick_h: does that impact bringing Juju charms on the internet, when MAAS is the proxy for everything (any challenges I should watch for?)
<andrew-ii> (I am planning to use Openstack)
<rick_h> andrew-ii: so if maas is proxying for the nodes that come up it should be ok. One thing folks have done is to manually create a VM and register that in maas
<rick_h> so you bootstrap to MAAS, but use constraints (--to=xxx) to target the VM node in maas for the controller
<andrew-ii> Huh. Neat. That's a pretty cool design
<rick_h> andrew-ii: yea, it helps isolate the controller in case you need to blow stuff away later and keeps everything behaving as normal maas setup
<andrew-ii> Thanks rick_h. I have the hardware to make that work, I just wanted to make sure I didn't shoot myself in the foot at the start
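A sketch of the VM-as-controller trick rick_h describes (the MAAS profile, tag, and cloud names are hypothetical, and the MAAS CLI syntax is from memory — check your MAAS version's docs):

```shell
# After creating the VM, then enlisting and commissioning it in MAAS, tag it:
maas admin tags create name=juju-controller
maas admin tag update-nodes juju-controller add=<system-id>

# Then target that tag when bootstrapping:
juju bootstrap my-maas --bootstrap-constraints tags=juju-controller
```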
<jamesbenson> Hi all, so I've moved my maas controller to a hyper-v server and I'm trying to move my juju controller there, but having issues.  I'd rather have a VM host this than a full node.  Has anyone else done this?
<rick_h> jamesbenson: moving an already running controller?
<jamesbenson> fresh is fine with me :-)
<jamesbenson> I was having issues deploying the controller in a VM because I didn't get the reboot typing right...
<jamesbenson> since maas can't do hyper-v
<rick_h> jamesbenson: hmm, k. so I don't know about maas and hyper-v tbh. If you can get it registered in maas just tag the node with a special tag and bootstrap --constraints to target it during bootstrap
<jamesbenson> yeah, I got the VM registered on Maas, but the power types don't exist for hyper-v... tagging was the way to go though for making sure it got the right one
<xarses> snaps, why does it always have to be snaps...
<xarses> seriously though, all of the charms tools versions were switched to snaps
<xarses> and apparently snapd doesn't run on this box, and it's installed and there is no service for it
<pmatulis>  xarses what is the nature of your box?
<xarses> wsl, 12.04
<xarses> all of the docs for all the juju versions had non-snap directions a few months ago
<xarses> erm, 14.04 even
<pmatulis> xarses, i know. it was changed to snaps b/c the PPA wasn't supported anymore
<pmatulis> it may still work
<xarses> ppa's have been removed totally? or just for charms project?
<hallyn> axw: sure I can give each DC a different name.  thanks i'll give that a shot.
<pmatulis> xarses, just charm tools
<pmatulis> and like i said, it may still work
<xarses> sigh, sounds not fun
<pmatulis> https://launchpad.net/charm-tools
<[Kid]> is there a way to run the juju controller in a lxc container on a remote node?
<[Kid]> instead of localhost
<[Kid]> i tried juju bootstrap maas maas-controller --to lxd:containername but it didn't like that
<rick_h> [Kid]: not atm, it's not supported.
<[Kid]> ok, then how about HA between two localhost controllers?
<[Kid]> on different physical hosts
<[Kid]> provided they can talk to each other?
<[Kid]> i.e. lxd network bridge on each host
<rick_h> [Kid]: so at the moment that's waiting on some work taking place in lxd to enable building a lxd cluster.
<rick_h> [Kid]: then it'll allow juju to treat it as a cloud and lxd will provision containers and load balance and such
<[Kid]> so it will basically be an lxd cloud?
<[Kid]> lxd cloud provider
<rick_h> [Kid]: the best thing at the moment is to build a lxd environment on each machine and use the cross model relations feature in the upcoming 2.3 release (in beta now) to have things speak across the machines
<rick_h> [Kid]: exactly
<rick_h> [Kid]: take 5 machines, build a lxd cluster on them, and point juju at them to manage multiple models of running applications
<[Kid]> that sounds perfect
<[Kid]> but that is waiting on some work in lxd right now?
<rick_h> [Kid]: right
<[Kid]> could i not just build a controller on each physical machine right now under the same model and do a juju enable-ha?
<[Kid]> brb, i would like to finish this shortly
#juju 2017-10-25
<[Kid]> rick_h, you around/
<tamszagot> good morning
<tamszagot> I have a problem with juju after upgrading to 2.2.5
<tamszagot> got "no matching agent binaries available" when trying to deploy a model
<tamszagot> destroyed all controllers and tried to reinstall juju and then do a bootstrap, but same problem
<tamszagot> tamas@tamas-EasyNote-TE11HC ~ $ sudo snap install juju --classic
<tamszagot> [sudo] password for tamas:
<tamszagot> juju 2.2.5 from 'canonical' installed
<tamszagot> tamas@tamas-EasyNote-TE11HC ~ $ sudo snap list juju
<tamszagot> Name  Version  Rev   Developer  Notes
<tamszagot> juju  2.2.5    2713  canonical  classic
<tamszagot> tamas@tamas-EasyNote-TE11HC ~ $ juju bootstrap
<tamszagot> Clouds
<tamszagot> aws
<tamszagot> aws-china
<tamszagot> aws-gov
<tamszagot> azure
<tamszagot> azure-china
<tamszagot> cloudsigma
<tamszagot> google
<tamszagot> joyent
<tamszagot> localhost
<tamszagot> oracle
<tamszagot> rackspace
<tamszagot> Select a cloud [localhost]:
<tamszagot> Enter a name for the Controller [localhost-localhost]: localhost
<tamszagot> Creating Juju controller "localhost" on localhost/localhost
<tamszagot> Looking for packaged Juju agent version 2.2.5 for amd64
<tamszagot> To configure your system to better support LXD containers, please see: https://github.com/lxc/lxd/blob/master/doc/production-setup.md
<tamszagot> ERROR failed to bootstrap model: no matching agent binaries available
<tamszagot> tamas@tamas-EasyNote-TE11HC ~ $ cat /etc/issue
<tamszagot> Linux Mint 18 Sarah \n \l
<Akshay> Hi All, Is there a way to tag charms on charmstore for specific release for e.g. tag the uploaded charm for OCATA release only.
<mark-dickie> Hello, does anyone know if the maas-image-builder is still available?
<mark-dickie> bzr branch lp:maas-image-builder
<mark-dickie> bzr: ERROR: Invalid url supplied to transport: "bzr+ssh://bazaar.launchpad.net/+branch/maas-image-builder": no supported schemes
<mark-dickie> Actually the MaaS docs say I need Ubuntu Advantage for custom images.
<tamszagot> Any of you seen the message: ERROR failed to bootstrap model: no matching agent binaries available
<tamszagot> after upgrading to 1.2.5 ?
<tamszagot> sry I mean 2.2.5
<[Kid]> is there a way to deploy a juju HA cluster with MAAS to a lxd container?
<jfh> hi - did anyone run into this before (and has an idea about it):
<jfh> "juju status"
<jfh> "ERROR unable to connect to API: Forbidden"
<jfh> ssh login to the controller and 'juju controllers' still work, but more or less no other juju cmd anymore ...
<magicaltrout> anyone want to hazard a guess as to why trying to bootstrap k8s on lxd via conjure up says
<magicaltrout> Error bootstrapping controller: ['ERROR enabling HTTPS listener: cannot listen on https socket: listen tcp [::]:8443: bind: address already in use']
<magicaltrout> but
<magicaltrout> fuser 8443/tcp
<magicaltrout> is empty
<stokachu> magicaltrout: do you have multiple lxd's installed, like deb and snap?
<stokachu> conjure-up requires snap lxd
<magicaltrout> good catch
<magicaltrout> thanks stokachu
<stokachu> magicaltrout: when snap lxd is first class in next lts this wont be a problem
<[Kid]> is there a way to put SSH keys into a credentials.yaml for my own private cloud provider?
<bdx> [kid]: use `juju add-ssh-key "$(cat mykey.pub)"`
<bdx> that will add the key to your model
<[Kid]> bdx, awesome, i think that's what i needed
<[Kid]> bdx, will this allow me to bootstrap juju onto another machine?
<[Kid]> that has that key
<bdx> oh
<bdx> I can't be sure
<bdx> if its the default ssh of of your user
<bdx> in ~/.ssh/id_rsa.pub
<bdx> I think it gets added automatically
<bdx> don't quote me on that
<bdx> :)
<[Kid]> yeah, i have two machines and i am using one to bootstrap the other, then i will add the other one to the model and join them in a HA cluster
<[Kid]> well, when i do bootstrap, it is trying to use the key at /home/ubuntu/.local/share/juju/ssh/juju_id_rsa.pub
<bdx> [kid]: again, don't quote me on this, but I *think* you need a quorum of at least 3 for controller HA
<[Kid]> ohh dammit
<bdx> ahh I see
<bdx> yeah, so add /home/ubuntu/.local/share/juju/ssh/juju_id_rsa to you ssh keys
<bdx> then you will just be able to ssh
<[Kid]> well, that's what i was thinking.
<bdx> eval "$(ssh-agent -s)"
<[Kid]> i have to do that on the remote host
<bdx> $ ssh-add -K /home/ubuntu/.local/share/juju/ssh/juju_id_rsa
<[Kid]> that just adds the juju client key for each local host
<[Kid]> i need to add the keys from node 1 to node 2 and vice versa
<[Kid]> that worked.
<[Kid]> i just did it manually
<[Kid]> the question now, is there a way to have juju use the same client key?
<[Kid]> on all of its deployments
<bdx> yeah
<bdx> add it to the model
<[Kid]> oh with the command you said earlier?
<bdx> [Kid]: yea
<tamszagot> goodevening from sweden
<tamszagot> Any of you seen the message: ERROR failed to bootstrap model: no matching agent binaries available
<tamszagot> after upgrading to version 2.2.5
<knobby> tamszagot: known issue related to the bionic update
<tamszagot> ahh , thanks
<tamszagot> is it possible to downgrade?
<tamszagot> to precious version?
<tamszagot> previous
<rick_h> tamszagot: the trick is to specify the series as xenial at the moment. You can pass --series into the deploy command or the add-machine command or use the charm urls with series in them such as cs:xenial/postgresql
<rick_h> tamszagot: the team's actively getting new releases out that corrects this
<tamszagot> thank you very much rick!
<tamszagot> hmm ... I have no controller since I uninstalled juju
<tamszagot> and --series doesn't seme to be an option of the bootstrap command
<tamszagot> (I'm new with juju so bear with me ..)
<gQuigs> juju bootstrap  --bootstrap-series xenial
<gQuigs> tamszagot: ^^
<tamszagot> thanks
<tamszagot> (I would have changed that --bootstrap-series to plain --series if you asked me :)
<tamszagot> tamas@tamas-EasyNote-TE11HC ~ $ juju controllers
<tamszagot> Use --refresh flag with this command to see the latest information.
<tamszagot> Controller  Model    User   Access     Cloud/Region         Models  Machines    HA  Version
<tamszagot> localhost*  default  admin  superuser  localhost/localhost       2         1  none  2.2.5
<tamszagot> gQuigs: thanks alot!
<gQuigs> :)
<rick_h> 20 minutes until the Juju show, get your youtube ready https://www.youtube.com/watch?v=zhz1PSXyIuA
<rick_h> https://hangouts.google.com/hangouts/_/laeoptm2grfy5lpnc67r3yld3me to join the conversation
<rick_h> starting in 9
<rick_h> wpk: bdx kwmonroe and others ^
<kwmonroe> i'll be there rick_h -- just gotta find some pants.
<rick_h> kwmonroe: oh good idea :)
<rick_h> bdx: you on the way?
<[Kid]> don't need pants, just point the camera on the upper half
<bdx> does anyone know of a good example of using Juju storage with the MAAS provider?
<bdx> I found this https://jujucharms.com/docs/devel/charms-storage#maas-(maas)
<bdx> but it doesn't really give any examples, I'm going to be digging into it a bit
<bdx> just wondering if anyone has made any headway there that I might be able to take a look at
<charlls> good afternoon
<charlls> I'm trying to do 'juju bootstrap lxd lxd-local' but it gets stuck in waiting for address 20m, after which it gives 'failed to bootstrap model: without getting any addresses'
<charlls> kubuntu 17.04 host
<charlls> juju --version: 2.2.5-zesty-amd64
<charlls> lxd and lxc --version: 2.19
<charlls> lxc profile list shows two profiles: default and juju-controller
<charlls> lxc network list shows lxdbr0, as type BRIDGE, managed
<aemara> When creating a GCE instance Is there a way to set up a new instance to run as a service account?
<charlls> halp
<thumper> charlls: morning
<thumper> aemara: what do you mean by service account?
<charlls> hi thumper
<charlls> I'm trying to do 'juju bootstrap lxd lxd-local' but it gets stuck in waiting for address 20m, after which it gives 'failed to bootstrap model: without getting any addresses'
<charlls> kubuntu 17.04 host, juju --version: 2.2.5-zesty-amd64, lxd and lxc --version: 2.19
<thumper> ok...
<thumper> likely to be one of several problems
<thumper> however
<thumper> firstly
<thumper> if you try the bootstrap in one window,
<thumper> and then in another terminal
<thumper> do 'lxc list'
<thumper> do you see the machine?
<aemara> @thumper basically its is a way to authenticate and authorize the instance to use other GCE services without having to login. I am trying to use the kubernetes charm and I have a private docker registry  setup on GCE. To access it the instance need to have a service account which juju does not create or attach to default. https://cloud.google.com/compute/docs/access/service-accounts
<thumper> if you do, you want to get into that container and check the cloud init output
<thumper> lxc exec <name> bash
<thumper> then look in /var/log/cloud-init <mumble> something in there
<thumper> aemara: not at this stage, but it is on our roadmap
<charlls> thumper: yes, one instance with state running and type persistent.
<thumper> aemara: does GCE allow you to specify this after the instance has started?
<charlls> but it has no ipv4 or ipv6 address
<thumper> charlls: so it looks like the networking is not working
<aemara> @thumper only if i shutdown the instance. I cant do it while it is running :(.
<thumper> the cloud init output from the inside of that machine should tell us why
<charlls> during bootstrap I get a line that says "Resolved LXD host address on bridge lxdbr0: 10.122.27.1:8443"
<thumper> charlls: are you on a vpn or something that would try to proxy that address?
<thumper> charlls: really info from inside the container is necessary to debug
<charlls> lxc network show lxdbr0 shows ipv4.address: 10.122.27.1/24
<charlls> no, just behind my home router
<thumper> charlls: does lxc list show an address for the container?
<charlls> no
<charlls> it seems to get stuck precisely getting an address
<charlls> not sure how to debug the networking issue
<thumper> charlls: can you get the cloud-init logs and pastebin them?
<charlls> hmm, I don't seem to have cloud-init installed
<thumper> charlls: 'lxc exec juju-xxx-0 bash' where juju-xxx-0 is replaced with the name in lxc list
<thumper> that should give you a root shell in the container
<charlls> awesome, let me try
<bdx> concerning `network_get()`
<thumper> charlls: then there should be two files: /var/log/cloud-init.log and /var/log/cloud-init-output.log
<bdx> would this be the appropriate way to use network_get() https://gist.github.com/jamesbeedy/b3ee89d9c266fb374db41e4642505a0e#file-provides-py-L13
<thumper> charlls: ideally grab the content of those files
<thumper> bdx: I'd have to defer to wallyworld or jam
<bdx> thumper: I didn't mean to target you - sorry for interrupting :)
<bdx> but yeah
<thumper> bdx: that's fine, I just didn't want to leave you hanging
<bdx> wallyworld, jam, wpk, cory_fu: should I be calling network_get() in my interface code or layer code?
<bdx> I feel like I should be calling network_get() at the layer level, and passing the desired value returned from network_get() to the relation
<thumper> charlls: also, you can cat the file from outside the container :  lxc exec juju-xxx-0 cat /var/log/cloud-init-output.log
<bdx> instead of what I have going on ^, where I'm passing the relation the interface_name and calling network_get() from the interface code
<cory_fu> bdx: Yeah, that's what I would recommend as well.
<charlls> yeah, I was wondering how to copy the content outside the container. Thanks
<cory_fu> bdx: The interface layer should really only be concerned with providing a consistent API for the communication channel and dealing with any serialization or other transport concerns
<bdx> cory_fu: thx
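To make the layer-level approach cory_fu recommends concrete: inside a hook you'd shell out to the network-get tool, extract just the address, and pass that value on to the relation. A minimal sketch — the payload shape follows the Juju 2.3-era network-get output as I understand it, and the helper names are mine, not charmhelpers API:

```python
import json
import subprocess


def ingress_address(payload):
    """Pull an address out of parsed network-get output.

    Prefers 'ingress-addresses' and falls back to the first bound
    address; the payload shape here is an assumption, not a spec.
    """
    if payload.get("ingress-addresses"):
        return payload["ingress-addresses"][0]
    for info in payload.get("bind-addresses", []):
        for addr in info.get("addresses", []):
            return addr["address"]
    return None


def network_get(binding):
    """Call the network-get hook tool (only works inside a hook context)."""
    out = subprocess.check_output(
        ["network-get", binding, "--format", "json"])
    return ingress_address(json.loads(out))
```

The layer would then hand `network_get("my-binding")`'s result to the interface layer, which only worries about serialization, per cory_fu's point above.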
<bdx> cory_fu: what are the implications of the new endpoint for the peer relation type?
<cory_fu> bdx: It should work the same, and actually be more consistent.  The only difference is that you can expect that there will only ever be one relation, so you can just use the all_units helper if you want
<bdx> cory_fu: thats awesome!!!!!
<thumper> charlls: also from lxd expert, the lxd version, contents of /var/log/syslog, and ps fauxww from the host
<charlls> thumper: https://pastebin.com/yvKGrQnN
<thumper> charlls: he mentioned that the most common problem is running a DNS server on the host
<thumper> charlls: that was the cloud-init.log right? can we get cloud-init-output.log too?
<charlls> just contains a single line: Cloud-init v. 0.7.9 running 'init-local' at Wed, 25 Oct 2017 20:58:04 +0000. Up 15 seconds.
<charlls> yes the one on pastebin was cloud-init.log
<thumper> oh...
<thumper> that's weird
<thumper> charlls: what about the syslog and ps?
<charlls> the /var/log folder in the juju container has the following contents: apt btmp cloud-init-output.log cloud-init.log dist-upgrade dpkg.log fsck lastlog lxd unattended-upgrades wtmp
<charlls> no syslog
<charlls> this is ps fauxww inside juju container: https://pastebin.com/YnAKdkXG
<thumper> charlls: sorry, on calls just now
<charlls> ok no prob
<charlls> if someone feels in a helpful mood, I posted my issue here: https://askubuntu.com/q/969275/17002
#juju 2017-10-26
<magicaltrout> random question of the day
<magicaltrout> has anyone seen juju ssh route them to the wrong box?
<wpk> nope, and there's host key check to make sure it doesn't happen
<magicaltrout> something funky is happening because if i run juju ssh 0
<magicaltrout> i go to a remote node
<magicaltrout> if i run juju ssh 1,2 or 3
<magicaltrout> I loop back to my controller node
<magicaltrout> hrm
<wpk> magicaltrout: what does juju --debug ssh 0 say?
<wpk> s/0/1/ ofc
<magicaltrout> er well
<magicaltrout> using 1 didn't fail that time but 2 did
<magicaltrout> using target "2" address "172.17.0.1"
<magicaltrout> but thats a local docker interface
<magicaltrout> and juju status shows the #2 machine as being 10.10.1.81
<wpk> juju show-machine 2 ?
<magicaltrout> ip-addresses: 10.10.1.81, 10.1.100.0, 10.1.100.1.... 172.17.0.1
<magicaltrout> instance-id: manual: 10.10.1.81
<wpk> With --debug enabled is it checking host keys? And finding a proper key?
<magicaltrout> https://gist.github.com/buggtb/04efdbd34493984069026810a23227ff
<magicaltrout> and that loops me right back to the machine i'm on
<magicaltrout> and logs me in
<wpk> and if you ssh into the machine manually?
<wpk> could you see how ip a looks like?
<magicaltrout> well its external ip is 10.10.1.81
<magicaltrout> it also has a docker internal net on 172.17.0.1
<magicaltrout> its just a manual CDK deployment
<wpk> and 172.17.0.1 appears also on the machine you're on?
<magicaltrout> isn't 172.17.0.1 on every machine docker is ever installed on?
<magicaltrout> it's on my laptop for example
<wpk> magicaltrout: can you try newer juju? 2.2?
<wpk> magicaltrout: in 2.2 we're checking for host keys on all the possible interfaces, and only connecting to the ones that provide the proper key
<magicaltrout> can i snap change it somehow?
<wpk> magicaltrout: in 2.0 the machine is reporting 172.17.0.1 as its address, so we try to connect to it. And since it's localhost it's likely it's going to 'win the race'.
<magicaltrout> bonus
<magicaltrout> ah yeah 2.2 does work wpk
<magicaltrout> thanks
<magicaltrout> that had me utterly baffled
<wpk> magicaltrout: could you paste juju --debug ssh 2 somewhere? I wonder how it looks like
<magicaltrout> new or old?
<wpk> new one
<magicaltrout> https://gist.github.com/buggtb/5a9d1708dd0ee59ea11a806dbf1c6e8a
<wpk> Thanks
<magicaltrout> wpk: i lied
<magicaltrout> check this treat
<magicaltrout> https://gist.github.com/buggtb/d85fbf00cd45c9e3b72251dd3fc619e4
<magicaltrout> thats absolutely amazing :)
<magicaltrout> basically you can't run docker on the same machine as a juju controller
<magicaltrout> without it looping back in
<magicaltrout> fml
<magicaltrout> well
<magicaltrout> i can change the default docker0 ip i guess that'll stop it for now
<wpk> magicaltrout: ok, that's a serious bug
<wpk> magicaltrout: could you do ssh-keyscan 172.17.0.1 10.10.1.81 ?
<magicaltrout> wpk: https://gist.github.com/buggtb/6b9a4fd460aa2ba84284c3ec847808f5
<wpk> there's no output for 172.17.0.1?
<wpk> try just ssh-keyscan 172.17.0.1
<magicaltrout> i have realised part of the problem
<magicaltrout> there is an ubuntu user locally which allows ssh loopback access to the jujucontroller user I have
<magicaltrout> which is non-standard I accept :)
<magicaltrout> so usually you wouldn't be able to login to yourself
<magicaltrout> that said, its still a bit weird how the docker interface gets preferential treatment over the interfaces you've declared
<wpk> it's the fastest one
<wpk> since both 172.17.0.1 and 10.10.1.81 are in private space
<wpk> we wouldn't know anything about docker, ip is an ip
<wpk> hm, but still it shouldn't validate the host key on both IPs
<ryebot> Where can I find documentation for bootstrapping & adding units in an egress-restricted environment?
<bdx> elasticsearch-peeps: http://paste.ubuntu.com/25825991/
<bdx> :0
<lazyPower> nice
#juju 2017-10-27
<[Kid]> if i run a juju bootstrap manual/ubuntu@10.1.1.5 test-model
<[Kid]> which SSH key does it use?
<[Kid]> my understanding was the juju-client-key
<[Kid]> which is in the /home/user/.local/share/ssh/ directory
<xilet> juju v2.0.2, it looks like the controller isn't coming up,  any juju commands come back with ERROR unable to connect to API: websocket.Dial wss://IP:17070/model/<UUID>/api. And nothing is listening on 17070, is there a way to manually start that service?
<bdx> elasticsearch-peeps: making headway - http://paste.ubuntu.com/25831054/, http://paste.ubuntu.com/25831045/, https://github.com/jamesbeedy/layer-elasticsearch-base/blob/refactor_for_network_get_juju_2_3/reactive/elasticsearch_base.py
<jamesbenson> quick question:  For those who have maas and juju: If you raid all of your disk into one large volume, is there a way to get an openstack deploy to split up that one disk vs use sdb?
<bdx> jamesbenson: yea, you have to create the partitions in maas, then specify the partitions in the charm config
<jamesbenson> bdx!!  Awesome! Can you tell me how to specify that in the charm?  I've looked and not sure where/what to modify
<bdx> jamesbenson: are you using the ceph-osd charm?
<jamesbenson> yeah
<bdx> https://jujucharms.com/ceph-osd/#charm-config-osd-devices
<jamesbenson> you are awesome!  Thank you bdx
<bdx> np
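For the archive, the config bdx points at is set like any other charm option — the device path here is illustrative and site-specific:

```shell
# Point ceph-osd at a specific block device or partition
juju config ceph-osd osd-devices='/dev/md0'
```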
<jamesbenson> bdx, do those directories need to exist prior or will juju make them?
<bdx> jamesbenson: which?
<jamesbenson> "For ceph >= 0.56.6 these can also be directories instead of devices - the charm assumes anything not starting with /dev is a directory instead."
<jamesbenson> osd-devices
<bdx> ah, my bad
<bdx> jamesbenson: https://imgur.com/a/nEqeR
<bdx> ^ for that setup, I would specify /dev/md0
<bdx> I see what you are saying
<bdx> you want to raid all of your disks in maas, and install ubuntu on to that raid, and then use a directory in the filesystem for ceph osd-device?
<bdx> jamesbenson: are trying to make the most of your drive bays here?
<jamesbenson> So maas only picks up 1 HD because we raided them all in our controller, not in maas.  But I don't want to go into all of our servers to change the raid controller...
<bdx> jamesbenson: I see, thats unfortunate
<bdx> jamesbenson: for openstack with ceph ......
<bdx> its going to be optimal if you feed ceph physical devices
<bdx> instead of doing what you are trying to do
<jamesbenson> https://snag.gy/qPgVtR.jpg
<bdx> jamesbenson: moreover, if you want to use all of the disks connected to your controller as ceph osd|journal devices, just get a satadom to plug into your mobo, and configure that to be your / partition in maas
<jamesbenson> So my 6 HDD's read as 10TB.
<bdx> right
<bdx> so you don't want to waste any of those by installing the host os on it
<jamesbenson> Agreed with the physical disks... just we have OLD hard drives and well... things fail a lot, so RAID is easier so we have less babying...
<bdx> jamesbenson: right
<bdx> so like, you can get a POC openstack going with that
<jamesbenson> I don't mind repartitioning the servers through maas, just don't want to go into the raid controller if necessary....
<bdx> but if you are actually wanting ceph to support any type of workload
<bdx> I just don't think the filesystem backend is really supported
<jamesbenson> are you familiar with fuel?
<bdx> jamesbenson: yeah
<bdx> its what initially turned me onto using juju to deploy openstack years ago
<[Kid]> guys, i am trying to enable HA on my controllers and I am getting ERROR failed to create new controller machines:
<[Kid]> i have 1 controller and 2 machines added
<jamesbenson> I'm not sure how fuel did it, but they were able to deploy openstack over our raid'ed systems....  but trying to migrate off since it's not supported with mirantis anymore.
<jamesbenson> yeah, we are using kolla now, but also trying to test juju to see how that stability is and also for deployments of other systems...
<bdx> yeah ... I have done that with fuel too
<bdx> https://github.com/openstack-charmers/openstack-on-lxd/blob/master/bundle-mitaka-novalxd.yaml#L81
<bdx> jamesbenson: I think its arbitrary filesystem path
<bdx> try using /srv/osd
<jamesbenson> should I deploy juju with lxd then?
<bdx> jamesbenson: there are a number of ways to go, it all depends on the use case
<jamesbenson> lol, simple!  we're not fancy here.... ;-)
<bdx> jamesbenson: first things first, the filesystem backend is going to kill you
<bdx> I have to be honest
<[Kid]> jamesbenson, you can only deploy juju to lxd on a single localhost
<[Kid]> using the localhost provider
<jamesbenson> well we want it production potentially so more than devstack env...
<bdx> jamesbenson: yeah, latest ceph doesn't even work with filesystem backend I don't think (with bluestore and all) ... possibly it does
<bdx> last time I tried, I encountered issues
<jamesbenson> Ceph L has xfs as a backend...
<bdx> totally
<bdx> I mean do what you want to do
<jamesbenson> https://snag.gy/yadZHm.jpg
<jamesbenson> I did this for another rack with Luminous...
<jamesbenson> using ceph deploy and manually modified the raid controllers
<bdx> ah,
<bdx> I see
<skay> I got a stack trace out of hookenv.log because of inadvertently trying to log too much info, https://paste.ubuntu.com/25831914/
<bdx> jamesbenson: https://jujucharms.com/ceph-osd/#charm-config-bluestore
<skay> (I had django.db logging turned to DEBUG)
<bdx> jamesbenson: you are going to want bluestore, and direct-io
<bdx> https://jujucharms.com/ceph-osd/#charm-config-use-direct-io
<bdx> you just won't be able to take advantage of the goodies with the directory based osd devices
<jamesbenson> yeah, I was a bit confused as to why bluestore wasn't being used by default...
<jamesbenson> but that's how ceph-deploy did it... so I didn't argue.
<bdx> right, well, ceph-deploy should be how you learn ceph, before professionally deploying with juju
<jamesbenson> completely agree... unfortunately time is my enemy.... and too frequently hope/assume defaults are "good enough" options for us here and dig into it further as necessary.
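Pulling together the options bdx links above, a sketch of the resulting charm config; `/dev/sdb` is just an example value for osd-devices (a directory path like `/srv/osd` only applies to the filesystem backend being advised against):

```shell
# Sketch: enable bluestore and direct I/O on the ceph-osd charm.
# Option names are from the linked charm-config pages; the device
# path is an example and depends on your hardware.
juju config ceph-osd bluestore=true use-direct-io=true osd-devices=/dev/sdb
```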
<dvavili> Can Juju be used to deploy P3 instances on AWS that was announced a couple of days back?
<shadoxx> Can I run Juju on my Maas controller?
<bdx> dvavili: submit a bug on that and it will get added
<bdx> shadoxx: you can run the juju client from wherever you install it at
<shadoxx> bdx: i mean, can i run juju on the same machine as my maas region/rack controller?
<bdx> shadoxx: run juju(very ambiguous), you mean the juju client?
<shadoxx> Is there a Juju server?
<bdx> shadoxx: yes, there are a few different components for sure
<bdx> The juju client is what you use to communicate with a juju controller
<shadoxx> Ok, so. Can I install the Juju Controller on the same machine as my MaaS Region/Rack Controller
<bdx> You *can*
<shadoxx> But not recommended lol
<shadoxx> I figured that it's not recommended. Just not sure if it was technically possible or not.
<bdx> It is definitely possible
#juju 2017-10-28
<bdx> You can even create a vm on your Maas controller and check it in to maas
<bdx> Then bootstrap juju to that and use that as a juju controller
<bdx> shadoxx: many ways to peel this orange
 * bdx searches for more neutral analogies 
<shadoxx> bdx: I like the VM on MaaS controller option
<shadoxx> Especially since I kind of messed up and installed a bare MaaS controller on this 32 core /  64GB server...
<shadoxx> Should have thrown a hypervisor on it. But I digress.
<shadoxx> I'm assuming KVM is the hypervisor solution in the Juju as a VM proposal?
<bdx> shadoxx: totally, just make sure the vm connects to the bridge interface that is, or is on, the MAAS primary interface
<bdx> To get it on the same network
<bdx> Yeah, and make sure you set the boot option to network
<bdx> So it pxe boots to maas
<shadoxx> The PXE boot part I've got down pat. :D
<bdx> Nice
<bdx> So yeah, if you just create a bridge interface you can attach that vm to so it can be a part of the network, you will be set
<lazyPower> bdx: ping on https://github.com/juju-solutions/layer-filebeat/issues/29#issuecomment-338497819
<shadoxx> bdx: this is what i'm working with at present https://next.deskcloud.io/s/qfzhQM2uqXVI1XO
<shadoxx> Only problem is that I can't get any of the nodes to deploy...
<bdx> lazyPower: thx
<lazyPower> no no, ty.
<lazyPower> i would hate to sink a weekend into this and find out i targeted a deprecated charm
<lazyPower> not that *that* has ever happened before ;)
<bdx> shadoxx: those are physical nodes? What does their power and network configuration look like?
<bdx> Hah right
<shadoxx> bdx: yes, all physical. ipmi is setup properly. they can PXE boot properly so the network is fine. they're HPs that have problems talking to the drives on boot for some reason
<shadoxx> probably has to do with some BIOS setting. trying to find a BIOS update for these/upgrade the ilo
<shadoxx> But, outside the scope of this channel, to be sure. :]
<lazyPower> bdx: did you see https://l.dasapp.io/4WVvY ?
<bdx> shadoxx: exactly, boot order in the bios should be 1) pxe, and 2) sda
<shadoxx> Oh no, I mean, it's a RAID controller problem. The lvmetad daemon doesn't scan the drives in time on one of the nodes
<bdx> lazyPower: no I hadn't until just now. That's ****** awesome!!!!
<shadoxx> so it doesn't detect disks after a certain timeout, whole deployment fails
<lazyPower> well, its gonna get better at some point. I have plans
<shadoxx> That's a next week issue. I've already spent 2 weeks bootstrapping this cluster. I had HP.
<shadoxx> hate*
<bdx> shadoxx: yeah one problem at a time
<lazyPower> welp good luck, i'm gonna head out and clean the apartment. Looking forward to collabing again bdx. :hattip:
<bdx> lazyPower: likewise
#juju 2019-10-21
<anastasiamac> wallyworld: decided to propose add-k8s separately (will probably do a pr per command) to avoid mudding the mud
<anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10760 - add-k8s changes for ask-or-tell
<wallyworld> ok
<wallyworld> anastasiamac: lgtm, ty
<anastasiamac> oh wallyworld  \o/ re: "cluster"... everywhere else in the command we actually say 'k8s cloud"... should i still say "k8s cluster" or 'k8s cloud' to b consistent..?
<wallyworld> hmmm
<wallyworld> k8s cloud
<wallyworld> i prefer cluster but i think certain others wanted cloud
<anastasiamac> wallyworld: ack
<kelvinliu> wallyworld: +1 plz, thanks! https://github.com/juju/juju/pull/10761
<wallyworld> ok
<wallyworld> kelvinliu: glad that one was caught
<kelvinliu> yeah, thanks!
<anastasiamac> wallyworld: PTAL next in line - https://github.com/juju/juju/pull/10762 - remove-k8s changes
<wallyworld> +1
<anastasiamac> \o/\o/\o/
<anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10764 - remove-cloud changes :D
<wallyworld> in a minute, just doing a critical fix
<wallyworld> thumper: https://github.com/juju/juju/pull/10765
<wallyworld> still need to figure out the potential dependency issue
<wallyworld> anastasiamac: lgtm with a suggestion
<anastasiamac> wallyworld: for ur delight PTAL https://github.com/juju/juju/pull/10766 - remove-credential changes :D
<kelvinliu> wallyworld: was about to fix the is_primary machine tag issue but found you already got a PR for this. here is a small enhancement, +1 plz thanks! https://github.com/juju/juju/pull/10767
<kelvinliu> microk8s test is green now, just enabled the job on CI. I think the gke will be green as well once the k8s version fix landed,
<wallyworld> kelvinliu: jeez, that was a big change
<kelvinliu> yes, it is! lol
<wallyworld> kelvinliu: i merged directly since jobs are taking upwards of 40 minutes right now and it was an acceptance-test-only Python change
<kelvinliu> yep, thanks
<kelvinliu> `snap remove microk8s` fixed the 503 health check errors.  last two runs were all green. so merged the PR on qa repo to enable the job.
<manadart> wallyworld: I know what the k8s issue is with not gating on the upgrade. My late email on Friday was poorly communicated in that the issue fixed by John wasn't the only outstanding article for my patch.
<manadart> I have a couple of patches to put up.
<stickupkid> babbageclunk, you around?
<stickupkid> babbageclunk, send email instead
<wallyworld> manadart: awesome that you have it in hand :-)
<stickupkid> wallyworld, it looks like it's returning a complex error from juju rather than a string for pylibjuju
<wallyworld> manadart: my PR to address the k8s issue is landing as we speak
<wallyworld> stickupkid: you mean the libjuju storage issue?
<stickupkid> wallyworld, yeah
<wallyworld> i thought it looked like the params not being marshalled properly
<wallyworld> ie the deploy storage args (a map) was not converted from a map of string to a map of struct
<wallyworld> i didn't raise the issue - just updated the description
<stickupkid> wallyworld, yeah, sorry, you're right, it's expecting a param rather than a string
<wallyworld> in the bundle, it's a map of string, but in the api, it's a map of struct
<wallyworld> but there's code to do it
<stickupkid> wallyworld, ah, nice nice
<wallyworld>         if storage:
<wallyworld>             storage = {
<wallyworld>                 k: client.Constraints(**v)
<wallyworld>                 for k, v in storage.items()
<wallyworld>             }
<wallyworld> appears to be *just* for bundles perhaps
<wallyworld> so maybe there's a code path with bundle deploy that doesn't invoke that conversion, not sure yet
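For anyone reading along, the pasted snippet is just a map-of-dicts to map-of-objects reshape: bundle YAML hands libjuju plain dicts, and the deploy API wants typed constraint objects. A standalone sketch of that conversion, with a hypothetical `Constraints` dataclass standing in for `client.Constraints`:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Constraints:
    """Hypothetical stand-in for python-libjuju's client.Constraints."""
    pool: Optional[str] = None
    count: int = 1
    size: Optional[int] = None

def normalise_storage(storage):
    # Mirror of the snippet above: convert each plain dict value into a
    # typed Constraints object; falsy input is passed through unchanged.
    if storage:
        storage = {k: Constraints(**v) for k, v in storage.items()}
    return storage

print(normalise_storage({"osd-devices": {"pool": "ssd", "count": 3}}))
```

The bug discussed here would then be a code path that deploys without ever calling such a conversion, so the API receives raw strings/dicts instead of structs.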
<stickupkid> wallyworld, also it might be an issue where i've pinned the facades too aggressively, so it might be worth checking those out
<stickupkid> wallyworld, https://github.com/juju/python-libjuju/blob/master/juju/client/connection.py#L20-L118
<wallyworld> stickupkid: that storage functionality was introduced in 1.24 so unlikely to be that
<stickupkid> wallyworld, that's good to know
<wallyworld> probs a long undiscovered issue in libjuju, but i haven't diagnosed fully
<nammn_de> manadart achilleasa I got a pr review regarding skipping a caas on upgrades. https://github.com/juju/juju/pull/10696#discussion_r336822311 Sadly I cannot 100% follow what the difference between applications and the others is. Currently I just skip if it's kube, but there seems to be an easier way I don't know. Does anyone have a pointer?
<nammn_de> plan is to skip models and controller (just everything) related to kube on upgradesteps
<Fallenour> nammn_de, wallyworld stickupkid rick_h Im having an issue with a for loop:
<Fallenour> for i in $(seq 1 3); do lxc launch ubuntu:x "saltmaster-00${i}"; done
<Fallenour> its telling me this isnt correct, but it should be.
<Fallenour> am I missing something here?
<stickupkid> Fallenour, bash or shell, if bash, that should work
<stickupkid> Fallenour, https://paste.ubuntu.com/p/hTBvMdJwTq/
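The loop itself is valid bash; substituting `echo` for `lxc launch` is a quick way to confirm the generated container names before touching LXD:

```shell
# Same loop shape as Fallenour's, with echo standing in for `lxc launch`
# so the expanded names can be inspected safely.
for i in $(seq 1 3); do echo "saltmaster-00${i}"; done
```

This prints saltmaster-001 through saltmaster-003, which suggests the failure was elsewhere (as it turned out: the explicit command path in front of it).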
<manadart> nammn_de: thumper is saying you can remove that block at 129, because checking that the agent tag is of type machine satisfies this.
<manadart> CAAS agents never have machine tags.
<nammn_de> manadart: thanks! ahhh, why is that? Do we have some kind of doc around on what kind of unit can have what kind of tag? For me that's pretty confusing tbh
<manadart> jam, wallyworld: The reason k8s continues even though the DB upgrade worker fails to start is that the lock is returned *unlocked* if we are already on the current version.
<manadart> So wallyworld's patch is sufficient.
<Fallenour> stickupkid, its bash. I think I know what the issue was. I was using an explicit path for a command in front of it, which is why it was failing. Im testing to see if adding that bin to PATH env will solve the problem
<wallyworld> manadart: ah, yes, that makes sense
<Fallenour> hey wallyworld ;) I see you put several lines of code in chat. I see manadart didnt eat your face for more than one line. Im assuming you work for canonical or the juju team o.o
<wallyworld> i do
<Fallenour> YAAASS *scribbles notes* *adds to birthday cake list*
<Fallenour> I think theres at least 8-11 of you in here o.o
<Fallenour> but ive only got like...4 :(
<Fallenour> you guys do way too much for the community to not at least get birthday cake o.o
<wallyworld> \o/
<manadart> :)
<Fallenour> \o/
<Fallenour> 8D
<wallyworld> glad you like juju :-)
<Fallenour> But yea, I herd from the grape vine that canonical was hiring o.o
<Fallenour> Oh, I dont like Juju
<Fallenour> Id marry juju if she were about 5'2, 135 lbs. shes beyond amazing personality wise, and runs like a champ. After HA, like...all my problems died. Well, most of them.
<Fallenour> its the best damn solution I think ive ever seen cloud wise. I converged it with Saltstack, and am deploying that to production now on live to about 15,000 people in my network.
<Fallenour> I plan on doing talks on it all year this year, integrating it in systems, storage, and security.
<wallyworld> there's an opening we're interviewing for in the APAC timezone
<Fallenour> Id go for it, but I honestly dont think im good enough. You guys are several levels higher than I am in terms of skillset.
<wallyworld> if you need help with talks etc we have a developer advocate on the team who would love to help if needed
<Fallenour> omg thatd be AMAZING
<Fallenour> One of the talks Im looking at doing is a full CI/CD stack deployment with juju to CI/CD in a can kinda concept.
<wallyworld> he is a busy person so it depends on the request etc, but feel free to ask and he can do what he can
<Fallenour> im building a new solution out of the box that uses LXD containers instead of docker containers for kubernetes with git that allows security teams to be appeased from security concerns with service containers.
<wallyworld> he's in NZ. you can ping him here to ask about stuff. his nic is timclicks
<Fallenour> OOOH its TIM! Tim is the bees knees!
<wallyworld> we don't necessarily have any pre-canned material but can offer general advice etc
<Fallenour> wallyworld, oh thats totally fine! I generally find people dont like pre-canned stuff, so thats actualyl a good thing
<wallyworld> great
<Fallenour> I do a lot of international security conference talks already, and one thing Ive noticed is they dont respond well to canned anything. unless its a canned solution for deployment purposes with a custom talk on top
<Fallenour> ill hit him up once hes on. For now, I have to figure out why sshing into one system logs me into another one? o.O
<Fallenour> I have no idea how thats even possible honestly.
<wallyworld> maybe the model you think you're connecting to is not
<wallyworld> you can use juju models and the one with * is the current
<wallyworld> or use the -m to specify explicitly
<Fallenour> oh its an ssh issue with ubuntu base, its not a juju issue. juju ssh always works reliably.
<wallyworld> ok
<Fallenour> btw, while its on my mind. do you guys still do the juju live events?
<Fallenour> I think those are the greatest things since elderberry jam
<wallyworld> the Juju Show?
<Fallenour> yea
<Fallenour> id love to be on one of those one day.
<wallyworld> yeah, tim (clicks) and rick are working on a new batch
<wallyworld> anyone can join in
<Fallenour> omg
<Fallenour> what?
<Fallenour> o.O
<Fallenour> how do I sign up?
<wallyworld> good question, i'm not totally sure
<Fallenour> and is there a subscription thing I can sign up for?
<wallyworld> rick_h is the person to ask
<Fallenour> 8D
<wallyworld> he'll be on irc in a few hours
<Fallenour> yaaaaasssss
<Fallenour> rick_h, is awesome
<wallyworld> i think they normally have a few people logged in and able to ask questions etc
<wallyworld> he is
<Fallenour> one thing Ive noticed about all canonical employees in general is they are all really happy
<wallyworld> we're all peachy
<Fallenour> canonical must be a great company or its the drugs in the free water, its gotta be.
<Fallenour> only a handful of companies i know that are like that, saltstack, suse, canonical, riot
<wallyworld> bit of both
<nammn_de> manadart: just to make sure before i press the merge button. I removed the block below: https://github.com/juju/juju/pull/10696/files#diff-8bc810c7809469ea95764da958639d1aR121-R126
<manadart> nammn_de: Looks fine.
<nammn_de> manadart: :)
<Fallenour> quick question, but what service(s) should be running for lxd/lxc to work? @wallyworld manadart nammn_de
<Fallenour> I just built an lxd cluster, and it was in HA and fine, then I built 3 containers, and it died.
<Fallenour> all three of them. So much for HA :P
<wallyworld> Fallenour: way past my EOD now, i'll leave to others to answer as i need to get AFK
<manadart> Fallenour: I think the daemon, the socket and possibly DNSmasq if the LXD bridge is managed.
<Fallenour> manadart, I found out the issue is with the database, likely due to being a snap. Super frustrating that snaps are supposed to be stable, and my general experience is they are anything but. Im having to rebuild all three machines
<nammn_de> stickupkid: got a min?
<stickupkid> nammn_de, sure
<nammn_de> stickupkid: Thinking about changing this function  https://github.com/juju/charm/blob/974f39ea8f706c25616d022f70838c862687d3ca/charmdir.go#L418
<nammn_de> that it does not log anymore at all
<nammn_de> so that it can be called n times without keep logging
<nammn_de> problem: I want to log things at different levels (debug, error and warn)
<nammn_de> one way to solve is to return the log level as well, but this would change the return signature from 3 to 5, which is kind of not cool
<stickupkid> nammn_de, that make sense, maybe, pass in a logger, then you can tell it to not log at all if you don't want it too
<nammn_de> ahhh good one
<stickupkid> nammn_de, or return a list of issues
<nammn_de> oh, list of issues are nice too. which "only" makes it to 4. Like both approaches. Gonna try them out
<nammn_de> Im going the first approach to let the things stay lean
<nammn_de> stickupkid how would you tell a logger not to log before passing it in?
<nammn_de> *loggo
<stickupkid> nammn_de, is there not a dumb logger
<stickupkid> nammn_de, or provide an interface for a logger and then pass an instance of the logging instance to it
<stickupkid> nammn_de, similar to how the worker are done (thumper has done work in this area)
<nammn_de> stickupkid: thanks gonna take a look
<rick_h> Fallenour:  what's up?
 * rick_h yawns
<rick_h> Fallenour:  your cluster died?
<Fallenour>  rick_h yup. It ate the orange squirrel cable of love, sailed off into the tuskegee, took a short walk on a long pier
 * rick_h processes that for a while...
<rick_h> Fallenour:  any hint on the issue?
<Fallenour> rick_h, database connection issue.
<rick_h> Fallenour:  for the lxd db? or juju to the mongodb?
<Fallenour> rick_h, likely from snap/deb collision.
<Fallenour> rick_h, lxd db, juju isnt at fault here
<rick_h> oh hmmm, how did they collide?
<Fallenour> rick_h, followed a juju tutorial on bootstrap deployment in ha, but didnt do lxd.migrate because it wasnt in the instructions :P
<rick_h> Fallenour:  :(
<Fallenour> rick_h, lesson learned, lxd does NOT in fact like to use deb/snap in combination. it is very much so a dinner menu only kinda gal.
<rick_h> no no no, agree it's a "pick one and only one"
<Fallenour> rick_h, took me 10+ minutes to run systemctl snap.lxd.daemon stop
<Fallenour> just to kill the service
<rick_h> :/ you have much more patience than me
<Fallenour> I think what it was doing is creating a race condition. I think it was passing the for i loop as a command to each node, which cuased that node to create a for loop for the next node
<Fallenour> so it was infinitely trying to create 3 containers in loop
<rick_h> how helpful!
<Fallenour> rick_h, I know! It just really wanted to make sure I had enough salt masters :P
<Fallenour> I guess it figured 30,000+ salt masters should suffice
<Fallenour> I did in fact, not concur, so we had a splitting of ways, ergo a complete rebuild inside maas
<rick_h> well for most people that'd be good, but nooooo you have to be all picky and stuff :P
<rick_h> ouch, sorry for the trouble
<Fallenour> rick_h, yeaaa :( I like my systems like I like my tacos, a little on the light side
<rick_h> lol
<Fallenour> rick_h, BUT!
 * rick_h ducks
<Fallenour> rick_h, its ok. because they have a lot more ram now, so this is good.
<Fallenour> each has at least 96GB of ram apiece, so I can do the things with them now :)
<rick_h> now you're cool with 30k masters?
<rick_h> Fallenour:  nice! that's cool
<Fallenour> rick_h, no, sadly but I am ok with 3 ^__^
<Fallenour> rick_h, the good news is that once this is finally finished, Ill be bootstrapping the juju controllers onto them, and have a cluster in lxd. Ive come to the ultimate realization that I just dont need 3 physical dedicated controllers, they will never use the resources in the current state.
<rick_h> ok, glad the world sucks a bit but has a bright side. I'm going to go make some morning coffee
<Fallenour> rick_h, morning....coffee?
<Fallenour> rick_h, Dont you mean all day coffee? o.O
<rick_h> yes, the morning ritual that means it's morning time and the day is starting off ok
<rick_h> hah
<rick_h> no, cut off after lunch. Have to sleep you know
<Fallenour> rick_h, mmmm, *starts looking around for the rick_h service restart command*
<Fallenour> can anyone assist? I keep finding errors in my journalctl -u rick_h log files
<Fallenour> rick_h, something something morning coffee?
<Fallenour> XD
<Fallenour> something something sleeping. cant have sleeping services o.o
<Fallenour> my morning starts at 4 am EST, runs till 9-10PM EST
<danboid> How do I use curtin the set the default DNS servers for all of my MAAS deployments?
<rick_h> danboid:  can't you set the dns servers in MAAS itself?
<rick_h> danboid:  under "settings, network services, DNS"
<danboid> rick_h, That is already configured but its never worked. For some reason I have to edit the netplan config post deployment and add in the DNS server for it to work
<danboid> Maybe bind isn't correctly configured on my maas controller?
<danboid> rick_h, Shouldn't MAAS error tho if bind was incorrectly configured?
<rick_h> danboid:  I would think, I've not poked at it tbh other than setting it
<rick_h> I guess I've never really checked it was set like I did in config
<danboid> I'm going to try with DNSSEC explicitly disabled, see if that helps. It was set to auto
<danboid> Turning off DNSSEC under MAAS hasn't fixed it
<rick_h> danboid:  ok, yea I don't know. It might be good to bug the maas folks. If you want to use curtin to tweak things there is the cloud-init configy stuff you can do in Juju but sounds like a work-around
<danboid> The docs give the impression cloud-init is more for one-off configs and curtin is what I want if I want to change the config of all deployments
<rick_h> well there's a juju cloud-init thing that runs on any machine juju goes on
<rick_h> danboid:  https://discourse.jujucharms.com/t/using-model-config-key-cloudinit-userdata/512
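A sketch of the approach from that Discourse post: set the `cloudinit-userdata` model key so every machine Juju provisions in the model gets extra cloud-init config. The nameserver value and file name below are made-up examples:

```shell
# Hypothetical example: append a DNS server on every provisioned machine.
# cloudinit-userdata is the real model key; the content is illustrative.
cat > cloudinit-userdata.yaml <<'EOF'
bootcmd:
  - echo "nameserver 10.0.0.2" >> /etc/resolv.conf
EOF
juju model-config cloudinit-userdata="$(cat cloudinit-userdata.yaml)"
```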
<bdx> danboid, rick_h: add the dns server you want to use to the subnet details page
<bdx> https://maas.io/docs/networking
<danboid> Looks like bind is configured to only listen to localhost, despite it saying otherwise in /etc/bind/maas/named.conf.options.inside.maas
<danboid> Yay! Fixed it!
<rick_h> oooh, ty bdx
<bdx> np
<danboid> I just had to comment out the `listen on` line in /etc/bind/named.conf.options
<danboid> Then restart bind
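The fix danboid describes amounts to the following sketch (the exact service unit name varies between releases, e.g. `bind9` vs `named`):

```shell
# In /etc/bind/named.conf.options, comment out the restrictive listener,
# e.g.:
#     // listen-on { 127.0.0.1; };
# so bind answers on all interfaces, then restart the service:
sudo systemctl restart bind9
```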
<nammn_de> stickupkid: still around? If yes might wanna take another review round?
<nammn_de> https://github.com/juju/charm/pull/294
<stickupkid> nammn_de, looking now
<stickupkid> nammn_de, almost, just the naming I think now
<achilleasa> hml: I think this bit of code conflates space IDs (defaults) with space names (givenBindings that comes from the client). I will try to fix it as it prevents me from applying the Merge -> merge change
<hml> achilleasa:  which bit of code?
<nammn_de> stickupkid: thanks, just to make sure. You suggest to extract the extra file and just put it into same `charmdir` file as well?
<achilleasa> hml: https://github.com/juju/juju/blob/6e8f551b5305ec2f30d1910a0788db52cc30466d/apiserver/facades/client/application/deploy.go#L134-L162
<stickupkid> nammn_de, yeah, along with renaming it from MockLogger
<achilleasa> hml: the comment on L135-136 is misleading. DefaultEndpointBindingsForCharm returns space IDs
<achilleasa> hml: we can either get everything as space names and pass it to NewBindings or translate givenBindings to spaceID before calling this func. What do you think?
<hml> achilleasa:  pondering
<achilleasa> hml: Is the model config param for the default space storing a name or a space ID?
<achilleasa> nammn_de: ^ ?
<hml> achilleasa:  a name
<nammn_de> stickupkid: done
<hml> the space id will always be 0
<achilleasa> hml: I propose we change it to use space names since this is the facade
<achilleasa> or more accurately, make DefaultEndpointBindingsForCharm return a Bindings object
<nammn_de> stickupkid: while we are at it, we were planning to land https://bugs.launchpad.net/juju/+bug/1846240 time for a chat about it? I can see that you opened the bug as well as the inital PR
<hml> achilleasa:  yes to the DefaultEndpointBindingsForCharm
<mup> Bug #1846240: Add support for Windows 2019 series  <juju:Triaged> <https://launchpad.net/bugs/1846240>
<stickupkid> nammn_de, done
<nammn_de> stickupkid: thanks man! still got time today to chat about the windows bot? Or gonna go soon?
<stickupkid> nammn_de, give us 15
<nammn_de> stickupkid: us? if you mean in 15 min, im flexible :D just ping me if you can
<achilleasa> hml: so it turns out fixing that needs much more effort than I thought (lots of validation code that assumes space names). I will just add the fix for the isModified for now and push the PR
<achilleasa> hml: I am kinda surprised that deploy --bind worked while running my QA steps...
<hml> achilleasa:  rgr
<achilleasa> hml: can you take a look? https://github.com/juju/juju/pull/10734 (I am releasing the guimaas nodes ATM)
<hml> achilleasa:  yes, already started with the cli pieces... will continue ;-)
<achilleasa> hml: if you want we can pair tomorrow on the cleanup for the facade bits...
<hml> achilleasa: sounds like a plan
<nammn_de> stickupkid: its me again for the related pr in our main repo https://github.com/juju/juju/pull/10744/files
<stickupkid> nammn_de, https://media.giphy.com/media/RoajqIorBfSE/giphy.gif
<nammn_de> stickupkid: no ones safe :D, btw. ignore the linting stuff, I will fix them by then
<stickupkid> nammn_de, looks good to me
<stickupkid> nammn_de, it doesn't like your lint issues though :D
<stickupkid> nammn_de, got a dep issue going on
<nammn_de> stickupkid: yeah i smell that I may have forgotten to add my gopkg lock :D
<Fallenour> ughhh
<Fallenour> Ive been fighting with this all day
<Fallenour> does anyone know why LXD wouldnt see a MAAS API thats working?
<Fallenour> Im so tired of issues like these @____@
<wallyworld> Fallenour: you mean that from inside the LXD container you cannot reach the MAAS API endpoint?
<hpidcock> wallyworld: I've pushed that rebase
<wallyworld> ta
<hpidcock> time for more coffee
#juju 2019-10-22
<anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10774 - update-cloud changes
<thumper> https://github.com/juju/juju/pull/10763 updates for model log files on the controller
 * wallyworld otp interviewing, soon
<thumper> ack
<babbageclunk> thumper: approved
<thumper> babbageclunk: thanks
<wallyworld> hpidcock: got a sec?
<hpidcock> yep
<wallyworld> standup
<babbageclunk> thumper: super-simple review? https://github.com/juju/os/pull/14
<thumper> sure
<thumper> babbageclunk: all good
<babbageclunk> to be followed by a dep update against 2.6 then a merge to 2.7
<babbageclunk> thumper: looks like there was a backwards-incompatible change made to juju/os before mine (used in 2.7 but not 2.6). Should I just make the change to 2.7, or update 2.6 so that it can work without the removed function?
<babbageclunk> just looking to see how the juju/juju code was changed
<wallyworld> hpidcock: i left a few more thoughts. see what you think, especially about the worker setup
<wallyworld> i need to head out to 1:1 in about 5
<babbageclunk> wallyworld: have you got 2 mins then?
<babbageclunk> :))))
<babbageclunk> bah looks like he already went
<thumper> wallyworld: looks like an intermittent failure in apiserver/facades/client/action
<babbageclunk> thumper: can I grab you for a moment - confused about the series stuff
<thumper> babbageclunk: sure
<babbageclunk> in 1:1
<wallyworld> i'll look after 1:1 when i get back
<thumper> wallyworld: ack
<thumper> wallyworld: problem appears to be iterating over an unsorted set, using Values() rather than SortedValues() in action.go method Tasks
<thumper> nope, just saw the swap in the test... that's gross
<thumper> wallyworld: I'm on it
<thumper> wallyworld: I have the fix
<thumper> if my branch fails to merge again, I'll add it to mine and retry
<thumper> if it merges I'll propose separately
<thumper> we have a 50/50 chance either way
<thumper> map ordering FTW
<thumper> it failed again, so pushing fix to my branch
<thumper> https://github.com/juju/juju/pull/10763/commits/4d6fee29f7e7614f5d546220f3c83cd208abeed1 FWIW
<babbageclunk> I hate when a package has a directory that includes a broken symlink so they can test that it doesn't break them, and then it breaks any other tool that does stuff with that directory.
<thumper> babbageclunk: like?
<babbageclunk> make rebuild-dependencies is failing for me because of gopkg.in/juju/charmrepo.v2/internal/test-charm-repo/series/format2/hooks/symlink
<thumper> stink
<babbageclunk> it really is
<babbageclunk> just blowing away the vendor dir fixes it.
<wallyworld> hpidcock: back from 1:1 now if you need me to look at anything
<hpidcock> wallyworld: just getting the worker together and fixing tests I just broke
<hpidcock> probably 30min away
<wallyworld> no worries, i reckon we can squeeze it in
<hpidcock> hah let's hope so
<hml> manadart:  do either of 10772 and 10773 (the small refactoring ones) require additional qa to land?
<manadart> hml: Probably prudent to flex them some. We can contrive some steps and discuss and stand-up.
<manadart> *at stand-up.
<hml> manadart: agreed, just wasn't sure if it'd been done already
<nammn_de> rick_h: mind turning on github actions for juju/os? I can open PR with the workflow later
<rick_h> nammn_de:  it's on
<rick_h> "enable local & third party actions for this repository"
<rick_h> nammn_de:  they have to land first though
<nammn_de> rick_h: ta, go it
<nammn_de> *got it
<hml> simple review if someone is willing https://github.com/juju/juju/pull/10781
<hml> achilleasa:  ping
<rick_h> woooo! juju 2.7-beta available in the beta track now
<rick_h> errr 2.7-beta1
<achilleasa> hml: here
<rick_h> ty stickupkid!!
<rick_h> great stuff team!
 * rick_h does a happy dance
<rick_h> stickupkid:  hah, bug emails coming now
<hml> achilleasa:  i'm not understanding WHY we validate the old default endpoint space when changing to a new one.
<achilleasa> hml: we don't, we validate the new default if different than the old and not set to network.DefaultSpaceID
<achilleasa> oh... I see what you mean. We can just always validate the new one right?
<achilleasa> (so passing the old one through would not be needed)
<hml> achilleasa perhaps? i was misreading and not connecting that the oldDefaultSpaceIDForApp was only used to see if the value changes.
<hml> achilleasa:  either way is okay, now that i get what's going on.  :-)
<achilleasa> hml: you are right. Let me strip that out
<achilleasa> hml: pushed a commit
<hml> achilleasa: ty, looking
<hml> achilleasa: nice, i think simpler, easier to understand, and revalidating doesn't hurt.
<hml> achilleasa:  what are your thoughts around the Force values from the application and bind changes?
<achilleasa> hml: did you see my comment in the other PR?
<hml> achilleasa: just need to do qa on 10780
<hml> achilleasa:  nope... will read now
<achilleasa> hml: let me know if my reply makes sense :D
<hml> achilleasa: yes, ty.  the corollary makes sense to me also.  put that in the pr.
<hml> achilleasa: a question in 10780 before I approve.  it's logged in the pr
<achilleasa> hml: I don't understand... This was caused by "spaceNotFeasibleError" which should work now
<achilleasa> hml: it should have been fixed by this: https://github.com/juju/juju/pull/10780/commits/7bfbaa36128734ad63f0ed2c4d0bb7309f293e2c#diff-3b940d8ae0b2e8e93eabd0ea62b619feR314-R340
<hml> achilleasa:  hrm... let me try again.
<achilleasa> hml: is that on guimaas?
<hml> achilleasa:  yes
<achilleasa> hml: I will try next with a few log bits in there to see what's going on
<hml> achilleasa:  let me check something first.
<hml> achilleasa:  shit... sorry.  i think i know what's going on.  damn release.  let me reproduce first
<achilleasa> hml: I was about to suggest it's not running the right code :D
<hml> checked my history and i didn't use --build-agent during bootstrap
<hml> probably picked up the wrong agents
<nammn_de> stickupkid: os github actions and finally running on osx and windows https://github.com/juju/os/pull/15 :D
<hml> achilleasa:  i had the wrong code.  qa from scratch now
<stickupkid> nammn_de, CR'd
<nammn_de> stickupkid: thanks man. The reason I changed the signature of the function is so that people working on it in future don't get confused like I did. Actually I was hellishly confused about what the reason for the test was and why it failed. As I am not on windows this was not a fun debug session :D. The function only gets used for the test anyway.
<stickupkid> nammn_de, it gets used in newDetectHardwareScript in winrmprovisioner.go
<stickupkid> nammn_de, and we would throw that info away in that function
<nammn_de> stickupkid: ahh, we use it on other repos
<stickupkid> nammn_de, yarp
<nammn_de> stickupkid: then let me change it on test level.
<stickupkid> nammn_de, hence why i think we shouldn't change the signature
<stickupkid> wicked
<stickupkid> :D
<stickupkid> or WICKED
<stickupkid> GREAT WORK
<nammn_de> stickupkid: dammit, this function itself is SIMPLE but CONFUSING at once :D
<stickupkid> nammn_de, yeah, global state SUCKS
<nammn_de> as for someone without context
<stickupkid> nammn_de, no no, that package is horrid
<nammn_de> stickupkid: ha, that's true
<stickupkid> nammn_de, it should take an inspector, which returns a host series as a dependency, and then use the visitor pattern to walk over the series to ensure it's valid
<stickupkid> nammn_de, that's what i'd do, but would cause pain to move to it
<hml> achilleasa:  much better with the correct code!
<nammn_de> stickupkid: yeah seems like we have a lot of places where we could add work. I guess it takes quite some time that no one is really willing to invest for some spaces :D
<stickupkid> nammn_de, looks good to me
<nammn_de> stickupkid: thanks
<nammn_de> stickupkid: catalina support should already be merged, right? no more work to be done
<nammn_de> ?
<timClicks> this is a really interesting post https://discourse.jujucharms.com/t/any-advice-on-using-juju-for-regular-websites/2223/6
<rick_h> timClicks:  yea, crazy post
<timClicks> single engineer has supported over 2,000 deployments (most test/staging, I expect) each with their own version of php, magento/wordpress, mail, SSO, ...
<rick_h> wheeeee
<rick_h> "part time"
<rick_h> timClicks:  when you're free I'd love to catch up on how you'd like to go about the 2.7-beta1 release notes bit
<rick_h> timClicks:  we didn't send anything out yet because of the debs we'll process tomorrow but wanted to sync on highest priority material. I tried to add some bits on the "to document" discourse post but I'm not sure if that's meant for beta1 release notes and such
<timClicks> ah right
<timClicks> I removed the non-snap instructions from the notes and pushed it love
<timClicks> *live
<timClicks> we can edit them back in once they're available
<timClicks> rick_h: did you want to jump into a call?
<rick_h> timClicks:  sure thing, omw to the cross team
<admcleod> how is the lxd snap installer working with network restricted installs?
<admcleod> i.e does the apt proxy get passed to snapd?
<rick_h> admcleod:  so there's a snap proxy config I would expect to be set
<admcleod> rick_h: in model-config?
<rick_h> admcleod:  note this is 2.7 only as lxd is gotten from a deb before 2.7
<admcleod> rick_h: riiiight ok
<rick_h> admcleod:  and 2.7 has the updated code to handle lxd from a snap source and working in a better way
<admcleod> rick_h: ok thanks. wont log a bug then
<admcleod> rick_h: and i seem to have worked around it with 2.6.9 so \o/
<rick_h> admcleod:  woot
<anastasiamac> admcleod: would b awesome if u could document ur workaround in discourse \o/ well done!!
<admcleod> anastasiamac: i can do that if you tell me which one specifically
<anastasiamac> admcleod: I've created this for u using the wording in this conv... feel free to edit/comment
<anastasiamac> admcleod: tyvm
<anastasiamac> admcleod: https://discourse.jujucharms.com/t/pre-2-7-lxd-snap-installs-with-network-restrictions/2269
<admcleod> anastasiamac: done
<anastasiamac> admcleod: \o/
<wallyworld> hpidcock: i have meetings for the next while soon, but i left a few comments to look at
<hpidcock> thanks
#juju 2019-10-23
<timClicks> is it possible for a controller to claim control of a pre-existing model? say the instance that is hosting the original controller is accidentally deleted
<rick_h> timClicks:  no
<rick_h> timClicks:  the only way is the very manual db restore/disaster recovery steps that are noted somewhere that jam put together
<timClicks> understood, that's what I thought
<anastasiamac> oh wallyworld PTAL https://github.com/juju/juju/pull/10784 - update-creds changes
<wallyworld> anastasiamac: lgtm
<anastasiamac> \o/
<wallyworld> hpidcock: i left a couple more questions
<hpidcock> wallyworld: I think that's it. Thanks-you for all the reviewing
<wallyworld> no worries, will look soon
<wallyworld> babbageclunk: for whenever https://github.com/juju/description/pull/63
<wallyworld> hpidcock: let's land it. looks good, we can deal with any fallout when we test over the next few days etc
<babbageclunk> wallyworld: approved
<wallyworld> ty
<wallyworld> will fix the import func as suggested
<wallyworld> anastasiamac: that bug does look legit - destroy-model should cleanup everything before returning
<anastasiamac> wallyworld: it's unclear whether the 2 given commands are run in parallel.. it did not look like there was a wait for the command to come back
<anastasiamac> wallyworld: anyway i need to talk to u when u can
<wallyworld> he says they run one after the other
<wallyworld> and destroy-model blocks
<wallyworld> helping in #juju atm
<anastasiamac> no he says that remove-cloud blocks
<anastasiamac> wallyworld: ^^
<wallyworld> remove-model blocks also
<wallyworld> it counts down the stuff that is being removed
<anastasiamac> wallyworld: let's talk - there is no 'remove-model' and he is not saying that destroy-model blocks but that the following 'remove-cloud' does not succeed.. m not sure u've read the bug
<wallyworld> i meant destroy-model, i have read it, was a typo :-)
<wallyworld> anastasiamac: can talk now until i get pinged again
<anastasiamac> stdup?
<wallyworld> ok
<anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10786 -add-cloud exclusivity and go1.13 'fix' as discussed
<wallyworld> ok
<wallyworld> lgtm, ty
<anastasiamac> \o/ u r a review ninja!
<nammn_de> anyone for a really, really small one? https://github.com/juju/juju/pull/10782, additionally do we have other places to add them?
<manadart> nammn_de: Does it need test changes? See https://discourse.jujucharms.com/t/adding-new-regions-to-clouds-yaml/1741
<nammn_de> manadart: ohhh, I keep forgetting to search in discourse, I will search for that :)
<nammn_de> manadart: just looked at the test, no it does not seem to be the case. Seems like we removed those tests from there. The linked example PR adds tests to a place which was removed in a later commit.  Only need to update jenkins AFAICT
<manadart> nammn_de: OK.
<nammn_de> manadart: juju bot only failed for windows. Some flaky tests (?)
<nammn_de> now that was confusing me for some time. https://github.com/juju/juju/pull/10787 2 link updates in the readme
<nammn_de> stickupkid: around?
<stickupkid> nammn_de, always
<nammn_de> stickupkid: now thats the spirit :D!
<nammn_de> stickupkid: regarding adding support for windows and catalina. Should be done with the pr we added, right? Catalina was added before and windows by us both
<nammn_de> just to make sure I did not overlook something
<stickupkid> nammn_de, correct
<nammn_de> stickupkid: cool
<stickupkid> nammn_de, nope
<achilleasa> manadart: I have been looking into the peergrouper code for the address equality code that you mentioned in 10750 but can't seem to find it. Can you help me track it down?
<manadart> achilleasa: Looks like the old comparison was removed from the peergrouper and it just does reflect.DeepEqual. This is on SpaceAddresses anyway, so it's a different type to here.
<achilleasa> manadart: I agree that the comparison code should probably live in core/network
<achilleasa> jam: I have pushed two new commits to 10750 which address some of your comments. Can you take a look?
<jam> achilleasa: looking
<jam> achilleasa: did you find anything for comparing lists of addresses?
<achilleasa> jam: not yet... the code in my PR is the same as the code used by the current implementation
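An order-insensitive comparison of two address lists, as discussed above, could be sketched like this in Go. The helper name and the plain-string representation are illustrative only; juju's real SpaceAddress type carries more fields than a bare string.

```go
package main

import (
	"fmt"
	"sort"
)

// addressesEqual reports whether two address lists contain the same
// entries, ignoring order. Hypothetical helper, not juju's actual code.
func addressesEqual(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	// Sort copies so the callers' slices are left untouched.
	ac := append([]string(nil), a...)
	bc := append([]string(nil), b...)
	sort.Strings(ac)
	sort.Strings(bc)
	for i := range ac {
		if ac[i] != bc[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(addressesEqual(
		[]string{"10.0.0.1", "10.0.0.2"},
		[]string{"10.0.0.2", "10.0.0.1"}))
}
```

Sorting copies avoids the reflect.DeepEqual pitfall mentioned above, where the same addresses in a different order compare unequal.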
<jam> achilleasa: reviewed
<hml> achilleasa: review pls https://github.com/juju/juju/pull/10783
<achilleasa> hml: looking
<achilleasa> hml: got a few min for a quick ho?
<hml> achilleasa:  sure, meet you in daily
<nammn_de> am i doing something wrong? How do I set my client to run on the same version as the one I am planning to deploy ? I am using build-agent and have compiled before. Still got the err msg. "agent binary 2.7-rc1 not compatible with bootstrap client 2.7-beta1"
<achilleasa> hml: approved
<hml> achilleasa: ty!
<nammn_de> nvm stupid me... I thought make would include go-install..
<hml> nammn_de: what QA steps are needed for this pr?
<cory_fu> jam: Hey, can I get confirmation from you that relation IDs are in fact unique integers and that the `{relation_name}:{id}` form is just for informational purposes on the CLI?  Will the API and CLI always accept a bare int?
<nammn_de> hml: oh i forgot to add them. My bad, let me quick add them. but it's just deploying them into the new regions.
<nammn_de> hml: updated description
<achilleasa> hml: can you do a quick CR on https://github.com/juju/juju/pull/10788?
<hml> achilleasa:  sure, once i'm done with the current
<achilleasa> hml: no rush
<Fallenour> goooood morning
<stickupkid> hml, CR'd 10781
<hml> stickupkid: ty!
<hml> nammn_de: i'm unable to bootstrap to one of the new regions
<hml> nammn_de: it should be showing up with juju regions <cloud>, but they are not.
<Fallenour> hey guys, I'm having issues with networking with LXD. I have NAT configured, and I have a network set up, with an interface configured as well. I can ping containers from the host machine, but I can't reach the same hosts on the same network from another node. Can anyone help me out? This doesn't make any sense.
<Fallenour> preferably I'd like to just let MaaS be the DHCP server, and let maas issue the lxd containers their IPs
<nammn_de> hml: which region does not work? Both work for me. They should show if you don't have a local clouds.yaml file. Else it will not fall back to the yaml file provided by juju
<hml> nammn_de:  neither
<nammn_de> hml: here updated description and expected output https://github.com/juju/juju/pull/10782. As long as you have a clouds.yaml under /.local/share/juju , juju will not respect the fallback option and will use the one defined there. Could that be the case?
<jam> cory_fu: the CLI immediately strips out just the integer portion for the rest of the internal operations. I don't see that changing in the near future, at least.
<cory_fu> jam: Great, thanks.  That's what I thought.
<jam> "unique" in that both sides of a relation see the same integer, but every relation is a different integer
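The stripping jam describes (keeping only the integer portion of the `{relation_name}:{id}` form) can be sketched in Go. The helper name is hypothetical, not juju's actual CLI code.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseRelationID extracts the integer ID from either a bare "7" or the
// "name:7" form shown by the CLI. Illustrative only.
func parseRelationID(s string) (int, error) {
	// Drop everything up to and including the last colon, if present.
	if i := strings.LastIndex(s, ":"); i >= 0 {
		s = s[i+1:]
	}
	return strconv.Atoi(s)
}

func main() {
	id, err := parseRelationID("db:42")
	fmt.Println(id, err)
}
```

Both forms reduce to the same integer, matching the point that the `name:id` form is informational and a bare int is always accepted.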
<hml> nammn_de:  HO?
<nammn_de> hml: sure comin in 1 min
<hml> nammn_de: approved
<nammn_de> hml: thanks!
<nammn_de> hml rick_h: related bug description https://bugs.launchpad.net/juju/+bug/1849509 I tried to make it a bit more elaborate
<mup> Bug #1849509: Juju cloud update and fallback precedence <juju:New> <https://launchpad.net/bugs/1849509>
<achilleasa> manadart: the unit manifold allows me to fetch an environs.Environ instance. Is it somehow possible to get a NetworkingEnviron instead?
<achilleasa> I want to access NetworkInterfaces like we do in networkingcommon and handle NotSupported for providers that don't expose it
<achilleasa> manadart: nvm, I think I 've got it
<anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10789 - last in series ;D
<wallyworld> ok, in a sec
<anastasiamac> wallyworld: of course, none of this is 'look immediately' :D
#juju 2019-10-24
<anastasiamac> timClicks: PTAL https://github.com/juju/juju/pull/10790
<hpidcock> wallyworld: https://github.com/juju/juju/pull/10785 I added a few comments
<anastasiamac> timClicks: thnx!
<timClicks> anastasiamac: np :)
<wallyworld> hpidcock: will look after call
<wallyworld> hpidcock: with the Message_ thing, we use that elsewhere when we need to have stuff available for serialisation but it's really internal (like in the juju/description) package. maybe worth avoiding in juju/juju though
<hpidcock> sounds good
<hpidcock> I don't mind either way
<anastasiamac> wallyworld: r u aware of these caas operator failures? I've seen them a couple of times in the last couple of days on develop - https://jenkins.juju.canonical.com/job/github-make-check-juju/1928/testReport/junit/github/com_juju_juju_apiserver_facades_agent_caasoperator/TestAll/
<wallyworld> i blame hpidcock :-)
<anastasiamac> :) ok... lemme re-phrase: hpidcock r u aware of these failures? :D
<wallyworld> he is now :-)
<anastasiamac> x2
<hpidcock> yep I'll have a look
<anastasiamac> \o/
<hpidcock> https://github.com/juju/juju/pull/10792
<thumper> patch for windows agent failures: https://github.com/juju/juju/pull/10793
<hpidcock> thumper: LGTM
<babbageclunk> jam: here's the test charm
<babbageclunk> https://github.com/juju/juju/pull/10794
<babbageclunk> almost forgot to paste the link
<jam> babbageclunk: thx
<jam> babbageclunk: reviewed
<wallyworld> manadart: so the mongo dial info is being served with 2 addresses: "localhost" and "X.X.X.X". "localhost" is correct, and so it seems we (or mgo) fork off separate dial attempts and don't shut down the one that doesn't work
<wallyworld> need to look deeper into it but that's what i have so far
<manadart> wallyworld: Ack.
<jam> wallyworld: mongo driver doesn't let us change the original list of possible addresses, and keeps pinging on all addresses we passed in
<wallyworld> jam: well that sucks, we will get log spam forever then if an address changes
<jam> wallyworld: yep. known bug. I came across it when doing HA stuff (juju remove-machine controller) will cause endless debug spam.
<jam> wallyworld: I thought we spammed DEBUG not INFO+
<jam> wallyworld: I don't know the bug #
<wallyworld> jam: machine-0: 15:25:01 WARNING juju.mongo mongodb connection failed, will retry: dial tcp 10.152.183.140:37017: i/o timeout
<wallyworld> it's warning :-(
<wallyworld> and it happens in k8s every time since we serve up all api addresses
<wallyworld> and we only need localhost for mongo
<wallyworld> we have forked mgo right
<wallyworld> so we should fix
<jam> wallyworld: bug #1761237
<mup> Bug #1761237: remove-machine seems to leave a mongo address that is gone <enable-ha> <mongodb> <juju:Triaged> <https://launchpad.net/bugs/1761237>
<jam> wallyworld: so if you want to take the time to figure out how to update the list of addresses the driver has, we can do that, but it isn't trivial
<jam> wallyworld: the warning is in *our* code
<jam> as it would be a problem if you actually couldn't connect to a mongo address that should be valid
<jam> wallyworld: the "keep trying addresses that might be valid" is mgo code
<wallyworld> jam: even debug is wrong surely. we need to be able to say "this address is no longer applicable"
<jam> comment #3 is about not being able to update the "userSeeds" list
<jam> wallyworld: if mongo is local, how does the IP address change without the agent being restarted?
<wallyworld> in k8s the controller agent and mgo run together in the same pod. so localhost. but we can serve up the controller service ip address which will not work
<wallyworld> we serve up all api addresses which we assume are relevant for connecting to mongo
<wallyworld> but that doesn't hold with k8s
<wallyworld> since the controller and mgo containers are co located
<wallyworld> and we haven't mapped the service port therefore
<wallyworld> i guess for now we can make it debug
<jam> wallyworld: so I was hoping to use a separate list that tracked "known valid addresses" and would log debug if it thought the address shouldn't be used
<jam> I think updating mgo to allow you to poke at userSeeds would be ok, just non-trivial
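jam's idea of a separate "known valid addresses" list could look roughly like this Go sketch. All names here are hypothetical; the real change would live around juju's mongo dialling code and the forked mgo driver.

```go
package main

import "fmt"

// dialFilter keeps the set of addresses we currently believe are valid.
// A dial failure against a known-good address stays loud; a failure
// against an address that was removed from the set is demoted to debug
// noise. Sketch only, not juju's actual implementation.
type dialFilter struct {
	valid map[string]bool
}

// logLevelFor returns the severity at which a dial failure on addr
// should be logged.
func (f *dialFilter) logLevelFor(addr string) string {
	if f.valid[addr] {
		return "WARNING"
	}
	return "DEBUG"
}

func main() {
	// In the k8s case above, only localhost is actually reachable
	// for mongo; the service IP should no longer be warned about.
	f := &dialFilter{valid: map[string]bool{"localhost:37017": true}}
	fmt.Println(f.logLevelFor("localhost:37017"))
	fmt.Println(f.logLevelFor("10.152.183.140:37017"))
}
```

This keeps the useful property jam mentions (a genuinely unreachable valid address is still a warning) while silencing the endless spam for addresses mgo keeps retrying.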
<kelvinliu> wallyworld: got this PR to introduce RBAC for caas credentials , could you take a look? thanks! https://github.com/juju/juju/pull/10776
<wallyworld> jam: i just added debug to get past the current noise issue but +1 on adding additional changes
<nammn_de> jam: coming back to our discussion yesterday, here would be the pr for the charm logging https://github.com/juju/charm/pull/296/files
<nammn_de> stickupkid for interested ^
<stickupkid> nammn_de, nice, let me look
<achilleasa> jam: have you seen my response to your comment on the juju bind PR? Any objection to landing it as-is?
<nammn_de> stickupkid: still need to fix the unit tests, be logic wise should be fine
<jam> achilleasa: no issues, I see your point
<jam> achilleasa: note I didn't actively review, just raised the question
<achilleasa> jam: nw, Heather has already reviewed it
<manadart> achilleasa: Were you going to review https://github.com/juju/juju/pull/10779 ?
<nammn_de> stickupkid: just saw your comment? We try to get rid of gh actions again? Any plans what replacement we will use?
<stickupkid> nammn_de, they don't really fulfil a need once we get jenkins running the same tests
<stickupkid> nammn_de, they're burning cpu cycles for fun but not profit
<nammn_de> stickupkid: hmm true. I thought that they were kind of used as a gatekeeper to jenkins for the small tests. If they pass we would run the whole suite on jenkins
<nammn_de> stickupkid: now unit tests have been fixed/added for the pr mentioned before
<nammn_de> wdyt?
<nammn_de> but yes, right now they are just happily destroying our good nature :D
<stickupkid> nammn_de, the problem with github is they don't merge in the same way as jenkins, so we're actually comparing apples and oranges
<stickupkid> nammn_de, this doesn't affect juju/os though, that package needs those tests as setting up a windows bot elsewhere is just plain painful
<nammn_de> stickupkid: ha, now that makes me happy. Making it run there was a pain :D
<nammn_de> stickupkid: thanks simon for the review
<achilleasa> manadart: looking now
<wallyworld> manadart: want a call?
<wallyworld> you need to use --agent-version 2.7-rc1
<wallyworld> plus make push-operator-image
<wallyworld> with your docker hub user name set
<wallyworld> cause it queries available binaries using the configured repo
<wallyworld> similar to looking up simple streams
<wallyworld> you probs don't need --agent-version if you push-operator-image
<wallyworld> export DOCKER_USERNAME=fred
<wallyworld> make push-operator-image
<wallyworld> juju bootstrap microk8s --config caas-image-repo=fred
<manadart> wallyworld: In team-standup.
<wallyworld> manadart: to confirm, you running "make microk8s-operator-update" right? exporting the docker username and setting caas-image-repo should be enough to tell microk8s to use your custom jujud. but i do suspect you will need push-operator-image also so that the upgrade command can query what's available (not 100% on the last bit but i think i'm right). you only need to push once to seed the list of available tagged images
<wallyworld> you can check upgrade with --dry-run
<nammn_de> stickupkid: https://github.com/juju/juju/pull/10795
<nammn_de> stickupkid: regarding changing the charm version gopkg
<stickupkid> nammn_de, k, will check in a bit
<stickupkid> nammn_de, done
<skay> I have a charm I haven't used in a while, and I'm getting errors when I try to attach a resource. https://paste.ubuntu.com/p/sTbg5Rh7fW/
<skay> I get a "connection reset by peer" message when I run the attach command
<skay> And on the machine-0.log I get "no kvm containers possible". the pastebin has the full details
<jam> btw nammn_de, I am also seeing quite a few "charm is not versioned" warnings in 'juju debug-log' after having done a 'juju deploy' from our acceptancetests repo. Shouldn't it have grabbed the git version from juju/juju ? The charm *is* in a subdir of a versioned dir
<N3tw0rK> When deploying a new openstack cluster for dev, I try to deploy a container (LXD) to a node and when it applies the network bridge all my interfaces stop passing traffic (still up). Everything works fine on the host up until the point the container trys to start.
<jam> manadart: I had a controller that seemed to be bootstrapped to 2.7-beta1, and just tried to upgrade to my devel branch which claims 2.7-rc1. However, I'm seeing it stuck in upgrade
<jam> with no obvious errors in the log other than "lease operation timed out"
<jam> not HA
<jam> its plausible the 2.6-beta1 wasn't the official release, and I know my 2.7rc1 isn't 'devel', but I'm surprised to see it think it is stuck in "upgrading since...." in 'juju status'
<nammn_de> jam let me follow up on that one. With the change added it should not take the version from the place where you ran the juju command, but from the dir where the charm itself is located
<nammn_de> because there was a bug, that deploying from the current working dir, which is under vcs, but has nothing to do with the charm
<nammn_de> can lead to a charm version which has nothing to do with the charm. The code now takes the version string from the charm path, instead of the cwd
<nammn_de> jam: https://github.com/juju/charm/blob/v6/charmdir.go#L490-L491
<jam> nammn_de: if you're under a versioned dir, you *are* versioned. eg, acceptancetests/repository/charms/dummy-sink is very much at a known version if you want to go back the original source
<nammn_de> jam: ah sorry did not know that the charms are under that repository. I am not sure how far git is willing to follow though. let me try that
<nammn_de> jam: You are right, this should work. Let me quick test whether it works or not. I did think it works
<achilleasa> hml: got a min?
<hml> achilleasa: sure, whatâs up
<achilleasa> quick ho?
<hml> achilleasa: omw
<manadart> jam, I will see if I can replicate.
<nammn_de> jam: ah i see the, yep that warrants changing how this is currently done in the code
<nammn_de> *see the error
<jam> manadart: don't stress too much, it is plausible I was running a custom version of beta1, but it is a bit of a "are upgrades currently broken"? we should intend to be able to upgrade from beta1 to rc1
<nammn_de> dammnit this thing had so many loopholes
<nammn_de> *has
<nammn_de> jam: right now the code only checks whether a .vcs folder exists and if yes, executes the corresponding vcs code. My change would do something along the lines of: just try each vcs. If one returns with $success, take that one; if all fail, return that warning. Wdyt?
<jam> nammn_de: Given things like "we do it frequently" I'd be a little concerned. Especially if we are seeing it on controllers, where the charm they have is fixed, it won't become versioned if it wasn't previously
<nammn_de> jam: what would your suggestion be? We can HO for few min if you can
<nammn_de> *btw i changed the warn to an info in a previous pr
<nammn_de> jam stickupkid: If we want to make sure to take parent vcs dirs, I thought of something along this line https://github.com/juju/charm/pull/297 It is more of a draft to talk about
<stickupkid> nammn_de, i like it already tbh, it cleans up the code, but I'd just inline the strategy part
<stickupkid> nammn_de, i.e.  vcsStrategies["hg"] = []string{"hg", "id", "-n"}
<stickupkid> nammn_de, or even better, have the strategy be a struct i.e. vcsStrategies["hg"] = vcsCmd{cmd: "hg", args: []string{}}
<nammn_de> stickupkid: being a struct is nicer thats very much true!
<nammn_de> stickupkid: was just hacking it together to have a discussion point
<nammn_de> btw. added a comment to the pr description with possible downside
<stickupkid> nammn_de, you can add a func to each struct to handle the errors
<stickupkid> nammn_de, vcsCmd{cmd: "hg", args: []string{}, errorHandler: func(err error) error}
<stickupkid> nammn_de, that way you can handle the error exactly how you want
<nammn_de> stickupkid: my unsure point was more about: right now I just try each vcs. If they fail i can either log or not. The best solution would be only to log, if I know that the underlying vcs is actually git. So only log if git fails. But this would need even more checks
<nammn_de> makes sense?
<stickupkid> nammn_de, that's what the errorHandler does, i.e. if you only have it for git, then it's fine
<stickupkid> nammn_de, i.e. if cmds.errorHandler != nil { err = cmd.errorHandler(err) }
<stickupkid> done
<nammn_de> stickupkid: not sure if I follow or we're on the same page. What would you put in the errorHandler? Reasonably I would only implement it for now for git. But the errhandler has to contain something along: if you fail, check if you are even running under git
<nammn_de> and if yes, then log
<nammn_de> that would be my approach on the errfunc
<bdx> good morning
<bdx> I wanted to have a quick chat about peer relations and unit data
<bdx> peer._data['private-address']
<bdx> from what I can gather, the 'private-address' in the peer data above is derived from the 'private-address' attribute in the unit_data
<bdx> for each peer
<bdx> should we be populating the unit_data with information from network_get()?
<bdx> in my use case, all peers need to know each other's ip address
<bdx> previously, I had always ascertained peer information through the unit_data exposed for each unit of the peer relation
<bdx> I get the feeling we are moving away from the unit_data model
<bdx> or possibly just moving to replace the unit_data with information from other sources
<bdx> wallyworld recently added some functionality that adds ip address data to the return network-get for k8s application charm units
<bdx> I'm trying to see my way through to being able to consume that newly added ip address information from the view of a peer
<bdx> one thing I'm thinking about doing is extending the Endpoint class via a peer relation to allow for each peer to set its ip address data (the return of network_get()
<bdx> in this way I would be able to expose each peer's ip (returned via network_get()) to each other peer
<bdx> does this sound like a legitimate way to go about this?
<stickupkid> bdx, probably best to speak to manadart or jam (rick_h is away). maybe open a discourse post so the right people see it
<bdx> stickupkid: will do, thanks!
<skay> I'm seeing weird errors when I deploy an app, and when I try to attach a resource. https://paste.ubuntu.com/p/xQQ4nwcqR3/
<skay> The controller is localhost based on lxd.
<skay> to take my charm out of the equation I deployed juju-hello https://paste.ubuntu.com/p/7TQ3StsXFz/
<stickupkid> skay, what version of juju?
<skay> 2.6.9-bionic-amd64
<stickupkid> skay, so what are you bootstrapping to? i.e. juju bootstrap lxd (or localhost)?
<skay> localhost
<stickupkid> skay, if you're bootstrapping to lxd, those errors are normal, there is ticket to try and reduce the verbose nature of that error message, but I've yet to get around to it
<skay> stickupkid: ok. is there a hello-world type of charm that I can attach a resource to to see if I get the other error?
<skay> those errors might be misleading me
<skay> hold on, I thought I included it in a pastebin. https://paste.ubuntu.com/p/7trQtnn5zP/
<skay> that's when I tried to run the attach command with a charm I'm working on. Last time I worked on it was pre-bionic and I haven't done anything with it a while
<stickupkid> skay, ah ok, that's a different error, yeah does indicate that api server isn't up and caused a connection reset
<stickupkid> skay, i would honestly open up a bug, so we can discuss it
<stickupkid> skay, https://bugs.launchpad.net/juju/+bugs
<skay> will do! thank you
<stickupkid> skay, it would be great if you can give as much info as possible about what you're bootstrapping to and what charm you're using
<stickupkid> skay, that would be fantastic
<skay> stickupkid: It's a private charm, but I'll see what I can do.
<skay> maybe I can make a charm to reproduce it
<stickupkid> skay, sure, if we can boil it down to a simple test case then that would help
<skay> unless there is another I can use
<skay> one that already exists
<stickupkid> skay, we have test charms that might be of some use https://github.com/juju/juju/tree/develop/testcharms/charm-repo/bionic/dummy-resource
<stickupkid> skay, or https://jaas.ai/u/juju-qa/upgrade-charm-resource-test/bionic/1
<skay> stickupkid: the upgrade-charm-resource-test worked when I attached a resource to it. that is a helpful clue.
<skay> stickupkid: I think it has to do with the filesize I was trying to attach. I got a connection refused eventually while trying to attach it to the test charm
<skay> how big can those files be? I attach a tarball that includes my app's dependencies
<webstrand> I'm trying to update an ebuild for juju 2.6 on gentoo
<webstrand> The one I'm modifying (2.1) has a list of dependencies in the format "github.com/lestrrat-go/jspointer:f4881e6"
<webstrand> any idea how I can get a list of dependencies like that for the new version?
<wallyworld> webstrand: juju 2.6 uses golang's go dep tool. there's a Gopkg.toml file with all the deps
<webstrand> wallyworld: thanks!
<wallyworld> no problem. we will be moving to modules at some point soon hopefully
<webstrand> Do you know if `dep status` is just a nicely formatted version of Gopkg.toml? Or does it include other stuff
<thumper> babbageclunk: https://github.com/juju/juju/pull/10798
<thumper> webstrand: there is a 'make dep' command
<thumper> yes Juju has a Makefile :)
<webstrand> thumper: According to the doc, that only checks to make sure the Gopkg.toml and lock are in sync?
<thumper> webstrand: it does pull them into the vendor dir
<thumper> and that they are in sync
<thumper> I use it daily, so I'm pretty sure it works :)
<webstrand> ah, I need the dependency source and revision for the ebuild. Something to do with reproducible builds
<thumper> webstrand: the gopkg.toml file does list the hashes or all the dependencies
<thumper> so it is entirely reproducible
<thumper> what are you missing?
<webstrand> I'm just trying to figure out how to parse Gopkg.toml and extract what I need, that's all.
<thumper> ah
<thumper> ok
<babbageclunk> webstrand: the .lock file is probably what you should use for fixing a reproducible build
<webstrand> oh, that makes sense
<babbageclunk> it has all the transitive dependencies which the .toml file might not have (although I think we do try to do that)
<webstrand> Oh no... the lock file references the bad packages too: github.com/lestrrat/go-jsschema has now become github.com/lestrrat-go/jsschema but still depends on github.com/lestrrat/go-jsschema
<webstrand> I'll deal with this tomorrow
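For producing the `name:revision` pairs in the format webstrand wants for the ebuild, a minimal scan of Gopkg.lock's `[[projects]]` blocks could look like this Go sketch. It is not a full TOML parser, just a line scan under the assumption that each project block carries `name = "..."` and `revision = "..."` lines.

```go
package main

import (
	"fmt"
	"strings"
)

// lockDeps extracts "name:shortrev" pairs from Gopkg.lock text.
// Illustrative only; a real tool should use a proper TOML parser.
func lockDeps(lock string) []string {
	var deps []string
	var name, rev string
	flush := func() {
		if name != "" && rev != "" {
			short := rev
			if len(short) > 7 {
				short = short[:7] // ebuild-style short revision prefix
			}
			deps = append(deps, name+":"+short)
		}
		name, rev = "", ""
	}
	for _, line := range strings.Split(lock, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case line == "[[projects]]":
			flush() // start of a new project block
		case strings.HasPrefix(line, "name = "):
			name = strings.Trim(strings.TrimPrefix(line, "name = "), "\"")
		case strings.HasPrefix(line, "revision = "):
			rev = strings.Trim(strings.TrimPrefix(line, "revision = "), "\"")
		}
	}
	flush() // emit the final block
	return deps
}

func main() {
	sample := "[[projects]]\n  name = \"github.com/lestrrat-go/jspointer\"\n  revision = \"f4881e611bdbe9fb413a7780721ef8400a1f2341\"\n"
	fmt.Println(lockDeps(sample))
}
```

As babbageclunk notes, the lock file is the right input here because it pins transitive dependencies that Gopkg.toml may omit.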
<pmatulis> the 19.10 OpenStack Charms release is now available...
<babbageclunk> thumper: approved
#juju 2019-10-25
<wallyworld> babbageclunk: we miss you
<babbageclunk> oops! omw
<babbageclunk> wallyworld: any idea why actions would just get queued as pending and never be run?
<babbageclunk> worse - one did run and then others never have.
<babbageclunk> (even on different units)
<wallyworld> babbageclunk: the unit agent watches actions and reacts - it marks them as running when it executes them. so if an action is not being run, you need to look at the unit agent to see why
<babbageclunk> I can't see anything in the debug-logs, I'll have a look in the files.
<babbageclunk> thanks wallyworld
<wallyworld> babbageclunk: i had a look at the action resolver in the uniter and there's no filtering - it really does just react directly to an action watcher that flicks across any queued/pending actions
<thumper> babbageclunk: actions require the hook-execution lock
<thumper> babbageclunk: look at the machine-lock file on the machine
<thumper> and the juju_machine_lock command
<babbageclunk> nothing holding the lock - I can see the first run releasing the lock at the end, and I've had hooks run after that. Just not getting anything for the new pending actions I've added.
<babbageclunk> so that means that the watcher isn't seeing them for some reason?
<wallyworld> maybe - you'd need logging in remotestate/watcher etc
<thumper> babbageclunk: if you specify juju.apiserver to TRACE you'll see the details of all the api calls
<thumper> that may help
<babbageclunk> ok, thanks
<babbageclunk> there's logging in remotestate/watcher for when the action watcher triggers, I can see it for the first one but not subsequent actions...
<babbageclunk> it's almost like the actions watcher has stopped.
<wallyworld> kelvinliu: the issue is that the secret labels in 2.7 include model name. that's not needed because the secrets for an app are created in the app's namespace and we know the model. if we remove that label it will work as 2.6.9 correctly uses the juju-app=foo label
<wallyworld> configmap also uses model label
<wallyworld> we should only use model label for global resources
<kelvinliu> ic. we should only have model labels for global resources
<wallyworld> so it will be a quick fix
<kelvinliu> i will fix it after current pr landed.
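The fix being described can be sketched as a small label builder: application-scoped resources already live in the model's namespace, so they only need the `juju-app` label; only global (cluster-scoped) resources need a model label too. The `juju-app` key follows the convention mentioned above; the model key name here is an assumption:

```go
package main

import "fmt"

// resourceLabels builds the k8s labels for a juju-managed resource.
// Namespaced resources get only the app label; global resources also
// carry the model label so they can be told apart across models.
func resourceLabels(app, model string, global bool) map[string]string {
	labels := map[string]string{"juju-app": app}
	if global {
		labels["juju-model"] = model // assumed key name, for illustration
	}
	return labels
}

func main() {
	fmt.Println(resourceLabels("foo", "m1", false))
	fmt.Println(resourceLabels("foo", "m1", true))
}
```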
<kelvinliu> wallyworld: so I just tested on gke, aks working fine, microk8s failed due to this issue https://github.com/ubuntu/microk8s/issues/757  I will test CDK on aws after lunch.
<wallyworld> sgtm, ty
<wallyworld> kelvinliu: is latest change pushed?
<babbageclunk> thumper: can I pick your brains about this?
<kelvinliu> the credential schema change, yes, but I need to rename the func and see if any other clean up/rebase etc needs to be done.
<wallyworld> that's ok, i might try and test too
<thumper> babbageclunk: sure
<babbageclunk> in 1:1?
<thumper> omw
<wallyworld> kelvinliu: testing with CDK on AWS, the LoadBalancer service is not getting an external IP so bootstrap fails. not sure if that's related. might be an issue with integrator
<kelvinliu> wallyworld: my cdk is deploying, i will see if it's reproducible
<wallyworld> kelvinliu: also, the nodes are missing the juju.io/cloud labels so i am raising a bug
<kelvinliu> yeah, it's missing https://pastebin.ubuntu.com/p/xZMPxCwJYn/
<kelvinliu> wallyworld: i tested on cdk/aws, all good for me. I didn't have the LB provision issue.
<wallyworld> must be my account or something
<kelvinliu> missing -- trust?
<wallyworld> juju deploy charmed-kubernetes --overlay /home/ian/aws-overlay.yaml --trust
<wallyworld> that's supposed to work
<kelvinliu> i use the same cmd and copied overlay from the doc https://pastebin.ubuntu.com/p/WMgVhNQyTF/
<wallyworld> yup, i did too
<wallyworld> babbageclunk: did you sort out the actions thing?
<babbageclunk> wallyworld: nope
<babbageclunk> it's really weird
<wallyworld> what charm?
<babbageclunk> wallyworld: it's my charm - but it doesn't seem like the watcher internals are passing along the update
<anastasiamac> wallyworld: babbageclunk : PTAL https://github.com/juju/juju/pull/10801 - corrected creation of cloud call context
<wallyworld> babbageclunk: i'll try deploying a stock charm and see if actions work
<wallyworld> babbageclunk: hmmm, works for ubuntu-lite with a simple action i added
<babbageclunk> wallyworld: and if you queue another action?
<wallyworld> babbageclunk: works
<wallyworld> this is on develop
<babbageclunk> ok, I think it's to do with jam's changes. Can I talk through with you a bit? I'm pretty unfamiliar with the hubwatcher code
<babbageclunk> wallyworld: in standup?
<wallyworld> sure
<wallyworld> anastasiamac: i have to duck out for a bit, a couple of quick initial questions have been left. i may be missing something
<anastasiamac> k
<anastasiamac> wallyworld: updated
<wallyworld> looking
<wallyworld> anastasiamac: while i look? https://github.com/juju/juju/pull/10800 small
<wallyworld> hpidcock: did you need me to look at any wip?
<anastasiamac> wallyworld: k
<hpidcock> probably monday morning, I'm pretty tired going to finish up a bit earlier
<wallyworld> anastasiamac: with the network facade, that is a model facade right? so technically it is ok as is i would have thought?
<wallyworld> as it always gets the right state when it is constructed
<wallyworld> it's only controller facades that get controller state
<wallyworld> and the same st is being passed in anyway
<anastasiamac> wallyworld: with the network one, because we are using the code in common, we could potentially have a controller facade using it (dunno for what but possible ... since nothing stops us)... I did not want it to be a possibility
<wallyworld> func NewNetworkConfigAPI(st *state.State ...)
<anastasiamac> wallyworld: i think with this experience, every time we need a call context, we should construct it
<wallyworld> so the change does not prevent that st above being passed in
<wallyworld> i can see passing in the one st is useful though
<wallyworld> and making the context off that
<wallyworld> just trying to understand the change
<anastasiamac> wallyworld: with respect to networking facade, i.e. NewNetworkConfigAPI(...) like i said above - i don't think we should construct call context in constructor any longer... we should only construct it as needed in the individual call
<wallyworld> yeah
<anastasiamac> wallyworld: hence i've changed the signature and removed call context creation from constructor...
<wallyworld> anastasiamac: func (s statePoolShim) GetModelCallContext(modelUUID string).... is there an issue there? we Release() modelState inside the func but then pass modelState.State as a return value expecting it can be used outside the func call.
<anastasiamac> wallyworld: i don't think so.... the new implementation just moves things around for convenience but is the same as the old one where we did call release in apiserver... and the old one worked... so i'm sure the new one works too
<anastasiamac> wallyworld: i tested live :D
<wallyworld> ok, that reference counting code is a bit hairy and i need to re-learn it
<wallyworld> statepool, modelstatepool, systemstate etc etc
<anastasiamac> the pr fixes the issue in the bug :)
<anastasiamac> and makes sure that all similar calls are not wrong
<wallyworld> indeed, i just wanted to make sure there wasn't a st leak
<wallyworld> or a premature release of state from the pool
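The reference-counting behaviour being questioned can be illustrated with a toy pool (all names here are hypothetical simplifications of juju's state pool, not its real types): Release decrements a counter rather than destroying the state immediately, so a State returned after Release stays usable until the last reference goes.

```go
package main

import "fmt"

// pooledState stands in for a ref-counted entry in the state pool.
type pooledState struct {
	refs   int
	closed bool
}

func (p *pooledState) acquire() { p.refs++ }

// release drops one reference; the underlying state is only torn down
// when the final reference is gone.
func (p *pooledState) release() {
	p.refs--
	if p.refs == 0 {
		p.closed = true
	}
}

func main() {
	st := &pooledState{}
	st.acquire() // the pool's own reference
	st.acquire() // the caller inside the helper
	st.release() // the Release() inside the helper
	fmt.Println(st.closed) // still open: the pool holds a reference
	st.release()
	fmt.Println(st.closed) // last reference gone, now closed
}
```

This is why releasing inside the helper and still returning the State can work in practice: the premature-release risk only bites if the helper's reference was the last one.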
<wallyworld> i'm still not 100% but if it works....
<anastasiamac> ur concern is appreciated and all the vali questions :D
<anastasiamac> valid*
<wallyworld> i don't know the code well enough anymore
<anastasiamac> it works in practice :)
<anastasiamac> yeah.. that m.UUID() gave me hell... the model we get above is not a full model... it's interfaced... but i think u r right, it's cleaner to add a method to the interface than to do the convoluted dance
<wallyworld> yeah, got to add it to mocks etc but cleaner prod code
<manadart> wallyworld: Should I look at https://bugs.launchpad.net/juju/+bug/1849744 today?
<mup> Bug #1849744: k8s upgrade to 2.7 results in broken apps <juju:Triaged> <https://launchpad.net/bugs/1849744>
<wallyworld> manadart: nope, i already know the fix
<wallyworld> just got to do a pr
<wallyworld> it's about 3 lines of code and tests
<wallyworld> i assume it's not blocking you?
<manadart> wallyworld: OK. No not block me.
<manadart> *blocking.
<achilleasa> manadart: thanks for the bug link. I have approved 10779
<manadart> achilleasa: Thanks.
<stickupkid> manadart, the branch stuff is still behind a feature flag right?
<manadart> stickupkid: Yeah. "export JUJU_DEV_FEATURE_FLAGS=generations".
<stickupkid> manadart, nice nice
<stickupkid> manadart,  are we calling the code internally to juju, branches or generations
<manadart> stickupkid: It was initially just generations before we started using the "branch" name. So the packages, state collection and some types still use that nomenclature.
<manadart> stickupkid: "branch" is generally the term for in-flight, not-committed generations.
<manadart> stickupkid: Once committed, the generation seq is bumped and the model has a new "generation", a high-water mark if you will.
<manadart> There is a notion to make all changes use a branch implicitly, so each change will increment the model generation and you will be able to see a history of changes.
<manadart> ^^ Once committed...
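The branch/generation model described above can be sketched in a few lines: in-flight changes live on a named branch, and committing a branch bumps the model's generation sequence, leaving a high-water mark of committed change sets. Types and method names here are illustrative, not juju's real ones:

```go
package main

import "fmt"

// model tracks its committed generation and the set of active
// (in-flight, uncommitted) branches.
type model struct {
	generation int
	branches   map[string]bool
}

func (m *model) addBranch(name string) { m.branches[name] = true }

// commit retires the branch from the active set and bumps the model
// generation, returning the new high-water mark.
func (m *model) commit(name string) int {
	delete(m.branches, name)
	m.generation++
	return m.generation
}

func main() {
	m := &model{branches: map[string]bool{}}
	m.addBranch("bionic-upgrade")
	fmt.Println(m.commit("bionic-upgrade")) // generation 1
	m.addBranch("tweak-config")
	fmt.Println(m.commit("tweak-config")) // generation 2
}
```

Making every change use an implicit branch, as suggested above, would mean each change increments the generation, giving a full history of model changes.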
<stickupkid> righto
<nammn_de> manadart: if you want something small in the evening, a small PR regarding branches update  https://github.com/juju/juju/pull/10796 :D
<nammn_de> manadart: was not 100% sure what the right way to work on status is right now, so I just opened the PR so that we can discuss in case we do it differently
<manadart> nammn_de: OK, I'll take a look.
<manadart> achilleasa: https://github.com/juju/juju/pull/10802
<manadart> Oops. Looks like we have never handled space info in address migrations.
<achilleasa> manadart: approved PR; one mock seems to have escaped the renaming
<stickupkid> manadart, is there a way to see all branches ?
<manadart> stickupkid: We are going to need such a command.
<stickupkid> :-p
<nammn_de> if someone wants to take a look at this small one. Changed the track err msg and update mockgen manadart ^ https://github.com/juju/juju/pull/10804
<nammn_de> and the one we were just talking about, how we read charm; simon already added nice comments https://github.com/juju/charm/pull/297 . I added my opinion on why what I added can also be bad
<stickupkid> manadart, before I add some qa steps, anychance of looking at the WIP https://github.com/juju/juju/pull/10803
<manadart> stickupkid: Yep.
<N3tw0rK> noob question: how can i set the release of openstack? looks like it defaults to queens. I'd like to set it to a more recent release.
<pmatulis> N3tw0rK, if you're on an LTS the short answer is "UCA"
<pmatulis> https://wiki.ubuntu.com/OpenStack/CloudArchive
<N3tw0rK> pmatulis, ok is there anything juju specific that needs to be done once the UCA repo is added to the nodes?
<pmatulis> N3tw0rK, are you installing O/S charm by charm or using a bundle?
<N3tw0rK> pmatulis, charms
<pmatulis> then this is the key syntax you need:
<pmatulis> juju deploy <openstack-charm> --config openstack-origin=cloud:bionic-train
<N3tw0rK> pmatulis, Thanks!
<pmatulis> some non-openstack charms (e.g. ceph) will use a different option:
<pmatulis> juju deploy ceph-osd --config source=cloud:bionic-train
<hml> achilleasa:  we pass the space names as tags in some places... and _default is not a valid space tag currently
<achilleasa> like constraints?
<hml> :-)
<hml> achilleasa:  âjuju spacesâ
<hml> output
<achilleasa> I sense a names.v3 PR coming soon...
<hml> hahahaha
<hml> achilleasa:  it would help with the Config validation as well.
<hml> achilleasa:  itâs a bit ugly reallyâ¦ i donât think we want to allow just an name with understore in front
<achilleasa> can we have the underscore in the end instead?
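The kind of validation being discussed can be sketched as a regexp that rejects a leading underscore, so a reserved name like `_default` fails the check. The pattern here is an assumption modelled on the names package style, not the real names.v3 rule:

```go
package main

import (
	"fmt"
	"regexp"
)

// validSpace requires the name to start with a lowercase letter or
// digit, ruling out underscore-prefixed reserved names like _default.
var validSpace = regexp.MustCompile(`^[a-z0-9]+[a-z0-9-]*$`)

func isValidSpaceName(name string) bool {
	return validSpace.MatchString(name)
}

func main() {
	fmt.Println(isValidSpaceName("internal")) // true
	fmt.Println(isValidSpaceName("_default")) // false: leading underscore
}
```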
<stickupkid> manadart, it doesn't look like we validate the branch name in terms of uniqueness
<manadart> stickupkid: I think we do for active branches, but you can use the name of a committed one.
<stickupkid> manadart, yeah, ok, so let's say if it's not committed you can't pick that branch name
<stickupkid> manadart, i'll add that validation
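The uniqueness rule agreed here, that a new branch may not reuse the name of an *active* (uncommitted) branch, while names of committed branches are free again, could look roughly like this (names hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

// addBranch registers a new branch, failing if an active branch
// already holds the name. Committed branches are assumed to have been
// removed from the active set, so their names can be reused.
func addBranch(active map[string]bool, name string) error {
	if active[name] {
		return errors.New("branch " + name + " already active")
	}
	active[name] = true
	return nil
}

func main() {
	active := map[string]bool{"fix-config": true}
	fmt.Println(addBranch(active, "fix-config")) // rejected: already active
	fmt.Println(addBranch(active, "new-work"))   // accepted
}
```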
<nammn_de> manadart: looks like my pr was not problem free. I need to address this and come back to you, probably I will remove the additional api call
<nammn_de> manadart: seems like init cannot access the api yet
<webstrand> I'm having trouble building juju 2.6.9; 2.1 builds fine. Any idea what I'm missing? http://dpaste.com/20WC576
<manadart> webstrand: Juju switch to dep since 2.1. Have you run "go make dep"?
<manadart> *switche
<manadart> *switched dammit.
<webstrand> Yep, I rebuilt all dependencies from Gopkg.lock
<mbeierl> anyone else having charm build failures due to https://git.launchpad.net/layer-apt/ giving 503?
