#juju 2013-09-09
<davecheney> this week in silly juju tricks http://dave.cheney.net/2013/09/09/using-juju-to-build-gccgo
<mthaddon> who do I talk to about https://maas.ubuntu.com/docs/quantal/juju-quick-start.html being out of date (pyJuju vs. juju-core for instance)?
<elmo> mthaddon: quantal?
<elmo> mthaddon: in any event, evilnick owns docs for both maas and juju AFAIK
<mthaddon> hmm, right - not sure how I got to quantal docs, will see if I can backtrack
<mthaddon> but http://maas.ubuntu.com/docs/juju-quick-start.html also mentions "It has to completely install Ubuntu and Zookeeper"
<mthaddon> evilnickveitch: ^
<evilnickveitch> mthaddon, hi, yeah, I can update MAAS docs
<mthaddon> evilnickveitch: cool, per above it looks to be referencing pyJuju still
<evilnickveitch> mthaddon, okay, it probably needs a good going over anyhow.
<mthaddon> ah, so the reason I was in those docs is "Get Started Now" off https://maas.ubuntu.com/ takes you to https://maas.ubuntu.com/docs/quantal/install.html
<mthaddon> the documentation link from the top nav takes you to http://maas.ubuntu.com/docs/ though
<evilnickveitch> oh, that sucks... I will see if I have access to update the link too
<jcastro> evilnickveitch: a maas-docs sprint wouldn't be a bad idea!
<evilnickveitch> jcastro, it would certainly help if someone read it now and again
<aethelrick> hello all, I'm using juju in local mode and I've added a postgresql service that works quite nicely. I would like to connect pgAdmin3 to this from my host machine but my changes to the pg_hba.conf are destroyed on boot. What's the correct way to make this configuration persistent?
<aethelrick> should I be looking at a custom charm for postgres hosted on my machine, or is there a way to do this using the original charm from the charm store?
<AskUbuntu> Juju - Installing ceph and ceph-osd charms on same machine? | http://askubuntu.com/q/343349
<marcoceppi> aethelrick: there's no way to currently do this from the charm, unfortunately. There is a postgresql-psql charm, though, that might be of assistance.
<marcoceppi> aethelrick: I've not used pgAdmin3 before, so I can't say for certain if it will help you or not
<marcoceppi> aethelrick: https://jujucharms.com/precise/postgresql-psql-HEAD/
<aethelrick> marcoceppi, thanks for the links, I've already had a look at the postgres-psql charm, and it overcomes the same problem for itself, but it runs as a service itself, whereas pgAdmin is running on my host and needs to connect to one of my service instances
<aethelrick> I need some simple way to add-relation to machine "0", I suppose, but the documentation does not seem to suggest this is possible
<marcoceppi> aethelrick: that's not possible. Let me play with pgadmin to see if I can get you an answer
<rick_h> marcoceppi: so yea, the trouble is the need to add access to the outside world to pgsql via the pg_hba.conf (which lists the ip ranges/user allowed to auth)
<marcoceppi> aethelrick: you could create an SSH tunnel to either the psql service or the postgresql service to avoid having to edit the pg_hba conf
<aethelrick> marcoceppi, the other thing I may be able to do is use SSH tunnels via the postgres-psql service
<aethelrick> hehe... great minds think alike apparently!
<marcoceppi> Honestly, with the way charms are designed, you should never have to ssh in and change things. If you want to be able to add external IP addresses to the pg_hba conf, it should be done via a charm configuration option. Something like "external_access", where you can set a list of IP addresses to be appended to the allow list
<marcoceppi> aethelrick: if you were looking to do that, it'd be a great addition to the charm, otherwise I think a tunnel is the only way to go
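A minimal sketch of the tunnel marcoceppi suggests, with the unit address as a placeholder (`juju status` shows the real one):

    # forward local port 5432 to PostgreSQL on the unit,
    # then point pgAdmin3 at localhost:5432
    ssh -N -L 5432:localhost:5432 ubuntu@<postgresql-unit-address>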
<aethelrick> yeah, I think I'll add the option  to the charm... I'll go read the charm writing howto and get back to you
<aethelrick> :D
<marcoceppi> aethelrick: feel free to ping the channel with any questions you may have! The postgres charm is a bit more of an "advanced" charm, and uses a single hook file as opposed to many different ones
<jcastro> aethelrick: a new charm option would be badass!
 * aethelrick is off to read the docs...
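For reference, a charm configuration option like the one discussed above would be declared in the charm's config.yaml; a hedged sketch, with the option name and wording assumed:

    # hypothetical excerpt from the charm's config.yaml
    options:
      admin_ip:
        type: string
        default: ""
        description: IP address to append to pg_hba.conf for external admin access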
<jcastro> jamespage: this one's for you! http://askubuntu.com/questions/343349/installing-ceph-and-ceph-osd-charms-on-same-machine
<sinzui> jcastro, charmers, This is the listing of charm READMEs with odd names: http://pastebin.ubuntu.com/6083706/
<jcastro> ok
<jcastro> so I need to fix hive is what you're telling me
<sinzui> jcastro,  I think charmers have fixed what they can (except for hive).
<jcastro> yeah, and tbh, do we really care about personal branches from oneiric etc?
<sinzui> Since Juju-GUI is going to start mixing reviewed and unreviewed charms in listings, these other charms might be a problem
<jcastro> yeah but the readmes will just be ugly
<jcastro> and the results will be weighted towards reviewed charms anyway, right?
<bac> sinzui, jcastro: md files with extensions of .md, .mkd, and .markdown are all handled identically
<bac> sinzui, jcastro: README files, that is
 * jcastro nods
<jcastro> and hive is committed, that takes care of the store!
<sinzui> jcastro, I am tinkering with a search to find reviewed charms with store errors. I'll paste it if I find some
<jcastro> sinzui: if it's more than two can you post it to the list?
<jcastro> or dude, I have an idea
<jcastro> maybe make it so when there's store errors it automagically puts the charm in the review queue?
<sinzui> jcastro, exactly why I am tinkering with the query :)
<AskUbuntu> Why use blackmagic/witchraft (juju) as the new theme for Ensemble? | http://askubuntu.com/q/343383
<jcastro> nice!
<jcastro> marcoceppi: yo yo
<jcastro> marcoceppi: any idea for questions, etc wrt. testing and charm helpers? http://91.189.93.79/cloud/cookbook/
<marcoceppi> jcastro: testing?
<jcastro> testing your own charm, etc.
<marcoceppi> jcastro: there isn't any written yet
#juju 2013-09-10
<dalek49> why does juju require a local charm to be under a "precise" directory?  I'm running raring, and when I put it under "raring" it complains that it can't find anything in "precise"
<sarnold> dalek49: does your environment say to deploy on precise or raring?
<dalek49> sarnold: it's not specified, does it default to precise?
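At the time, juju-core did fall back to precise (the LTS) unless told otherwise; a sketch of the environments.yaml key that controls this:

    # excerpt from ~/.juju/environments.yaml
    default-series: raring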
<bladernr`> davecheney: sorry...
<bladernr`> davecheney: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
<bladernr`> I have two different keypairs… 1 is stored locally and allows me to ssh from my machine to an instance i created and associated with that key.
<bladernr`> the other I use when creating other instances and allows me to ssh from my main ec2 instance that runs juju to the newly created instance… IOW, key1 allows me to ssh from my workstation to ec2_instance_1. I create a new instance manually and set it to use key2. Key2 allows me to ssh from ec2_instance_1 to ec2_instance_2
<bladernr`> manually, it works fine… what I'm trying to find, now, is the key to set in environments.yaml to tell juju "When you create an instance, associate it with ec2 keypair 'key2'"
<bladernr`> davecheney: you mentioned AWS environment variables earlier, and I can find ones to specify the aws API keys so juju can spawn instances, but I can't find one to specify those ec2 keypairs as described in the link I posted
<bladernr`> davecheney: sorry, I'm really not trying to be obtuse or intentionally confusing...
<bladernr`> it's just working out that way
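For what it's worth, juju-core does not select a named EC2 keypair at all; it injects SSH public keys into the instances it starts via cloud-init. A sketch using the authorized-keys-path setting (the path is a placeholder):

    # excerpt from ~/.juju/environments.yaml, under the ec2 environment
    authorized-keys-path: ~/.ssh/key2.pub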
<freeflying> does juju core support exporting a whole deployed environment?
<aethelrick> hi all, I've made a change to the postgresql charm that allows you to set an admin_ip configuration option, which causes that IP to be added to the pg_hba.conf file... anyone want to check it out for sanity?
<aethelrick> I'm not a python programmer... but it seems to work ok for me :)
<mthaddon> aethelrick: do you have a merge proposal?
<aethelrick> mthaddon, wastha? sorry, I'm new here... I started playing with juju yesterday for the first time
<marcoceppi_> aethelrick: Do you have a launchpad account?
<aethelrick> marcoceppi_, not yet... I'm sure I can make one though...
<mthaddon> aethelrick: if you want to get a change made to the upstream charm, you'll need to create a merge proposal on launchpad and it'll then show up on http://manage.jujucharms.com/review-queue
<marcoceppi_> aethelrick: that'd be the first place to start. Once you have a (free) account, you can push your version of the code to launchpad and people can review it
<aethelrick> mthaddon, ok, thanks... will figure that out and get back to you when merge proposal is submitted
<aethelrick> marcoceppi_, thanks :)
<mthaddon> k
<kenn> Hi guys, also just starting out with juju. Really like it so far, have experience with Chef on RightScale, this seems more ... sane
<kenn> have a question though, I deployed a service which failed during install. I then ran destroy-service so I could deploy it to the machine again, but it's not going away. Is there a way to force the destruction of a service?
<aethelrick> ok, I have made a merge request for postgresql
<aethelrick> this is the branch I made... https://code.launchpad.net/~richard-asbridge/charms/precise/postgresql/postgresql-admin-ip
<marcoceppi_> aethelrick: you were really close, you need to reverse the order
<marcoceppi_> aethelrick: you want to merge lp:~richard-asbridge/charms/precise/postgresql/postgresql-admin-ip in to lp:charms/postgresql :)
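The usual Launchpad flow, sketched with bzr (branch name per the discussion):

    # push your local branch to Launchpad
    bzr push lp:~richard-asbridge/charms/precise/postgresql/postgresql-admin-ip
    # then use "Propose for merging" on the branch page,
    # with lp:charms/postgresql as the target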
<marcoceppi_> kenn: welcome!
<kenn> ah there we go, juju resolved did the trick
<marcoceppi_> kenn: when a service is in an error state all future events (including destroy) are queued
<kenn> thanks marcoceppi_
<marcoceppi_> kenn: you have to resolve the error before continuing
<marcoceppi_> kenn: ah, you got it, cool
<kenn> marcoceppi, actually just watched the charm schools on Local/LXC Provider, thought your name looked familiar. Great video thanks for that, it really cleared up a lot of small random things for me
<marcoceppi> kenn: glad that worked out for you, there were a lot of odd things that popped up during that charm school, so I'm happy you found it helpful
<marcoceppi> aethelrick: the merge looks good, I'd just change the default from 'None' to an empty string
<Kab> hi im trying to deploy to a vps i have running ubuntu 12.04
<jcastro> jamespage: ooh, tell me about this percona xtradb you've got going on
<Kab> the urls in juju init are wrong
<marcoceppi> Kab: what do you mean the URLs are wrong?
<Kab> type in juju init
<marcoceppi> Kab: right, I'm familiar with this, which provider are you trying to use?
<Kab> they all give https://juju.ubuntu.com/get-started/ url
<Kab> local
<marcoceppi> jcastro: ^
<marcoceppi> Kab: we just had a new website released, looks like they didn't properly 301 redirect the URLs
<marcoceppi> Kab: https://juju.ubuntu.com/docs/config-local.html
<jcastro> YARGH
<jcastro> Kab: which URLs are you running into this?
<marcoceppi> Kab: make sure you add the ppa:juju/stable before installing mongodb-server
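A sketch of the local provider prerequisites marcoceppi mentions (package set assumed from the docs of the time):

    sudo add-apt-repository ppa:juju/stable
    sudo apt-get update
    sudo apt-get install juju-core mongodb-server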
<marcoceppi> jcastro: they're in the juju init output
<marcoceppi> jcastro: and all over the internet
<jcastro> oh
<jcastro> just the root get-started
<jcastro> got it
<marcoceppi> jcastro: yeah, going to make a tool to prevent this from happening ever again
<aethelrick> marcoceppi, hi, was away from desk, I will change the None to an empty string
<marcoceppi> aethelrick: no worries, thanks. You'll also want to update the merge proposal, I sent instructions in your current merge request
<aethelrick> marcoceppi, thanks :)
<jcastro> marcoceppi: make sure it spams the planet when we break urls
<marcoceppi> aethelrick: otherwise, the merge looks good. I'm on review this week so I might not get to it today, but it'll be looked at sometime this week
<marcoceppi> jcastro: yeah, seriously
<kurt_> marcoceppi: has that configuration for single provide been tested with maas?
<jcastro> marcoceppi: they're already working on it
<marcoceppi> kurt_: what do you mean by single provide been tested with maas?
<jcastro> but still ... :-/
<kurt_> marcoceppi: local rather
<jcastro> marcoceppi: did you see the manual provisioning stuff on the list? I totally missed it until this morning
<jcastro> marcoceppi: oh nm, I see you replied
<marcoceppi> jcastro: I did!
<marcoceppi> kurt_: you don't use the local provider with maas
<marcoceppi> maas is its own provider :)
<kurt_> but what about the case of trying to consolidate services when deploying?
<kurt_> ie. --to
<kurt_> I'm wondering if this is a more elegant solution for that
<kurt_> so far I have not had a lot of success trying to consolidate services with 1.12
<jamespage> jcastro, active/active mysql :-)
 * jamespage disappears again
<jcastro> whoosh!
<gnuoy> hazmat, hi there, I'm not sure if you're the right person to ask but I have a pretty small mp to lp:juju-deployer/darwin that would be great to get landed if  you get a moment ( https://code.launchpad.net/~gnuoy/juju-deployer/darwin-fix-force-machine/+merge/183613 )
<hazmat> gnuoy, noted
<gnuoy> thanks
<hazmat> gnuoy, at a conference today, but can tackle in this evening
<gnuoy> that would be awesome, thank you
<arosales> hazmat, should evilnickveitch take pythonhosted.org/juju-deployer for instructions on juju deployer for the juju docs?
<hazmat> arosales, there's not much there, src for that is lp:juju-deployer && cd docs
<jcastro> marcoceppi: redirect fixed
<marcoceppi> jcastro: \o/
<arosales> hazmat, ok we'll start with the basics. From there we can get your feedback,
<arosales> hazmat, if you have an outline on what you would like the docs to look like evilnickveitch can also build from there.
<arosales> jcastro, were you still working on the django workflow for the docs
<jcastro> I just mailed the guys
<jcastro> bruno is full up on work, waiting to see what patrick says
<arosales> wedgwood, would you be interested in getting some docs to evilnickveitch for charm helpers?
<arosales> wedgwood, it can be a rough outline and evilnickveitch can wordsmith
<wedgwood> I'm definitely interested...
<wedgwood> I'm afraid I won't have time to devote until tomorrow afternoon.
<evilnickveitch> wedgwood, that would be cool
<arosales> wedgwood, thanks, even if you have a rough outline to evilnickveitch by end of week that would be helpful
<arosales> marcoceppi, to confirm are you still working on how to upgrade a charm for the docs?
<wedgwood> I'll try my hardest
<arosales> wedgwood, thanks
<marcoceppi> arosales: yes
<arosales> marcoceppi, thanks and also the juju plugin bits too, correct?
<marcoceppi> arosales: correct
<arosales> marcoceppi, thanks
<Kab> the lxc install script is broken
<Kab> it does not fully install
<marcoceppi> Kab: care to elaborate?
<kurt_> marcoceppi: I'm picking up where I left off from about a week and a half ago.  I mentioned I was having problems destroying services after a failed deployment.  I believe you said I need to resolve the error prior to destroying service - is that correct?
<marcoceppi> kurt_: that's correct, you can run the destroy command at any time - it's not until the unit is resolved of its error that the next events will be processed
<kurt_> marcoceppi: and if I cannot actually resolve the problem, does it matter?
<kurt_> will I still be able to successfully destroy the service?
<marcoceppi> kurt_: so you can run `juju resolved` and that just tells juju "pretend like I've resolved this issue"
<marcoceppi> kurt_: it doesn't actually validate whether you've resolved the issue or not
<marcoceppi> it just marks the error as fixed and moves on to the next event
<kurt_> marcoceppi: ah ok, so the only requirement is to successfully mark the problem as resolved (assuming in the mongodb) and the service can be successfully destroyed.
<kurt_> marcoceppi: a question related to this, when using the gui, does it take care of all of this in the background? or is there some similar manual intervention.
<marcoceppi> kurt_: right, by marking it resolved juju just continues execution of queued events, which in this case is the destroy-service event
<marcoceppi> kurt_: there is both a "resolved" and a "retry" button on the unit screen in the gui. The first just runs `juju resolved`, the second runs `juju resolved --retry`
<marcoceppi> no further manual intervention is required
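The workflow marcoceppi describes, sketched with placeholder unit and service names:

    # mark the failed hook as resolved without re-running it
    juju resolved <service>/0
    # or re-run the failed hook instead
    juju resolved --retry <service>/0
    # once the error clears, queued events (including a pending destroy) proceed
    juju destroy-service <service>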
<kurt_> marcoceppi: ok, thank you.  This is a workflow change from previous versions of juju.  I think it should be documented, or perhaps commented in an error condition or something.  This was a source of confusion.
<marcoceppi> kurt_: it's pseudo-documented. Previously you could destroy services in an error state
<kurt_> it's not clear that you have to resolve the problem before a service can be destroyed
<kurt_> yes, why can't that be supported?
<kurt_> that would make life easier
<marcoceppi> kurt_: that's no longer the case as you see. However, there is this section https://juju.ubuntu.com/docs/charms-destroy.html which says to see the troubleshooting guide
<marcoceppi> kurt_: unfortunately that troubleshooting link is broken
<kurt_> marcoceppi: it is broken :)
<marcoceppi> kurt_: when it shows up again, that'll be fixed
 * marcoceppi files a bug
<kurt_> marcoceppi: is there a technical reason one should not be able to directly destroy service like before?
<marcoceppi> kurt_: it's the way events are processed by newer versions of juju
<kurt_> is it a problem with state tracking of the service and relations?
<kurt_> yeah ok
<marcoceppi> kurt_: it's not a problem, but a fix to another issue
<marcoceppi> in order to destroy a service you need to fire the stop hook
<marcoceppi> kurt_: in earlier versions of juju, when you destroyed a service it basically just got removed from the juju topology and that was that. No stop hook would fire, so it didn't matter. To fix that there's a dying state now, which says this unit is to be removed after all queued events have completed
<marcoceppi> kurt_: So, if you're in an error state, all hook executions are stopped and queued, including the stop hook and the destroy event
<marcoceppi> kurt_: so this "problem" is actually the way juju is supposed to work
<marcoceppi> it's just a fix to a long standing bug :)
<kurt_> I see.
<kurt_> it's just an interesting condition to leave things in an unending state
<kurt_> I guess in a particular use case
<marcoceppi> kurt_: not so much unending, everything just pauses when there's an error
 * kurt_ thinking
<kurt_> marcoceppi: maybe it's the terminology "dying" that is getting to me. that implies action when the process is actually paused.
<marcoceppi> kurt_: well it is /dying/ just how long it takes to die is up to you :)
<kurt_> marcoceppi: right.  Maybe I'm too caught up in my own confusion.  LOL
<kurt_> marcoceppi:  do you have a cached copy of that troubleshooting guide somewhere?
<marcoceppi> kurt_: I don't think it ever existed. I'm looking through revision history now
<marcoceppi> evilnickveitch: ^
<kurt_> thanks
<marcoceppi> evilnickveitch: also, where should the charm-tools documentation live? I don't know what section to put it under
<evilnickveitch> marcoceppi, I am adding a new section for it and other tools
<marcoceppi> evilnickveitch: \o/ cool. I'll just keep adding pages then to add to that section later
<evilnickveitch> kurt_, the troubleshooting page isn't live, but marcoceppi  has given you a good summary of what it has to say on the subject
<jamespage> jcastro, just answered http://askubuntu.com/questions/343349/installing-ceph-and-ceph-osd-charms-on-same-machine/343883
<jamespage> we really need a way to say - "don't do this" - in a way that juju can enforce
<kurt_> evilnickveitch: thanks. I will look forward to that working.
<X-warrior> If I'm using the postgresql charm with volume-ephemeral-storage as false and volume-map as "{ postgresql/0: S3 volume-id }", do I need to attach the volume to the instance? Or will the charm/juju handle it for me?
<marcoceppi> X-warrior: juju can't do any volume attaching IIRC, you'll need to do it yourself
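A sketch of doing the attach by hand with the ec2-api-tools; the volume id, instance id, and device name are placeholders:

    # attach an existing EBS volume to the instance backing postgresql/0
    ec2-attach-volume vol-12345678 -i i-87654321 -d /dev/xvdf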
<marcoceppi> evilnickveitch: what would "juju plugins" go under?
<evilnickveitch> marcoceppi, I guess the same section as charm tools and amulet?
<marcoceppi> evilnickveitch: well this is "how to create plugins" as well as install plugins
<marcoceppi> evilnickveitch: not sure if that changes anything
<evilnickveitch> marcoceppi, I think for the time being we will group all those things together
<marcoceppi> evilnickveitch: cool, what should I prefix the files with? reference-* ?
<evilnickveitch> I think "tools-", it will be a new section
<fwereade_> marcoceppi, evilnickveitch, jcastro: https://code.launchpad.net/~fwereade/juju-core/docs-splurge/+merge/184833 has quite a lot of new stuff
<fwereade_> marcoceppi, evilnickveitch, jcastro: sorry it took so long
<jcastro> better late than never!
<evilnickveitch> fwereade_, cool! thanks for that, i will look it over tomorrow
<X-warrior> is it possible to use a persistent storage on mysql config? Similar to postgresql? I don't think so, but maybe I'm missing something
<marcoceppi> X-warrior: not that I'm aware of
<X-warrior> marcoceppi: last question for the day (hopefully), is it possible to use elastic ip with juju?
<marcoceppi> X-warrior: you can assign an elastic ip to your instances, via the EC2 control panel. It won't affect juju at all. However, there is no way to do this from within juju at this time
<marcoceppi> * that I'm aware of
<X-warrior> marcoceppi: Does this elastic ip control fit within juju?
<X-warrior> I mean, this type of feature is in juju 'scope'?
<marcoceppi> X-warrior: I'm not sure if it's something on the road map or not. I know juju can kind of work with floating-ips using OpenStack, but I'm not sure if there are plans to manage elastic ips with ec2
<X-warrior> marcoceppi: well I'm willing to add it to juju, but before I start something I would like to check if it is something that could be used inside juju or not, because if it is not, I guess I can go with ec2 control panel
<marcoceppi> X-warrior: check with #juju-dev that's where all the core developers hang out
<X-warrior>  ty
<X-warrior> :D
<kurt_> Is 1.13.3 tested against maas yet?  I was trying to figure out where to download
<marcoceppi> kurt_: 1.13.3 is in ppa:juju/devel
<kurt_> thanks :)
<marcoceppi> kurt_: as it stands now all 1.EVEN releases are in juju/stable and all 1.ODD releases are in juju/devel - the versioning follows the linux kernel where odds are devel and evens are stable
<kurt_> marcoceppi: thanks.  Don't particular app release levels follow the different ubuntu release levels too?  Like isn't 1.14 for raunchy rabbit or whatever it's called?
<kurt_> rearing rhinoceros or whatever
<sarnold> kurt_: that's only packages in the archive (it'd be juju 0.7-0ubuntu1 in the raring ringtail archive)
<sarnold> kurt_: ppas have no rules
<marcoceppi> kurt_: right now the only version of juju in raring, 13.04, is 1.10 and that's in backports
<marcoceppi> kurt_: we've got 1.14 in saucy, 13.10, but the PPA will move forward regardless of the version in the archives
<marcoceppi> well, we have 1.13.3 in saucy, but that is essentially 1.14
<kurt_> ok, still learning the release level stuff
<sarnold> marcoceppi: hrm, saucy still has pyjuju! https://launchpad.net/ubuntu/+source/juju
<marcoceppi> sarnold: don't look at me!
<sarnold> jcastro: hey can I give you the funny look? :) ^^^
<marcoceppi> sarnold: it looks like juju points to 1.13.3, there's a juju-0.7 in saucy
<marcoceppi> but I think that's just for backwards compat?
<sarnold> marcoceppi: hrm... I wonder what I'm missing here.
<jcastro> yeah
<jcastro> 0.7 is there for people who want it
<jcastro> but apt-get install juju does the right thing
<sarnold> oh good :) crisis averted :D
<sarnold> thanks guys
<jcastro> sarnold: jamespage has handled everything, it's lovely
<kurt_> is there a problem with sync-tools in 1.13.3?
<kurt_> http://pastebin.ubuntu.com/6089411/
<kurt_> s/http://pastebin.ubuntu.com/6089411//
<kurt_> http://pastebin.ubuntu.com/6089417/
<kurt_> sync-tools don't appear to be downloading correctly
<kurt_> nevermind: glitch passed apparently
<kurt_> Is there a later version (but somewhat stable version) of the juju-gui I could be testing with?
<kurt_> currently working with charm: cs:precise/juju-gui-76
<kurt_> or 0.9.0
<lamont> 2013-09-10 21:51:28 ERROR juju supercommand.go:235 command failed: no tools available
<lamont> clearly, I missed something simple.
<lamont> jcastro: around? ^^
 * lamont tries the "juju sync-tools" route
<sarnold> lamont: while you're waiting for an expert to weigh in :) I believe I've read that the tools have to be in an s3/swift/etc bucket -- is that configured properly in the environments.yaml?
<lamont> sarnold: well.
<lamont> this a new everything
<lamont> so, "wrong" is a rather likely scenario
<sarnold> lamont: hehe :)
<lamont> found 0 tools in target; 8 tools to be copied
<sarnold> lamont: any luck with sync-tools?
<lamont> 2 down, 6 to go
<sarnold> promising :)
<lamont> error: cannot start bootstrap instance: no "precise" images in co-01 with arches [amd64 i386]
<lamont> now to figure out what the magic names are that it looks for
<lamont> public-bucket-url: <URL TO JUJU-DIST BUCKET> <-- I would love to know how to construct that URL
<thumper> lamont: hey there
<thumper> lamont: luckily wallyworld is spending his time making this better
<lamont> that will be wonderful later.
<lamont> I'm hoping for "finally, it worked" tonight.
<thumper> lamont: I think wallyworld may be able to help now if he is around
<lamont> and afk for a goodly while, sadly.
<wallyworld> lamont: you need to be given the public bucket by the cloud admin. but it is going away soon hopefully
<wallyworld> lamont: you get the url by looking at the keystone endpoint
<lamont> wallyworld: I am the cloud admin.  can you provide me with clue?
 * lamont has about 10 minutes before he disappears again
<wallyworld> lamont: the url is the public endpoint url for the world readable container which has been created in order to hold the tools tarballs
<wallyworld> lamont: eg for canonistack, it is https://swift.canonistack.canonical.com/v1/AUTH_526ad877f3e3464589dc1145dfeaac60
<wallyworld> does that make sense?
<lamont> almost
<wallyworld> is this for a new cloud?
<lamont> there's the origin of that token, and how to create a public bucket, that remain beyond my experience and understanding
<lamont> yes
<lamont> private cloud
<wallyworld> you can use the swift client to create a container
<wallyworld> and then mark it as world readable
<wallyworld> i'd have to look up the exact commands
<wallyworld> once you have the juju-dist container, you then do swift post -r .r:*,.rlistings juju-dist
<wallyworld> is there juju doc somewhere for how to set up an openstack deployment? i'm not across what our tech writer has produced
<wallyworld> but basically you need to set up a swift account, create a juju-dist container, make it world readable, and then add tools tarballs to a tools sub-container
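The steps wallyworld describes, sketched with the python-swiftclient CLI (the tarball name is illustrative):

    # create the container, make it world readable,
    # and upload a tools tarball under tools/
    swift post juju-dist
    swift post -r '.r:*,.rlistings' juju-dist
    swift upload juju-dist tools/juju-1.12.0-precise-amd64.tgz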
<lamont> I've found at least 2 docs claiming to describe at least bits of it, without actually working correctly for me.
<wallyworld> :-(
<lamont> swift stat juju-dist gives me the account AUTH_${string} and
<lamont> Container: juju-dist
<lamont>   Objects: 3
<lamont>     Bytes: 1139
<lamont>  Read ACL: .r:*,.rlistings
<wallyworld> that looks ok i think
<lamont> if I am silly enough to think that https://swift.$mumble/v1/AUTH_$string should give me more than a 401 when I smack it with wget, well, I get the 401
<wallyworld> if you do a keystone catalog you can see the full url
<wallyworld> that will give a 401 i think, you need to type the url of an object in the container
<wallyworld> i think wget of the top level does give a 401
<wallyworld> there should be files in juju-dist called "tools/juju-blah.tar.gz"
<wallyworld> wget on those should work
<wallyworld> for canonistack, this is wget able - https://swift.canonistack.canonical.com/v1/AUTH_526ad877f3e3464589dc1145dfeaac60/juju-dist
<wallyworld> as are the tools therein
<hatch> hey guys I'm getting an error with a newly installed juju-core from /stable when I try to `juju --version` it says `error: flag provided but not defined: --version`
<hatch> it's on 12.04
<lamont> so... if the account is AUTH_nnnnn and juju-dist is the container, and it has tools/juju-1.12.0-precise-amd64.tgz, then the url is?
<lamont> assuming https://swift.foo.com/v1/
<wallyworld> lamont: the public-bucket-url would be https://swift.foo.com/v1/AUTH_nnnnn
<wallyworld> see the canonistack example above
<lamont> error: cannot start bootstrap instance: no "precise" images in co-01 with arches [amd64]
<wallyworld> lamont: you now need to set up some image metadata so juju knows the image id to use
<wallyworld> there is a tool for that
<wallyworld> juju metadata generate-image i think
<wallyworld> you need to know the image id you want to use for amd64 precise
<wallyworld> and then you run the above tool. see the --help
#juju 2013-09-11
<wallyworld> then you copy the generated files up to the public bucket
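A sketch of that metadata step; the flag names are assumptions (check `juju metadata generate-image --help`) and the image id is a placeholder:

    # generate simplestreams image metadata for the private cloud
    juju metadata generate-image -i <image-id> -s precise -a amd64 -d ./meta
    # then copy the generated files up to the public juju-dist bucket
    swift upload juju-dist ./meta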
<wallyworld> hatch: i just tried juju --version and it works for me, what version of juju do you think you have installed?
<lamont> error: flag provided but not defined: --version
<lamont> 1.12.0-0ubuntu1~ubuntu12.04.1~juju1
<hatch> wallyworld: I JUST did it this morning, following the 'install guide'
<lamont> anyway, afk for a couple of hours
<hatch> trying to see what the typical user would see
<hatch> lamont: thanks for confirming :)
<wallyworld> lamont: ok, ping us in #juju-dev if you want any more input
<wallyworld> hatch: i'm running from source, but "juju --version" worked  for me
<sarnold> hatch: the python version and the go version have different --version vs version behavior. that's almost a decent way to tell which one you have already.... :)
<sarnold> wallyworld: (oh, has that been fixed?)
<wallyworld> i didn't realise there was a difference
<wallyworld> i've only ever run go juju
<hatch> oh that's odd :)
<sarnold> ah :))
<wallyworld> ian@wallyworld:~$ juju --version
<wallyworld> 1.15.0-raring-amd64
<hatch> sarnold: thanks for clearing that up
<sarnold> wallyworld: neat! yay :)
<hatch> 1.12.0-precise-amd64
<hatch> here
<hatch> on stable
<wallyworld> ah ok. there's been lots of fixes post 1.12 i think
<wallyworld> we are looking to release 1.14 next week
<hatch> well then....someone should update stable :P
<wallyworld> for now, there's a 1.13.3
 * hatch stokes fire
<wallyworld> hatch: agree, i think that will happen
<wallyworld> each even number release is considered stable
<hatch> but really though thanks - it looks like the go version doesn't handle the -- on 1.12.0
<wallyworld> appears so, sorry
<hatch> but if it works fine on your version then I won't file a bug
<wallyworld> ok
<hatch> this happened when running through the Local Configuration setup docs
<hatch> it says `juju generate-config --show` which failed
<hatch> just fyi
<wallyworld> hmmm. seems the docs may need some love then
<hatch> I'm guessing that it was just done with a more recent version
<hatch> unless even yours doesn't do --show :)
<marcoceppi_> lamont: still having issues with private cloud?
<wallyworld> marcoceppi_: i think he's afk for a bit, but i got his public bucket url sorted, now he needs to generate image metadata
<marcoceppi_> wallyworld: cool
<AskUbuntu> Juju and MAAS get error in apt | http://askubuntu.com/q/344064
<kenn> Question about the bridge node. Due to budget constraints I can only run a single instance in the cloud. Are there any reasons why I wouldn't want to deploy my services to that first machine 0, or anything I should be aware of when doing that? I've done it locally so it's possible to do.
<AskUbuntu> Can I deploy juju on Eucalyptus | http://askubuntu.com/q/344137
<fwereade_> jcastro, marcoceppi_: if either of you are around, I'd appreciate advice re resolving doc conflicts caused (at least partly) by header/footer changes
<fwereade_> jcastro, marcoceppi_: eg "take your tree, copy in $files from trunk, run tools/build.py, commit, merge trunk in"
<AskUbuntu> help: bootstrap error | http://askubuntu.com/q/344168
<jcastro> fwereade_: evilnickveitch is your guy there
<jcastro> fwereade_: I am pretty sure he strips all that out and regens the footer/header anyway
<fwereade_> jcastro, thanks, I chatted to him
<jcastro> marcoceppi_: also ... review queu!
<marcoceppi_> jcastro: yup :\
<evilnickveitch> jcastro, fwereade_ I just posted to the list about the new, super-easy way of creating pages I just sorted out this morning...
<jcastro> k
<fwereade_> evilnickveitch, cool, thanks
<bloodearnest> heya all - am having some problems with juju-deployer (0.2.3) hanging after deploying all services, but before adding relations
<bloodearnest> Ctrl-C'ing gives me a tb: https://pastebin.canonical.com/97357/
<bloodearnest> same tb every time
<bloodearnest> had this happen on 2 different raring machines, on both lxc and openstack (canonistack) envs
<bloodearnest> any pointers to fix?
<jcastro> http://pad.ubuntu.com/7mf2jvKXNa
<jcastro> T minus 30 minutes until the Charm Call!
<jcastro> 10 minutes until the juju charm call!
<mattyw> jcastro, is there a link to just watch the hangout?
<marcoceppi_> mattyw: ubuntu-on-air.com
<marcoceppi_> mattyw: http://ubuntuonair.com/
<mattyw> marcoceppi_, of course, thanks
<Chor> hi there
<kurt_> Can you guys tell me if it's correct that a deployment fails when there is an existing configuration file for a charm?  I'm seeing this when having destroyed keystone, then trying to redeploy it.  Maybe an adjunct question is: when a service such as keystone is destroyed, should it clean up its configuration files cleanly?
<marcoceppi> kurt_: if you destroy a service, then try to deploy it again, I believe you get an error about service already deployed
<marcoceppi> kurt_: is that the error you're getting?
<kurt_> marcoceppi: no, the problem, specifically with keystone was that the /etc/keystone/keystone.conf is left behind
<kentb> does anyone have a *fairly current* and recommended set of steps for deploying openstack charms with maas & juju, especially with quantum-gateway.  For whatever reason, with quantum-gateway in the mix, keystone authentication with quantum gets hosed. every. freaking. time.
<marcoceppi> kurt_: that's a different issue
<sarnold> early charms might expect the unit to be destroyed when the service is unconfigured / terminated / etc. I'm not surprised it didn't do a great job of cleaning up
<kurt_> kentb: I've been working on this for some time.  I hope to have something out in the near future
<kurt_> there are several guides out there, but you have to be patient and figure it out
<kentb> kurt_: yeah, that's the part that's about burned up (patience).  I'll also take whatever you have in the meantime. The quantum-gateway piece is the one that I just can't seem to crack.
<kurt_> marcoceppi: this appears to be some conflict in removing 2013.1.2-0ubuntu2~cloud0 and its need for /etc/keystone/keystone.conf
<kurt_> kentb: yes, I've run into this too.  you have to plan your deployment topology very carefully
<kurt_> kentb: have you seen jamespage's excellent guide? https://wiki.ubuntu.com/ServerTeam/OpenStackHA
<kentb> kurt_: yep. I've worked off of that many times.  I don't have enough nodes for the HA part, so, I've tried to make do with 6 physical machines, each with at least two nics.
<kurt_> kentb: I'm doing mine completely on VMs
<kurt_> kentb: and you are running in to one of the challenges I am facing as well.  I'm consolidating a lot of services to fewer nodes.
<kentb> kurt_: yeah, I'm wondering if putting too much on one machine might be hurting me
<kentb> (with juju-core)
<kurt_> these guys test with a small amount of nodes, so they have managed to figure it out
<kurt_> if there was one thing I wish were out there (hint hint jcastro)
<kurt_> it would be a minimal install blueprint
<kurt_> I hope to figure that out on my own with what knowledge I have
<kentb> me too!
<kurt_> and eventually I am going to create a guide with all of this info - but I'm out in the wild right now
<kentb> join the club :)  I feel like I'm really close.
<kurt_> keep checking in to this channel.  Since we are on the same track, we should share ideas
<kentb> will do!
<kentb> and I agree
<kurt_> these guys do appreciate the bugs you find, so keep the info flowing
<kentb> definitely!
<jcastro> kurt_: you mean for openstack?
<kurt_> yes sir
<jcastro> yeah
<kentb> yep
<jcastro> adam_g: I'm supposed to talk to you about an openstack bundle actually
<kentb> quantum-gateway is kicking my butt
<kurt_> that is my numero uno goal right now
<kurt_> to get a working openstack deployment on VMs with the minimum install
<kurt_> but not the "virtual" method
<kurt_> I would love to have a "scalable" blueprint working with maas, juju, juju-gui and openstack
<marcoceppi> kurt_: There are a few deployer configs out there for that
<kurt_> marcoceppi: yup, but I've not been successful yet in making one work
<kurt_> I've gotten close, but no cigar
<kentb> same here
<jcastro> yeah, so arosales told me this morning that adam_g's been working on a working bundle
<marcoceppi> kurt_: kentb thanks for sticking with it. We're working on making this a very strong story in the near future
<kurt_> marcoceppi: I know you guys are.  I believe you feel this is an important and compelling use case and story
<kurt_> that's why I've been working hard on this
<kentb> marcoceppi: my pleasure. I'm learning a *ton*  I'm also working with a big OEM on a whitepaper on how to do this on their hardware.
<kentb> and I drew the openstack straw :)
<kurt_> marcoceppi: can you share those deployer configs you spoke of to see if there is anything I don't know about?
<marcoceppi> adam_g: where's the most recent version of the openstack deployers? are they still in the deployer repo?
 * marcoceppi knows you're working with jcastro on this as well
<jcastro> I only found out today
<jcastro> but I am keen on getting my hands on whatever he has, heh
 * marcoceppi thinks we all are ;)
<adam_g> marcoceppi, in lp:openstack-ubuntu-testing, juju-deployer   -c etc/deployer/deployments.cfg  -l
<kurt_> are these similar to devstack type things?
<adam_g> those are all sort of specific to our lab, using custom charm branches and some lab-specific config
<adam_g> gimme a minute and ill put together a vanilla one for a simple deployment
<kurt_> adam_g: nice one, thanks mate
<kentb> woohoo
<marcoceppi> kurt_: so juju-deployer, if you're not aware, is a means to standup complicated juju environments
<jcastro> adam_g: hey so, I am thinking make a simple one, post it on the list, and ask for feedback
<marcoceppi> it's just a yaml file that can be used with juju-deployer to deploy, configure, and relate services
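A minimal sketch of such a file; the stack, services, and relation here are illustrative, not adam_g's config:

    # hypothetical deployment.yaml for juju-deployer
    my-stack:
      series: precise
      services:
        mysql:
          num_units: 1
        wordpress:
          num_units: 2
      relations:
        - [wordpress, mysql]

It would then be run with something like `juju-deployer -c deployment.yaml my-stack`.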
<adam_g> jcastro, yeah, i need to put up a wiki that documents this. i have a WI for it and hope to get to it soon.
<kurt_> marcoceppi: I saw that information for the first time at the bottom of jamespage's manifesto
<kurt_> marcoceppi: is the juju-deployer discussed at length anywhere?
<jcastro> hazmat: maybe it's time to post on the list about deployer as well
<kurt_> I would love to read about what it does and how it does it
<adam_g> jcastro, http://paste.ubuntu.com/6093510/
<adam_g> jcastro, this is a vanilla openstack + ceph. ceph node is single node (with no redundancy), swift is single storage node as well
<jcastro> is this that deployments.cfg or is this a new thing?
<kurt_> adam_g: how many nodes is this?  is it single-node?
<adam_g> kurt_, every service in its own machine
<jcastro> ok so you could add-unit to this?
<kurt_> how about the quantum-networking - is there a local.yaml or something it is referencing?
<jcastro> 2013-09-11 13:57:55 Deployment name must be specified. available: openstack-services', 'precise-grizzly', 'raring-grizzly')
<jcastro> which one do I use?
<kurt_> I think both kentb and myself have managed to get pretty far, but have not been able to get a complete working set up because of the networking
<kentb> yeah, that backfires almost every time...there's something screwy with keystone in the end product, and I'm not sure what broke
<jcastro> adam_g: where do you want the wiki page to be? I can start documenting this
<kurt_> kentb: I don't think keystone is the problem
<kurt_> at least in my set ups it wasn't
<kurt_> it was the ability to assign an IP and spin up vm's from horizon
<kentb> kurt_: really?  What were you hitting?  For me, I was always getting a 401 error if I tried to do anything with quantum.
<kurt_> which was probably due to my networking being incorrect
<adam_g> jcastro, im not sure. somewhere near the current openstack HA wiki page? dont have URL handy
<jcastro> jujuclient.EnvError: <Env Error - Details:
<jcastro>  {   u'Error': u'invalid entity name or password',
<jcastro>     u'ErrorCode': u'unauthorized access',
<adam_g> jcastro, https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<jcastro> I get this kind of stuff when I try to run deployer on that pastebin'ed bundle
<jcastro> https://help.ubuntu.com/community/UbuntuCloudInfrastructure/JujuBundle for now
<jcastro> adam_g: ok, so now, next step is what to do wrt. openstack-service, precise-grizzly, or raring-grizzly
<jcastro> I have an environment up and running
<jcastro> juju-deployer -c openstack.cfg  -elocal precise-grizzly
<jcastro> this seems right? Looks like it's working
<kurt_> jcastro: are you able to access horizon and spin up VMs?
<jcastro> it's firing up right now
<jcastro> gimme like 5 minutes
<kurt_> kk
<jcastro> juju-deployer -v -c openstack.cfg  -elocal precise-grizzly
<adam_g> jcastro, good luck using local provider :)
<jcastro> adam_g: I just wanted to get the syntax for the command down, etc.
<adam_g> ah, right
<adam_g> jcastro, openstack-services is the base deployment, precise-grizzly just inherits and sets series and the config to install grizzly
<jcastro> adam_g: can you add some info here? https://help.ubuntu.com/community/UbuntuCloudInfrastructure/JujuBundle
<adam_g> jcastro,  on a call atm. i will, cant promise its going to happen this week tho
<jcastro> k
<jcastro> do you think this will fire up on like hpcloud or something?
<jcastro> I'd like to see it work at least once!
<adam_g> not sure
<jcastro> but it works on MAAS?
<kurt_> adam_g: if you could include some info around how the networking is handled (i.e. IP/CIDRs, interfaces, public ranges, etc) , I would be grateful
<kurt_> jcastro: did VM spin up?
<jcastro> a bunch of containers spun up
<jcastro> fails on swift-storage-z1
<kurt_> anything interesting in debug-log that's an easy fix?
<jcastro> http://imgur.com/Bh114KN
<jcastro> this is the 2nd time I'm trying it, I'll check debug log
<jcastro> failing on the install hook
<kurt_> it looks like most of them are stuck deploying
<jcastro> yeah, a bunch of them are turning green now
<kurt_> nice
<kurt_> can you pastebin your install hook error for swift?
<jcastro> ok so deployer showed an error
<jcastro> but the unit came up just fine
<jcastro> and I think I found a bug in nova-compute though
<jcastro> adam_g: on the nova-compute charm: http://pastebin.ubuntu.com/6093626/
<jcastro> should I report that as a bug?
<kentb> ah! that might be what's killing me too...my nova-compute instance was DOA and libvirt was all messed up as one of the symptoms
<adam_g> jcastro, full log?  i believe that's one of the many issues you'll hit doing this in containers
<jcastro> http://paste.ubuntu.com/6093637/
<kurt_> adam_g: wasn't the log-filling issue fixed in 1.13?
<kurt_> (juju 1.13)
<kentb> kurt_: yep...hasn't come back for me since updating
<kentb> bug killed my bootstrap node within a few hours
<kurt_> yes, mine too in 1.12
<kentb> ok. so if an instance is stuck in 'dying' state is there a good way to nuke it?  I can't terminate the machine b/c we're indefinitely stuck there. Please tell me I don't have to destroy-environment and start over (using juju-core 1.13.3-1-1737).
<kentb> the agent-state is 'error' with a hook-failure during config-changed
<kentb> nm I ran juju resolved ceph and then that allowed me to kill it
<thumper> jcastro: ping
<jcastro> yo!
<thumper> jcastro: got a few minutes for a hangout?
<jcastro> yeah let me finish something
<jcastro> ~10 min?
<thumper> sure
<kurt_> kentb: juju resolved <service>
<kurt_> I just went over this with marcoceppi yesterday
<marcoceppi> kurt_: yes, we need to update the documentation for this
<kurt_> Can anyone tell me if there is default username/password for console only access for nodes?  I don't have ssh access
<marcoceppi> kurt_: everything is done via ssh
<kurt_> I'm hosed then
<marcoceppi> kurt_: you can try ubuntu with the password ubuntu
<kurt_> that doesn't work
<marcoceppi> then I don't think so
<marcoceppi> We try to avoid default passwords and users, because that's a vulnerability
<kurt_> marcoceppi: what if I am trying to add an interface after the fact to a node?  I keep messing up my routing when I do it
<kurt_> sorry, that wasn't clear
<kurt_> is there an easy way to add an interface to a node once it's been deployed?
<kurt_> I appear to keep screwing up my routing
<kurt_> this is on maas btw
<kurt_> not that that matters
<kentb> kurt_: yep, that unclogged it. thanks!
<kurt_> kentb: good stuff
<jcastro> hey thumper
<marcoceppi> kurt_: no, you can't add interfaces after the fact unless you use upgrade-charm
<kurt_> upgrade-charm?
<marcoceppi> kurt_: yes. So if you're developing a charm locally, and you deploy using --repository and local:, and you later ADD a relation/interface to the metadata.yaml, you can run `juju upgrade-charm --repository ... <service>` to upgrade the charm and register the new relation/interface
<marcoceppi> kurt_: changing interfaces/relations or removing them can be quite dangerous if you don't first remove all relations
 * marcoceppi is working on the upgrade-charm docs atm
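Sketched, with the repository path and charm name as placeholders:

    # after adding the new relation/interface to ~/charms/precise/mycharm/metadata.yaml
    juju upgrade-charm --repository ~/charms mycharm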
<kurt_> so, do I have to entertain the idea of completely statically assigning my maas installation to make openstack work, as in jamespage's doc?
<kurt_> I am sure I could avoid using the vip parameter, but does that only apply to HA situations?  Or is it a real virtual IP that can be assigned on top of juju's administrative IP?
<kurt_> Looks like this is going to be a problem too:  https://bugs.launchpad.net/juju/+bug/1188126
<_mup_> Bug #1188126: Juju unable to interact consistently with an openstack deployment where tenant has multiple networks configured <canonistack> <openstack> <serverstack> <juju:New> <juju-core:Triaged> <https://launchpad.net/bugs/1188126>
<freeflying> how many constraints from juju python have been implemented in juju-core?
#juju 2013-09-12
<sinzui> hi davecheney, I see a 1.14 branch and series was created shortly after 1.13.3. Will the two fixes be merged into both 1.14 and trunk so that the release can be cut?
<zradmin> Im trying to deploy charms with juju 1.13.3 to a maas environment, and while the first charm allocates resources from maas, the next charm i try to deploy doesn't even though there are nodes in the ready state
<zradmin> has anyone seen this before?
<kurt_> zradmin: you may have clock/timing issues with oauth
<zradmin> so the time between the bootstrap node and the maas region controller is off?
<kurt_> zradmin: does the debug-log complain of oauth problems?
<kurt_> yes, it could be
<kurt_> do both nodes have direct access to the internet?
<zradmin> yes
<zradmin> they should be set to pool.ntp.org
<kurt_> what does debug log say?
<zradmin> its still scrolling, i had the bug in 1.12 where the all-machines.log grew exponentially
<kurt_> right, don't use 1.12
<zradmin> yeah, i just found all the threads on that today it was driving me crazy
<kurt_> I got off that as soon as I could.
<kurt_> 1.13.3 is what I'm working with
<zradmin> yeah i'm on that as well
<zradmin> just a second, i deleted the log file and am reattempting to deploy the charm to get fresh data
<kurt_> you don't need to delete the log.  I've never worried about that
<kurt_> start up another terminal and do "watch debug-log" from your root node
<kurt_> I run 3 terminals when I'm deploying
<kurt_> 1 for commands, 1 doing a "watch juju status" and 1 doing "watch juju debug-log"
<kurt_> that works well for me
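That workflow, sketched out (note that `juju debug-log` already streams, so it does not need watch):

    # terminal 1: juju commands
    # terminal 2: poll the environment
    watch juju status
    # terminal 3: follow the consolidated logs from the bootstrap node
    juju debug-log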
<zradmin> cool
<zradmin> this is a sample of the garbage im getting in the log
<kurt_> I sometimes have a 4th window to juju ssh to whatever node I'm working with
<zradmin> ceph3:2013-09-12 00:38:09 ERROR juju runner.go:211 worker: exited "api": websocket.Dial wss://juju.unity:17070/: dial tcp 10.10.33.1:17070: connection refused
<zradmin> ceph3:2013-09-12 00:38:09 INFO juju runner.go:245 worker: restarting "api" in 3s
<zradmin> ceph3:2013-09-12 00:38:12 INFO juju runner.go:253 worker: start "api"
<zradmin> ceph3:2013-09-12 00:38:12 INFO juju apiclient.go:106 state/api: dialing "wss://juju.unity:17070/"
<zradmin> ceph3:2013-09-12 00:38:12 ERROR juju apiclient.go:111 state/api: websocket.Dial wss://juju.unity:17070/: dial tcp 10.10.33.1:17070: connection refused
<zradmin> ceph3:2013-09-12 00:38:12 ERROR juju runner.go:211 worker: exited "api": websocket.Dial wss://juju.unity:17070/: dial tcp 10.10.33.1:17070: connection refused
<zradmin> ceph3:2013-09-12 00:38:12 INFO juju runner.go:245 worker: restarting "api" in 3s
<zradmin> ceph3:2013-09-12 00:38:06 INFO juju runner.go:253 worker: start "api"
<zradmin> ceph3:2013-09-12 00:38:06 INFO juju apiclient.go:106 state/api: dialing "wss://juju.unity:17070/"
<zradmin> ceph3:2013-09-12 00:38:06 ERROR juju apiclient.go:111 state/api: websocket.Dial wss://juju.unity:17070/: dial tcp 10.10.33.1:17070: connection refused^CConnection to juju.unity closed.
<kurt_> ARRGGGH
<kurt_> don't paste in here...
<zradmin> whoops
<zradmin> sorry about that
<kurt_> use pastebin
<zradmin> ok
<kurt_> pastebin.ubuntu.com
<zradmin> http://pastebin.ubuntu.com/6094948/
<zradmin> my apologies to everyone else in the room
<kurt_> you are having connection issues
<zradmin> the ceph service in the log deployed correctly and seems to be checing in to juju
<zradmin> but that was deployed under 1.12 and then i upgraded the tools
<zradmin> should i just destroy it again and start clean from 1.13?
<kurt_> I think you may need to destroy-env - but I can't answer for certain
<zradmin> ok, I'll try that first
<kurt_> also - from your node, make sure you can ping out to some internet hosts
<zradmin> why is 1.12 listed under juju/stable? I've had nothing but issues since testing with it
<zradmin> the python client never gave me too many issues
<kurt_> I'm not with canonical, can't help you with that
<kurt_> sorry
<sarnold> zradmin: the juju team has chosen to do evens for stable, odd for unstable, and as I understand they just haven't done a 1.14 release yet..
<zradmin> ah yeah, I meant to ask "do you know why"
<kurt_> sarnold: isn't 1.14 for saucy?
<sarnold> .. never mind the results of annoying bugs in the stable series :) but the intention was for 1.13 to be less stable than 1.12 because it was under more active development
<zradmin> ah ok
<kurt_> 1.12 is dead though I think I heard
<sarnold> yeah probably best to think that :)
<sarnold> 1.13.3 is still most recent in saucy, I hope they can fix that up before too much longer
<kurt_> its been working pretty well for me so far
<zradmin> cool, im hoping i can get my test openstack environment up and running soon
<zradmin> ok now when trying to destroy the environment im getting a 409 conflict message
<kurt_> did you juju resolve <service> before trying to do so?
<kurt_> ah…wait
<kurt_> you are destroying your environment completely
<zradmin> yeah im destroying each service individually right now and then seeing if it will let me destroy the environment
<zradmin> its not
<zradmin> getting the mass 409 error still
<zradmin> ok after removing all the nodes from maas it let me destroy the environment
<zradmin> ok well this is going to take a bit while the environment bootstraps itself again, but thank you kurt_ and sarnold for the assistance!
<zradmin> (and the lesson in pastebin)
<kurt_> zradmin: I figured you may need to do that.  you may be having some strange connectivity issues
<sarnold> good luck zradmin :)
<kurt_> sarnold: what does "agent-state-info: 'hook failed: "relation-changed"' mean?
<kurt_> getting that from nova-cloud-controller after having deployed some other services
<sarnold> kurt_: eek, no idea, sorry
<kurt_> sarnold: ok, thnx anyways
<zradmin> i had an issue with that under a .7, but got past it. what service are you adding a relation to nova-ccc when it gives you that message?
<kurt_> zradmin: it happened sometime after deploying nova-compute and adding the relations
<zradmin> kurt_: was it connected to keystone/rabbitmq etc. already? Also are you following this guide: https://wiki.ubuntu.com/ServerTeam/OpenStackHA
<kurt_> zradmin: I'm loosely following that guide
<zradmin> kurt_: if you're following the HA guide, my biggest problem stemmed from ceph not actually setting up the osd's properly, but for some reason it let me stand up the rest of the services with no noticeable errors. The root of that was essentially that the services never started on the VIPs, so Nova-CCC was the first service to report a problem for me
<kurt_> yes, that's a common problem a few of us have run in to.  its about topology too
<kurt_> I'm actually trying to consolidate services down to as few nodes as possible
<kurt_> and I'm definitely not doing HA, and I'm 100% on VMs in Vmware Fusion
<zradmin> yeah i was running alot of the core api services on vms as well
<kurt_> I've gotten very close, but not having the topology right has bitten me more than once
<kurt_> I've gotten everything done minus the ability to spin up VMs
<kurt_> in openstack I mean
<kurt_> I wasn't using ceph though...
<zradmin> i see. I was able to upload images on my last attempt and start creating instances, but they would never finish deploying - quantum was my issue i think.
<melmoth> with juju-core i cannot bootstrap on a maas install when i'm behind a proxy. It used to work (after some change in MAAS) with pyjuju
<melmoth> cloud-init error: https://pastebin.canonical.com/97399/
<melmoth> any idea what it could be and if there s a way to fix it ?
<thumper> melmoth: that log is from pyjuju not juju-core
<melmoth> hmmm, so, i ended up with juju py on my bootstrap node...
<melmoth> hu, actually, nope
<melmoth> but there s nothing like juju installed on the bootstrap node
<thumper> I can tell by the log format, and that it mentions py files not go files
<melmoth> well, it mentions 2013-09-12 03:19:23 ERROR juju supercommand.go:235 command failed: no reachable servers
<melmoth> but still, no version of juju seems installed, and i did not spot any error (like an apt-get install failing) before that one
<melmoth> ahhh
<melmoth> Sep 12 03:08:00 bootstrap [CLOUDINIT] cc_apt_update_upgrade.py[WARNING]: Source Error: ppa:juju/stable:add-apt-repository failed
<thumper> ah so it did
<thumper> ah, it is the cloud init python failure
 * thumper sighs
<melmoth> it was not able to install the ppa for juju-core, most probably because the gpg key stuff failed behind a proxy (i had to change that in maas, it used to work with pyjuju)
<melmoth> ahhh, i think i know, my previous change only added the ppa:juju/pkgs and here i think it's trying ppa:juju/devel
<melmoth> where can i find the list of commands that cloud-init feeds to the bootstrap node?
<thumper> melmoth: on the machine or in the code?
<melmoth> in the code, so i can change it.
<thumper> melmoth: mostly in juju-core/environs/cloudinit/cloudinit.go
<melmoth> thanks
<melmoth> is it like python, compiled on the fly, so i can change it without repackaging the whole stuff?
<thumper> no
<thumper> go is a compiled language
<melmoth> grumble
<thumper> and it creates a statically linked executable
<thumper> I know that there is effort around making sure that juju works in private clouds
<thumper> with firewalls etc
<thumper> so please document your issues to the juju mailing list
<bradm> any charmers about who feel like reviewing my squid-reverseproxy fixes?  they're pretty minor, but actually let the charm work on juju > 0.7
<davecheney> bradm: this one ?
<davecheney> https://code.launchpad.net/~charmers/charms/precise/squid-reverseproxy/trunk/+merge/185202
<davecheney> diff, he is empty
<bradm> davecheney: huh
<bradm> davecheney: I did the merge proposal against http://bazaar.launchpad.net/~brad-marshall/charms/precise/squid-reverseproxy/http-port-config/revision/42, I thought
<bradm> davecheney: I must have screwed it up somehow
 * bradm retries.
<bradm> davecheney: https://code.launchpad.net/~brad-marshall/charms/precise/squid-reverseproxy/http-port-config/+merge/185204
<bradm> davecheney: looks better?
 * davecheney looks
<bradm> davecheney: let me know if there's a problem with that merge
<davecheney> bradm: change looks good
<davecheney> i'll have to wait for marcoceppi
<davecheney> i'm just a baby charmer
<davecheney> i still have my training wheels attached
<bradm> davecheney: cool
<bradm> davecheney: I'm redoing python-moinmoin charm in python too, using charmhelpers
<davecheney> noice
<bradm> davecheney: https://code.launchpad.net/~brad-marshall/charms/precise/squid-reverseproxy/fixed-ports-path/+merge/184023 is still open too, if you can get someone to merge it that'd be great :)
<davecheney> bradm: looking
<davecheney> small changes are good
<davecheney> let's do more of those
<davecheney> ok, same deal
<davecheney> need marcoceppi to show me the ropes
<bradm> davecheney: I figure small, bite-sized changes make it easier on everyone
<bradm> davecheney: since it's obvious what they're doing, and I'm new at this too :)
<gnuoy> I have a charm which populates data into a database when the db-relation-{joined,changed} hooks are fired. The process of loading the data can take > 30mins. The hook does not return until the load completes, however juju status reports that the hook has successfully completed well before it actually has. Currently the charm doesn't log anything while the load is running, if that's relevant. Is this the expected behaviour?
<wellsb> I'm looking into implementing multiplayer functionality into an Ubuntu Touch game.  What charms should I be looking at?
<mgz> gnuoy: I wondered whether a timeout might be involved, but as far as I can see, it's just Cmd.Wait when running hooks and no other logic
<mgz> wellsb: what kind of thing are you after? I don't think there are any charms for game servers, unless nyancat counts. Looking at any charm which uses expose should give you some ideas though.
<wellsb> mgz: I'm thinking about something like what the channels api provides for google app engine.  Perhaps I can use node.js in tandem w/ haproxy to create a socket server?  Then my clients can connect to that?  I really don't have much experience in this area
<wellsb> I guess without a pomelo or maple charm, this isn't really possible
<mgz> wellsb: personally, I'd have a charm specific to your game server, rather than looking for a generic game-server charm that you then customise
<mgz> so, you'd write a charm that installs and uses nodejs/pomelo, rather than trying to have a generic pomelo charm with enough configurability to work with any game
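A minimal sketch of the game-specific charm mgz describes, written as a bash hooks/install; the pomelo package name, game path, and port are illustrative assumptions rather than anything from an existing charm:

    #!/bin/bash
    # hooks/install -- install node.js plus the game framework, then
    # unpack the game shipped in the charm's files/ directory.
    set -e

    apt-get update
    apt-get install -y nodejs npm

    # hypothetical framework package; swap in whatever the game uses
    npm install -g pomelo

    mkdir -p /srv/mygame
    cp -r "$CHARM_DIR/files/mygame/." /srv/mygame/

    # let clients reach the socket server once the service is exposed
    open-port 3010/tcp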
<stub> gnuoy: This is Bug #1200267, or at least a facet of it.
<_mup_> Bug #1200267: Expose when stable state is reached <canonical-webops> <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1200267>
<gnuoy> stub, that's the badger by the looks of it
<stub> gnuoy: juju status just tells you that the -joined hook has successfully run, which it probably has (given it probably needed to wait until the db's -joined hook had run and databases exist etc.)
<gnuoy> stub, thanks
<yolanda> hi, i'm trying to add some nagios functionality to a gerrit charm, and i need some advice. I see other charms like memcached, postgres... that are using nagios plugins for it, but there isn't a nagios plugin for gerrit. what would be the best way to proceed?
<lifeless> yolanda: you can monitor gerrits basic availability using the http plugin
<mgz> that sounds like a good starting point at least
<yolanda> lifeless, that's for http, and if i want to check the ssh port maybe i use the check_tcp one?
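Nobody answered that in-channel, but both checks are stock nagios plugins; roughly (gerrit.example.com is a placeholder, and 8080/29418 are gerrit's usual web and SSH ports, which may differ per deployment):

    # web UI availability via the http plugin, per lifeless
    /usr/lib/nagios/plugins/check_http -H gerrit.example.com -p 8080
    # SSH/git endpoint via check_tcp; 29418 is gerrit's default SSH port
    /usr/lib/nagios/plugins/check_tcp -H gerrit.example.com -p 29418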
<marcoceppi> davecheney: you around?
<fwereade_> evilnickveitch, ping
<evilnickveitch> fwereade_, hi
<fwereade_> evilnickveitch, I wanted to check in quickly about the docs I proposed to see whether they were seeming sane?
<evilnickveitch> fwereade_, it looks good so far, I haven't finished them all yet :)
<evilnickveitch> you did quite a bit of work there
<fwereade_> evilnickveitch, cool, I feel like I should do more really
<mgz> there are a few XXXX type bits
<mgz> but really the current needs landing I think
<fwereade_> mgz, fuck, yeah, I just remembered I didn't do sample charm metadata and config files
<fwereade_> mgz, were there more you spotted?
<mgz> just the examples I think
<fwereade_> mgz, thanks, well spotted
<fwereade_> evilnickveitch, but also to say that there's an effort underway to get all useful-for-developers docs collected in one place
<evilnickveitch> fwereade_, yeah, i know that, we are working out how that can be done
<fwereade_> evilnickveitch, and that initial indications suggest that something like restructured text docs in juju-core itself may be the most suitable source format
<fwereade_> evilnickveitch, but, yeah, the important thing is that you're aware
<fwereade_> evilnickveitch, anyway I tried to cover all the stuff I could think of that's relevant for charm authors
<fwereade_> evilnickveitch, the major holes are the subordinates page (which I think I recognise as basically the original spec document) and the implicit relations page (which I couldn't really make head or tail of)
<fwereade_> evilnickveitch, but I didn't touch those for fear of never finishing
<evilnickveitch> heh, I think m_3 already went over subordinates, but once the dust settles on all the new bits we should reappraise it
<fwereade_> evilnickveitch, ah, cool, I may be talking about the docs as of a few versions ago
<wedgwood> stub: I'm also curious if run should be in its own module
<wedgwood> which sort of goes along with your API stability question
<stub> wedgwood: yeah, it does look a little lonely
<stub> it should go in with the fixture - nothing else about it is charm specific, so its only purpose in charm-helpers is to support the fixture.
<wedgwood> stub: ok, so while I'm doing a proper review...
<wedgwood> in keeping with the 1.0 goals, I think both modules can be combined and they need more docs. An example in the module docstring would be excellent.
<wedgwood> stub: ah, and you'll also need to handle python-fixtures installation.
<stub> wedgwood: I need to declare the dependency if it is in contrib? Or is this because you want it moved to core?
<wedgwood> stub: I think that there will be things in core that handle their own dependency installation. like the fetch and archive modules. I want to keep the actual dependencies (as in setup.py) down
<wedgwood> stub: see charmhelpers.fetch.bzrurl
<stub> I don't think there is any sane way I can help with the python-fixtures dependency apart from documenting it.
<wedgwood> stub: ^^ and also, I don't mean for it to be in charmhelpers.core.testing, just at charmhelpers.testing
<stub> oh, yeah. that is better.
<wedgwood> stub: If I understand the use well enough, I *think* the API is solid. Adding additional kwargs to handle variations on placement shouldn't break anything.
<AskUbuntu> juju - how to set environment variable before running script inside hooks/install | http://askubuntu.com/q/344687
<stub> wedgwood: Ta. I'll do those changes tomorrow.
<wedgwood> stub: cool. don't know if you noticed that I commented on the MP. thanks man and have a good night.
<stub> wedgwood: yes, just saw the notification come through.
<stub> o/
<ahasenack> does anybody know if relation-ids can return relation ids in a broken state?
<ahasenack> or are all relation ids that are returned guaranteed to be in a working state?
<marcoceppi> ahasenack: there was talk on the list about this.
<ahasenack> marcoceppi: I was wondering if I could rely on relation-ids to know if a relation is established or not
<marcoceppi> ahasenack: I'm not 100% sure, looking int he archives
<avoine> do you guys have any idea how I could end up with this error on the agent of an lxc machine:
<avoine> ERROR juju machine.go:286 running machine 1 agent on inappropriate instance: machine-0:26b172cc-.....
<avoine> where machine-0:26b172cc is the Nonce that I got
<marcoceppi> avoine: how can we reproduce that error?
<avoine> I guess you should have the error when deploying using lxc
<ahasenack> marcoceppi: I have a service that will only start (initscript-wise) after a db relation is joined
<ahasenack> marcoceppi: was wondering what's the best way to track that
<ahasenack> marcoceppi: touch a file at the end of db-relation-changed for example?
<ahasenack> marcoceppi: the problem is actually in config-changed, it tries to start the service at the end. But the hook execution order at deploy time is
<ahasenack> marcoceppi: install hook and then config-changed hook
<ahasenack> so it's that run of config-changed where the start will fail, because it's not related to the db yet
<avoine> ahasenack: you can list relations and check if there is a db one
<marcoceppi> ahasenack: that's how I do it, touch files, etc
<ahasenack> avoine: the question then becomes, will relation-ids only return established relations, or does it include broken ones in its output? Relations with errors
<marcoceppi> ahasenack: I touch a file, then run config-changed at the end of every hook, which will run hooks/start
<ahasenack> marcoceppi: ok
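The touch-file pattern marcoceppi describes, as a rough bash sketch; the flag path and service name are illustrative:

    # hooks/db-relation-changed: record that the db relation is up,
    # then re-run config-changed, which performs the start
    touch /var/lib/myservice/.db-related
    hooks/config-changed

    # hooks/config-changed: only attempt a start once the flag exists,
    # so the config-changed run right after install (before any db
    # relation) no longer fails
    if [ -f /var/lib/myservice/.db-related ]; then
        service myservice restart
    fi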
<avoine> relation_get_all() in the haproxy charm loops over relations and fetches relation information for each one
<avoine> then use that to configure haproxy.cfg
<avoine> marcoceppi: I'll dig up more and come back with a bug report if I found something
<marcoceppi> avoine: cool, I've not seen that error, but if you run everything with like --debug and -v you should get plenty of information
<avoine> ok
<avoine> thanks
<kurt_> marcoceppi:  can you tell me how multihomed AND statically assigned IP address systems are achieved with maas and juju (ie. for openstack)?
<kurt_> I was considering putting this out there to ask ubuntu, but thought maybe someone here could answer this
<jcastro> kurt_: did you ever sort your quantum thing?
<kurt_> jcastro: not yet.  
<kurt_> jcastro: did you get everything working yesterday?
<jcastro> no, it's on my todo this weekend
<kurt_> jcastro: do you have any ideas on my question?  should I put it out there to ask ubuntu?  I'll bet jamespage could answer it easily.
<jcastro> yeah
<jcastro> marcoceppi: 33 unanswered `juju` questions
<jcastro> oh hey sinzui
<jcastro> we should put unanswered questions from askubuntu tagged with "juju" in the review queue as well
<jcastro> just like a link to: http://askubuntu.com/questions/tagged/juju?sort=unanswered&pageSize=50
<marcoceppi> jcastro: i've got the review queue bearing down on me. Questions will have to wait
<sinzui> jcastro, ack, jcsackett , can you report a bug about that so that we can include it in your efforts
<jcastro> marcoceppi: I was mentioning you as sort of "just nod and validate my idea!"
<jcastro> sinzui: rock on ... lp:charmworld?
 * marcoceppi nods to jcastro
<sinzui> yep
<kurt_> jcastro: was the problem you saw yesterday in adding a relationship between nova-compute and nova-cloud-controller?  I keep seeing "        agent-state-info: 'hook failed: "relation-changed"'"
<kurt_> both can be deployed without issues, but as soon as I try to join them, nova-cloud-controller errors out
<kurt_> http://pastebin.ubuntu.com/6098340/
<lifeless> kurt_: there is a debug thing
<lifeless> kurt_: I think you'll need to do that to see whats failing
<jcastro> I'm not even getting past install hooks on some of them, but remember I'm on the local provider, there's a bunch of issues there left to resolve
<jcastro> this weekend I'm going to try to fire up the bundle on HP
<kurt_> lifeless: you are talking about the debug hooks, right?
<kurt_> I was considering trying that next
<lifeless> kurt_: no, the watch-all-the-logs and related drop-into-a-shell-when-a-hook-fires thing.
<lifeless> jcastro will know what I'm blathering about
<jcastro> juju debug-logs
<kurt_> yes, I do that
<kurt_> output is above in pastebin :)
<jcastro> kurt_: we're in your neck of the woods next month, I am wondering if we should just get together for beers with your stuff
<jcastro> and get you sorted for real
<kurt_> ah yeah sure :)
<kurt_> where and when are you talking?
<kurt_> lifeless: I was referring to debug-hooks - I am wondering how useful that will be here
<jcastro> yikes, debug-hooks, not -logs
<jcastro> sorry, long day!
<kurt_> ah... ok, that makes more sense :D
<jcastro> kurt_: week of 21 October, though hopefully it won't be this same issue, heh
<jcsackett> sinzui: there's a card on our kanban for askubuntu now.
<sinzui> thank you jcsackett
<zradmin> is there a way in 1.13 to destroy subordinate services yet?
<marcoceppi> zradmin: yeah, you've been able to destroy subs for some time
<marcoceppi> zradmin: just remove the relation
<zradmin> marcoceppi: odd, i removed the relation and the subordinate service and the main service have been stuck in a dying state for hours now
<marcoceppi> zradmin: can I see the juju status output?
<zradmin> marcoceppi: here it is http://pastebin.ubuntu.com/6098492/
<marcoceppi> zradmin: it says the agents are stopped
<marcoceppi> zradmin: anyways, run `juju resolved mysql-hacluster/0; juju resolved mysql-hacluster/1`
<marcoceppi> zradmin: that should finish the removal of the subs, and then the final cleanup of the juju status
<marcoceppi> zradmin: whenever a unit (or sub) is in an error state all future events for that unit are queued and event processing is stopped, even on a destroy-service or removal of a sub
<marcoceppi> zradmin: you need to mark the error as resolved in order for juju to process the next event
<marcoceppi> zradmin: that's why you see it in life: dying but it's not dead, because it's stuck with an error
<lifeless> zombie!
<marcoceppi> I would yell at evilnickveitch because this isn't in the docs yet, but he just quit
<zradmin> marcoceppi: ah ok, that makes sense - still relearning all the new changes in the go rewrite. in .7 it just seemed to do everything instantly
<zradmin> marcoceppi: that worked btw, so tyvm!
<marcoceppi> zradmin: yeah, that was a bug (technically) that has been fixed in the rewrite
<marcoceppi> zradmin: you're welcome!
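The sequence marcoceppi walked zradmin through, recapped as a sketch; the point is that a unit in an error state queues every later event, including its own removal:

    juju destroy-relation mysql mysql-hacluster  # teardown begins
    juju status                      # subs stuck at life: dying
    juju resolved mysql-hacluster/0  # mark the errors handled so the
    juju resolved mysql-hacluster/1  # queued removal events can proceed
    juju status                      # units finish dying and disappear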
<kurt_> Do debug-hooks work in 1.13.3?  It appears I need to manually set a bunch of environment variables.  Maybe it's not intended to work with the add-relation hooks?
<zradmin> trying to deploy an haproxy for mysql i'm getting an error now where corosync isn't starting because it's missing some principal? i've got the VIP etc set in my config file so i don't know what is missing to finish starting the service properly. here's the debug-log section that's relevant http://pastebin.ubuntu.com/6098666/
<marcoceppi> zradmin: haproxy for mysql, I don't think those two play with each other
<AskUbuntu> Juju debug-hooks for add-relation? | http://askubuntu.com/q/344862
<zradmin> marcoceppi: it's worked in the past, but it configures in active/passive mode (it's also what's in the public documentation i'm following on https://wiki.ubuntu.com/ServerTeam/OpenStackHA)
<zradmin> marcoceppi: i got a little farther with it, apparently maas and juju now deploy the nics as bridges (for lxc support maybe?) so i had to adjust the config for that but the VIP didn't come up
#juju 2013-09-13
<kenn> So I had an instance running on AWS which I started simply to test the constraints and make sure it was starting the right instance type. Once I had verified that it was, I immediately used destroy-machine and left it. A few hours later I noticed the machine was still running, and after repeated destroy-machine commands it didn't go away. I eventually terminated it from the AWS console, but it still shows up in juju stat. Is there a way I can tell juju to forget about the machine?
<davecheney> kenn: nope, sorry
<davecheney> not if the destroy failed
<davecheney> can you give some more information
<davecheney> like the output of juju status
<kenn> hang on
<davecheney> when you were trying to destroy the machine
<kenn> current output of juju status: http://pastebin.com/rj0g0UAi
<kenn> before I terminated the machine on AWS, instance-state on machine 1 said something other than missing. Sorry I didn't note down any of the information, but if it happens again I'll pick up the logs and such as well
<davecheney> kenn: was a machine created ?
<davecheney> hmm, i see dying, it probably was created
<davecheney> yeah, it has an instance
<kenn> I created machine 1 and requested it be destroyed very soon after that. The instance was created in Amazon, and I could also SSH to it
<kenn> it said dying when I realised it hadn't shut down 2.5 hours later
<kenn> that's when I killed it in AWS
<davecheney> ok, there is currently no way to remove the record from juju
<davecheney> sorry
<kenn> ok, cool. I'll leave it around for when I remake the environment
<kenn> thanks for your help. Next time I will grab more info
<davecheney> np, sorry i wasn't able to do more
<kenn> oh actually, just noticed one of my terminals still has the output of a tail -f on machine 1 for /var/log/juju/machine-1.log: http://pastebin.com/wuW91Eur
<davecheney> 2013-09-13 02:34:33 ERROR juju runner.go:211 worker: exited "api": websocket.Dial wss://ec2-23-23-45-19.compute-1.amazonaws.com:17070/: lookup ec2-23-23-45-19.compute-1.amazonaws.com.: no such host
<davecheney> wow
<davecheney> impossible
<kenn> best diagnostic ;)
<davecheney> that machine failed to boot properly
<davecheney> no idea what happened to it
<davecheney> ec2 plays the law of large numbers
<kenn> oh wow it's actually failing to connect to itself?
<davecheney> a certain % of machines spawned are duds
<kenn> yeah, strange things happen occasionally
<kenn> thanks for the help davecheney, I'll leave the entry in juju alone until I rebootstrap, which I will at some point anyway
<kenn> lol I missed the "no such host" bit, that's funny. yeah, law of large numbers
<kurt_> Hi guys - I put this request already to ask ubuntu yesterday, but is there any way to track progress with add-relation in debug-hooks? Or is it just as easy to look straight at the debug-log?
<marcoceppi> kurt_: Okay
<marcoceppi> kurt_: I've seen you ask this a few times, but I've been too busy to respond
<marcoceppi> kurt_: I'm making time to get you resolved, because the answer is yes
<kurt_> marcoceppi: thanks!
<marcoceppi> kurt_: debug-hooks will stop at every hook, so yes, you should be able to just trap the hook when you add relation. But I feel like you're experiencing an issue stopping that from happening
<kurt_> marcoceppi: I'm not seeing any output in the debug-hook window
<marcoceppi> kurt_: you should see a byobu/tmux window with numbers at the bottom
<marcoceppi> kurt_: right?
<kurt_> marcoceppi: yup, got that
<marcoceppi> kurt_: have you already run juju add-relation from your machine?
<kurt_> marcoceppi: yes, which gives the error  agent-state-info: 'hook failed: "relation-changed"'
<kurt_> which I believe is something to do with ssh keys
<marcoceppi> kurt_: so right now you have the relation-changed error and debug hooks open?
<kurt_> marcoceppi: no just general window
<marcoceppi> kurt_: Right, but the unit is currently in an error state, right?
<marcoceppi> if you run juju status in another terminal window from your machine it shows hook failed.. right?
<kurt_> yes
<marcoceppi> kurt_: PERFECT! You're just one step away from making this rock
<kurt_> I do "watch juju status" which gives me the error above
<marcoceppi> kurt_: in a terminal window other than the debug-hooks window, run `juju resolved --retry <unit>`
<kurt_> should I have the debug-hooks window open yet?
<marcoceppi> kurt_: yes, you should
<kurt_> k, hang on
<marcoceppi> kurt_: cool
<kurt_> oh actually I do, it was hidden
<marcoceppi> What happens: you've got debug-hooks open, then you run `resolved --retry`; it will attempt to run that hook again, but debug-hooks will catch it and put you in window 1 at the bottom, which should be X-relation-changed
<marcoceppi> kurt_: at that point you can run hooks/X-relation-changed or any hook and watch the output live
<marcoceppi> kurt_: you can even edit the hooks on the machine and run them over and over again
<marcoceppi> until you resolve the issue, just make sure you copy your changes to your local repo :)
<kurt_> it doesn't do anything
<kurt_> all I see is a root prompt
<kurt_> root@amcet:~#
<marcoceppi> kurt_: are you debug-hooks in to the right unit?
<kurt_>  juju debug-hooks nova-cloud-controller/0
<marcoceppi> have you run `juju resolved --retry nova-cloud-controller/0`
<kurt_> yep
<marcoceppi> kurt_: Okay, debug-hooks was added in 1.13.1
<marcoceppi> kurt_: which is why it's not working on your 1.12.0 deployed nodes
<marcoceppi> kurt_: if you're willing to give it a shot, there's a juju upgrade-tools command, which will upgrade the version of juju on all your nodes. I don't know if there's an upgrade path between 1.12 -> 1.13; I know they try to do them between stable versions, i.e. 1.10 -> 1.12 -> 1.14
<marcoceppi> kurt_: At worst, you'll have to destroy and try again
<marcoceppi> kurt_: the command I think you'll want to use is `juju upgrade-juju --dev --upload-tools`
<marcoceppi> which will upload to your maas bucket the latest tools and will select the latest dev, 1.13.3
<marcoceppi> kurt_: I've not used the command before, so I'm not sure how long this will take or what success looks like (other than agent-version being updated)
<kurt_> I'll try that....
<marcoceppi> kurt_: I'd watch juju status like you are until the agent-versions are > 1.13.3 (could be 1.13.3.1, not sure the exact version)
<kurt_> ok, give me a few minutes...
<marcoceppi> once that's done, do the same steps. Launch debug hooks, run resolved --retry, wait for it to trap the hook
<marcoceppi> kurt_: sure, np
<kurt_> cheers
<marcoceppi> kurt_: feel free to ping me if something unexpected pops up
<kurt_> thanks marcoceppi
<kurt_> marcoceppi: juju status is now dead
<kurt_> well, returns nothing
<kurt_> just spinning
<marcoceppi> kurt_: it might be in the process of restarting the state-server
<kurt_> ok, same thing with debug-log
<marcoceppi> kurt_: once juju status is broken, most other juju commands will be
<kurt_> ah, here we go.. unfortunately, still at 1.12
<marcoceppi> kurt_: almost all of them rely on connecting to the state server
<marcoceppi> kurt_: are any of the other nodes updated, or are all of them 1.12 still ?
<kurt_> all still at 1.12
<marcoceppi> kurt_ :\ well it was worth a shot
<kurt_> but I see stuff going on in debug-log
<kurt_> lots of stuff
<marcoceppi> kurt_: oh, maybe it's still in the process of updating then
<kurt_> yes
<marcoceppi> kurt_: there is still hope
<kurt_> let's give it a few minutes, I will report back
<marcoceppi> kurt_: awesome!
<kurt_> marcoceppi: WOOT! WOOT!    agent-version: 1.13.3.1
<marcoceppi> kurt_: AWESOME!
<marcoceppi> kurt_: You should be able to play with debug-hooks now
<kurt_> on it
<kurt_> stuck here...
<kurt_> nova-cloud-controller/0:cloud-compute-relation-changed %
<marcoceppi> kurt_: that's not stuck
<marcoceppi> that's the hook
<kurt_> ah..ok
<marcoceppi> if you look at the bottom
<marcoceppi> you'll see you're in tab 1
<kurt_> yup
<marcoceppi> so if you do an `ls`, you'll see you're in your root charm
<kurt_> yep
<marcoceppi> kurt_: you should be able to run `hooks/cloud-compute-relation-changed` and watch the output and change stuff
<marcoceppi> kurt_: this is also an environment in which you can run the special juju commands, like relation-get, config-get, open-port, etc
<marcoceppi> Just like with tab 0, when you're done playing with the hook just type `exit`
<marcoceppi> and it'll proceed on with the rest of the queued events
<marcoceppi> trapping each one in succession
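kurt_'s session, condensed into the two-terminal recipe marcoceppi describes (unit and hook names as in this exchange):

    # terminal 1: attach and wait; debug-hooks opens a tmux session
    juju debug-hooks nova-cloud-controller/0

    # terminal 2: re-queue the failed hook
    juju resolved --retry nova-cloud-controller/0

    # terminal 1 now traps the hook in a new tmux window, inside the
    # charm directory with the hook environment set up, so it can be
    # run (and edited) by hand until it passes:
    hooks/cloud-compute-relation-changed
    # `exit` releases the hook and lets the queued events continue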
<kurt_> ok, I saw some of that in the documentation. this is helpful.  what is the tmux command for opening a new window?  guess it's time to RTFM
<kurt_> :)
<marcoceppi> kurt_: Ctrl + A, c
<kurt_> right - been a while since I've worked with tmux
<marcoceppi> Ctrl + A is the escape sequence, type it, then the command, which is 'c'
<kurt_> ok, back to the core problem which I saw before
<marcoceppi> kurt_: cool, yeah, 'c' creates a window and space moves between them
<kurt_> it's doing a getaddrinfo - but it should know the address already since it should be querying the name server which should be the root node
<marcoceppi> natefinch: seems upgrade-juju works as advertised
<marcoceppi> kurt_: getaddrinfo for itself or for the other unit it's connected to?
 * marcoceppi reads the hook
<kurt_> are you still in my screenshare?
<marcoceppi> kurt_: oh, doh
<marcoceppi> kurt_: let me join again
<kurt_> ok
<natefinch> marcoceppi: great :)
<marcoceppi> kurt_: https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1225160
<_mup_> Bug #1225160: cloud-compute-relation-changed fails, getaddrinfo error <nova-cloud-controller (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1225160>
<ubot5`> Launchpad bug 1225160 in nova-cloud-controller (Juju Charms Collection) "cloud-compute-relation-changed fails, getaddrinfo error" [Critical,Confirmed]
<_mup_> Bug #1225160: cloud-compute-relation-changed fails, getaddrinfo error <nova-cloud-controller (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1225160>
<marcoceppi> holy cow, we don't need that many bots
<natefinch> marcoceppi: haha... I bet they could easily get into an infinite loop....
<marcoceppi> natefinch: I was really worried that ubot5` was going to start up again.
<marcoceppi> natefinch: I wonder if you pasted a URL to another bug in the title of a bug if it would set them off
<natefinch> marcoceppi: I was thinking about trolling them that way.... but i'd feel bad if they just spammed the channel for forever
<frakt> Hi, I'm trying Juju with DevStack and I'm getting this when I run juju bootstrap:
<frakt> error: cannot query old bootstrap state: failed to GET object provider-state from container juju-674f89e24929d54fb4376c9bfad71193 caused by: cannot create service URLs caused by: the configured region "RegionOne" does not allow access to all required services, namely: compute, object-store access to these services is missing: object-store
<frakt> Do I need to run another type than "openstack" for devstack?
<marcoceppi> frakt: I've not tried devstack and juju yet
<marcoceppi> frakt: what does `juju version` say? just for reference
<frakt> 1.12.0-precise-amd64
<marcoceppi> frakt: okay, cool
<marcoceppi> frakt: if you're just looking to play with juju, and you have an Ubuntu machine already, you could use the local provider. It uses LXC to create a cloud environment on your machine. Since charms work on all the different providers you'll get the same "juju" experience without having a cloud available to you yet
<frakt> Ok I'll try
<kurt_> marcoceppi: thanks
<marcoceppi> frakt: https://juju.ubuntu.com/docs/config-local.html
<marcoceppi> frakt: you'll just need to install the `juju-local` package iirc
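For reference, a minimal local-provider setup on 12.04 with the stable PPA, roughly per the linked docs; the admin-secret value is illustrative and the stanza belongs under the existing environments: key:

    sudo apt-get install juju-local

    # in ~/.juju/environments.yaml, under "environments:":
    #   local:
    #     type: local
    #     admin-secret: some-passphrase
    #     default-series: precise

    # the local provider of this era needed root to drive LXC
    sudo juju bootstrap -e local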
<nosleep77> hi there. i'm trying to read up on deploying openstack on ubuntu and trying to wrap my head around juju and maas etc. is there a guide that explains the relationships and the process better? thanks
<frakt> Thanks,
<marcoceppi> nosleep77: we're in the process of getting our docs super stellar to explain the whole juju, openstack, maas relationship. But they're still a bit behind. There are a few blog posts out there but I'd be happy to answer any specific questions you had
<nosleep77> marcoceppi: thanks i'm still doing some initial reading so if you can tell me about those blogs that would be awesome. i basically wanted to do a simple one or two node openstack deployment to get a taste of things
<marcoceppi> nosleep77: well, there's no real "simple" openstack deployment. At least not with one to two nodes. Juju's default mode of operation is one unit of service per machine. So following that and the openstack services we have "charmed" you'll need a min of 7 pieces of hardware to stand up openstack at this time. There's things like containerization and such that will allow you to co-locate services on one physical machine but they're not fully implemented yet.
<marcoceppi> nosleep77: let me find you some further reading
<nosleep77> marcoceppi: thank you; however RDO has packstack which can be done on a single node then there's also devstack and stackops and i'm sure others that I can use as well... I thought maybe something like that existed for ubuntu server since I generally like ubuntu
<marcoceppi> nosleep77: so, with juju you /can/ do openstack all on one machine, it's just not really tested or recommended at the moment
<marcoceppi> nosleep77: here's a video of deploying openstack with the Juju GUI http://www.youtube.com/watch?v=mspwQfoYQks
<marcoceppi> https://javacruft.wordpress.com/2013/06/25/virtme/
<nosleep77> marcoceppi: that's not a problem at all.. i can still try to do it and see how it goes... i do have resources to bring up 3-4 VMs and my hypervisor passes the virt cpu flags so i should be good.. not real hardware but it should work i think
<nosleep77> marcoceppi: thank you
<marcoceppi> nosleep77: https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<marcoceppi> nosleep77: feel free to ask any questions you may have here, on askubuntu.com with the "juju" tag, or to our mailing list juju@lists.ubuntu.com
<nosleep77> oh fantastic... thanks! for some reason this link wasn't coming up in the google search.. thank you
<marcoceppi> nosleep77: finally, if you haven't already, here are our user docs: http://juju.ubuntu.com/docs
<marcoceppi> There's also a whole host of videos in our video section of the site, nosleep77, https://juju.ubuntu.com/resources/videos/
<nosleep77> thanks that gives me enough for a couple days :)
<marcoceppi> nosleep77: cheers!
<nosleep77> marcoceppi: cheers!
<frakt> frakt@ubuntu:~$ sudo juju bootstrap error: no reachable servers
<frakt> k? :)
<marcoceppi> frakt: run `sudo juju destroy-environment` then `sudo juju bootstrap -v --debug` and paste the output to http://paste.ubuntu.com
<frakt> http://paste.ubuntu.com/6103261/
<marcoceppi> frakt: what version of ubuntu are you on? 12.04?
<frakt> yes
<marcoceppi> frakt: did you add the ppa:juju/stable ppa?
<frakt> yes
<frakt> gonna try on my other machine, sec
<marcoceppi> frakt: run sudo juju destroy-environment again, then sudo apt-get update && sudo apt-get install mongodb-server
<frakt> ok
<marcoceppi> frakt: You just need a more recent version of mongodb; the precise version isn't compiled with ssl support. The mongodb in the stable ppa will fix that
<frakt> ok yeah I did things the wrong order I guess :)
<frakt> so there's supposed to be a web ui for juju? https://juju.ubuntu.com/resources/the-juju-gui/
<marcoceppi> frakt: yup, it's actually packaged as a charm, so it's optional.
<frakt> ah I see!
<marcoceppi> frakt: if you've got juju status saying that there's a machine 0 ready to go
<marcoceppi> frakt: you can add it with `juju deploy juju-gui` and once that's in a started state, you can navigate to the address and use that from there on out
<frakt> cool, I'll give it a try
<marcoceppi> frakt: the GUI's great and does almost everything the command line does, it does tend to lag a little behind new features, but they're always working to close the gap
<frakt> http://i.imgur.com/zB59lqB.png
<frakt> I guess it's a success! :)
<marcoceppi> frakt: Yup! nice!
<marcoceppi> frakt: just need to fill out the last bit of stuff on that setup page and you'll have a running WordPress install
<frakt> yeah
<marcoceppi> frakt: local provider doesn't have a firewaller, so in the future you'll need to expose the wordpress service before being able to access it
<marcoceppi> frakt: there are a few minor caveats between the local provider and the other cloud providers, but they're minor in nature
<frakt> I did juju expose wordpress but it's still on a different LAN than my desktop computer so I use a http tunnel to access it
<frakt> is there an easy way to expose it to my 'real' lan?
<marcoceppi> frakt: not without tweaking lxc configuration, it's only available to the host machine running the juju commands
<marcoceppi> frakt: let me see if there's a blog post on it, if not I'll look in to it this weekend
<marcoceppi> frakt: here's a bug report and a very brief answer on askubuntu
<marcoceppi> https://bugs.launchpad.net/juju-core/+bug/1064263
<_mup_> Bug #1064263: Allow local containers to be exposed on the network <juju-core:New> <https://launchpad.net/bugs/1064263>
<ubot5`> Launchpad bug 1064263 in juju-core "Allow local containers to be exposed on the network" [Undecided,New]
<_mup_> Bug #1064263: Allow local containers to be exposed on the network <juju-core:New> <https://launchpad.net/bugs/1064263>
<marcoceppi> http://askubuntu.com/questions/139208/how-can-i-access-local-juju-instances-from-other-network-hosts
<marcoceppi> calm down ubuntu bots!
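The usual workaround from that askubuntu answer, sketched: local-provider containers hang off a host-only LXC bridge, so a NAT rule on the host forwards LAN traffic in (the container address and port are illustrative; 10.0.3.x was the default lxcbr0 subnet of the time):

    sudo iptables -t nat -A PREROUTING -p tcp --dport 80 \
        -j DNAT --to-destination 10.0.3.42:80
    # note: this rule does not persist across reboots by itself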
<marcoceppi> kurt_: I'm about to head out for the night, it's a Friday here in the US so most of us have left for the weekend. If you have questions and no ones around to answer http://askubuntu.com is a great place to stick them or you can mail the juju mailing list: juju@lists.ubuntu.com
<marcoceppi> as well as leave them here, I'll be back in a few hours
<kurt_> marcoceppi: cool.  Thanks again for your help.
<zradmin> does anyone have an idea on how to force the juju provisioner to start in 1.13.3? I'm running into the bug where the api starts first and i can't provision new machines
<zradmin> does anyone have a fix for the 17070 provision errors?
<zradmin> in the bug report it states that it should eventually resolve itself, but it still hasn't come online
#juju 2013-09-14
<hallyn_> hi, i have a noob question on relation-joined vs relation-changed.  do i understand right that i can relation-set several things in x-relation-joined?  and x-relation-changed will be fired in response to any relation-set by the peer?
<hallyn_> (something is not quite going right in my charm, and i'm trying to figure out if it's kvm's fault, or my charms')
<hallyn_> hm, maybe here's my problem.  I have x relating to y.  y-relation-joined is doing two relation-sets.  x-relation-changed at the top does relation-get on both of them.  What is expected to happen?  will one of them (racily) fail?
<hallyn_> is there an argument i can check in x-relation-changed to tell me which relation was changed?
 * hallyn_ reading over https://juju.ubuntu.com/docs/authors-charm-anatomy.html
<hallyn_> all right think i've got it, thanks
<sarnold> hallyn_: what's the resolution? :)
<hallyn_> I'll need to look through `relation-ids` output to see what relations have been set so far
<hallyn_> sarnold: yeah i see now that what i had was quite racy.  i haven't yet committed the fixes to bzr, so you can see for yourself in *-relation-{changed,joined} in bzr+ssh://bazaar.launchpad.net/~serge-hallyn/charms/precise/kvm-test-block-dest/trunk/ and bzr+ssh://bazaar.launchpad.net/~serge-hallyn/charms/precise/kvm-test-block-src/trunk/
<hallyn_> :)
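The guard hallyn_ converges on, as a bash sketch: -changed can fire before the peer's -joined has relation-set everything, so the hook must treat missing keys as "not yet" rather than an error (the key names are illustrative, not the ones in his charms):

    #!/bin/bash
    # x-relation-changed
    sshkey=$(relation-get sshkey)
    hostname=$(relation-get hostname)

    # if either value is still unset, exit cleanly; another -changed
    # event fires when the remote unit sets the rest
    if [ -z "$sshkey" ] || [ -z "$hostname" ]; then
        exit 0
    fi

    # ...both values present: safe to proceed...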
<hallyn_> all right well the way my day's going i'd better commit now before my hd dies :)
<sarnold> eeek
<sarnold> hope your weekend goes better :)
<hallyn_> heh thanks me too :)
<hallyn_> (I already managed to type "rm budg *" instead of "rm budg*" in my homedir earlier today :)
<sarnold> UGH!
<sarnold> owwwww
<hallyn_> luckily i've been so conditioned by wildfires and floods that i don't really lose much when i lose a hd :)
<sarnold> "lucky" indeed
<hallyn_> so let's see if this works now.  if it does i can start adding ceph backends for some real fun
<sarnold> hallyn_:  DEST_GOT_SSHKEY vs got-sshkey  ... in kvm-test-block-src
<sarnold> hallyn_: should this:  elif [ "$id" = "DEST_GOT_SSHKEY" ]; then   be this?  elif [ "$id" = "got-sshkey" ]; then
<hallyn_> oh at first i thought you were pointing out case-difference.  yeah
<hallyn_> thanks, fixed :)
<hallyn_> starting another test.  back in awhile
<czajkowski> aloha
<czajkowski> hey folks https://jujucharms.com/precise/mongodb-HEAD/  seems to not be loading
<czajkowski> none of the other charms are loading either
<czajkowski> jcastro: ^^
<rick_h_> czajkowski: take off the -HEAD
<rick_h_> czajkowski: we've got to fix the urls again. Will hopefully have that updated soon
<rick_h_> czajkowski: so https://jujucharms.com/precise/mongodb
<czajkowski> rick_h_: hey there
<czajkowski> they were broken thursday evening and jcastro got them fixed
<rick_h_> czajkowski: k
<czajkowski> rick_h_: just we've been tweeting it and then the url is broken, didn't know who to poke
<czajkowski> rick_h_: howdy doody :)
<AskUbuntu> Configure a container for using it with juju | http://askubuntu.com/q/345535
#juju 2013-09-15
<jose> m_3: ping
<AskUbuntu> 12.04 Keystone charm fails to deploy: How do I debug it? | http://askubuntu.com/q/345959
<Azendale> The keystone charm is giving me an "install-error" in the juju gui, and "agent-state: install-error" in the juju cli. Is there a way I can look at a log or debug this to see why it failed?
#juju 2014-09-08
<lazyPower-sprint> jose: easy pre-review for you: https://code.launchpad.net/~lazypower/charms/precise/rails/fix_proof/+merge/233653
<jose> checking
<jose> lazyPower-sprint: nack. push your changes
<jose> 0 lines diff
<mbruzek> matt sucks and jose is awesome
<jose> hey, don't say that!
<arosales> jose, nice work on the joomla
<lazyPower-sprint> jose: haha whoops
<lazyPower-sprint> jose: pushed up changes
<arosales> nice catch jose ;-)
 * mbruzek EODs
<mbruzek> jose merged
<jose> thanks mbruzek!
<mbruzek> https://code.launchpad.net/~jose/charms/precise/joomla/fix-various/+merge/212895
<jose> lazyPower-sprint: taking a look
<jose> any charmer around?
<lazyPower-sprint> I AM!
 * lazyPower-sprint toots the trumpet
<rick_h_> jose: ^
<jose> hehe, thanks rick_h_
<rick_h_> a little ping can go a long way :)
<tvansteenburgh> marcoceppi: please merge: https://code.launchpad.net/~tvansteenburgh/charm-helpers/fix-config-lookups/+merge/233558
<arosales> kwmonroe, https://juju.ubuntu.com/docs/contributing.html
<allomov_> hi, all.
<allomov_> could you say what is development ppa for juju ?
<allomov_> is it ppa:juju/devel ?
<allomov_> after I added it, I still see only 1.20.7 version
<cory_fu> jcastro: hit me
<aisrael> charmers: This needs a status change to needs information - https://code.launchpad.net/~raharper/charms/precise/jenkins/add-user-ssh-pubkey/+merge/216908
<aisrael> charmers: This one needs your review: https://code.launchpad.net/~gandelman-a/charms/precise/jenkins/offline_install/+merge/189180
<whit> jcastro,  hit me
<jcastro> whit, https://bugs.launchpad.net/charms/+bug/853910
<mup> Bug #853910: Charm needed: collectd <Juju Charms Collection:Confirmed> <https://launchpad.net/bugs/853910>
 * arosales reviewing kite by  jose :-)
<arosales> https://bugs.launchpad.net/charms/+bug/1030953
<mup> Bug #1030953: Charm needed: pagekite <Juju Charms Collection:Incomplete by jose> <https://launchpad.net/bugs/1030953>
<tvansteenburgh> marcoceppi: merge please https://code.launchpad.net/~tvansteenburgh/charm-helpers/fix-config-lookups/+merge/233558
<cory_fu> jcastro: hit me
<jcastro> cory_fu, https://code.launchpad.net/~evarlast/charms/precise/mysql/encoding/+merge/226538
<whit> jcastro, design price ~ $500 for shirt i'm wearing
<jcastro> asanjar, https://code.launchpad.net/~mbruzek/charms/precise/rabbitmq-server/tests/+merge/202573
<jcastro> lazyPower-sprint, yo
<jcastro> https://code.launchpad.net/~mbruzek/charms/precise/cassandra/ppc64le/+merge/226291
<arosales> if any folks need a generic charm icon. Like for example the project doesn't have a logo feel free to use the following https://github.com/juju-solutions/generic-charm-icons
<arosales> idea is to use the first letter of the charm name
<arosales> for example for the distcpp can use the "d" charm, but if you are looking at the chef charm you should probably use their logo as they have one.
<arosales> lazyPower-sprint, https://code.launchpad.net/~a.rosales/charms/precise/jenkins/adamg-off-install-review/+merge/233769
<arosales> lazyPower-sprint, thanks for checking thta
<arosales> *that
<lazyPower-sprint> np o/
 * arosales returns to kite review after posting icons 
<jcastro> asanjar, https://code.launchpad.net/~a.rosales/charms/precise/cassandra/config-default-keys/+merge/233465
<whit> jcastro, hit me
<whit> jcastro, https://www.youtube.com/watch?v=JWj3poGhZXE
<arosales> jose, hello are you around?
<cory_fu> jcastro: Got another review for me?
<jcastro> one sec
<cory_fu> Oh, nm.
<cory_fu> Gotta wait for the queue
<jcastro> cory_fu, https://code.launchpad.net/~gnuoy/charms/trusty/mysql/lp-1353135/+merge/230932
<jcastro> if you want that one next
<cory_fu> Ok, cool
<rcj> Is there a way to deploy using SSD EBS volumes with the Amazon provider rather than the standard EBS volumes?
#juju 2014-09-09
<lazyPower-sprint> rcj: i dont think at present that is a known constraint
<mbruzek> marcoceppi, tvansteenburgh  please take a look
<mbruzek> https://code.launchpad.net/~mbruzek/charm-tools/categories-info/+merge/233849
<jose> arosales: hey, I'm around now :)
<lazyPower-sprint> jose: o/
<jose> hey lazyPower-sprint!
<jose> anything you need me to take a look at on the revq?
<jose> I'm ready for n'acking
<arosales> jose, hello
<arosales> jose, I saw your comment re pagekite
<jose> cool
<mbruzek> jose, hello
<jose> hey mbruzek!
<arosales> I got some suggestions on the readme, and I am still trying to learn about pagekite myself
<mbruzek> jose We are all sprinting and adding icons, readme, etc.
<jose> arosales: it's a charm I still need to investigate on since it's tunnelling but only supports minecraft and html protocols
<jose> mbruzek: if you could point me to a couple charms that need them, I can give a hand with it too
<jose> checking on mbruzek's munin MP
<mbruzek> jose can you log in to http://review.juju.solutions/
<mbruzek> the log in feature is new
<jose> awesome
<lazyPower-sprint> hey asanjar, just to wrap up zookeeper so it's ready for the store i'm going to add a README update
<lazyPower-sprint> you'll want to backport it to your devel branch before you move forward with the charm. But you need to illustrate that changing the zookeeper storage path is destructive, therefore immutable.
<lazyPower-sprint> unless mbruzek says this is HORRIBLE
<lazyPower-sprint> https://bugs.launchpad.net/charms/+bug/1353197
<mup> Bug #1353197: hortonworks hdp2.1 zookeeper charm <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1353197>
<mbruzek> Hey ecosystem, please check out my application for ubuntu membership
<mbruzek> https://wiki.ubuntu.com/mbruzek
<mbruzek> jcastro, ^
<lazyPower-sprint> mwenning: perchance are you around?
<mwenning> lazyPower-sprint, I am.
<mwenning> sup
<lazyPower-sprint> i'm remoted into the dell test lab
<mwenning> ok.
<lazyPower-sprint> and boysenberry.maas isn't found - looks like either the bootstrapped env went away or my DNS is incorrectly setup
<lazyPower-sprint> can you take a peek for me?
<mwenning> lazyPower-sprint, stby
<lazyPower-sprint> ack
<mwenning> lazyPower-sprint, yeah, lookup boysenberry.maas: no such host
<mwenning> lazyPower-sprint, are you working late or are you on a different part of the planet?
<mwenning> lazyPower-sprint, I'm at home right now, I'll take a look at it first thing tomorrow morning (about 9:00 CST)
<lazyPower-sprint> I'm west coast, working late at a sprint
<lazyPower-sprint> thanks for taking a look mwenning
<mwenning> lazyPower-sprint, ok that should work out ok.
<lazyPower-sprint> mwenning: if i'm not around, email me and i'll be around shortly. You've got priority for me given the nature of this merge and the time constraints i've got to work with.
<mwenning> lazyPower-sprint, I'm going to try destroying and rebootstrapping juju.  timeout was 9999 (seconds?) so it probably timed out
<mwenning> "never mind", looks like I'll have to reboot it from the lab.
<lazyPower-sprint> ack. I saw the email followup mwenning - We'll re-convene on this in the morning
<jose> woohoo, fun netsplit
<lazyPower-sprint> mwenning: i don't want to keep you working late. I can bust this out early in the AM and get some solid feedback.
<mwenning> lazyPower-sprint, cool.  See you then.
<ianunruh> is this still valid: https://juju.ubuntu.com/docs/howto-privatecloud.html
<ianunruh> there doesn't seem to be any `juju metadata` or `juju generate-tools`
<ianunruh> after configuring my OpenStack env, I keep getting
<ianunruh> ERROR index file has no data for cloud {RegionOne http://172.20.4.10:5000/v2.0/} not found
<ianunruh> interesting.. apparently the juju client for OS X doesn't have the metadata command
<rogpeppe> has anyone here used the juju publish command?
<rick_h_> rogpeppe: does it exist yet?
<rick_h_> rogpeppe: it's something we've started a spec for
<rogpeppe> rick_h_: it's existed for 18 months now
<rogpeppe> rick_h_: the current version just pushes to launchpad and waits for the charm store to pick it up
<rick_h_> rogpeppe: ah, cool then. Didn't realize it was already a command
<jose> marcoceppi: ping
<rbasak> Are bug reports accepted at for example https://github.com/juju/testing/issues, or should I file in Launchpad?
<rbasak> niemeyer: one further question about licensing for https://github.com/go-yaml/yaml. I can't find an actual copyright statement.
<rbasak> niemeyer: I presume this is Canonical? Please could you add a copyright claim to somewhere in that file, so it is clear who is licensing it?
<niemeyer> rbasak: It's all Canonical
<niemeyer> rbasak: I'll add a note to LICENSE
<rbasak> Thanks!
<niemeyer> rbasak: Done
<rbasak> Thanks again!
<allomov> Hey, all.
<allomov> Does juju-local work with mac os ? What will it use instead of LXC ?
<hazmat> allomov, it doesn't work on osx, you can download a vagrant image with juju local provider setup using lxc
<hazmat> for osx local workflows.. else the client binary on osx can be used against public/private clouds, etc
<hazmat> rogpeppe, i use hte publish command
<hazmat> rogpeppe, i point it out to people when they ask why the charm store isn't publishing their charm
<rogpeppe> hazmat: it seems to rely on someone running charmload periodically - is that right?
<rogpeppe> hazmat: (it also seems to poll indefinitely without any delay, but perhaps i'm misreading the code)
<hazmat> rogpeppe, charmload is running periodically and basically continuously.. publish is fairly quick in practice.
<hazmat> rogpeppe, but yes i think your assumptions are correct,  alternatively you could try writing a charm and publishing it to see behavior ;-)
<rogpeppe> hazmat: :-)
<rogpeppe> hazmat: we're looking at adding a backwardly compatible API to the new charm store and wondering just how much compatibility is needed. for example, is it reasonable to ask charm publishers to use a newer version of juju?
<hazmat> rogpeppe, imo that's reasonable.. as long as you keep the push  / store feedback
<rick_h_> hazmat: the feedback would be over the changes api endpoint (the rss-like api added)
<hazmat> rogpeppe, the goal is that it would eventually be actually a publish..
<hazmat> rick_h_, that's not appropriate
<hazmat> rick_h_, the point is to get feedback on a particular charm on why it wasn't published
<rick_h_> hazmat: yes, that's the next step after getting the new api in place in the current store location with enough backwards compat to not break folks.
<rogpeppe> hazmat: by "push / store feedback", you mean getting errors from the charm store when you try to publish a charm?
<rogpeppe> hazmat: that's actually easier with the new model
<rick_h_> hazmat: oh yes, sorry. Thought you meant feedback on 'it's loaded now'
<hazmat> rogpeppe, yes.. ie. lint works, i've pushed the charm to lp, i've deployed it.. but its not showing up in the store.. why?
<rogpeppe> hazmat: because you get the error directly from the charm store
<hazmat> rogpeppe, with publish yes.. else the whole process is async
<rogpeppe> hazmat: rather than relying on some async process to push it or set errors associated with it
<jcastro> hi guys
<jcastro> so the flagbearer charms will be: postgres, elasticsearch, rails, meteor, and chamilo
<fuzzy> What do you mean by flagbearer?
<fuzzy> Is there a specific reason mongo isn't in that list?
<fuzzy> jcastro:
<jcastro> we're picking charms that have tests, and use charm helpers.
<jcastro> dunno the state of mongo off the top of my head, let me ask.
<fuzzy> No worries, I was just curious.
<jcastro> fuzzy, when we say "flagbearer" we mean "is this a good charm that people can look at and learn how to write a charm"
<fuzzy> AH
<jcastro> apparently the mongo charm is kind of huge, monolithic, doesn't use charm helpers to the extent that it should, etc.
<jcastro> but, it for sure will remain "featured" on the gui
<jcastro> it's a high quality charm, just not something you'd start off with.
<fuzzy> The only reason I asked is that, based on what you listed previously, you described about 70% of my stack
<fuzzy> :)
<jcastro> people are adding tests to those now, so you're in luck!
<tvansteenburgh> fuzzy: i just pushed fixes for the meteor charm to https://code.launchpad.net/~tvansteenburgh/charms/precise/meteor/fix-npm-install
<tvansteenburgh> fuzzy: (in case you are eager to try it)
<tvansteenburgh> fuzzy: it's being reviewed now, hopefully will be pushed to the charm store soon
<fuzzy> tvansteenburgh: If I get on that side of my work today, I'll totally try that.  Thank you :)
<tvansteenburgh> fuzzy: sure thing!
<whit> jcastro, gh:whitmo
<cory_fu> jcastro: gh:johnsca
<kentb> I'm trying to run a charm test using "charm test -e maas 10-deploy-test.py" but I get a complaint saying no tests were found....what am I missing?   juju is version 1.20.7 and I'm running Trusty 14.04.1 with charm-tools version 1.3.3
<arosales> jcastro, fyi  jose just merged your meteor mp: https://code.launchpad.net/~jorge/charms/precise/meteor/readme-updates/+merge/233962
<arosales> thanks jose
<jose> not a prob arosales :)
<jcastro> <3
<allomov> hazmat: that's ok, thank you
<mbruzek> kentb, are you in the charm directory?
<mbruzek> kentb, I am running a similar command right now:  juju test -v --set-e 10-deploy
<kentb> mbruzek: hey. thanks. I had to back up one directory and now it appears to be doing the bootstrap thing, so far.   I was inside the /tests directory of my charm prior to that
<mbruzek> kentb, you are welcome
<pafounette> hi
<pafounette> can anyone confirm that juju only checks hardcoded error messages without any l10n?
<pafounette> is there any reason explaining why juju checks against EN strings but not the errno?
<pafounette> like ... here https://github.com/juju/juju/blob/master/environs/sshstorage/storage.go#L254
<mbruzek> pafounette, there is a #juju-dev room that might have more of the juju developers in there.
<pafounette> thanks :)
<mbruzek> pafounette, from the link you sent it does not look like Juju is looking up error messages, but I am not a go programmer
<marcoceppi> cory_fu: https://code.launchpad.net/~marcoceppi/charms/precise/chamilo/unit-tests/+merge/234000
<cory_fu> marcoceppi: Looks great.  I will review and merge when I get a chance, or if you want to propose it upstream, that seems fine to me.
<marcoceppi> cory_fu: I may, but laziness is setting in
<cory_fu> marcoceppi: And I'm being pulled in a couple directions, so... :)
<marcoceppi> I thought you had some other things you were fixing
<marcoceppi> cory_fu: well, it's not urgent ;)
<cory_fu> marcoceppi: I was just adding tests, which you covered.
<tvansteenburgh> fuzzy: new meteor charm is in the store now
<fuzzy> cool
<cory_fu> marcoceppi: Oh yeah, we can't merge that upstream yet, because it relies on the charmhelpers changes.  Those still need tests as well; you didn't merge those, did you?
<marcoceppi> cory_fu: not yet
<cory_fu> Well, like I said, I need to add tests.  And I also want to add a helper for downloading a file & checksumming it
<tiger7117> Hi
<tiger7117> ERROR cannot assign unit "wordpress/0" to machine 2: series does not match
<tiger7117> whats this meant ?
<tiger7117> i gave this command # juju deploy wordpress --to=2
<rick_h_> tiger7117: the charm you're deploying is one series (precise or trusty or something) and the machine is another. This is common if you try to colocate two different services on one machine
<tiger7117> there is no other services on that machine No 2. so
<rick_h_> tiger7117: right, but how did machine #2 come to be?
<tiger7117> i was just deploying wordpress on # 2 and mysql # 3
<tiger7117> i tried to deploy mysql and it worked without any error message, but for wordpress this error message
<rick_h_> tiger7117: does juju status show you what series is on machine #2? precise or trusty?
<tiger7117> trusty
<rick_h_> tiger7117: hmm, looks like no trusty wordpress charm
<tiger7117> yes, looks like.. because now juju status is showing Service: wordpress: charm: cs:precise/wordpress-26
<tiger7117> and in Units: Wordpress/0, Agent-Status: pending
<tiger7117> now ?
<tiger7117> http://manage.jujucharms.com/~narindergupta/trusty/wordpress
<rick_h_> so now you have to kill it off. juju resolved wordpress/0 and then juju destroy-service wordpress/0
<rick_h_> tiger7117: yes, so that's someone's personal wordpress charm. I don't know anything about it.
<tiger7117> # juju remove-service wordpress works .. destroy-service unknown
<tiger7117> well.. now what should i do to install/setup wordpress ?
<rick_h_> tiger7117: so if you destroy that machine and start over
<rick_h_> there's a precise mysql charm that's been reviewed/etc
<rick_h_> you can install it with juju deploy cs:precise/mysql
<tiger7117> hmm.. so wordpress is in precise so now i have to install mysql on precise?
<rick_h_> tiger7117: and then your juju deploy of the wordpress charm would work just fine
<rick_h_> tiger7117: or use two machines, one precise and one trusty
<tiger7117> how to make a machine precise?
<rick_h_> you have to kill that machine and start over.
<rick_h_> juju remove-machine I think
<tiger7117> oh
<tiger7117> i am using JuDo plugin for DigitalOcean
<tiger7117> for adding machines of Digital Oceans
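rick_h_'s recovery path, condensed into a sketch; machine numbers are as in this session, and remove-service/remove-machine both exist in juju of this vintage:

    juju resolved wordpress/0     # clear the failed hook state
    juju remove-service wordpress
    juju remove-machine 2         # drop the trusty machine
    # redeploy on machines whose series matches the precise charms
    juju deploy cs:precise/mysql
    juju deploy cs:precise/wordpress
    juju add-relation wordpress mysql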
<tvansteenburgh> trusty/swift-storage
<jcastro> <-- reviewing trusty/mysql
<tvansteenburgh> trusty/swift-proxy
<jcastro> <-- review precise/tracks
<aisrael> reviewing precise/thinkup
<tvansteenburgh> trusty/rsyslog-forwarder-ha
<tvansteenburgh> trusty/rsyslog
<lazyPower-sprint> cs:trusty/quantum-gateway-5	
<mbruzek> cs:trusty/pgbadger-0
<tvansteenburgh> trusty/postgresql
<marcoceppi> cs:trusty/landscape-client-4
<tvansteenburgh> trusty/ntp
<arosales> cs:trusty/openstack-dashboard
<tvansteenburgh> trusty/nova-compute
<jcastro> precise/cassandra
<jcastro> is done!
<jcastro> precise/juju-gui
<jcastro> precise/lodgeit
<tvansteenburgh> trusty/nova-cloud-controller
<jose> jcastro: mind a quick PM?
<jcastro> sure!
<lazyPower-sprint> arosales: http://paste.ubuntu.com/8304170/
<tvansteenburgh> trusty/mongodb
<kwmonroe> precise/pictor-3
<lazyPower-sprint> mbruzek: https://code.launchpad.net/~lazypower/charms/trusty/quantum-gateway/fix_proof/+merge/234023
<lazyPower-sprint> cs:trusty/ntp-5	
<mwhudson> sooo
<mwhudson> if i want a webpage with graphs of the load on all the machines i am deploying things with juju on
<mwhudson> i need to read up about subordinate charms, right?
<jcastro> both precise/juju-gui and trusty/juju-gui are done
<jcastro> mwhudson, yep
<jcastro> mwhudson, is it like a webui on each unit or an agent that reports to a centralized webui?
<mwhudson> jcastro: more the latter
<lazyPower-sprint> cs:trusty/keystone-7
<jcastro> yeah so the agent would be a sub, the webui would be a normal charm
<mwhudson> it's for a demo, i want to demonstrate load being spread across the cluster
<mwhudson> jcastro: are there any of these charmed up already?  istr something about ganglia
<jcastro> http://blog.dasroot.net/writing-the-papertrail-charm/
<jcastro> check that out ^^^
<jcastro> mwhudson, lazyPower-sprint just informed me that ganglia-node is a sub, probably what you want
<jcastro> http://manage.jujucharms.com/charms/precise/ganglia-node
<aisrael> cs:trusty/glance
<mwhudson> jcastro: hmm thanks
<lazyPower-sprint> did anyone land cs:trusty/jenkins yet?
<jcastro> I'm taking precise/squid-forward-proxy
<jcastro> I'm taking precise/vsftpd
<jose> jcastro: do you have any more charms that fail proof?
<jcastro> tons
<jcastro> http://reports.vapour.ws/charm-tests-by-charm
<jcastro> this is what we're going off right now
<jose> cool, thanks
<jcastro> the UI is not ideal
<jcastro> but if a charm proof is not 0, it failed
<jcastro> adding tests also helps of course, heh
#juju 2014-09-10
<aisrael> Taking precise/vdbench
<mwhudson> i don't suppose anyone has a juju charm for running a docker registry? :)
<jose> ok, have to go print some Juju posters, be back in a while!
<mbruzek> working on  cs:precise/unattended-upgrades
<aisrael> taking precise/transcode
<aisrael> tvansteenburgh: http://reports.vapour.ws/charm-tests/charm-bundle-test-641-results/charm/charm-testing-lxc/0
<aisrael> taking precise/tracks
<lazyPower-sprint> cs:precise/qemu-cloud
<mwhudson> ok, failure mode i haven't seen before
<mwhudson> a machine (using the manual provider) thinks it is provisioned, but it isn't showing up in juju status
<mwhudson> ah http://askubuntu.com/questions/433842/how-to-resolve-error-machine-is-already-provisioned-in-manual-provision-set-up
<cory_fu> This looks really cool: https://www.kickstarter.com/projects/thoughtstem/codespells-express-yourself-with-magic
<lazyPower-sprint> mwhudson: looking now hang on
<lazyPower-sprint> mwhudson: did that answer clear you up on getting the machine added to your manual environment?
<mwhudson> lazyPower-sprint: no i just gave up and started again :)
<mwhudson> (this is all mostly automated)
<lazyPower-sprint> Ok. I've been using Digital Ocean in a faux manual environment and haven't seen that - but the answer looks solid.
<lazyPower-sprint> jose: good dev-credit if you've got the bandwidth: https://bugs.launchpad.net/charms/+source/python-moinmoin/+bug/1367532
<mup> Bug #1367532: tests need refactoring <audit> <python-moinmoin (Juju Charms Collection):New> <https://launchpad.net/bugs/1367532>
<lazyPower-sprint> fuzzy: please keep juju help requests isolated to this channel so the community can benefit from the help you receive, and allow others who may know the answer an opportunity to answer. <3
<fuzzy> 2014-09-10 02:38:14 INFO juju.state.api apiclient.go:242 dialing "wss://localhost:17070/"
<fuzzy> 2014-09-10 02:38:14 INFO juju.state.api apiclient.go:250 error dialing "wss://localhost:17070/": websocket.Dial wss://localhost:17070/: dial tcp 127.0.0.1:17070: connection refused
<fuzzy> nothing is running on 17070, all I did was reboot it. Restarting mongo doesn't do anything for me. Now juju status just hangs
<jose> lazyPower-sprint: I may be able to take a look at them later, triaging and shortening the queue atm
<lazyPower-sprint> fuzzy: this is related to the mongodb server, hang on and let me tap someone in that has a better idea about how to triage the issue
<lazyPower-sprint> jose: no rush - its not a hot-button charm but would be good to flex your amulet muscle
<lazyPower-sprint> fuzzy: we need you to pastebin the ~/.juju/environments.yaml for your manual provider sans credentials
<fuzzy> np one moment
<fuzzy> I was generating a pastebin for you guys already
<fuzzy> http://hastebin.com/wefavoqate.avrasm there is the log file example let me get you that yaml file
<marcoceppi> fuzzy: what is juju@juju, is it the machine you're running juju commands from and what you're trying to bootstrap?
<marcoceppi> slash did bootstrap?
<fuzzy> http://hastebin.com/lutenazuwu.cs
<fuzzy> marcoceppi: juju is my bootstrapping host and juju is the user that does it all
<fuzzy> I don't run this from a laptop, I run it from a host
<marcoceppi> fuzzy: so, that's going to be the first likely problem. To confirm. You're running the juju commands from the same host you're trying to bootstrap
<fuzzy> no
<fuzzy> i've already bootstrapped like 4 hosts
<fuzzy> I'm sorry if i'm getting your terminology wrong, but instead of fighting linux mint & juju I use a server to springboard from
<lazyPower-sprint> fuzzy: to be clear, in juju jargon, bootstrapping is with reference to the api-controller node.  as in node 0
<marcoceppi> fuzzy: right, is that server juju.int.ziphub.com ?
<fuzzy> yes
<fuzzy> that is correct
<marcoceppi> fuzzy: okay that's going to be painful
<fuzzy> juju.int.ziphub.com is machine 0 and only runs the juju-gui
<marcoceppi> juju really wasn't meant to be done in such a way. It may be better to simply spin up the Juju Vagrant image and drive that from your mint desktop
<marcoceppi> as it'll give you juju and CLI
<marcoceppi> but
<marcoceppi> moving past that
<marcoceppi> run the following and let us know the output
<marcoceppi> fuzzy: sudo initctl list | grep juju
<marcoceppi> so the bootstrap-host is your bootstrap node, it does all your orchestration
<marcoceppi> it's designed to survive reboots, but occasionally doesn't for whatever reason
<fuzzy> jujud-unit-juju-gui-0 start/running, process 2003
<fuzzy> juju-db start/running, process 2019
<fuzzy> jujud-machine-0 start/running, process 2029
<marcoceppi> so, it has survived reboots
<marcoceppi> fuzzy: can you run `juju status --debug` ?
<fuzzy> sure
<fuzzy> http://hastebin.com/owilogutej.avrasm
<fuzzy> there is nothing running on 17070 when I ask netstat
<fuzzy> http://hastebin.com/apezokuqor.hs
<marcoceppi> fuzzy: try `sudo restart juju-db` then `sudo restart jujud-machine-0`
<marcoceppi> fuzzy: also, can you include `/var/log/upstart/juju-db.log`
<marcoceppi> juju-db is lying by saying it's up but it doesn't appear to be up
<fuzzy> np gimmie one moment
<marcoceppi> fuzzy: sure, sure
<fuzzy> http://hastebin.com/luyaqajixa.coffee
<fuzzy> marcoceppi:
<fuzzy> here is my *juju* process list http://hastebin.com/fiqayoyage.hs
<lazyPower-sprint> fuzzy: after the restart, still spamming that the state server cannot connect to mongo?
<fuzzy> http://hastebin.com/hixocukada.avrasm
<fuzzy> yes
<lazyPower-sprint> fuzzy: ok, the errors shouldn't stop yet, but my thought is let's validate that the DB comes up properly, then bring up the api server, and monitor them
<lazyPower-sprint> tail your syslog in a terminal, and service stop juju-db && juju-machine-0 && juju-unit-juju-gui-0
<lazyPower-sprint> confirm once you've got everything halted on the machine
<lazyPower-sprint> if i were to guess, i'd say this is due to a stale lock causing mongodb to barf, and since the service is on a recycle, it's not actually up but thinks it's up
<fuzzy> Gimmie one minute guys I gotta eat dinner
<fuzzy> lazyPower-sprint: juju-machine-0 and juju-unit-juju-gui-0 are not recognized as a service
<lazyPower-sprint> sorry, jujud-machine-0
<lazyPower-sprint> and jujud-unit-juju-gui-0
<fuzzy> lazyPower-sprint: ok that worked, syslog is quiet
<lazyPower-sprint> ok, check and make sure there isn't a stale mongodb.lock file floating around
<fuzzy> root      1839     1  0 02:36 ?        00:00:00 /usr/bin/python /usr/local/bin/runserver.py --logging=info --guiroot=/var/lib/juju-gui/juju-gui/build-prod --sslpath=/etc/ssl/juju-gui --charmworldurl=https://manage.jujucharms.com/ --apiurl=wss://192.168.201.155:17070 --apiversion=go
<fuzzy> that is still running if i do ps -aef | grep juju
<fuzzy> mongo is not running
<fuzzy> looking for lock file
<lazyPower-sprint> /var/lib/juju/db
<fuzzy> find / | grep mongodb.lock returns nothing
<fuzzy> find / | grep mongo.lock returns nothing
<fuzzy> oh
<fuzzy> it's mongod.lock
<fuzzy> and i found it
<fuzzy> it's got a stale pid in it
<fuzzy> lamont:
<fuzzy> lazyPower-sprint:
<lazyPower-sprint> so, how familiar are you with removing a mongod.lock and dealing with mongo afterwards?
<fuzzy> about 0 out of 0
<lazyPower-sprint> you can cowboy the removal, and cross fingers that everything goes well - which it does about 80% of the time
<fuzzy> ok so what happens if the shotgun approach doesn't work?
<lazyPower-sprint> 20% of the time, there are further issues that crop up, and are a byproduct of mongod not shutting down properly and leaving the database in an inconsistent state
<lazyPower-sprint> hard to say, depends on whats gone wrong.
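A minimal sketch of the stale-lock check being described, assuming the juju 1.x layout where mongod's lock file lives under /var/lib/juju/db; signal 0 only tests whether the PID still exists:

```python
import os

LOCK = "/var/lib/juju/db/mongod.lock"

if os.path.exists(LOCK):
    pid_text = open(LOCK).read().strip()
    try:
        os.kill(int(pid_text), 0)  # signal 0: existence check, sends nothing
        print("mongod (pid %s) appears to be running" % pid_text)
    except (OSError, ValueError):
        # no such process (or an empty/garbled pid): the lock is stale
        print("stale lock: pid %r is not running" % pid_text)
else:
    print("no lock file; mongod shut down cleanly")
```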
<fuzzy> Alright, so in the future make sure backups are in place so rollbacks can happen
<fuzzy> I dunno it's still not going
<fuzzy> i'm just going to rebuild everything and try again
<fuzzy> It should only take me about an hour
<fuzzy> lazyPower-sprint:
<lazyPower-sprint> fuzzy: shouldn't have to do that
<lazyPower-sprint> did you remove the mongod.lock and attempt to restart?
<lazyPower-sprint> if it has an issue restarting, it will barf the output to stdout/logs
<fuzzy> yea
<fuzzy> I removed mongod.lock
<fuzzy> and rebooted the machine so the startup process would start everything in the correct order
<fuzzy> and juju status is still failing trying to connect to 17070
<lazyPower-sprint> fuzzy: i wouldn't have rebooted the server - port 17070 is the state server api. and the reason you cannot connect to that is because it cannot connect to juju-db
<lazyPower-sprint> restarting the db would have spit back any problems on STDOUT / syslog - which would have given us a next step to start investigation
<fuzzy> lazyPower-sprint: I know you probably are too busy for this, but if you would like the keys to the castle before I nuke it, just hit me in a more private method
<fuzzy> I'm also dealing with a head cold
<fuzzy> brb
<lazyPower-sprint> if you're adamant on blowing it away and restarting thats acceptable - but the root cause was the reboot, and apparently a database lock thats preventing mongo from coming back up correctly
<lazyPower-sprint> the fact its not stating that in log output is troublesome to me. MongoDB will fail to start when that mongod.lock file is present.
<lazyPower-sprint> fuzzy: http://stackoverflow.com/questions/13700261/mongodb-wont-start-after-server-crash take a look at this, about the durability and recovery method
<fuzzy> lazyPower-sprint: is MAAS a better solution than manual provisioning?
<Odd_Bloke> I'm using the JuJu Vagrant image and can't get either cs:precise/mysql-48 or cs:trusty/mysql-4 to come up.
<Odd_Bloke> They both report: 'hook failed: "start"'
<Odd_Bloke> Not really sure how to debug the problem.
<Odd_Bloke> Ah, have managed to dig in a bit.
<Odd_Bloke> Looks like it might be a memory issue.
<Odd_Bloke> Will restart the Vagrant instance with more RAM and see if that helps.
<Odd_Bloke> Alright, all happy now.
 * rick_h_ heads back home from coffee shop
<jcastro> asanjar,  https://bugs.launchpad.net/charms/+bug/842202
<mup> Bug #842202: Charm needed: Accumulo <Juju Charms Collection:New for fgimenez> <https://launchpad.net/bugs/842202>
<corntoegoblin> how do we verify that maas-dns is correctly caching the maas dhcp ips?
<jcastro> jose, around?
<jose> jcastro: yeah! what's up?
<jcastro> Are you working on those 2 merges you locked now?
<jcastro> because if not i can do them right now
<aisrael> kwmonroe: https://github.com/dergachev/vagrant-vbox-snapshot
<jose> jcastro: yeah, but I'm waiting for authorization to push
<jose> remember, I just joined ~charmers
<jose> if you auth me, I'll merge
<jcastro> marco says you should post a comment on each MP
<jcastro> then one of them will post a response with the ok.
<mbruzek> jose, comment on the review, with a note at the bottom that once another charmer looks at it, it will be merged or not.
<mbruzek> jose, so basically don't wait to comment on the mp/bug
<jose> ok
<mbruzek> comment, and then send marco, chuck, myself a link to the comment and we can take a look.
<jose> hehe, looks like we've got another jose in here :P
<jose> marcoceppi, lazyPower-sprint, mbruzek: https://code.launchpad.net/~lazypower/charms/precise/python-moinmoin/fix_proof/+merge/234041, https://code.launchpad.net/~lazypower/charms/precise/qemu-cloud/fix_proof/+merge/234040, https://code.launchpad.net/~jorge/charms/precise/vsftpd/add-icon/+merge/234034
<mbruzek> jose, I will take a look
<jose> thanks
<mbruzek> jose auth for moinmoin
<jose> ack, pushed
<mbruzek> jose auth qemu-cloud
<jose> ack, pushed
<mbruzek> jose auth for vsftpd
<jose> ack, pushed
<jose> thanks mbruzek!
 * arosales taking a look at https://code.launchpad.net/~evarlast/charms/trusty/mongodb/no-install-recommends/+merge/230978
<lazyPower-sprint> arosales: make sure that applies cleanly, mongodb has been updated since that MP
<arosales> lazyPower-sprint, ya I was just showing marcoceppi  that
<arosales> the current merge as it stands isn't a clean patch
<arosales> so I am going to try to clean up and then test
<lazyPower-sprint> asanjar: https://code.launchpad.net/~lazypower/charms/trusty/hdp-hadoop/pass_proof/+merge/234177
<mbruzek> wordpress fails on power https://bugs.launchpad.net/charms/+source/wordpress/+bug/1365585
<mup> Bug #1365585: wordpress db relation fails in trusty <audit> <ppc64le> <wordpress (Juju Charms Collection):New> <https://launchpad.net/bugs/1365585>
<arosales> mbruzek, thanks
<lazyPower-sprint> hey tvansteenburgh1 is bundletester pip installable or still blocked with package deps in the way?
<tvansteenburgh1> lazyPower-sprint: you can pip install it
<lazyPower-sprint> oooo nvm it looks like its there
<lazyPower-sprint> tvansteenburgh: does it expect every project to use venv?
<lazyPower-sprint> http://paste.ubuntu.com/8312002
<aisrael> utlemming: I think I found a bug in the -vagrant-juju cloud images, with the lxc network postrouting. Should I talk to you about that, or someone else?
<utlemming> aisrael: me is fine....what are you seeing?
<aisrael> utlemming: if two lxc containers try talking to each other, the source ip is rewritten to 10.0.3.1, instead of the machine's private ip
<aisrael> Like the NAT POSTROUTING has MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0
<utlemming> aisrael: hrm, that is ugly. I'm open to a patch :)
<aisrael> Happy to hunt that down. Is lp:jujuquickimgs/build-trunk the correct place to start?
<utlemming> aisrael: lp:jujuredirector would be the place to play
<aisrael> Excellent, thanks!
<fuzzy> lazyPower-sprint: would maas be a better option than manual provisioning?
<lazyPower-sprint> fuzzy: it would be better and provide some automation yes - however - MAAS would allow you to orchestrate commodity hardware in *your* racks, or VM's on your hardware.
<lazyPower-sprint> its not going to provide enlistment/provisioning from linode
<fuzzy> I've already got two emails into two dc's. I did the math, I basically need 30 instances to cover my deployment
<fuzzy> For the $ I spend @ linode I can get full boxes anyway I want from datashack
<fuzzy> for about 3x the resources
<fuzzy> I have them working up something for me from what primitive knowledge I have of MAAS
<mbruzek> marcoceppi, https://bugs.launchpad.net/charms/+bug/807784
<mup> Bug #807784: Charm needed: etherpad-lite <Juju Charms Collection:Fix Released by james-page> <https://launchpad.net/bugs/807784>
<fuzzy> Are there any catches to MAAS and juju?
<mbruzek> marcoceppi, nevermind
<lazyPower-sprint> tvansteenburgh: i'm not sure how/when charmworldlib is updated, so i set the bug status to fix-committed. If its an instant release cycle can you update that? https://bugs.launchpad.net/charmworldlib/+bug/1363136
<mup> Bug #1363136: Need default request timeout <charmworldlib:Fix Committed by tvansteenburgh> <https://launchpad.net/bugs/1363136>
<tvansteenburgh> lazyPower-sprint: thanks, i think Fix Committed is correct
<lazyPower-sprint> jamespage: is the RabbitMQ merge applicable to trusty as well? the MP is only targeted at precise - without any tests i'm trying to validate this manually.
<lazyPower-sprint> just curious if i need to promote this against the trusty build as well
<lazyPower-sprint> for reference: https://code.launchpad.net/~cprov/charms/precise/rabbitmq-server/rabbit-admin/+merge/233205
<jose> arosales: sorry! I didn't see your lock and n'acked the mongodb review you were doing :(
<arosales> did /me forget to lock?
<jose> no, I forgot to lock
<arosales> jose, ok no worries thanks for reviewing
<arosales> jose, I am working on an MP that will work, testing now, so I'll have a follow on MP and will comment on Jay's
<jose> awesome, then!
<jose> marcoceppi, lazyPower-sprint, mbruzek: auth to push https://code.launchpad.net/~tvansteenburgh/charms/precise/membase/fix-proof/+merge/234214
<jose> is anyone around having problems with EC2?
<corntoegoblin> i found the maas dns zone files in /etc/bind/ but the hostnames aren't being added/
<corntoegoblin> disregard gentlemen
<corntoegoblin> time to clock out
#juju 2014-09-11
<jose> can anyone please take a look at http://review.juju.solutions/review/1299?
<jose> simple one
<sarnold> why add the -y at the very end of the command line? seems an awkward place to put it, to me..
<jose> sarnold: because laziness, I found it was missing so I just added it
<jose> it works, (or, well, it should!)
<sarnold> jose: .. and the next person to add a new package to the command, will it go before or after the -y? or at the beginning of the command? :)
<jose> sarnold: I believe it doesn't matter
<sarnold> sure, it probably doesn't, but by extension, would you write rm foo bar baz -f blort blart fort fart?   :)
<jose> I'll fix it once I'm back. need to run some errands real quick.
<sarnold> see ya :)
<mbruzek> bzr slow?
<mbruzek> Why slow bzr ?
<jose> because ssh
<lazyPower-sprint> sarnold: i might
<sarnold> lazyPower-sprint: well _you_ might but you're crazy :P
<lazyPower-sprint> indeed
<marcoceppi> mbruzek: http://paste.ubuntu.com/8315457/
<jamespage> marcoceppi, https://bugs.launchpad.net/charm-tools/+bug/1368056
<mup> Bug #1368056: charm proof is still wrong about missing default keys <Juju Charm Tools:New> <https://launchpad.net/bugs/1368056>
<jamespage> sorry but I'm now getting nacked for existing charms that work on this basis
<marcoceppi> jamespage: "default: " is NoneType
<marcoceppi> add that to the config.yaml and proof passes without those errors
<jamespage> marcoceppi,
<jamespage> E: config.yaml: type of option access-network is specified as string, but the type of the default value is NoneType
<marcoceppi> jamespage: you have an old version of charm-tools, what does charm version show?
<jamespage> marcoceppi, 1.2.10
<marcoceppi> jamespage: 1.3.3 is latest from ppa:juju/stable
<marcoceppi> that error has since been patched
<jamespage> marcoceppi, so I get:
<jamespage> I: config.yaml: option access-network has no default value
<jamespage> instead - that's acceptable right?
<marcoceppi> yeah, that's just an I which is an info
<marcoceppi> and won't kill proof
<marcoceppi> I guess you can say we're trying to protect our users too much, but that's the compromise we've made
<jamespage> marcoceppi, it would be nice to have that doc'ed somewhere
<marcoceppi> jamespage: https://juju.ubuntu.com/docs/authors-charm-config.html
<marcoceppi> third bullet point
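A rough sketch of the check under discussion, not the actual charm-tools code: a typed option whose `default:` is left empty parses as None, which newer charm proof reports as an info line rather than an error:

```python
import yaml  # PyYAML

TYPES = {"string": str, "int": int, "float": float, "boolean": bool}

def proof_config(text):
    # Simplified stand-in for charm proof's config.yaml checks.
    for name, spec in yaml.safe_load(text)["options"].items():
        default = spec.get("default")
        if default is None:
            print("I: config.yaml: option %s has no default value" % name)
        elif not isinstance(default, TYPES.get(spec.get("type", "string"), str)):
            print("E: config.yaml: type of option %s does not match its default" % name)

proof_config("""
options:
  access-network:
    type: string
    default:
""")
```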
<jamespage> marcoceppi, ok shutting up now
<jamespage> thanks
 * jamespage should probably read stuff from time-to-time
<marcoceppi> jamespage: <3 np dude
<marcoceppi> jamespage: in other news the review queue is almost ready, about to add openstack charmers queue
<marcoceppi> jamespage: http://review.juju.solutions
<jamespage> marcoceppi, oh yes please
<marcoceppi> jamespage: so we also have automated charm testing, which will at a minimum run certain make targets (make lint, make proof, etc) and will try to run amulet tests. Obviously the openstack charms are doing their own thing in oil. Should we hook this up for the openstack charms?
<jose> marcoceppi, lazyPower-sprint, mbruzek: hey guys, auth request for merging opentsdb (https://code.launchpad.net/~tvansteenburgh/charms/precise/opentsdb/fix-proof/+merge/234239) and nvp-transport-node (https://code.launchpad.net/~tvansteenburgh/charms/precise/nvp-transport-node/fix-proof/+merge/234233), both proof fixes from Tim
<lazyPower-sprint> jose: make sure they pass bundletester. there were a few that failed today due to oddly coincidental dependencies present.
<lazyPower-sprint> jose: if they pass bundletester, merge em and send me your history plz
<jose> lazyPower-sprint: is that wrt those two merges or my tests?
<jose> sorry for asking, but how do I use bundletester? I believe I am not familiar with it
<lazyPower-sprint> jose: nvm
<lazyPower-sprint> jose: those are fine for merging
<jose> ok, will send history in a min then
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/rabbitmq-server/config-fixup/+merge/234260
<afroyd> Hi, I deployed a local lxc mysql with juju and mysql failed to start.
<afroyd> Can I ask for help with this here?
<rbasak> sinzui: in dependencies.tsv from the juju-core 1.20.7 tarball, I can't find references to src/launchpad.net/goyaml or src/github.com/kisielk/gotool, but these directories exist.
<rbasak> sinzui: not sure what's going on here. Is this intentional?
<melmoth> hi there. does anyone know when juju 1.20.8 is supposed to be released?
<melmoth> i was told last week it may be the middle of this (current) week, and a customer is asking me if there is any news about it.
<whit> aisrael, heyo
<whit> aisrael, closing all but https://code.launchpad.net/~stub/charms/precise/postgresql/integration/+merge/233666
<aisrael> ack
<whit> stub, what about https://code.launchpad.net/~stub/charms/precise/postgresql/bug-1281600-log_temp_files/+merge/233735 ?
<stub> whit: That should be in the integration branch too
<stub> (config.yaml gains a log_temp_files parameter)
<whit> awesome
<whit_> aisrael, any luck with those tests?
<aisrael> whit: Not the unit tests, but I'm spinning up a fresh vm for it now
<whit> jcastro, https://help.github.com/articles/user-organization-and-project-pages
<whit> jcastro, https://github.com/juju-solutions/juju-solutions.github.io
<corntoegoblin> which file can be edited in maas-dns to manually add dns entries?
<corntoegoblin> im messing around on /etc/bind and /etc/bind/maas but nothing
<corntoegoblin> i found the db.* zone files but im not sure which one to edit
<corntoegoblin> ah. found it. /etc/bind/maas/zone.maas
<corntoegoblin> now i just need to know how to call a maas-dns update?
<corntoegoblin> i know rndc reload will update zones in the dns server, but how do i call maas to add dns for a server that is already enrolled?
<rbasak> sinzui: in dependencies.tsv from the juju-core 1.20.7 tarball, I can't find references to src/launchpad.net/goyaml or src/github.com/kisielk/gotool, but these directories exist.
<rbasak> sinzui: not sure what's going on here. Is this intentional?
<rbasak> (I'm in the process of sorting out an upload for 1.20.7)
<sinzui> rbasak, this is worrying news
<sinzui> rbasak, we need to investigate what the juju developers changed
<rbasak> sinzui: I'm confused though. Shouldn't the directories be generated based exactly on dependencies.tsv? So how did they end up there, if dependencies.tsv has no reference to them?
<sinzui> rbasak, absolutely. the tarball script doesn't know about anything other than juju and godeps. mgz, can you look into ^ this issue
<rbasak> sinzui, mgz: I'll just file a bug for now, if that's OK. This doesn't block my upload, but I would like a bug URL to refer to in a comment for future updates.
<mgz> rbasak: gotool should probably be in dependencies.tsv but is a dep-of-a-dep
<mgz> goyaml is likely an import typo
<rbasak> mgz: ah - I didn't think of second level deps - thanks.
<rbasak> Though shouldn't we be freezing commit ids of those too?
<rbasak> mgz: BTW, goyaml now appears twice
<rbasak> yaml.v1 is the other instance.
<mgz> rbasak: right
<mgz> and the launchpad.net one is an error
<mgz> I'll find and fix these
<rbasak> mgz, sinzui: thanks. I filed https://bugs.launchpad.net/juju-core/+bug/1368321 so I can refer to it in debian/copyright. No rush now then - I can just catch up when I next update the file.
<mup> Bug #1368321: Some third party embedded sources in the source tarball are missing dependencies.tsv entries <juju-core:New> <https://launchpad.net/bugs/1368321>
<mgz> rbasak: THANKS
<mgz> toomuchcaps
<whit> jcastro, https://github.com/pynashorg/pynashorg.github.com
<rbasak> sinzui: where's your PPA packaging branch, please? I don't see it anywhere obvious.
 * rbasak has looked in ~juju and at /juju-core
<rbasak> sinzui: ooh, finished reviewing diff against PPA and archive. We're in sync, apart from whitespace (I'll sort archive end)!
<rbasak> I hadn't realised. Thanks!
<corntoegoblin> seems like i might have found a bug
<corntoegoblin> you have to edit the FQDN on maas before commissioning for the dns entries to be added to the zone file for maas
<corntoegoblin> if you commission then change the fqdn, the entry is never added to dns
<corntoegoblin> also, when editing the power type to WOL, and then commissioning, it resets back to IPMI and you have to edit the power type twice
<corntoegoblin> is this by design?
<whit_> lazyPower-sprint, overview and links to blog for book: http://en.wikipedia.org/wiki/Continuous_delivery
<whit_> lazyPower-sprint, seminal work "devops"
<whit_> vs. corporate blargatron stuff that gets smeared around the web
<aisrael> marcoceppi: https://code.launchpad.net/~lynxman/charms/precise/drupal6/trunk
<aisrael> reviewing precise/glance
<aisrael> reviewing precise/hacluster
<bcsaller> reviewing test fails cs:precise/zookeeper-4
 * arosales looking at precise/tomcat7
<cory_fu> I'm looking at cs:precise/python-django-9
<lazyPower-sprint> tvansteenburgh1: https://code.launchpad.net/~tvansteenburgh/charms/precise/block-storage-broker/fix-tests/+merge/234168 -- fix plz
<cory_fu> marcoceppi: http://reports.vapour.ws/charm-tests/charm-bundle-test-800-results
<mbruzek> I am looking at lp:~tvansteenburgh/charms/precise/haproxy/fix-proof-and-tests into lp:charms/haproxy
<ayr-ton> I'm trying to destroy a service with agent-state: error. mysql/0
<ayr-ton> but juju destroy-unit mysql/0 doesn't work
<ayr-ton> ;~
<mbruzek> ayr-ton, you need to resolve the error
<mbruzek> ayr-ton, juju resolved mysql/0
<ayr-ton> It works =xx
<ayr-ton> I forgot to try this ;x
<ayr-ton> mbruzek, x3
<mbruzek> ayr-ton, glad to help
<bcsaller> looking at jenkins failure
<aisrael> mbruzek: python-virtualenv
<ayr-ton> I'm trying to deploy ghost blogging platform in a manual environment. The unit is pending for a long time (more than 15 minutes) and all I got from juju debug-log is: http://paste.ubuntu.com/8321242/
<ayr-ton> ;
<coreycb> has anyone seen this with juju-deployer?  http://pastebin.ubuntu.com/8321400/
<cory_fu> Has anyone seen an error like this before: Added charm "cs:precise/python-django-9" to the environment.\nERROR no settings found for "python-django"
<cory_fu> coreycb: Looking
<cory_fu> coreycb: I assume you gave deployer the -e option?
<cory_fu> And that you're bootstrapped
<coreycb> cory_fu, I'm bootstrapped but not using -e.  It seems to only occur in version 0.4.0-3.
<coreycb> I'm using my default env
<coreycb> I think I'll open a bug.  I'm able to use 0.3.6-0ubuntu2 successfully for the time being.
<cory_fu> coreycb: What version of python-jujuclient do you have?
<coreycb> cory_fu, 0.18.4-2
<cory_fu> Hrm.  Yeah, that seems like the best approach
<cory_fu> Sorry I couldn't be of more help
<aisrael> I've made a list of all of the juju environment variables I can find. I wouldn't mind a few eyes on this to make sure I described them accurately: https://github.com/AdamIsrael/docs/blob/juju-environment-variables/src/en/reference-environment-variables.md
<coreycb> cory_fu, np, thanks
<ayr-ton> I'm trying to deploy ghost blogging platform in a manual environment. The unit is pending for a long time (more than 15 minutes) and all I got from juju debug-log is: http://paste.ubuntu.com/8321242/
<rick_h_> hatch: might know what time frame is expected
 * hatch pops in
<hatch> ayr-ton: that's definitely not an error from the ghost charm
<ayr-ton> hatch, Its generic?
<hatch> are you able to deploy other things to this env?
<ayr-ton> Yep. I've just deployed mysql.
<ayr-ton> and juju-gui
<ayr-ton> both working
<hatch> hmm...
<hatch> did you change the port to 80?
<hatch> ayr-ton: tbh I've never seen that error before....it looks like an error from Juju and not from the ghost charm
<ayr-ton> hmm
<hatch> ayr-ton: can you log into the machine and copy the log files?
<hatch> juju ssh ghost/0
<hatch> cd /var/log/juju
<ayr-ton> one sec
<hatch> then there should be a ghost log file
<ayr-ton> hatch, http://paste.ubuntu.com/8322190/
<ayr-ton> there are other logs: machine-1.log  unit-ghost-0.log  unit-nginx-proxy-0.log
<hatch> ayr-ton: I'm thinking that it might be conflicting with something else on the machine
<hatch> I have installed it with haproxy and mysql on the same machine without issue
<hatch> but the permission denied error in the log is interesting
<hatch> I'm wondering if it's conflicting with nginx on the machine
<hatch> which nginx charm?
<ayr-ton> Actually there's no nginx on that machine.
<hatch> hmm....so that's odd then
<ayr-ton> Thats all I have: http://paste.ubuntu.com/8322217/
<hatch> ayr-ton: have you tried destroying the instance and the service and trying again?
<ayr-ton> Yep
<ayr-ton> same pending problem
<hatch> anything in the machine log?
<ayr-ton> only 2014-09-11 19:29:46 WARNING juju.cmd.jujud machine.go:353 determining kvm support: exit status 1
<ayr-ton> no kvm containers possible
<hatch> hmm....it's odd because it doesn't even look like the charm has started installing yet
<hatch> will anything else install?
<hatch> say.....wordpress
<hatch> I'm just shooting in the dark here. I've installed it many times on many different providers without issue so I'm kind of hoping it's something unrelated to the charm :)
<ayr-ton> hatch, one sec, I will try another charm on that machine.
<hatch> ayr-ton: thanks...see the very first line of the charm is https://github.com/hatched/ghost-charm/blob/master/hooks/install#L5 so that's why I'm thinking it's not related to the charm
<sebas538_> hey! does someone have handy juju api documentation?
<marcoceppi> sebas5384: which api?
<sebas5384> marcoceppi: the socket one
<marcoceppi> sebas5384: they're in the juju source, let me see if I can find it
<sebas5384> the one that https://github.com/Ubuntu-Solutions-Engineering/macumba/blob/master/macumba/__init__.py uses
<sebas5384> I would like to do a nodejs version of it :)
<marcoceppi> sebas5384: https://github.com/juju/juju/blob/master/doc/api.txt
<sebas5384> marcoceppi: i saw you are working on a javascript wrapper
<marcoceppi> sebas5384: that library was before the API exists
<marcoceppi> existed
<sebas5384> hehe nice
<marcoceppi> The best way is to simply implement based on the python code
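For anyone following along, a hedged sketch of what those Python clients do on the wire: a JSON request over the wss:// API socket. The field names mirror the 1.x request format shown elsewhere in this log and in python-jujuclient; the address and credentials are placeholders, and details may differ by juju version:

```python
import json
import ssl

from websocket import create_connection  # pip install websocket-client

# The state server presents a self-signed cert, hence CERT_NONE here.
conn = create_connection("wss://localhost:17070/",
                         sslopt={"cert_reqs": ssl.CERT_NONE})
conn.send(json.dumps({
    "Type": "Admin", "Request": "Login", "RequestId": 1,
    "Params": {"AuthTag": "user-admin", "Password": "secret"},  # placeholders
}))
print(json.loads(conn.recv()))
conn.close()
```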
<hatch> a node juju api implementation hey?
<sebas5384> yeah!
<marcoceppi> funny how that got hatch's attention ;)
<sebas5384> hatch: yeah, a module for that
<hatch> marcoceppi: haha
<sebas5384> haha (don't know why)
<marcoceppi> node.js would be a great language for connecting to the api and building tools around considering its async nature
<sebas5384> marcoceppi: thanks for the doc
<marcoceppi> sebas5384: hatch is in love with node.js ;)
<hatch> yup
<sebas5384> i'm getting into it
<sebas5384> hahaha
<sebas5384> marcoceppi: not surprisingly
<hatch> marcoceppi: lol lately I've been playing with Dart.....far superior to JS :O
<sebas5384> but i'm studying go lang, and its quite awesome
<marcoceppi> sebas5384: you may want to check out #juju-dev if you have api questions or mail juju-dev@lists.ubuntu.com they should be able to help sort questions you have if you start implementing
<sebas5384> thanks marcoceppi!
<sebas5384> but it seems that this channel is the only one that helped me
<sebas5384> hehe
<sebas5384> i tried there first
<sebas5384> hatch: dart seems to black magic for me
<sebas5384> *too
<sebas5384> and it looks like java ¬¬
<hatch> haha nah, what's black magic about it?
<sebas5384> hehe
<sebas5384> like angularjs
<sebas5384> xD
<sebas5384> thats black magic
<hatch> yeah well angular is gross haha
<sebas5384> hehe
<hatch> I think Google is doing really well with both Go and Dart
<hatch> some smart peeps there
<sebas5384> yeah of course
<hatch> now if they would just stop thinking webcomponents and angular are so amazing....
<hatch> :P
<sebas5384> hehe
<ayr-ton> hatch, it seems to be a problem with my machine 1
<hatch> ok well that makes me feel a little better - unfortunately that doesn't help you :)
<ayr-ton> is it possible to destroy and then re-add with the same number, 1? If I destroy it, it will come back as number 5.
<ayr-ton> ahahaha
<hatch> does the number matter?
<hatch> 5 > 1 therefore better ;)
<ayr-ton> Yes. =x The DNS entries have the unit name in them ;x
<ayr-ton> And I have OCD =x
<hatch> hah hmmm
<ayr-ton> It is possible?
<hatch> I don't think so sorry
<hatch> you can see the options with `juju help add-machine` and it doesn't show that option
<sebas5384> marcoceppi and hatch https://github.com/TallerWebSolutions/juju-client/tree/master
<hatch> sebas5384: I'm trying to figure out where I know you from...
<lazyPower-sprint> hatch: sebas5384 is our drupal charm master community contributor
<hatch> ohhhh that's it
<hatch> :)
<sebas5384> hehe
<sebas5384> lazyPower-sprint: thanks for the intro hehe
<hatch> so many great ppl
<sebas5384> love the master title hehe
<sebas5384> yeah, one of the things i love about juju is the community
<sebas5384> marcoceppi and hatch if you know any other resource like documentation that would help the development of the juju api client in javascript
<sebas5384> https://github.com/TallerWebSolutions/juju-client/issues/1
<sebas5384> please! any help would be awesome!
#juju 2014-09-12
<mhall119> kirkland: around?
<kirkland> mhall119: in mtg
<mhall119> kirkland: ping me when you're available, we might need help with this orangebox
<jamespage> gnuoy, can you review https://code.launchpad.net/~james-page/charms/trusty/nova-cloud-controller/cluster-sync-fix/+merge/234475 please
<jamespage> I merged your db sync branch and then realized it was a little bust
<jamespage> if you deploy a single unit, relate it, then add a new one, you end up with no services running anywhere!
<kirkland> mhall119: hi
<mhall119> hey kirkland, jose was trying to juju bootstrap the box but it seemed to be timing out trying to ssh somewhere
<mhall119> he's giving his talk right now though, so I don't know how far he got
<kirkland> mhall119: was it working earlier, and just stopped working now?
<mhall119> kirkland: no, this was the first time we booted it so far
<kirkland> mhall119: really?  I thought we shipped that box out on Friday last week, to arrive on Saturday morning?
<kirkland> mhall119: the first time you booted it was an hour before his talk?
<mhall119> kirkland: It shipped Friday but arrived Tuesday
<mhall119> kirkland: and I have no idea what I'm doing with it, my job was to get it to Jose
<whit> lazyPower-sprint, heyo
<whit> do you have link to the rails framework charm you demoed?
<bcsaller> whit: charm get cs:precise/rails
<whit> bcsaller, https://manage.jujucharms.com/charms/precise/rails
<whit> bcsaller, working on adding some links to your blog post
<ayr-ton> hatch, hey you. I'm testing the ghost charm with 4 units behind a haproxy. But theres some strange problems.
<hatch> alrighty shoot
<ayr-ton> hatch, well, the units don't seem to be synced. Sometimes when I refresh, I get a completely new blog, with the default name "Ghost" and the default welcome post. Sometimes the posts are correct but the title is the default one. And sometimes the full blog comes up.
<ayr-ton> hatch, The blog link is http://blog.ayr-ton.net.
<hatch> ayr-ton: is the ghost service related to mysql?
<ayr-ton> hatch, yep.
<hatch> ayr-ton: hmm that's interesting.....the issue is that some of the units aren't pulling from mysql
<hatch> has it been a while since you scaled up?
<ayr-ton> hatch, I think not. This appears to be happening since the deploy.
<hatch> ayr-ton: did you import these posts before or after you related to mysql?
<ayr-ton> after
<hatch> hmm....
<hatch> could you log into mysql to see if the posts are there?
<hatch> ayr-ton: I'm just wondering if the posts ended up in the local sqlite instead of mysql
<ayr-ton> One sec.
<ayr-ton> aaaaaah
<hatch> mysql empty?
<ayr-ton> hatch, When I added all the units, I made the relation with mysql before they finished deploying. The relation was made only with the units that hadn't failed. I manually fixed the units, and after that the relations with the db were not made. I removed the relation and then re-added it.
<ayr-ton> And all units seems to be fetching data from mysql now.
<ayr-ton> Yep. It is fixed.
<ayr-ton> hatch, Here is my canvas: https://plus.google.com/+AyrtonAra%C3%BAjo/posts/NrwNy2H3MgZ 4 units for ghost and 2 for mysql.
<hatch> and it's fixed now?
<hatch> awesome :)
<hatch> thanks for keeping on this
<ayr-ton> Yes.
<hatch> ayr-ton: does your blog get a lot of traffic?
<hatch> ayr-ton: also the version of the charm on github is more up to date than the charm in the juju charm browser...if you wanted to pull it down and do a local deploy
<ayr-ton> It is fixed.
<hatch> awesome :)
<hatch> ayr-ton: so is this on one single machine running openstack?
<ayr-ton> Nope. It's a manual environment, with 4 VPS in wablle.
<hatch> ohh awesome
<ayr-ton> I got it for a good price. For tests. Probably I will change that scheme to create blogs for some friends.
<ayr-ton> But a manual env is a little buggy.
<ayr-ton> hatch, A question. If I change the theme, will I need to upload the files to all units? Or will they be automatically synced?
<ayr-ton> Or would a better idea be to change the charm and then update it?
<hatch> ayr-ton: atm I haven't figured out the best way to do this...I was thinking that I would provide a config field where you could specify a remote repo where it would go and pull the template from
<hatch> but that means you would always need to upload it somewhere.....which isn't necessarily ideal
<hatch> I'm open to ideas
<hatch> btw  I shared your post :)
<ayr-ton> hatch, It can be a good start, because normally the themes already are uploaded at github, like this one: https://github.com/daleanthony/Uno
<hatch> ayr-ton: feel free to file any feature requests or if you would have liked better readme documentation https://github.com/hatched/ghost-charm/issues
<ayr-ton> I would love to \o/
<ayr-ton> hatch, Can I fork it to try to help in this?
<hatch> ayr-ton: you bet - i'd like to keep the charm in JS as much as possible just because Ghost is written in JS
<ayr-ton> Ok. Got it.
<hatch> ayr-ton: if you're going to work on a feature as well it would be nice to open up an Issue to discuss it just so you don't waste time on something that won't work out or something
<ayr-ton> hatch, I will add comments in an existing issue I just saw.
<hatch> great!
<hatch> ayr-ton:  also if you found some bugs with the manual provider plz file those issues with juju
<ayr-ton> Ok (:
<marcoceppi> hatch: did you ever use that charm.js charmhelper?
<hatch> marcoceppi: I did not
 * marcoceppi pulls latest code
<hatch> marcoceppi: writing charms in js is kind of a pita tbh....a lot of the things that need to happen are synchronous but node doesn't have a blocking shell-out
<marcoceppi> right, so joyent wrote some charmhelpers to help with that
<hatch> https://github.com/hatched/ghost-charm/blob/master/hooks/balancer-relation-changed for example
<hatch> oh yeah? did they use promises?
<hatch> where can I find this charm.js?
<marcoceppi> hatch: I'm looking for examples, it wasn't promises but callbacks iirc
<marcoceppi> hatch: I'd love to get them organized and added as another language for charmhelpers, but they're not really being worked on anymore
 * marcoceppi winks heavily at hatch
<hatch> haha
<hatch> marcoceppi: with the ghost charm I didn't really see anywhere where a helper would really simplify things
<hatch> it's pretty simple already
<marcoceppi> hatch: true, but it does things like wrap config-get and relation-get/set much like the host.py does for python charmhelpers
<hatch> I think we need another service which makes sense to write a charm in js to get a real good idea about where it should go
<marcoceppi> making it more javascripthonic
<hatch> yeah those things would be nice
<hatch> marcoceppi: maybe if there was more than my charm written in js :)
<marcoceppi> hatch: http://bazaar.launchpad.net/~dstroppa/charms/precise/node-app/refactor-hooks/files/head:/lib/ this is what I found, I thought they were a little more...involved
<hatch> yeah that's pretty much what I thought it would end up being
<hatch> is there any documentation about this charm? blog posts, vids etc? Looks really detailed
<eckes> hey, a quick question: when using amazon, juju reads the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY envs; however, Amazon's EC2 tools use AWS_ACCESS_KEY / AWS_SECRET_KEY. Is there a reason why it is not the standard env vars?
<eckes> well, i reported it as a bug, thanks  https://bugs.launchpad.net/juju-core/+bug/1368981
<mup> Bug #1368981: juju uses environment variables different from AWS EC2 tools <aws> <juju-core:New> <https://launchpad.net/bugs/1368981>
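Until that bug is resolved, one workaround, sketched here under the assumption that only the EC2-tools names are set, is to copy them into the names juju reads before invoking it:

```python
import os

# Map the EC2-tools names onto the names juju reads, without clobbering
# values that are already set.
for src, dst in (("AWS_ACCESS_KEY", "AWS_ACCESS_KEY_ID"),
                 ("AWS_SECRET_KEY", "AWS_SECRET_ACCESS_KEY")):
    if src in os.environ and dst not in os.environ:
        os.environ[dst] = os.environ[src]
```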
#juju 2014-09-13
<tiger7117> Hi
<tiger7117> i am installing precise MySQL on machine 2
<tiger7117> ERROR cannot assign unit "mysql/0" to machine 2: series does not match
<tiger7117> any one ?
<tiger7117> actually i wanted to setup trusty Wordpress/MySQL but it has issues so tried precise
#juju 2014-09-14
<MasterPiece> Is JuJu an open-source project? Where is the source?
<jaywink> MasterPiece, Juju is fully open source, code here: https://code.launchpad.net/juju-core
<dc2447> anyone suggest how to troubleshoot "ERROR index file has no data for cloud" when running juju bootstrap with a single environment of openstack?
<bloodearnest> anyone actually used the python-django charm to deploy a project from a url? I'm hittin basic issues, looks like a problem with the ansible stuff
<bloodearnest> (not charmhelpers ansible)
<bloodearnest> ah no it is charmhelpers ansible
<bloodearnest> hm, it's charmhelpers is out of date
<lazyPower> bloodearnest: so sync'ing the branch of charmhelpers in the charm worked?
<fuzzy> Following https://maas.ubuntu.com/2012/11/30/lets-shard-something/ , I end up with http://hastebin.com/rizuhubivo.sm can anyone explain why and what to do about it?
<lazyPower> fuzzy: need more info about why the install hook failed. paste a unit-configsvr log
<fuzzy> http://hastebin.com/wanenufica.avrasm
<fuzzy> Hows that?
<fuzzy> lamont:
<fuzzy> lazyPower:
<fuzzy> sorry lamont
<lazyPower> fuzzy: you're missing python-yaml on that cloud image
<fuzzy> ok
<lamont> he
<lazyPower> ideally the charm should handle this - if you add it to the install hook and issue a PR i'll review it and merge
<lazyPower> when i get home *
<fuzzy> ty
<fuzzy> http://hastebin.com/faqasiponu.sm
<fuzzy> Disco
<fuzzy> Following, https://maas.ubuntu.com/2012/11/30/lets-shard-something/ using the command "juju add-relation mongos:mongos-cfg configsvr:configsvr" I get "ERROR service "mongos" not found".  What am I doing wrong?
<fuzzy> http://hastebin.com/unicihizap.sm
#juju 2015-09-07
<stub> aisrael: users are just roles in PostgreSQL. When the PG charm generates the new user/role, we grant it the old user/role so it gets all the old permissions.
<stub> aisrael: So we don't need to introspect the database and change all the ownership, which is good as it is a) non trivial and b) quite invasive.
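A sketch of the grant stub describes, using psycopg2 with hypothetical role names; granting the old role to the new one carries the old permissions over via role membership:

```python
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres")
with conn, conn.cursor() as cur:
    # Hypothetical role names: the new role inherits the old role's
    # permissions through membership, no per-object ownership changes needed.
    cur.execute('GRANT "old_relation_user" TO "new_relation_user"')
conn.close()
```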
<ParsectiX> Hi jujuers. Is there a way to run a sandbox of juju on my laptop?
<sparkiegeek> gnuoy: dosaboy: wolsen: can I get some review ♥ for https://code.launchpad.net/~adam-collard/charms/trusty/swift-storage/guard-paused-unit-service-restarts/+merge/269860 please?
<gnuoy> sparkiegeek, isn't pause_aware_restart_on_change going to be applicable to lots of your maintenance actions?
<sparkiegeek> gnuoy: yes, I assume your next question is "why not charm-helpers?"
<gnuoy> sparkiegeek, it is, but maybe that's the layout change
<sparkiegeek> gnuoy: to which the response is - thought about it, but the is_paused() semantics could be different for each charm
<gnuoy> sparkiegeek, merged
<sparkiegeek> gnuoy: thank you!
<gnuoy> np
<stub> Is it still possible for juju to be running 2 hooks simultaneously in the case where you have two units smooshed onto one machine? Or are hooks serialized per machine now?
<stub> I think they are serialized now, as my primary and my subordinate hooks no longer clash.
<stub> And if so, anyone know what version this changed in?
<stub> Thinking that apt failing to grab a lock may indicate a real failure now, rather than something intermittent requiring retries
<elmo> stub: there's a broken assumption there .. brb
<sparkiegeek> broken assumption?
<stub> But the assumption is too large  for this margin
<elmo> haha
<elmo> stub: the broken assumption is that only juju is invoking apt on a system
<elmo> stub: the classic counterexample is landscape
<sparkiegeek> elmo: stub: right! I'm wondering if aptdaemon has a role to play here
<sparkiegeek> of course it's not a complete answer because there will always be something else invoking APT and taking locks
<stub> I think we need a synchronous apt-get update that keeps retrying until it works or fails hard (screwed apt, timeout). And an apt-get install that just tries and fails (eg. trying to install an unsigned package)
<stub> I know nothing about aptdaemon or if it can help.
<stub> I've looked at the code and can adjust retries or add my own timeout easily enough, so I can work around for a particularly problematic charm like Cassandra
<sparkiegeek> stub: well the idea of aptdaemon is that it's one process that talks to APT and clients just make requests of it to do things
<sparkiegeek> stub: I'm curious, what is it about Cassandra charm that makes the current logic problematic?
<stub> Sounds good if you can wait for it to report success, rather than fire and forget and hope the job is done before you need it
<stub> The Cassandra charm can install DataStax Enterprise, which exists in a password protected apt archive. Remembering to add the URL in the first place, getting it right - it doesn't sound like much, but so far everybody has screwed up some part of the configuration.
<sparkiegeek> so the issue is that it takes 5mins to report the screw up?
<stub> With archive.ubuntu.com, a PPA, and the US datastax.com archive - apt-get update takes a minute or two to run and fail. 30 times that is not 5 minutes.
<stub> Now I realize it is retrying a fixed number of times, I can lower that number. I'd assumed it was retrying forever, because that is what it felt like :)
<sparkiegeek> :)
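A minimal sketch of the retry behaviour stub is asking for, with invented retry counts and delays: retry `apt-get update` a bounded number of times, then fail hard rather than hang:

```python
import subprocess
import time

def apt_update(retries=3, delay=10):
    """Retry `apt-get update` a bounded number of times, then fail hard."""
    for attempt in range(1, retries + 1):
        if subprocess.call(["apt-get", "update"]) == 0:
            return
        print("apt-get update failed (attempt %d/%d)" % (attempt, retries))
        time.sleep(delay)
    raise RuntimeError("apt-get update failed hard after %d attempts" % retries)
```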
<stub> aptdaemon looks nice. I'm also drawn to a few nice features like partial updates (update only the sources in a given sources.list).
<stub> I have a feeling some of this will need work for wily. aptdaemon may simplify things.
<sparkiegeek> i'm not sure whether we ship policykit in cloud images, do you know?
<stub> No idea. Does it matter for root?
<sparkiegeek> well it seems that aptdaemon depends on PolicyKit, *shrug*
<sparkiegeek> I mean depending on aptdaemon just gives you a bootstrap issue - how do we get aptdaemon in the first place :)
<stub> sparkiegeek: The bootstrap packages aren't as hard, as we know there is a working primary archive or we would not have gotten as far as an install hook.
<stub> sparkiegeek: but yeah, it would need to retry there.
<sparkiegeek> stub: right, that's what I meant - two units racing to install aptdaemon
<stub> sparkiegeek: Until juju sticks charmhelpers on the units for us and the charm dependency bootstrap problem goes away.
<sparkiegeek> stub: I Want To Believe
<stub> Or we make aptdaemon a snap.
#juju 2015-09-08
<g3naro> hi, im having issues trying to access lxc- commands when running juju
<g3naro> both pointed to the local environment
<g3naro> is this a known issue?
<beisner> gnuoy, wolsen - avail to land this reviewed c-h bit, and review/land the dependent charm amulet test update MPs?  i'd like to move on to update others, but holding until these hit.  tia!  https://code.launchpad.net/~1chb1n/charm-helpers/amulet-svc-restart-race/+merge/269098
<beisner> thedac, gnuoy - resync'd the test branch to get fri's changes on lp:~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes & testing.  thanks!
<g3naro> anyone here?
<g3naro> know how i can deploy a centos6 box
<g3naro> ?
<lukasa> Hey, fun, I'm not successfully authing into the onboarding site in Firefox
<lukasa> Ugh, wrong channel, sorry. =D
<marcoceppi> g3naro: you'll need to follow this guide
<ennoble> Is there a way to know that a configuration setting has reached the machine and been processed? So far I've been checking that agent-status is idle to know that the config has been applied, but I don't know how to differentiate between idle because the config has been applied and idle because the config hasn't been applied yet.
<marcoceppi> ennoble: you can do a juju status-history for the unit in question
<marcoceppi> it should show you when it processed the config-changed event
<ennoble> marcoceppi: thanks, is there a way to access that information from the python juju client library?
<marcoceppi> ennoble: that's a great question, and I'm not aware of a way at the moment.
<marcoceppi> It's definitely exposed in the API, as that's where the cli gets it
<ennoble> marcoceppi: I've been using the watcher functionality in jujuclient to do that, but I seem to be able to miss it.
<ennoble> marcoceppi: so I may be able to do an RPC call to get back the status-history? Is it its own RPC call?
<marcoceppi> ennoble: I'm trying to find that out right now
<ennoble> marcoceppi: Thanks!
<marcoceppi> ennoble: I'm asking the devs in #juju-dev but I haven't gotten an answer yet
<ennoble> marcoceppi: jujuenv._rpc({"Type":"Client", "Request":"UnitStatusHistory", "Params": {"Name": "myunit/0", "Size" : 20, "Kind" : "agent"}}) did the trick
<marcoceppi> ennoble: good find, if you wanted to add that as a method to jujuclient I'm sure it'd be greatly appreciated
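A sketch of what such a jujuclient method could look like, built directly on the `_rpc` call ennoble found above; the function name is hypothetical:

```python
def unit_status_history(env, unit_name, size=20, kind="agent"):
    """Return the last `size` status-history entries for a unit,
    via the UnitStatusHistory request identified above."""
    return env._rpc({
        "Type": "Client",
        "Request": "UnitStatusHistory",
        "Params": {"Name": unit_name, "Size": size, "Kind": kind},
    })

# e.g. history = unit_status_history(jujuenv, "myunit/0")
```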
<beisner> thedac, gnuoy - friday revs sync'd in, passed the first run.  i've re-queued a couple add'l iterations.  https://code.launchpad.net/~1chb1n/charms/trusty/rabbitmq-server/amulet-refactor-1509b/+merge/270102
<thedac> beisner: excellent
<ennoble> marcoceppi: i've got to jump through quite a few hoops to submit a patch; but I do have two other bug reports on jujuclient that have been outstanding for a while with suggested solutions in them. I'm trying to work through the hoops on my end, but if someone had a couple minutes to fix the issues I've reported (the fixes are in the bug reports) that would be great.
<marcoceppi> ennoble: I'll take a look and help triage those through, thanks
<ennoble> thanks marcoceppi: the two I'm referring to are #1455302  and #1486297
<mup> Bug #1455302: enqueue_units doesn't correctly pass parameters to action <python-jujuclient:New> <https://launchpad.net/bugs/1455302>
<mup> Bug #1486297: Action doesn't correctly translate unit name into tag if hyphen present <juju-core:Won't Fix> <python-jujuclient:Confirmed> <https://launchpad.net/bugs/1486297>
<jose> hey guys, manual can't run in ports other than 22, right?
<marcoceppi> jose: elaborate
<jose> marcoceppi: I have someone who has a server running SSH on port 2222, and wants to set it up with juju + the manual provider
<marcoceppi> jose: juju add-machine ssh:user@host:port
<jose> oh, ok
<jose> thanks
<beisner> thedac, gnuoy - 2 more rmq iterations ok (18 of 18 clustered ok and happy).  that is:
<beisner> https://code.launchpad.net/~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes   +=>  https://code.launchpad.net/~1chb1n/charms/trusty/rabbitmq-server/amulet-refactor-1509b
<beisner> precise through vivid, juju w/ LE.
<thedac> ack
<beisner> poor ack, the real ack.  he probably gets highlighted all day every day.
<thedac> I'll start using /me nods :)
<beisner> nah, as you were, as you were.  i ack as well.   ack ack
<ntpttr> Hey everyone, I'm getting an error launching the mysql charm from the juju-gui, "1 hook failed: "install"". Here's the contents of /var/log/juju/unit-mysql-0.log on the machine the service was booted on: http://pastebin.com/GSgz50Sd
<ntpttr> It looks like the issue is a calledprocesserror happening related to the keyserver
<thedac> ntpttr: yeah, the output suggests the config setting 'key' has a value of null. I just fired off a deploy of mysql (without the gui) and it works. Not sure if this is a juju gui related issue. Do you mind filing a bug? https://bugs.launchpad.net/charms
<ntpttr> thedac: I think the issue is proxy related - when I did this at home it did work but behind a proxy now I'm having trouble. Would you still like me to file a bug?
<beisner> Unknown source: u'null'  seems unexpected, as it passes to the cmd as --recv null
<thedac> right ^^
<thedac> ntpttr: yes on the bug report. With as much detail on how to recreate as possible
<ntpttr> thedac: Okay, I'm doing this on the default bootstrapped setup provided by the Orange Box, should I mention that?
<beisner> yep i think the network is probably restricting tcp 80 egress.  and since the charm is pushing hkp over 80, probably getting blocked.   i would suspect in that enviro, a simple apt-get install of anything would also fail.
<beisner> (worth checking manually)
<ntpttr> beisner: To check that manually should I just juju ssh into the machine and try an install of anything?
<beisner> i'd make sure something like this succeeds:   sudo apt-get update && sudo apt-get install multitail
<thedac> ntpttr: try 'nc -vz keyserver.ubuntu.com 80'
<beisner> or, that's even better ;-)
<beisner> thedac's is-foo is quite handy
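The same egress check thedac's `nc -vz` performs, sketched in Python for units where nc isn't installed:

```python
import socket

def can_reach(host, port, timeout=5):
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

print(can_reach("keyserver.ubuntu.com", 80))
```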
<ntpttr> beisner thedac: All right, one second while the machine deploys again and I'll run that command (I cleaned up the environment after the last failure, but I've had it happen multiple times so I'm confident it'll happen again)
<thedac> ok
<ntpttr> thedac beisner: Uh so I have no idea what happened but it worked that time, I was looking at 'tail -f /var/log/juju/unit-mysql-0.log' and it went through the whole apt-get update and finished and the service started
<ntpttr> thedac beisner: Do you want me to pastebin the log for you?
<thedac> ntpttr: that is odd. Let us know if you run into it again.
<ntpttr> thedac: okay, will do. Thank you
<isantop> jcastro: Can I ping you about some crazy juju-ness?
<marcoceppi> isantop: it's getting to be EOD for most people on the east coast, anything I can help with?
<isantop> I'm trying to manage services on a personal server using juju
<marcoceppi> isantop: sounds reasonable
<isantop> joseantonior had me going through manual provisioning, but I'm hitting issues with the agent-status not progressing past allocating. I'm not sure what configuration needs to be done for the lxc side of things
<marcoceppi> isantop: so you're manually provisioning lxc?
<isantop> Well, I think I'm not and that's my problem ;-)
 * isantop is a total cloud n00b
<marcoceppi> isantop: no worries, why don't you recount how you got to where you are right now?
<isantop> did "sudo apt-get install lxc" on the remote server, and "sudo apt-get install juju-core" on my local machine (after adding the PPA)
<isantop> then did "juju generate-config", "switch manual", and "bootstrap"
<marcoceppi> isantop: so you're trying to bootstrap an LXC machine on the remote server? or trying to bootstrap the remote server?
<isantop> the environments.yaml file is currently pointing to the remote server itself. I haven't done any lxc-related things on the remote server apart from installing it.
<marcoceppi> isantop: right, okay, and so is the bootstrap node stuck at allocating?
<isantop> Exactly
<marcoceppi> isantop: or did it bootstrap properly?
<isantop> Er
<isantop> no, It did bootstrap properly
<marcoceppi> isantop: can you put the output of `juju status` into paste.ubuntu.com and send the link over?
<isantop> I removed the services I deployed and destroyed the environment already. I'll re-bootstrap
<isantop> http://paste.ubuntu.com/12316663/
<marcoceppi> isantop: I thought there was a failure to allocate?
<isantop> It bootstraps fine, but I can't deploy any charms
<marcoceppi> isantop: how are you trying to deploy charms?
<isantop> 'juju deploy charm --to lxc:0'
<marcoceppi> isantop: I'd do that and wait, it can take some time to get the first cache of the lxc image
<marcoceppi> isantop: it's best to do that, then to tail the machine-0.log on the remote host
<marcoceppi> isantop: that will give insight to any errors
<isantop> here is the machine-0.log: http://paste.ubuntu.com/12316694/
<isantop> And the juju-status after attempting to deploy the owncloud charm: http://paste.ubuntu.com/12316705/
<marcoceppi> isantop: what does `sudo lxc-ls --fancy` show on the remote server?
<isantop> marcoceppi: http://paste.ubuntu.com/12316722/
<marcoceppi> isantop: that's a good sign
<marcoceppi> isantop: status still show allocating?
<isantop> marcoceppi: Still allocating, yes
<marcoceppi> isantop: I think I see the problem
<marcoceppi> the containers are running
<marcoceppi> but don't have networking
<isantop> Ah
<isantop> How do I get them network?
<marcoceppi> it should just happen
<marcoceppi> this may sound bad, but try restarting the server if you can. Juju should come back online and the containers should auto-restart and hopefully the networking bridge will be active
<isantop> I'll give that a shot in a while
<isantop> Won't be able to do it right now
<lazyPower> isantop: can you run ifconfig and look for a lxcbr0 device?
<lazyPower> ^ on the state-server, thats currently trying to provision those lxc containers.
<isantop> lazyPower: I can see lxcbr0 when I ssh into the server, but there's no entry for it in ifconfig
<lazyPower> hmm
<isantop> Wait
<isantop> I just did that on my local machine. :|
<lazyPower> :) that'll do it every time
<lazyPower> as a side note, i freaked myself out one day doing that, when i was ssh'd into my maas machine which should be *loaded* with virtual ethernet devices for each vm/container running on it.. and i instantly panic'd when it came back with only a wireless card and loopback interface.
<isantop> haha
<isantop> http://paste.ubuntu.com/12316823/
<isantop> Assuming those all look okay?
<lazyPower> hmm, ok so 10.0.3.1 - it created the device and gave it networking
<lazyPower> and looking at your --fancy output none of the containers are listing with an IP Address, can you stop/restart the container to see if it brings up with networking? (it should have done this already and been peachy) - sudo lxc-stop -n <name-of-lxc-container> && sudo lxc-start -n <name-of-lxc-container>
<lazyPower> if its a temporary thing, that should kick it into acting right
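The stop/start cycle lazyPower suggests, spelled out; the container name comes from the lxc-ls output, and -d keeps the restarted container in the background:
    sudo lxc-ls --fancy
    sudo lxc-stop -n juju-machine-0-lxc-0
    sudo lxc-start -d -n juju-machine-0-lxc-0
    sudo lxc-ls --fancy   # the IPV4 column should now show a 10.0.3.x address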
<isantop> Would it just be the -1?
<lazyPower> juju-machine-0-lxc-0
<lazyPower> it's unfortunately going to be the full string of the container
<isantop> Nah, that's not too bad to type
<isantop> I did -1, but it's stuck at waiting 120 seconds for network device
<lazyPower> ok, so somethings actively stopping the container from grabbing a virtual device
<lazyPower> can you poke in /var/log/syslog to see if you see anything related to CGROUPS stopping anything?
<isantop> grepping syslog for cgroups (with -i) doesn't give me any output
<lazyPower> I'm not sure what happened, but i imagine that marcoceppi's proposed fix will clear this up - just rebooting the machine will be the simplest path forward.
<isantop> I'm currently on a ZNC on the server in question, so brb
<isantop> lazyPower: No change, it seems
<isantop> Current juju status: http://paste.ubuntu.com/12316885/
<isantop> Same issue if I lxc-stop/lxc-start
<lazyPower> hmm
<lazyPower> isantop: sorry i stepped away, one moment
<isantop> np
<lazyPower> ok, lets try something slightly different and see if we get networking... sudo lxc-create -t download -n u1 -- --dist ubuntu --release trusty --arch amd64
<lazyPower> run that on the state server and see if the manually provisioned lxc container gets proper networking
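The point of this test: if a hand-made container picks up a 10.0.3.x lease from lxcbr0 while juju's containers don't, the problem is on juju's side; if it doesn't, the host networking itself is broken. A sketch:
    sudo lxc-create -t download -n u1 -- --dist ubuntu --release trusty --arch amd64
    sudo lxc-start -d -n u1
    sleep 15
    sudo lxc-ls --fancy   # u1 should list an address on lxcbr0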
<isantop> I tried manually provisioning a container a while ago, and it got stuck at "Setting up the GPG Keyring"
<isantop> And eventually fails because it can't download the keyring from the keyserver
<isantop> (So, looks like no)
<lazyPower> interesting
<lazyPower> Are you behind some form of firewall/proxy?
<isantop> Oh, we do have csf set up
<lazyPower> googling csf returned cerebrospinal fluid... i don't think that's what you're referring to
<isantop> google "csf firewall"? :-p
<lazyPower> but i'll assume its an egress firewall?
<isantop> It's an iptables utility
<lazyPower> hum.. that doesn't explain the lack of addressing on the lxc containers, but it's reasonable to assume there's a rule blocking the gpg service from contacting the keyring server @ keyserver.ubuntu.com
<isantop> What port does that run over?
<lazyPower> port 11371 TCP
<isantop> okay, lemme open that up
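For csf specifically, allowed outbound ports live in the TCP_OUT list in /etc/csf/csf.conf; a sketch assuming a stock csf install (the existing port list shown is illustrative):
    # in /etc/csf/csf.conf, append 11371 (hkp) and make sure 80 is present, e.g.:
    # TCP_OUT = "20,21,22,25,53,80,110,113,443,11371"
    sudo csf -r    # reload the firewall rules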
<lazyPower> however
<lazyPower> ah wait this is seed so you cant edit the config, disregard
<isantop> Ah, yeah, now I can manually provision a container
<isantop> Are there any other ports that will need to be opened up for it to work correctly?
<lazyPower> 80 and 11371 should be it
<lazyPower> 11371 should also have fallen back over http tbh
<lazyPower> that fix was put in place circa natty narwhal....
<isantop> Okay, so that appears to be working. If I restart the two juju containers, will that get them back up and running?
<lazyPower> Its worth a shot
<isantop> Hmmmm, nope
<lazyPower> if that doesn't work, you may need to juju destroy them and attempt reprovisioning
<lazyPower> The last thing to inspect if reprovisioning doesn't work is to take a look through the LXC configs to see if there's a divergence between the networking setup by your manually provisioned lxc container and what juju thought was right.
<lazyPower> and i can help there ^ its 2 text files and some diffing
<isantop> No luck stopping/starting
<lazyPower> juju destroy-service owncloud should start tearing them down
<lazyPower> if the containers appear stuck, you can juju destroy-machine --force 0/lxc/1 - or the path to the container in juju status
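The teardown sequence lazyPower outlines, in order; the machine path 0/lxc/1 stands in for whatever `juju status` reports for the stuck container:
    juju destroy-service owncloud
    juju status                            # wait for the units to start dying
    juju destroy-machine --force 0/lxc/1   # only if a container stays wedged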
<isantop> Looks like juju-machine-0-lxc-0 is gone. But -1 is still there
<lazyPower> the /1 was allocated to which service?
<isantop> I'm not sure, it's not listed in juju status
<lazyPower> ah, looks like its some leftover
<lazyPower> you can safely remove that with sudo lxc-destroy -n
<isantop> How do I delete lxc containers?
<isantop> thanks
<isantop> Okay, I redeployed owncloud --to lxc:0
 * lazyPower crosses fingers
<isantop> The "workload-status/message" is currently "Waiting for agent initialization to finish"
<isantop> Current machine-0.log
<isantop> http://paste.ubuntu.com/12317287/
<lazyPower> yeah, machine-0 is kind of a black hole of provisioning information relating to lxc/kvm machines.
<lazyPower> :( we're flying kind of blind at the moment
<isantop> if I run juju ssh lxc:0, it looks like it grabbed one of my public IPs from eth1
<lazyPower> err
<lazyPower> hmm, this might be part of the newer networking stuff that recently landed... but unless this came from a MAAS/AWS environment it should be using the lxcbr0 networking bridge
<isantop> I only say that because the primary IP on eth1 is 173.248.161.18, and "juju ssh lxc:0" asked me to confirm the identity of "193.248.161.20". (We have a /29 static, and five of the addresses are allocated to eth1 via /etc/network/interfaces.)
<lazyPower> juju routes ssh requests in from the public interface, and back out the unit's private interface to the requisite unit
<lazyPower> that could be a lxc container on the bootstrap node, or a remote vm/server elsewhere in the DC
<lazyPower> the stateserver acts as a proxy for just about everything you do
<isantop> Yeah, still stuck at "allocating"
<isantop> Can I chroot into the lxc?
<lazyPower> That i'm not sure of without an IP address. I know the containers receive a cloud-init config to register w/ the juju ssh credentials so you can ssh ubuntu@host
<isantop> never mind, I figured that out: "lxc-attach -n juju-machine-0-lxc-1"
<lazyPower> did that attach you to a running console or a login prompt?
<isantop> No, it said it couldn't get init pid
<isantop> Oh, but running as sudo works
<isantop> I'm going to try re-bootstrapping the environment
<isantop> lazyPower: Hmmmm, doesn't appear to offer any changes.
<lazyPower> :(
<lazyPower> isantop: I need to step out for a bit. I'll be back around tomorrow and can assist you further then. Or you can fire off a mail to the list juju@lists.ubuntu.com - and one of the EU core folks may be able to lend a hand
<lazyPower> sorry I wasn't able to get you sorted this evening
<isantop> I'll let it steam about it overnight and see if anything magic happens. If not, I'll let you or jcastro know tomorrow
#juju 2015-09-09
<jose> isantop: solved? I won't read all your backlog
<isantop> jose: nope
<jose> isantop: what was the issue again? (the last one)
<isantop> containers aren't being provisioned with networking
<jose> oh
<jose> uh
<isantop> initially, we thought it was a firewall issue, as 11371 (I think?) was blocked
<isantop> And that prevented me from being able to manually set up a container. But even after fixing that, and being able to manually provision, we still have no dice
<jamespage> beisner, branches linked to bug https://bugs.launchpad.net/charms/+source/neutron-api/+bug/1474030
<mup> Bug #1474030: amulet _get_proc_start_time has a race which causes service restart checks to fail <amulet> <openstack> <uosci> <Charm Helpers:Fix Released> <neutron-api (Juju Charms Collection):Fix Committed by 1chb1n> <neutron-gateway (Juju Charms Collection):In Progress by 1chb1n>
<mup> <openstack-dashboard (Juju Charms Collection):Fix Committed by 1chb1n> <https://launchpad.net/bugs/1474030>
<jamespage> merged and landed into charm-helpers/next branches
<jamespage> neutron-gateway still needs a fix I think?
<gnuoy> jamespage, do you have any feedback on https://code.launchpad.net/~gnuoy/charms/trusty/neutron-openvswitch/local-metadata/+merge/270416 before I go and start thinking about amulet tests?
 * gnuoy braces for thats-an-awful-name-for-a-user-config-value
<lukasa> gnuoy: The name is clear, at least.
<gnuoy> thanks!
<gnuoy> speaking of which the associated description is sub-optimal
<jeand> hi all
<jeand> I pushed a new charm bundle to my namespace on launchpad
<jeand> http://bazaar.launchpad.net/~jean-deruelle/charms/bundles/mobicents-restcomm-mysql-bundle/bundle/files
<jeand> how long does it take to be indexed and available on the charm store at https://jujucharms.com/q/restcomm?type=bundle ?
<jeand> and did I do correctly ?
<jamespage> gnuoy, couple of niggles - but LGTM
<jamespage> +1 based on resolution of my niggles :-)
<beisner> jamespage, thanks & ack, neutron-gateway is wip
<jamespage> beisner, np
<jamespage> good to know
<beisner> jamespage, 1 wasn't bug-linked, ready for review:  https://code.launchpad.net/~1chb1n/charms/trusty/swift-proxy/amulet-update-1508/+merge/268790
<beisner> jamespage, fyi prob not going to link them all to the bug, but all are ultimately affected.  just planning to address as i update each for that, and other reasons.
<jamespage> beisner, https://code.launchpad.net/~james-page/charms/trusty/mongodb/pymongo-3.x/+merge/270525
<jamespage> fixup for mongodb if we have a nice way of testing :-)
<beisner> jamespage, uosci is shut down re: serverstack upgrade.
<jamespage> beisner, that's ok - it can wait :-)
<beisner> jamespage, i could probably fire off ceilometer's amulet on metal, it pulls in mongo
<beisner> jamespage, fyi mongodb's amulet tests deploy trusty 5 times :-/
<marlinc> I get the following error when trying to deploy to a OpenStack environment: 2015-09-09 13:28:36 ERROR juju.cmd supercommand.go:430 failed to bootstrap environment: waited for 10m0s without being able to connect: Permission denied (publickey,password).
<marlinc> I am however able to connect to the machine myself using ssh
<beisner> hi marlinc, what is your   `juju version` ?    also, are you using juju-deployer with a bundle, or a different method to deploy?
<cholcombe> lazyPower, are you up on how the charmhelpers codebase works?
<marcoceppi> jeand: it shouldn't take more than a few hours
<cholcombe> lazyPower, I have a make test failing but I'm not sure where it's coming from
<marcoceppi> jeand: when did you last push?
<marcoceppi> jeand: Ah, I see the issue, you'll need to name the file "bundle.yaml"
<jeand> ah ok
<jeand> Thanks :D
<marcoceppi> jeand: if you have charm-tools installed (https://jujucharms.com/docs/stable/tools-charm-tools) you can run `juju charm proof` against the bundle directory to catch any preliminary issues that would prevent it from showing in the store
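What that looks like end to end; the bundle path here is hypothetical:
    sudo apt-get install charm-tools
    cd ~/charms/bundles/my-bundle/bundle   # the directory holding bundle.yaml
    juju charm proof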
<jeand> marcoceppi, Thanks for the help
<jeand> I fixed it http://bazaar.launchpad.net/~jean-deruelle/charms/bundles/mobicents-restcomm-mysql-bundle/bundle/files
<jeand> and charm proofed it
<jeand> with no errors
<jeand> so will be waiting now to see if it shows up
<lazyPower> cholcombe: a bit, whats goin on?
<cholcombe> lazyPower, i'm modifying the ceph contrib code a little to add erasure coding support
<cholcombe> everything seems fine however the one test is failing
<cholcombe> FAIL: tests.payload.test_execd.ExecDTestCase.test_execd_run_dies_with_return_code
<cholcombe> i didn't think my code touched that though
<lazyPower> nah thats in charmhelpers.fetch i think
<cholcombe> ok
<cholcombe> it worked before i made my change so i think i broke it but i'm not sure how
<lazyPower> oh... maybe not
<lazyPower> welp, i think that pretty much sums up tha ti should be answering zero questions about this :D
<cholcombe> haha
<isantop> lazyPower: Still stuck in allocating. :/
<lazyPower> isantop: i think at this point its prudent to file a bug, as i'm not sure why the lxc nodes are not getting networking
<lazyPower> isantop: https://bugs.launchpad.net/juju-core/+filebug -- include the output from `juju version`  `juju status`  `ifconfig` `sudo lxc-ls --fancy` and what you've attempted to do so we can try to reproduce it.
<isantop> lazyPower: Will do. Will need to wait till lunch
<lazyPower> isantop: can you ping me with the link when its filed? i'll sub and pass it along to the -core team that can lend a hand.
<isantop> For sure
<isantop> I blame jcastro, but that's just me being silly
<lazyPower> isantop: heh, we do it too ;)
<marlinc> beisner, I'm using Juju 1.22.1-vivid-amd64
<beisner> marlinc, i generally use the juju version from the stable ppa.  see https://launchpad.net/~juju/+archive/ubuntu/stable
<beisner> marlinc, fyi the error you indicated isn't really openstack specific, in that the bootstrap has to happen before any services are deployed
<beisner> marlinc, so to troubleshoot/confirm, i'd try a couple of    juju bootstrap  /  juju destroy-environment   iterations using the later version of juju.
<marlinc> Isn't Juju supposed to try to reconnect every 5 seconds for example?
<marlinc> Okay, I'll see what the PPA provides
<beisner> marlinc, this was the bootstrap error, right?
<marlinc>  ERROR juju.cmd supercommand.go:430 failed to bootstrap environment: waited for 10m0s without being able to connect: Permission denied (publickey,password).
<beisner> marlinc, there are a lot of other variables that i can't presume, such as provider (maas, local, a public cloud), etc., but either way, i'd go to the latest stable juju version and go from there.
<marlinc> Cool, will try that
<cholcombe> anyone want to give me a hand with my diff: http://bazaar.launchpad.net/~xfactor973/charm-helpers/erasure-coding/revision/446
<cholcombe> I'm failing some tests but I'm not sure how I broke it
<wolsen> cholcombe, can you put a pastebin of the error you are seeing?
<cholcombe> sure
<cholcombe> https://gist.github.com/cholcombe973/5692a444b02097b3db73
<lazyPower> tvansteenburgh: ping
<cholcombe> wolsen, seems as though I broke it but I'm not sure how
<cholcombe> wolsen, I think for the function signatures I changed I need to create an indirection layer.  I'm not sure who is linked against this library
<wolsen> cholcombe, since they don't look compatible, you'll want to deprecate the methods before yanking them
<cholcombe> wolsen, yeah that sounds good
<wolsen> cholcombe, re: the test breakage - it doesn't look related to your change at all... let me see if I get the same errors as you
<cholcombe> ok
<wolsen> cholcombe, hmm I don't see those errors when running tests against your branch
<cholcombe> wolsen, interesting
<moqq> hey - i have an agent which is stuck in the "agent-status: executing, message: running action update" state. for some reason, during the update hook, everything got lodged. i manually killed the lodged processes, but juju is still waiting for the agent sitting in that state. the logs reveal the agent for that unit is shitting out tons of stack traces due to a "panic: runtime error: invalid memory address or nil pointer dereference"
<moqq> on version 1.24.5
#juju 2015-09-10
<miken> How can I check and kill a debug-hooks session running on a unit? (Seems someone has run and left a debug-hooks session open, but they've gone to bed :) ).
<miken> I can `juju run --unit myservice/1 ls` fine, but myservice/0 times out.
<jamespage> dosaboy, hey - I provided some feedback on your cinder fix for multi-host cinder
<jamespage> dosaboy, i think we need to bake the intelligence into the sub, with a default assumption of stateful until all subs provide that flag
<dosaboy> jamespage: agreed
<jamespage> dosaboy, lemme work up an alternative - I have cycles right now
<dosaboy> jamespage: still do the parsing on cinder side tho right?
<dosaboy> jamespage: was thinking we could even look at a Juju action to tidy up existing volumes that would be affected by this
<dosaboy> jamespage: fyi i'm updating cinder side to incorporate getting stateness from rel
<jamespage> dosaboy, ok - I put up a cinder-ceph branch to work with that hopefully
<jamespage> dosaboy, but maybe it should be 'stateless' rather than stateful - what do you think?
<dosaboy> jamespage: yeah stateless=False as a default
<jamespage> dosaboy, ok updated
<dosaboy> jamespage: tbh it would make coding this somewhat easier if the stateness was declared within the json
<dosaboy> but that would require a charmhelpers change
<dosaboy> i'll see what i can do
<dosaboy> actually is not a problem after all
<blahdeblah> jose: ping - when you're around, can I ask a few questions about your precise postfix charm?
<dosaboy> jamespage: https://code.launchpad.net/~hopem/charms/trusty/cinder/lp1493931/+merge/270658
<dosaboy> r4r
<dosaboy> i've deployed, seems to work well
<jamespage> dosaboy, they look ok but lets wait for amulet tests as well please
<jamespage> we broke some yesterday by mistake
<beisner> jamespage, gnuoy, coreycb, thedac - fyi - a handful of os-charms that still have the deprecated "categories" metadata.yaml bit will now be failing charm proof (lint).  ie. charm-tools is no longer just informing on that.  ;-)
<beisner> so, as we all touch each next charm, let's double-check metadata.yaml as a drive-by fix.
<gnuoy> yeah, I think that might stop them being ingested into the charmstore come stable release time
<lazypower> yep
<lazypower> proof errors actively prevent ingestion
<sparkiegeek> a warning is not an error though :/ (charm proof categories vs. tags is a warning)
<lazypower> sparkiegeek, anything above I: is treated as an error for the store.
<sparkiegeek> lazypower: weird. So you're saying Warnings are really Errors? Why not report them as such?
<lazypower> marcoceppi, ^
<lazypower> sparkiegeek, we're having a policy re-review at the charmer summit and I'll broach that topic on your behalf.
<sparkiegeek> lazypower: ta
<lazypower> I think we should treat warns as policy changes that will affect you in the near term
<sparkiegeek> my expectation is that a W can evolve into an E over time
<sparkiegeek> right
<lazypower> so its like last call before your charm is treated as bogus
<sparkiegeek> sure, that WFM
<lazypower> otherwise, i agree. Warnings being treated as errors means there's no transitional phase. You have I: and "does not compute". it's binary according to the store, and that's not cool.
 * lazypower does the angry policy dance
<beisner> sparkiegeek, lazypower - right.  but a W:  does exit non-zero, and that's what i'm watching for in automation.
<lazypower> beisner, as do all the tools. There's room for improvement there I think. it's a bit misleading to label it as a W when it's going to cause a hard stop.
<sparkiegeek> beisner: the behaviour is consistent in that "charm proof" will exit non-zero but I don't like linters exiting non-zero for Warnings (it's not really a Warning if it's non-zero it's a hard Error)
<beisner> sparkiegeek, yeah we've always just charm proofed along with lint, rather than having a separate test for just charm proof.   at any rate, it's now a blocker, and it's an easy fix.
<pmatulis> re 'upgrade-juju --version': (1) how to get a list of available versions, and (2) what logic is used to pick a version?
<jose> blahdeblah: sure
<cholcombe> i think i remember hearing that you could post your github charms to the charm store.  Is that true?
<marcoceppi> cholcombe: kind of, the juju is growing a juju store publish command
<marcoceppi> cholcombe: but that's not landed yet
<cholcombe> oh ok cool
<cholcombe> thanks marcoceppi
 * rick_h_ goes back to writing more juju publish spec stuff so the team can do the next steps
<bdx> core, dev: Is openstack-dashboard installed at the system level or in a venv?
<bdx> As far as I can tell it is installed at the system level
<bdx> I am having issues installing the panels for designate
<bdx> lots of errors, that all seem to point to openstack_dashboard.settings not being able to be imported
<bdx> by the designatedashboard package that is installed at the system level
<rick_h_> bdx: so you have more info on what charm you're using and what you're using to get horizon?
<bdx> rick_h_: I'm using openstack-dashboard-16
<bdx> rick_h_: I am deploying openstack-dashboard to lxc containers in my env
<rick_h_> bdx: just out of the box with the deb it comes iwth?
<bdx> rick_h_: yes
<rick_h_> bdx: cool, we found that if you used the debs the python path was the system but if you used from git it used a virtualenv and fought some of that
<rick_h_> bdx: but it sounds like your issue is a bit different.
<rick_h_> bdx: but we were using git to test out aginst horizon's latest tip from git
<bdx> rick_h_: gotcha.....yeah I'm not specifying any tips from git...
<rick_h_> bdx: sorry, yea that was the only thing I could think of off the top of my head
<bdx> designate will be included in liberty yea?
<bdx> we should look at adding a config in openstack-dashboard to optionally include the designate panels yea?
<rick_h_> bdx: not sure, we were only worried with our own integration.
<rick_h_> bdx: as a subordinate and such
<rick_h_> bdx: so haven't checked out designate
<bdx> rick_h_: It would be better to have it as a subordinate rather than including it in openstack-dashboard?
<rick_h_> bdx: hmm, I don't manage the dashboard but I would think so. I'd check with the openstack team. They're mostly UK based and EOD. However, I'd think that the charm would be the default horizon ootb and then you'd add on extra dashboards via either relation or even a subordinate.
<rick_h_> bdx: but don't quote me on that. It's just how we're approaching it for one chunk of work.
<bdx> rick_h_: I see. Is there a better chanel to contact the openstack-dashboard team on?
<rick_h_> and "we" here isn't the team managing the openstack stuff. However, there's a PR for a new subordinate relation for dashboards: https://code.launchpad.net/~bac/charms/trusty/openstack-dashboard/dashboard-plugin/+merge/270177 and https://code.launchpad.net/~saviq/charms/trusty/openstack-dashboard/simplify-settings
<rick_h_> bdx: so I think it's becoming a pattern
<rick_h_> bdx: this channel usually works, but in EU TZ it works best for them.
<bdx> rick_h_: totally. thanks man!
<rick_h_> bdx: yea, sorry I don't have a thorough fix for you.
<bdx> its coo
<bbaqar> Does anyone know what is the optimal value of worker multiplier in neutron-api charm?
<mwenning> hi guys -  I'm trying to deploy a bundle that references some local charms (ceph for one) - I get  Service:  ceph has neither charm url or branch specified
<mwenning> Yes I have set $JUJU_REPOSITORY to point to my charms dir.  Any ideas?
<lazypower> mwenning, it will say that and typically still deploy the charm. can you pastebin the error for me?
<mwenning> lazypower, I will next time.  I need some test data so I deployed it by hand.  A possible tweak is that I also used the --bootstrap option
<lazypower> mwenning, is this with deployer or quickstart?
<mwenning> lazypower, juju-deployer
<lazypower> ah, yeah - something went awry there then. deployer should have deployed. lmk if you run into it again and have a break to debug
<mwenning> lazypower, my last incantation was 'juju-deployer --config=ceph_bundle.yaml --bootstrap -L --debug
<mwenning> ceph_bundle.yaml was exported from juju-gui from a previous manual deployment
<lazypower> none of that would have caused an issue that i can see
#juju 2015-09-11
<miken> I have a unit which *should* be in an error state (juju log shows an error in the charm resulting in config-changed failing), but the unit isn't in an error state... anyone familiar with that?
<miken> More details on https://bugs.launchpad.net/juju-core/+bug/1494542
<mup> Bug #1494542: unit does not go to error state <juju-core:New> <https://launchpad.net/bugs/1494542>
<Odd_Bloke> How can I check what hooks have been called on a particular unit?
<Odd_Bloke> Ah, status-history seems to do it.
<Odd_Bloke> So I'm trying to test leader election/failover of a service with three units.
<Odd_Bloke> (N.B. We're not using normal leader election yet)
<Odd_Bloke> I stopped the instance running the leader; Juju has noticed (in `juju status`) but hasn't done anything to notify any of the other units in the service.
<Odd_Bloke> I would expect at least a relation broken or departed hook to be fired, but I'm not seeing that happen.
<Odd_Bloke> Does anyone know what I could do to investigate?
<lazypower> Odd_Bloke, the leader-elected hook runs on the unit that becomes the new leader juju elected
<lazypower> Odd_Bloke, there's only one way to truly know, is thats exposed via the is_leader check
<Odd_Bloke> lazypower: Couple of questions: (a) these units have a cluster relationship; am I wrong to expect the broken/departed hook on that relationship to be triggered?
<Odd_Bloke> Crap, I forgot what (b) was going to be.
<lazypower> well context here... let me scope this as someone not familiar with what you've done
<Odd_Bloke> lazypower: So the broad context is that we were using "lowest numbered unit is the leader" logic in the ubuntu-repository-cache charm.
<lazypower> so, leadership hooks run when leadership changes occur. leader_elected is always run on the leader, and if is_leader = true, take action. If you need to send data over the cluster relation, do so out of band relation-set -r # foo=bar  - otherwise you get nothing really for free with this aside from juju picking your leader, and exposing a few primitives for that. leader-set (i need to double check this) can be used to send data to all the subordinates
<Odd_Bloke> lazypower: We updated charmhelpers (to fix another bug) without noticing that the leader stuff had been pulled in.
<Odd_Bloke> lazypower: So at the moment I'm trying to jerry-rig the "lowest numbered unit is the leader" logic back in (to fix existing deployments).
<lazypower> That sounds troublesome
<Odd_Bloke> lazypower: And then we will look at moving forward to proper leader election.
<lazypower> also keep in mind leadership functions landed in 1.23 - so anything < (eg: whats shipping in archive) will not work w/ leadership functions
<lazypower> i ran into this with the etcd charm
<Odd_Bloke> Yeah, that's part of the reason we aren't moving straight forward to leadership election.
<lazypower> the charm just blatantly enters error state, sets status, and complains loudly in the logs if you're using < minimum version.
<pmatulis> is this the only place where users can look in order to choose a tools version to upgrade to? https://streams.canonical.com/juju/tools/releases/
<Odd_Bloke> Because we need to take stock of where this is deployed, and maybe manage them through a Juju upgrade.
<Odd_Bloke> lazypower: But I was surprised that one or both of cluster-{broken,departed} weren't called on other units in the service when I stopped the machine running another unit.
<lazypower> broken/departed are implicit actions during the relation-destroy cycle
<lazypower> i dont think they get called when the machine is just stopped
<Odd_Bloke> OK.
<lazypower> pmatulis, might want to try asking that in #dev - i dont think they monitor #juju as actively as the eco peeps.
<Odd_Bloke> lazypower: So in a pre-leadership-election world, how do you get notified of/handle a machine going AWOL?
<tvansteenburgh1> that is surprising if true lazypower
<lazypower> Odd_Bloke, to be completely honest, i don't think we did, because there was no good way to handle it without an implicit action causing a hook to be fired. The workaround for something like this is to use DNS and hide everything behind load balancers.
<lazypower> tvansteenburgh, i've shot a few services in the aws control panel and never saw a broken/departed hook fire
<lazypower> this might be a regression i witnessed
<lazypower> here, i'll stand up an etcd cluster, scale to 4 nodes and kill of 1
<tvansteenburgh> Odd_Bloke: i'd ask about that in #juju-dev too
<lazypower> lets test this theory on 1.24.5 and see if it behaves as we expect
<tvansteenburgh> if you don't i will
<lazypower> bootstrapping, should be g2g in ~ 8
<tvansteenburgh> lazypower: cool, i'll wait :)
<lazypower> i mean the more we talk about this
<lazypower> yeah it seems like a big oversight
<lazypower> so i'm hoping i witnessed oddity in one environment, or mis-remembering
<mthaddon> hi folks, can someone help me with the juju local provider on vivid with 1.24.5 (i386)? I was running into https://bugs.launchpad.net/juju-core/+bug/1441319, set the mtu as advised, and am now getting "container failed to start and was destroyed"
<mup> Bug #1441319: intermittent: failed to retrieve the template to clone: template container juju-trusty-lxc-template did not stop <canonical-bootstack> <cisco> <cpec> <deployer> <landscape> <lxc> <oil> <regression> <systemd> <upstart> <juju-core:Triaged by cherylj> <https://launchpad.net/bugs/1441319>
<cherylj> hi mthaddon, there was another issue where the local provider wasn't working on vivid.  Let me double check that it was fixed in 1.24.5
<mthaddon> cherylj: great, thanks
<cherylj> mthaddon: in the meantime, can you get the contents of /var/log/juju/containers/juju-trusty-lxc-template/console.log into pastebin or something for me to look at?
<mthaddon> cherylj: is that from one of the instances in the environment? I don't see that on my local machine
<cherylj> mthaddon: it should be on your system if you're running the local provider.
<mthaddon> cherylj: https://pastebin.canonical.com/139636/
<pmatulis> lazypower: alrighty
<mthaddon> er, I mean http://paste.ubuntu.com/12338803/ as some in this channel won't be able to see the one above
<cherylj> mthaddon: sorry!  it's in /var/lib/juju/containers
<cherylj> muscle memory of going to /var/log/juju
<mthaddon> cherylj: that's a 0 byte file on my machine :/
<cherylj> mthaddon: okay, let me take a quick look at 1.24.5
<cherylj> mthaddon: if you unset the mtu and try to bootstrap again, can you see if that console.log file gets created?
<cherylj> the setting of the mtu was for that very specific environment in that bug
<mthaddon> sure, gimme a few mins - was pulled into a call, but will get to it soon
<cherylj> mthaddon: np, I'm going to spin up a vivid machine and see if I can recreate
<mthaddon> cherylj: removed and still getting "container failed to start and was destroyed", but this time I have logs - http://paste.ubuntu.com/12339144/
<mthaddon> "Incomplete AppArmor support in your kernel. If you really want to start this container, set lxc.aa_allow_incomplete = 1 in your container configuration file"
<mwenning> lazypower, good morning
<lazypower> mwenning, o/
<mwenning> lazypower, any ideas from my pastebin?
<lazypower> mwenning, honestly, haven't had a chance to take a look - let me wrap up this debug session im' doing for Odd_Bloke  and i'll take another look
<mwenning> lazypower, k no hurry
<lazypower> tvansteenburgh, ok a 7 node etcd cluster just settled.... i won't get into why it's 7 nodes large.
<lazypower> but it rhymes with i'm impatient
<lazypower> tvansteenburgh, Odd_Bloke - agreeable method of testing this is to just terminate the machine in the AWS control panel?
<Odd_Bloke> lazypower: I was doing this on GCE and stopped rather than terminated; but yes, that sounds reasonable.
<lazypower> ok state server received an EOF from the unit in question, no action taken so far
<cherylj> mthaddon: I haven't seen that error before.  Let me poke around a bit more.
 * mthaddon nods
<lazypower> Odd_Bloke, 3 minutes in and no action taken. Unless it suddenly decides to execute those hooks i think my assertion stands that it does nothing for you without an implicit breaking action.
<Odd_Bloke> Right.
<Odd_Bloke> And that is expected behaviour?
<lazypower> i dont know that i would expect it to do that
<lazypower> i think the state server should do its diligence to run the broken/departed hooks on that unit's relations until it comes back
<lazypower> tvansteenburgh, ^
<lazypower> Odd_Bloke, also - you cannot terminate the machine via conventional means - juju destroy-machine # --force just to get it out of the enlistment makes it go away, however the unit's departed/broken hooks do not run
<lazypower> so we still have possible broken config left around in the cluster
<Odd_Bloke> Blargh.
<lazypower> so, looks like we found a pretty gnarly case that we need to file for
<lazypower> get it on the docket to be looked at
<Odd_Bloke> OK, well at least I'm not going crazy. :p
<lazypower> Odd_Bloke, https://bugs.launchpad.net/juju-core/+bug/1494782
<mup> Bug #1494782: should *-broken *-departed hooks run when a unit goes AWOL? <juju-core:New> <https://launchpad.net/bugs/1494782>
<lazypower> Anything you can add here would be great, as i'm not sure i did a great explanation of the problem domain
<lazypower> mwenning, looking now
<lazypower> mwenning, the invalid config items strike me as the first issue -  deployer.deploy: Invalid config charm ceph osd-devices=/tmp/ceph0
<mwenning> lazypower, I was assuming those would go away once it could find the local ceph charm.
<mwenning> lazypower, the bundle was exported from a running juju session
<cherylj> mgz, I'm looking at the artifacts for bug 1494356, and I only see the container information for the juju-trusty-lxc-template container.
<mup> Bug #1494356: OS-deployer job fails to complete <blocker> <ci> <regression> <juju-core:Triaged by cherylj> <juju-core 1.25:Triaged by cherylj> <https://launchpad.net/bugs/1494356>
<Odd_Bloke> lazypower: So if I wanted to get those hooks to fire, what do I do?  juju remove-unit?
<lazypower> Odd_Bloke, i was trying to figure that out and by destroying the machine it removed the unit
<lazypower> so it effectively blocked me from doing anything to reconfigure the service
<mgz> cherylj: I think we have some namespace collision issues
<cherylj> mgz: I was wondering if that was the issue.
<mgz> cherylj: the logs are named the same thing and dir isn't preserved
<Odd_Bloke> lazypower: Ah, yes, remove-unit has triggered cluster-relation-departed
<lazypower> well that helps
 * mwenning is rebooting after a kernel update...
<lazypower> ack mwak
<lazypower> er
<lazypower> misping
<mgz> cherylj: what else is in those dirs apart from logs?
<mgz> cherylj: wondering if I can just archive the complete dirs
<cherylj> mgz: the logs and the cloud config for cloud init.  Nothing too large
<lazypower> mwenning, ok, lets see if we cant iron this out. When you comment out those config directives does the bundle deploy still fail by not finding the charm?
 * mwenning is waiting for juju to bootstrap
<lazypower> mwenning, also which version of juju-deployer are you running?
<mgz> cherylj: I updated the logging, there's a CI run in progress though so won't get anything new for a while
<cherylj> mgz: thanks!  ping me when it's done and I'll take a look
<mwenning> lazypower, juju-deployer 0.5.1-3
<lazypower> ok, thats the most recent release of deployer
 * lazypower checks off one box
<lazypower> Hey, if anybody here is interested in delivering docker app containers with juju - I'd love a review on this PR if you've got time - https://github.com/juju/docs/pull/672
<mwenning> lazypower, found at least part of it, waiting for bootstrap again
<mwenning> lazypower, the problem was that the charm dirs were named differently than "ceph" and "ceph-dash" .
<lazypower> i thought it boiled down to something like that. Deployer wasn't able to find the charms it was looking for
<mwenning> This worked OK with the command-line 'juju deploy', but juju-deployer apparently uses a different way of finding them (?)
<lazypower> well cool - glad you sorted it mwenning
<lazypower> it does. Deployer creates a cache in $JUJU_HOME
<lazypower> and it looks for dir names that match the charm names, as that's part of proof
<lazypower> juju is a bit more forgiving with that, raising a warning that charm_name doesn't match the dir name - but still deploys.
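So a repository layout deployer is happy with looks like this; the paths are illustrative:
    export JUJU_REPOSITORY=~/charms
    ls $JUJU_REPOSITORY/trusty
    # ceph  ceph-dash   <- each dir name must match the "name" field in its metadata.yaml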
<mwenning> lazypower, good to know
<pmatulis> i just did 'juju upgrade-juju' and got back < ERROR invalid binary version "1.24.5--amd64" > . indeed i am running juju-core 1.24.5
<pmatulis> agents are currently using 1.22.8
<Walex> I upgraded Juju from 1.23.3 to 1.24.5; should I also upgrade MAAS from 1.5.4 to 1.7.6 (all this on Ubuntu 14.04 LTS)? Is the MAAS upgrade likely to be pretty painless? It is a provider for Juju.
<Walex> pmatulis: I had much the same issue going 1.23.3 to 1.24.5 and things are complicated. Have a look at the recent thread here: http://comments.gmane.org/gmane.linux.ubuntu.juju.user/2824
<pmatulis> Walex: i read it but i don't see my error. i will try being explicit (--version) with a version other than 1.24.5 . i wonder what the rules are for a version to be considered "valid"?
<firl> Charles butler on?
<Walex> pmatulis: your error is described in the first message
<pmatulis> Walex: i don't see it
<Walex> pmatulis: http://permalink.gmane.org/gmane.linux.ubuntu.juju.user/2824
<pmatulis> Walex: nothing on 'invalid binary' there
<natefinch> marcoceppi: https://github.com/marcoceppi/juju.fail/pull/3
<natefinch> marcoceppi: oops, crud, missing a comma
<Walex> pmatulis: look a bit harder
<Walex> pmatulis: the words "invalid binary" don't indeed appear.
<Walex> pmatulis: it is your own choice to look for those words.
<Walex> pmatulis: look for ""1.24.5--amd64"
<firl> Anyone know how to change the "http://ubuntu-cloud.archive.canonical.com" mirror for installed unit packages?
<pmatulis> Walex: yes, i see that. i'm assuming my error is the same then
<lazypower> firl, Hello
<firl> hey lazypower
<lazypower> i believe you're looking for me?
<firl> Didnât realize you were the person emailing, was just going to say thanks for sending me the links to the .md files for the docker layer
<lazypower> Anytime!
<lazypower> Really excited to get your feedback there
<firl> Yeah, it might be a few weeks, but the hope is to wrap some of our services into that layer. I am interested to see how easily it works with a private docker hub
<lazypower> If you find any bugs that you need sorted to support that, feel free to file them on the GH repo for the docker layer and we will do our best. There's a todo item for charming up the private registry and adding relation stubs to support configuration ootb
<lazypower> so it may not do what you need just yet without some manual intervention
<firl> gotcha; I will see how far I can get. One of the requirements is to put a virtual bridge between a physical nic into docker. So I might have to contribute some stuffs anyways
<blahdeblah> Hi all.  Can anyone point me to an example charm which makes use of leader election?
<lazypower> blahdeblah, we use the leader election bits in etcd - http://bazaar.launchpad.net/~kubernetes/charms/trusty/etcd/trunk/view/head:/hooks/hooks.py#L36
<lazypower> its pretty simplistic however
<blahdeblah> lazypower: simplistic is what I want for now - thanks :-)
<natefinch> ahasenack, dpb1_: note that a fix to 1486553 has landed in 1.24
<dpb1_> natefinch: that is fantastic
<natefinch> https://bugs.launchpad.net/juju-core/+bug/1486553
<mup> Bug #1486553: i/o timeout errors can cause non-atomic service deploys <cisco> <landscape> <juju-core:In Progress by natefinch> <juju-core 1.24:Fix Committed by natefinch> <juju-core 1.25:In Progress by natefinch> <https://launchpad.net/bugs/1486553>
<dpb1_> natefinch: will there be another half landed in 1.24, or will that part be in 1.25?
<natefinch> dpb1_: figuring that out now.  The other half is a little more tricky, so we may put it into 1.25 instead
<dpb1_> natefinch: ok, understood
#juju 2015-09-13
<hermanbergwerf> Can I run juju on cloudstack?
#juju 2016-09-12
<lutostag> any way to make juju create a privileged lxd container rather than a userspace one?
<lutostag> (perhaps constraints)?
<rick_h_> lutostag: you have to change the lxd profile
<rick_h_> lutostag: there's a conversation on how to allow customizing this at deploy time for an application but it's not implemented yet
<pragsmike> hi everyone!
<rick_h_> lutostag: https://jujucharms.com/u/james-page/openstack-on-lxd mentions customizing the lxd profile used in the "LXD profile for Juju" section
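Following that document, the idea is to edit the LXD profile juju applies to the model's containers before deploying; a sketch, assuming a model named "default" so the profile is juju-default:
    lxc profile set juju-default security.privileged true
    lxc profile show juju-default   # verify before deploying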
<rick_h_> howdy pragsmike
<rick_h_> pragsmike: the but to track I was telling you about is: https://bugs.launchpad.net/juju/+bug/1566791
<mup> Bug #1566791: VLANs on an unconfigured parent device error with "cannot set link-layer device addresses of machine "0": invalid address <2.0> <4010> <cpec> <network> <juju:In Progress by dimitern> <https://launchpad.net/bugs/1566791>
<tonyanytime> Hello all, new to juju, stuck with a dying juju-gui, container doesn't exist, it doesn't  remove machine Or let me add it again. Kind of stuck in between. Any ideas?
<chaitu> Hi all, I am deploying an autopilot setup, First i deployed successfully and then my controller node got crashed. Can someone suggest me if there is any way to add a controller to the same cluster or to redeploy the same controller
<bloodearnest_> marcoceppi, cory_fu: hey guys - quick question: is it possible to add a --no-install-recommends to the basic layer's install of python3-pip?
<bloodearnest> reasons are many. a) it's only 3 packages, rather than ~43 b) I don't really want a full compiler toolchain installed by default on all charms using layer:basic! :D
<bloodearnest> is there a reason we pull in recommends (which includes build-essentials)
<bloodearnest> ?
<bloodearnest> is there a reason not to have apt_install do --no-recommends by default?
<Anita_> Hi
<Anita_> Hi Matt
<Anita_> Hi Matt Bruzek
<balloons> ping mgz
<mgz> balloons: yo
<Anita_> sudo apt-get install juju-local gives error : The following packages have unmet dependencies:  juju-local : Depends: lxc (>= 1.0.0~alpha1-0ubuntu14) but it is not going to be installed               Depends: lxc-templates but it is not going to be installed E: Unable to correct problems, you have held broken packages.
<Anita_> any idea
<Anita_> trying to install 1.25
<marcoceppi> bloodearnest: the pip modules embedded are source wheels, and need to be built on the machine in case of architecture dependencies
<marcoceppi> bloodearnest: feel free to open a bug though
<bloodearnest> marcoceppi, this is apt installing python3-pip
<marcoceppi> bloodearnest: yes, we pip install those wheelhouses from the wheelhouse directory in a charm
<bloodearnest> marcoceppi, yes, but the initial pip (to pip install the bundled pip), we install python3-pip, python3-setuptools and python3-yaml:
<bloodearnest> https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/basic.py#L46
<bloodearnest> and when I say install I mean apt install
<bloodearnest> which is 45 packages
<bloodearnest> in xenial anyway
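bloodearnest's suggestion as a one-liner, i.e. what the layer would run instead of a plain apt install:
    sudo apt-get install --no-install-recommends python3-pip python3-setuptools python3-yaml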
<marcoceppi> bloodearnest: right, I understand your point, I don't think I'm making my counter point very clear. I'm blitzing to the charmer summit start in 1.5 hours, can I recommend opening a bug on layer-basic so we can continue the conversation there?
<venom3> Hello!
<venom3>  I have a question about Juju gui.
<bloodearnest> marcoceppi, ack, ta
<venom3> I need to enable insecure mode into embedded juju gui (I'm not using the juju-gui charm)
<marcoceppi> venom3: that's a good question, urulama ^ any opinions?
<bloodearnest> marcoceppi, I think I follow now, sorry
<marcoceppi> bloodearnest: it's unfortunate, 46 extra packages is quite a hit to instance boot time
<bloodearnest> yeah :(
<bloodearnest> marcoceppi, might it be possible to have a layer option the says: build-required? And only pull in the extras if that is true?
<marcoceppi> bloodearnest: it'd be better if we auto-detected if the module needed to be compiled
<bloodearnest> right
<marcoceppi> not sure if that's possible
<bloodearnest> or just build a snap :)
<bloodearnest> ah, same problem, of course
<urulama> venom3: you mean GUI in the controller with "juju gui" command?
<venom3> urulama: yes. I need an http address, but  when I type "juju gui --no-browser", I get https:...
<urulama> frankban: do we support this? ^
<frankban> urulama, venom3: no, the GUI is only served via https
<pragsmike> I've had problems with the gui being rejected by my browser because the certificate too closely resembles one that the browser thinks it has seen recently
<venom3> frankban, urulama: sorry, but I read in "https://blog.jujugui.org/" that this feature was re-enabled
<venom3> posted by "jeffpihach 8:30 pm on February 17, 2016"
<frankban> venom3: that's about the juju-gui charm
<frankban> venom3: not the GUI as provided directly by Juju
<venom3> frankban: thanks. So do you suggest to deploy this charm to enable http protocol?
<venom3> In this case, could I have any conflict or other problems?
<frankban> venom3: so after you deploy the charm from https://jujucharms.com/juju-gui/ you could in theory just set "juju set juju-gui secure=false" to be able to access the GUI from http. this is of course highly insecure
<frankban> and discouraged
<frankban> venom3: can I ask what's your use case?
<venom3> frankban: yes, of course. we deployed maas and juju in a private infrastructure. Everything is internal. The first idea was to use nginx as a proxy, but we had problems with https (ok, i admit my lack of knowledge).
<venom3> Maas was a joke ('proxy_pass http://192.168.110.1/MAAS/;')
<frankban> venom3: cool
<venom3> frankban: we resolved by iptables
<frankban> venom3: so you can use the GUI directly from the controller? much better!
<pragsmike> venom3 what was the issue?  couldn't you use the https: URI?
<venom3> urulama, frankban: thank you for your time, really (i don't know what's happened, I've lost connection). Bye.
<hatch> venom3: why was the https url not working for you?
<venom3> hatch: we tried to use nginx as a reverse proxy. It was simple for the http protocol exposed from MAAS, but a pain for https from juju-gui. The idea is to set juju-gui insecure. This is not a problem, because it is behind a secure network.
<hatch> venom3: alright no problem - I was just curious if there is anything we could do to make this easier
<hatch> but it's definitely a workflow we don't recommend :)
<hatch> thanks
<venom3> hatch: no problem. I read the documentation and I know your recommendation. We have been using Juju for a short time, so we have plenty of information to acquire. It's worth the effort, because Openstack is a pain without tools like this
<venom3> And the community is doing a great work!
<hatch> :) glad you like it!
<venom3> glad you all exist, really.
<devop01> i have a fresh xenial install with the juju dev ppa and it can't seem to deploy a local charm.  I give it the path and it says it can't find it
<hatch> devop01: Juju 2?
<devop01> hatch: yes
<devop01> hatch: i don't get any debug logs or anything indicating what i did wrong
<hatch> devop01: if you navigate to the root path of your charm you should be able to go `juju deploy .`
<hatch> what command were you trying to run?
<devop01> hatch: juju deploy . also doesn't work.
<hatch> really...can you paste the error?
<devop01> i was trying to run `juju deploy ./repos/ceph-mon` which is my local copy
<hatch> devop01: did you get it solved?
<devop01> hatch: no i'm not sure how to get it to deploy local charms.  I'm on juju 2 beta18.  Deploying anything from the store seems to work fine
<hatch> devop01: can you paste the error message that it outputs?
<devop01> hatch: I think i buggered something with the charm.  I tried a different local charm and it deployed ok.
<hatch> devop01: alright, that was going to be my next suggestion :)
<hatch> you can use charm proof to see what you might be missing
<devop01> :)
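For reference, the two forms of local charm deploy discussed here, on juju 2.x; the path is the user's own:
    cd ~/repos/ceph-mon    # charm root: metadata.yaml, hooks/, ...
    juju deploy .
    # or from anywhere, with an explicit path:
    juju deploy ~/repos/ceph-mon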
<devop01> mskalka: https://jujucharms.com/docs/2.0/clouds-LXD
<x58> juju status says "ERROR connection is shut down" how do I restart this connection?
<Brochacho> What's the lxc image alias for xenial?
<catbus1> Is there any change on the charmstore in the last two weeks that causes this issue: http://pastebin.ubuntu.com/23170237/
<PCdude> catbus1: u realise u are using a beta version? I have had trouble before with beta versions of JUJU. My advice would be to stay with 1.25.6 until 2.0 comes out of beta
<x58> catbus1: beta17 is out... you might want to grab that and try again.
<catbus1> x58: I can do that, but the same version worked 2 weeks ago.
<PCdude> x58: that one is meant for ubuntu 16.10
<PCdude> catbus1: I dont think u are using 16.10?
<catbus1> PCdude: no, we aren't using 16.10.
<PCdude> catbus1: then beta15 is the latest rn
<x58>   Version table:
<x58>      2.0-beta18-0ubuntu1~16.04.1~juju1 500
<x58>         500 http://ppa.launchpad.net/juju/devel/ubuntu xenial/main amd64 Packages
<x58>         500 http://ppa.launchpad.net/juju/devel/ubuntu xenial/main i386 Packages
<x58> It's in the devel PPA for JuJu
<x58> xenial-updates has beta15...
<PCdude> x58: very strange, I cant even find that version on the launchpad site of juju itself
<x58> sudo add-apt-repository ppa:juju/devel
<x58> sudo apt-get update
<x58> PCdude: You mean right here: https://launchpad.net/~juju/+archive/ubuntu/devel
<PCdude> x58: this one: https://launchpad.net/juju-core
<PCdude> x58: even if u click further through, I cant find beta18
<PCdude> but indeed I can find it after adding the dev PPA
<x58> beta15 is the latest in xenial updates, wonder if they only show "stable" releases on juju-core
<x58> catbus1: there were changes to the charmstore IIRC. I can't find the bug report at the moment.
<hatch> catbus1: you'll want to update to the most recent version of Juju - there were charmstore changes made that require a more recent version of Juju than you're using
<hatch> assuming you want to stick with Juju 2
<catbus1> hatch: x58: understood, updated to beta18 now and trying again
<hatch> great, hope it works for you
<catbus1> hatch: x58: it's working, didn't throw out that error anymore.
<hatch> great glad to hear it
<mbruzek> hmo: http://ppa.launchpad.net/juju/devel/ubuntu/pool/main/j/juju-core/juju-core_2.0-beta15-0ubuntu1~16.04.1~juju1.debian.tar.xz
<ChrisHolcombe> i'm running into an issue where i can't deploy a charm from the local dir.  I'm seeing: juju.cmd.juju.application deploy.go:791 cannot interpret as local bundle: read .: is a directory.  I'm on 16.04 with juju 2 beta 18
<hatch> ChrisHolcombe: it appears that Juju doesn't see the path you're passing in as a local charm
<ChrisHolcombe> hatch, is there something special i need to do to make it see it differently?
<hatch> is it possible that your charm is invalid? Have you ran charm proof on it?
<hatch> there isn't
<ChrisHolcombe> charm proof passes
<hatch> it should "just work" :)
<hatch> well then!
<ChrisHolcombe> the only thing i get is a W for no copyright file
<ChrisHolcombe> no biggie :)
<hatch> haha right
<hatch> you could also try via the GUI using a zip of the charm
<ChrisHolcombe> hmm ok
<hatch> I'm assuming that you're just running `juju deploy .` ?
<ChrisHolcombe> hatch, yup
<ChrisHolcombe> hatch, deploying full path with ./{path} doesn't work either
<hatch> typically when I've seen that problem it's been because of a proof issue
<ChrisHolcombe> yeah
<rick_h_> hatch: no, proof and deploy aren't connected
<rick_h_> so not sure wtf, ChrisHolcombe what's the ls of the directory look like?
<hatch> no they aren't, but I've usually found that proof will catch why deploy doesn't work :)
<rick_h_> ChrisHolcombe: I think it keys off either bundle.yaml or metadata.yaml
<hatch> for some reason it's thinking you're trying to deploy a bundle
<ChrisHolcombe> rick_h_, nothing special.  has a hooks dir, metadata.yaml, config.yaml.  All the usual pieces.  This deployed fine back on juju 1.25.x
<rick_h_> ChrisHolcombe: right, there was a change in the you have to do ./ in the path
<rick_h_> ChrisHolcombe: maybe go up one dir and try juju deploy ./dirname ?
<ChrisHolcombe> rick_h_, yeah i tried that also.
<ChrisHolcombe> rick_h_, i also tried that :D
<rick_h_> ChrisHolcombe: heh ok. have to check
 * rick_h_ goes to download a charm and try
<ChrisHolcombe> rick_h_, https://gist.github.com/cholcombe973/7c33286c38bc36caf233bfc7a511c2ed
<hatch> ohh
<hatch> yeah reading the source here, that i suppose is expected
<elopio> \o/ quassel installed in canonistack with juju. Sooo nice.
<ChrisHolcombe> hatch, ok cool so the real error is it can't find my charm?
<hatch> yeah
<hatch> fwiw I'm not a core dev, I'm just reading the source :D
<ChrisHolcombe> yup
<ChrisHolcombe> i can sorta read Go
<ChrisHolcombe> hatch, it looks like maybeReadLocalBundle falls through to maybeReadLocalCharm
<hatch> yeah, I had a laugh at the names
<hatch> hah
<rick_h_> ChrisHolcombe: hatch yea, there was some recent changes in that to clean it up a bit when the output got cleaned up
<hatch> it looks like it's uploading as revision 3 then trying to deploy revision 9
<hatch> charms?revision=3&schema=local&series=trusty
<rick_h_> ChrisHolcombe: was the charm previously installed?
<hatch> Deploying charm "local:trusty/gluster-charm-9".
<hatch> so revision drift it appears?
<ChrisHolcombe> hatch, it seems to rev the revision every time i try to deploy it
<rick_h_> there's a bug around the upgrade of a local: charm if I recall
<rick_h_> it should be +1'ing the revision automatically each deploy
<hatch> ahh this might be it then
<rick_h_> since it's local and not tied ot the charmstore
<ChrisHolcombe> rick_h_, ok
<ChrisHolcombe> rick_h_, could i deploy from the cs and then upgrade via local?
<rick_h_> ChrisHolcombe: you can use --switch, but I'm not sure there. I just tried with the ubuntu charm and deployed it three times ok
<rick_h_> ChrisHolcombe: can you create a new model and try again?
<ChrisHolcombe> hmm i haven't used that before.  let me try
<rick_h_> ChrisHolcombe: maybe there's something in the history there in that model that's causing something. Just to narrow down wtf
<ChrisHolcombe> rick_h_, i have a revision file that says 3.  I'm not sure i remember where that came from.  It hasn't been touched in a year
<hatch> Ohh that might be where the 3 is comign from then
<hatch> :)
<rick_h_> ChrisHolcombe: it's probably coming from the revision file
<rick_h_> ChrisHolcombe: as the start revision
<ChrisHolcombe> ok
<rick_h_> my ubuntu deploy started at rev 7
<ChrisHolcombe> rick_h_, right.  depends on if you've deployed that before in the model i think
<ChrisHolcombe> rick_h_, hacky workaround didn't work haha.  i tried to deploy from the cs and upgrade from local
<lp_sprint> perrito666: tweet me that pic plz
<perrito666> lp_sprint: https://launchpad.net/juju/+milestone/2.0-beta18
<catbus1> Hi, how do I check if the lxd containers are coming up? All the containers are in 'error' state in the machine section of juju status, and they show 'waiting for agent init to complete' for a while.
<hatch> catbus1: I believe that this is a known issue right now
<hatch> there is a workaround, one moment
<hatch> catbus1: well to answer your question you can run `lxc list` :)
<catbus1> lxc list shows empty list
<catbus1> running juju beta12 though
<hatch> ohh, ok I'm not sure, are you able to upgrade to b18?
<catbus1> yes
<hatch> lots has changed since b12 :)
<catbus1> will do
<hatch> catbus1: feel free to ping me if the issue persists with b18
<catbus1> mliberte: hey
<mliberte> hello
<catbus1> ping
<catbus1> mliberte: ping
<catbus1> all set
<catbus1> hatch: err.. with b18, only one machine picked up by maas to deploy (should be 4), and juju status shows machine 1, 2, and 3 in error state.
<hatch> these are still lxd? Are they on Xenial?
<catbus1> one sec
<hatch> if they are, you may have to run
<hatch> juju set-model-config enable-os-refresh-update=false
<hatch> juju set-model-config enable-os-upgrade=false
<hatch> I'm not sure if that bug had been fixed or not yet
<hatch> (assuming they are stuck in pending)
<catbus1> hatch: the juju controller shows maas isn't able to find machines matching the tag constraints, but we only specify machine 0 with tags, not 1, 2, or 3.
<hatch> hmm, that is out of my area of expertise :)
<lazy_sprint> perrito666: https://bugs.launchpad.net/juju/+bug/1595720
<mup> Bug #1595720: Problems using `juju ssh` with shared models <ssh> <usability> <juju:Triaged> <https://launchpad.net/bugs/1595720>
<catbus1> will add the same tag to other machines to see if it works
<catbus1> can not reproduce the issue anymore.
#juju 2016-09-13
<Anita_> while installing juju 1.25 via sudo apt-get install juju-local, getting error: The following packages have unmet dependencies: juju-local : Depends: lxc (>= 1.0.0~alpha1-0ubuntu14) but it is not going to be installed; Depends: lxc-templates but it is not going to be installed. E: Unable to correct problems, you have held broken packages.
<Anita_> how to resolve this?
<Anita_> any idea?
<Anita_> Hi Matt
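A common way to dig into an apt error like this, nothing juju-specific (a held or version-pinned lxc package is the usual culprit, per the "held broken packages" message):

    sudo apt-get update
    apt-cache policy lxc lxc-templates       # check which candidate versions apt sees
    sudo apt-get install lxc lxc-templates   # installing the deps directly surfaces the real conflict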
<caribou> A question on actions (juju 1.25) : what is required in order for an action to return success (python script)
<caribou> I'm calling sys.exit(0) but my action systematically fails, even though it returns 0 in debug-hooks
<bdx> marcoceppi: can you clear out my aws charm dev account?
<D4RKS1D3> Hi, I need to add new hard disk to an lxc
<D4RKS1D3> but I received this line "lxc-start: utils.c: safe_mount: 1284 No such file or directory - Mount of '/media/disk3' onto '/usr/lib/x86_64-linux-gnu/lxc//sdc1' failed"
<D4RKS1D3> Does anyone know what happened?
<D4RKS1D3> Thanks!
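The 'No such file or directory' from safe_mount usually means the mount target does not exist inside the container; with legacy lxc the bind mount can be declared in the container config, where create=dir makes lxc create the target. A sketch using the path from the question:

    # in /var/lib/lxc/<container>/config
    lxc.mount.entry = /media/disk3 media/disk3 none bind,create=dir 0 0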
<caribou> nevermind, I found the reason why it always failed : was relying on deprecated JUJU_HOME :-/
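For reference, a minimal juju 1.x action sketch in shell; an action succeeds only if the script exits 0 and never calls action-fail (action-set and action-fail are the standard hook tools, the payload key and helper are hypothetical):

    #!/bin/bash
    set -e
    result="$(do_the_work)"        # hypothetical helper doing the actual work
    action-set outcome="$result"   # data shows up in `juju action fetch`
    # on failure: action-fail "reason"; exit 1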
<magicaltrout> fscking jet lag
<catbus1> Hi, juju debug-log doesn't work with beta18, is it known? Any workaround?
<catbus1> any other way to view the juju logs during the deployment
<rick_h_> catbus1: you need to run it from the controller model. juju switch controller
<catbus1> rick_h_: thanks!
<magicaltrout> ahhh one of those days where you've had 4 hours sleep and decide to rewrite large chunks of your talk...
<x58> magicaltrout: Best kind of days.
<magicaltrout> we'll see at 1pm if i have a talk or not ;)
<x58> lol
<lpSummit> Greetings programs
<magicaltrout> they called him chuuuuuuck... chuuuuck the truck.....
<lpSummit> magicaltrout: i see what you did there
<magicaltrout> have to do something to entertain myself after ~4 hours sleep
<lpSummit> i know that feeling sir. I was up a bit too late myself
<magicaltrout> i have the inverted problem. Wake up at 4:30 cause of the jet lag
<lpSummit> I'm ready for a mid day coma
<magicaltrout> hehe
<lpSummit> s/coma/nap/
<cholcombe> rick_h_, looks like beta 17 is broken also for me.  i'm trying to find beta 16
<rick_h_> cholcombe: doh, ok.
<rick_h_> cholcombe: just to verify, this is the local charm deploy issue?
<rick_h_> cholcombe: beta15, I think, is in the xenial archive; that might be a good middle ground to check easily
<magicaltrout> anyone know what uses layer:apt so I can see what I've done wrong in my own?
<cargill> I have this issue now, possibly with beta 18, seems like lxd issue, juju status and juju controllers --refresh error out with 'invalid controller tag "" returned from login: "" is not a valid tag'
<cholcombe> rick_h_, correct
<cholcombe> rick_h_, ok i'll try that
<rick_h_> cargill: this is because it's a beta17 controller with the beta18 client
<rick_h_> cargill: there was a breakage and the controller needs to be rebootstrapped with beta18
<cargill> how do I upgrade the controller?
<rick_h_> cargill: you cannot
<cargill> oh, so remove and start over
<rick_h_> cargill: yes please
<rick_h_> cargill: soon with rc1 we'll support upgrades but the betas still do not
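Tearing down and re-bootstrapping a beta controller looked roughly like this (controller and cloud names hypothetical; note the bootstrap argument order flipped to <cloud> <controller> around rc1):

    juju kill-controller mycontroller   # or: juju destroy-controller --destroy-all-models mycontroller
    juju bootstrap mycontroller lxd     # beta-era order: <controller> <cloud>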
<lpSummit> magicaltrout: - we've used the apt layer, one sec while i find you an example
<magicaltrout> i found your example lpSummit
<magicaltrout> thanks
<magicaltrout> in beats
<lpSummit> :D
 * lpSummit sings the beats stack song
 * lpSummit notes it has no lyrics
<lpSummit> boodoom, bud duh boodoom boodooom buh duh buh duh boodoom boodoom
<magicaltrout> lpSummit: http://144.76.63.131:5050/#/
<lpSummit> oooo mesos
<magicaltrout> yup, vanilla mesos on Xenial
<magicaltrout> in a semi working charm
<mliberte> Hi everyone. How do I add heat to openstack bundle?
<lpSummit> magicaltrout: question for you
<lpSummit> magicaltrout: have you considered making any level of SDN abstracted in the mesos charms?
<magicaltrout> hey lpSummit sorry was on a call
<magicaltrout> absolutely I have
<lpSummit> magicaltrout: great, the juniper guys are interested in talking with you if you have a moment
<lpSummit> magicaltrout: we're in the container track room
<magicaltrout> lpSummit: cool, i'm prepping for my 1pm demo, but we can talk after that
<lpSummit> sounds good to me
<catbus1> Hi, I don't see 2.0 version of this page https://jujucharms.com/docs/1.25/authors-charm-upgrades, is it the same for 2.0?
<magicaltrout> yeah i don't think its changed catbus1
<catbus1> ok
<magicaltrout> lpSummit: you still there?
<gQuigs> is there a juju 1.25 daily PPA?
<coreycb> pragsmike, https://jujucharms.com/u/james-page/openstack-on-lxd
<pragsmike> thanks
<cargill> what would make juju debug-hooks <unit> <hook> not actually do what https://jujucharms.com/docs/stable/developer-debugging#the-'debug-log'-command promises? It just lands me in a clean tmux session, no JUJU_HOOK_NAME or anything set
<jose> cargill: that's intended, and you will get a new tmux window as soon as the hook fires up
<cargill> jose: oh, thanks, now that I reread the docs I can see that being described clearly
<jose> :) np!
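A sketch of that flow (application and option names hypothetical): attach first, then trigger the hook from a second terminal.

    juju debug-hooks myapp/0                     # lands in tmux and waits
    # from another terminal, fire config-changed:
    juju set-config myapp some-option=newvalue   # 2.0-beta syntax; `juju set` on 1.x
    # a new tmux window then opens in the debug session with the hook context set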
<kk> https://gist.github.com/anonymous/ed1d1878b8de26ce43e8b73a59c0a602
<perrito666> kk: tx
<cargill> is there an example how to use charmhelpers.core.services.helpers.TemplateCallback?
#juju 2016-09-14
<venom3> Hello. I have a question about juju 2.0 public-address. We have defined in maas 6 network spaces. Does exist a way (like a constraint) to bind Juju's public-address to one of these spaces?
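No answer came in the channel, but for reference, juju 2.0 with MAAS can steer machines and endpoints toward spaces; whether that pins the public-address is another matter (space and application names hypothetical):

    juju deploy myapp --constraints spaces=public-space   # machine must have a NIC in the space
    juju deploy myapp --bind "website=public-space"       # bind one endpoint to a space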
<pragsmike> conjure-up worked well to spin up an OpenStack on local LXD, so all processes are on one physical box.
<pragsmike> is it possible to add-machine some other physical box, so I can put compute/ceph-osd on that?
<pragsmike> I know it's possible with the manual provider to add some machine that you can ssh to
<pragsmike> so the question is, could that same technique work even with some other provider (like local lxd)?
<petevg> cory_fu: is there an issue w/ the new review queue and tests? I don't get nice little clicky links with results for the last two revisions of the Zookeeper charm: https://review.jujucharms.com/reviews/5?revision=19 (Also, two of the latest set of tests seem to be stuck in "pending")
<kjackal> cory_fu kwmonroe petevg: Going to publish to bigdata-dev the latest kafka build. Any objections? (Will include build for xenial, fixes from pete on interfaces, make openjdk optional)
<magicalt1out> dont break things kjackal !
<kjackal> magicalt1out: do not take the fun out of it!
<petevg> Break things! Have fun! It's dev, so it's all good.
<magicalt1out> you always break things kjackal :(
<petevg> He can actually blame half the things on me, this time around.
<petevg> kjackal: you should submit it to the new review queue
<kjackal> But everything is already reviewed!
<kjackal> We just never pushed to bigdata-dev
<kjackal> magicalt1out: just get a new car to go with the palm tree!
<magicalt1out> i may
<magicalt1out> although it wont fit in my check in luggage
<cargill> when using the postgresql layer, why would it not create the <db>-relation-* hooks?
<cargill> oh, nevermind, a typo in layer.yaml
<lpSummit> cargill: - i was about to ask what your layer.yaml contents was
<lpSummit> cargill: also o/ greets again. How are your adventures with juju going?
<magicaltrout> yeah yeah lpSummit whatever.... ;)
<lpSummit> magicaltrout: dont hate the charmer, hate the game
<magicaltrout> hehe
<magicaltrout> nice talk
<lpSummit> man, i skipped so much info though
<lpSummit> i wanted to go deeper into actions
<lpSummit> like when is it appropriate, why you would write one, etc.
<magicaltrout> well you should have scheduled a proper talk then eh? :)
<lpSummit> yeah, but there were community members that had talks, and *that* is way more exciting than hearing me bleat on about juju primitives
<magicaltrout> hehe
<lpSummit> i want to hear *them* bleat on about juju primitives and super neat things they are doing with them
<magicaltrout> http://mesos.apache.org/documentation/latest/cni/ light reading for Mesos networks.... :)
<lpSummit> magicaltrout: actually, i was looking into CNI. We abstracted SDN for this reason in the k8s bundles
<lpSummit> have you touched on CNI so far? i don't think its complete from what i'm hearing in the k8s developer chat over on slack... but there are quite a few staging deployments on top of it.
<magicaltrout> nope i'm a complete SDN/CNI newb
<lpSummit> ok :)
<magicaltrout> lpSummit: http://52.53.182.98:8080/ui/#/apps
<lpSummit> magicaltrout: i think i broke it
<magicaltrout> http://52.53.182.98:5050/#/
<magicaltrout> there's a recurring theme
<smgoller-> Hey all, I'm trying to deploy openstack on a MAAS 2.0 cluster using juju 2.0-beta15. I'm using the openstack lxd bundle from https://jujucharms.com/u/openstack-charmers-next/openstack-lxd but I seem to have a more basic problem. The LXD environment never fully comes up on deployed hosts because it can't download images.
<smgoller-> I'm seeing errors like: "juju.provisioner broker.go:97 incomplete DNS config found, discovering host's DNS config"
<smgoller-> the host itself can resolve things like cloud-images.ubuntu.com just fine.
<alexisb> smgoller-, to start you are going to want the beta18 that is in the ppa
<smgoller-> ppa:juju/devel?
<alexisb> smgoller-, yes
<smgoller-> for sure, doing that now. Thanks!
<elopio> lutostag: https://github.com/ubuntu/snappy-playpen/pull/238
<lpSummit> mbruzek: https://github.com/juju-solutions/kubernetes/pull/19/files
<smgoller-> alexisb, got a couple other logs of interest now:
<smgoller-> 2016-09-14 20:53:01 ERROR juju.worker.proxyupdater proxyupdater.go:160 lxdbr0 has no ipv4 or ipv6 subnet enabled
<smgoller-> It looks like your lxdbr0 has not yet been configured. Please configure it via:
<smgoller->         sudo dpkg-reconfigure -p medium lxd
<smgoller-> this is on one of the machines i'm trying to deploy to
<smgoller-> still getting the incomplete DNS config entries as well
<smgoller-> ok, so the incomplete DNS config is not so bad
<smgoller-> it's just pulling from the hosts resolv.conf which is fine
<smgoller-> neutron-gateway/0        error        idle        0        10.118.25.4            hook failed: "config-changed"
<smgoller-> from juju status
<smgoller-> hm. i'll let it keep going, because a lot is still happening
<pragsmike> It does turn out to be possible to add manually-provisioned "ssh" machines to a model in local lxd controller.
<pragsmike> With a controller that looks like this: default  local       lxd/localhost  2.0-beta18
<pragsmike> I was able to add a machine like this: juju add-machine ssh:ubuntu@10.124.44.246
<pragsmike> as long as it's reachable from the lxdbr0 bridge, it does work, and apps can be deployed to it
 * pragsmike is happy
<coreycb> valeech, https://bugs.launchpad.net/charms/+source/openstack-dashboard
<cargill> I've got some of my juju units into a weird state and I'm trying to drop the model, but running destroy-model default does nothing apart from unselecting it
<cargill> this is the only thing I can see in the controller's debug log: ERROR juju.worker.dependency "is-responsible-flag" manifold worker returned unexpected error: lease manager stopped
#juju 2016-09-15
<viswesn> Hi all, I'm getting an error running 'juju bootstrap lxd-test localhost' on Ubuntu 16.04
<viswesn> Error msg is "Unable to get LXD image for ubuntu-xenial: The requested image couldn't be found"
<viswesn> I followed the doc link
<viswesn> https://jujucharms.com/docs/stable/getting-started
<viswesn> Is there any one to help me on juju bootstrap issue?
<viswesn> #juju :  Need help on juju bootstrap
<viswesn> @all: Is there any one who can help me on juju bootstrap
<thumper> hey
<thumper> which version of juju?
<thumper> also, are you on a closed network?
<viswesn> juju version 2.0
<viswesn> No
<viswesn> cannot start bootstrap instance: unable to get LXD image for ubuntu-xenial: The requested image couldn't be found.
<thumper> what do you get if you just go "lxc launch ubuntu:x"
<viswesn> It is retrieving the image now
<viswesn> $> lxc launch ubuntu:xenial
<thumper> no idea why it didn't work before
<thumper> juju should be able to use it now
 * thumper done for the day
<thumper> good luck
<viswesn> @all I'm getting a different error now on running juju bootstrap
<viswesn> ERROR cmd supercommand.go:458 failed to bootstrap model: Juju cannot bootstrap because no tools are available for your model. You may want to use the 'agent-metadata-url' configuration setting to specify the tools location.
<viswesn> How to resolve this issue
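Two hedged routes past the 'no tools are available' error on the 2.0 betas (the mirror URL is hypothetical; often the simplest fix is upgrading the client so published agent tools match its version):

    juju bootstrap lxd-test localhost --upload-tools   # build/upload agent binaries from the client
    # or point at a mirror of agent tools:
    juju bootstrap lxd-test localhost --config agent-metadata-url=https://example.com/tools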
<viswesn> @all Can I get help on Juju bootstrap issue
<viswesn> juju version 2.0-beta15-xenial-amd64
<viswesn> @all any one here to help me on juju bootstrap issue?
<viswesn> I am seeing error during juju bootstrap
<elspru> can I deploy an openCL program on juju?
<geetha> Hi, when I run `juju attach` command to upload a resource, it's giving me an error: http://paste.ubuntu.com/23181757/. Can anyone please help me on this?
<lazyPower1> geetha: whats the output of `juju version` ?
<geetha> lazyPower1: output of `juju version`: 2.0-beta15-xenial-s390x
<lazyPower1> geetha:  there have been a lot of improvements to resources in the later 2.0 betas. Can you try upgrading to juju 2.0-beta18 and giving it another go?
<geetha> lazyPower1: ok, I'll try with juju 2.0-beta18
<geetha> lazyPower1: thank you
<lazyPower1> no problem
<geetha> Hi, I'm getting this error: http://paste.ubuntu.com/23182940/
<lazyPower1> geetha: - is that still on beta-15?
<lazyPower1> geetha: that was bugged and fixed in the later betas according to this bug: https://bugs.launchpad.net/juju/+bug/1591387
<mup> Bug #1591387: juju controller stuck in infinite loop during teardown - "lease manager stopped" errors <2.0> <destroy-controller> <juju:Fix Released by fwereade> <https://launchpad.net/bugs/1591387>
<geetha> no, on beta-18.
<geetha> It's displaying juju status message as: resolver loop error
<Siva_> I am using Juju 2.0 beta version 18
<Siva_> I am trying to juju bootstrap and I am seeing the following error
<Siva_> root@juju-api-client:~# juju bootstrap juniper-juju-controller junipermaas --to juju-controller.maas --show-log
    11:10:25 INFO  juju.cmd supercommand.go:63 running juju [2.0-beta18 gc go1.6.2]
    11:10:25 INFO  cmd cmd.go:141 Adding contents of "/root/.local/share/juju/ssh/juju_id_rsa.pub" to authorized-keys
    11:10:25 INFO  cmd cmd.go:141 Adding contents of "/root/.ssh/id_rsa.pub" to authorized-keys
    11:10:30 ERROR cmd supercommand.go:458 Get 
<Siva_> Any idea what does this mean?
<Siva_> I am able to ping 192.168.1.2
<Siva_> I am able to ping 192.168.1.2 which is the MAAS container
<madhu> Hello all, we are using juju 2.0-beta18-xenial-amd64 version
<madhu> When we are trying to use the juju bootstrap command to connect to the MAAS container, it's giving us the below error
<Siva_> any help is much appreciated
<madhu> 11:10:30 ERROR cmd supercommand.go:458 Get http://192.168.1.2/MAAS/api/1.0/version/: invalid proxy address "http://[fe80::1%eth0]:13128": parse http://[fe80::1%eth0]:13128: invalid URL escape "%et"
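The parse failure is the %eth0 zone identifier in the IPv6 proxy address; a literal % is not valid in a URL (RFC 6874 requires the zone's % itself to be percent-encoded). A sketch of the usual escapes:

    unset http_proxy https_proxy    # simplest: bootstrap without the broken proxy setting
    # or percent-encode the zone id:
    export http_proxy='http://[fe80::1%25eth0]:13128'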
<cclarke> I have MAAS/juju installed on a physical server and I am able to use juju to bootstrap and deploy charms to different nodes. Issue is all docs for Ubuntu 14 say to use quantum-gateway, but that is end-of-life and points to neutron-gateway instead. But it stays blocked and the error is: (config-changed) Missing relations: messaging
<cclarke>  Is this the correct channel to look for assistance deploying openstack with MAAS/juju and does anyone have any ideas or has seen this before?
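The 'Missing relations: messaging' block on neutron-gateway normally clears once it is related to a message broker; a sketch with the application names the openstack bundles commonly use:

    juju add-relation neutron-gateway rabbitmq-server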
<cmars> is there an example working charm that uses layer:docker I can look at?
<lazyPower1> stokachu: thanks for the turnaround on layer-nginx :)  It cleaned up nicely
<lazyPower1> cmars: our kubernetes charms, i wrote a nice tutoral over on insights
<cmars> lazyPower1, link?
<lazyPower1> cmars: http://dasroot.net/posts/2016-08-03-layer-docker-deep-dive/
<lazyPower1> or
<lazyPower1> https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/layers/kubernetes
<cmars> lazyPower1, thanks
<lazyPower1> cmars: I was giving this some thought during the summit, there are probably some very straight forward use cases that we could condense that code down quite a bit as well. feedback/bugs welcome.
<lazyPower1> eg: framework it so you generate a lot of that boilerplate from a compose.yaml, if your end goal is just to spin up the container formation defined there and not relate to anything; but i dont know that we want to encourage that behavior in our ecosystem... as relations are a good chunk of the reason juju rocks our socks
<jhobbs> did set-model-config go away in beta18?
<lazyPower1> jhobbs: is just model-config now
<lazyPower1> jhobbs: they dropped the set- in front of it.  so juju model-config enable-os-upgrades=true  would be the current syntax
<jhobbs> ok thanks lazyPower1
<jhobbs> i guess i missed the memo on that one
<lazyPower1> i did too, dont feel bad :)
<magicaltrout> afternoon
 * magicaltrout has checked into a hotel 1 block away... what a  waste of time
<smgoller-> what's the best bundle off jujucharms for deploying openstack mitaka on maas-2.0? I'm currently trying out this one: https://jujucharms.com/u/openstack-charmers-next/openstack-base-xenial-mitaka
<smgoller-> oh yeah, and xenial. :)
#juju 2016-09-16
<madhu> Hey, I am using juju 2.0beta18. I have a lxc container on which I have installed maas and which has two users, ubuntu and maas. Maas detected a physical machine which I had pxe booted. When I deploy that physical machine using maas, it gets deployed. But the deploy process does not copy the ssh key of the maas user; it copies the ssh key of the ubuntu user. Any reason for such behavior?
<madhu> On the newly deployed physical machine, I want to create vms and deploy them through maas. The commissioning of those vms fails with the error "Failed to login to virsh console". I am suspecting that it could be because of keys. Does anyone have an idea about this issue?
<hoenir> Is 1 GB the lowest amount of RAM juju expects when provisioning a machine? Can the RAM be expressed in MB?
<hoenir> anyone?
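No reply came, but for reference: constraints take explicit units, so memory can be expressed in megabytes; whether a provider can actually supply that little is up to the cloud (application name hypothetical):

    juju deploy myapp --constraints mem=512M
    juju add-machine --constraints mem=768M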
<jamespage> marcoceppi, charmhelpers 0.9.1 add_source has broken support for UCA and proposed pockets
<jamespage> marcoceppi, https://code.launchpad.net/~james-page/charm-helpers/fix-ca-sources/+merge/305953
<kjackal> Hello Juju World
<natefinch> easy review anyone? +12 -12 https://github.com/juju/version/pull/2
<natefinch> oops, hang on, there was a problem when I merged
<natefinch> ok, try again, easy review anyone? +12 -12 https://github.com/juju/version/pull/2
<natefinch> oops wrong channel
* lazyPower changed the topic of #juju to: Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 beta-18 release notes: https://jujucharms.com/docs/devel/temp-release-notes
<natefinch> marcoceppi: do we have a way to grep all bundles in the store?  I'm wondering how often people actually use cpu-power
<rick_h_> natefinch: will have to script it with the api/client.
<rick_h_> natefinch: https://api.jujucharms.com/v4/wiki-simple/archive/bundle.yaml
<rick_h_> basically have to search for all bundles, build the yaml file link, and then fetch it/grep against it
<natefinch> rick_h_: are there docs for the API?
<rick_h_> natefinch: https://api.jujucharms.com/v4/search?type=bundle&limit=300
<rick_h_> natefinch: yes, sec. https://github.com/juju/charmstore/tree/v4/docs
<rick_h_> uiteam, where's the link to the GH project for the blues client for making api calls to the charmstore? /me is blind today
<rick_h_> natefinch: ^ might be of use as well if I could find it
<rick_h_> natefinch: ah here we go https://github.com/juju/theblues
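A sketch of the scripted approach rick_h_ outlines, assuming the v4 search response is JSON whose Results entries carry an Id like cs:bundle/wiki-simple-4 (response shape assumed from the docs linked above; requires curl and jq):

    curl -s 'https://api.jujucharms.com/v4/search?type=bundle&limit=300' |
      jq -r '.Results[].Id' |
      while read -r id; do
        # strip the cs: scheme to build the archive URL, then grep the bundle
        curl -s "https://api.jujucharms.com/v4/${id#cs:}/archive/bundle.yaml" |
          grep -q 'cpu-power' && echo "$id uses cpu-power"
      done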
<plars> I'm not sure if this is a snapd problem or a juju problem, but I'm deploying a xenial instance right now and it keeps getting stuck. It appears it's still stuck in cloud-init, and appears to be running 'snap booted'
<mbruzek> Hi #juju we need some help about availability zones. What is the constraint to deploy to a specific availability zone in Juju 2.0?
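For the record, in juju 2.0 the availability zone is a placement directive rather than a constraint (zone name hypothetical):

    juju deploy myapp --to zone=us-east-1a
    juju add-machine zone=us-east-1a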
<coreycb> hello, I have a juju/maas 2.0 deployment to maas with some services in lxd containers, but containers on one machine can't access containers on another.  should juju be using a bridge other than lxdbr0?
<kjackal> hi mbruzek I got a question on jujuresources, can you spare a few minutes?
<lazyPower1> coreycb: so, i dont know how automagic this needs to be, but i do know that if you configure the fan, and have your lxd containers spun up on that fan bridge, you will get the networking you seek
<lazyPower1> dimitern would have more info on that, and i thought he said that juju does this automatically now... but I may have misunderstood
<lazyPower1> kjackal: mbruzek is out today on site @ a customer engagement. I can lend a hand, whats up?
<kjackal> yeap thanks lazyPower1
<kjackal> so here is what, I have this PR from cory_fu that adds optional juju resources to apache-kafka charm https://github.com/juju-solutions/layer-apache-kafka/pull/13
<coreycb> lazyPower1, thanks
<kjackal> lazyPower1: The idea is that we will be doing a resource_get and if that fails we would fall back to what we had before (fetch the kafka binary from our own S3 bucket)
<kjackal> lazyPower1: that seems to work on a juju 2.0 deployment but on juju 1.25 I get this exception: http://pastebin.ubuntu.com/23187371/
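The fallback pattern kjackal describes, sketched with the shell hook tool rather than the python helper (resource name and fallback URL hypothetical):

    # resource-get prints the local path of an attached resource, or fails
    if path="$(resource-get kafka 2>/dev/null)" && [ -s "$path" ]; then
        cp "$path" /opt/kafka.tgz
    else
        wget -O /opt/kafka.tgz "$FALLBACK_URL"   # old behaviour: fetch from the bucket
    fi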
<admcleod_> kjackal: et al, any idea why your slaves might think the resourcemanager daemon is running on the namenode unit?
<admcleod_> kjackal: also have you added HA capability to yarn?
<kjackal> admcleod_: that is on the bigtop slaves?
<admcleod_> yes
<kjackal> admcleod_: HA is under review
<admcleod_> kjackal: well..
<admcleod_> kjackal: this is why my teraosrt isnt working: https://pastebin.canonical.com/165768/
<admcleod_> is it under review but enabled in the xenial charms?
<kjackal> where do you get the charms from?
<kjackal> admcleod_: do you have  bundle you can share?
<admcleod_> hang on
<lazyPower1> kjackal: - feedback left
<lazyPower1> kjackal: - and yes, when we implement resources, we're locking the charms to 2.0+ only. resources in metadata cause 1.25 to panic on deployment
<admcleod_> kjackal: https://github.com/ubuntu-openstack/bopenstack/blob/master/juju-bundles/spark-hadoop-processing.yaml
<kjackal> so lazyPower1 if we move on with that PR we will be breaking the juju 1.25 charm deployments, right?
<lazyPower1> correct, unless you catch that NotImplementedError charm-tools is surfacing for you
<lazyPower1> updated my review comments to reflect that
<kjackal> thank you
<kjackal> lazyPower1: ^
<lazyPower1> np
<kjackal> admcleod_: I see you get the charms from bigdata-dev where HA should be available
<kjackal> admcleod_: checking!
<kjackal> admcleod_: going to deploy the bundle, it will take me some time
<admcleod_> the relations arent quite right
<admcleod_> anyway, -1 from me for resourcemanager HA review :P
<kjackal> admcleod_: you can make this official here: https://canonical.leankit.com/Boards/View/112674289/123088605
<admcleod_> kjackal: which other unit is it supposed to be running on?
<kjackal> admcleod_: what is "it"?
<admcleod_> kjackal: resourcemanger
<kjackal> admcleod_: the resourcemamager should only be running on the hadoop-resourcemanager
<kjackal> it is the namenode that is supposed to be running in two of the three namenode units
<admcleod_> kjackal: what?
<kjackal> the resource manager should only be running on the hadoop-resource manager
<kjackal> the namenode (primary and standby) should be running on the namenode units
<kjackal> admcleod_: ^
<admcleod_> kjackal: lets start again. are you making resourcemanager HA?
<kjackal> admcleod_: no. Only the namenode
<admcleod_> kjackal: so why did my resourcemanager go into standby mode as if it's been configured for HA?
<magicaltrout> kjackal: did you break things?
<magicaltrout> bloody hacker
<kjackal> magicaltrout: again, I do not break things, I obliterate!
<magicaltrout> lol
<admcleod_> kjackal: something is definitely not right, because: "Cannot run -getServiceState when ResourceManager HA is not enabled"
<kjackal> admcleod_: I do not know why this happened, it might be that bigtop decides that you cannot have an HDFS in HA without a RM in HA?
<kjackal> admcleod_: Ok, give me some time to finish with the deployment
<admcleod_> kjackal: theres no bigtop setting like that as far as i remember
<admcleod_> kjackal: also no one really uses resourcemanager HA anyway
<kjackal> but even so, without a zookeeper we should not be messing with HA anywhere
<kjackal> strange, I am on it
<admcleod_> kjackal: ok. im probably going to eow in about 2 hours
<kjackal> admcleod_: ok, deployed
<kjackal> admcleod_: lets see the tera sort now
<magicaltrout> *kaboom*
<rick_h_> soooooo, is that like bad?
<lazypower> What'd i miss?
<kjackal> magicaltrout: what was that? Coconut falling?
<lazypower> i'm fiddling with the znc charm getting my bouncer stood back up in a locked model
<lazypower> word to the wise, ensure you're in the correct model context when you execute bundletester, OR, ensure you've locked the models you care about
<lazypower> lest you find yourself in my position of redeploying your apps... (from a mistake made nearly 2 weeks ago)
<kjackal> admcleod_: teragen finished ok. Terasort takes a bit more because it runs on a single machine with 1GB of ram
<kjackal> admcleod_: also I am running everything on canonistack
<magicaltrout> a mistake lazypower,  from you? surely not?! :)
<lazypower> magicaltrout: story of my life :)
<admcleod_> kjackal:
<admcleod_> kjackal: but why is the resourcemanager going into standby
<kjackal> admcleod_: No, idea! And I cannot reproduce the issue
<kjackal> do you think you could give me access to the machine where the slave is?
<kjackal> I would be interested in the yarn-site.xml on the RM and slave
<kjackal> admcleod_: ^
<kjackal> admcleod_: are you colocating any services?
<jcastro> balloons: hey so, now that client is unblocked from running on fedora
<jcastro> will I get a new client in --edge at some point or do I need to wait for release?
<balloons> jcastro, you should already have it. Edge gets a daily build about midday
<balloons> jcastro, I'd be very curious to know if that unblocks arch linux ;p
<balloons> or perhaps fedora
<jcastro> ok going to give it a shot
<jcastro> balloons: works so far!
<magicaltrout> jcastro: your form is hidden for non canonical people
<jcastro> fixing, thanks!
<jcastro> try now pls
<magicaltrout> much improved
<jcastro> that picture is fresh yo
<magicaltrout> luckily i'm not in it
 * marcoceppi fires up photoshop to get magicaltrout into the photo
<magicaltrout> booo
<valeech> lazyPower: how do you lock a model to protect it?
<lazypower> valeech: juju disable-command all "model locked down"
<lazypower> valeech: juju disable-command --help  and juju enable-command --help respectively
#juju 2016-09-17
<valeech> Thanks lazyPower!!
<magicaltrout> can you specify a non standard ssh port for manual provisioning?
<magicaltrout> sleeping channel
<aisrael> Asleep and/or jetlagged ;)
<magicaltrout> disgraceful
<aisrael> Wait. Hungover is also an option.
<magicaltrout> that one is more acceptable
<magicaltrout> i had this cool idea for a demo next week where I have charmed up mesos master and agents and spun up a bunch of nodes
<magicaltrout> then I have spun up a bunch of ubuntu containers with sshd installed
<magicaltrout> so I was hoping to do a manual provision of juju stuff onto a bunch of these containers
<magicaltrout> but they have to map their ports to a random port number
<magicaltrout> so I can't get juju to ssh in
<magicaltrout> sad times!
<aisrael> You could manually ssh using the juju key
<aisrael> assuming you know the random port
<magicaltrout> explain further aisrael !
<magicaltrout> i do know the port
<aisrael> so, you could do something like: ssh -p 1234 -i ~/.local/share/juju/ssh/juju_id_rsa ubuntu@1.2.3.4
<aisrael> Which is also handy for debugging a machine that's been created but the agent is hanging while allocating.
<magicaltrout> ah yeah, but that doesn't solve my problem of bootstrapping a manual provider does it?
<aisrael> Oh, I thought you'd already have a manual/local controller
<magicaltrout> nope thats why i'm sad
<magicaltrout> you can append the port to the ip address when bootstrapping
<magicaltrout> er
<magicaltrout> can't
<aisrael> Here's a thought:
<aisrael> hm, that might not work.
<aisrael> You can add a manual machine to an lxd controller, but that still doesn't get you past Juju not letting you specify the ssh port
<aisrael> add a manual machine via ssh
<magicaltrout> yeah
<magicaltrout> i suspect i've hit a hard stop
<aisrael> You might be able to do some iptables wizardry to do it: http://askubuntu.com/questions/28516/redirect-requests-to-my-external-ip-port-to-a-different-external-ip-port
<magicaltrout> yeah i wondered about that but in a multi container single host setup you couldn't just map everything to 22
<magicaltrout> https://github.com/juju/juju/blob/master/environs/manual/addresses.go#L17
<magicaltrout> probably need to extend that or something
<aisrael> unless you could give each container a publicly routable address
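A sketch of that combination: give the host one alias address per container and DNAT each alias's port 22 inward, so juju's fixed port-22 assumption still holds (all addresses hypothetical):

    ip addr add 192.168.1.201/24 dev eth0   # alias IP fronting container A
    iptables -t nat -A PREROUTING -d 192.168.1.201 -p tcp --dport 22 \
      -j DNAT --to-destination 10.0.3.101:22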
<aisrael> It's definitely a valid feature request to file. I fear the potential workarounds start getting hairy.
<magicaltrout> hehe
<aisrael> Off to bed for me. I gotta repack and head back to the airport tomorrow.
<magicaltrout> hehe see ya aisrael
<KpuCko> how do i get into the hooks, to learn how they work, when they're not in a failed or error state? if a hook is in an error state i can call debug-hooks, but when hooks complete successfully i cannot do debug-hooks because juju tells me the hook is not in an error state
<lazypower> KpuCko: Try changing configuration or juju upgrade-charm. If the context you need is with relations, you can remove/re-add the relationship
<KpuCko> hmm i think i dont understand you.. my case is, im playing with the wordpress trusty charm; everything about the hooks goes flawlessly, but when i try to open the blog im getting 502 Bad Gateway
<KpuCko> so how to debug this?
<eject_ck> Hi all
<eject_ck> trying to bootstrap juju 2 with maas 2 on 16.04; juju bootstrap hangs at "Fetching" the gui. I've added --debug and captured a log. Please advise why it hangs? https://0bin.link/paste/hehQjxRA#sv0PImHpJHuOv1-FA3yUmgMpN2ycDe+1CS3HEg5oNl8
<pragsmike> greets
#juju 2016-09-18
<holster> hello everyone :)
<junaidali> Hi everyone, I'm having an issue with juju 2.0 on xenial. LXDs in my setup are getting IPs from the lxdbr0 bridge while they should get IPs from the OpenStack management network (br-ens configured).
<junaidali> any idea what might be the issue?
<PCdude> junaidali: I have tried openstack on xenial too, MAAS is out of beta, juju not yet. I would suggest using trusty with maas 1.9.4 and juju 1.25.6. I got it to work that way
<PCdude> junaidali: when JUJU is out of beta u can try again and switch from then on
<PCdude> junaidali: out of curiosity do u use dedicated machines or something like VMware ESXI for the nodes and controller?
<junaidali> Usually we use dedicated machines but this issue is hitting on my test setup (comprised of VMs).
<junaidali> juju 1.25 with maas 1.9 is working fine for me too.
<junaidali> I was trying to test with maas and juju 2.0 on xenial.
<PCdude> https://www.reddit.com/r/homelab/comments/4p3k9j/trouble_getting_lxc_networking_up_containers_not/
<PCdude> look at that link, I had some trouble with LXC containers getting their IP addresses and it turned out to be ESXI that was the problem
<junaidali> previously i was able to successfully do a deployment with maas 2.0rc1 but then i upgraded my setup to 2.1.0 alpha3 and this started happening
<junaidali> thanks PCdude. let me check the link
<PCdude> ok, but come on, using an alpha is almost asking for a problem :)
<PCdude> junaidali: strange question, but on ur dedicated machines do u have MAAS running in a HA setup?
<junaidali> yes but with maas 1.9.
<junaidali> the test setup that i'm currently trying out is a single controller setup
<PCdude> junaidali: ah ok cool! I want to do it too on 1.9, how did u do it? what tutorial did u use?
<PCdude> is that link helpful at all?
<PCdude> basically turn on ur promiscuous mode in  ESXI for the networks that are used by openstack, MAAS and JUJU
<junaidali> i'm not using VMware. I created VMs with virt-manager to go for a quick test.
<PCdude> junaidali: ah ok got it, well honestly why do u wanna use an alpha version? I mean that is so early in the process
<junaidali> yeah, you're right. To move to the new version smoothly from the older ones, I'm trying it out early. Also wanted to check for hiccups. I guess stable is not so far off now.
<junaidali> regarding the tutorial, I'm following maas.io and jujucharms.com
#juju 2017-09-11
<stokachu> fallenour_: how'd it go?
<fallenour_> still trying to install
<fallenour_> keep getting a failed error; just realized though that the error was lying, and the install was fine
<fallenour_> so the past ~6 installs were for nothing, all wasted because of false error reporting
<fallenour_> needless to say, juju is bringing me bad juju, and making me one sad panda
<fallenour_> yogurt made my night better though
<fallenour_> also question, Im installing a standard conjure-up deployment, why is it that ceph-osd 2 and 3 (the standard ones +1) aren't seeing the ceph-mon, even though the system automatically installs it?
<fallenour_> @stokachu
<stokachu> fallenour_: they should be related and once the deployment is complete they would see each other
<stokachu> fallenour_:also we are testing `sudo snap refresh conjure-up --candidate`, you may have a better experience there
<fallenour_> @stokachu yea its cleaning up
<fallenour_> you guys should really consider using my project as a large scale guinea pig
<fallenour_> as insane as it drives me, its a great project, and a great idea, and I use openstack to provide a lot of services for free to a lot of major efforts
<stokachu> what project?
<fallenour_> Project PANDA, short for platform accessibility and development acceleration
<stokachu> is it public?
<fallenour_> Its designed to provide free infrastructure and services to nonprofits, research institutes, universities, and OSS developers
<fallenour_> yeap, very public
<stokachu> whats the project url?
<fallenour_> pending these last hurdles, I expect to take it fully public and live by the end of this month
<fallenour_> 100 Gbps pipe, and about 10 racks of gear to start with
<fallenour_> 3 supercomputers (small beowulf clusters)
<fallenour_> damn, neutron gateway errored out
<stokachu> fallenour_:yea neutron needs access to a bridge device
<fallenour_> @stokachu giving me a "config-changed " error
<stokachu> so depending on your server you can set a range of bridges for neutron to search through
<fallenour_> it should have one
<fallenour_> right now the test stack is about 15 servers
<fallenour_> does it configure a bridge when building via conjure-up?
<fallenour_> it deploys the system, I figured it did by default
<fallenour_> via eth1....
<fallenour_> o.o
<fallenour_> 8O
<stokachu> fallenour_:https://jujucharms.com/neutron-gateway/237 look at port configuration
<stokachu> fallenour_:not openstack on maas, that's up to you
<stokachu> you can configure the port in the configure section for neutron gateway
<fallenour_> oh my dear lawd! https://jujucharms.com/neutron-gateway/234
<fallenour_> Holy geebus Batman! you even provided me the config links via the status command output
<fallenour_> its not letting me ssh in?
<fallenour_> isnt it supposed to inherit my maas ssh key?
<fallenour_> hmmm
<fallenour_> @stokachu Hey just an fyi, one of the systems we are working on is an equivalent to Redhat Satellite for Openstack environments; didn't know if that's a system that exists already
<fallenour_> but its major helpful for us, especially because we have limited bandwidth at the current location
<fallenour_> not seeing two of my storage nodes in my volumes, can anyone provide any insight as to why?
<fallenour_> I have ceph-mon and ceph-osd installed
<fallenour_> ceph-mon shows 5/5 of cluster
<bdx> rick_h: just to recap, I was haggling the collectd charm to get the prometheus-node-exporter, I just ended up going with subordinate that relates to prometheus on the scrape interface https://jujucharms.com/u/jamesbeedy/prometheus-node-exporter/1
<bdx> and just dropping collectd
<tlyng> I'm trying to bootstrap a controller on azure and it's stuck at "Contacting Juju controller at <internal-ip> to verify accessibility...". The controller VM get assign an internal IP and an external IP. I've tried connecting to the external IP using SSH and that is successful. How is juju supposed to connect to an internal IP at azure which is not routable from here? Apart from that I noticed the API server is listening on port 17070 or so
<tlyng> Is there a list of ports that need to be open (apart from ssh) in firewall to actually manage to use juju on public clouds?
<tlyng> I deployed Kubernetes using JAAS, but when trying to download the kubectl configuration from kubernetes-master/0 I get an authentication error. My private ssh key is not recognized by that node (juju scp kubernetes-master/0:config ~/.kube/config), how am I supposed to get hold of this configuration?
<mhilton> tlyng: have you tried running juju add-ssh-key to add your key to the model?
<tlyng> mhilton: no, didn't even know that command existed (I'm new :-)) I will try it. Should I do it before I deploy the model or is it possible to do it after it's up and running?
<mhilton> tlyng: I think it should work after the model is up and running.
<rogpeppe1> tlyng: what mhilton says
<mhilton> tlyng: if your key is in github or launchpad then it can also be imported with juju import-ssh-key which might be slightly easier.
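The two commands mhilton mentions, sketched (username hypothetical):

    juju add-ssh-key "$(cat ~/.ssh/id_rsa.pub)"
    juju import-ssh-key gh:myusername   # or lp:myusername for launchpad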
<tlyng> mhilton: Ok thanks, I'll try. Another quick question if you have time / knowledge about it. I've tried bootstrapping my own controller at Azure, but after it has launched the bootstrap agent it tries to connect to the VM's internal IP address - which is not routable.
<tlyng> Contacting Juju controller at 192.168.16.4 to verify accessibility... ERROR unable to contact api server after 1 attempts: try was stopped
<mhilton> tlyng, azure can be slow to bootstrap, it sometimes has to wait a while before it gets an external IP address. What version of juju have you got (output of "juju version")
<tlyng> 2.2.3-sierra-amd64
<tlyng> (the one provided by homebrew on mac)
<tlyng> it connects using the external IP to bootstrap (after it first tries to use the internal IP). But when it's waiting for the controller it only tries the internal IP, and it deletes everything when it fails.
<mhilton> tlyng: OK that's interesting. I'll see if I see the same behaviour.
<tlyng> Sadly I have to use Azure, at least for the time being. It looks like Microsoft has created this stuff called "security" and told the authorities about it. So if you're in the financial industry only "azure" is certified/approved by the government.
<mhilton> tlyng, I've just successfully bootstrapped an Azure controller with that juju version. I think your bootstrapping problem was that it couldn't talk to port 17070 on the external address. Even though it only said it was contacting the internal address it will be contacting all of them at the same time.
<mhilton> tlyng: port 17070 is the only port you'll need access to for juju to communicate with the controller.
<tlyng> mhilton: Ok, thank you. From now on I will use my phone as modem. Did I mention I hate firewalls?
<mhilton> tlyng: The easist way to run models on Azure is through JAAS
<rick_h> tlyng: I'm testing it as well and seeing some issues. I'm working to collect a bootstrap with --debug for filing a bug. At the moment seems Juju can't get the agents needed. :/
<rick_h> tlyng: I'll bug balloons once it finishes timing out and get a bug report going
<tlyng> What about persistent volume claims after deploying to Azure, do they work out of the box?
<tlyng> Currently it says "Pending" and it's been like that for some time.
<urulama> mhilton, rick_h: fyi, i was able to bootstrap on azure/westeurope with 2.2.3 ... might be region thing
<ejat> hi .. can we use --constraints with bundle ?
<fallenour_> !ceph
<rick_h> ejat: you stick the constraints on the machine or application in the bundle.
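To expand on that: each application (or machine) entry in a bundle can carry a constraints line, and constraints can also be inspected or changed from the CLI after deploy (application name hypothetical):

    juju set-constraints myapp mem=4G cores=2
    juju get-constraints myapp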
<BarDweller> Nice work on adding flush =) my vagrant provisioning is a little more chatty now =) nice to see it slowly put the world together =)
<rick_h> urulama: mhilton tlyng so I did get azure to bootstrap but it literally took 13min to get there.
<tlyng> rick_h: yes, it's slow. I'm still unable to use azure storage and the loadbalancer stuff (for services). It doesn't look like the canonical distribution of kubernetes actually configures cloud-providers, which I would say is broken.
<tlyng> using ceph on cloud providers ain't that wise
<tlyng> (due to fault domains, data locality etc)
<stokachu> BarDweller: nice!
<SimonKLB> tlyng: juju currently doesnt enable charms to do anything cloud native such as setting up policies etc but with conjure-up there is some initial work on bootstrapping the kubernetes cluster (only on aws for now)
<fallenour_> I figured out my problem is that the keyrings are in the wrong place, which is why it never got configured, but I need to know the cluster id so I can move the keyring to the appropriate directory @stokachu
<SimonKLB> tlyng: see https://github.com/conjure-up/spells/pull/79
<stokachu> coreycb: jamespage ^ do you know anything about this wrt ceph-mon/ceph-osd?
<stokachu> tlyng: yea azure is next on our list to enable their storage/load balancer
<fallenour_> the ceph god himself o.o
<stokachu> :)
<fallenour_> I am not worthy o.o
<fallenour_> by the way, for future reference guides on Ceph-OSD, please see: http://docs.ceph.com/docs/jewel/rados/operations/add-or-rm-osds/ http://docs.ceph.com/docs/master/radosgw/admin/ http://docs.ceph.com/docs/master/radosgw/config-ref/ https://fatmin.com/2015/08/13/ceph-simple-ceph-pool-commands-for-beginners/
<fallenour_> http://docs.ceph.com/docs/dumpling/rados/operations/pools/ http://ceph.com/geen-categorie/how-data-is-stored-in-ceph-cluster/
<fallenour_> all very good resources
<jamespage> fallenour_: give me the 101 on what you are trying todo
<fallenour_> @jamespage hey james, https://github.com/fallenour/panda this is what I am working towards.
<fallenour_> right now my struggle is getting the environment stable so I can go live, which is proving to be difficult
<fallenour_> right now I think the issue is related to ceph-osd and ceph-mon, specifically with the /var/lib/ceph/mds directories missing on all ceph-mon  and ceph-osd systems
<fallenour_> the error output is  "No block devices detected using current configuration" and "  auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring: (2) No such file or directory"
<fallenour_> my direct thoughts are that since the directory /var/lib/ceph/mds was never created, and the /etc/ceph/ceph.conf file points to it for a keyring for mds, that is the reason it's not working or responding to ceph-osd commands, which would make sense as to why it thinks there isn't any ceph block storage
<fallenour_> what confuses me the most though is in my horizon, I see 3 of the 5 storage devices.
<fallenour_> my guess is that because nova-compute is still working, the host can still see the storage, even though it may not be able to use it.
<fallenour_> Ive identified that /var/lib/ceph/mon has its keyring, and both upgrade keyrings are present, but im not sure what keyring to copy to fix it, or if thats even the issue.
<fallenour_> one thing I did realize is that the keyring in /var/lib/ceph/mon/$cluster_id is the same keyring across multiple systems, but im not sure what uses it.
<jamespage> fallenour_: thats generated by ceph during the cluster bootstrap process I think
<fallenour_> yea I found the bootstrap scripts for that
<fallenour_> @jamespage Do you think the error might be that the mds directories were never created? And if so, why didn't the yaml build script build those?
<jamespage> the mds directory being missing should not be a problem - that's related to ceph-fs
<fallenour_> mmk
<jamespage> where are you trying to run the ceph commands from?
<fallenour_> from juju
<jamespage> example?
<jamespage> which unit?
<fallenour_> juju run --unit ceph-osd/3 .....
<fallenour_> and ive tried runnign it on multiple systems
<fallenour_> do I need to run it specifically against the radosgw system?
<fallenour_> also, one thing I just noticed, my $cluster_id variable is empty on the ceph nodes. If im not mistaken, that variable is used to define where keyrings are located
<fallenour_> @jamespage how can I verify that the variable is populated properly, aside from juju run --unit ceph-osd/3 'echo "$cluster_id"'
<jamespage> fallenour_: that's internal to ceph, not an environment variable
<jamespage> the cluster_id is by default 'ceph'
<fallenour_> @jamespage ahh I see. So what happens when ceph needs that variable or something outside of ceph needs the variable info in order to locate the keyring?
<jamespage> that all gets passed via command line options
<jamespage> fwiw the ceph-osd units don't get admin keyrings so you won't be able to run commands from those units
<jamespage> only from the ceph-mon units, where "sudo ceph -s" should just work
<fallenour_> @jamespage my /etc/ceph/ceph.conf file still reads at /var/lib/ceph/mon/$cluster-id in the config file
<fallenour_> @jamespage I made all of my units a ceph-osd / ceph-mon pair. I didnt know if they all needed ceph mon, so I made 5 and 5 respectively
<jamespage> fallenour_: ok so that's actually broken atm - you can't co-locate the charms (there is a bug open)
<fallenour_> @jamespage ooooh...
<jamespage> fallenour_: normally we deploy three ceph-mon units in LXD containers, and ceph-osd directly on the hardware
<fallenour_> @jamespage Yea thats what I did
<fallenour_> @jamespage I put all 5 on hardware, and 5 in lxd containers, ceph-osd hardware, ceph-mon lxd
<fallenour_> @jamespage I figured it was done that way for a reason, so I copied the design for the other 2 additional storage units
<jamespage> oh well that should work just fine - what does "sudo ceph -s" on a ceph-mon unit do?
<jamespage> but 5 is overkill - 3 is fine
<jamespage> there is no horizontal scale-out feature for ceph-mon - it's control only
<jamespage> have to drop for a bit to go find my room at the PTG
<fallenour_> I didnt want to have to scale it later, I figured 5 for 500 PB of storage would be good
<fallenour_> Output:
    cluster fc36db4c-9693-11e7-aae7-00163e20bc2c
     health HEALTH_ERR
            196 pgs are stuck inactive for more than 300 seconds
            196 pgs stuck inactive
            196 pgs stuck unclean
            no osds
     monmap e2: 5 mons at {juju-950b53-0-lxd-0=10.0.0.51:6789/0,juju-950b53-1-lxd-0=10.0.0.10:6789/0,juju-950b53-2-lxd-0=10.0.0.37:6789/0,juju-950b53-3-lxd-0=10.0.0.252:6789/0,juju-950b53-4-lxd-0=10.0.0.40:
<Dweller_> when I'm running --edge, I had to install lxd with snap before installing conjure-up, and I don't think I have a conjure-up.lxc command anymore..
<stokachu> Dweller_: yea all that went away now you just use the snap lxd
<Dweller_> but when I do lxc list .. it doesn't show the juju containers?
<Dweller_> mebbe I need to set a config somewhere
<stokachu> does juju status show anything?
<Dweller_> it does.. until I first do lxc list, and then it breaks
<Dweller_> (rebuilding the vm at the mo, will be able to confirm when it comes back up)
<fallenour_> @jamespage just an fyi, power is becoming unstable, hurricane is coming towards georgia, so if i dont respond, that is why.
<Dweller_> ok.. vm is back up.. juju status shows all containers as running and active
<Dweller_> is there any config I should set for lxc to list the containers using lxc ?
<magicaltrout> hello ya'll completely off the wall question here, but here goes
<magicaltrout> if I wanted to run K8S in LXC/LXD on Centos, my understanding is that conjure-up makes some changes to the profile to allow it on Ubuntu? Can i manually make those changes on Centos or is that out of the question?
<stokachu> magicaltrout: i think the changes made are related to app armor
<stokachu> some of the changes
<stokachu> the others are just enabling privileged etc
<magicaltrout> hrmmm
<magicaltrout> k
<stokachu> magicaltrout: https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/steps/lxd-profile.yaml
<stokachu> thats what our profile looks like
<stokachu> the lxc.aa_profile is apparmor
<stokachu> not sure if devices apply either
<magicaltrout> okay cool thanks stokachu i'll have a prod
<stokachu> magicaltrout: np
<Dweller_> confirmed.. I'm probably doing something wrong.. I install lxd with snap install lxd .. then I install conjure-up with  snap install conjure-up --classic --edge  then I bring up kube with  conjure-up kubernetes-core localhost  ..  after which   juju status  shows the stuff up and running..  I then do lxc list  and it mumbles about generating a client certificate, then lists no containers at all, and after that.. juju status
<Dweller_> just hangs and doesnt work anymore
<stokachu> hmm
<stokachu> Dweller_: what does `which lxc` show
<Dweller_>  /usr/bin/lxc
<stokachu> try /snap/bin/lxc list
<stokachu> im curious
<Dweller_> that works
<Dweller_> ok.. so stock ubuntu has an lxc that isn't the one that juju used =) no probs.. I can work with that
<stokachu> Dweller_: yea conjure-up uses the snap lxd for its deployments
<stokachu> though i thought the environment's PATH had /snap/bin listed first
<Dweller_> for me, /snap/bin is at the end
<stokachu> ok, it may just be something i have to document for now
<Dweller_> I wonder if I can apt uninstall the old lxc
<stokachu> until snap lxd becomes the default
<stokachu> Dweller_: yea if you aren't using the deb installed one
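Two ways out of the /usr/bin/lxc vs /snap/bin/lxc clash discussed here (a sketch):

    export PATH=/snap/bin:$PATH            # make the snap client win lookup
    sudo apt-get purge -y lxd lxd-client   # or drop the deb client entirely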
<Dweller_> added apt-get purge -y lxd lxd-client to my vagrantfile =) that should sort it
<Dweller_> hmm.. my last 2 bringups have got stuck at the 'setting relation' bit
<Dweller_> interesting.. I need to confirm this.. but I _think_ if I apt-get purge lxd-client before I do conjure-up lxd / conjure-up kubernetes-core .. then conjure-up kubernetes-core hangs at the 'Setting relation ...' phase (never gets to 'Waiting for deployment to settle' log output)
<Dweller_> which really kinda makes you wonder whats going on there, and could it be using the 'wrong' lxc atm ?
<Dweller_> hmm.. I mean snap install lxd / conjure-up kubernetes-core ;p
#juju 2017-09-12
<stokachu> Dweller_: Yea if you can confirm, I'd like to know
<Dweller_> results so far, with lxd/lxd-client purged before snap install lxd / snap install conjure-up / conjure-up kubernetes-core localhost == hang, apt-purge of just lxd-client before same == hang..  apt-purge line commented out = no hang..  currently testing with apt-purge lxd-client moved to after the conjure-up kubernetes-core is complete..
<stokachu> Dweller_: this is on edge channel too?
<Dweller_> aye.. snap install conjure-up --classic --edge
<stokachu> Is this through vagrant as well?
<stokachu> Dweller_: and virtual box?
<Dweller_> still running tests tho.. so bear with me.. it takes quite a while for each conjure-up to complete (especially if it hangs ;p)
<stokachu> Dweller_: np, if it's vagrant and you have a vagrantfile to share I can try to reproduce here as well
<Dweller_> and contradicting everything I've said so far.. I've got one terminal that I think should have hung, that's currently on the 00_deploy-done step (that's traditionally very quiet for quite a while)
<stokachu> Dweller_: you can do a watch juju status in a another terminal to make sure things are progressing
<Dweller_> Yeah I did when it had hung.. it said everything was up and active
<stokachu> Ah
<Dweller_> the "Running step: 00_deploy-done" takes quite the long while
<Dweller_> I wonder if you could have each sub container output something as they complete their init step or whatever during the 00_deploy_done phase
<Dweller_> like .. I started this current vm off back at 14mins past, and it's now 52mins past .. ;p
<Dweller_> it took about 7 mins to do the apt-get upgrade, snap installs, etc.. (I dump date out before launching the conjure-up kubernetes-core localhost)
<Dweller_> at 21mins past it started the conjure-up kubernetes-core localhost
<Dweller_> 54mins past now, and still on "Running step: 00_deploy-done"
<Dweller_> ooh.. I dont think its going to complete...
<Dweller_> from juju status
<Dweller_> Machine  State    DNS            Inst id        Series  AZ  Message
<Dweller_> 0        down                    pending        xenial      failed to setup authentication: cannot set API password for machine 0: cannot set password of machine 0: read tcp 10.193.182.114:33282->10.193.182.114:37017: i/o timeout
<Dweller_> 1        started  10.193.182.34  juju-46dac4-1  xenial      Running
<Dweller_> 2        pending                 pending        xenial
<Dweller_> 3        pending                 pending        xenial
<Dweller_> i/o timeout would sound bad..
<Dweller_> I'll have another look tomorrow
<Dweller_> k8s-local: [error] cannot add relation "flannel:cni kubernetes-master:cni": read tcp 10.4.78.199:32894->10.4.78.199:37017: i/o timeout
<Dweller_> (diff attempt)
<Dweller_> meh.. off to sleep.. will try again tomoz.. feels like sommat isnt waiting long enough anymore
<erik_lonroth_> rick_h: I've sent you an email with details on our problems connecting to AWS. We have found it relates to a session token which is provided for time-limited API keys. The environment variable used is "AWS_SESSION_TOKEN" and according to AWS documentation site it is used to sign API requests. I've commented in the bug report: https://bugs.launchpad.net/juju/+bug/1714022
<mup> Bug #1714022: Juju failed to run on aws with authentication failed <juju:New> <https://launchpad.net/bugs/1714022>
<erik_lonroth_> rick_h: I'm looking into the code of juju and can't yet see if support for AWS_SESSION_TOKEN/KEY is in juju yet. It prevents us from using AWS in our current federated setup. How would you suggest we proceed?
<rick_h> erik_lonroth_: this is what I need to get to the core folks. I don't believe it's supported and so I want to get engineers looking into what it'll take to support. We need to.
<rick_h> jam: ^
<rick_h> erik_lonroth_: ty for updating the bug with details.
<erik_lonroth_> We can start looking into this also from our end, however, we just need to double check that there is indeed a need for this before we start up a pull request.
<erik_lonroth_> The documentation on AWS is "pretty" clear on how the extra signing of API calls needs to happen, but we are not that experienced developers of juju so we don't want to fuck your code up and waste your time fixing our code. =/
<erik_lonroth_> *spelling is great*
<Dweller_> [error] cannot add relation "flannel:cni kubernetes-master:cni": read tcp 10.4.78.199:32894->10.4.78.199:37017: i/o timeout
<Dweller_>   :(
<stokachu> Dweller_: hmm
<stokachu> Dweller_: are you running this with vagrant+virtualbox?
<Dweller_> yep =)
<Dweller_> still on edge, but hadn't seen timeouts until yesterday evening
<Dweller_> the system is the twin xeon rig, 24g of ram, and the cpu's are barely breaking a sweat running the vagrant box.. plenty of ram left, no swap in use, and the only disk is an ssd
<Dweller_> hmm.. mebbe networking issues?
<Dweller_> [error] cannot get resource metadata from the charm store: Get https://api.jujucharms.com/charmstore/v5/~containers/easyrsa-15/meta/resources: dial tcp: lookup api.jujucharms.com on 10.157.242.1:53: read udp 10.157.242.193:41730->10.157.242.1:53: i/o timeout
<Zic> hello here: one of my kubernetes-master is blocked in "maintenance" in juju status, but in fact it's ok
<Zic> (was after a juju upgrade-charm kubernetes-master)
<Zic> I have two other master in this K8s cluster which are "idle"
<Zic> "Starting the Kubernetes master services." is the message for "maintenance"
<Zic> I'm searching for something like "juju resolved kubernetes-master/0" but resolved does not work on status "maintenance"
<kjackal> Hi Zic
<kjackal> Zic: is this deployment one that got updated?
<kjackal> Zic: can you show me the output of this: juju run --unit kubernetes-master/0 'charms.reactive --format=yaml get_states'
<Zic> yeah, it was from 1.6.2 to 1.7.4
<Zic> but I found something new/weird : all my nodes are in NotReady in kubectl get nodes :/
<Zic> and their logs say:
<Zic> kubelet_node_status.go:106] Unable to register node "ig1-k8s-01" with API server: the server has asked for the client to provide credentials (post nodes)
<Zic> seems they lost their certificate
<Zic> http://paste.ubuntu.com/25521019/ <= kjackal
<kjackal_> Zic: I do not see the kube-api-server relation between the master and the workers
<kjackal_> Zic: is this a production cluster? If not I would remove and re-add the kubernetes-master <-> kubernetes-worker relations
<kjackal_> Zic: from 1.7 we did harden the auth mechanism between master-workers and admins
<kjackal_> that means you should also grab the updated config file from the master: juju scp kubernetes-master/0:config ~/.kube/
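(kjackal_'s suggestion as commands; a sketch only, with the two endpoint names, kube-control and kube-api-endpoint, taken from later in this log.)

    juju remove-relation kubernetes-master:kube-control kubernetes-worker:kube-control
    juju add-relation kubernetes-master:kube-control kubernetes-worker:kube-control
    juju add-relation kubernetes-master:kube-api-endpoint kubernetes-worker:kube-api-endpoint
    juju scp kubernetes-master/0:config ~/.kube/config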
<Dweller_> hmm.. can't bring up vagrant with conjure up kubernetes core since yesterday.. seems something is now taking too long, causing a timeout that leads to failure
<stokachu> Dweller_: can you pastebin your vagrantfile?
<stokachu> i can try to reproduce
<Dweller_> sure.. give me a mo..
<Dweller_> it's in github, but our enterprise one .. which wont help you.. lemme paste it =)
<Dweller_> stokachu: https://pastebin.com/DvuSEWvs
<stokachu> is bento/ubuntu-16.04 like an official image?
<Dweller_> aye.. it's ubuntu-16.04
<Dweller_> http://chef.github.io/bento/
<Dweller_> but it was all working pretty well until yesterday evening.. but since then I've not been able to bring up a vm
<stokachu> Dweller_: ok ill try with this bento project but you should use https://app.vagrantup.com/ubuntu instead
<Dweller_> I just tried one reverted to the non edge version (you'll need to edit the vagrant file if you want to try edge.. looks like I lost the --edge too in my hackery)
<stokachu> those are the ones we build
<Dweller_> sure.. will try that one now..
<stokachu> ok i need to install virtualbox and vagrant it'll be a few minutes
<Dweller_> no probs.. I've got it attempting with ubuntu/xenial64 at the mo
<Zic> kjackal_: was a dev cluster yup, testing if upgrading directly from 1.6.2 to 1.7.4 was possible
<Dweller_> although given it used to work, and stuff is timing out.. I'm wondering if networking somewhere is causing me grief
<Zic> kjackal_: I think I missed something, I just read the upgrade page from kubernetes.io about Ubuntu/Juju and noted the specific release notes of every release
<stokachu> Dweller_: yea im running it now
<Zic> do you have a link to read all upgrade-step between CDK versions or do I need to find old articles on Ubuntu Insights?
<Zic> I know that those blogposts sometimes have more extra steps than https://kubernetes.io/docs/getting-started-guides/ubuntu/upgrades/
<kjackal_> Zic: Here is the announcement we had when 1.7 came out: https://insights.ubuntu.com/2017/07/07/kubernetes-1-7-on-ubuntu/
<kjackal_> Looking for the upgrade and release doc
<Zic> yup, just found it, just saw the auth/cert part, don't know what happened with my charms' relations though :(
<Dweller_> stokachu: so the official box uses 'ubuntu' as the user, not 'vagrant'; the file will need changes for that..
<Zic> I also just noted that my Juju GUI is down on https://<host>:17070/
<stokachu> Dweller_: yea just ran into that :)
<Zic> < HTTP/1.1 400 Bad Request
<Zic> * no chunk, no close, no size. Assume close to signal end
<stokachu> Dweller_: it's deploying now
<Zic> (fixed for the Juju GUI, part of the full URI to access it was missing)
<Zic> kjackal_: can you confirm the juju remove-relation / juju add-relation ? I fear to do something nasty :)
<Zic> even if it's non-prod, I prefer to try to solve properly in case if it happens one day in prod
<Dweller_> ==> k8s-local: error: cannot perform the following tasks:
<Dweller_> ==> k8s-local: - Download snap "core" (2844) from channel "stable" (Get https://068ed04f23.site.internapcdn.net/download-snap/99T7MUlRhtI3U0QFgl5mXXESAiSwt776_2844.snap?t=2017-09-12T16:00:00Z&h=4d4b35a936b3094a2dcbba86a2d9063de4b843ac: dial tcp: lookup 068ed04f23.site.internapcdn.net on 192.168.1.1:53: server misbehaving)
<kjackal_> Zic: I am not sure why that deployment went into this state. However, I see a state missing indicating this relation is not in place
<kjackal_> so...
<Zic> yup, and it seems logic so that master does not recognize its nodes
<stokachu> Dweller_: what's your bridge defined as?
<stokachu> Dweller_: i picked lxdbr0 for mine
<Dweller_> enp2s0 .. the adapter with access to my lan
<stokachu> Dweller_: hmm ok so that's one thing i did differently
<stokachu> Dweller_: picked a virtual bridge
<stokachu> Dweller_: oh, are you running out of space on the device?
<stokachu> Dweller_: because that just happened to me
<Dweller_> 184g available
<stokachu> 9.7G for / is not enough
<stokachu> what does `df -h` show
<Dweller_> oh.. you mean inside the vm ?
<stokachu> yea
<Dweller_>  /dev/sda1       9.7G  1.3G  8.4G  14% /
<stokachu> yea you're going to run out of space
<stokachu> that's one issue
<stokachu> that's probably why it seemed like it was hanging
<Dweller_> I wonder how big the bento image was ;p
<Dweller_> still 8.3g available on / tho.. how much does conjure-up kubernetes-core need?
<stokachu> well i was at 00_deploy-done and all 9.7G was used
<stokachu> i dont know how much exactly but i would do at least a 40G /
<Dweller_> https://github.com/sprotheroe/vagrant-disksize  =)
<stokachu> Dweller_: cool!
<stokachu> Dweller_: yea that gave me a 40GB partition
<stokachu> re-running now
<Dweller_> same
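(A sketch of that fix, assuming the vagrant-disksize plugin linked above and the ubuntu/xenial64 box stokachu recommended earlier.)

    vagrant plugin install vagrant-disksize
    # then, in the Vagrantfile:
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/xenial64"
      config.disksize.size = "40GB"    # room for conjure-up kubernetes-core
    end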
<kjackal_> Zic: did it work?
<Zic> kjackal_: oh oops, I asked you about the exact juju remove-relation/add-relation commands since I fear doing something nasty; usually I only use the Juju GUI to prepare new deployments
<stokachu> Dweller_: did you get past that snap download error?
<Dweller_> not this time..
<Dweller_> ==> k8s-local: error: cannot install "conjure-up": Get https://api.snapcraft.io/api/v1/snaps/details/core?channel=stable&fields=anon_download_url%2Carchitecture%2Cchannel%2Cdownload_sha3_384%2Csummary%2Cdescription%2Cdeltas%2Cbinary_filesize%2Cdownload_url%2Cepoch%2Cicon_url%2Clast_updated%2Cpackage_name%2Cprices%2Cpublisher%2Cratings_average%2Crevision%2Cscreenshot_urls%2Csnap_id%2Csupport_url%2Ccontact%2Ctitle%2Ccontent%2Cversion%2Corigin%2Cdeveloper_id%2Cprivate%2Cconfinement%2Cchannel_maps_list: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
<stokachu> Dweller_: im thinking you got some network issues happening
<kjackal_> Zic: removing and readding relations is safe, should always work
<Dweller_> yarp.. gonna add some changes to my lan & see if I cant route that box out via a different network provider
<Dweller_> (I have 3 exits from my lan to the internet, loadbalanced using mwan3 on openwrt)
<stokachu> Dweller_: ok, it all came up for me
<stokachu> Dweller_: oh i also added `apt-get remove -qyf lxd lxd-client`
<stokachu> so that it doesn't get confused there
<Dweller_> yeah.. thats what broke me yesterday evening.. although I'm suspecting now thats when my network went nuts, rather than it being the cause
<stokachu> Dweller_: ack
<Zic> kjackal_: "kube-control relation removed between kubernetes-worker and kubernetes-master."
<Zic> is it good?
<kjackal_> Zic: sure add it back
<kjackal_> there is also the relation kube-api-endpoint missing between master and worker
<kjackal_> Zic: ^
<kjackal_> actually if you do a juju add-relation kubernetes-master kubernetes-worker you will see the two relations that need to be added between master and worker
<Zic> kjackal_: yup, I did that, now the master is in "blocked / Waiting for workers"
<Zic> I'm waiting a bit :)
<Zic> but nothing much is happening now in "juju debug-log"
<kjackal_> Zic: did you add the kube-control relation between master and worker? This message is shown when the relation is not there: https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L420
<Zic> kjackal_: yup, I re-ran the get_states command after: http://paste.ubuntu.com/25521483/
<kjackal_> Zic: did you also add the kube-api-endpoint relation? https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L432
<kjackal_> Zic: workers need to know where the api-server is
<Zic> # juju add-relation kubernetes-worker kube-api-endpoint
<Zic> ERROR application "kube-api-endpoint" not found (not found)
<Zic> hmm?
<kjackal_> wait Zic this is a relation between master and workers
<kjackal_> should be something like: juju add-relation kubernetes-master:kube-api-endpoint kubernetes-worker:kube-api-endpoint
<kjackal_> Zic: ^
<Zic> thanks, it works for now
<kjackal_> Awesome
<kjackal_> I have to go Zic
<Zic> thanks anyway for your help kjackal_ ;)
<kjackal_> Should be back in a few hours, sorry
<Fallenour> @jamespage hey Im back, survived hurricane minus modest house damage.
<Fallenour> I didnt see any of the last replies though after you went to your hotel. Did you get the ceph output?
<Fallenour> can anyone help me out with a ceph issue?
<stormmore> o/ juju world... hey rick_h I have started to build my first charm :)
<rick_h> stormmore: woot woot
<rick_h> stormmore: whatcha building?
<stormmore> rick_h: sub-ord charm for etckeeper
<Fallenour> @rick_h @stormmore hopefully not ceph, talk about a rough start
<stormmore> Fallenour: thankfully not, I know that charm well though
<stormmore> Fallenour: well well-ish even
<Fallenour> @stormmore its giving me nothing short of pure hell. Built an entire openstack charm, got it working EXCEPT for ceph LOL
<rick_h> Fallenour: yay on openstack but :/ on ceph. Sorry, not an expert there.
<Fallenour> @stormmore so pretty much, I have a fully blown, rocking hard openstack, it just has the memory of a goldfish LOL
<stormmore> Fallenour: why build when there is a good bundle already available?
<Fallenour> @stormmore I did use that build, but I also built a separate one with hyperscalability. The issue with the current trusty charm built in for newton is that it's not ocata, and it's trusty, at least last I checked. As for the openstack-base from charmers, it's only set up for 3, and I need at least 5 ceph-mon boxes to handle all the future storage adds
<rick_h> stormmore: cool on the sub for etckeeper.
<stormmore> Fallenour: ah Trusty enough said! As far as being able to scale, I haven't seen any issue with add-unit from the main bundle
<Fallenour> @stormmore @rick_h Current upgrading is being a huge pain in the ass though, and its holding up my project really bad
<Fallenour> @stormmore The concern isnt the add to, the issue is drive management requests once you get over several Petabytes. My end objective is currently over 500PB for the current project build as is. 3 Ceph-Mon systems cant manage that many requests.
<stormmore> Fallenour: oh I get that but you should just have to manipulate the bundle yaml
<stormmore> put in the number of units you want and the placements
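(An illustrative fragment of the kind of bundle edit stormmore means; the unit count and placement directives here are made up for the example, not a tested layout.)

    ceph-mon:
      charm: cs:ceph-mon
      num_units: 5
      to: ["lxd:0", "lxd:1", "lxd:2", "lxd:3", "lxd:4"]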
<Fallenour> @stormmore yea the overall idea is fix the issues at small scale, then implement to yaml, push to Juju / Salt power combo, and scale like a mad man
<stormmore> I have been playing with OS on LXD on my poor laptop
<Fallenour> @stormmore the current issue is if I cant get 5 ceph-osd / ceph-mon nodes to work, theres no way im gonna get 5000 to play nice
<stormmore> Fallenour: oh I understand the problem, just don't really see it as a charm problem, more of a charm bundle
<Fallenour> @stormmore Well it kind of is. The issue is if the basic charms dont deploy correctly, which they didnt, I cant trust them to work at scale
<catbus> Fallenour: how do they not deploy correctly? What's the symptom?
 * stormmore is still wondering why Trusty not Xenial
<Fallenour> @catbus ceph is down; horizon shows 3 of the 5 nodes but is missing the two larger nodes.
<Fallenour> @catbus what makes it more confusing is it shows 5/5 and 5/5 respectively
<zeestrat> @Fallenour is there a copy of the bundle you're deploying?
<Fallenour> @catbus checks of the /etc/ceph/ceph.conf show all configs configured properly, but health outputs show pgs not building properly, all 196 pgs are stuck for some reason, and its not responding to ceph pg repair commands
<Fallenour> @zeestrat yea, its the standard that ships with juju, so like millions of copies
<Fallenour> @zeestrat the only difference is after build failure, I simply ran the upgrade commands and let it upgrade to xenial.
<Fallenour> @zeestrat my first thought was that it needed to upgrade in order to work, so I pushed upgrade across all, but still didnt resolve issue.
<Fallenour> @catbus @zerestrat @stormmore I can dump ceph health, ceph tree, and ceph -s if that helps
<catbus> Fallenour: I am no ceph expert but I can look up if I see something suspicious. Can you also show juju status?
<catbus> in a pastebin.ubuntu.com
<Fallenour> @catbus @zeestrat @stormmore @rick_h Here is the full paste, includes: juju status, ceph health, ceph health detail, ceph ph dump_stuck unclean
<Fallenour> http://pastebin.ubuntu.com/25522179/
<Fallenour> my thoughts on a nuclear option: ceph osd force-create-pg <pgid>
<stormmore> Fallenour: have you looked at the juju logs for the osds? /var/log/juju/unit-*?
<catbus> Fallenour: You see "No block devices detected using current configuration" for ceph-osd units in juju status?
<Fallenour> @catbus yea, I saw that. I saw this too: juju run --unit ceph-mon/1 'ceph osd stat'      osdmap e35: 0 osds: 0 up, 0 in             flags sortbitwise,require_jewel_osds
<stormmore> Fallenour: what catbus said is what I am getting at. I suspect that it can't create the PGs cause it doesn't know what block devices to use
<catbus> Fallenour: thanks for using conjure-up. conjure-up with openstack-base is using this bundle https://api.jujucharms.com/charmstore/v5/openstack-base/archive/bundle.yaml, which specifies /dev/sdb for ceph-osd to use.
<Fallenour> @catbus @stormmore No, not yet. Power issues kept me from going much further.
<Fallenour> @catbus yea, conjure up is pretty amazing. I used the current one that ships with the up to date conjure-up. Thats my big question
<catbus> Fallenour: you can modify the ceph-osd config when you select openstack with novaKVM.
<Fallenour> @catbus why are 3 of the 5 working? but yet none of them working?
<catbus> after you select the spell, it will present all the service configurations, you can select ceph-osd and find the 'osd-devices' to modify accordingly to your environment.
<Fallenour> @catbus Thats exactly what I did, I did the configure, added 2 machines, added OSD to bare metal, mon to lxd
<Fallenour> @catbus thats exactly what I did
<Fallenour> @catbus other than that, and assigning the specific machines, I let it auto deploy the rest of the way
<Fallenour> @catbus @stormmore do you think I should rebuild?
<catbus> Fallenour: what are the block devices you set for ceph-osd units?
<Fallenour> @catbus Dell R610 and R710s.
<catbus> Fallenour: how many hard disk drives are in these servers?
<catbus> each.
<Fallenour> @catbus 8, 8, 2, 2, 2. The 2, 2, and 2 are the 610s, and their drives are raided. Should I break the raid?
<zeestrat> And what are the device names for the block devices?
<Fallenour> @zeestrat systems were autonamed with maas, so like fresh-llama and hot-seal (not my idea I swear).
<catbus> Fallenour: you can have raid where the drive count is >=2, but for the servers with only 2 drives, break the raid so you can have 1 drive for ceph-osd to use.
<Fallenour> @catbus, soo, break the raid, which blows away OS, let it rebuild those three, or just rerun the charm install?
<zeestrat> @Fallenour, those sound like the hostnames of the servers. the 'osd-devices' needs a list of block devices
<catbus> Fallenour: i'd start over since the OS will be gone.
<Fallenour> @catbus #sadpanda :'( its gonna take a while with my current connection at 6/1
<stormmore> catbus: couldn't Fallenour use a loopback device at least for a PoC before going to the extreme of a rebuild?
<stormmore> or resize the OS drive and make a data partition?
<catbus> stormmore: re-build usually takes ~1 hour for me, so I usually prefer to start over. It's up to Fallenour.
<stormmore> those might be faster than a rebuild to confirm the setup but I would definitely do a rebuild
<stormmore> catbus: oh I am with you there ;-)
<catbus> Fallenour: and you can put '/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh' for 'osd-devices' in ceph-osd configuration. It will take whatever is available on the host.
<Fallenour> @catbus If the rebuild is a confirmed kill, Ill take that for sure. Whatever it takes to get it up and running, I have a lot of people depending on me to get this system live, I cant keep pushing back anymore
<catbus> Fallenour: if you need professional service with SLA, Canonical provides that, you know? ;)
<Fallenour> @catbus I really wish I could, but Im already eating massive losses by giving everything away for free now as is
<catbus> Fallenour: https://jujucharms.com/ceph-osd/245 for ceph-osd charm configuration reference.
<stormmore> just out of curiosity, does anyone have an example of a charm with an action unit test using action_get?
<Dweller_> stokachu: network gremlins must like me again.. back up and running, and managed to use the registry action on the worker node to add the registry running in the host, and have done a simple test of a custom image pushed / built to that docker from outside the vm, and then deployment/exposing of that image via kubectl from outside.. and it worked (more or less.. I need to sort out my image a little more, but it did send back my ctx root not found error page from liberty running in the container)
<Dweller_> so at this point.. I now have a kube-in-a-box =)
<stokachu> Dweller_: \o/
<stokachu> Dweller_: if possible please post a blog post or something on your setup
<stokachu> so we can share that out
<Dweller_> yep, there'll be one via the gameontext blog site .. I'll ping you a preview if possible to give you a chance to point out if I've been idiotic somewhere ;p
<Dweller_> gameontext.org is a microservice based text adventure that a bunch of us contribute to as a way to learn about microservices, and related technologies.. each room in the game is its own microservice, and the core is built with a 12 factor approach.. it's currently running in IBM Bluemix, and we regularly post bits about it =)
<Dweller_> in the not too distant future, we're planning to move the core from docker-compose to k8s, and part of that story includes figuring out a sensible local development story for that
<Fallenour> @Dweller_ Once I get all this running, id be more than happy to host your project for free. its right up our alley of projects we support
<Dweller_> and having kube in a box that we can target for docker builds & k8s deploys, fulfils that pretty neatly
<Dweller_> Thanks for the offer, but we're not paying for it at the mo either =) (Full disclosure, I work for IBM, in the Cloud Native Java team, so this kinda stuff is important to us too)
<Dweller_> At some point, I should really look into what it would take to add bluemix to the set of juju supported clouds too =)
<Fallenour> @Dweller_  LOL! Well thats definitely a good benefit to have XD. You guys ever think of cleaning out your DCs to upgrade, give me a call. We will decom and drag the gear off for free. A lot better than paying 100000k+ a quarter for decom.
<stokachu> Dweller_: really happy conjure-up helps with that
<stokachu> and by extension juju
<Dweller_> Yeah.. problem with IBM is its huge.. like 400k employees worldwide huge, which means I have virtually no visibility over that stuff.. I work remote out of Montreal CA, my team is based in UK, Austin, and New York =)
<Dweller_> stokachu: its a nice solution.. I like it.. it has a lot of scope for expansion and experimentation.. which makes it much better suited to my goals, than say, minikube
<Fallenour> @dweller_ yea I saw a similar issue with AT&T and Verizon. Nobody knows where the gear comes or goes from or to, just that it does haha
<stokachu> Dweller_: awesome, feature requests welcomed too if you think something conjure-up could do to help out
<Dweller_> aye.. when I worked out of UK we used to know a few ppl in goods inwards.. and once or twice heard about equipment being skipped that we got to salvage
<Dweller_> but that was like once or twice in 17 years
<Dweller_> that said.. I ended up with a large bag of 72pin simms which is still handy for Amiga's and stuff today
<Dweller_> stokachu: from an education perspective, is there a way to have conjure-up say what it's doing? .. I love that it can do it all for me.. but I also want to know what it did.. I can mostly figure it out now by going and looking at the stuff in github.. conjure up feels like a big macro engine for juju ;p and juju is like a swiss army knife.. where I don't totally understand what the blades are, or how many it has ;p
<stokachu> Dweller_: yea true, cory_fu  and I were kicking around an idea at one point where we basically record what the juju equivalent commands are during each step of the deployment
<Dweller_> mebbe a variant on headless where it writes out a script with the juju commands it would have run.. then you can edit & run the script
<Fallenour> @catbus @stormmore @rick_h @Dweller_ The issue is definitely my raid. If someone has the same issue as me in the future, tell them to break the raid on their Nova-Compute nodes, and make sure their dedicated storage nodes have at least 2 PDs per Span, with at least 2 Spans. Otherwise the install will fail, and the rebuild will be a sad drink of coffee
<rick_h> Fallenour: :( sucky
<rick_h> Fallenour: glad you found some root cause to attack
<rick_h> ty catbus stormmore and Dweller_ for helping out
<stormmore> not a problem rick_h
<Dweller_> aye.. fwiw, I stopped using RAID, in favour of file duplicating stuff like snapraid / drivepool etc
<stormmore> glad to when I can
<Fallenour> @rick_h yurp. on the funny note, when I pulled one of the drives out to physically check it, the sled was empty LOL, so I got a great laugh outta that one. Lesson Learned: Dont build boxes at beer:30, it will not end well XD
<rick_h> Fallenour: hah
<stormmore> Fallenour: lol
<rick_h> Fallenour: #lifelessons
<Dweller_> have over 70tb running under windows using drivebender pooling.. and about half that again using snap/flex raid on linux
<rick_h> anyone gotten ssl and haproxy playing nice? I'm trying to get the charm to proxy something with ssl termination on it
<bdx> rick_h: I've put a bit of time in there
<rick_h> bdx: I've put a ssl_key and ssl_cert and I see the unit did create a valid .pem file but the config written doesn't do any ssl config
<rick_h> bdx: I'm missing flipping some bit in the charm I'm thinking
<bdx> rick_h: https://gist.github.com/jamesbeedy/d587cbf048038fb274ef4cd55c4ee3dd
<rick_h> bdx: ah, so you setup the services yourself bummer
<bdx> yeah ...
<bdx> my way is the simplemans way
<rick_h> bdx: heh, I wanted simpler: the charm to just go "oh I see you like you some ssl here" :P
<Fallenour> @dweller_ @rick_h @catbus @stormmore @stokachu do any of you happen to know of a guide or collection of guides to publish a charm bundle? Im building a self-configuring, audit compliant cloud environment, and I want to make it available via the charm store, how do I do that?
<bdx> rick_h: you can provide that info via relation too, instead of making it static in the config
<bdx> the reverseproxy relation is difficult because of the formatting
<rick_h> Fallenour: https://jujucharms.com/docs/stable/charms-bundles is the start
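(Roughly, publishing then goes through the charm client from charm-tools; the bundle name and namespace below are placeholders.)

    charm push . cs:~fallenour/bundle/my-cloud          # upload the bundle directory
    charm release cs:~fallenour/bundle/my-cloud-0       # publish the uploaded revision
    charm grant cs:~fallenour/bundle/my-cloud everyone  # make it publicly readable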
<rick_h> bdx: yea, gotcha. K, I'll poke at it. TY for your sample config
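(For reference, the two options rick_h mentions above; whether the charm wants them base64-encoded is an assumption worth checking against its README.)

    juju config haproxy \
      ssl_cert="$(base64 -w0 server.crt)" \
      ssl_key="$(base64 -w0 server.key)"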
<Fallenour> ok here we go
<Fallenour> wish me luck. if this all goes well, im live. If not, im probably gonna cry. grown men shouldnt cry. at least not over spilt bits
<rick_h> mhilton_: interesting, that looks close to what I'm doing. I wonder what I've got off.
<rick_h> mhilton_: does the controller website charm add the service with https then I wonder?
<rick_h> mhilton_: oh hmm, that seems to be non-https setup. interesting
<xarses_> so I changed the password that is in my cloud credentials yaml file, ran juju update-credentials, how can I ensure that these are the credentials the model is using now? P.S. the credential keeps locking out and is shared with some other systems, so I can currently neither prove nor disprove that juju is the problem
<rick_h> xarses_: hmmm...juju add-unit and check the dashboard?
<rick_h> xarses_: this is something we're actively working to improve right now as it's come up with folks that need to swap credentials on running models so I admit it's kind of sucky atm
<xarses_> story of my life ...
<rick_h> xarses_: we need to get you a better life
<Fallenour> @rick_h @stokachu wait..we? Youve both said "we" and "working", are both of you part of the official juju team? o.O
<rick_h> Fallenour: yes, stokachu works on conjure-up and I work on jaas
<Fallenour> 8O
<Fallenour> so...
<rick_h> Fallenour: so we're canonical folks working around the juju community of projects
<Fallenour> if plebian is like...1
<Fallenour> and godmode is like a 10
<Fallenour> you guys are like...35?
<rick_h> hah, no. we're like 5 or 6
<xarses_> rick_h: if you are going to keep a bucket list: credential validation, always using env_vars for openstack, and supporting clouds.yaml + secrets.yaml (from openstack_cloud_config)
<xarses_> autoadd doesn't support the last
<rick_h> xarses_: interesting on the openstack bits. Can you file a bug on those and I can group them into the credential mgt discussion?
<rick_h> xarses_: no, but does normal add-credential support a file?
<rick_h> xarses_: if not that should be the right way to grab a standard file I think
<rick_h> it's kind of how the gce one works. We just accept the json file it dumps out
<xarses_> rick_h: file? ya in the juju format, sure but thats not how any of the other providers lead you to storing credentials to use against them
<rick_h> xarses_: huh? I missed that sorry
<rick_h> xarses_: you mean the secrets.yaml?
<rick_h> oh sorry, I thought you meant that openstack_cloud_config would dump a file
<xarses_> "does normal add-credential support a file" I think so, but not the clouds.yaml format
<rick_h> xarses_: gotcha
<xarses_> rick_h: they use different key names
<xarses_> id hope that auto-add would be able to scan it, but add file, I wasn't holding my breath
<Fallenour> @rick_h @stokachu @catbus @stormmore so far it looks like exact same issue, even with raids broken.
<catbus> Fallenour: can you show juju status in a pastebin.ubuntu.com?
<Fallenour> @catbus Give me a bit, it looks like its finalizing now. might take a moment.
<Fallenour> @catbus It was exact same issue as last time. Same output. This time all im gonna do is conjure-up, novakvm install, assign devices via standard configure, no extra machines, deploy all 16. If this fails, Im not insane, and something via conjure-up simply doesnt work
<Fallenour> Any ideas as to why I keep getting the neutron-gateway/0 "hook failed: config-changed" error? This time it was totally native, no changes or additions to the run of conjure-up @stokachu
<stokachu> Fallenour: you need to `juju ssh neutron-gateway/0; cd /var/log/juju; pastebinit unit-neutron-gateway-0.log`
<stokachu> i imagine it is because it can't find the interface
<stokachu> that's that whole port mapping thing i pointed you to on sunday
<Fallenour> @stokachu alright, Ill do that once the install is finished. hopefully that will be the only issue. I imagine if it is, a simple change of the interface, and a reboot will change that?
<Fallenour> @stokachu I hate to ask, but can you relink me that info? I lost everything when the hurricane hit and wiped out power :(
<stokachu> Fallenour: so you'll want to juju config neutron-gateway <key>=<value>
<stokachu> then juju resolved neutron-gateway/0
<stokachu> Fallenour: https://jujucharms.com/neutron-gateway/238 look under Port Configuration
<stokachu> specifically note:
<stokachu> If the device name is not consistent between hosts, you can specify the same
<stokachu> bridge multiple times with MAC addresses instead of interface names. The charm
<stokachu> will loop through the list and configure the first matching interface.
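(stokachu's pointers put together as commands; data-port is the option that README section describes, and the bridge:MAC pairs are placeholders.)

    juju config neutron-gateway data-port="br-ex:aa:bb:cc:dd:ee:01 br-ex:aa:bb:cc:dd:ee:02"
    juju resolved neutron-gateway/0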
<catbus> Fallenour: can you confirm 'same output' as "No block devices detected using current configuration" for ceph-osd units in juju status?
<Fallenour> @catbus yea on the previous build it was the exact same. Im doing a very generic conjure-up this time, no additional machines, no additional services
<Fallenour> @catbus so far, it looks good, but only time will tell.
<stokachu> For ceph do your machines have 2 disks?
<catbus> ok.
<xarses_> @rick_h: never created the machine, so I'm not sure but I think juju is stuck with my dead credentials
<xarses_> and now it wont destroy, because a machine is in pending state
<stormmore> does anyone know how to install a trusty charm? I am aware of the Ubuntu charm but I have only managed to get it to install Xenial so far
<xarses_> just set the series, or declare it explicitly if the charm has multiple
<xarses_> https://jujucharms.com/docs/2.2/charms-deploying
<Fallenour> @catbus @stokachu @stormmore @rick_h same exact issue, completely native install this time. What gives? Is the conjure-up instance simply not working natively? Should I use a different install bundle?
<catbus> Fallenour: what's the issue exactly? Please show error messages.
<Fallenour> @catbus hang on. I did a raid 0 thinking it would split the disks, lemme rebuild....again
<stormmore> Fallenour: I would have to see what you are putting in for the devices for ceph-osd
<stormmore> Fallenour: it still sounds like ceph-osd is not finding the drives where it is looking to me
<catbus> stormmore: juju deploy cs:trusty/ubuntu?
<stormmore> catbus: that is what I was wondering :) trying it now
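(Both forms should get a trusty unit; the second assumes the charm declares multiple series.)

    juju deploy cs:trusty/ubuntu
    juju deploy ubuntu --series trusty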
<Fallenour> @stormmore I used a raid 0 instead of simply leaving the drives as is.
<stormmore> Fallenour: then ceph needs a loopback device of some sort to act as a virtual drive
#juju 2017-09-13
<fallenour> @stormmore @rick_h @jamespage @catbus @Dweller_ @zeestrat I GOT IT! MWHAHAH! Ok so the issue is with the Dell series systems: you have to convert each drive from PD to VD as its own RAID 0, configured and initialized individually, so the Perc card creates each disk as its own VD. Once you do that, it works like a champ
<fallenour> and here I was so excited :(
<fallenour> I only see 400 of the total 3TB of storage available x..x
<fallenour> the upgrade from the previous attempt though is that all OSD devices show as active/idle, unit is ready, 1 OSD each. So thats a good sign at the very least.
<nofun> Hey all
<nofun> Could anyone help answer a question
<magicaltrout> hello folks, random design pattern question. We have a requirement to setup some services, in effect, as a base layer on all our nodes
<magicaltrout> other than manually applying them to each node, or making them subordinate to something
<magicaltrout> is there a way to do that thats nice? or not? :)
<rick_h> magicaltrout: heh we do that internally. There's a basenode charm that is actually what's deployed and then other charms are hulk-smashed to those.
<rick_h> magicaltrout: subordinate is nice way if you can work it that way
<rick_h> magicaltrout: I think one of the issues our folks had was that some of the setup needed to be done before the main charm could operate so it had to be a promise that basenode was laid down first
<stub> 'basenode' exists to configure apt repositories to our internal mirrors, which needs to happen before charms attempt to install stuff
<stub> rick_h: Some sort of a predeploy hook that lets us mess with a machine between provisioning and deploying the charm would make my life a lot easier. we could lose basenode, and all the fallout which is significant.
<stub> magicaltrout: We are currently deploying from local branches, which lets us dump stuff in $CHARMDIR/exec.d; it's probably best documented in https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/execd.py
<stub> (all reactive charms get the feature, whereas non-reactive need to explicitly make the correct charm-helpers calls)
<rick_h> stub: rgr
<fallenour> hey errbody!
<Dweller__> mornin
<tvansteenburgh> o/
<rick_h> fallenour: how'd a night's rest go? Saw you made some progress late?
<fallenour> @rick_h yea Ive got it fully up and running, now I just need to know why it only sees 400 of the 3 TB of space in horizon. Any ideas? All osds reporting as good to go.
<fallenour> Im assuming that since the system will take the first drive for OS, and the rest for storage, I should only be down roughly 3 drives of the total available 22, so 19 at 146 gb each.
<rick_h> fallenour: sorry, /me is storage ignorant. I just like building stuff that uses storage
<fallenour> @rick_h lol @stokachu @jamespage any ideas then guys?
<zeestrat> @fallenour: I suggest taking a look at ceph to get an idea of the state of your storage and then seeing if that matches up with what is presented in horizon
<fallenour> @zeestrat I really wish I could, but I dont exactly know how to do that.. :(
<fallenour> @zeestrat I know that the difference between the pure ceph-OSD node and the ones working is the nova-compute services are installed, which is why they appear in the hypervisor section. But other than that, and running drive related commands on the systems themselves, I dont know what to do next.
<jamespage> fallenour: hmm - not sure that horizon will tell you the available storage in cinder?
<jamespage> normally it tells you the ephemeral storage capacity on the compute nodes
<fallenour> @jamespage Yea that was my concern. I guess my next question, is I made the ceph nodes in order to expand my overall storage in openstack, how do I make sure that storage is available for my instances?
<fallenour> @jamespage otherwise, Ive lost over 85% of the available storage for viable use, which is bad news bears.
<jamespage> fallenour: so ceph can be used in multiple ways in openstack
<jamespage> i suspect you have it configured as a cinder backend only; with this configuration you can do boot from volume for instances + attach persistent volumes to instances which have root disks on local disk on compute hypervisors
<jamespage> fallenour: you can also configure nova to use ceph as the backend for all instance storage - the nova-compute charm supports this, but its not the 'default'
<fallenour> @jamespage right now all I have installed on the nodes specifically is ceph-osd. Did I need to install something in addition to that?
<jamespage> you have to toggle a configuration option on the nova-compute charm
<fallenour> @jamespage ahh Im guessing thats the missing link. I do believe I installed cinder-ceph, but I dont know if nova accepts that.
<fallenour> @jamespage does that mean another rebuild? Q___Q
<jamespage> fallenour: no it should be switchable
<jamespage> fallenour: https://jujucharms.com/nova-compute/#charm-config-libvirt-image-backend
<jamespage> set that option to rbd
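(As a command, that is:)

    juju config nova-compute libvirt-image-backend=rbd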
<fallenour> @jamespage Libvirt-image-backend from lvm to rbd I take it?
<fallenour> @jamespage will that also update the rados gateway and ceph osd nodes accordingly?
<jamespage> fallenour: it won't be set at all right now
<jamespage> fallenour: it should dtrt
<fallenour> @jamespage is there a specific series of commands I should issue, or do I need to just modify the /etc/nova/nova.conf config files on all the nova boxes?
<fallenour> @jamespage Another question, I would like to expand on the ceph option osd-devices. Default only includes /dev/sdb, but with systems with more than two devices, will the system do a naming of /dev/sda, /dev/sdb, /dev/sdc, etc? If so, wont the default only capture the first available drive, and ignore the others? If so I would like to configure that to grab the others accordingly
<xarses_> @rick_h: Please enjoy https://bugs.launchpad.net/juju/+bug/1716948 I sure didn't
<mup> Bug #1716948: juju controller caches credentials from bootstrap <docs> <juju:New> <https://launchpad.net/bugs/1716948>
<rick_h> xarses_: ruh roh
<xarses_> ya, I didn't enjoy yesterday
<rick_h> The Juju Show #21 in 40 minutes. Get your coffee cups ready!
<rick_h> hml: kwmonroe tvansteenburgh hatch marcoceppi bdx magicaltrout beisner and anyone else able to make the show in 20min <3
<rick_h> folks that want to chat in the show can join via https://hangouts.google.com/hangouts/_/okwrzmr46fgrvcwcokclw7yqcie
<rick_h> and those that want to watch load up https://www.youtube.com/watch?v=3658lsehjKM
<bdx> rick_h: include-file config only available in bundles, not charm config?
<rick_h> bdx: correct, it's done client side as the bundle is processed
<rick_h> bdx: charm already takes a --config option that can be yaml so not sure how useful it'd be
<bdx> Got it, thx
<bdx> rick_h: how do we get stale charms out of the charm store?
<bdx> rick_h: contact the team and ask them to update the charm or revoke privs?
<rick_h> File a bug. We file a thing with folks that have delete permissions.
<bdx> rick_h: ah nice
<rick_h> bdx: oh, yea if it's someone else's yea. Engage them is first steps
<bdx> totally
<bdx> ok, looks like I need to do both
<bdx> I've got crufty trial and error charms just hanging around it would be nice to wipe off the face of the earth
<bdx> rick_h: file a bug where?
<rick_h> bdx: github.com/CanonicalLtd/jujucharms.com
<kwmonroe> bdx: super quick fix is to remove read perms from everyone... like "charm revoke ~user/charm everyone".  you'll still see it (assuming you're the owner of the charm), but noone else will.
<bdx> rick_h: totally
<bdx> rick_h: https://github.com/CanonicalLtd/jujucharms.com/issues/487
<tychicus> sudo cat /var/lib/mysql/mysql.passwd
<tychicus> I get: No such file or directory
<tychicus> is there another method for getting the root mysql password?
#juju 2017-09-14
<RageLtMan> why did conjure-up drop landscape? sort of a "very breaking change"
<rick_h> RageLtMan: sorry that hit you there. I think the main thing was conjure-up landscape was a way to get an easy openstack with autopilot but conjure-up ended up going more direct into doing a solid openstack install walk through
<RageLtMan> rick_h: thanks for the clarification. Is there a current documentation source for using conjure-up directly? It seems the sort of thing i'd be able to feed a json/yaml file into instead the curses config...
<stokachu> RageLtMan: you want to do headless install?
<RageLtMan> that would be great too - have Chef just execute it all :)
<magicaltrout> https://imgflip.com/i/1vuudt
<stokachu> RageLtMan: so you could checkout the openstack spell and provide a bundle fragment with your changes
<stokachu> RageLtMan: it' s not documented yet but we're working on it
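(The headless invocation itself is just spell plus cloud on the command line, as in Dweller_'s runs earlier in this log; customizing what gets deployed is the part that means editing the spell's bundle fragment.)

    conjure-up kubernetes-core localhost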
<RageLtMan> stokachu: thank you much, will look into this when i get back in this evening
<fallenour> day 8, the war continues. It has been 8 days since juju last worked as desired. I continue to fight on. Troops are running thin, injuries are countless, coffee supplies have almost run out. We receive further orders in briefings, but the cries of the battles the night before rage in my mind, muting whatever words escape the mouths of high command.
<fallenour> The enemy, rbd, continues to elude us, hiding in the obscurity of multiple config files, and the overarching sophistication of ceph.
<tvansteenburgh> fallenour: sorry for your troubles but i'm enjoying the journal
<fallenour> @tvansteenburgh LOL
<fallenour> its a bloody mess man
<fallenour> ive tried damn near everything to make it work
<fallenour> im at that "just give me your ssh key, and you fix it" point.
<fallenour> its so painful, and the build times are terribly long for me because of my 6/1 connection speed
<fallenour> @stokachu hey if I add another nova node in the future, will it continue to leverage the already active rbd config, or will it roll back the ephemeral storage (default) unless I inject "unknown-syntax" as an option with the juju deploy -n1 nova-* command
<magicaltrout> random question, why does juju not bother to update /etc/hosts to help units keep track of one another?
<rick_h> magicaltrout: scale
<magicaltrout> fair enough
<rick_h> magicaltrout: think that's all there is to it.
<magicaltrout> other random question CDK related
<magicaltrout> kubectl exec -it microbot-3325520198-djs5f -- /bin/bash
<magicaltrout> Error from server: error dialing backend: dial tcp: lookup k8s-12 on 10.108.4.4:53: no such host
<magicaltrout> anyone seen that?
 * rick_h ducks and hides
<magicaltrout> yeah so to fix it rick_h
<magicaltrout> i had to.....
<magicaltrout> add almost all nodes to all my hosts files
<magicaltrout> so the kube dns knows where to find my nodes
<magicaltrout> this is a manual deployment, so i wonder how that differs in a cloud deployment
<fallenour> @magicaltrout do you have dhcp turned on?
<magicaltrout> yeah fallenour its just manual deployment within openstack
<fallenour> @magicaltrout easiest thing to do is point it all at relevant dns names
<magicaltrout> well internally kube dns is looking for k8s-12
<fallenour> that way you dont have to worry about IP, and can just focus on names independantly, let dhcp worry about IP addresses.
<magicaltrout> but the kubernetes worker doesn't have a clue what the k8s-12 is
<fallenour> @magicaltrout to be honest, I dont either, but if k8s-12 isnt a DHCP server, it isnt gonna matter, because it aint gonna work, and Ill tell you now, manual additions of IP > Host to /etc/hosts files will get non-scalable real fast.
<fallenour> @magicaltrout its in your best interest that if for some reason kube dns isnt working with your current dhcp server, that you build another one.
<magicaltrout> fallenour: yes, that much i'm aware of, so what i'm curious about is, if you deploy k8s on ec2 for example how it'd know what k8s-12 is
<fallenour> @magicaltrout sadly you dont. if you dont control the dhcp, you cant configure the dhcp, and the systems arent shared on the same dhcp, the only way to make it work is with a subdomain name, and point it there with a routable IP over WAN.
<fallenour> Its again why I stress the reasons why I dont like critical infrastructure in the cloud.
<magicaltrout> but i'm not buying that kubectl exec doesn't work in EC2
<magicaltrout> in which case the resolution must work
<magicaltrout> but you don't magically get a dhcp server in EC2 if you deploy juju
<fallenour> The issue isnt that it will or wont, the issue is that in EC2, you are in a cloud infrastructure, but your servers may be miles apart from each other in two geographcailly close DCs, or racks down. Either way, different switches, different broadcast domains. The issue is your DHCP query wont be on the same DHCP servers, specifically unless you put them on the same l2 device on the same broadcast domain on the same vlan. The issue is 
<fallenour> in a cloud infrastructure. As such, you either have to put everything on the same server, and virtualize to ensure they all use the same etherswitch
<fallenour> or you have to put all your critical infrastructure on hardware you own and control.
<fallenour> Otherwise, its no dice @magicaltrout
<magicaltrout> fallenour i have no idea what you're saying, either way i suspect it doesn't match the issues i'm seeing :)
<fallenour> The downside to containerization, is theres no hardware to control, so theres no controlling.
<fallenour> @magicaltrout Ok so DHCP works by broadcasting and listening to broadcasts for queries and requests for DHCP IP addresses, and responds accordingly
<fallenour> @magicaltrout the issue is, in order to get that request to or from a system, they have to send or receive it. You have to be on the same vlan, on the same broadcast domain in order to receive/send it to the same two systems
<fallenour> @magicaltrout in a cloud infrastructure, your devices are very rarely on the same rack stack, much less the same DC in many cases, which is why the WAN IPs are often so different from one another.
<fallenour> @magicaltrout that means they arent on the same broadcast domain, which means the DHCP server each device is talking to is very likely to be different from one another, which means theyll never get the same information, and wont know how to route it to you, which is why i recommended a subdomain name over a WAN address. Its the only feasible way with using EC2
<fallenour> @magicaltrout for instance, you can do dhcp.magicaltrout.com with a nginx box, and point that nginx box ip to your internal dhcp server. This will allow you to move the dhcp request over dns (or ddns) to your dhcp server, over nginx, and serve that query over the internet to your dhcp server over the wan.
<fallenour> @magicaltrout its convoluted, and incredibly complex, but it works very well, and requires a much more indepth knowledge of protocols and load balancing, as well as geographic based traffic flow management.
<fallenour> @stokachu ok so ceph is just Satan. Ive got HALF my OSD boxes green to go. Why does Ceph hate me so @jamespage @stokachu @catbus
<tvansteenburgh> rick_h: how does juju resolve hostnames normally, dns on the controller?
<tvansteenburgh> magicaltrout: bottom line is b/c you manually provisioned, dns isn't taken care of automagically for you
<magicaltrout> well..... balls :)
<fallenour> @stokachu @catbus @jamespage @rick_h Btw, I fixed my neutron issue by simply clicking the autotune feature on. I would highly recommend that be a default config for future versions
<fallenour> @tvansteenburgh its DHCP
<fallenour> @tvansteenburgh DHCP registers the IPs to the hostname that the MAAS issues, and then registers their info accordingly with itself. From there, it queries the DHCP server for the DNS info, and executes accordingly
<tvansteenburgh> magicaltrout: kubedns only manages container dns, not the hosts themselves
<magicaltrout> yeah tvansteenburgh
<magicaltrout> i'll write a charm to manage hosts files or something
<fallenour> @stokachu @rick_h @jamespage In the future version as well, for RBD deployments, can you please add a note in the conjure-up text that /dev/sdb, /dev/sdc, etc has to be added to annotate the drives individually by comma separator in order for larger disk counts to work effectively for RBD deployments? Its annotated for comma separators in other areas, so hoping for some uniformity there in the future. It was a lesson learned the hard
<tvansteenburgh> magicaltrout: Dmitrii-Sh might have a recommendation, i think he did a CDK on manual provider recently
<Dmitrii-Sh> in a deployment I had to work with, the environment had some automation to provision VMs, assign IP addresses to an IPAM and add the necessary entries to a DNS service
<Dmitrii-Sh> given that it was a custom piece of automation we could provide no integration juju-wise
<Dmitrii-Sh> so it was manual provider
<magicaltrout> hrm
<magicaltrout> considering manual connections are... manual... why couldn't juju provide dns services for manual stuff?
<fallenour> My dear lord I wanna scream, why on earth is it only at 270GB (two drives) of 8 drives, when 8 drives were provided? Why does this damn system hate me so?
<catbus> fallenour: what do you have in for 'osd-devices' in the ceph-osd configuration?
<fallenour> rbd, with /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde, /dev/sdf, /dev/sdg, /dev/sdh
<fallenour> it configured for rbd it looks like
<fallenour> at least it feels that way
<fallenour> @catbus whats the command for listing osd-devices?
<Dmitrii-Sh> magicaltrout: well, normally juju relies on a cloud provider to give it a node. If that cloud provider also has a responsibility of managing DNS entries then it won't interfere because it wouldn't be generic (who knows what kind of infra do you have, right?). With a manual provider you do everything manually including making sure your nodes know who to talk to (routing) and how to resolve stuff.
<Dmitrii-Sh> MAAS, for example, has its own bind service
<catbus> fallenour: try 'juju config ceph-osd' and look for osd-devices. there should be a command to list the value of the parameter directly, but I don't remember it off the top of my head.
<Dmitrii-Sh> if you need to update an upstream server, you can use dhcp snippets + ddns
<Dmitrii-Sh> https://wiki.debian.org/DDNS
<magicaltrout> yeah i just had a look at ec2 k8s and saw the internal ec2.internal pointer
<fallenour> @catbus Yea, I was right value: /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde, /dev/sdf, /dev/sdg, /dev/sdh
<fallenour> @catbus @jamespage so why isnt it recognizing that I have more than 2 drives in each server? I can tell that the server is using both drives on the third server, but no more than the 2. Does it cap the total amount of drives usable by server based on the server with the lowest drive count?
<catbus> fallenour: I believe it should be separated by space, not comma. https://jujucharms.com/ceph-osd/246
<fallenour> @catbus please tell me theres a 1 line command to fix this. Rebuild just makes me wanna cry
<catbus> fallenour: juju config ceph-osd osd-devices='/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh'
<fallenour> @catbus ok I updated, is there anything else i need to do, or will ceph-mon / ceph-osd automatically expand the pools and correct my mistake?
<fallenour> @catbus OOOOO!
<fallenour> @catbus THE GODS BLESS ME THIS DAY!
<fallenour> @catbus I SHALL BUILD A STATUE IN YOUR NAME!!!
<catbus> fallenour: You should thank the openstack-charmers team.
<fallenour> @catbus Oh I plan on giving them somethign extra special this year. Theyve made my life about a millions times more easy
<catbus> fallenour: that's exactly the idea behind Juju/Charm. :)
<fallenour> @catbus the challenge is finding out who they all are. Aside from @stokachu "they" are the only person I know on the openstack-charmers? maybe?
<magicaltrout> as its fixed, does this mean you'll type less?
<fallenour> @magicaltrout nope! As its working, i have to type more, a lot more o.o Now I can turn the project fully public, and start to scale it, and start adding all the non-profits
<magicaltrout> oh well it was worth a shot
<catbus> fallenour: what are you building this openstack cloud for if you don't mind sharing a bit details?
<fallenour> @magicaltrout dont worry though, itll be a lot of cool stuff ahead. now that the heat on me will die down, i can start focusing on my stronger areas,  and scaling systems. A lot of people stand to benefit from the platform, and itll help a lot of groups, and OSS projects move forward. A lot of people have been waiting for me to kick the last kinks out, and Openstack storage was the last one
<fallenour> @catbus Im building a IaaS for Opensource developers, Research Institutes, Non-Profits, and Universities to use to develop on, free of charge. I provide the hardware, the environment, and the SaaS, and they build to their hearts content. Its the missing piece to the perfect storm for OSS Community.
<fallenour> @catbus I realized a long time ago how financially fortunate I was compared to most other OSS developers, so Ive taken a large portion of my income for several years to build a Datacenter where I can host all the gear so people can share in what i have, and support their favorite projects without having to pay anywhere from 600-3500 a month for the privilege of giving to the community. Now all they have to give is their time.
<bdx> fallenour: thats awesome, keep us posted
<fallenour> @bdx I will, and im more than happy to. The updates on the project are posted at www.github.com/fallenour/panda
<fallenour> Ill be adding updates in the near future, to include the slides from the last presentation, and the current updates probably today. Its been a huge pain in the ass getting this all working, so I think im gonna go drown myself in beer.
<catbus> fallenour: awesome!
<fallenour> oh damn, one last question here @catbus one of the nodes failed to spin up properly and deploy, its juju deploy -n1 ceph-osd  correct?
<catbus> fallenour: juju add-unit -n 1 ceph-osd
<fallenour> @catbus and itll deploy with the current ceph-osd configs the other systems use?
<catbus> fallenour: yes, including all the relations it needs to have with other services.
<fallenour>   @catbus aaand, right back in the fire. Now its saying I have about...3x more space than whats physically possible. Any ideas?
<catbus> fallenour: I am no ceph expert. sorry.
<catbus> fallenour: but I'd like to know how you get that conclusion that it reports 3x more space.
<fallenour> @catbus because in horizon it shows available space of 6.3TB, when the maximum drive count possible in OSD devices is 17, at 146GB drives each
<catbus> maybe someone else on the channel has ideas about what causes this.
<magicaltrout> tvansteenburgh: if I was to create an NFS persistent volume
<magicaltrout> the snappage of kubelet shouldn't interfere should it?
<magicaltrout> cause its classic snap, it doesn't see any difference in filesystem does  it?
<magicaltrout> scrap that
<magicaltrout> user error
<magicaltrout> well thats an interesting side effect
<magicaltrout> juju add-relation kubernetes-worker telegraf it appears that that does bad things! :)
<tvansteenburgh> magicaltrout: bad enough to file a bug?
<tvansteenburgh> i'll make it easy for you! https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/new
<magicaltrout> i don't know if you'd call it a bug tvansteenburgh
<magicaltrout> eh, it did it again
 * magicaltrout backs out the relation
<magicaltrout> tvansteenburgh: it created
<magicaltrout> telegraf:prometheus-client           kubernetes-worker:kube-api-endpoint  http              regular
<magicaltrout> which seemed to knock all my workers offline
<tvansteenburgh> yeah i don't think you wanna be connecting to kube-api-endpoint
<magicaltrout> yeah
<magicaltrout> it made my cluster very sad
<tvansteenburgh> i thought telegraf was a subordinate that you relate to prometheus
 * tvansteenburgh looks
<magicaltrout> well on my master i have telegraf:juju-info related to kubernetes-master:juju-info
<magicaltrout> and stats flowing
<magicaltrout> but i may have guessed wrong, it was tricky to guess the flow
<tvansteenburgh> yeah that should work on worker too
<magicaltrout> yeah i put that in, you have to do a full juju add-relation kubernetes-worker:juju-info telegraf:juju-info though
<magicaltrout> else it does the bad one :)
<tvansteenburgh> don't be so lazy magicaltrout
<magicaltrout> haha thanks!
<tvansteenburgh> <3
<tvansteenburgh> the problem is juju won't connect the juju-info relation implicitly
<tvansteenburgh> so it saw that both sides had an http interface, and connected that
<magicaltrout> yeah, its fair enough
<magicaltrout> does brick your cluster for a while though ;)
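(The ambiguous and explicit forms side by side, per tvansteenburgh's explanation:)

    juju add-relation kubernetes-worker telegraf
    #   -> juju matched the http interface: telegraf:prometheus-client <-> kubernetes-worker:kube-api-endpoint
    juju add-relation kubernetes-worker:juju-info telegraf:juju-info   # explicit, safe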
<magicaltrout> maybe i should add a feature request for relation warnings like "if x connects to y" then warn the user it might explode
<bdx> magicaltrout: triggers in your charm^
<magicaltrout> yeah bdx
<magicaltrout> like "you can do this, technically, but we dont advise it" :)
<bdx> well like .... right now, I seem to end up with something like this in every charm https://github.com/jamesbeedy/layer-django-base/blob/master/reactive/django_base.py#L35,L48
<bdx> triggers will entirely simplify the code for what I am trying to do there
<bdx> what you are talking about is similar
<bdx> (P and Q) -> R
<bdx> like
<bdx> if this relation is made, or flag is set, then warn user
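[ed: a minimal sketch of the trigger idea bdx describes, assuming the charms.reactive 0.5.0 trigger API; the flag names here are hypothetical]

    # reactive/my_charm.py
    from charms.reactive import when, register_trigger
    from charmhelpers.core.hookenv import log

    # bdx's "if this relation is made, or flag is set, then warn user",
    # in trigger form: when the risky flag appears, set a warning flag
    register_trigger(when='endpoint.kube-api-endpoint.joined',
                     set_flag='charm.warn-user')

    @when('charm.warn-user')
    def warn_user():
        log('you can do this, technically, but we dont advise it')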
<bdx> cory_fu: whats the timeline looking like before 0.5.0 drops?
<thumper> hmm... do we have a list of definitive types supported in charm options?
<thumper> stokachu, cory_fu_: ^^ ?
<anastasiamac> thumper: yes, gimme a sec
<anastasiamac> https://github.com/juju/charm/blob/v6-unstable/config.go#L53
<anastasiamac> thumper: ^^
<thumper> anastasiamac: I was hoping for something on https://jujucharms.com/docs :)
<thumper> anastasiamac: thanks
 * thumper wonders if we have that type validation in bundle options...
<thumper> hmm...
<bdx> honestly, I couldn't be more disappointed with the decision to put the elasticsearch charm back on lp
<bdx> ;(
<bdx> elasticsearch-charmers: sup
#juju 2017-09-15
<magicaltrout> https://www.airbnb.co.uk/rooms/15654637 this is my house rick_h ;)
<rick_h> magicaltrout: <3 awesome
<rick_h> magicaltrout: ever need to crash my way we've got a trailer setup hah
<rick_h> not an airstream though
<cory_fu> Beisner, tinwood: Do you have any idea of the status of testing charms.reactive 0.5.0 for release?
<tinwood> cory_fu, I've not tested it yet.  Is it the one tagged "release-0.5.0.b0"?  I'll build a couple of charms and test it.
<cory_fu> tinwood: Yep, that's the one
<tinwood> cory_fu, layer-basic brings charms.reactive in?
<cory_fu> tinwood: You're aware of the dep override feature in candidate charm-build,  yes?
<tinwood> cory_fu, ah, is there another way then?  i vaguely saw something.
<tinwood> I was just going to override the layer.
<cory_fu> tinwood: https://github.com/juju/charm-tools/pull/338
<cory_fu> I think it's in candidate, but might only be in edge.
<tinwood> cory_fu, that looks good, but I think it will be easier (right now) just to override in layer-basic.
<tinwood> (assuming it is layer-basic)
<tinwood> Our tooling makes it fairly easy for manual testing.
<cory_fu> tinwood: Whatever works
<cory_fu> But yes, layer-basic is what brings in reactive
<tinwood> Okay, it'll take a couple of hours to run through some tests with a few charms.
<cory_fu> tinwood: It's also released to pypi, so you can use ==0.5.0-b0
<cory_fu> Rather than having to point to the branch
<cory_fu> (It's a dev release, so not picked up by default)
<tinwood> cory_fu, okay, sounds good.  I'll get it pulled in somehow :)
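[ed: a sketch of that pin; per PEP 440 the "0.5.0-b0" tag normalizes to "0.5.0b0", and layer-basic picks python deps up from the layer's wheelhouse.txt]

    # wheelhouse.txt of the charm layer under test
    charms.reactive==0.5.0b0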
<tinwood> cory_fu, so, charms.reactive.bus.StateList has disappeared or moved?
<cory_fu> tinwood: Hrm.  Yeah, it moved to charms.reactive.flags.  I didn't realize it was actually being used.  https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/flags.py#L21
<tinwood> cory_fu, I probably used it on ONE charm. :)  (the one I started testing with).
<cory_fu> :)
<cory_fu> I marked it as deprecated, because it was an experiment that I don't think really added any value
<tinwood> cory_fu, I think that is probably reasonable.  I do have one charm (so far) that won't install, but does pass its tests -- which is weird.  I'll dig, but test another charm.
<tinwood> cory_fu, okay, a little more serious is that charms.reactive.bus.get_state has gone (probably get_flag now).  We use this in charms.openstack (currently) to get interface objects for states.  I thought we were going for backwards compatibility?
<cory_fu> tinwood: Hrm.  That also moved to flags.  There was an issue with import order or I would have imported it back to bus.  OTOH, that was supposed to be for internal use only, so I'm not sure why it's being used in a charm?
<tinwood> cory_fu, the bus.*_state functions were mirrored into charms.openstack as useful helpers (e.g. self.set_state() in an OpenStackCharm class). Whether they were the _correct_ functions to access is now answered!
<cory_fu> tinwood: Yeah, the function I would have recommended was helpers.is_state() as the value associated with the state was more of an implementation detail for relations.  If it's going to be a breaking issue, though, I can look in to figuring out a way to make it compatible
<tinwood> cory_fu, except we actually wanted the object (for some thing; update-status), which is why is_state() wasn't used.  We also used RelationBase.from_state(...) too, in charms.openstack to fetch back relation objects (or None) for some things.
<tinwood> cory_fu, I'll raise bugs on these so that discussion can commence on github?
<cory_fu> tinwood: Sure
<cory_fu> Thanks
<tinwood> cory_fu, np.  We've probably been a bit 'naughty' in going into the internals of charms.reactive in charms.openstack, but then they are quite closely coupled (from charms.openstack's perspective).  Be good to resolve the 'proper' way to do it.
<cory_fu> tinwood: I'm really curious what you were using the value object for?
<tinwood> cory_fu, so in barbican, one of the actions actually needed data from a relation if it actually existed.  I could've gone to relation, but thought that if I could grab the interface object, I could get the data from that.
<tinwood> cory_fu, then in one of our core classes, we grab various interface objects (if they exist) to grab data for configuring common template fragments (e.g. SSL, clustering).  It was 'easier' to do this, than try to pass objects from @when(...)
<cory_fu> tinwood: You should move to relation_from_flag rather than RelationBase.from_state directly so that you can use interfaces written with the new Endpoint class (https://github.com/juju-solutions/charms.reactive/pull/123).  But either way, if you use from_state then why would you also need get_state?
<tinwood> cory_fu, probably 'evolution'; i.e. it built up over time, and multiple ways of doing things have ended up in charms.openstack.  We have a plan to try to rationalise the multiple ways of doing things during the next cycle (at least I think we do).
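[ed: a rough sketch of the migration cory_fu suggests; the flag name and the helper are hypothetical]

    from charms.reactive import is_flag_set, relation_from_flag

    # was: charms.reactive.bus.get_state('db.connected')
    if is_flag_set('db.connected'):
        # was: RelationBase.from_state('db.connected'); relation_from_flag
        # also works with interfaces written against the new Endpoint class
        db = relation_from_flag('db.connected')
        if db is not None:
            configure_templates(db)  # hypothetical helper using the interface object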
 * stormmore puts the Juju Show on for background noise
<stormmore> hey rick_h wouldn't a scenario where you want to "rebuild" the controller be the reason to leave a machine instance around after remove-machine?
<rick_h> stormmore: so I thought about it a bit and the only thing I can think of is speed of reusing the machine?
<rick_h> stormmore: but honestly I'm not sure what use case drove the feature. maybe hml has some insight (poor hml being the US TZ juju core person :P)
<xarses_> Is there a way to update the concurrency in the number of relations that are updated in parallel? This is very slow updating all my amqp relations
<stormmore> rick_h: I was just thinking of being able to replace a problematic controller w/o using another system
<rick_h> stormmore: hmm, not sure about that.
<rick_h> xarses: no knobs unfortunately
<hml> rick_h: the joys of being a (relative) newbie.  not much history.  :-) which feature are we talking about?  might get lucky
<stormmore> rick_h: still thinking it through, definitely like the blog post btw
<stormmore> hml: removing a machine from a controller without destroying the system
 * hml ponders
<rick_h> stormmore: ty glad you found the post useful
<rick_h> hml: yea new feature in 2.2.3 but no bug associated so we're curious what the use case/need was driving it.
<xarses_> rick_h: so that's AWFUL. it's been 1.5 hours and now it's finally to a point that I can verify that the config change is faulty
<rick_h> xarses_: so help me understand what's up. This is the number of relations in a single unit updated in parallel? I wonder if you're hitting kind of the fact that a unit only runs one hook at a time because the hooks could do things like change state and such and if they're all firing...ungood
<rick_h> xarses_: so are you saying that only one unit at a time was processing some event? that shouldn't be?
<xarses_> only one relation is updating at a time, as inferred by the debug log
<xarses_> in this case, I tried to enable ssl only on amqp
<xarses_> since this is openstack, there are over a hundred relations to amqp
<xarses_> and since I can't rationalize what amqp:34 relation actually connects
<xarses_> I was waiting until the nova/neutron api's updated
<xarses_> and found that it cant validate the certs
<xarses_> 1.5 hours later
<rick_h> xarses_: I see, and amqp has to rerun the joined hook over and over, 1 at a time.
<xarses_> ya, and it takes at least 5 sec to run each relation
<rick_h> xarses_: I'd suggest filing a bug on the charm. Maybe it can be more intelligent. Help give some idea of what scale you're seeing this so folks can test 'does it work well at this scale'
<xarses_> well, we need to get every one moved over to the new tls interface
<xarses_> so I don't have to do this noise by hand and mis-configure it
<rick_h> xarses_: +1
<xarses_> so how do you read these older modules that don't use the new layers
<hml> rick_h: stormmore: per the PR - it's to fix this bug: https://bugs.launchpad.net/juju/+bug/1671588
<mup> Bug #1671588: Remove MAAS machines in Failed_Deployment state <maas-provider> <sts> <juju:Triaged> <juju 2.2:Fix Released by wallyworld> <https://launchpad.net/bugs/1671588>
<xarses_> apparently you cant page-up/page-down the status page
<xarses_> on the gui
<rick_h> hml...whoa...interesting.
<rick_h> xarses_: isn't that your terminals job?
<xarses_> the juju status takes like a minute on the cli
<rick_h> I can't help but feel like juju failed the user there. Making it the users job to remember the flag seems :/
<xarses_> also, the length of the output is absurd
<xarses_> on the cli
<stormmore> rick_h: my old phrase for demo mistakes use to be "accidentally on purpose"
<xarses_> juju status | wc -l
<xarses_> 1238
<xarses_> quite too long for `watch`
<stormmore> xarses_: have you tried juju status <app> before?
<xarses_> yes
<xarses_> still takes however long juju is slow for
<xarses_> it also won't tell me which relations are executing
<stormmore> yes that has varying levels of reduction of lines
<xarses_> rick_h: amqp relations are about 280 machines * 2 (nova | neutron) * rabbitmq hosts (2)
<rick_h> xarses_: it's good feedback as things hit such big models
<rick_h> xarses_: it's also why the CMR work is important to allow breaking things into more manageable chunks
<xarses_> takes about 5 seconds per relation, so 20 sec per machine; 20 * 280 = 5600 sec, and 5600 / 60 ≈ 93.3 minutes to render a relation change
<xarses_> uh, this is one of our smaller clouds, and we keep being told this is a small scale of things for juju to handle
<xarses_> oh, we have 3 relations now
<xarses_> amqp relations are about 280 machines * 3 (nova | neutron | ceilometer ) * rabbitmq hosts (2)
<xarses_> 280 * 3 * 5 * 2 = 8400 sec, and 8400 / 60 = 140 min
<rick_h> xarses_: perhaps the folks that work on the openstack charms have some tips/suggestions for managing the scale out of the amqp there. Might be worth an ask on their mailing list/etc
 * xarses_ just glances in jamespage's general direction
<xarses_> so how do I reconcile the differences between a layer/reactive charm and these older ones that seem way more disjointed
<xarses_> urgh, I switched ssl back to off
<xarses_> and it didn't clean up the ssl config on the units correctly
<xarses_> it implies that the change from ssl=on is still floating around, and all units haven't updated so have ssl=off
<xarses_> but its like half removed some of the config
<rick_h> xarses_: maybe leverage juju run to dbl check the units
<xarses_> juju run what?
<xarses_> the units are in a invalid config state
<rick_h> Cat or grep the config files for the ssl details?
<xarses_> oh, ive spot checked them, they are wrong
<xarses_> it has ssl_port, and use_ssl enabled, the rabbitmq units don't have the port open any more, but the certs are still set in the config, the config line for the ca is missing, but its set in config. and ssl=off in juju config
<xarses_> so its literally an invalid config at this point
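[ed: a hedged illustration of the juju run spot-check rick_h suggests; the config file paths are assumptions, not from the log]

    # grep the rendered config on every unit of an application at once
    juju run --application rabbitmq-server 'grep -i ssl /etc/rabbitmq/rabbitmq.config'
    # or on a single unit
    juju run --unit nova-compute/0 'grep -A2 rabbit /etc/nova/nova.conf'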
<fallenour_> hey @catbus @rick_h @stokachu if I wanted to deploy a specific charm to a specific machine into a specific container, 2 questions:
<fallenour_> 1. Is this the command id use to do that?
<fallenour_> juju add-unit mysql --to 24/lxc/3
<fallenour_> and 2. does the container have to exist prior to the action?
<rick_h> fallenour_: yes to the first question. If the container doesn't exist yet you'd just say a new container on the machine '--to lxd:3' https://jujucharms.com/docs/2.2/charms-deploying#deploying-to-specific-machines-and-containers
<rick_h> Heh your 24/... Was copied from that doc page
<rick_h> Container numbers increment so you can't specify it
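[ed: the two placement forms rick_h describes, side by side]

    juju add-unit mysql --to 24/lxd/3   # into a container that already exists
    juju add-unit mysql --to lxd:24     # juju creates a new container on machine 24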
<fallenour_> @rick_h sweet, thanks a bunch. Next question, my juju status keeps locking up, and juju list-machines arent populating data either.
<fallenour_> @catbus @rick_h @stokachu does the system not automatically resync with juju controllers once the system is rebooted? Or do I need to reboot my juju controller to refresh?
<rick_h> fallenour_: try with --debug and see if anything more helpful shows up.
<rick_h> fallenour_: agents should update the controller if their ip changes. If the controller ip changed, ungood things can happen I think
<rick_h> fallenour_: need more details on what was rebooted and what's 'the system'
 * rick_h has to grab the boy at school
<fallenour_> @rick_h must eat the boy o.o CONSUME HIM, GAIN HIS POWER!
<fallenour_> found the reason, most certainly operator error. The odds though, like damn. I left the damn cable unplugged when moving them all to the new switch
<fallenour_> yay!!
<xarses_> well it's been 3 hours of supposedly updating juju units
<xarses_> the config is still very broken on hosts
<xarses_> how do I resolve the endpoints of 'amqp:40'
<xarses_> so I can see the current config being sent to it
<fallenour_> hey @rick_h If I wanted to spin up multiple machines at the same time, and on each system, put several systems on them, is that possible? E.G. I want to put apache, mysql, ceph-osd, nova-compute on machines 7,8,9, and 10, would I do:
<fallenour_> juju deploy ceph-osd nova-compute --to lxd:7,8,9,10 && juju add-unit --to 7,8,9,10 apache mysql
<bdx> fallenour_: you have to have an affinity between the deploy command and the service you are deploying
<bdx> e.g. `juju deploy nova-compute ceph-osd` has to be `juju deploy nova-compute && juju deploy ceph-osd`
<bdx> fallenour_: possibly what you are looking for is a bundle?
<bdx> fallenour_: bundles allow you to stand it all up with a single command
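[ed: a minimal bundle sketch of the kind bdx means, using the machines from fallenour_'s question; the series, charm names and placements are illustrative]

    series: xenial
    machines:
      '7': {}
      '8': {}
    applications:
      ceph-osd:
        charm: cs:ceph-osd
        num_units: 2
        to: ['7', '8']
      mysql:
        charm: cs:mysql
        num_units: 2
        to: ['lxd:7', 'lxd:8']

    # then stand it all up with: juju deploy ./bundle.yaml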
<xarses_> rick_h: 5 hours later, and we've just given up and manually repaired the problem
<rick_h> fallenour: yea bundles are what you want. Check out the docs for them
<rick_h> xarses_: bummer. Please do file bugs on those charms. There's a whole team of folks working on those OpenStack charms handling those config changes and such. It sounds like something that they can fix up.
<xarses_> yay, more problems
<xarses_> >failed to start instance (no "xenial" images in CLOUD with arches [amd64 arm64 ppc64el s390x]), retrying in 10s (3 more attempts)
<xarses_> yet, the simplestream service that I _just_ bootstrapped the controller with has one
<xarses_> and debug-log doesn't have any information
#juju 2017-09-16
<jujuguy> i am using a juju cloud provider for maas
<jujuguy> and i have deployed openstack-base. Now trying to delete everything but it won't release the servers/remove the applications
<jujuguy> tried to force everything but still nothing
<jujuguy> any ideas ?
<jujuguy> i even resolved the errors
#juju 2017-09-17
<fallenour> when deploying gitlab on juju, what is the best method of modifying the yaml? juju configure?
#juju 2018-09-10
<veebers> wallyworld: to confirm, if we have a unit status of Blocked and a container status of Blocked, which wins? Show the unit blocked message of container?
<wallyworld> veebers: if the container itself reports as blocked, we need to show that as that will surface the more relevant error. consider the gitlab charm case. gitlab will report blocked as db relation is missing, but gitlab pod may not be able to be started
<veebers> wallyworld: ack. Makes sense
<babbageclunk> wallyworld: hey, do I need to merge that change forward from 2.3 to 2.4, or will that get done periodically?
<wallyworld> babbageclunk: only if someone does it; feel free to do it this time
<babbageclunk> ok, so a full forward merge from 2.3 to 2.4, right?
<babbageclunk> wallyworld: ^
<wallyworld> yeah
<wallyworld> sorry, otp, distracted
<veebers> wallyworld: I'll need to revert my change to my caas charm where it sets active before setting podspec, as we never overwrite that unless the cloud container is error, blocked or allocating
<babbageclunk> kelvinliu__: did you get anywhere with the flag problem?
<kelvinliu__> babbageclunk, I manually chown the log files to solve the problem. but I am still solving a different error now
<veebers> hmm, actually is this a different issue
<kelvinliu__> babbageclunk, {"request-id":2,"error":"unknown version (2) of interface \"Agent\"","error-code":"not implemented","response":"'body redacted'"} Agent[""].GetEntities, seems the Agent facade is not registered correctly.
<veebers> wallyworld: no, I think it's my crappy active message that has me wrong. Charm will set active, pod starts up, if it encounters errors they are displayed (and in  history) then once resolved the charm active status is used (including in history)
<babbageclunk> kelvinliu__: I wouldn't have thought the log file would cause any problem - it tries to create and chown the file but doesn't worry if it fails.
<babbageclunk> kelvinliu__: weird, that sounds like a mismatch between what the client and the server thinks that facade version should be.
<babbageclunk> If you turn on trace-level logging for juju.apiserver, can you see the contents of the login response?
<babbageclunk> That should show all of the registered versions of all facades.
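[ed: one way to turn that on, assuming the controller model]

    juju model-config -m controller logging-config="<root>=INFO;juju.apiserver=TRACE"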
<kelvinliu__> babbageclunk, let me have a try,
<anastasiamac> babbageclunk: do u have a chance to review this little gem? https://github.com/juju/juju/pull/9175
<babbageclunk> anastasiamac: of course!
<anastasiamac> babbageclunk: \o/
<wallyworld> veebers: sorry, otp. seems like you got it sorted
<veebers> aye, sorry for the noise
<babbageclunk> anastasiamac: sorry, got distracted - reviewed!
<babbageclunk> anastasiamac: did you tag the wrong PR for your trello task?
<anastasiamac> thnx!
<anastasiamac> and yes but untagged within seconds :(
<babbageclunk> anastasiamac: do we normally need to get merge PRs reviewed? This one was small and pretty straightforward. https://github.com/juju/juju/pull/9181
<anastasiamac> babbageclunk: no
<babbageclunk> sweeeeet
<anastasiamac> and fwiw i had a peek at ur merge and +1
<babbageclunk> Thanks!
<kelvinliu__> babbageclunk, NewConnFacade.caller.BestFacadeVersion('Agent') --> 2         CallNotImplementedError -> &rpcreflect.CallNotImplementedError{RootMethod:"Agent", Version:2, Method:""}
<kelvinliu__> and we have only one v2 facade for Agent on apiserver - what do u think is wrong here?
<anastasiamac> babbageclunk: i have not clicked the button but will for ur peace of  mind
<babbageclunk> anastasiamac: thanks! Then it's diffuse responsibility when it breaks everything.
<anastasiamac> babbageclunk: certainly, especially in that universe where we held reviewers accountable as much as devs :)
<babbageclunk> :D
<anastasiamac> babbageclunk: funny. i did originally do everything in the loop but thought it'd be cleaner to separate individual logical steps.. i'll revert
<babbageclunk> anastasiamac: thanks!
<anastasiamac> babbageclunk: nws, the difference of course is that u will always get an additional pass through the loop
<anastasiamac> babbageclunk: so in most cases, instead of going once, u'll loop twice
<anastasiamac> babbageclunk: since the loop will be do-while instead of while-do
<babbageclunk> anastasiamac: I think that's ok, but you could just have an if-break in the middle if you wanted it to be the other way.
<babbageclunk> kelvinliu__: sorry, I have to go for a bit, but I'll be back online later on
<anastasiamac> babbageclunk: i could, but in terms of a lean loop, the ifs and other logic stmts make it yuck...
<kelvinliu__> babbageclunk, no worries. cu later
<babbageclunk> anastasiamac: I'm not sure - it's not a big deal either way
<babbageclunk> kelvinliu__: did you get that facade issue sorted out? want any help?
<kelvinliu__> babbageclunk, just got some hints from Ian, im testing on it now. Thank you
<babbageclunk> kelvinliu__: cool cool
<kelvinliu__> babbageclunk, have a good one, cu tmr
<babbageclunk> kelvinliu__: you too! :)
<wallyworld> kelvinliu__: if you get a chance between testing, here's a small PR which fixes a critical k8s agent issue affecting CI and end users which I'd like to land https://github.com/juju/juju/pull/9182
<kelvinliu__> wallyworld, looking now
<wallyworld> ty
<kelvinliu__> wallyworld, LGTM thanks
<wallyworld> tyvm
<wallyworld> how's the jujud? did you find any state errors?
<kelvinliu__> wallyworld, found a mongo auth issue, looking on it now
<wallyworld> kelvinliu__: ah good! at least that explains the error
<kelvinliu__> wallyworld, yeah, thanks!
<boritek> hello, after a machine restart "juju status" hangs, and the gui is not reachable
<boritek> also the IP not pingable
<boritek> how can I start it again?
<boritek> it seems the controller is not running
<stickupkid> if you've lost your controller and you no longer have access to it, then you need to bootstrap and start from scratch...
<boritek> stickupkid: how can i lose it?
<boritek> nothing special happened. is it not production ready?
<stickupkid> boritek: which provider did you use, lxd, aws, azure, etc?
<boritek> stickupkid: I configured my own maas
<boritek> and maas-controller is on the same host where juju controller was
<stickupkid> boritek: i don't know enough about setting maas up to be more helpful, some people will come on later today that will be able to help you more
<boritek> stickupkid: does juju controller not run in a container?
<boritek> how could I manually start it up?
<boritek> is it in lxc, lxd ?
<rick_h_> boritek: so the controller runs on top of hardware registered in MAAS
<rick_h_> boritek: the controller runs on the cloud with the software you want to run/manage so that we know it can reach everything network-wise and such
<rick_h_> TFW your printer says it's out of paper and you can't recall the last time you put paper in it or where fresh paper might be...
<veebers> so with the fixes for 1.11 going into 2.3 and 2.4 I suspect that we're looking at doing the next release with go 1.11?
<rick_h_> veebers: just trying to test windows
<rick_h_> veebers: we're not updating everything to 1.11. the issues fixed are valid 1.10
<rick_h_> veebers: my understanding is 1.11 is a sprint topic
<veebers> rick_h_: ack I see, thanks for confirming
#juju 2018-09-11
<kelvinliu_> wallyworld, could we have a quick chat?
<wallyworld> sure
<kelvinliu_> wallyworld, thanks, standup HO?
<veebers> wallyworld: FYI pushed up the latest for the cloud container (also: https://pastebin.canonical.com/p/YQGrRfcS2S/, note my dumb message for active needs to be better)
<wallyworld> veebers: will look after kelvin
<veebers> awesome, thanks
<veebers> hmm, might have to re-run my full test, may have mucked it up
<babbageclunk> wallyworld: I think the leases in that bug are a red herring. We don't have any way to look at that system anymore, do we?
<wallyworld> babbageclunk: might do as it's an IS model
<babbageclunk> wallyworld: The reason I ask is that there's a note at the bottom saying he'll need to clear it out soon and it was a week ago.
<wallyworld> babbageclunk: ah ok, might not have it then. maybe just leave a note on the bug and ask for more info?
<wallyworld> or we can hack the db to orphan a lease?
<babbageclunk> Yeah, doing that now. I can't find anything that would cause a lease to keep a model around.
<kelvinliu_> wallyworld, babbageclunk the reset apiRoot issue is solved. I can see workers started, then can see lots of NEW errors which is great! thanks for the help!
<wallyworld> yay
<babbageclunk> kelvinliu_: oh awesome - what was the problem?
<babbageclunk> (ha, yay lots of new errors! :)
<kelvinliu_> babbageclunk, i wrongly removed the a.root.rpcConn.ServeRoot call in the login method coz it caused a different error before..
<babbageclunk> ah, right
<kelvinliu_> babbageclunk, yeah, it's expected to have lots of errors! lol
<wallyworld> veebers: there's still this: "workload   active       Instantiating pod"
<wallyworld> it shouldn't be active if the pod has not come up yet
<veebers> wallyworld: aye, that's the charm setting 'active', with message 'Instantiating pod' (it does that just after setting the podspec)
<veebers> that's the crummy message I pt there
<wallyworld> right, but if the container status is not there or is blocked/waiting, we need to filter that
<wallyworld> filter the active status (regardless of message)
<wallyworld> as it's not active yet
<veebers> wallyworld: aye, that happens; That pastebin is me deploying without setting trust, it goes into error (you see the pod errors there) then setting trust, it goes though and sorts it all out etc.
<wallyworld> but the workload status goes through an active state
<wallyworld> which is wrong
<wallyworld> as the pod is not up at all at that stage
<veebers> wallyworld: ah right you are, yeah that idea of ours of the charm setting active when setting the pod spec is wrong. Let me address that
<wallyworld> we said it could do that
<wallyworld> because it has no way to know otherwise
<wallyworld> hence we need to have that filter to correct it
<wallyworld> gitlab would set blocked initially, but when relaton is joined it will set active then and as with mariadb, pod may not ve ready then either
<veebers> right, so at the moment it'll set the pod spec, that will come through and we probably won't have a container status with it, nor any historic ones so it uses the unit status. It needs a tweak there
<wallyworld> yup, i think we said if container status is missing, count that as waiting for container
<veebers> yeah, that's what we have. I'm re-running as I may have screwed up what I was actually running against.
 * veebers triple checks that unit test
<veebers> wallyworld: ah, having a look it appears to be because AddUnitOperation.Done(..) calls SetStatus for unit status, which calls probablyUpdateHistory etc. adding a unit test for that and looking at how to resolve
<wallyworld> veebers: right, but the map of global key to status should have been updated to have the inferred status
<veebers> wallyworld: that's UpdateUnitOperation, not Add*
<wallyworld> ah ok. makes sense. so similar fix needed there also
<wallyworld> babbageclunk: +14/-4 :-) https://github.com/juju/juju/pull/9185
<babbageclunk> wallyworld: looking!
<babbageclunk> wallyworld: approved
<wallyworld> tyvm
<wallyworld> babbageclunk: i need the extra check - !os.IsNotExist(err) returns true for nil err and thus returns without doing the download. and worse, the returned error is nil and so the caller thinks there's nothing wrong
<babbageclunk> Oh right!
<wallyworld> babbageclunk: a subtle bug - we were getting nil charm urls *sometimes* (and no errors logged)
<babbageclunk> No, hang on - if the err is nil that means the file is there, right?
<babbageclunk> (and there was no other error statting it)
<babbageclunk> Oh, I see
<wallyworld> the dir is there
<babbageclunk> ah
<babbageclunk> yup
<wallyworld> and we weren't therefore setting the url to return
<babbageclunk> doh
<wallyworld> a fine mess
<veebers> you have a moment for me to pick your brain? I was hoping to copy the pattern used in UpdateUnitOperation (creating the ops for status) so we can seed what status doc gets used for history (and to avoid using setStatus in Done as that sets history). This code errors 'not found' because the createStatusOps from the addUnitOps call hasn't run yet: http://paste.ubuntu.com/p/DSnYcYwwFS/ thoughts?
<veebers> wallyworld: ^^ d'oh never actually pinged you, that wall of text is for you :-)
<veebers> is there a nice way to create and apply ops in the Done method? That seems a bit off though
<wallyworld> veebers: looking
<veebers> wallyworld: would it be sensible to have the 'new' code not in Build, but as part of done, some ops.application.db().Run(func() { the stuff I have there that will return status ops })
<wallyworld> veebers: might be easier to jump on hangout
<veebers> ack
<veebers> jumping in standup
<Doctor_Nick> what sort of dark secrets were revealed on that hangout? we may never know
<veebers> Doctor_Nick: lol, a bit of going in circles then realising late that the solution we came up with won't work for all cases and then starting the process again ^_^
<boritek> hello
<boritek> how can I change the virtual IP for the juju cloud-controller?
<boritek> guys, after the host restart, I couldnt reach the juju gui anymore, therefore I started everything from scratch, deleted/killed the controller and recreated with juju bootstrap, but it hangs now at the "Fetching Juju GUI 2.13.2" phase
<boritek> and how can I set a static IP for the cloud controller while bootstraping?
<rick_h_> boritek: try bootstrapping with --debug. When it says fetching, it's often kind of wrong about that stage. The next steps are boring and don't output stuff but I'm guessing the issue is more there
<rick_h_> boritek: as far as a static IP, on what cloud?
<boritek> rick_h_: in the end it could recreate the controller, but it was slow
<boritek> rick_h_: static IP for my maas-cloud-controller gui
<boritek> rick_h_: am I right that it is an lxc container but is somehow hidden?
<rick_h_> boritek: so it'll use the same IP as the controller itself for the GUI. It's served via the same http setup. The controller should listen to all IPs on the machine and so any IP on the machine can/should work
<boritek> it does not show up with lxc list
<rick_h_> no, there's no lxc by default
<boritek> rick_h_: no, the juju controller (probably not lxc but snap) has a different IP than the physical machine underneath, which is the maas-controller
<boritek> so juju-controller has a virtual IP, but I am not sure how to change it
<rick_h_> boritek: ok, you've got a maas controller. When you bootstrap the controller will get a machine from MAAS to install onto
<rick_h_> boritek: what machines are in your MAAS that Juju is pulling from? e.g. what node in maas shows up as used?
<boritek> when I bootstrapped juju controller i set it up to connect to the maas-controller underneath; beyond that, maas itself sees 32 physical machines
<boritek> but i can see the snap app on the controller node by "snap list"
<boritek> juju              2.4.3      5139  stable    canonical✓  classic
<boritek> i guess it is not only the juju client app but also the controller and gui part too
<rick_h_> boritek: so that just means the juju client is there. The controller does not use a snap
<boritek> ah, so where is it then?
<rick_h_> boritek: a controller only comes into being when you run juju bootstrap $cloud
<boritek> yeah i ran that
<boritek> so where it went to?
<rick_h_> boritek: run `juju controllers`
<rick_h_> boritek: that will list out all of your known controllers out there
<rick_h_> boritek: and then you can use `juju switch x` to switch to the controller
<boritek> maas-cloud-controller*  default  admin  superuser  maas-cloud         2         1  none  2.4.3
<rick_h_> boritek: and `juju gui` to see the GUI from that controller
<boritek> yeah i know that, gui works now
<boritek> but i want to change its virtual IP
<rick_h_> boritek: so that's going to be running on a MAAS node then.
<boritek> and also have a problem that gui will stop working after the physical host restarted
<rick_h_> boritek: if you want to know which node you can look at your maas dashboard or do this: juju switch controller; juju status
<rick_h_> and it'll show the machine information of the controller in status
<rick_h_> boritek: hmm, not sure on that. When you restart, the jujud service should restart and the GUI is served via the same jujud your client talks to
<boritek> there is no jujud service
<rick_h_> boritek: it's running on that machine
<boritek> i was also searching for stuffs like that
<rick_h_> boritek: so when you run juju status and see the 0: machine it should show you the IP of it
<rick_h_> boritek: and you can use `juju ssh 0` to connect to the controller node
<rick_h_> boritek: and see things like jujud running on it
<boritek> juju status:
<boritek> default  maas-cloud-controller  maas-cloud    2.4.3    unsupported  12:27:22Z
<boritek> Model "admin/default" is empty.
<rick_h_> boritek: right, you need to change to the controller model
<rick_h_> boritek: `juju switch controller`
<rick_h_> boritek: and try juju status again
<boritek> ah ok
<rick_h_> boritek: check out the tutorials. There's some good info in there on adding models, etc.
<boritek> 0        started  10.189.242.63  eypwax   bionic  default  Deployed
<boritek> this is the one
<rick_h_> boritek: https://docs.jujucharms.com/2.4/en/tut-google and such
<boritek> so how to change the IP?
<rick_h_> boritek: right, that's the running controller machine in MAAS with that address
<rick_h_> boritek: so that's up to MAAS and not Juju
<boritek> ah
<rick_h_> boritek: it's going to get the IP on the machine which I'm guessing is handed out/provided by MAAS
<boritek> so it means it is running on a physical host?
<rick_h_> boritek: since you're not on an AWS/etc you don't have an elastic IP to stick on it via an API
<rick_h_> boritek: correct
<boritek> i thought this would be a container
<boritek> yeah MAAS(-controller) is also the DHCP server
<rick_h_> boritek: no, it gets a machine on whatever cloud you're using it against. So in AWS/GCE/etc it's an instance there. In LXD it's a container, etc
<rick_h_> boritek: it's using the cloud-api to set things up on whatever cloud it's pointed at and MAAS can only provide instances from its pool like AWS can only provide instances from its pool
<boritek> rick_h_: how can I ask the bootstrap process to deploy it to a selected machine?
<boritek> or even better to a container?
<rick_h_> boritek: so you can specify --bootstrap-constraints that guide it to use characteristics, or you can use MAAS to tag machines and to specify a tag at bootstrap or deploy time
<boritek> rick_h_: yeah i have a pool, but what if i dont want it to be random
<rick_h_> boritek: to do a container you have to do more work to manually add the container, register it in maas, tag it, and bootstrap to MAAS specifying that tag
<rick_h_> boritek: well in the cloud world we very much follow the "think cattle, not pets" mantra
<rick_h_> boritek: so you specify the type of machine you want as far as cpu, ram, etc and we ask the cloud for one
<rick_h_> boritek: if you want to be that specific then you have to do things like unique tags or the like
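[ed: a sketch of the tag approach rick_h describes; the tag names are hypothetical]

    # after tagging the chosen node in MAAS:
    juju bootstrap maas-cloud --bootstrap-constraints "tags=juju-controller"
    # and the same idea per application at deploy time:
    juju deploy mysql --constraints "tags=db-node"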
<boritek> ok, I see, i would prefer all kind of controllers to be on the same node
<boritek> others could be more random, but it would also be nice to fill up machines with containers from top to bottom
<boritek> rick_h_: well now i tried to login to the cloud-controller machine, but it does not let me in with the ubuntu user
<boritek> does it not deploy my keys automatically as with other nodes with maas?
<boritek> how can i login?
<boritek> or same keys with the gui?
<rick_h_> boritek: it should. You can use Juju to ssh in via `juju ssh 0` (machine or unit id)
<boritek> ah
<boritek> perfect
<rick_h_> boritek: and then you can do it manually with the SSH key that's in MAAS for the user that Juju is using: ssh ubuntu@$IP
<rick_h_> boritek: but you have to have your SSH key set up in MAAS for the user whose API key you're using
<boritek> yes i have my keys in maas
<boritek> and it worked for other physical nodes
<boritek> but not here
<rick_h_> boritek: hmm, not sure. It should.
<boritek> rick_h_: juju does not communicate and share info with maas?
<boritek> juju list-machines only sees 1 machine that it created for the controller
<boritek> but i have some other machines deployed from maas gui
<rick_h_> boritek: sorry was on the phone. So no, MAAS is just a cloud to Juju. If you go to the cloud and do work Juju ignores it. If you want Juju to manage things it has to be done through Juju. It only tracks work done through the controller using the Juju client.
<rick_h_> boritek: and it'll communicate with MAAS about getting instances, what to run on them, etc. MAAS will communicate back details about the machines given, etc.
<rick_h_> boritek: but Juju will not "pick up" stuff on the underlying cloud and auto add to itself any knowledge about it
<boritek> ok, understood.
<boritek> rick_h_: thank you very much for your help so far. I need to go now, but will continue working and learning about it tomorrow. Especially regarding the containers
<rick_h_> boritek: cool np. Happy tinkering
<manadart> externalreality: Landed that patch, which makes its follow-up ready to review: https://github.com/juju/juju/pull/9186/files
<manadart> No rush; tapping out for the day.
<externalreality> manadart, ack
<externalreality> manadart, have a nice evening
<manadart> externalreality: Cheers; have a good one.
<externalreality> manadart, watchers driving me crazy, but will try :-D
<manadart> externalreality: I muse about the watcher pattern sometimes. I usually come around to thinking about streams.
<externalreality> manadart, roger that
<asbalderson> Good day everyone!
<asbalderson> Is it possible to use juju to deploy something like RHEL?
<rick_h_> asbalderson: sure, there's an ubuntu charm that basically does that. It's setup to be the ubuntu series and just brings up an instance.
<rick_h_> asbalderson: you could do one for centos I believe in most clouds or in your own MAAS with custom images
<asbalderson> rick_h_: I've been having a lot of trouble browsing the charm store to find something like this; where can i find it?
<asbalderson> also, thank you :)
<rick_h_> asbalderson: so there's the ubuntu and ubuntu-lite charms that show the idea: https://jujucharms.com/ubuntu/12 and https://jujucharms.com/u/jameinel/ubuntu-lite/7
<rick_h_> asbalderson: there's not a community contributed one for centos atm, it'd be a good thing to have submitted :)
<magicaltrout> we also mulled over the idea of a more generic layer-basic a while ago, so its easy to write charms for both ubuntu and centos.... if you feel inspired... ;)
<NickZ> does anyone know where the documentation is on how config.yaml options are exposed to the install hooks?
#juju 2018-09-12
<hloeung> NickZ: all the configs are exposed to all hooks, you need to use 'config-get' to retrieve the values
<hloeung> NickZ: might want to look at using charmhelpers which helps make writing charms easier
<hloeung> NickZ: anyways, I think this might help you - https://docs.jujucharms.com/2.4/en/reference-hook-tools#config-get
<NickZ> ah hah
<NickZ> thats the one
<NickZ> thanks
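[ed: a minimal sketch of reading an option from a python hook via charmhelpers, as hloeung suggests; 'myopt' is a hypothetical option defined in config.yaml]

    # hooks/install
    from charmhelpers.core.hookenv import config, log

    cfg = config()  # wraps the config-get hook tool
    log('myopt is set to {}'.format(cfg['myopt']))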
<veebers> wallyworld: FYI https://paste.ubuntu.com/p/XtJPrZzMgN/ just fleshing out a unit test or 2 to make sure the changes are covered properly (and not just tangentially)
<wallyworld> looks good
<veebers> wallyworld: *sigh* I don't see the 'active' status set come through at all on a 'good run', investigating
<wallyworld> that would be triggered when container status flips to running
<veebers> wallyworld: it's not in the collection at all, status is 'Started Container' (i.e. from cloud container status update)
<wallyworld> that cloud container status change to "started container" should cause the unit status to be re-evaluated
<veebers> indeed
<wallyworld> and written to history
<wallyworld> kelvinliu_: this PR add constrains support. the actual change is small - the k8s tests add several lines to the diff https://github.com/juju/juju/pull/9187
<kelvinliu_> wallyworld, looking it now
<wallyworld> ty. no rush
<kelvinliu_> wallyworld, looks awesome! just not sure if the constraint is the max or min?
<wallyworld> kelvinliu_: it is the max for k8s
<wallyworld> similar to lxd
<wallyworld> whereas for clouds it is the min
<wallyworld> we haven't sorted out a syntax yet to cover both cases in a way that everyone could agree on
<kelvinliu_> wallyworld, ic. great thanks
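[ed: a hedged example; the charm name is illustrative, and on a k8s model the value acts as a cap, like lxd, rather than a minimum]

    juju deploy cs:~juju/mariadb-k8s mariadb --constraints "mem=1G"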
<wallyworld> kelvinliu_: any luck with the controller.yaml hacking?
<kelvinliu_> wallyworld, agent.conf is always changing for each run of `jujud machine`/`jujud bootstrap-state`. the changes include controllercert, controllerkey, password, ipaddress, sharedsecrets, etc
<wallyworld> kelvinliu_: that's because jujud bootstrap-state re-generates all that stuff (from memory). it is only supposed to be run once to set everything up
<kelvinliu_> wallyworld, im wondering how to register a 2nd controller.
<kelvinliu_> wallyworld, im guessing whenever jujud machine starts, it will be a brand new controller?
<wallyworld> kelvinliu_: "jujud machine-0..." is the start of a configured controller and can be done many times as the systemd server restarts. agent.conf is all set up and the key things like cert etc are fixed.
<wallyworld> jujud bootstrap is used to initialise things and only runs once
<kelvinliu_> wallyworld, `var/lib/juju/tools/machine-0/jujud machine --data-dir '/var/lib/juju' --machine-id 0 --debug` always changes the conf file for each run
<wallyworld> in what way?
<kelvinliu_> wallyworld, yes
<wallyworld> what changes is jujud machine making?
<wallyworld> it might update some things, but not ca cert or api addresses
<kelvinliu_> it changes controllercert, controllerkey, ipaddress
<wallyworld> something is wrong then i think. i expected the cert and key to be set up as part of bootstrap and then left alone
<wallyworld> i have a meeting in a minute, i can look as the code after
<kelvinliu_> wallyworld, i'm going to grab some food, then continue to look into it soon.
<wallyworld> no worries
<wallyworld> kelvinliu_: how'd you go, did you need me to look at anything?
<kelvinliu_> wallyworld, did some hardcoding, and fixed a bug at /github.com/juju/juju/utils/scriptrunner/scriptrunner.go; just got `juju status` talking with the apiserver successfully.
<wallyworld> great
<wallyworld> kelvinliu_: so you run jujud bootstrap-state once, and then jujud machine as many times as you want
<wallyworld> and that specific controller agent works with a hacked controllers.yaml
<kelvinliu_> wallyworld, yes, the controller key/cert changed still, but I made the apiaddress unchanged by hacking the code.
<wallyworld> hmmm, the key/cert should not change when jujud machine is run
<kelvinliu_> wallyworld, I added the local controller info into my juju home
<kelvinliu_> wallyworld, seems we include the current ipaddress in the cert
<kelvinliu_> wallyworld, so now i have to cp the latest cert from agent.conf to ~/.local/share/juju/controllers.yaml to let the juju cli work whenever jujud is restarted
<hml> stickupkid: i moved the pr for changing the v7-unstable to reference itself into the review column, that "should" land before the profile changes. we'll see after the mtg
<stickupkid> hml: i've made a few comments, but not approved until after the meeting :D
<hml> stickupkid: ty - understood.  i'll have the other piece up shortly
<hml> stickupkid: i'm thinking empty should check for config and devices, i wasn't going to bother with name or description.  any preference?
<stickupkid> hml: yeah, that's fine with me, I'm not a fan of name anyway, so config and devices wfm
<hml> stickupkid: quick HO?
<stickupkid> hml: of course
<hml> stickupkid: pr updates available for your viewing pleasure.  :-)
<stickupkid> hml: sweet
<stickupkid> hml: note we can do this https://play.golang.org/p/YNNbd7jAqCh
<stickupkid> hml: i.e. we can move between structs without manual conversion, as long as the types align
<hml> stickupkid: do the func magic() inside juju?
<stickupkid> hml: yeah
<hml> k
<rick_h_> kwmonroe: cory_fu bdx magicaltrout zeestrat juju show countdown, 54min to go
<rick_h_> juju show everyone! refill your coffee, batten down all hatches!
<aisrael> cory_fu, I noticed charm-tools depends on python3.5 when I try to build, but 3.5 isn't available on bionic. Thoughts on either moving it to 3.6 or build/testing on bionic?
<cory_fu> aisrael: Yeah, it should work with 3.6.  I didn't know there was a hard req on 3.5
<cory_fu> aisrael: I'm working on the snap build, so I can take a look
<aisrael> cory_fu: yep, fails w/"ERROR:   py35: InterpreterNotFound: python3.5" when I run `make`
<aisrael> I pushed a change for review, but haven't been able to test it locally because of that
<zeestrat> rick_h_: sorry, can't make this one. As a suggestion for the next one, it would be cool to have some of the Openstack charmers on to talk about their latest release.
<rick_h_> zeestrat: I'll ping beisner on that <3
<cory_fu> aisrael: Change tox.ini to have these:
<cory_fu> envlist = py27, py35, py36, py37
<cory_fu> skip_missing_interpreters = true
<aisrael> cory_fu, ack
<rick_h_> folks that want to join the juju show and chat can use: https://hangouts.google.com/hangouts/_/vw67fkh53jhurbafbnrohikvfme
<rick_h_> agenda so far: https://discourse.jujucharms.com/t/juju-show-39-beedy-show-and-tell-and-news/233
<rick_h_> and those that want to watch the stream: https://www.youtube.com/watch?v=OH1TMQzep1s
<beisner> Just a heads up on that skip missing interpreters trick, if the machine has none of those versions, it will exit cleanly and pass without executing any tests.
<rick_h_> beisner: hah, that sounds less than helpful
<beisner> cory_fu aisrael fyi ^
<aisrael> beisner, sounds about right ^_^
<cory_fu> beisner: yeah, but I don't know of an alternative
<cory_fu> aisrael: You could maybe try just using "py3" but I feel like I remember that not working
<beisner> py3 works now
<beisner> But it's also not a perfect solution as you may want to actually test against multiple py3x's.
<cory_fu> beisner: True.
<cory_fu> aisrael: I do think py3 would be better for the charm-tools case, as we don't have any specific py3.x requirements
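[ed: the tox.ini lines from this thread combined, with beisner's caveat: if no matching interpreter is installed, tox exits green without running any tests]

    [tox]
    envlist = py3
    skip_missing_interpreters = true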
<beisner> rick_h_  What is the date and time of that session?
<rick_h_> beisner: so it'd be two weeks from right now
<rick_h_> beisner: or we could shift it if it's helpful for TZs
<rick_h_> bdx: ping-ping
<aisrael> cory_fu, makes sense. I'll update tox and see about adding a test for my pr
<rick_h_> 3min warning for anyone else that wants to join in
<beisner> rick_h_: should work - can you send me an invite / details?
<beisner> for +2wks
<kwmonroe> hey! you weren't lying.. my jaas models are 2.4.3.  neat!
<rick_h_> kwmonroe: would I lie to you :p
<NickZ> aw
<NickZ> i missed it
<NickZ> what's going on in salt lake city?
<veebers> Morning all o/
<thumper> NickZ: the next product sprint in october
<NickZ> ah hah
<veebers> wallyworld: we have an issue, if the charm sets status as active before the pods are done we lose that history (status is stored, but not history, so 'juju status' shows the active message once everything is resolved, but show-status-log won't show it as the current active)
<wallyworld> veebers: when the container status changes state, it is supposed to see if unit status would be different and update history
<wallyworld> call probablyUpdateHistory() - that checks the last known historical value and updates if needed
<veebers> wallyworld: hmm, good point, I think it might be a small tweak then; We're no longer losing that data as I just fixed that (active being overwritten by 'waiting for container' due to timing)
<wallyworld> perceived unit status = func(unit status, container status).... so when either changes, call probablyUpdateHistory()
<veebers> aye
#juju 2018-09-13
<veebers> wallyworld: sweet, test highlighted the issue, quick fix in the end (conditional at wrong level, let probablySetStatusHistory make that choice)
<wallyworld> yup, it checks for status being the same
<veebers> aye, I had to tweak that a little with the history overwrite bits
<wallyworld> kelvinliu_: no rush, just sometime today if possible, i added placement support https://github.com/juju/juju/pull/9190
<kelvinliu_> wallyworld, yeah, will take a look soon
 * wallyworld pops out to buy coffee
<veebers> wallyworld: seeing this in the operator pod log: https://paste.ubuntu.com/p/WS8STGfb9F/ did something land recently that might cause that? I'm pretty certain it's unrelated to my changes at all
<wallyworld> operator pod code has changed, but your local copy of the code is stale
<wallyworld> you just need to rebase
<veebers> ack thanks, will do
<veebers> wallyworld: ok, finally got that PR updated, history looks good: https://paste.ubuntu.com/p/FnVQ7q3QX8/ I did see these logs in the controller debug-log, I don't think it's my changes, I did touch the provisioner though: https://paste.ubuntu.com/p/pjfvpmYngV/
<wallyworld> veebers: yeah, those are "normal". just getting a coffee and then will look
<veebers> ack, thanks. should be pretty much there this time :-)
 * thumper sighs
<thumper> anastasiamac, wallyworld: do you know of any CLI tests that ensure there is a controller entry for a mock CLI test?
<thumper> using testing.BaseSuite
<thumper> which has a fake xdg data suite thingy
<wallyworld> you mean in the local yaml files?
<thumper> yeah, to fake a controller entry
<wallyworld> hmmm, there's a few slightly different ways tests do it; i can't recall a specific example without digging around, i can take a look
<thumper> oh...
<thumper> it uses a different base...
<thumper> uses JujuOSEnvSuite
<wallyworld> there's a MemStore that tests can use
<wallyworld> and that's passed in to the CLI construction
<wallyworld> and the mem store is set up with data
<wallyworld> so the test never hits disk at all
<thumper> is that in the model wrapper?
<thumper> wallyworld: https://github.com/juju/juju/pull/9191
<wallyworld> righto
<thumper> wallyworld: it seems github diff doesn't like me renaming the state_test.go to state_internal_test.go
<thumper> and then add a different state_test.go
<wallyworld> sigh
<thumper> if only it tracked renames :)
 * thumper EODs
<thumper> see y'all tomorrow
<wallyworld> oh joy! +5,633 −5,423
<babbageclunk> wallyworld: yeah, that description seems too brief given the size of the PR
<babbageclunk> wallyworld: want a shorter one? https://github.com/juju/juju/pull/9192
<anastasiamac> most of it is just a test move, like a file rename kind of...
<babbageclunk> anastasiamac: ah right - yeah, there are a lot of status tests.
<anastasiamac> yep
<stickupkid> I like how we have two different places for escaping mongo "." in names
<manadart> stickupkid: Regarding recent changes to CI for formatting, what might cause this? http://ci.jujucharms.com/job/github-check-merge-juju/3433/console
<stickupkid> manadart: what go version you using?
<manadart> gsamfira: Want to jump in here? stickupkid: should that matter for CI?
<stickupkid> manadart: if it's 1.11, then you need to downgrade to 1.10.x - golang made a breaking change
<stickupkid> RE: https://github.com/golang/go/issues/26098#issuecomment-400859370
<stickupkid> manadart: we're going to force 1.11 next week at the sprint, so all outstanding branches etc will need updating
<gsamfira> stickupkid: gotcha. Downgrading. I'm running 1.11
<gsamfira> thanks
<stickupkid> gsamfira: np :D
<stickupkid> manadart: i'm looking at unescaping documents from mongo, have you seen this in passing? or do we do it late in the day, before we use it
<stickupkid> manadart: I'm looking to see if there is any common pattern for this
<manadart> stickupkid: Not sure. I've not had to accommodate it directly.
<stickupkid> manadart: fair
<rick_h_> stickupkid: manadart I thought we had a tool around that. I remember jam mentioning making things "fat hyphen" and such
<stickupkid> rick_h: we do, i found it
<stickupkid> rick_h: well i found it twice, there is code duplication :D
<rick_h_> stickupkid: hah, ok cool
<hml> stickupkid: changes made: https://github.com/juju/charm/pull/264
<hml> stickupkid: i plan to squash the commits before merge.
<stickupkid> hml: yeah, fine by me
<stickupkid> hml: LGTM - :D
<hml> stickupkid: merged
<parlos> Good Morning
<rick_h_> morning parlos
<rick_h_> externalreality: QA'd the branch there. Looks awesome!
<rick_h_> externalreality: one typo note and I thought I recalled a conversation around the error at the end of complete? If that's for followup work then carry on
<parlos> I've used the openstack-bundle-57 to deploy OS, works nice. However, I ran into trouble a few miles down the line. What I discovered was that nova compute ran out of local storage, however afaik, there is no config option for that in the nova-compute service, as that is handled by Libvirt. Which then got me wondering, who installs/configures libvirt? Guess it comes with the OS, and if so, how could this be modified from the bundle?
<stickupkid> hml: are you using "gopkg.in/charm.v6-lxdprofile" in the dep file
<stickupkid> hml: i'm trying to do it the standard way, rather than injecting the git repo directly into the vendor folder...
<rick_h_> parlos: so the bundle is a collection so applications and each application will have its own config available. I'd expect that it's something you can tweak in the nova-compute charm config: https://jujucharms.com/nova-compute/ (look at the config section)
<rick_h_> parlos: for expertise on setting that up better I'd reach out to the openstack folks but hopefully that helps
<parlos> rick_h; from that I can config the ephemeral storage, but the storage i think i'm referring to is managed by libvirt, which isn't a service in the bundle...
<parlos> rick_h; the question can be generalized: who configures the 'base' services on a node? SSH seems to be configured by maas in my case, but juju also does something to it (i think) but ssh is not listed as a service.. hope my question is understandable.
<stickupkid> hml: this is really annoying, we can't namespace dep toml file to point to "gopkg.in/juju/charm.v6-lxdprofile"
<stickupkid> hml: it's because https://github.com/niemeyer/gopkg/blob/master/version.go#L16 hardcodes unstable :|
<hml> stickupkid: poopy
<stickupkid> hml: i've got an idea :D
<stickupkid> hml: yes it works :D
<hml> stickupkid: w00t
<stickupkid> hml: one sec, i'll make a branch, that we can use as our base branch
<stickupkid> hml: this could be away to get rid of the patches folder as well
<hml> stickupkid: cool, jam was wondering if that could be done with deps
<stickupkid> hml: yeap :p
<stickupkid> hml: CR here https://github.com/juju/juju/pull/9193
<stickupkid> hml: so we don't have to change any source code in juju, just update the revision, when the v6-lxdprofile feature branch is done, then we just update the dep Gopkg.toml + lock file and we're done :D
<stickupkid> hml: right, i can start working on the validation stuff now :D
<stickupkid> hml: don't feel dirty hacking on my vendor folder now
<hml> stickupkid: how does that work?  charm/lxdprofile instead of charm/v6-lxdprofile?
<stickupkid> hml: typo, just needs to be charm, one sec
<stickupkid> hml: it ignores everything after the project anyway
<stickupkid> hml: try now
<hml> stickupkid: trying
<rick_h_> parlos: so juju will touch things on a node that it needs to communicate/work like ssh keys. It doesn't do anything else. All of that is up to the charms on it. So I'd expect that if nova-compute needs to manage things for vms it would have those knobs. Otherwise i'd expect some other charm to provide the tools/controls for something else running.
<stickupkid> manadart: is there a way to have multiple leases at once, i.e. could leases overlap and how do we know which leases are being offered
<hml> stickupkid: worked for me.  :-)
<hml> stickupkid: looking at the safeLXDProfile now
<stickupkid> hml: https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRIOg8AK-PpstPTj5MeHL-loVkvw4V9RRhFrwMs9sVnz5AP7-k1
<hml> :-D
<parlos> rick_h: so, your guess would be that nova-compute (or some of the charms in the bundle) would install and configure libvirt in this case.
<parlos> guess I have to do some digging, to see if thats something I can configure. thanks for the answers
<rick_h_> parlos: correct, that would be my assumption if nova is creating VMs on top of libvirt. That's its job to manage then.
<stickupkid> hml: we should be able to do that from the charmstore now - in theory "juju deploy cs:~simonrichardson/lxd-profile-3"
<hml> stickupkid: yes
<hml> stickupkid: or from juju/juju : juju deploy --debug ./testcharms/charm-repo/quantal/lxd-profile
<stickupkid> hml: i wanted to make sure that it worked from the charmstore :D
<hml> stickupkid: did you push a charm up to the store?  I haven't
<hml> just been using local charms todate
<stickupkid> hml: yeah, we can always remove it later, but will help with manual testing
<hml> stickupkid: i believe we were going to add it as juju-qa or something
<hml> stickupkid: there's some up there already
<stickupkid> hml: fair
<stickupkid> hml: i'll see how that works in a minute
<rick_h_> externalreality: are you sure on the do-release-upgrade? Did that get pushed up? I don't see it in the diff and the second round of QA still shows the space.
<veebers> Morning all o/
#juju 2018-09-14
<anastasiamac> wallyworld: was there a PR u wanted me to look at?
<anastasiamac> 9194?
<wallyworld> anastasiamac: yeah, if you have time, otherwise it can wait
<anastasiamac> nws, will look now...
#juju 2019-09-09
<wallyworld> kelvinliu: no hurry, this PR fixes a couple of action test races https://github.com/juju/juju/pull/10612
<kelvinliu> wallyworld: yep looking
<kelvinliu> wallyworld: lgtm, just small question. thanks
<wallyworld> ty
<kelvinliu> np
<wallyworld> kelvinliu: that's the issue - using defer is insufficient because the Stop() does not occur soon enough - it needs to happen immediately when Exec() returns
<wallyworld> the defers are there to ensure Stop() is called in the case of early exit
<wallyworld> and that's why also Stop() was made idempotent
<kelvinliu> ok, ic
<wallyworld> hopefully that makes sense
<wallyworld> under --race it can lead to incorrect capture of stdout
<wallyworld> the original local only exec used a non-defer Stop()
<wallyworld> i changed to defer and that broke
<kelvinliu> is it ok to combine Stdout n StdoutLogger?
<wallyworld> in the adaptor struct?
<kelvinliu> yeah
<wallyworld> don't quite follow, quick HO?
<kelvinliu> sure
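
A minimal Go sketch of the pattern wallyworld describes, assuming an idempotent Stop(): call Stop() immediately after exec returns, keeping a deferred Stop() only as a safety net for early exits. Names here are illustrative stand-ins, not juju's actual API.

    package main

    import (
        "fmt"
        "sync"
    )

    // capture stands in for the stdout-capture helper under discussion.
    type capture struct{ once sync.Once }

    // Stop is idempotent: only the first call performs teardown, so the
    // deferred Stop after an explicit Stop is a harmless no-op.
    func (c *capture) Stop() {
        c.once.Do(func() { fmt.Println("capture stopped") })
    }

    func run(c *capture, exec func() error) error {
        defer c.Stop() // covers early-exit/error paths
        if err := exec(); err != nil {
            return err
        }
        // Stop immediately when exec returns; relying on the defer alone
        // leaves a window where stdout capture can race (what --race caught).
        c.Stop()
        return nil
    }

    func main() {
        c := &capture{}
        _ = run(c, func() error { return nil })
    }
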
<hpidcock> small one https://github.com/juju/juju/pull/10613
<timClicks> wallyworld: is there a spec for user friendly actions IDs in juju/names?
<hpidcock> https://github.com/juju/names/commit/a74eaa582535eb78813038ae2d8166522518ae90
<hpidcock> <application>-<number> or <UUID>
<hpidcock> anyone?
<anastasiamac> hpidcock: anyone for? review of 10613?
<anastasiamac> fwiw i disagree with the original implementation - that check against an arbitrary regular expression should really be a check against the names pkg's valid definition of an action 'id' (whichever version of that pkg was shipped with that Juju)
<anastasiamac> otherwise it's an ongoing headache to keep changing this reg exp to match our improved understanding of id... or as per ur pr making it broad enough to work for all scenarios...
<wallyworld> anastasiamac: the CI test in Python does replicate the same check as is done in the juju names pkg. the names pkg was recently updated but the python test wasn't
<anastasiamac> wallyworld: i understand but it has a copy of reg exp, rather than use the actual pkg that Juju was shipped with... anyway
<wallyworld> one is Go, the other is Pythin though
<wallyworld> gawd, why do I keep typing "puthin"
<wallyworld> fark, fail
<wallyworld> "pythin" even
<wallyworld> i like tests that break when we change implementation as it forces you to reevaluate the correctness of the test etc
<anastasiamac> with all my typos, i got it :) thank you for understanding me over the years \o/
<wallyworld> i need an autocorrect for it
<wallyworld> but i have had stand-up arguments with other people in the past who disagree with me on that aspect of test writing
<hpidcock> wallyworld: pythin or pythick?
<wallyworld> lol
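
For readers following along, a hedged Go approximation of the two action-ID shapes hpidcock lists (<application>-<number> or a UUID); the authoritative patterns live in the juju/names package and may well differ:

    package main

    import (
        "fmt"
        "regexp"
    )

    // actionID accepts either "<application>-<number>" or a hex UUID.
    var actionID = regexp.MustCompile(
        `^([a-z][a-z0-9-]*-\d+|[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})$`)

    func main() {
        fmt.Println(actionID.MatchString("mysql-3"))                              // true
        fmt.Println(actionID.MatchString("a74eaa58-2535-4b78-8130-38ae2d8166e5")) // true
    }
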
<SeubertE> Hi there everyone! I'm new to using Juju and trying to set it up on my home lab. I had a few questions that came up while reading the docs (https://jaas.ai/docs/getting-started-with-juju). First, do I need to use JAAS with juju? Or can I run juju all on my home network?
<wallyworld> SeubertE: hi there. you can run juju locally using LXD containers (no need for JAAS to get started). The current docs do not reflect that so well. These are being updated to give a much better getting started experience. So long as you have snap installed LXD, you can "juju bootstrap lxd" to get up and running
<SeubertE> Thanks wallyworld, that's what I've been doing so far, but got confused since the docs kind of push JAAS in getting started. I've also been having issues with the gui, would it be better to just post on discourse?
<wallyworld> that would work yeah, so then others can benefit from the answers
<SeubertE> Awesome, thanks, I'll post my issue there!
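
Putting wallyworld's advice together, a minimal local getting-started sequence looks like this (the model and charm names are illustrative):

    sudo snap install lxd
    sudo lxd init --auto
    juju bootstrap lxd
    juju add-model test
    juju deploy ubuntu
    juju status
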
<rick_h> hmm
<rick_h> works here, but not in another channel geeze
<jam> guild can I get a review of https://github.com/juju/juju/pull/10615 (its quite small)
<stickupkid> jam, should this be on 2.6 as well, or are we not bothered?
<timClicks> does it make sense to say something like this: "
<timClicks> Charms can define "storage" endpoints. Clouds provide "storage pools", tied to the capabilities of the underlying provider."?
<timClicks> in particular, do charms define storage endpoints? I'm trying to figure out the best way to express what's defined in metadata.yaml
<timClicks> answering self - the terminology we use currently is a storage label.
<anastasiamac> timClicks: yeah i would not call them "endpoints" :)
<wallyworld> timClicks: generically, clouds tend to have ways to provision storage, either as a block device, or a filesystem. some clouds use a "storage pools" concept but at the end of the day, juju models it as asking the cloud for either a block device or filesystem
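
To ground the terminology question: a charm declares its storage in metadata.yaml roughly like this (the label "data" and the mount location are illustrative), and juju then satisfies it from the cloud as a block device or filesystem, as wallyworld describes:

    storage:
      data:
        type: filesystem
        location: /srv/data
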
#juju 2019-09-10
<babbageclunk> wallyworld: Do we have a convenient way for taking multiple model operations and applying them as one transaction?
<wallyworld> babbageclunk: yes, ModelOperation
<babbageclunk> wallyworld: but if I have multiple ModelOperations is there a function to combine them into one?
<babbageclunk> ie, I'm calling LeaveScopeOperation on n relation units, but I want to run them all as one transaction.
<babbageclunk> Hmm, actually that might not be a good idea, since each one will change the unitcount and the last needs to remove the relation
<babbageclunk> ok, I'll just run them as separate txns
<wallyworld> yeah +1
<wallyworld> kelvinliu: i don't understand the newPreconditionDeleteOptions stuff - why do we need to wait for UID sometimes and not others?
<kelvinliu> wallyworld: that ensures it asserts the deleted resource has that id
<wallyworld> and name is not sufficient?
<wallyworld> we seem to use both?
<kelvinliu> ideally, we should follow this pattern for all deleting resources
<wallyworld> given name is unique, why is that not sufficient?
<kelvinliu> name is uniq, but the deletion is async
<wallyworld> we can't create another one with the same name until the current one is totally gone, not just in terminating state though
<kelvinliu> and it's not deleted immediately which usually takes a bit time
<kelvinliu> but we can't prevent non-juju ppl or app create a resource with same name, right?
<wallyworld> true, but while it's being terminated, we can't create another can we? you get an error from the api
<kelvinliu> there was a discussion about this, i will find it for u
<wallyworld> ok
<wallyworld> if we need to change our code we should do it for everything rather than leave stuff done 2 different ways
<kelvinliu> wallyworld: u can see i left a TODO for refactoring all updating/deleting places
<wallyworld> ok
<kelvinliu> wallyworld: i think this change is really good to have.
<wallyworld> kelvinliu: is there a reason for swapping the order of update and create in the ensure() methods? we have been calling update() first then create()
<kelvinliu> https://github.com/kubernetes/kubernetes/issues/20572
<kelvinliu> yes, there is a reason, and i think we  should refactor all the ensure like this later as well
<kelvinliu> wallyworld: HO to discuss?
<wallyworld> sure
<atdprhs> hello everyone, everytime I try to run juju add-unit kubernetes-worker i get `failed to start machine 28 (failed to acquire node: No available machine matches constraints: [('agent_name', ['93d500ee-7e14-4ece-81ae-69137d451f3a']), ('cpu_count', ['6']), ('mem', ['32768']), ('storage', ['root:1024']), ('zone', ['default'])] (resolved to
<atdprhs> "cpu_count=6.0 mem=32768.0 storage=root:1024 zone=default")), retrying in 10s (9 more attempts)`
<atdprhs> can anyone help please?
<wallyworld> atdprhs: looks like the cloud on which k8s is deployed doesn't have a machine with the required memory to host the new worker
<wallyworld> looks like this is cdk on maas?
<anastasiamac> wallyworld: autoload-creds ask-or-tell PTAL https://github.com/juju/juju/pull/10617
<wallyworld> ok
<anastasiamac> \o/
<wallyworld> anastasiamac: lgtm, ty
<atdprhs> Hello wallyworld, sorry I missed your message
<atdprhs> yes, it is on KVM Pod on MaaS
<wallyworld> i'm not 100% across kvm pod allocation on maas. there would be maas specific set up which controls how kvm pods are allocated. you can use juju constraints to limit what is asked for which may help, but it could still be additional maas setup is needed. eg the error messages says cpu count = 6. perhaps asking for 2 or something would work
<atdprhs> ahhh, got it
<atdprhs> will test it out
<wallyworld> the maas ux will show what machines have been allocated already and what's available
<wallyworld> but i can't give you specific advice as i'm not a heavy maas user
<wallyworld> one thing to note is adding a unit will use the constraints the original charm was deployed with
<wallyworld> if you want to use different ones, you need to use juju add-machine --constraints="blah" first, and then "juju add-unit --to <machinenum>"
<wallyworld> atdprhs: you could also try asking here https://discourse.maas.io/
<wallyworld> the maas folks there can help much better than me about the kvm setup stuff
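
The two-step workflow wallyworld outlines would look like this; the constraint values and machine number are illustrative:

    juju add-machine --constraints="cores=2 mem=8G"
    juju add-unit kubernetes-worker --to 29
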
<babbageclunk> wallyworld: review plz? https://github.com/juju/juju/pull/10618
<babbageclunk> wallyworld: ended up being a bit fiddlier than I expected.
<wallyworld> babbageclunk: no worries, looking
<wallyworld> babbageclunk: lgtm, a couple of comments and a question
<kelvinliu> wallyworld: ru still there?
<manadart> jam: I am going to do QA testing on PR 10566 today, but I want to squash the commits at some point. If you want to take a look before I do (preserving the context of your prior comments) please go ahead.
<jam> manadart: np, I think the big thing is to know which parts are the interesting ones and just focus there
<manadart> jam: core/network and state.
<manadart> jam: And probably apiserver/facades, where ProviderAddress -> SpaceAddress conversion is now occurring.
<manadart> I am doing my own review now, and will annotate where appropriate.
<atdprhs> hello everyone, I seem to have a model or controller that keeps creating VMs no matter how many times I keep deleting them, it recreates them
<atdprhs> does anyone know if there is any way for me to find out who's doing that?
<atdprhs> I keep seeing > 120 VMs that gets created
<atdprhs> is there any way that I can clean up juju from all controllers, models, everything
<timClicks> atdprhs: what is the output of juju status?
<timClicks> do you know what command triggered the first VM to be created?
<atdprhs> it's stuck
<atdprhs> I deleted the wrong vm
<atdprhs> it doesn't respond anymore, i am really flooded with tons of them
<timClicks> that sounds really horrible
<atdprhs> yes, extremely horrible
<timClicks> i recommend trying to remove the models first, hopefully the logs on the controller can be saved for a postmortem
<atdprhs> luckily this is a test environment
<timClicks> juju destroy-model <model-name>
<atdprhs> ok, I'll destroy all of the moels
<timClicks> actually do this
<timClicks> juju destroy-model <model-name> --force --no-wait
<timClicks> if you have many units, that will be faster
<timClicks> you can also add a -y flag to avoid the confirmation prompts
<atdprhs> one problem
<atdprhs> I can't see models
<atdprhs> and so I don't know what models I have
<timClicks> okay
<atdprhs> it says no controller registered
<timClicks> oh, that's not okay
<timClicks> what is the output of juju controllers
<atdprhs> ERROR No controllers registered.
<atdprhs> timClicks: is there a way that I can just simply reset or clean up juju?
<stickupkid> atdprhs, what do you mean by reset? what does `juju controllers` say?
<timClicks> not without the controller, I don't think
<atdprhs> so I'll have to live with those VMs that keep getting created?
<atdprhs> :O
<timClicks> it's the controller that would be creating them
<atdprhs> is there something like juju discover controllers?
<timClicks> juju register
<stickupkid> atdprhs, can you tell me what's in "less ~/.local/share/juju/controllers.yaml"
<atdprhs> it's empty
<timClicks> atdprhs: I'll hand over to stickupkid (it's after 10pm where I am)
<stickupkid> atdprhs, do you have access to the vm software, i.e. where juju registered the controller?
<atdprhs> thanks timClicks
<atdprhs> yes, it's KVM
<stickupkid> atdprhs, so you should be able to list all your vms, virsh list --all or similar
<atdprhs> There are  lot of VMs
<atdprhs> not sure which one
<atdprhs> cuz sadly when juju creates a VM, it doesn't give MaaS a proper name for the VM
<atdprhs> so I end up with ideal-bream, clean-cougar, fresh-beetle, free-chimp, comic-orca, ace-mink
<stickupkid> atdprhs, so without knowing which is the controller, you'd have to resort to arp -n or ifconfig to get the ip address and then kill it
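
As a hedged sketch of that hunt on a KVM host (these are standard libvirt commands; the VM name is taken from the list above and will vary):

    virsh list --all               # enumerate the juju-created VMs
    virsh domifaddr ideal-bream    # find a VM's IP to identify the controller
    virsh destroy ideal-bream      # force power-off
    virsh undefine ideal-bream     # remove the definition entirely
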
<atdprhs> I'd just delete all of the Vms
<atdprhs> they are all created by juju
<stickupkid> atdprhs, then you can start again, by bootstrapping, sorry I'm not much more of a help
<atdprhs> the thing is
<atdprhs> that if i rebooted the server
<stickupkid> atdprhs, otherwise juju register would be the better route to go down
<atdprhs> juju will continue to recreate the VMs again
<stickupkid> but if you've removed the controller, I've no idea how it's doing that
<rick_h> stickupkid:  atdprhs did you juju unregister? that just removes the local cache of the controller information. It doesn't remove the controller
<rick_h> stickupkid:  atdprhs if you unregistered and don't have it anywhere else you're going to be a bit stuck unfortunately. If you know what machine the controller is running on you can try to login to it, but you need to know the password of the admin user
<stickupkid> rick_h, according to the scrollback atdprhs did juju destroy-model
<rick_h> stickupkid:  looking at the traceback atdprhs is getting "it says no controller registered" so can't actually run any commands/etc
<jam> rick_h: mine, brb
<atdprhs> hi stickupkid / rick_h : I deleted the VMs
<atdprhs> i'll bootup the server and see if anything gets created
<atdprhs> If you guys say that with no controller nothing gets created, could it possibly be MaaS? But what in MaaS could be doing that?
<atdprhs> but from the specs of the VMs that get created, I know it's from juju for one reason: the specs match exactly the storage machines that I was trying to create
<atdprhs> I booted up the server and I couldn't find any new machines
<atdprhs> very strange
<stickupkid> achilleasa, let me switch and bootstrap the last test case, land and if it no worky i'll let you know
<achilleasa> stickupkid: sure thing
<stickupkid> achilleasa, it worked
<achilleasa> stickupkid: rick_h can I get a sanity check on https://github.com/juju/juju/pull/10620?
<stickupkid> achilleasa, of course
<stickupkid> achilleasa, scanned all the files, seems like everything is spot on
<achilleasa> stickupkid: the commits merged more or less cleanly (I had to tweak some imports because we have renamed some network pkgs on develop)
<stickupkid> achilleasa, nice nice
<stickupkid> hml, got a sec?
<hml> stickupkid:  sure
<stickupkid> hml, meet in daily?
<hml> stickupkid: omw
<pepperhead> o/
<pepperhead> If I have a node reporting "pending" when running "juju status", how can it be forcibly cleared?
<pepperhead> I think it happened because I removed an application before it finished deploying.
<pepperhead> then aborted from MAAS
<pepperhead> Anyone know how to tell if my bootstrap of juju installed the gui juju-gui?
<timClicks> pepperhead: execute `juju gui`
<pepperhead> SWEET! You ROCK!
<pepperhead> Harder question: I have a juju State of pending on a maas node that won't stop. Is there a way to force kill it?
<pepperhead> It was a deploy job that I killed ("removed") before it ended; now it's stuck.
<timClicks> juju remove-unit --force <app>/<n>
<babbageclunk> pepperhead: does `juju remove-machine --force <id>` work?
<pepperhead> started deploying mysql, and removed it in the middle, bad idea.
<pepperhead> does remove-machine destroy the machine in maas?
<timClicks> perhaps `juju remove-machine <n>`
<babbageclunk> it should release it back to the maas available pool
<timClicks> it'll put that machine back into the maas pool iirc
<babbageclunk> ha timClicks beat you that time
<timClicks> ha
<pepperhead> Looks like that did it, I was afraid to try something that said "remove machine"... :)
<timClicks> pepperhead: luckily juju doesn't have the ability to order a truck to send hardware to the landfill
<pepperhead> But what if it did ;)
 * babbageclunk starts coding it up
<pepperhead> LOL
<timClicks> please use the coffee pot protocol
<timClicks> pepperhead: in terms of the wording though, everything juju-related is wrapped within a model
<timClicks> so when you see remove anywhere, it means remove from the model
<pepperhead> Ahhhhh, THAT makes sense.
<timClicks> also remove-* commands are recoverable, they have a symmetric command add-* that reverses the removal
<timClicks> destroy-* commands are, in a sense, unrecoverable
<timClicks> they require you to start from scratch if you want to get back to pre-destruction
<pepperhead> My last Q of the day, promise: I want to try deploying OpenStack on a set of hardware nodes, but they only have one drive and one nic in each. Is this possible? the blog I was going to follow mentioned it REQUIRED two drives in each and two nics.
<timClicks> pepperhead: which guide?
<pepperhead> Or is that more a maas thing, need to work around with curtin?
<pepperhead> https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/rocky/install-openstack.html
<timClicks> i'm not familiar enough with openstack to speak authoritatively on production-grade deployments, but it's likely those "requirements" are actually "very strong recommendations"
<timClicks> if you have the compute resources available, my recommendation would be to try it and see what happens
<pepperhead> I am just doing a POC. Newly hired and the company REALLY needs a private cloud to test terraform k8s control. OpenStack looked great, and I sold them on the maas/juju solution.
<pepperhead> timClicks yes, I have been struggling getting juju up. They handed me a stack of Intel NUC's, and juju would NOT bootstrap. FINALLY found it was a bios issue.
<pepperhead> The bios is super poorly designed. Had to turn off the optical drive boot selection, the node being bootstrapped froze on reboot looking for it it seems. Also had to turn off "boot nic last", it overrode the boot order. And the boot order was specified in two locations in a gui that required a mouse. Nightmare.
<pepperhead> Thanks Intel
<timClicks> sounds like you've had a fun few hours then
<pepperhead> This fun is driving me to drinkin
<pepperhead> admittedly a short drive
<pepperhead> So the juju gui allows building a model, but wouldnt help with drive requirements/locations, right?
<pepperhead> WOOT, the gui is working. VERY NICE!
<pepperhead> VERY polished interface, kudos if any of y'all worked on it
<pepperhead> Got that node re-commissioned, thanks again for the help!
<timClicks> pepperhead: really great to hear that you're moving forward; do make sure that you're signed up here: https://discourse.jujucharms.com/
#juju 2019-09-11
<wallyworld> babbageclunk: i may have found another --force case. it seems an app with a hook error in the install hook cannot be removed. not sure if you've come across that before
<babbageclunk> blech
<babbageclunk> no, I haven't seen that. At least it's easy to reproduce!
<wallyworld> yeah, i had the ubuntu charm locally with an exit 1 in the install hook
<wallyworld> and remove --force didn't work
<wallyworld> will add it to the queue of things to look at
<babbageclunk> ok
<wallyworld> kelvinliu: so yeah, can confirm that it works for iaas, i'll try and investigate
<kelvinliu> so something must be missing somewhere for caas
<kelvinliu> wallyworld: got this PR to fix encoding issue on secret data, +1 plz https://github.com/juju/juju/pull/10621  thanks!
<wallyworld> ok
<wallyworld> kelvinliu: why is it decoding the base64 data? the k8s secret spec Data attribute expects encoded data doesn't it?
<kelvinliu> k8s always tries to encode
<kelvinliu> again
<wallyworld> hmmmm, that seems to be in conflict with the comment on the Data attribute?
<wallyworld> 	// Data contains the secret data. Each key must consist of alphanumeric
<wallyworld> 	// characters, '-', '_' or '.'. The serialized form of the secret data is a
<wallyworld> 	// base64 encoded string, representing the arbitrary (possibly non-string) data value here.
<wallyworld> the yaml examples appear to pass in encoded data
<kelvinliu> i tested
<kelvinliu> if we pass encoded  to data directly, k8s will encode it again
<wallyworld> i guess i don't understand why k8s struct needs to be created with unencoded values for both Data and StringData when the yaml doesn't
<wallyworld> ie
<wallyworld> stringData:
<wallyworld>   username: administrator
<wallyworld> vs
<wallyworld> data:
<wallyworld>   username: YWRtaW5pc3RyYXRvcg==
<wallyworld> Where YWRtaW5pc3RyYXRvcg== decodes to administrator
<kelvinliu> when u kubectl get -o yaml, the stringData is merged into Data and encoded as well
<kelvinliu> after applied, no stringData and no raw strings anymore, and all encoded.
<kelvinliu> StringData is just a helper func
<wallyworld> so you say "encode"
<wallyworld> but the PR decodes
<kelvinliu> we decode, then k8s encode
<wallyworld> ok
<kelvinliu> if we don't decode, kubectl get result will be double encoded
<wallyworld> seems weird to me but if that's how k8s works then who am i to argue. lgtm, ty
<kelvinliu> thx
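
A small Go sketch of the point kelvinliu is making, using the k8s core/v1 types: Secret.Data takes raw bytes and the API server base64-encodes them during serialization, so feeding it pre-encoded values is what produces the double encoding.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        secret := &corev1.Secret{
            Data: map[string][]byte{
                // raw value, NOT "YWRtaW5pc3RyYXRvcg=="
                "username": []byte("administrator"),
            },
        }
        fmt.Println(string(secret.Data["username"]))
    }
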
<wallyworld> kelvinliu: i am still testing the resource upgrade thing. seems to work on 2.7, so i'll test again on 2.6 to see what's happening
<kelvinliu> so it works on caas 2.7?
<wallyworld> i think so
<wallyworld> but i need to check more since it seems broken earlier
<wallyworld> kelvinliu: ah, it works only for file type resources
<kelvinliu> ah
<wallyworld> kelvinliu: found the issue - there's a method which uses the resource fingerprint to see if the resource has changed, but for oci image resources, the fingerprint is always "" as we don't have the oci image to calculate the hash from
<kelvinliu> should we simply use the image path?
<kelvinliu> full path
<wallyworld> something like that, looking into it
<wallyworld> babbageclunk: did you have vsphere creds handy, any chance you could pull joe's branch and smoke test bootstrap, deploy, ssh on vsphere?
<babbageclunk> yeah, I do - sure
<wallyworld> i've done k8s and azure and it looks good
<wallyworld> ty
<babbageclunk> looking now
<wallyworld> leave a comment on his PR
<wallyworld> i tested bootstrap, deploy, ssh, add-unit
<kelvinliu> hi wallyworld: got 2mins to discuss metadata change?
<wallyworld> kelvinliu: sure
<kelvinliu> HO?
<kelvinliu> wallyworld: wait, one more thing..
<wallyworld> HO?
<kelvinliu> yes, plz
<hpidcock> wallyworld: tomorrow is fine but it's ready https://github.com/juju/juju/pull/10606
<wallyworld> ok
<wallyworld> heading out to dinner soon so will look later or tmw
<hpidcock> wallyworld: tomorrow sounds great enjoy your night
<wallyworld> will do, son's gf's b'day
<stickupkid> manadart, in regards to thumper's email, isn't this the availability zone issue that I've got a PR for
<stickupkid> manadart, one that i keep opening and closing and never merging
<manadart> stickupkid: Not sure. I was going to diff 2.6 vs develop to see why edge is OK, but 2.6 not...
<stickupkid> manadart, I did "git diff develop 2.6 provider/lxd"
<stickupkid> manadart, and "container/lxd"
<stickupkid> manadart, didn't show much tbh, other than the new packing, network and ineffassign stuff
<achilleasa> manadart: the packing stuff landed yesterday so I would recommend diffing before that
<achilleasa> s/packing/packaging/
<achilleasa> btw, is that error from juju or from lxd?
<stickupkid> achilleasa, lxd
<stickupkid> achilleasa, it's because it's trying to unmarshall an error if I remember correctly
<achilleasa> Could it be resolved if we delete the cached images?
<achilleasa> I think I 've stumbled on that error before
<stickupkid> dunno
<stickupkid> probably
<manadart> Dependency is the same too...
<stickupkid> yeah first thing i checked
<stickupkid> anyone else hit the "GOCACHE isn't set" error when using 1.12? probably old news
<stickupkid> anyone going to forward-port 2.6 into develop, or shall I do it?
<pepperhead> Good Morning! o/
<pepperhead> Well, nearing afternoon already.
<pepperhead> Quick Question hopefully: Are there instructions or capability of a charm bundle to run a deploy of OpenStack on four machines as nodes that ONLY have one drive and one NIC port avail? I now have maas w/juju bootstrapped on two other machines. Six machines total.
<pepperhead> Sorry, asked in conjure-up as well. But they are more of a one machine solution. Moved the Q over here.
<atdprhs> hello everyone, juju deploy works perfect with everything except when I try to deploy ceph-osd, I usually get "failed to start machine 15 (failed to acquire node: No available machine matches constraints: [('agent_name', ['623fe854-6d68-463d-8337-73ce136cc4bb']), ('storage', ['root:0,0:32,1:32,2:8']), ('zone', ['default'])] (resolved to
<atdprhs> "storage=root:0,0:32,1:32,2:8 zone=default")), retrying in 10s (10 more attempts)"
<atdprhs> This is following https://ubuntu.com/kubernetes/docs/storage / `juju deploy -n 3 ceph-osd --storage osd-devices=32G,2 --storage osd-journals=8G,1`
<pepperhead> atdprhs I think Ceph needs a second drive, or at least specified in the config? I am trying to work around ceph drive req as well.
<atdprhs> thanks pepperhead, do you know which config I should be checking in this regards?
<atdprhs> after 10 attempts, now it's `No available machine matches constraints: [('agent_name', ['623fe854-6d68-463d-8337-73ce136cc4bb']), ('storage', ['root:0,0:32,1:32,2:8']), ('zone', ['default'])] (resolved to "storage=root:0,0:32,1:32,2:8 zone=default")`
<atdprhs> question: what I now know is that `juju deploy -n 3 ceph-osd --storage osd-devices=32G,2 --storage osd-journals=8G,1` can be run as `juju deploy -n 3 ceph-osd --storage osd-devices=<storagePool>,32G,2 --storage osd-journals=<storagePool>,8G,1`
<atdprhs> please correct me if I'm mistaken, storage pool can be obtained via `virsh pool-list`
<atdprhs> the reason I'm looking into storage pool, cuz i'm thinking that maybe it's failing because of storage pool?
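
One point worth noting here: the storage pool in a juju storage directive is a juju concept, not a virsh one, and pools are listed with `juju storage-pools`. A hedged illustration of the pool-qualified form (the pool name "maas" below is illustrative):

    juju storage-pools    # list the pools juju knows about for this model
    juju deploy -n 3 ceph-osd \
      --storage osd-devices=maas,32G,2 \
      --storage osd-journals=maas,8G,1
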
<atdprhs> pepperhead r u here?
<atdprhs> I don't think storage pool could be the issue because it has free space just fine
<atdprhs> pepperhead you're right, it seems that it could be an issue with the second drive
<atdprhs> I just tried to deploy nfs and it works just fine on my side, maybe i'll just stick to nfs
<pepperhead> Sorry stepped away to grab lunch. Just learning juju myself. Running into what seem to be similar roadblocks, ceph mainly.
<atdprhs> that's alright
<pepperhead> Unsure but I think you can specify a directory instead of a drive in that config, but not sure how to get juju/maas to create the directory.
<pepperhead> Maybe its a "curtin thing?"
<atdprhs> I have no idea tbh
<atdprhs> i've been stuck with this for more than 3 weeks tbh
<atdprhs> maybe for now, i can live with nfs, but i would have 1 question about it
<atdprhs> I chose 1T for the disk size `juju deploy nfs --constraints root-disk=<disk-Size>` would it be better than having multiple ones with 200G?
<atdprhs> considering that each 200G would be a machine that would consume 1 CPU
<pepperhead> nfs? What does `juju deploy nfs --constraints root-disk=<disk-Size>` do?
<pepperhead> ahh https://jaas.ai/nfs/9
<pepperhead> so can juju nfs be added to the bundle to provide for a "second drive"?
<atdprhs> I don't know what you mean by the 2nd drive, but I am following https://ubuntu.com/kubernetes/docs/storage
<atdprhs> `Deploy NFS` worked for me
<atdprhs> @pep
<pepperhead> ceph seemed to require a second drive
<pepperhead> or second location
<atdprhs> yes, it does, but actually I just realized the VMs get created but for some strange reason MaaS and juju did not see them
<atdprhs> so I explain, I just realized `failed to start machine 15 (failed to acquire node: No available machine matches constraints: [('agent_name', ['623fe854-6d68-463d-8337-73ce136cc4bb']), ('storage', ['root:0,0:32,1:32,2:8']), ('zone', ['default'])] (resolved to "storage=root:0,0:32,1:32,2:8 zone=default")), retrying in 10s (10 more attempts)`
<atdprhs> no, sorry, maybe from the start, I deploy 3 ceph-osd so 3 VMs got created + 10 more attempts above that failed so 3+(3*10)=33
<atdprhs> I have just deleted 33 VMs manually
<pepperhead> ouch
<pepperhead> All I know about ceph is that it is part of the openstack bundle
<atdprhs> There is something I thought about trying which is refresh my MaaS pod when it says it failed
<atdprhs> some of those VMs actually got deployed too btw
<atdprhs> oh, i am working on CDK
<atdprhs> Kubernetes
<pepperhead> Yeah I want to get openstack running, and then deploy kubernetes into openstack with terraform
<atdprhs> Googling around, ceph seems to be pretty new tech, nfs is more stable though and from what I see here, I think I am very happy with NFS since it works straight away
<pepperhead> Hoping that the same tf could deploy kubernetes into say...rackspace
<atdprhs> i don't know about Openstack
<pepperhead> Therefore managing kubernetes by state change
<atdprhs> but may I ask, what is the core difference between both?
<pepperhead> OpenStack is a private cloud, IaaS.
<atdprhs> I thought k8s is too?
<atdprhs> I mean private cloud
<pepperhead> So with Openstack you can manage storage, networking, loadbalancing and all via terraform
<atdprhs> ahh seeing this >> https://www.terraform.io/
<atdprhs> `Create reproducible infrastructure` << this is very nice
<pepperhead> Openstack provides the "hardware". So you spin your nodes, in my case LXD containers, in openstack.
<pepperhead> It provides the networking infrastructure as well as containers and such for k8s
<pepperhead> So OpenStack is an IaaS, and Kubernetes is a foundation on which to build a PaaS.
<atdprhs> Thanks pepperhead , now it makes sense
<atdprhs> is it complex or easier with OpenStack?
<pepperhead> And I was hoping JuJu was the magic to make OpenStack easier to manage. But my available hardware is handicapped by having but one hard drive and one NIC.
<atdprhs> Do you have enough disk space on that HDD?
<pepperhead> Well, OpenStack adds complexity, but gets you closer to working with deploying k8s to a cloud.
<pepperhead> The company I recently joined doesnt have Metal, but they want a private cloud. Trying to make do with Intel NUCs
<atdprhs> oh BTW, considering that you're trying to deploy osd like i did, there is a chance you might end up with 33 VMs like I did :D
<atdprhs> did you check your LXD's containers
<pepperhead> I grabbed a Dell Poweredge r720 used from a guy on cragslist, and got openstack up in a couple hours
<pepperhead> LOL. I am deploying to metal, Intel NUCs
<pepperhead> A stack of them
<pepperhead> MaaS is managing them, and Juju is theoretically directing MaaS.
<atdprhs> kind of yes
<pepperhead> The Openstack Basic charm bundle looks perfect
<atdprhs> In my case, I am using KVM instead of LXD (I could've used LXD)
<pepperhead> I think LXD is lighter on resources overall, as they use a shared kernel
<atdprhs> So MaaS is metal as a service
<pepperhead> yes
<pepperhead> It kinda turns a machine into a VM
<atdprhs> yes, if something goes wrong with LXD, it can be a risk factor with a shared kernel afaik
<pepperhead> And juju is developed alongside it. I think it can also manage VMs
<atdprhs> you can either have juju to use LXD direct or use MaaS (private cloud)
<pepperhead> An LXD container doesn't hold the kernel, so it can only harm itself I think
<atdprhs> I used https://conjure-up.io/
<pepperhead> I got DevStack running on my single server, works great, about to try conjure-up
<atdprhs> (y)
<pepperhead> How did the conjure-up work out?
<pepperhead> On a single machine?
<atdprhs> conjure-up is kind of like a wizard takes through steps to deploy your environment
<atdprhs> if you already deployed devstack then you don't need it
<atdprhs> if you haven't and need to deploy openstack
<pepperhead> But starts you getting familiar with juju/maas I assume
<atdprhs> not really, i actually jumped straight into conjure-up then started to learn later juju
<pepperhead> Just doing conjure up to see how or if it differs in performance from devstack
<atdprhs> conjure-up basically does the installation part for you, further customizations and configurations can be later carried out by you manually via juju
<pepperhead> I wish juju-gui allowed for adding a hardware item, like a directory, before running the bundle. Like adding nfs to the bundle and telling it to create a drive of X size from the first drive, then deploying ceph to it.
<atdprhs> maas on the other hand can be the very minimal version of openstack for me, it just allows me to manage my metal servers as if they are cloud
<atdprhs> so maas is like a private cloud that serves me the hardware and i use conjure-up to kickoff k8s on maas
<pepperhead> See I want to build and manage k8s with terraform
<pepperhead> IaC
<atdprhs> I'm not expert tbh, but maybe this could help >> https://tutorials.ubuntu.com/tutorial/install-openstack-with-conjure-up
<atdprhs> in term so storage, I can't honestly help you with it, I already gave up on ceph and currently using NFS
<atdprhs> I spent over 3 weeks on ceph alone
<pepperhead> ouch
<atdprhs> maybe this can help >> https://ubuntu.com/openstack/storage
<pepperhead> OHHhhhh, so Ceph is just storage. So INSTEAD of Ceph you deployed nfs and use it for storage. The only thing I know of ceph is that it is in the way of allowing my openstack to build
<pepperhead> I thought you used nfs to create a point for Ceph to store its "stuff"
<pepperhead> Typically Ceph wants an entire drive and it takes it for its "stuff"
<atdprhs> no, i used NFS as storage for my k8s
<pepperhead> I have heard it being like using ZFS
<pepperhead> Creating an array
<atdprhs> in k8s, you have the choice to choose between CEPH and NFS
<atdprhs> I guess in openstack, you have to choose between Swift and Ceph  as per https://ubuntu.com/openstack/storage
<atdprhs> zfs is a storage managed by Ubuntu I guess, that's host OS file system management
<atdprhs> yes to > "OHHhhhh, so Ceph is just storage. So INSTEAD of Ceph you deployed nfs and use it for storage."
<pepperhead> Now I get it. I think openstack uses Ceph as an array of storage, possibly redundancy in case one fails?
<pepperhead> Kinda like a zfs1 array.
<atdprhs> Chopper3 explained the difference well here >> https://serverfault.com/a/911582
<pepperhead> Nice post, gratzi for the heads up!
<atdprhs> no worries, 3 weeks didn't go by that easy
<pepperhead> I set up a freenas server with 18TB of storage, and CAN share that out as NFS. Something to think about.
<atdprhs> does openstack support nfs?
<pepperhead> I use it for storing movies :)
<atdprhs> lol
<atdprhs> I use plex
<pepperhead> I think it can, but thats my home machine, I would have to talk the company into even MORE hardware for that here
<pepperhead> Yes, I run plex to serve the shows stored on freenas
<pepperhead> I use zfs2 array on freenas, I would need to lose three physical drives to actually lose data
<atdprhs> ahh i thought freenas is like plex
<pepperhead> Its just a storage server
<pepperhead> Take big pile of drives and turn them into a network served array of storage
<atdprhs> i usually upload my library to the plex server itself
<atdprhs> maybe i'll try freenas
<atdprhs> i am going to bed, it's almost 5 AM
<pepperhead> WHAT!?
<pepperhead> Where are you?
<atdprhs> Australia
<pepperhead> Oi down under
<pepperhead> Good luck and get some sleep mate!
<atdprhs> lol, yup it is
<atdprhs> thx, and goodnight
<pepperhead> I am in Atlanta Georgia
<pepperhead> worlds apart. G'night
<atdprhs> agreed
<ec0> hey all, building a charm and charm proof is returning an ascii decode error, can't find any files which could be causing this issue in the charm: https://paste.ubuntu.com/p/qqjzBn4YFh/
<ec0> is anyone else seeing this if they try to charm proof with the latest snap installed charm?
<ec0> I'll go file a bug, but just curious if it's just me
<ec0> have tried charm all the way up to edge
<rick_h> ec0:  hmm, what version of python?
<ec0> on my host, 3.7.3
<ec0> the snap appears to be using 3.6....something
<rick_h> hmm, I ran charm proof today w/o issue but not sure on the version/update today or the like
<ec0> interesting, I just tried on a different charm I haven't touched in a while and it proofed
<ec0> I've checked the git history for this charm and can't see any occurrences of 0xe6 in the changed files
<ec0> (grep -P '\xe6' * -R)
<ec0> for example
<ec0> interesting, line 380 is reading the README
<rick_h> maybe wipe the readme and see if it proofs
<ec0> trying now
<ec0> hmm, no sadly
<ec0> OK, I'll dig at this a little bit more when I've got time to swing back to it, and file an issue if needed, thanks for checking rick_h
<ec0> oh lol, rick_h - I built my own copy of charm tools, and added some extra debugging, charm proof is trying to read ".README.md.swp", which is obviously the vim swap file.
<ec0> I'll file an issue
<ec0> seems related to https://github.com/juju/charm-tools/issues/421
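
For anyone chasing a similar failure, a hedged pair of checks covering both what ec0 grepped for and the hidden swap file that turned out to be the culprit:

    grep -rP '[^\x00-\x7F]' .   # any non-ASCII bytes, not just 0xe6
    find . -name '*.swp'        # stray vim swap files such as .README.md.swp
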
#juju 2019-09-12
<babbageclunk> wallyworld: plz review the fix for a bug I realised I'd left in the relation cleanup (occurred to me while working on application cleanup) https://github.com/juju/juju/pull/10623
<wallyworld> will do, just finishing another
<babbageclunk> thanks!
<wallyworld> hpidcock: jump in standup a bit early?
<hpidcock> Sorry was getting coffee
<hpidcock> 1 sec
<wallyworld> hpidcock: lgtm
<hpidcock> wallyworld: I'll make those changes and get it landed
<timClicks> if anyone has a few minutes (hopefully less), I would appreciate it if you could deploy and road test the new hello-juju charm
<timClicks> juju deploy hello-juju
<wallyworld> timClicks: you gonna do a k8s version :-)
<timClicks> wallyworld: when microk8s is working again
<wallyworld> works now
<timClicks> oh cool
<wallyworld> just need to add yourself to the microk8s group
<wallyworld> timClicks: https://discuss.kubernetes.io/t/explicit-use-of-sudo-in-microk8s-cli/7605
<wallyworld> juju won't work without that step
<wallyworld> so we need to document that - and/or have the CLI prompt to do it when juju and microk8s are used together
<wallyworld> kelvinliu: here's a fix for the attach resource issue in 2.6 https://github.com/juju/juju/pull/10624
<timClicks> wallyworld: sorry meant to mention that i've updated the microk8s-cloud docs
<timClicks> wallyworld: https://jaas.ai/docs/microk8s-cloud
<wallyworld> timClicks: you'll want to mention the newgrp command otherwise those steps won't work without a logout and login
<timClicks> wallyworld: will do (just added the instruction from the link above)
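
The group setup referenced above amounts to the following; `newgrp` makes the membership effective in the current shell without logging out:

    sudo usermod -a -G microk8s $USER
    newgrp microk8s
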
<kelvinliu> wallyworld: just back from lunch, looking now
<wallyworld> no worries
<wallyworld> ty
<kelvinliu> wallyworld: so the url querystring includes fingerprint?
<wallyworld> it in the header
<kelvinliu> i saw it's in  query url.Values
<wallyworld> yeah, that sounds right. i didn't change any of that
<kelvinliu> ok, lgtm, thanks
<wallyworld> ty
<stickupkid> jam, any thoughts on this, before I continue on? https://github.com/juju/description/pull/62#issuecomment-530721114
<stickupkid> jam, it's still a WIP, but good feedback early...
<achilleasa> manadart: can you take a look at https://github.com/juju/juju/pull/10625?
<manadart> achilleasa: Yep.
<thedac> Anyone know if spaces are supported on the AWS provider?
<hml> thedac:  you can manually create them in juju…
<hml> juju will act according to the user-created spaces, but aws won't have explicit knowledge of them.
<thedac> interesting. OK
<hml> thedac:  only maas and aws support spaces.
<thedac> ack
<rick_h> thedac:  what are you thinking of poking at?
<thedac> rick_h: we were looking at https://bugs.launchpad.net/vault-charm/+bug/1843809 and I wanted to rule out AWS space support as the problem. They are related but it is not juju's fault ;)
<mup> Bug #1843809: vault-kv relation is broken when not using network spaces <vault-charm:New> <https://launchpad.net/bugs/1843809>
<wallyworld> babbageclunk: can you +1 this merge from 2.6, with the relation cleanup fixes plus a few others? https://github.com/juju/juju/pull/10627
<babbageclunk> wallyworld: yup yup
<babbageclunk> wallyworld: weird - there's been a typo introduced in environs/config/config.go - ContainerInheritPropertiesKey -> ContainerInheritProperiesKey
<babbageclunk> not sure which pr it was from
<wallyworld> oh jeez, ok. there was a conflict in the merge
<wallyworld> i chose what looked like the right one
<wallyworld> good pickup
<babbageclunk> I mean, it might be right depending on what else is in the code
<babbageclunk> (but we should probably still fix it)
<wallyworld> babbageclunk: i think the typo was fixed in develop
<wallyworld> as a driveby
<babbageclunk> oh right
<babbageclunk> yeah that would do it
<wallyworld> i'll just fix the typo in my PR
<babbageclunk> as long as it still compiles I guess!
<wallyworld> yeah, it doe snow
<babbageclunk> it certainly do. But not in Brisbane.
<wallyworld> was in the middle of doing that while you were reviewing
<wallyworld> lol
<babbageclunk> approved
<babbageclunk> wallyworld: ^
<wallyworld> babbageclunk: huh, the forward port has lint errors that are not picked up in 2.6
<babbageclunk> maybe the develop linter has more stringent settings?
<wallyworld> yup
<babbageclunk> wallyworld: https://github.com/juju/juju/pull/10628 The fix for remove-application --force after a stuck remove-application
<wallyworld> ok, looking after i fix the lint issues
<babbageclunk> Sorry, got distracted by the raft stuff at the end of the day yesterday
<babbageclunk> thanks
<wallyworld> babbageclunk: one is on this line, scope, role, unitName, err := unpackScopeKey(doc.Key)
<wallyworld> i think we can just return err there right
<babbageclunk> ? looking
<babbageclunk> oh right - yup, just forgot to check the error
<wallyworld> no worries, fixing
<babbageclunk> thanks ineffassign!
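
A runnable Go sketch of the ineffassign fix just discussed: check the error from unpackScopeKey instead of dropping it. The helper here is a stand-in with an invented signature; the real one lives in juju's state code and differs.

    package main

    import (
        "fmt"
        "strings"
    )

    // unpackScopeKey is a stand-in for the helper named in the chat.
    func unpackScopeKey(key string) (scope, role, unitName string, err error) {
        parts := strings.SplitN(key, "#", 3)
        if len(parts) != 3 {
            return "", "", "", fmt.Errorf("malformed scope key %q", key)
        }
        return parts[0], parts[1], parts[2], nil
    }

    func main() {
        scope, role, unit, err := unpackScopeKey("r#provider#mysql/0")
        if err != nil { // previously this err was assigned but never checked
            fmt.Println("error:", err)
            return
        }
        fmt.Println(scope, role, unit)
    }
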
<wallyworld> babbageclunk: i think the query should be alive or dying to exclude dead units
<wallyworld> +1 with that change
<wallyworld> once this lands we should ask for this to be included in 2.6.9 testing
<babbageclunk> I'm not sure about that - if the machine agent is down then the unit could be dead but it'll never be removed, so the application will still be stuck
<babbageclunk> I'm going to try it now
<babbageclunk> wallyworld: ^
<wallyworld> babbageclunk: ok
<wallyworld> if i'm wrong, that's fine, just land
<babbageclunk> wallyworld: ok - worth checking though
<babbageclunk> wallyworld: yeah, that's what happens - I've added a comment to clarify.
<wallyworld> ty
<wallyworld> timClicks: you going to land your older PRs?
<timClicks> wallyworld: keen to, but wasn't/am not sure what the best window was/is
<wallyworld> 4 weeks ago :-)
<wallyworld> the azure one is targetted at 2.6 but i think we can now target to develop
<wallyworld> and just land anytime
<wallyworld> there's now a juju call command for juju v3 which might need improved help text in addition to run-action (which is now deprecated)
<wallyworld> maybe that PR could also look at juju call
<wallyworld> babbageclunk: my PR to forward port 2.6 had an intermittent error so will pull in your change before trying again
<babbageclunk> wallyworld: ah, right
<babbageclunk> wallyworld: remind me of the trick for bootstrapping on openstack when there's no metadata?
#juju 2019-09-13
<timClicks> https://jaas.ai/docs/getting-started - would love people's feedback on this, esp. on whether it makes you more motivated to share it!
<wallyworld> babbageclunk: sorry, was at bank, i can tell you in standup
<babbageclunk> ok
<wallyworld> babbageclunk: got a sec? standup?
<babbageclunk> wallyworld: sure omw
<wallyworld> babbageclunk: and remove-application --force works as well as remove-relation --force, right?
<babbageclunk> It should do... looking in the code now, hang on
<wallyworld> babbageclunk: and also, remove-offer --force should send through the correct option to force remove the app which then force removes the relation etc
<wallyworld> normally you can't remove an offer if there's still the app it is offering
<babbageclunk> yeah, remove-offer --force definitely does that
<babbageclunk> I'm not sure about the remove-application --force one removing the relation when there's a stuck unit - I'm sure it will if the --force is done straight away but it's possible that if you do it unforced first then maybe the removeCount handling will mean that the application doesn't get removed.
<babbageclunk> I'll give it a try this afternoon
<babbageclunk> no sign of those raft timeouts on scaling stack
<wallyworld> i'm also going to test removing an offering model
<wallyworld> we should be sure that --force works anytime
<wallyworld> even after a try without
<kelvinliu> wallyworld: we don't support `deployment` field in metadata yet, right?
<wallyworld> we do
<wallyworld> for service type
<kelvinliu> charm-tools doesn't support it yet
<wallyworld> oh, maybe not
<wallyworld> that will need fixing
<kelvinliu> wondering how to build a charm that has the deployment field
<wallyworld> i think i have edited the built charm in the past
<wallyworld> can't recall exactly
<kelvinliu> i guess, Juju itself support but we never have any charm use it yet
<wallyworld> yeah, probably
<kelvinliu> ok, i will have a 3rd PR on charm tools repo
<wallyworld> ok
<babbageclunk> wallyworld: managed to test the scenario you were wondering about - remove-application with a stuck unit in a relation, then remove-application --force - it worked!
<wallyworld> babbageclunk: awesome! i tested too
<wallyworld> i stopped a lxd controller and did stuff to the other controller in cmr
<wallyworld> seemed to go ok
<achilleasa> manadart: I have updated my PR to only persist the subnetID and fixed some broken tests. Can you take a look? I am now looking at the instancepoller bits...
<manadart> achilleasa: Yep.
<jhobbs> hi kwmonroe, could you please help get a new version of the grafana charm published to pick up a change in layer snap? https://bugs.launchpad.net/grafana-charm/+bug/1843745
<mup> Bug #1843745: snap install causes error if there are non-ascii characters in the output <cdo-qa> <foundations-engine> <Grafana Charm:New> <Snap Layer:Fix Released> <https://launchpad.net/bugs/1843745>
<achilleasa> manadart: CI passes for my PR. Should I go ahead and land it?
<kwmonroe> jhobbs: https://jaas.ai/grafana/31 is in edge.  i'm running the upgrade tests now; pending success, you cool if i push that through to stable, or would you like some time to test?
<jhobbs> kwmonroe: i'll give it a spin real quick, thanks
<manadart> achilleasa: Yes.
<achilleasa> manadart: I need a tick ;-)
<manadart> achilleasa: Oops. Did the ol' comment review. Done.
<kwmonroe> jhobbs: one thing we've done for other charms that include layer:snap is provide a 'core' resource so people can attach a core.snap to facilitate the install in offline deployments.  prometheus2 and graylog have this, but i see that grafana does not.  is that important for you?
<jhobbs> kwmonroe: i'm surprised that hasn't come up yet; I think the field team drove those requests before. I suspect it will be required for grafana too
<jhobbs> kwmonroe: my test just passed so I'm +1 whenever yours complete
<kwmonroe> fwiw, we'd add a 'core' and 'grafana' resource -- 0-bytes by default.  it came up for a smattering of charms: https://bugs.launchpad.net/graylog-charm/+bug/1828063, but we didn't have grafana on the list.
<mup> Bug #1828063: core snap is not a defined resource for charms that have other snaps as a resource definition <cpe-onsite> <Etcd Charm:Fix Released by cynerva> <Kubernetes E2E Test Charm:Fix Released by cynerva> <Kubernetes Master Charm:Fix Released by cynerva> <Kubernetes Worker Charm:Fix Released
<mup> by cynerva> <Graylog Charm:Fix Released by kwmonroe> <Prometheus2 charm:Fix Released by kwmonroe> <vault-charm:Fix Released by cynerva> <https://launchpad.net/bugs/1828063>
<kwmonroe> jhobbs: upgrade tests just passed; https://jaas.ai/grafana/31 is now through to stable.
<kwmonroe> i'll note grafana on the above bug as "affected projects" and get that worked soonish.
<jhobbs> kwmonroe: great, thanks on both counts!
<achilleasa> rick_h: I am still going through the docs/deploy cmd sources for the binding bits. If it turns out that it's not that hard to also update existing bindings when upgrading charms do we want to land that as a single chunk of work?
<rick_h> achilleasa:  have a sec to chat?
<achilleasa> rick_h: sure. meet you in daily?
<rick_h> achilleasa:  sure thing
<pmatulis> can juju pass autostart order information to containers? it should be able to pass any configuration key right?
<hml> pmatulis: i don't believe you can
<pmatulis> hml_, oh!
<hml_> pmatulis:  that data would go in the profile for the container right?
<pmatulis> hml_, i don't think so
<hml_> pmatulis:  how would you get it up then?
<pmatulis> it would be key 'boot.autostart.priority' (see https://lxd.readthedocs.io/en/latest/containers/)
<hml_> pmatulis:  right.. those values end up in the container profiles
<pmatulis> ohh
<hml_> pmatulis:  juju won't pass up anything… and boot is an excluded key in lxdprofiles from a charm
<pmatulis> i thought boot priority would be at the lxd daemon level. doesn't a profile just apply to a single container? seems this key controls multiple containers
<hml_> pmatulis:  you can have a default profile which applies to all containers
<hml_> you can have multiple profiles on a container if you want
<pmatulis> oh, that makes sense then
<pmatulis> it would be sweet to have a bundle dictate the order in which containers would come up
<pmatulis> but i guess not
<hml_> pmatulis: the order to boot in?  or are created in?
<pmatulis> hml_, booted, on the host
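
For completeness, the LXD-level knobs being discussed look like this when set on a profile (the profile name and priority value are illustrative; as hml_ notes, juju excludes boot.* keys from charm lxd-profiles, so this would be done outside juju):

    lxc profile set default boot.autostart true
    lxc profile set default boot.autostart.priority 10
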
<ezobn> Hi! I have some issues. I had a nicely working juju OpenStack Queens deployment and am trying to upgrade it to Rocky. I have upgraded all the charms themselves. Everything is good except openstack compute service list is empty, and debug on nova-cloud-controller shows timeouts reading the nova cells it has found, but they have never been configured in the system. Any known bugs on this by chance?
#juju 2019-09-14
<gnuoy> Hi, I'm trying out cmr support in bundles but juju seems to be ignoring the cmr directive: https://paste.ubuntu.com/p/xXwVg3gfxf/ . Could anyone point me in the right direction ?
<Android> amulet for learning tlhIngan hol
<bluelake> Ø§ÙÙÙ
